On Validation, pt III

From the first two articles (here, and here) on this topic arises the obvious question…so what? Not validating findings has worked well for many, to the point that the lack of validation is not even recognized. After all, who notices that findings were not verified? The peer review process? The manager? The customer? The sheer pervasiveness of training materials and processes that focus solely on single artifacts in isolation should give us a clear understanding that validating findings is not a common practice. That is, if the need for validation is not pervasive in our industry literature, and if someone isn’t asking the question, “…but how do you know?”, then what leads us to assume that validation is part of what we do?

Consider a statement often seen in ransomware investigation/response reports up until about November 2019; that statement was some version of “…no evidence of data exfiltration was observed…”. However, did anyone ask, “…what did you look at?” Was this finding (i.e., “…no evidence of…”) validated by examining data sources that would definitively indicate data exfiltration, such as web server logs or the BITS Client Event Log? Or how about indirect sources, such as unusual processes making outbound network connections? Understanding how findings were validated is not about assigning blame; rather, it’s about truly understanding the efficacy of controls, as well as risk. If findings such as “…data was not exfiltrated…” are not validated, what happens when we fi

[…]
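As an aside, a check like the one described above doesn’t have to be a heavy lift. Here is a minimal sketch, not a finished tool, of what pulling remote URLs out of a collected BITS Client Operational log might look like. It assumes the log was exported as Microsoft-Windows-Bits-Client%4Operational.evtx, that the python-evtx library is installed, and that event IDs 59/60 (transfer job started/stopped) are the records of interest; all of those assumptions should be verified against your own environment.

import re

from Evtx.Evtx import Evtx

# Hypothetical path to the collected log; adjust to wherever the .evtx was acquired
EVTX_PATH = "Microsoft-Windows-Bits-Client%4Operational.evtx"

def bits_transfer_events(path):
    """Yield (event_id, urls) for BITS transfer events that reference a remote URL."""
    with Evtx(path) as log:
        for record in log.records():
            xml = record.xml()
            match = re.search(r"<EventID[^>]*>(\d+)</EventID>", xml)
            if not match:
                continue
            event_id = int(match.group(1))
            # Event IDs 59/60 are assumed here to mark a transfer job starting/stopping
            if event_id in (59, 60):
                urls = re.findall(r"https?://[^<\"\s]+", xml)
                if urls:
                    yield event_id, urls

if __name__ == "__main__":
    for event_id, urls in bits_transfer_events(EVTX_PATH):
        print(event_id, urls)

Even a quick pass like this gives a “…no evidence of…” statement something to stand on, because it documents what was actually examined.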