Validation – This Time, Tool Validation

I’ve posted previously on validation, and more recently, on validation of findings. In my recent series of posts, I specifically avoided the topic of tool validation, because while tool validation underpins the validation of findings, and there is some overlap, I thought it was important to separate the two so that the discussion didn’t go astray.

Well, now the topic of tool validation within DFIR has popped up again, so maybe it’s time to address it yet again.

So, early on in my involvement in the industry, yes, running two or more tools against a data source was considered by some to be a means of tool validation. However, over time and as experience and knowledge grew, the fallacy of this approach became more and more apparent. 
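To make the fallacy concrete, here's a minimal, hypothetical sketch (the record types and parser logic are invented purely for illustration): two tools built from the same incomplete understanding of a data format will happily agree with each other while both silently dropping the record type neither one knows about, so "tool A matches tool B" says nothing about whether either tool is actually correct.

```python
# Hypothetical illustration: two independently written parsers that share
# the same blind spot. Record types and payloads are invented for this sketch.

# A pretend "data source": a list of (record_type, payload) tuples.
data_source = [
    ("STANDARD", "logon event"),
    ("STANDARD", "logoff event"),
    ("EXTENDED", "token elevation detail"),  # neither tool knows this type
]

def tool_a(records):
    # Tool A was written against documentation that only covers STANDARD records.
    return [payload for rtype, payload in records if rtype == "STANDARD"]

def tool_b(records):
    # Tool B was written separately, but from the same incomplete documentation.
    return [payload for rtype, payload in records if rtype == "STANDARD"]

out_a = tool_a(data_source)
out_b = tool_b(data_source)

# "Validation" by cross-comparison: the tools agree, so they must both be right... right?
print("Tools agree:", out_a == out_b)            # True

# Checking against the data source itself tells a different story.
missed = [p for rtype, p in data_source if p not in out_a]
print("Records neither tool reported:", missed)  # ['token elevation detail']
```

Both tools produce identical output, yet both miss the same record; validating the tool means understanding the underlying data source, not just comparing one tool's output to another's.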

When I first left active duty, one of my first roles in the private sector was performing vulnerability assessments. For the technical aspect (we did interviews and reviewed processes, as well), we used ISS’s Internet Scanner, which was pretty popular at the time. When I started, my boss, a retired Army Colonel, told me that it would take me “about 2 to 3 years of running the tool to really understand it.” Well, within 6 months, I was already seeing the need to improve upon the tool and started writing an in-house scanner that was a vast improvement over the commercial product.

The reason for this was that we’d started running into questions about how the tool did its “thing”. In short, it would connect to a system and run

[…]