This is an introductory article explaining the rationale behind Velociraptor's design, and in particular how Velociraptor evolved, with some historical context and comparison to other DFIR tooling. We took a lot of inspiration and learned many lessons from using other great tools, and Velociraptor is our attempt at pushing the field forward.

Digital forensics is primarily focused on answering questions. Most practitioners frame their cases around high-level questions such as: Did the user access a particular file? Was malware run on the user's workstation? Did an attacker crack an account? Over the years, DFIR practitioners have developed and refined methodologies for answering such questions. For example, by examining the timestamps stored in the NTFS filesystem we can build a timeline tracing an intruder's path through the network. These methodologies are often encoded informally in practitioners' experience and training. Wouldn't it be great to have a way to formally document and encode these methodologies?

In many cases built on digital evidence, time is of the essence. The forensic practitioner is looking to answer questions quickly and efficiently, since the amount and size of digital evidence grows with every generation of new computing devices. We now see the emergence of triage techniques to quickly classify a machine as worthy of further forensic analysis. When triaging a system, the practitioner has to be surgical in their approach, examining specific artifacts before even acquiring the hard disk or memory.

Triaging is particularly prevalent in enterprise incident response. In this scenario it is rare for legal prosecution to take place; instead, the enterprise is interested in quickly containing the incident and learning its possible impact. As part of this analysis, the practitioner may need to triage many thousands of machines to find those that were compromised, avoiding the acquisition of bit-for-bit forensically sound images.

This transition from traditional forensic techniques to highly scalable distributed analysis has resulted in multiple offerings of endpoint agents. An agent is specialized software running on enterprise endpoints that provides forensic analysis and telemetry to central servers. This architecture enables detection of attackers across different endpoints as they traverse the network, and provides more distributed detection coverage over more assets simultaneously.

One of the first notable endpoint agents was GRR, a Google internal project open sourced around 2012. GRR is an agent installed on many endpoints, controlled by a central server. The agent can perform some low-level forensic analysis by incorporating other open source tools such as The Sleuth Kit and the Rekall memory forensics suite. The GRR framework was one of the first to offer the concept of hunting: actively seeking forensic anomalies on many endpoints at the same time. For the first time, analysts could pose a question, such as "Which endpoints contain this registry key?", to thousands of endpoints at once and receive an answer within hours. Hunting is particularly useful for rapid triaging, since we can focus our attention only on those machines which show potential signs of compromise.
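To make the hunting model concrete, here is a minimal sketch of the fan-out idea: a central server poses one question ("which endpoints contain this registry key?") to many agents at once and keeps only the hosts that report a hit, which become the candidates for deeper triage. Everything here is hypothetical illustration, not a real GRR or Velociraptor API: `MockAgent` stands in for an endpoint agent, and a real agent would query the live registry (e.g. via `winreg` on Windows) instead of a canned set.

```python
"""Illustrative sketch of a hunt fan-out, assuming mock in-memory agents."""
from concurrent.futures import ThreadPoolExecutor

# A hypothetical registry key that attacker persistence might create.
SUSPICIOUS_KEY = r"HKLM\Software\Evil\Run"

class MockAgent:
    """Stand-in for an endpoint agent; real agents would query the
    endpoint's actual registry rather than a canned set of keys."""
    def __init__(self, hostname, registry_keys):
        self.hostname = hostname
        self.registry_keys = set(registry_keys)

    def has_key(self, key):
        return key in self.registry_keys

def hunt(agents, key):
    """Fan the question out to every agent in parallel and return the
    sorted hostnames that show the artifact."""
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = pool.map(lambda a: (a.hostname, a.has_key(key)), agents)
    return sorted(host for host, hit in results if hit)

fleet = [
    MockAgent("ws-01", [r"HKLM\Software\Vendor"]),
    MockAgent("ws-02", [SUSPICIOUS_KEY, r"HKLM\Software\Vendor"]),
    MockAgent("ws-03", []),
]
compromised = hunt(fleet, SUSPICIOUS_KEY)
# compromised == ["ws-02"]
```

The point of the pattern is that the analyst's attention narrows from the whole fleet to the handful of machines returned by the hunt, without imaging any of them first.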