
  • Understanding, Collecting, Parsing the $I30

    Updated on Feb 17, 2025

    Introduction: In the intricate world of digital forensics, every byte of data tells a story. Within the NTFS file system, "$I30" files stand as silent witnesses, holding valuable insights into file and directory indexing.

    Understanding "$I30" Files: $I30 files function as indexes within NTFS directories, providing a structured layout of files and directories. They contain duplicate sets of $FILE_NAME timestamps, offering a second view of the file metadata stored within the Master File Table (MFT).

    Utilizing "$I30" Files as Forensic Resources: $I30 files provide an additional forensic avenue for accessing MACB timestamp data. Even entries for deleted files, whose remnants linger in index slack space, can often be recovered from these index files. ------------------------------------------------------------------------------------------------------------- If you’re into digital forensics, you’ve probably come across Joakim Schicht’s tools. They’re free, powerful, and packed with features for analyzing different forensic artifacts. One such tool, Indx2Csv, is a lifesaver when it comes to parsing INDX records like $I30 (directory indexes), $O (object IDs), and $R (reparse points). The cool thing about Indx2Csv is that it doesn’t just look at active records; it also digs up deleted entries that are still hanging around due to file system operations. Plus, it can even scan for partial entries, which means you might be able to recover metadata for deleted files or folders even if their complete records are gone.

    How Does Indx2Csv Work? Indx2Csv processes INDX records that have been exported from forensic tools like FTK Imager or The Sleuth Kit’s icat. If you’ve used FTK Imager before, you might have seen files labeled as $I30 in directories. These aren’t actual files but representations of the $INDEX_ALLOCATION attribute for that directory. You can export them and analyze them with Indx2Csv.
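Before reaching for a full parser, it helps to see what these tools do under the hood. Here is a minimal Python sketch (not part of Indx2Csv; the function names and the 4096-byte page size are illustrative assumptions) that locates INDX page headers in an exported $I30 blob and converts NTFS FILETIME values into readable UTC dates:

```python
from datetime import datetime, timedelta, timezone

# NTFS timestamps are 64-bit counts of 100-nanosecond ticks since 1601-01-01 UTC.
EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_dt(ft: int) -> datetime:
    """Convert a 64-bit NTFS FILETIME value to a UTC datetime."""
    return EPOCH_1601 + timedelta(microseconds=ft // 10)

def find_indx_records(blob: bytes, page_size: int = 4096):
    """Return offsets of INDX page headers in an exported $I30 blob.

    Index allocation pages normally start on page_size boundaries and
    begin with the magic bytes b'INDX'.
    """
    return [off for off in range(0, len(blob), page_size)
            if blob[off:off + 4] == b"INDX"]
```

Real parsers then walk the index entries inside each page to recover the $FILE_NAME attributes (and their duplicate timestamps) for active and slack entries alike.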
    Output: (GUI mode of Indx2Csv)

    If you’re using The Sleuth Kit, you can extract the $INDEX_ALLOCATION attribute with this command:

    icat DiskImage MFT#-160-AttributeID > $I30

    (Just remember, the attribute type for $INDEX_ALLOCATION is always 160 in decimal.) Once you’ve got the file, running Indx2Csv is straightforward:

    Indx2Csv.exe -i exported_I30_file -o output.csv

    Indx2Csv has several command-line options for tweaking how it scans and outputs data. You can check out the tool’s GitHub page for a complete list of commands. ------------------------------------------------------------------------------------------------------------- Alternative Tools: Velociraptor & INDXparse.py

    While Indx2Csv is great, it’s not the only tool in the game. Here are two other options worth mentioning:

    Velociraptor: Velociraptor is an advanced threat-hunting and incident response tool that can also be used for forensic analysis. Unlike Indx2Csv, which works with exported INDX files, Velociraptor can analyze live file systems and mounted volumes. That means you don’t have to manually locate and export the $I30 file—just point Velociraptor at a directory, and it’ll handle the rest. For example, if you’ve mounted a disk image and want to analyze a directory, you can run:

    velociraptor.exe artifacts collect Windows.NTFS.I30 --args DirectoryGlobs="<\\Windows\\Dean\\>" --format=csv --nobanner > C:\output\I30-Dean.csv

    This will save both active and deleted entries in a CSV file, which you can then analyze with Timeline Explorer or any spreadsheet app.

    INDXparse.py: Another great option is INDXparse.py, a Python-based tool created by Willi Ballenthin. Like Indx2Csv, it focuses on $I30 index files, but since it’s written in Python, it works on multiple operating systems, not just Windows.

    Collection: You can use FTK Imager to collect artifacts like $I30.
    Parsing: INDXParse-master can be used for parsing: https://github.com/williballenthin/INDXParse

    The screenshot below is an example of INDXParse-master. You can use the -c or -d parameter based on your needs.

    Note: To use INDXParse-master, you need to have Python installed on Windows (I already do, so it’s easy for me).

    Wrapping Up: Indx2Csv is a powerful, easy-to-use tool for forensic investigators who need to dig into INDX records. Whether you’re analyzing active files, recovering deleted entries, or scanning for hidden metadata, it gets the job done. And if you need alternatives, Velociraptor and INDXparse.py offer additional flexibility for different situations. So, if you haven’t tried Indx2Csv yet, give it a shot—you might be surprised at what you uncover! --------------------------------------------Dean--------------------------------------------

  • The Truth About Changing File Timestamps: Legitimate Uses and Anti-Forensics: Timestomping

    Changing a file’s timestamp might sound shady, but there are actually some valid reasons to do it. At the same time, cybercriminals have found ways to manipulate timestamps to cover their tracks. Let’s break it down in a way that makes sense.

    When Changing Timestamps is Legitimate: Think about cloud storage services like Dropbox. When you sync your files across multiple devices, you’d want the timestamps to reflect when the file was last modified, not when it was downloaded to a new device. But here’s the problem: when you install Dropbox on a new computer and sync your files, your operating system sees them as “new” files and assigns fresh timestamps. To fix this, cloud storage apps like Dropbox adjust the timestamps to match the original modification date. This ensures your files appear the same across all devices. It’s a perfectly legitimate reason for altering timestamps and helps keep things organized. --------------------------------------------------------------------------------------------------------- If you want to learn about cloud storage forensics, including Dropbox, OneDrive, and Box, do check out the articles I’ve written, link below. Happy learning! https://www.cyberengage.org/courses-1/mastering-cloud-storage-forensics%3A-google-drive%2C-onedrive%2C-dropbox-%26-box-investigation-techniques -------------------------------------------------------------------------------------------------------- So, where were we? Right, let’s continue.

    When Changing Timestamps is Suspicious: Hackers and cybercriminals love to manipulate timestamps too, but for completely different reasons. A common trick is to disguise malicious files by changing their timestamps to blend in with legitimate system files. For example, if a hacker sneaks malware into the C:\Windows\System32 folder, they can rename it to look like a normal Windows process. But to make it even less suspicious, they’ll modify the timestamps to match those of other system files.
    This sneaky technique is called timestomping.

    How Analysts Detect Fake Timestamps: Security analysts have developed several methods to spot timestomping. In the past, it was easier to detect because many tools didn’t set timestamps with fractional-second accuracy. If a timestamp had all zeros in its decimal places, that was a red flag.

    Examples:
    1. Timestomping in $J
    2. Timestomping in $MFT (very important): In the screenshot, the attacker timestomped eviloutput.txt, changing its $STANDARD_INFORMATION (0x10) timestamp to 2005 with an anti-forensic tool. But because such tools do not modify the $FILE_NAME (0x30) attribute, that attribute still shows the original timestamp from when the file was created.
    3. Another example

    But today, newer tools allow attackers to copy timestamps from legitimate files, making detection trickier. Here’s how experts uncover timestamp manipulation:

    Compare Different Timestamp Records: In Windows, files have timestamps stored in multiple places, such as the $STANDARD_INFORMATION and $FILE_NAME metadata. If these don’t match up, something suspicious might be going on. Tools like MFTECmd, fls, istat, and FTK Imager help with these checks.

    Look for Zeroed Fractional Seconds: Many timestomping tools don’t bother with precise sub-second timestamps. If the decimal places in a timestamp are all zeros, it could indicate foul play. Tools: MFTECmd, istat.

    Compare ShimCache Timestamps: Windows tracks when executables were first run using a system feature called ShimCache (AppCompatCache). If a file’s recorded modification time is earlier than when it was first seen by Windows, that’s a big red flag. Tools: AppCompatCacheParser.exe, ShimCacheParser.py.

    Check Embedded Compile Times for Executables: Every executable file has a compile time embedded in its metadata. If a file’s timestamp shows it was modified before it was even compiled, something’s off. Tools: Sysinternals’ sigcheck, ExifTool.
    Analyze Directory Indexes ($I30 Data): Sometimes, old timestamps are still stored in the parent directory’s index. If a previous timestamp is more recent than the current one, it’s a clue that someone tampered with it.

    Check the USN Journal: Windows keeps a log (the USN Journal) of file creation events. If a file’s creation time doesn’t match the time the USN Journal recorded, that’s a clear sign of timestamp backdating.

    Compare MFT Record Numbers: Windows writes new files roughly sequentially in the Master File Table (MFT). If most files in C:\Windows\System32 have close MFT numbers but a backdated file has a much later number, it stands out as suspicious. Tools: MFTECmd, fls.

    Real-World Example: Security analysts at the Dean service organization investigated a suspicious file (dean.exe) in C:\Windows\System32. Even though its timestamps matched legitimate files, further checks revealed: The $STANDARD_INFORMATION creation time was earlier than the $FILE_NAME creation time. The fractional seconds in its timestamp were all zeros. The executable’s compile time (found via ExifTool) was newer than its modification time. Windows’ ShimCache recorded a modification time that was later than the file system timestamp. These findings confirmed the file had been timestomped, helping the team uncover a hidden malware attack. ------------------------------------------------------------------------------------------------------------- Almost all anti-forensic tools have one thing in common: they modify the $SI timestamps but do not modify the $FN timestamps. So comparing these two sets of timestamps in Timeline Explorer can help identify timestomping. ------------------------------------------------------------------------------------------------------------- Keep in mind that, as usual, there can be false positives when analyzing the $MFT for timestomping; this is something every analyst must understand. ScreenConnect example of timestomping:

    The Bottom Line: Timestamp manipulation is a double-edged sword.
    While cloud storage services use it for legitimate reasons, hackers exploit it to hide malicious files. Security analysts have developed multiple ways to detect timestomping, but modern tools make it harder than ever to spot. So, the next time you see a file with a suspiciously old timestamp, don’t just take it at face value. There might be more going on under the surface! ----------------------------------------------Dean----------------------------------------------
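The two checks that anti-forensic tools most often fail, $SI earlier than $FN and zeroed sub-second precision, can be sketched in a few lines of Python. This is an illustrative heuristic, not a complete detector; the function name and flag strings are my own:

```python
from datetime import datetime

def timestomp_indicators(si_created: datetime, fn_created: datetime) -> list:
    """Return heuristic timestomping indicators for one file record.

    Heuristics from the detection methods above:
      * $STANDARD_INFORMATION creation earlier than $FILE_NAME creation
        (for a file created in place, $SI is normally not older than $FN)
      * zeroed sub-second precision on the $SI timestamp, typical of
        tools that set times through 1-second-resolution APIs
    """
    flags = []
    if si_created < fn_created:
        flags.append("si-before-fn")
    if si_created.microsecond == 0:
        flags.append("zeroed-subseconds")
    return flags
```

Fed with timestamps parsed from an MFT export, this would flag a file backdated to 2005 on both counts, while a normally created file returns no flags. Remember the false-positive caveat above: these are leads, not proof.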

  • Understanding NTFS Metadata (Entries) and How It Can Help in Investigations

    When dealing with NTFS (New Technology File System), one of the most crucial components to understand is the Master File Table (MFT). Think of it as the backbone of the file system—it stores metadata for every file and folder, keeping track of things like timestamps, ownership, and even deleted files.

    Allocated vs. Unallocated Metadata Entries: Just like storage clusters, metadata entries in the MFT can either be allocated (actively in use) or unallocated (no longer assigned to a file). If a metadata entry is unallocated, it falls into one of two categories: it has never been used before (essentially empty), or it was used in the past, meaning it still contains traces of a deleted file or directory. This is where forensic investigations get interesting. If an unallocated metadata entry still holds data about a deleted file, we can recover information like filenames, timestamps, and ownership details. In some cases, we may even be able to fully recover the deleted file—provided its storage clusters haven’t been overwritten yet.

    How Metadata Entries Are Assigned: MFT entries are typically assigned sequentially. This means that when new files are created rapidly, their metadata records tend to be grouped together in numerical order. Let’s say a malicious program named "mimikatz.exe" runs and extracts several resource files into the System32 directory. Because all these files are created in quick succession, their metadata entries will be next to each other in the MFT. A similar thing happens when another malicious executable, "svchost.exe", runs and drops a secondary payload ("a.exe"). This action triggers the creation of prefetch files, and since they’re created almost instantly, their MFT entries are also close together. This pattern helps forensic analysts track down related files during an investigation.
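The sequential-assignment idea above can be turned into a simple heuristic. This Python sketch is illustrative only; the window threshold is an arbitrary assumption, not an NTFS constant. It flags files whose MFT entry numbers sit far from their neighbours’:

```python
def mft_number_outliers(records, window=500):
    """records: iterable of (filename, mft_entry_number) for files that
    should have been created together (e.g. one tool's dropped files).

    Because MFT entries are handed out roughly sequentially, a file whose
    entry number is far from the group's median may have been written at
    a very different time than its neighbours, even if its timestamps
    have been made to match.
    """
    nums = sorted(n for _, n in records)
    median = nums[len(nums) // 2]
    return [name for name, n in records if abs(n - median) > window]
```

For example, three DLLs with entry numbers around 90210 and one executable at 412044 would flag only the executable, a useful pivot even when every timestamp looks clean.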
    The Hidden Clues in MFT Clustering: While this clustering pattern isn’t guaranteed in every case, it’s common enough that it can serve as a backup timestamp system. Even if a hacker tries to manipulate file timestamps (a technique called timestomping), looking at the MFT sequence can reveal when files were actually created. This makes it a valuable tool for forensic analysts.

    Type   Name                      Type   Name
    0x10   $STANDARD_INFORMATION     0x90   $INDEX_ROOT
    0x20   $ATTRIBUTE_LIST           0xA0   $INDEX_ALLOCATION
    0x30   $FILE_NAME                0xB0   $BITMAP
    0x40   $OBJECT_ID                0xC0   $REPARSE_POINT
    0x50   $SECURITY_DESCRIPTOR      0xD0   $EA_INFORMATION
    0x60   $VOLUME_NAME              0xE0   $EA
    0x70   $VOLUME_INFORMATION       0xF0
    0x80   $DATA                     0x100  $LOGGED_UTILITY_STREAM

    Breaking Down the MFT Structure: Every file, folder, and even the volume itself has an entry in the MFT. Typically, each entry is 1024 bytes in size and contains various attributes that describe the file. Here are some of the most commonly used attributes:

    $STANDARD_INFORMATION (0x10) – Stores general details like file creation, modification, and access timestamps.
    $FILE_NAME (0x30) – Contains the filename and another set of timestamps.
    $DATA (0x80) – Holds the actual file content (for small files) or a pointer to where the data is stored.
    $INDEX_ROOT (0x90) & $INDEX_ALLOCATION (0xA0) – Used for directories to manage file listings.
    $BITMAP (0xB0) – Keeps track of allocated and unallocated clusters.

    Timestamps and Their Forensic Importance: NTFS records multiple sets of timestamps, and they don’t always update the same way. Two of the most important timestamp attributes are:

    $STANDARD_INFORMATION timestamps – These are affected by actions like copying, modifying, or moving a file.
    $FILE_NAME timestamps – These remain more stable and can serve as a secondary reference.

    Because these two timestamp sets don’t always update together, analysts can spot inconsistencies that reveal timestomping attempts.
    For instance, if a file’s $STANDARD_INFORMATION creation time differs from its $FILE_NAME creation time, it could mean that someone tampered with the timestamps.

    Real-World Challenges in Analyzing NTFS Metadata: While these timestamp rules are generally reliable, they aren’t foolproof. Changes in Windows versions, different file operations, and even tools like the Windows Subsystem for Linux (WSL) can alter how timestamps behave. For example: In Windows 10 v1803 and later, the "last access" timestamp may be re-enabled under certain conditions. The Windows Subsystem for Linux (WSL) updates timestamps differently than the standard Windows shell.

    Final Thoughts: Analyzing NTFS metadata can unlock a wealth of information, helping forensic investigators reconstruct file activity even after deletion or manipulation. Understanding sequential MFT allocations, timestomping detection, and the role of multiple timestamps is essential for building a strong case in digital forensics. By looking beyond standard timestamps and diving into the metadata, analysts can uncover hidden traces of activity—providing crucial evidence in cybersecurity investigations. ----------------------------------------Dean---------------------------------------------

  • Understanding NTFS File System Metadata and System Files

    File systems store almost all data in files, but certain special files, collectively known as metadata structures, store essential information about other files and directories. These structures track attributes such as timestamps (created, modified, and accessed), permissions, ownership, file size, and pointers to file locations. Different file systems use unique mechanisms to record the clusters allocated to a file. For example:

    NTFS (New Technology File System) employs a structure called a "data run" to manage file clusters.
    FAT (File Allocation Table) maintains a "chain" of clusters.

    The Master File Table (MFT): NTFS revolves around the Master File Table (MFT), a highly structured database storing MFT entries (or MFT records) for every file and folder on a volume. These entries contain vital metadata, either storing the data directly (for small files) or pointing to clusters where the actual data resides. For files larger than approximately 600 bytes, data is stored in clusters outside the MFT, making them non-resident files. Each NTFS volume has a hidden file called $MFT, which consolidates all MFT entries. NTFS also uses another hidden file, $Bitmap, to track cluster allocation. This file maintains a bit for each cluster, indicating whether it is allocated (1) or unallocated (0). Fragmentation occurs when file clusters are non-contiguous, though Windows generally optimizes file storage to minimize fragmentation. The MFT is the metadata catalog for NTFS.

    Key NTFS System Files: Besides $MFT and $Bitmap, NTFS relies on several other system files, most of which are hidden and start with a $ sign. The first 24 MFT entries are reserved, with the first 12 assigned to these system files:

    System File   MFT Entry   Purpose
    $MFT          0           Stores the Master File Table, which tracks all files and directories.
    $MFTMIRR      1           A backup of the primary MFT to ensure recoverability.
    $LOGFILE      2           Contains transactional logs to maintain NTFS integrity in case of system crashes.
    $VOLUME       3           Stores volume information, including the volume name and NTFS version.
    $ATTRDEF      4           Defines NTFS attributes, detailing metadata structure.
    "."           5           The root directory of the NTFS volume.
    $BITMAP       6           Tracks allocated and unallocated clusters on the volume.
    $BOOT         7           Stores boot sector information, enabling normal file I/O operations.
    $BADCLUS      8           Marks physically damaged clusters to prevent data storage in unreliable locations.
    $SECURE       9           Stores file security details, including ownership and access permissions.
    $UPCASE       10          Contains Unicode character mappings for case-insensitive file sorting.
    $EXTEND       11          Holds additional system files introduced in newer NTFS versions.

    Extended NTFS System Files: Beyond the first 12 reserved system files, NTFS also includes several additional files under $EXTEND:

    Extended System File   Purpose
    $EXTEND\$ObjId         Tracks object IDs, allowing file tracking despite renaming or movement.
    $EXTEND\$Quota         Manages user disk space quotas.
    $EXTEND\$Reparse       Stores reparse points, mainly used for symbolic links.
    $EXTEND\$UsnJrnl       Maintains the Update Sequence Number (USN) Journal, recording all file changes.

    Conclusion: NTFS is a powerful file system with a robust metadata structure that ensures efficient file management and system integrity. Key system files like $MFT, $Bitmap, $LogFile, and $UsnJrnl play crucial roles in tracking files, managing disk space, and ensuring recoverability in case of crashes. Understanding these NTFS components is vital for forensic analysts, system administrators, and cybersecurity professionals who need to investigate file system activities or recover lost data. ------------------------------------------------Dean--------------------------------------------------
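As a small illustration of how $Bitmap works, this Python sketch checks whether a given cluster is allocated. It assumes the standard NTFS layout of one bit per cluster, least-significant bit first within each byte:

```python
def cluster_allocated(bitmap: bytes, cluster: int) -> bool:
    """Look up one cluster in a $Bitmap blob.

    $Bitmap stores one bit per cluster, LSB-first within each byte:
    bit 0 of byte 0 is cluster 0. A set bit (1) means allocated,
    a clear bit (0) means free.
    """
    byte_index, bit_index = divmod(cluster, 8)
    return bool((bitmap[byte_index] >> bit_index) & 1)
```

So a $Bitmap byte of 0b00000101 says clusters 0 and 2 are allocated while cluster 1 is free; forensic tools use exactly this lookup to decide whether a data run still holds recoverable deleted content.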

  • NTFS: More Than Just a Filesystem

    Updated on 17 Feb, 2025

    When it comes to filesystems, NTFS (New Technology File System) is like the Swiss Army knife of Windows storage. It’s packed with features, built for reliability, and miles ahead of the old FAT (File Allocation Table) system. But let’s be real—most people don’t even use half of what NTFS offers. Some of its capabilities are mainly useful in enterprise environments, while others can be game-changers even for regular users. Let’s break it down in a way that actually makes sense.

    NTFS: The Highlights

    1. Built-in Crash Recovery (Journaling): Ever had your system crash in the middle of saving a file? NTFS has your back. It keeps a log (also called a journal) of changes to the filesystem so it can recover from crashes and prevent data corruption. This is a big deal, especially compared to older filesystems where a sudden shutdown could leave your data in shambles.

    2. Tracks File Changes with the USN Journal: NTFS has a feature called the USN (Update Sequence Number) Journal, which keeps track of every file change. This is super useful for antivirus software and backup tools because they don’t have to scan everything—they just check what’s changed. That means faster scans and backups.

    3. Hard Links & Soft Links (File Shortcuts on Steroids): NTFS supports both hard links and soft links: A hard link makes it look like a file exists in multiple places, but it’s actually just one file with multiple names. A soft link (or symbolic link) is more like a shortcut—opening it opens the original file. This is useful for organizing files without creating duplicate copies.

    4. Stronger Security (But Not Hacker-Proof): NTFS has built-in security features that let administrators control who can access which files. It’s great for keeping prying eyes out—until someone boots into Linux or uses a forensic tool to bypass those restrictions. (But that’s a topic for another day.)

    5. Disk Quotas: No More Hoarding!
    Ever shared a computer with someone who fills up all the storage with movies and games? NTFS allows admins to set quotas, limiting how much disk space each user can use. Once they hit their limit, they can’t store any more data until they free up space.

    6. Reparse Points: Making Magic Happen: This sounds complicated, but it’s really cool. NTFS lets the system interact with files in creative ways using something called reparse points. This is how Windows does things like soft links, volume mount points, and single-instance storage (which we’ll talk about in a second). Developers can even create their own reparse points for custom file behavior.

    7. Object IDs: Never Lose a File Again: Have you ever renamed or moved a file and then had programs freak out because they can’t find it? NTFS assigns Object IDs to certain files, allowing Windows to track them no matter where they go. So if a shortcut breaks, the system might still be able to find the file.

    8. File-Level Encryption & Compression: Encryption: NTFS lets you encrypt individual files and folders so that only you can open them. This happens in the background without you having to do anything special. Compression: If you’re running low on space, NTFS can automatically compress files to save room. Again, this happens behind the scenes without you noticing a difference.

    9. Volume Shadow Copies: Your Undo Button for Files: Ever made changes to a file, hit save, and immediately regretted it? NTFS keeps Volume Shadow Copies, which are basically automatic backups of your files. If configured properly, you can restore previous versions of files without needing an external backup.

    10. Alternate Data Streams: Hidden File Tricks: NTFS lets files have extra hidden data attached to them. For example, when you download something from the internet, Windows tags it so it can warn you before running it. Unfortunately, hackers also love this feature because they can hide malware inside alternate data streams.
So, cool feature—but also a bit risky if misused. 11. Mounting Drives as Folders Instead of having a bunch of drive letters like C: and D:, NTFS lets you mount a second drive inside a folder on another drive . This helps keep things organized, especially in server environments where multiple drives are used. 12. Single Instance Storage: Saving Space on Large Servers Let’s say you work at a company where everyone saves the same massive video file on the shared drive. Instead of keeping multiple copies, NTFS can store one copy and create references (soft links) for everyone else , saving tons of disk space. Final Thoughts NTFS is packed with features that most people don’t even realize exist. While some of these are mainly useful for IT admins and businesses, others—like file recovery, security controls, and file compression—are things regular users can take advantage of every day. Next time you’re managing your files, just remember: NTFS is doing a lot more under the hood than you might think! ------------------------------------------------Dean----------------------------------------------------
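The hard-link behaviour from point 3 is easy to demonstrate. This Python sketch works on NTFS and on Linux filesystems alike, since both support hard links; the filenames are just examples. It creates a second name for a file and shows that both names share one set of content:

```python
import os
import tempfile

# Demonstrate hard links: two directory entries, one underlying file
# (one MFT record on NTFS, one inode on ext4).
d = tempfile.mkdtemp()
orig = os.path.join(d, "report.txt")
link = os.path.join(d, "report-link.txt")

with open(orig, "w") as f:
    f.write("hello")

os.link(orig, link)            # create a second name for the same file

with open(link, "a") as f:     # appending through either name...
    f.write(" world")

with open(orig) as f:          # ...changes the single shared content
    content = f.read()

link_count = os.stat(orig).st_nlink   # both names count, so this is 2
```

Deleting one name only drops the link count; the data survives until the last name is removed, which also matters forensically when counting how many paths point at one file.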

  • Mastering Timeline Analysis: A Practical Guide for Digital Forensics: (Log2timeline)

    Introduction: Timeline analysis is a cornerstone of digital forensics, allowing investigators to reconstruct events leading up to and following an incident. When working with massive amounts of forensic data, such as a super timeline generated by Plaso, the key challenge is making sense of thousands—or even millions—of events.

    The Power of Super Timelines: A super timeline consolidates data from multiple sources, including file system metadata, registry changes, event logs, and web history. After parsing data with log2timeline, the tool psort helps filter and organize this data into meaningful insights. However, once the timeline is loaded into a tool like Timeline Explorer, the sheer volume of entries can be overwhelming. The goal is not to analyze every single row but to apply strategic filtering techniques to extract actionable intelligence. This is where pivot points, filtering, and visualization become crucial.

    Understanding the Core Fields in Timeline Analysis: When working with a super timeline, you’ll encounter multiple fields. Here are some key columns to focus on:

    Date & Time – The timestamp of the event in MM/DD/YYYY and HH:MM:SS format.
    Timezone – Helps standardize timestamps across different system logs.
    MACB – Indicates whether the event modified (M), accessed (A), changed (C), or created (B) the item.
    Source & Source Type – Identifies the origin of the artifact, such as registry keys (REG), web history (WEBHIST), or log files (LOG).
    Event Type – Describes the nature of the event, e.g., file creation, process execution, or a website visit.
    User & Hostname – Useful when investigating multi-user systems.
    Filename & Path – Identifies where the file resides in the system.
    Notes & Extra Fields – May contain additional insights depending on the data source.

    Filtering and Data Reduction: The Key to Efficiency: With thousands of rows to sift through, filtering is your best friend. Here’s how to break down the data efficiently: 1.
    Start with the Big Picture: Before zooming into specifics, look at broad trends. For example: What are the peak activity hours? Are there gaps in timestamps that indicate potential log tampering?

    2. Use Color Coding and Sorting: Tools like Timeline Explorer automatically highlight different types of events (e.g., executed programs in red, file opens in green, and USB device activity in blue). Use this to your advantage to focus on suspicious patterns.

    3. Leverage Advanced Search Techniques: Use CTRL-F for quick searches. Use wildcards like % to find variations of keywords. Apply column filters to hide non-essential data and zoom in on specific actions.

    4. Pivot on Key Artifacts: Instead of getting lost in a sea of data, use key artifacts to guide your analysis:

    RDP Sessions: Look at Windows event logs for suspicious remote access.
    USB Activity: Filter by removable media insertion events to track external device usage.
    Process Execution: Investigate software launches to detect malware or unauthorized tools.

    5. Export and Annotate: Tag critical findings and export them for reports. Timeline Explorer allows tagging rows, which helps in organizing evidence for presentations or case documentation.

    Beyond Spreadsheets: The Role of Specialized Tools: While CSV-based analysis is a good starting point, dedicated tools like Timeline Explorer offer significant advantages:

    Multi-tab support: Analyze multiple timelines simultaneously.
    Detailed Views: Double-click any row for a structured breakdown of event details.
    Pre-set Layouts: Timeline Explorer provides optimized column layouts for different types of forensic investigations.

    Pro Tips for Your First Timeline Analysis:

    Minimize Distractions – Hide unnecessary columns to maximize screen space.
    Stay Organized – Label key findings and use tags to revisit them easily.
    Use Comparative Analysis – If investigating multiple systems, compare hostnames and user activity.
    Automate Where Possible – Scripts can help extract high-priority data points quickly.

    Conclusion: Timeline analysis is an incredibly powerful forensic technique, but its effectiveness depends on how well you filter, categorize, and interpret the data. By mastering tools like log2timeline, psort, and Timeline Explorer, you can efficiently reconstruct digital events and uncover critical evidence. As you gain experience, you’ll develop personal best practices and preferred filtering methods. The key is to approach each case systematically, focusing on high-value artifacts while avoiding data overload. Happy hunting! ------------------------------------------Dean-----------------------------------------------------------
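The filtering and automation tips above can be sketched with nothing but the standard library. The column names here ("source_type", "message") are illustrative, so check your actual psort CSV headers before reusing anything like this:

```python
import csv
import io

def filter_timeline(csv_text, source_type, keyword=""):
    """Minimal sketch of data reduction on a super timeline exported to CSV.

    Keeps rows whose source type matches (e.g. WEBHIST, REG) and whose
    message contains an optional case-insensitive keyword.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader
            if row["source_type"] == source_type
            and keyword.lower() in row["message"].lower()]

# Example: pull web-history rows mentioning a suspicious download.
sample = (
    "datetime,source_type,message\n"
    "2025-02-17 10:01:00,WEBHIST,Visited http://example.com\n"
    "2025-02-17 10:02:00,REG,Run key updated\n"
    "2025-02-17 10:03:00,WEBHIST,Downloaded payload.zip\n"
)
hits = filter_timeline(sample, "WEBHIST", "payload")
```

In practice you would read the psort output file instead of an inline string, but the reduction logic is the same: filter early, pivot on what remains.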

  • Understanding Filesystem Timestamps: A Practical Guide for Investigators

    In the digital forensics world, understanding how timestamps work is crucial. Modern operating systems, with their complexity, make timestamp analysis both fascinating and challenging. Whether you’re tracking file modifications, uncovering malware activity, or investigating lateral movement, timestamps serve as valuable clues.

    How Timestamps Can Change Unexpectedly: Files don’t always follow the expected timestamp update rules. Various software and system activities can modify timestamps, sometimes in ways that obscure forensic evidence. Here are some common offenders:

    Microsoft Office Applications: These can update access times even when registry settings disable such changes.
    Anti-Forensic & Malware Tools: Attackers use file system APIs to modify timestamps, making malicious files blend in.
    Archiving Software: When extracting files from a ZIP or RAR archive, the modification time often reflects the original archive’s date rather than when the file was actually unzipped.
    Security Software & AV Scans: Some antivirus solutions update access timestamps during routine scans, making forensic analysis trickier.

    Key Takeaway: Timestamps should never be interpreted in isolation. Always correlate with other evidence, such as logs and system events, to understand why a timestamp changed.

    Timestamps Over the Network: A Hidden Trail: Did you know timestamps follow the same rules even when files are transferred over a network? This has major implications for forensic investigations.

    Lateral Movement and Timestamps: When an attacker moves files across systems using SMB (Server Message Block), the modification time of the file remains the same, while a new creation time is assigned. This tells us two things: The modification time predates the creation time—indicating a copy operation. The creation timestamp on the target system is the exact moment the file was transferred.
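The copy signature described above, a modification time that predates the creation time, reduces to a one-line check. A minimal Python sketch (the function name and sample timestamps are illustrative):

```python
from datetime import datetime

def looks_copied(modified, created):
    """A modification time earlier than the creation (birth) time means the
    file changed before it existed on this volume: the classic signature of
    a copy (e.g. over SMB), which preserves M but assigns a fresh B.
    """
    return modified < created

# Illustrative timestamps: modified 16:20:37, created 16:25:12.
copied = looks_copied(datetime(2025, 2, 17, 16, 20, 37),
                      datetime(2025, 2, 17, 16, 25, 12))
```

Run across an MFT export, a check like this surfaces candidate transferred files whose creation times then become pivot points for log correlation.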
    Why This Matters:

    Pivot Points in Investigations: The creation time can serve as a reference to correlate with logs and execution events.
    Detecting Lateral Movement: Attackers often use net use, WMI, or PsExec to copy and execute malware remotely. SMB traffic analysis (e.g., PCAP files) can reveal timestamps matching those in the filesystem.
    Registry Clues: The MountPoints2 key in NTUSER.DAT can help identify locally and remotely mounted volumes, shedding light on attacker activity.

    Key Takeaway: Identifying files where the modification time predates the creation time can uncover unauthorized file transfers and lateral movement techniques.

    Deciphering Timeline Analysis: The “MACB” Model: When analyzing a timeline, you’ll encounter different timestamp types represented by the “MACB” notation:

    M – Modified: Content of the file changed.
    A – Accessed: The file was read or executed.
    C – Metadata Changed: File attributes or permissions were altered.
    B – Birth: The file’s creation time.

    Example: Understanding a Timeline Entry: Let’s say you analyze C:\akash.exe and see these entries:

    2025-02-17 16:20:37 m.c. C:\akash.exe
    2025-02-17 16:25:12 .a.b C:\akash.exe

    What This Means: The first line (m.c.) shows a modification and metadata-change timestamp of 16:20:37. The second line (.a.b) tells us the file was accessed and created (copied) at 16:25:12. Conclusion? The file was copied to the system at 16:25:12 but carries a modification time of 16:20:37—confirming it existed, and was last modified, somewhere else before it landed on the target machine.

    Common Timestamp Combinations:

    Notation   Meaning
    m.cb       Modified, metadata changed, birth (created)
    .a..       Accessed only
    mac.       Modified, accessed, metadata changed

    Key Takeaway: Timeline analysis isn’t just about reading timestamps—it’s about understanding why those timestamps exist and what they reveal about past activities.

    Challenges in Timestamp Forensics:

    Overwritten Evidence: Timestamps get updated with new modifications, erasing past data.
You only see the latest modification, not the full history.
Time Skew Issues: If a system's clock was incorrect or tampered with, timestamps could be misleading.
File System Differences: NTFS timestamps differ from FAT32, ext4, and other filesystems, so always consider the OS and format.

Final Thoughts: The Investigator's Approach

To master timestamp forensics, you need more than just theoretical knowledge; you need an investigative mindset.

Correlate with Logs & Events: Match file timestamps with Windows Event Logs, Sysmon, and execution artifacts.
Leverage Registry Artifacts: MountPoints2, shellbags, and recent file lists provide extra context.
Test Your Hypotheses: If something doesn't add up, replicate it in a controlled environment.

By understanding how timestamps behave, and how they can be manipulated, you can uncover hidden traces left by attackers. Keep practicing, keep investigating, and timestamps will become one of your most valuable forensic tools.

------------------------------------------------Dean-----------------------------------------------------

🔍 Want to Learn More? Explore forensic tools like Plaso, Timesketch, and Velociraptor to take your timeline analysis skills to the next level!

Velociraptor: https://www.cyberengage.org/courses-1/mastering-velociraptor%3A-a-comprehensive-guide-to-incident-response-and-digital-forensics
Plaso: https://www.cyberengage.org/post/a-deep-dive-into-plaso-log2timeline-forensic-tools
https://www.cyberengage.org/post/running-plaso-log2timeline-on-windows

  • Understanding Filesystem Timelines in Digital Forensics

Updated on 17 Feb, 2025

When it comes to digital forensics, one of the most valuable tools in an investigator's arsenal is the filesystem timeline. This technique allows forensic analysts to reconstruct events by examining file metadata, helping to determine when files were created, modified, accessed, or deleted.

What is a Filesystem Timeline?

A filesystem timeline is a chronological record of file and directory activities within a given storage volume. It includes both allocated and unallocated metadata structures, which means it can provide insights into deleted or orphaned files as well. Different filesystems store timestamps in unique ways, but most record four essential time values:

M (Modification Time): When the file's content was last changed.
A (Access Time): The last time the file was opened or accessed.
C (Change Time): When the metadata of the file (like permissions, ownership, or name) was altered.
B (Birth Time or Creation Time): When the file was initially created on the system.

Supported Filesystems for Timeline Analysis

Modern forensic tools can parse timelines from various filesystem types, including:

NTFS (Windows)
FAT12/16/32 (older Windows systems, external storage devices)
EXT2/3/4 (Linux)
ISO9660 (CD/DVD media)
HFS+ (Mac systems)
UFS1 & UFS2 (Unix-based systems)

NTFS Timestamps – The Gold Standard in Windows Forensics

The NTFS filesystem, used in most Windows environments, maintains four key timestamps (MACB). However, two timestamps often confuse beginners:

Change Time (C): Updated when a file is renamed, permissions change, or ownership is modified.
Access Time (A): Historically unreliable, as Windows has altered how frequently it updates access times, even delaying updates by up to an hour or disabling them altogether in some versions.

For practical forensic work, focusing on Modification (M) and Creation (B) times is usually the best approach, as they are more reliable indicators of file activity.
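Under the hood, NTFS stores each of these values as a 64-bit FILETIME count of 100-nanosecond intervals since January 1, 1601 (UTC), as the next section discusses. Converting a raw FILETIME to a human-readable UTC time is a common step when working with carved metadata; a minimal sketch:

```python
from datetime import datetime, timezone

# Offset between the FILETIME epoch (1601-01-01 UTC) and the Unix epoch
# (1970-01-01 UTC), expressed in 100-nanosecond ticks.
EPOCH_DIFF_TICKS = 116444736000000000

def filetime_to_datetime(filetime: int) -> datetime:
    """Convert a raw 64-bit NTFS FILETIME value to a UTC datetime."""
    unix_seconds = (filetime - EPOCH_DIFF_TICKS) / 10_000_000
    return datetime.fromtimestamp(unix_seconds, tz=timezone.utc)

# The offset itself corresponds exactly to the Unix epoch:
print(filetime_to_datetime(116444736000000000))  # 1970-01-01 00:00:00+00:00
```

Real parsers (MFTECmd, Plaso, and the like) do this conversion for you; the sketch is only to make the 1601-vs-1970 epoch difference concrete.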
The Importance of Time Formats

One of the most crucial factors in forensic timeline analysis is understanding how different filesystems store timestamps:

NTFS timestamps are stored in UTC format, meaning they remain consistent regardless of time zone changes or daylight savings.
FAT timestamps use local time, which can lead to inconsistencies when analyzing files across different locations.

Additionally, NTFS uses a high-resolution 64-bit FILETIME structure, which counts time in 100-nanosecond intervals since January 1, 1601 (UTC). In contrast, UNIX systems count seconds since January 1, 1970.

How Actions Affect Timestamps

Different file actions impact timestamps in various ways. Here are some key forensic takeaways:

File Creation: All four timestamps (MACB) are set at the time of creation.
File Modification: Updates the M (modification), A (access), and C (metadata change) timestamps.
File Rename/Move (on the same volume): Only the C timestamp is updated.
File Deletion: No timestamps are updated (Windows doesn't maintain a deletion timestamp).
File Copying: The copied file retains the M timestamp from the original but receives a new B (creation) timestamp, making it possible to detect copied files by spotting instances where the modification date is older than the creation date.
Command Line vs. GUI Moves: Interestingly, moving a file via the command line can produce different timestamp behaviors compared to using drag-and-drop in the Windows GUI.

The Challenges of Windows Version Differences

Different Windows versions handle timestamps in slightly different ways. For example:

Windows Vista disabled access time updates (later re-enabled in Windows 10 and 11).
Windows 10 vs. 11 timestamp behaviors are largely similar, but forensic experts should always test assumptions on a similar system before drawing firm conclusions.

Practical Takeaways for Investigators

Prioritize M and B timestamps.
They are the most consistent and useful in tracking file activity.
Be cautious with A and C timestamps. These can be misleading due to system behaviors and version differences.
Recognize copied files. If a file's modified date is older than its creation date, it was likely copied from another source.
Validate your findings. If timestamps play a crucial role in your investigation, test your hypothesis on a similar system to confirm expected behaviors.

Final Thoughts

Filesystem timelines are an incredibly powerful tool in digital forensics. Understanding how different filesystems handle timestamps, recognizing anomalies, and testing assumptions can make all the difference in an investigation.

-------------------------------------------------Dean----------------------------------------------------

  • Mastering Timeline Analysis: Unraveling Digital Events with Forensic Precision

Tracking down malicious activity in a digital environment can feel overwhelming. Modern systems generate an endless stream of logs, timestamps, and events, making it difficult to separate suspicious activity from normal operations. However, by using timeline analysis, we can cut through the noise and identify key events that may indicate an intrusion, unauthorized access, or data exfiltration.

Why Timeline Analysis Matters

Every system generates logs that track user actions, system processes, and interactions with files. Think of these as footprints in the digital sand. However, attackers often try to erase logs, modify timestamps, or use "living-off-the-land" techniques: methods that blend their activity with normal system operations. This is where timeline analysis shines; it reconstructs past events, correlates different data points, and identifies inconsistencies.

Understanding Noise vs. Signal

When conducting a timeline investigation, you must recognize that most events are unrelated. Imagine listening to multiple music genres simultaneously: jazz, rock, and classical. At first, it sounds chaotic, but with practice, you can isolate specific melodies. Similarly, timeline analysis helps us filter out background system noise and focus on user activity or intrusions.

A key principle here is contextual analysis, the ability to differentiate between normal behavior and anomalies. For example, a SYSTEM process opening a web browser is suspicious, whereas a user logging in and accessing files is expected behavior.

Key Concepts in Timeline Forensics

Pivot Points: Every investigation needs a starting place, such as a suspicious file, an unusual login time, or a flagged process. Once we identify a pivot point, we work outward, analyzing what happened before and after that event.
Temporal Proximity: Events rarely happen in isolation. Looking at what occurred immediately before and after an incident helps piece together a clearer picture of what transpired.
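The temporal proximity idea boils down to filtering a timeline to a window around a pivot event. A minimal sketch, using a hypothetical in-memory timeline rather than a real Plaso export:

```python
from datetime import datetime, timedelta

def events_near_pivot(timeline, pivot_time, window_minutes=5):
    """Return (timestamp, event) pairs within +/- window_minutes of a pivot."""
    window = timedelta(minutes=window_minutes)
    return [(ts, event) for ts, event in timeline
            if abs(ts - pivot_time) <= window]

# Hypothetical timeline entries: (timestamp, description)
timeline = [
    (datetime(2025, 2, 17, 3, 12), "user login (unusual hour)"),
    (datetime(2025, 2, 17, 3, 14), "powershell.exe executed"),
    (datetime(2025, 2, 17, 9, 30), "scheduled AV scan"),
]
pivot = datetime(2025, 2, 17, 3, 12)  # the flagged login is our pivot point

for ts, event in events_near_pivot(timeline, pivot):
    print(ts, event)  # only the login and the PowerShell execution survive
```

Tools like Timeline Explorer offer this kind of windowed filtering interactively; the point here is just that "pivot plus proximity" is a simple, scriptable operation.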
Super Timelines vs. Targeted Timelines:

Super timelines aggregate all available logs, registry changes, browser history, and system events into one massive dataset. While thorough, they can be overwhelming.
Targeted timelines focus on specific artifacts, making analysis more manageable and efficient.

Tools of the Trade

Forensic analysts rely on powerful tools to extract and analyze timeline data:

Plaso (log2timeline.py): A tool for creating comprehensive super timelines by extracting data from multiple sources.
https://www.cyberengage.org/post/a-deep-dive-into-plaso-log2timeline-forensic-tools
https://www.cyberengage.org/post/running-plaso-log2timeline-on-windows
MFTECmd: Tool used to extract filesystem metadata and analyze timestamps for file access and modifications.
https://www.cyberengage.org/post/mftecmd-mftexplorer-a-forensic-analyst-s-guide
KAPE & Timeline Explorer: Useful for extracting logs and visualizing timeline data in an interactive format.
Aurora IR & Velociraptor: Open-source tools designed for incident response and timeline analysis.

How to Conduct Timeline Analysis

Define the Scope: Determine the timeframe of the incident to reduce data overload.
Extract Data: Use tools like Plaso, Sleuthkit, or KAPE to pull logs, registry modifications, browser activity, and event logs.
Filter and Organize: Remove unrelated data to highlight suspicious activity by de-duplicating logs and applying keyword filters.
Analyze: Investigate relationships between artifacts, identify anomalies, and correlate events to reconstruct the sequence of actions.
Report Findings: Document findings clearly, highlighting key events, attack vectors, and any indicators of compromise (IOCs).

Real-World Application

Imagine investigating a suspected data breach. Your starting point (pivot) is a flagged user login at an unusual time. By analyzing system logs, registry changes, and event timestamps, you notice the user executed PowerShell scripts shortly after logging in.
Further analysis reveals that they accessed confidential documents and transferred them to an external USB drive. This sequence of events confirms data exfiltration. Final Thoughts Timeline analysis is one of the most powerful forensic techniques available. It allows us to reconstruct events, pinpoint security incidents, and understand attacker behavior. Whether investigating malware infections, unauthorized access, or data theft, mastering timeline analysis is crucial in uncovering the truth hidden within digital artifacts. With practice and the right tools, you’ll be able to navigate complex datasets and make sense of even the most chaotic digital landscapes. Keep learning, stay curious, and always look for the hidden patterns that reveal the bigger picture. ----------------------------------------------------Dean----------------------------------------------------

  • Baseline Analysis in Memory Forensics: A Practical Guide

Introduction to Baseline Analysis in Digital Forensics

Baseline analysis is an essential technique in digital forensics and incident response, allowing analysts to efficiently identify anomalies in large datasets. At its core, baseline analysis involves comparing a suspect dataset with a "known good" dataset to detect outliers. This approach is particularly useful in memory forensics, where analysts must sift through hundreds of processes, drivers, and services to identify malicious activity.

One powerful tool that leverages baseline analysis for memory forensics is Memory Baseliner, developed by Csaba Barta. This tool integrates with Volatility 3 to streamline comparisons between a suspect memory image and a baseline memory image, helping analysts quickly filter out known good items and focus on potential threats.

------------------------------------------------------------------------------------------------------------

The Need for Baseline Analysis in Memory Forensics

Windows memory is complex, often containing over a hundred processes, each with numerous associated objects. Even seasoned professionals can struggle to pinpoint malware hidden within the sheer volume of data. A baseline memory image from a clean system allows for direct comparison, making it easier to isolate unusual artifacts.

By feeding Volatility with both a suspect memory image and a baseline image, Memory Baseliner enables forensic analysts to:

Quickly filter out known good artifacts.
Identify new or uncommon processes, drivers, and services.
Stack multiple images to determine the least frequently occurring artifacts.

This approach reduces the dataset to review, making investigations more efficient.
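At its simplest, the core comparison reduces to a set difference between suspect and known-good artifact names. A minimal sketch with hypothetical process lists (Memory Baseliner itself compares much more, including paths, command lines, and hashes):

```python
def baseline_diff(suspect, baseline):
    """Return artifacts present in the suspect image but absent from the baseline."""
    return sorted(set(suspect) - set(baseline))

# Hypothetical process lists from a clean and a suspect memory image
baseline_procs = {"svchost.exe", "lsass.exe", "explorer.exe", "winlogon.exe"}
suspect_procs  = {"svchost.exe", "lsass.exe", "explorer.exe", "winlogon.exe",
                  "updater32.exe"}

print(baseline_diff(suspect_procs, baseline_procs))  # → ['updater32.exe']
```

Everything the two images share drops out, leaving only the unknowns to review, which is exactly the reduction in workload the tool provides.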
------------------------------------------------------------------------------------------------------------

How Memory Baseliner Works

Memory Baseliner supports the following types of memory object analysis:

Processes and associated DLLs (-proc)
Drivers (-drv)
Windows Services (-svc)

To perform baseline analysis, two memory images must be provided:

Baseline Image (-b): A clean system memory dump.
Suspect Image (-i): The compromised system's memory dump.

python memory_baseliner.py -b baseline.raw -i suspect.raw -o output.txt

This command compares the suspect memory image against the baseline, saving results to output.txt for further analysis. A useful option is --showknown, which outputs both known and unknown items, allowing for flexible filtering in spreadsheet tools.

Key output details include:

Process Name & Command Line
Parent Process Details
Loaded DLLs
Import Table Hashes
Known/Unknown Status (whether the item was in the baseline)
Frequency of Occurrence (baseline vs. suspect image)

These data points help analysts identify anomalies that might indicate malware presence.

------------------------------------------------------------------------------------------------------------

Stacking for Least Frequency of Occurrence Analysis

Stacking is another powerful feature of Memory Baseliner that analyzes multiple memory images to detect rare artifacts. Since malware-related items tend to be less common across systems, identifying low-frequency occurrences can highlight suspicious activity.

Stacking Example

python memory_baseliner.py -d memory_images_folder -procstack -o stacked_output.txt

Here, the tool scans multiple images in the memory_images_folder directory, identifying the least frequently occurring processes. By focusing on rare executables, DLLs, drivers, or services, analysts can reduce the dataset and prioritize investigative leads. However, false positives may still exist, requiring manual review.
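Stacking is essentially frequency counting across images, sorted ascending so the rarest artifacts surface first. A sketch of the idea with hypothetical per-image process lists:

```python
from collections import Counter

def stack_artifacts(images):
    """Count how many images each artifact appears in; rarest come first."""
    counts = Counter()
    for artifacts in images:
        counts.update(set(artifacts))  # count each artifact once per image
    return sorted(counts.items(), key=lambda kv: kv[1])

# Hypothetical process lists from three memory images
images = [
    ["svchost.exe", "lsass.exe", "explorer.exe"],
    ["svchost.exe", "lsass.exe", "explorer.exe"],
    ["svchost.exe", "lsass.exe", "beacon.exe"],  # one outlier
]

print(stack_artifacts(images)[0])  # → ('beacon.exe', 1)
```

Artifacts present on every system sink to the bottom of the list, while one-off binaries, the likeliest malware candidates, float to the top for manual review.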
------------------------------------------------------------------------------------------------------------

Speeding Up Analysis with JSON Baselines

One challenge of Memory Baseliner is its processing time. Large memory images can take up to 15 minutes to analyze. To optimize this, the tool allows users to create and reuse JSON baseline files.

JSON Baseline Usage

Create a JSON baseline:

python memory_baseliner.py -b baseline.raw --jsonbaseline baseline.json --savebaseline

Load the JSON baseline for faster analysis:

python memory_baseliner.py -i suspect.raw --jsonbaseline baseline.json --loadbaseline -o output.csv

By leveraging JSON files, analysts can bypass re-analysis of the baseline memory image, significantly speeding up the comparison process.

------------------------------------------------------------------------------------------------------------

So far we have covered the theory; now let's move to practical execution and analysis so you won't face the errors I did. Here is how to install the script.

Setting Up Memory Baseliner in WSL

Before we dive in, let's set up the tool properly.

Clone the repository into your WSL environment:

git clone https://github.com/csababarta/memory-baseliner.git

Navigate to the cloned directory and move the scripts into the Volatility3 folder:

mv memory-baseliner/*.py ~/Memorytool/volatility3/

Verify the setup by running:

python3 baseline.py

If it runs without errors, you're good to go!

-------------------------------------------------------------------------------------------------------------

Using Memory Baseliner: Practical Examples

Now that we're set up, let's start running some analysis.
Step 1: Process Baselining

To dump all processes and compare them against a baseline:

python3 baseline.py -proc --state -b /mnt/c/Users/Akash's/Downloads/20250213/20250213.mem -i /mnt/c/Users/Akash's/Downloads/20250213Horizon/20250213.mem --showknown -o /mnt/c/Users/Akash's/Downloads/proc_all.txt

Step 2: Driver Baselining

To dump all loaded drivers:

python3 baseline.py -drv -b /mnt/c/Users/Akash's/Downloads/20250213/20250213.mem -i /mnt/c/Users/Akash's/Downloads/20250213Horizon/20250213.mem --showknown -o /mnt/c/Users/Akash's/Downloads/driv-all.txt

Step 3: Service Baselining

To analyze running services:

python3 baseline.py -svc --state -b /mnt/c/Users/Akash's/Downloads/20250213/20250213.mem -i /mnt/c/Users/Akash's/Downloads/20250213Horizon/20250213.mem --showknown -o /mnt/c/Users/Akash's/Downloads/svc-all.txt

------------------------------------------------------------------------------------------------------------

Output: Converting Output for Better Analysis

By default, the tool outputs pipe-separated (|) text files, which aren't ideal for analysis in tools like Timeline Explorer or Excel. To convert them to CSV (note: sed's -i flag edits in place, so drop it and use a redirect instead):

sed 's/|/,/g' /mnt/c/Users/Akash's/Downloads/svc-all.txt > /mnt/c/Users/Akash's/Downloads/svc-all.csv

Do the same for the other files (proc_all.txt, driv-all.txt).

------------------------------------------------------------------------------------------------------------

Key Analysis Techniques

Once you have the data, here's how to make sense of it:

Process Baselining (-proc): This generates a lot of information. To narrow down the data to just process names, filter for .exe in the DLL NAME column. This works because the executable binary (.exe) will also appear in the loaded DLL list. Combine this with PROCESS STATUS=UNKNOWN to quickly identify processes that were not present in the original baseline image.
If you want to investigate loaded DLLs, filter for DLL STATUS=UNKNOWN and focus on DLLs with the lowest frequency of occurrence in the IMAGE FoO column. If a DLL appears in many processes (i.e., has a high occurrence rate), it is less likely to be malicious.

The --cmdline option forces a comparison of the full process command line in addition to the process name. This helps detect anomalies, such as the 32-bit version of an application running even though the system typically uses the 64-bit version.

You can also compare process owners (--owner) and import hashes (--imphash). However, these comparisons might be too restrictive unless your baseline image is very similar to the suspect image.

Driver Baselining

If your baseline image is a close match, you should see only a few new drivers added to the system. Focus on STATUS=UNKNOWN entries first. Review the PATH column to check if any drivers are loaded from unusual locations outside the standard \Windows\System32\Drivers\ and \Windows\System32\ paths.

Import hashes (ImpHash) can often be calculated for many drivers present in memory. For deeper analysis, add the --imphash comparison option to detect variations of known drivers.

Service Baselining

The STATE column shows whether a service was running. As a first step, filter for SERVICE_RUNNING to focus on active malware.

The --state option allows you to compare service configurations. This helps detect services that were disabled in the baseline but enabled in the suspect image, a common persistence tactic used by malware. It can also reveal services that were disabled in the suspect image but should be enabled (e.g., Windows updates or security software).

Malware attempting to maintain persistence often uses SERVICE_AUTO_START. Filtering for this value can help identify potential threats. Some malware executes only once and then stops. These may use different start types, such as SERVICE_DEMAND_START, and might appear as SERVICE_STOPPED.
To get a complete picture, examine all UNKNOWN services, but segmenting the data in different ways can make anomalies more obvious. Most Windows services run under built-in accounts like LOCAL SERVICE or the computer account (HOSTNAME$). Look for services running under user accounts, as these could indicate unauthorized activity.

-------------------------------------------------------------------------------------------------------------

Practical Tips for Using Memory Baseliner

Use tailored baseline images: A baseline from a similar system build reduces noise.
Filter results in Excel: Use UNKNOWN status and .exe filters to highlight suspicious processes.
Leverage JSON baselines: Saves time on repeat analyses.
Validate findings with additional tools: Use Volatility's malfind or yarascan plugins for deeper malware analysis.

------------------------------------------------------------------------------------------------------------

Conclusion

Baseline analysis is a crucial technique in memory forensics, enabling rapid identification of suspicious activity by filtering known good artifacts. Memory Baseliner simplifies this process, providing efficient comparisons between suspect and clean memory images.

Memory Baseliner is a powerful addition to any forensic analyst's toolkit. By integrating it into investigations, analysts can significantly reduce data review time and enhance their ability to detect stealthy malware infections.

---------------------------------------------Dean---------------------------------------------

  • Windows Hibernation Files: A Critical Artifact for Forensic Investigations

Introduction

Windows hibernation files are an essential artifact in digital forensic investigations, often overlooked yet highly valuable. These files are created whenever a system is placed in hibernation or enters a "power save" mode. This most commonly occurs in laptop computers when the lid is closed while the system is running. However, with modern versions of Windows, the distinction between sleep and hibernation has become increasingly blurred. As a result, checking for the presence of a hibernation file should be a standard procedure in any forensic examination.

The hibernation file is named hiberfil.sys and is typically located in the root of the system drive (e.g., C:\hiberfil.sys). Understanding and analyzing this file can provide invaluable insights, as it contains a snapshot of the system's RAM before it went into hibernation.

----------------------------------------------------------------------------------------------------------

Importance of Hibernation Files in Forensics

One of the most significant advantages of hibernation files is that they offer forensic investigators an opportunity to retrieve a memory image of a system, even if it has been shut down before an investigation begins. This provides two key benefits:

Historical Memory Analysis: If the system was hibernated days, weeks, or even months ago, the hibernation file may contain valuable forensic artifacts from that time.
Comparative Memory Analysis: If the system is currently running, the investigator now has two memory images to analyze: the current RAM dump and the historical hibernation file.

Understanding the Hibernation File Format

Windows hibernation files use compression, and their format varies across different versions of Windows. Due to these changes, specialized tools are required to extract and analyze the memory contents from hiberfil.sys.
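Since checking for a hibernation file should be standard procedure, the check itself is easy to script. A minimal triage sketch; the root-path parameter is a stand-in for wherever the evidence volume is mounted (and note that on a live Windows system, reading hiberfil.sys normally requires elevated privileges):

```python
from datetime import datetime, timezone
from pathlib import Path

def check_hiberfil(system_root="C:/"):
    """Report presence, size, and last-modified time of hiberfil.sys
    under the given volume root; returns None if absent."""
    hiberfil = Path(system_root) / "hiberfil.sys"
    if not hiberfil.exists():
        return None
    stat = hiberfil.stat()
    return {
        "path": str(hiberfil),
        "size_mb": stat.st_size // (1024 * 1024),
        "modified": datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc),
    }

info = check_hiberfil()
print(info if info else "No hibernation file found")
```

The modified time is worth noting in your case log: it approximates when the system last hibernated, which bounds how old the captured memory snapshot is.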
Tools for Extracting and Analyzing Hibernation Files

Several tools exist to process hibernation files and convert them into usable memory images:

1. Volatility Framework

Volatility is a well-known open-source memory forensics framework with built-in support for Windows hibernation files. The imagecopy plugin in Volatility 2 can convert hibernation files into raw memory dumps for further analysis.

Command to convert a hibernation file:

vol.py -f /memory/hiberfil.sys imagecopy -O hiberfil.raw

In Volatility 3, the imagecopy plugin is replaced by the layerwriter plugin:

python3 vol.py -f /memory/hiberfil.sys layerwriter

2. Hibr2Bin by Matthew Suiche

Matthew Suiche developed Hibr2Bin, a tool designed to convert hibernation files into raw memory images. The tool has been widely used in digital forensics but has not been updated recently, leading to compatibility issues with Windows 10 and Windows 11 hibernation files.

3. Hibernation Recon (Arsenal Recon)

One of the most advanced tools for analyzing hibernation files is Hibernation Recon by Arsenal Recon. Running the tool produces a number of output files. This tool not only decompresses hibernation files but also extracts slack space left behind by older hibernation files. This is significant because:

Older hibernation files may leave remnants of past system states.
Data from previous hibernation sessions can still be recovered.

4. Other Forensic Tools

Several forensic tools have integrated hibernation file analysis capabilities, including:

BulkExtractor (string searching and data carving)
Magnet Forensics AXIOM
Belkasoft Evidence Center
Passware

Hibernation File Behavior in Windows 8, 10, and 11

With Windows 8 and later, Microsoft introduced a new hibernation file format. Key changes include:

Automatic Zeroing of Data: When a system resumes from hibernation, data is read and then zeroed from hiberfil.sys, making recovery of older memory states more challenging.
Variable System Behavior: Some systems retain older hibernation data longer than others. Differences are likely influenced by hardware components, particularly SSD vs. HDD storage.

Windows Power Management and Hibernation Artifacts

Microsoft has made significant changes to power management in modern Windows versions. These include new power states that affect whether a hibernation file is created:

Modern Standby (Connected Standby): Keeps the system in a low-power state rather than full hibernation.
Hybrid Sleep: A combination of sleep and hibernation, which may not always generate a hiberfil.sys file.
Fast Startup: Saves a portion of memory state to hiberfil.sys, but may not store full RAM contents.

Investigators can use the powercfg.exe tool to check the system's current power settings:

powercfg /a

This command lists all available power states on the system and helps determine whether a hibernation file should be present.

Conclusion

Hibernation files are a goldmine of forensic data, especially in cases where a system has already been shut down. Understanding how to extract, convert, and analyze hiberfil.sys can provide forensic analysts with critical insights into system activity. With newer Windows versions introducing changes to hibernation behavior, forensic professionals must stay updated with the latest tools and methodologies to ensure effective investigations.

------------------------------------------Dean-------------------------------------------------

  • Mastering AmcacheParser and appcompatprocessor.py for Amcache.hiv Analysis

To understand the Amcache hive, check out the article below:
https://www.cyberengage.org/post/amcache-hiv-analysis-tool-registry-explorer

------------------------------------------------------------------------------------------------------------

Introduction

When conducting digital forensics, understanding the execution history of a system is crucial. Windows operating systems maintain execution artifacts that provide insight into which programs and binaries were executed, making them valuable for forensic investigations. Two of the most powerful tools for analyzing execution artifacts are AmcacheParser and appcompatprocessor.py.

------------------------------------------------------------------------------------------------------------

AmcacheParser: Understanding Execution Artifacts

What is AmcacheParser?

AmcacheParser is a tool developed by Eric Zimmerman that parses the Amcache.hve registry hive, a critical artifact in Windows forensic analysis. This hive stores execution details about applications and drivers, making it a rich source of evidence for identifying malware, persistence mechanisms, and general system activity.

Key Features and Data Extracted

By default, AmcacheParser focuses on unassociated file entries but can be expanded to include full details of all program-related entries using the -i switch. The tool extracts various data points, including:

SHA-1 hash of the executed file
Full file path
File size
File version number
File description and publisher
Last modified date
Compilation timestamp
Language ID

Command:

E:\Scripted ForensicTools\Zimmerman tools\Get-ZimmermanTools\net6> .\AmcacheParser.exe -i -f C:\Windows\appcompat\Programs\Amcache.hve --csv "E:\Output for testing\Website investigation\Amcache.hiv"

Practical Usage in Incident Response

AmcacheParser outputs multiple .csv files, categorized based on their source keys in the Amcache.hve registry file.
Microsoft frequently updates Amcache, adding new keys and values, which AmcacheParser is designed to parse. The most critical output files include:

Amcache_ProgramEntries.csv: Contains metadata on installed applications (from the InventoryApplication key).
Amcache_UnassociatedFileEntries.csv: Lists executables that do not belong to a known installed program; a crucial file for finding standalone malware, credential dumpers, or reconnaissance tools.
Amcache_DriverBinaries.csv: Contains information about installed drivers, helping investigators identify malicious kernel drivers.

How AmcacheParser Helps in Threat Hunting

AmcacheParser allows analysts to apply allowlisting and blocklisting based on SHA-1 hashes. This feature is extremely useful in threat hunting across multiple systems, enabling the quick identification of malicious files by comparing them against known-bad hash lists.

For example, if an organization is investigating a ransomware attack, running AmcacheParser across affected systems can reveal:

Unknown executables appearing shortly before encryption starts
Execution paths that indicate lateral movement
Suspicious programs launched from unconventional directories like C:\Users\Public\ or C:\ProgramData\

------------------------------------------------------------------------------------------------------------

appcompatprocessor.py: Automating Execution Analysis

What is appcompatprocessor.py?

Developed by Matias Bevilacqua, appcompatprocessor.py is a Python-based tool designed to parse and analyze execution artifacts from AppCompatCache (ShimCache) and Amcache. Unlike standalone parsing tools, appcompatprocessor.py integrates these data sources into a SQLite database, allowing for efficient and powerful queries.

Why Do AppCompatCache and Amcache Matter?

Both artifacts provide a record of program executions but differ in their capabilities:

AppCompatCache (ShimCache): Primarily tracks file executions, even if they have since been deleted.
However, it does not store execution timestamps.
Amcache: Contains richer metadata, including SHA-1 hashes, timestamps, and file paths.

By combining both sources, appcompatprocessor.py enables forensic analysts to get a comprehensive timeline of executed files, even if malware has attempted to clean up traces.

Key Features of appcompatprocessor.py

Once data is ingested into SQLite, analysts can leverage various analysis modules to detect anomalies and malicious activity. Some of the most powerful modules include:

1. Search Modules

search: Performs regular expression searches within the database. Prebuilt regex patterns can detect suspicious patterns (e.g., execution from network shares, encoded scripts, or known hacking tools).
fsearch: Searches specific fields like FileName, FilePath, LastModified, or ExecutionFlag.

2. Anomaly Detection Modules

filehitcount: Counts occurrences of each executable, highlighting unusual or rarely executed binaries.
tcorr: Temporal correlation of executions, helping identify which processes frequently run together (e.g., rundll32.exe executing shortly after a suspicious binary).
reconscan: Detects reconnaissance tools running in close sequence, assigning a likelihood score to identify probing activity.
leven: Identifies slight variations in file names that might indicate masquerading techniques (e.g., lssass.exe instead of lsass.exe).
stack: Performs least frequency of occurrence analysis, helping isolate rare but potentially malicious binaries.

3. Randomized File Name Detection

rndsearch: Identifies randomly named executables that could indicate malware execution.

-----------------------------------------------------------------------------------------------------------

Case Study: Investigating a Potential Malware Execution

A security operations center (SOC) detects suspicious behavior on a Windows endpoint.
An unusual svchost.exe process is found running from C:\ProgramData\, which is an uncommon location for a system process.

Investigation Steps Using These Tools

1. Run AmcacheParser to extract execution history:

AmcacheParser.exe -f C:\Windows\appcompat\Programs\Amcache.hve -i -o output_folder

The results in Amcache_UnassociatedFileEntries.csv show svchost.exe executing from an unusual location. An SHA-1 hash lookup confirms the file is unknown and possibly malicious.

2. Use appcompatprocessor.py to correlate ShimCache and Amcache data:

python3 appcompatprocessor.py -o analysis.db -a amcache -s SYSTEM -A Amcache.hve

Running stack on FilePath highlights C:\ProgramData\svchost.exe as a rare occurrence.

tcorr shows it was executed right before cmd.exe, indicating potential scripting activity.

reconscan detects use of ipconfig, whoami, and nltest, suggesting reconnaissance activity.

3. Pivot and Expand the Investigation

Running fsearch for C:\ProgramData in the database finds another suspicious file, svc.bat, confirming a script-based attack.

The search module detects sdelete.exe, a known anti-forensic tool, suggesting the attacker attempted to delete traces.

-------------------------------------------------------------------------------------------------------------

Conclusion

By using AmcacheParser and appcompatprocessor.py together, the SOC team quickly identified:

A rogue executable masquerading as a system process

Correlation between execution times and malicious commands

Attempts to delete forensic evidence

This investigation underscores why these tools are invaluable for security analysts and incident responders.

-------------------------------------------------------------------------------------------------------------

Final Thoughts

Understanding AmcacheParser and appcompatprocessor.py is essential for anyone working in digital forensics, SOC operations, or incident response.
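The stacking step in the case study rests on least-frequency-of-occurrence analysis, which is easy to illustrate. The sketch below is a toy version, not appcompatprocessor.py's implementation, and the FilePath field name is an assumption borrowed from the case study:

```python
from collections import Counter

def stack_field(rows, field="FilePath", top_rare=5):
    """Least-frequency-of-occurrence analysis over one field.

    The rarest values across a fleet of systems are often the most
    interesting: common binaries appear everywhere, one-offs stand out.
    """
    counts = Counter(r[field] for r in rows if r.get(field))
    return sorted(counts.items(), key=lambda kv: kv[1])[:top_rare]
```

Fed fifty records for C:\Windows\System32\svchost.exe and a single one for C:\ProgramData\svchost.exe, the rare path surfaces at the top of the list, which is exactly the pivot the SOC made above.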
These tools provide deep visibility into program execution, helping analysts detect malware, track adversaries, and correlate execution artifacts. Master them, and you'll have a significant edge in forensic investigations and threat hunting. 🚀

--------------------------------------------Dean------------------------------------------
