
- Understanding NTFS Journaling ($LogFile and $UsnJrnl) : A Goldmine for Investigators
Updated 18 Feb 2025
Ever wonder how your computer keeps track of all the changes happening to files and folders? That's where NTFS journaling comes in. Think of it as a built-in security camera for your file system, constantly recording what's going on. For forensic investigators, this is a goldmine of information, helping them rewind time and see exactly what happened on a system.
--------------------------------------------------------------------------------------------------------
The Two Journals: $LogFile and $UsnJrnl
NTFS actually has two separate journaling features, each with its own purpose:
$LogFile – This is the file system's safety net. It records every change happening at a low level, ensuring that if the system crashes, it can recover without corrupting data.
$UsnJrnl – This is more like an activity tracker, logging file and folder changes so applications (like antivirus or backup software) can react efficiently.
Both of these logs give investigators an incredible amount of visibility into past file system activity. Since they're also backed up inside Volume Shadow Copies, they can provide insights stretching back days, weeks, or even months!
--------------------------------------------------------------------------------------------------------
How These Logs Help Investigators
Think of an airplane's black box. There are two recorders: one tracks flight data (like altitude and speed), and the other records cockpit conversations. In a similar way:
$LogFile is like the flight data recorder, tracking deep system changes at a technical level.
$UsnJrnl is like the cockpit voice recorder, summarizing higher-level file activity.
A better analogy might be comparing them to network security tools:
$LogFile is like full packet capture—detailed but heavy on data.
$UsnJrnl is like NetFlow logs—less detailed but covering a longer time span.
--------------------------------------------------------------------------------------------------------
Breaking Down $LogFile
$LogFile's main job is keeping NTFS stable, making sure the file system doesn't corrupt itself if something goes wrong. It records:
Changes to the Master File Table (MFT)
Directory updates in $I30 indexes
Modifications to $UsnJrnl itself (if enabled)
Changes in the $Bitmap file, which tracks disk space
Even self-maintenance events (it logs its own updates!)
What makes $LogFile especially valuable is that it doesn't just log what changed—it records the actual data that was modified. This means forensic analysts can sometimes recover deleted data by analyzing these logs. However, since NTFS constantly updates multiple system files at once, even simple actions like creating a new file can generate dozens of log entries.
--------------------------------------------------------------------------------------------------------
The Downside: $LogFile is Short-Lived
The catch? $LogFile is only 64MB by default. That might sound like a lot, but with so much happening under the hood, it typically only holds a few hours' worth of data on active systems. However, if a system is mostly idle or you're looking at logs from a secondary drive, you might find logs stretching back days or even weeks.
Want to check or increase your $LogFile size? Use these commands:
Check current size: chkdsk /L
Increase size: chkdsk /L:<sizeInKB> (for example, chkdsk /L:131072 requests a 128 MB log)
--------------------------------------------------------------------------------------------------------
What NTFS Journaling Won't Do
While NTFS journaling is great at tracking file system changes, it doesn't protect actual file content.
If your system crashes while a file is being written, NTFS can repair the file system, but the file itself might still be corrupt. This is why databases and critical applications maintain their own transaction logs—to ensure their data stays intact even if the system crashes.
--------------------------------------------------------------------------------------------------------
$UsnJrnl
The NTFS file system has a hidden gem called the Update Sequence Number (USN) Change Journal, stored in a system file named $UsnJrnl. This file keeps a log of all file and directory changes, along with a reason code indicating what type of modification occurred. While it does help with system recovery in some cases (like quickly re-indexing a volume), its primary role is to let applications efficiently track file changes across the system.
Why Does $UsnJrnl Matter?
Think about how Windows Backup works. Instead of scanning every single file to see what's changed, it just checks the USN journal for recent modifications—saving tons of time. The same applies to antivirus software, the Windows Search Index, the File Replication Service (FRS), and other applications that need to monitor file activity. Because of its efficiency, Microsoft enabled $UsnJrnl by default starting with Windows Vista (it was available in Windows 2000 and XP but usually disabled).
--------------------------------------------------------------------------------------------------------
How It Works: A Simpler View
Compared to the other NTFS journal, $LogFile, which tracks every tiny system change, $UsnJrnl is much more concise and user-friendly. If a new file is created, for instance, $LogFile might log over 20 detailed system events, while $UsnJrnl simplifies it down to just a few records. This makes it a lot easier for investigators and forensic tools to interpret.
Each USN record logs:
File or folder name
MFT entry number (its unique identifier in NTFS)
Parent directory's MFT entry number
Timestamp of the change
Reason code (what changed?)
File size and attributes (hidden, read-only, archived, etc.)
Because it logs only major changes, $UsnJrnl can store several days or even weeks of history, depending on system activity. And since these logs are often backed up in volume shadow copies, forensic investigators can sometimes recover over a month's worth of historical file activity.
--------------------------------------------------------------------------------------------------------
Where and How Is It Stored?
$UsnJrnl isn't stored like a regular file—the USN records actually live in an alternate data stream (ADS) of the $UsnJrnl system file called $J. Unlike numbered log entries, each record is located by its offset into the $J stream. Every file and directory carries an Update Sequence Number in its MFT record, which links to the matching entry in $J. Since $UsnJrnl is a locked and hidden system file, standard tools won't allow access; you'll need forensic utilities to extract it. Also, because $J is a sparse file (meaning it appears large but isn't fully written to disk), copying it causes the system to fill in the missing parts, leading to massive file sizes. It can often exceed 3 GB on a typical workstation, making remote collection tricky. Fortunately, it compresses well.
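To make the record layout concrete, here is a minimal Python sketch that walks version 2 USN records (USN_RECORD_V2) in an extracted $J stream. The input filename extracted_J.bin is hypothetical (a $J stream already carved out with a forensic tool), and real extracts typically begin with a long sparse run of null bytes, which the loop skips over:

import struct
from datetime import datetime, timedelta, timezone

def filetime_to_dt(ft):
    # FILETIME counts 100-nanosecond intervals since 1601-01-01 UTC
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)

with open('extracted_J.bin', 'rb') as f:   # hypothetical carved $J stream
    data = f.read()

pos = 0
while pos + 60 <= len(data):
    rec_len, major = struct.unpack_from('<IH', data, pos)
    # skip sparse padding and slack; records are 8-byte aligned
    if rec_len < 60 or major != 2 or pos + rec_len > len(data):
        pos += 8
        continue
    (_, _, _, file_ref, parent_ref, usn, ts, reason, _src, _sid,
     attrs, name_len, name_off) = struct.unpack_from('<IHHQQQQIIIIHH', data, pos)
    name = data[pos + name_off : pos + name_off + name_len].decode('utf-16-le', 'replace')
    # the low 48 bits of a file reference are the MFT entry number
    print(filetime_to_dt(ts), name, 'entry:', file_ref & 0xFFFFFFFFFFFF, 'reason:', hex(reason))
    pos += (rec_len + 7) & ~7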
--------------------------------------------------------------------------------------------------------
A Hidden Benefit: Recovering Deleted USN Records
Even though the journal size is capped (usually 32 MB), Windows tricks NTFS into thinking it's a much larger file. When new entries are added, Windows allocates disk space at the end while deallocating older parts, marking them as sparse (empty). This means deleted USN records often remain in unallocated space, allowing forensic tools to recover them.
--------------------------------------------------------------------------------------------------------
The $Max Data Stream
There's another small alternate data stream in $UsnJrnl called $Max. It's tiny (about 32 bytes) and stores metadata such as the maximum allowed size of the journal.
--------------------------------------------------------------------------------------------------------
Investigation with KAPE
Use KAPE to acquire the NTFS Master File Table (MFT) and journals, then use MFTECmd to parse the $MFT and the USN journal ($LogFile parsing is not available in MFTECmd). KAPE's triage compound target collects the $MFT, $J, $LogFile, and link-file targets in one pass, and its output structure, with raw files alongside parsed outputs, makes this an efficient workflow for gathering artifacts for analysis. KAPE can be run either as a GUI or from the command line; which you use is a matter of preference. A sample MFTECmd invocation follows at the end of this article.
-------------------------------------------------------------------------------------------------
Final Thoughts
The NTFS USN journal is an incredibly valuable forensic resource. It logs file changes in a structured, efficient manner and can provide a historical view of system activity stretching back weeks or even months. While Windows limits its size, forensic analysts can often recover old records, making it a powerful tool in investigations, whether for system maintenance, security monitoring, or digital forensics.
----------------------------------------------Dean----------------------------------------------
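To make the parsing step concrete, a typical MFTECmd run over KAPE's collected files might look like the following. The paths are hypothetical; -f, -m, --csv, and --csvf are MFTECmd's standard switches, with -m supplying the $MFT so that parent paths in $J records can be resolved:

MFTECmd.exe -f "C:\kape_out\D\$Extend\$J" -m "C:\kape_out\D\$MFT" --csv C:\parsed --csvf usn_journal.csv
MFTECmd.exe -f "C:\kape_out\D\$MFT" --csv C:\parsed --csvf mft.csv

The resulting CSV files open cleanly in Timeline Explorer for filtering and pivoting.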
- Tracking Recently Opened Files in Microsoft Office: A Forensic Guide
When investigating user activity on a Windows system, knowing what files were accessed and when can provide critical insights. While Windows keeps a list of recently opened files in the RecentDocs registry key, Microsoft Office maintains an even more detailed record called File MRU (Most Recently Used). This registry key tracks documents, spreadsheets, and presentations opened in Office applications, often storing more history than RecentDocs.
-------------------------------------------------------------------------------------------------------------
Where Does Microsoft Office Store Recent Files?
Each version of Microsoft Office stores a File MRU list, which logs files opened in Word, Excel, PowerPoint, and other Office applications. The registry location varies based on the Office version and account type:
For Office versions 2013, 2016, 2019, and Microsoft 365:
NTUSER\Software\Microsoft\Office\<Version>\[App]\File MRU
(Office 2016, 2019, and Microsoft 365 all use "16.0" because they share the same code base.)
For Microsoft 365 tied to a personal Microsoft account:
NTUSER\Software\Microsoft\Office\<Version>\User MRU\LiveID_####\File MRU
For Microsoft 365 accounts tied to an organization (Azure Active Directory):
NTUSER\Software\Microsoft\Office\<Version>\User MRU\ADAL_####\File MRU
Alongside File MRU, Office also maintains a Place MRU key, which tracks folder locations accessed by the user.
-------------------------------------------------------------------------------------------------------------
What Information Can You Find in File MRU?
Each entry in File MRU contains:
✅ Full File Path – Unlike RecentDocs (which only stores filenames), File MRU lists the complete file location.
✅ Last Accessed Timestamp – Stored as a Windows 64-bit FILETIME value (written big-endian, as a hex string).
✅ Order of Access – The most recently opened document is stored as Item 1, followed by older entries.
✅ Up to 100+ Entries – Newer Office versions keep a longer history.
This is particularly useful because it allows forensic analysts to see exactly when a file was last opened and where it was stored (local drive, USB, network share, etc.).
-------------------------------------------------------------------------------------------------------------
Tracking More Than Just File Open Times: Reading Locations
Starting with Office 2013, Microsoft introduced the Reading Locations registry key, which remembers where a user left off in a document. This is the feature behind the "Welcome back! Pick up where you left off" message when reopening a Word document.
Registry Location for Reading Locations
NTUSER\Software\Microsoft\Office\<Version>\Word\Reading Locations
How Can This Data Be Used in Investigations?
Forensic analysts and cybersecurity professionals can use File MRU and Reading Locations to:
🔍 Track User Activity – Identify recently accessed files and determine if unauthorized documents were viewed.
💾 Recover Deleted Evidence – Even if a file is deleted, its MRU entry remains in the registry until overwritten.
📂 Identify Storage Locations – Determine if files were accessed from USB drives, network shares, or cloud folders.
⏳ Estimate Document Usage Duration – By comparing the File MRU (last opened time) with Reading Locations (last closed time), you can estimate how long a file was in use.
Final Thoughts
When conducting an investigation, don't just stop at RecentDocs—dig deeper into the Microsoft Office registry keys for a clearer picture of file usage!
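To illustrate the timestamp decoding, File MRU item values in Office 2013 and later are strings of the form [F...][T...][O...]*path, where the T field is a FILETIME written in hex. A minimal Python sketch (the value shown is a hypothetical example) converts it to a readable UTC time:

import re
from datetime import datetime, timedelta, timezone

# Hypothetical File MRU item value; the [T...] field is a FILETIME in hex
value = r"[F00000000][T01D966EFF1000000][O00000000]*C:\Users\Dean\Documents\report.docx"

m = re.match(r"\[F[0-9A-F]+\]\[T([0-9A-F]+)\]\[O[0-9A-F]+\]\*(.*)", value)
if m:
    filetime = int(m.group(1), 16)   # 100-nanosecond intervals since 1601-01-01 UTC
    opened = datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=filetime // 10)
    print(opened.isoformat(), '->', m.group(2))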
🚀 --------------------------------------------Dean----------------------------------------------------------
- Understanding, Collecting, Parsing the $I30
Updated on Feb 17, 2025
Introduction:
In the intricate world of digital forensics, every byte of data tells a story. Within the NTFS file system, $I30 files stand as silent witnesses, holding valuable insights into file and directory indexing.
Understanding $I30 Files:
$I30 files function as indexes within NTFS directories, providing a structured layout of files and directories. They contain duplicate sets of $FILE_NAME timestamps, offering a comprehensive view of the file metadata stored within the Master File Table (MFT).
Utilizing $I30 Files as Forensic Resources:
$I30 files provide an additional forensic avenue for accessing MACB timestamp data. Even deleted files, whose remnants linger in index slack space, can often be recovered from these index files.
-------------------------------------------------------------------------------------------------------------
If you're into digital forensics, you've probably come across Joakim Schicht's tools. They're free, powerful, and packed with features for analyzing different forensic artifacts. One such tool, Indx2Csv, is a lifesaver when it comes to parsing INDX records like $I30 (directory indexes), $O (object IDs), and $R (reparse points). The cool thing about Indx2Csv is that it doesn't just look at active records; it also digs up deleted entries that are still hanging around due to file system operations. Plus, it can even scan for partial entries, which means you might be able to recover metadata for deleted files or folders, even if their complete records are gone.
How Does Indx2Csv Work?
Indx2Csv processes INDX records that have been exported from forensic tools like FTK Imager or The Sleuth Kit's icat. If you've used FTK Imager before, you might have seen files labeled as $I30 in directories. These aren't actual files but representations of the $INDEX_ALLOCATION attribute for that directory. You can export them and analyze them with Indx2Csv.
Output: (GUI mode of Indx2Csv)
If you're using The Sleuth Kit, you can extract the $INDEX_ALLOCATION attribute with this command:
icat DiskImage MFT#-160-AttributeID > $I30
(Just remember, the attribute type for $INDEX_ALLOCATION is always 160 in decimal.)
Once you've got the file, running Indx2Csv is straightforward:
Indx2Csv.exe -i exported_I30_file -o output.csv
Indx2Csv has several command-line options for tweaking how it scans and outputs data. You can check out the tool's GitHub page for a complete list of commands.
-------------------------------------------------------------------------------------------------------------
Alternative Tools: Velociraptor & INDXparse.py
While Indx2Csv is great, it's not the only tool in the game. Here are two other options worth mentioning:
Velociraptor
Velociraptor is an advanced threat-hunting and incident response tool that can also be used for forensic analysis. Unlike Indx2Csv, which works with exported INDX files, Velociraptor can analyze live file systems and mounted volumes. That means you don't have to manually locate and export the $I30 file—just point Velociraptor at a directory, and it'll handle the rest. For example, if you've mounted a disk image and want to analyze a directory, you can run:
velociraptor.exe artifacts collect Windows.NTFS.I30 --args DirectoryGlobs="C:\\Windows\\Dean\\*" --format=csv --nobanner > C:\output\I30-Dean.csv
This will save both active and deleted entries in a CSV file, which you can then analyze with Timeline Explorer or any spreadsheet app.
INDXparse.py
Another great option is INDXparse.py, a Python-based tool created by Willi Ballenthin. Like Indx2Csv, it focuses on $I30 index files, but since it's written in Python, it works on multiple operating systems, not just Windows.
Collection: You can use FTK Imager to collect artifacts like $I30.
Parsing: INDXParse can be used for parsing: https://github.com/williballenthin/INDXParse
The screenshot below is an example of INDXParse output. You can use the -c or -d parameter based on your needs (example invocations follow this article).
Note: To use INDXParse you need Python installed; I already have it set up on Windows, so it's easy for me.
Wrapping Up
Indx2Csv is a powerful, easy-to-use tool for forensic investigators who need to dig into INDX records. Whether you're analyzing active files, recovering deleted entries, or scanning for hidden metadata, it gets the job done. And if you need alternatives, Velociraptor and INDXparse.py offer additional flexibility for different situations. So, if you haven't tried Indx2Csv yet, give it a shot—you might be surprised at what you uncover!
--------------------------------------------Dean--------------------------------------------
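As a quick reference for the INDXParse.py options mentioned above, typical runs might look like this (file names hypothetical; per the project's documentation, -c emits CSV output and -d includes deleted entries recovered from slack):

python INDXParse.py -c exported_I30 > i30_records.csv
python INDXParse.py -c -d exported_I30 > i30_with_deleted.csv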
- The Truth About Changing File Timestamps: Legitimate Uses and Anti-Forensics: Timestomping
Changing a file's timestamp might sound shady, but there are actually some valid reasons to do it. At the same time, cybercriminals have found ways to manipulate timestamps to cover their tracks. Let's break it down in a way that makes sense.
When Changing Timestamps is Legitimate
Think about cloud storage services like Dropbox. When you sync your files across multiple devices, you'd want the timestamps to reflect when the file was last modified, not when it was downloaded to a new device. But here's the problem: when you install Dropbox on a new computer and sync your files, your operating system sees them as "new" files and assigns fresh timestamps. To fix this, cloud storage apps like Dropbox adjust the timestamps to match the original modification date. This ensures your files appear the same across all devices. It's a perfectly legitimate reason for altering timestamps and helps keep things organized.
---------------------------------------------------------------------------------------------------------
If you want to learn about cloud storage forensics, including Dropbox, OneDrive, Google Drive, and Box, check out the articles I've written at the link below. Happy learning!
https://www.cyberengage.org/courses-1/mastering-cloud-storage-forensics%3A-google-drive%2C-onedrive%2C-dropbox-%26-box-investigation-techniques
--------------------------------------------------------------------------------------------------------
So, where were we? Let's continue.
When Changing Timestamps is Suspicious
Hackers and cybercriminals love to manipulate timestamps too, but for completely different reasons. A common trick is to disguise malicious files by changing their timestamps to blend in with legitimate system files. For example, if a hacker sneaks malware into the C:\Windows\System32 folder, they can rename it to look like a normal Windows process. But to make it even less suspicious, they'll modify the timestamps to match those of other system files. This sneaky technique is called timestomping.
How Analysts Detect Fake Timestamps
Security analysts have developed several methods to spot timestomping. In the past, it was easier to detect because many tools didn't set timestamps with fractional-second accuracy. If a timestamp had all zeros in its decimal places, that was a red flag.
Examples:
1. Timestomping in $J
2. Timestomping in $MFT (very important): In the screenshot, the attacker timestomped eviloutput.txt, changing the $STANDARD_INFORMATION (0x10) timestamps to 2005 with an anti-forensic tool. But because such tools do not modify the $FILE_NAME (0x30) timestamps, those still show when the file was actually created.
3. Another example
But today, newer tools allow hackers to copy timestamps from legitimate files, making detection trickier. Here's how experts uncover timestamp manipulation:
Compare Different Timestamp Records
In Windows, files have timestamps stored in multiple places, such as the $STANDARD_INFORMATION and $FILE_NAME attributes. If these don't match up, something suspicious might be going on. Tools: MFTECmd, fls, istat, FTK Imager.
Look for Zeroed Fractional Seconds
Many timestomping tools don't bother with precise sub-second timestamps. If the decimal places in a timestamp are all zeros, it could indicate foul play. Tools: MFTECmd, istat.
Compare ShimCache Timestamps
Windows tracks when executables were first run using a system feature called ShimCache (AppCompatCache). If a file's recorded modification time is earlier than when it was first seen by Windows, that's a big red flag.
Tools: AppCompatCacheParser.exe, ShimCacheParser.py.
Check Embedded Compile Times for Executables
Every executable file has a compile time embedded in its metadata. If a file's timestamp shows it was modified before it was even compiled, something's off. Tools: Sysinternals' sigcheck, ExifTool.
Analyze Directory Indexes ($I30 Data)
Sometimes, old timestamps are still stored in the parent directory's index. If a previous timestamp is more recent than the current one, it's a clue that someone tampered with it.
Check the USN Journal
Windows keeps a log (the USN journal) of file creation events. If a file's creation time doesn't match the time the USN journal recorded, that's a clear sign of timestamp backdating.
Compare MFT Record Numbers
Windows allocates new MFT records roughly sequentially. If most files in C:\Windows\System32 have close MFT numbers but a backdated file has a much later number, it stands out as suspicious. Tools: MFTECmd, fls.
Real-World Example
Security analysts at the Dean service organization investigated a suspicious file (dean.exe) in C:\Windows\System32. Even though its timestamps matched legitimate files, further checks revealed:
$STANDARD_INFORMATION creation time was earlier than $FILE_NAME creation time.
The fractional seconds in its timestamps were all zeros.
The executable's compile time (found via ExifTool) was newer than its modification time.
Windows' ShimCache recorded a modification time later than the file system timestamp.
These findings confirmed the file had been timestomped, helping the team uncover a hidden malware attack.
-------------------------------------------------------------------------------------------------------------
Almost all anti-forensic tools have one thing in common: they modify the $STANDARD_INFORMATION ($SI) timestamps but not the $FILE_NAME ($FN) timestamps. Comparing these two sets of timestamps in Timeline Explorer can help identify timestomping (a small scripted version of this check is sketched at the end of this article).
-------------------------------------------------------------------------------------------------------------
Keep in mind that, as usual, there can be false positives when analyzing the $MFT for timestomping; analysts must understand this.
ScreenConnect example of timestomping:
The Bottom Line
Timestamp manipulation is a double-edged sword. While cloud storage services use it for legitimate reasons, hackers exploit it to hide malicious files. Security analysts have developed multiple ways to detect timestomping, but modern tools make it harder than ever to spot. So, the next time you see a file with a suspiciously old timestamp, don't just take it at face value. There might be more going on under the surface!
----------------------------------------------Dean----------------------------------------------
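Following up on the $SI/$FN comparison above, here is a minimal Python sketch over MFTECmd's $MFT CSV output. The column names (Created0x10 for $STANDARD_INFORMATION, Created0x30 for $FILE_NAME) follow MFTECmd's CSV layout; the file name mft.csv is hypothetical, and the timestamp format assumed in the comment should be checked against your own export:

import csv
from datetime import datetime

def parse(ts):
    ts = (ts or '').strip()
    try:
        # MFTECmd timestamps look like 2021-01-01 12:00:00.0000000; keep 6 fractional digits
        return datetime.strptime(ts[:26], '%Y-%m-%d %H:%M:%S.%f')
    except ValueError:
        return None

with open('mft.csv', newline='', encoding='utf-8') as f:
    for row in csv.DictReader(f):
        si = parse(row.get('Created0x10'))
        fn = parse(row.get('Created0x30'))
        if not si or not fn:
            continue
        if si < fn:
            print('$SI earlier than $FN:', row.get('FileName'))   # classic timestomp pattern
        elif si.microsecond == 0:
            print('Zeroed sub-seconds:', row.get('FileName'))     # crude timestomping tools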
- Understanding NTFS Metadata(Entries) and How It Can Help in Investigations
When dealing with NTFS (New Technology File System), one of the most crucial components to understand is the Master File Table (MFT). Think of it as the backbone of the file system—it stores metadata for every file and folder, keeping track of things like timestamps, ownership, and even deleted files.
Allocated vs. Unallocated Metadata Entries
Just like storage clusters, metadata entries in the MFT can either be allocated (actively in use) or unallocated (no longer assigned to a file). If a metadata entry is unallocated, it falls into one of two categories:
It has never been used before (essentially empty).
It was used in the past, meaning it still contains traces of a deleted file or directory.
This is where forensic investigations get interesting. If an unallocated metadata entry still holds data about a deleted file, we can recover information like filenames, timestamps, and ownership details. In some cases, we may even be able to fully recover the deleted file—provided its storage clusters haven't been overwritten yet.
How Metadata Entries Are Assigned
MFT entries are typically assigned sequentially. This means that when new files are created rapidly, their metadata records tend to be grouped together in numerical order. Let's say a malicious program named "mimikatz.exe" runs and extracts several resource files into the System32 directory. Because all these files are created in quick succession, their metadata entries will be next to each other in the MFT. A similar thing happens when another malicious executable, "svchost.exe", runs and drops a secondary payload ("a.exe"). This action triggers the creation of prefetch files, and since they're created almost instantly, their MFT entries are also close together. This pattern helps forensic analysts track down related files during an investigation.
The Hidden Clues in MFT Clustering
While this clustering pattern isn't guaranteed in every case, it's common enough that it can serve as a backup timestamp system. Even if a hacker tries to manipulate file timestamps (a technique called timestomping), looking at the MFT sequence can reveal when files were actually created. This makes it a valuable tool for forensic analysts (a small sketch of this check appears at the end of this article).
NTFS attribute types:
0x10 $STANDARD_INFORMATION
0x20 $ATTRIBUTE_LIST
0x30 $FILE_NAME
0x40 $OBJECT_ID
0x50 $SECURITY_DESCRIPTOR
0x60 $VOLUME_NAME
0x70 $VOLUME_INFORMATION
0x80 $DATA
0x90 $INDEX_ROOT
0xA0 $INDEX_ALLOCATION
0xB0 $BITMAP
0xC0 $REPARSE_POINT
0xD0 $EA_INFORMATION
0xE0 $EA
0xF0 (unused)
0x100 $LOGGED_UTILITY_STREAM
Breaking Down the MFT Structure
Every file, folder, and even the volume itself has an entry in the MFT. Typically, each entry is 1024 bytes in size and contains various attributes that describe the file. Here are some of the most commonly used attributes:
$STANDARD_INFORMATION (0x10) – Stores general details like file creation, modification, and access timestamps.
$FILE_NAME (0x30) – Contains the filename and another set of timestamps.
$DATA (0x80) – Holds the actual file content (for small files) or a pointer to where the data is stored.
$INDEX_ROOT (0x90) & $INDEX_ALLOCATION (0xA0) – Used for directories to manage file listings.
$BITMAP (0xB0) – Tracks allocation status (used, for example, by the MFT and directory indexes).
Timestamps and Their Forensic Importance
NTFS records multiple sets of timestamps, and they don't always update the same way. Two of the most important timestamp attributes are:
$STANDARD_INFORMATION timestamps – These are affected by actions like copying, modifying, or moving a file.
$FILE_NAME timestamps – These remain more stable and can serve as a secondary reference.
Because these two timestamp sets don't always update together, analysts can spot inconsistencies that reveal timestomping attempts. For instance, if a file's $STANDARD_INFORMATION creation time differs from its $FILE_NAME creation time, it could mean that someone tampered with the timestamps.
Real-World Challenges in Analyzing NTFS Metadata
While these timestamp rules are generally reliable, they aren't foolproof. Changes in Windows versions, different file operations, and even tools like the Windows Subsystem for Linux (WSL) can alter how timestamps behave. For example:
In Windows 10 v1803 and later, the "last access" timestamp may be re-enabled under certain conditions.
The Windows Subsystem for Linux (WSL) updates timestamps differently than the standard Windows shell.
Final Thoughts
Analyzing NTFS metadata can unlock a wealth of information, helping forensic investigators reconstruct file activity even after deletion or manipulation. Understanding sequential MFT allocation, timestomping detection, and the role of multiple timestamps is essential for building a strong case in digital forensics. By looking beyond standard timestamps and diving into the metadata, analysts can uncover hidden traces of activity—providing crucial evidence in cybersecurity investigations.
----------------------------------------Dean---------------------------------------------
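As promised in the clustering section above, here is a small sketch of the entry-number check. It assumes MFTECmd's $MFT CSV output (EntryNumber, ParentPath, and FileName columns; mft.csv and the ParentPath format are assumptions to verify against your export) and simply sorts one directory's files by MFT entry number so outliers stand out:

import csv

target = r'.\Windows\System32'      # ParentPath as MFTECmd records it (assumed format)
rows = []
with open('mft.csv', newline='', encoding='utf-8') as f:
    for row in csv.DictReader(f):
        if row.get('ParentPath', '').lower() == target.lower():
            rows.append((int(row['EntryNumber']), row['FileName']))

# Files dropped together tend to get adjacent entry numbers; a much larger entry
# number among otherwise old system files suggests a later (possibly backdated) addition.
for entry, name in sorted(rows):
    print(entry, name)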
- Understanding NTFS File System Metadata and System Files
File systems store almost all data in files, but certain special files, collectively known as metadata structures, store essential information about other files and directories. These structures track attributes such as timestamps (created, modified, and accessed), permissions, ownership, file size, and pointers to file locations. Different file systems use unique mechanisms to record the clusters allocated to a file. For example:
NTFS (New Technology File System) employs a structure called a "data run" to manage file clusters.
FAT (File Allocation Table) maintains a "chain" of clusters.
The Master File Table (MFT)
NTFS revolves around the Master File Table (MFT), a highly structured database storing MFT entries (or MFT records) for every file and folder on a volume. These entries contain vital metadata, either storing the data directly (for small files) or pointing to the clusters where the actual data resides. For files larger than approximately 600 bytes, data is stored in clusters outside the MFT, making them non-resident files. Each NTFS volume has a hidden file called $MFT, which consolidates all MFT entries.
NTFS also uses another hidden file, $Bitmap, to track cluster allocation. This file maintains a bit for each cluster, indicating whether it is allocated (1) or unallocated (0). Fragmentation occurs when file clusters are non-contiguous, though Windows generally optimizes file storage to minimize fragmentation.
The MFT is the metadata catalog for NTFS.
Key NTFS System Files
Besides $MFT and $Bitmap, NTFS relies on several other system files, most of which are hidden and start with a $ sign. The first 24 MFT entries are reserved, with the first 12 assigned to these system files:
$MFT (entry 0) – Stores the Master File Table, which tracks all files and directories.
$MFTMIRR (entry 1) – A backup of the primary MFT to ensure recoverability.
$LOGFILE (entry 2) – Contains transactional logs to maintain NTFS integrity in case of system crashes.
$VOLUME (entry 3) – Stores volume information, including the volume name and NTFS version.
$ATTRDEF (entry 4) – Defines NTFS attributes, detailing metadata structure.
"." (entry 5) – The root directory of the NTFS volume.
$BITMAP (entry 6) – Tracks allocated and unallocated clusters on the volume.
$BOOT (entry 7) – Stores boot sector information, enabling normal file I/O operations.
$BADCLUS (entry 8) – Marks physically damaged clusters to prevent data storage in unreliable locations.
$SECURE (entry 9) – Stores file security details, including ownership and access permissions.
$UPCASE (entry 10) – Contains Unicode character mappings for case-insensitive file sorting.
$EXTEND (entry 11) – Holds additional system files introduced in newer NTFS versions.
Extended NTFS System Files
Beyond the first 12 reserved system files, NTFS also includes several additional files under $EXTEND:
$EXTEND\$ObjId – Tracks object IDs, allowing file tracking despite renaming or movement.
$EXTEND\$Quota – Manages user disk space quotas.
$EXTEND\$Reparse – Stores reparse points, mainly used for symbolic links.
$EXTEND\$UsnJrnl – Maintains the Update Sequence Number (USN) Journal, recording all file changes.
Conclusion
NTFS is a powerful file system with a robust metadata structure that ensures efficient file management and system integrity. Key system files like $MFT, $Bitmap, $LogFile, and $UsnJrnl play crucial roles in tracking files, managing disk space, and ensuring recoverability in case of crashes.
Understanding these NTFS components is vital for forensic analysts, system administrators, and cybersecurity professionals who need to investigate file system activities or recover lost data.
------------------------------------------------Dean--------------------------------------------------
- NTFS: More Than Just a Filesystem
Updated on 17 Feb 2025
When it comes to filesystems, NTFS (New Technology File System) is like the Swiss Army knife of Windows storage. It's packed with features, built for reliability, and miles ahead of the old FAT (File Allocation Table) system. But let's be real—most people don't even use half of what NTFS offers. Some of its capabilities are mainly useful in enterprise environments, while others can be game-changers even for regular users. Let's break it down in a way that actually makes sense.
NTFS: The Highlights
1. Built-in Crash Recovery (Journaling)
Ever had your system crash in the middle of saving a file? NTFS has your back. It keeps a log (also called a journal) of changes to the filesystem so it can recover from crashes and prevent data corruption. This is a big deal, especially compared to older filesystems where a sudden shutdown could leave your data in shambles.
2. Tracks File Changes with the USN Journal
NTFS has a feature called the USN (Update Sequence Number) Journal, which keeps track of every file change. This is super useful for antivirus software and backup tools because they don't have to scan everything—they just check what's changed. That means faster scans and backups.
3. Hard Links & Soft Links (File Shortcuts on Steroids)
NTFS supports both hard links and soft links:
A hard link makes it look like a file exists in multiple places, but it's actually just one file with multiple names.
A soft link (or symbolic link) is more like a shortcut—clicking it opens the original file.
This is useful for organizing files without creating duplicate copies (a quick command-line demo appears at the end of this article).
4. Stronger Security (But Not Hacker-Proof)
NTFS has built-in security features that let administrators control who can access which files. It's great for keeping prying eyes out—until someone boots into Linux or uses a forensic tool to bypass those restrictions. (But that's a topic for another day.)
5. Disk Quotas: No More Hoarding!
Ever shared a computer with someone who fills up all the storage with movies and games? NTFS allows admins to set quotas, limiting how much disk space each user can use. Once they hit their limit, they can't store any more data until they free up space.
6. Reparse Points: Making Magic Happen
This sounds complicated, but it's really cool. NTFS lets the system interact with files in creative ways using something called reparse points. This is how Windows does things like soft links, volume mount points, and single-instance storage (which we'll talk about in a second). Developers can even create their own reparse points for custom file behavior.
7. Object IDs: Never Lose a File Again
Have you ever renamed or moved a file and then had programs freak out because they can't find it? NTFS assigns Object IDs to certain files, allowing Windows to track them no matter where they go. So if a shortcut breaks, the system might still be able to find the file.
8. File-Level Encryption & Compression
Encryption: NTFS lets you encrypt individual files and folders so that only you can open them. This happens in the background without you having to do anything special.
Compression: If you're running low on space, NTFS can automatically compress files to save room. Again, this happens behind the scenes without you noticing a difference.
9. Volume Shadow Copies: Your Undo Button for Files
Ever made changes to a file, hit save, and immediately regretted it? NTFS keeps Volume Shadow Copies, which are basically automatic backups of your files.
If configured properly, you can restore previous versions of files without needing an external backup.
10. Alternate Data Streams: Hidden File Tricks
NTFS lets files have extra hidden data attached to them. For example, when you download something from the internet, Windows tags it so it can warn you before running it. Unfortunately, hackers also love this feature because they can hide malware inside alternate data streams. So, cool feature—but also a bit risky if misused.
11. Mounting Drives as Folders
Instead of having a bunch of drive letters like C: and D:, NTFS lets you mount a second drive inside a folder on another drive. This helps keep things organized, especially in server environments where multiple drives are used.
12. Single Instance Storage: Saving Space on Large Servers
Let's say you work at a company where everyone saves the same massive video file on the shared drive. Instead of keeping multiple copies, NTFS can store one copy and create references (soft links) for everyone else, saving tons of disk space.
Final Thoughts
NTFS is packed with features that most people don't even realize exist. While some of these are mainly useful for IT admins and businesses, others—like file recovery, security controls, and file compression—are things regular users can take advantage of every day. Next time you're managing your files, just remember: NTFS is doing a lot more under the hood than you might think!
------------------------------------------------Dean----------------------------------------------------
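As mentioned in the links section above, a few of these features are easy to try yourself from an elevated Command Prompt (the C:\demo paths are hypothetical):

rem hard link: one file, two names
mklink /H C:\demo\hardlink.txt C:\demo\original.txt
rem symbolic (soft) link: a shortcut-like pointer to the original
mklink C:\demo\softlink.txt C:\demo\original.txt
rem write an alternate data stream, then list streams with dir /R
echo secret note > C:\demo\original.txt:hidden.txt
dir /R C:\demo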
- Mastering Timeline Analysis: A Practical Guide for Digital Forensics: (Log2timeline)
Introduction
Timeline analysis is a cornerstone of digital forensics, allowing investigators to reconstruct events leading up to and following an incident. When working with massive amounts of forensic data, such as a super timeline generated by Plaso, the key challenge is making sense of thousands—or even millions—of events.
The Power of Super Timelines
A super timeline consolidates data from multiple sources, including file system metadata, registry changes, event logs, and web history. After parsing data with log2timeline, the tool psort helps filter and organize this data into meaningful insights (example commands appear at the end of this article). However, once the timeline is loaded into a tool like Timeline Explorer, the sheer volume of entries can be overwhelming. The goal is not to analyze every single row but to apply strategic filtering techniques to extract actionable intelligence. This is where pivot points, filtering, and visualization become crucial.
Understanding the Core Fields in Timeline Analysis
When working with a super timeline, you'll encounter multiple fields. Here are some key columns to focus on:
Date & Time – The timestamp of the event, in MM/DD/YYYY and HH:MM:SS format.
Timezone – Helps standardize timestamps across different system logs.
MACB – Indicates whether the event modified (M), accessed (A), changed (C), or created (B) the item.
Source & Source Type – Identifies the origin of the artifact, such as registry keys (REG), web history (WEBHIST), or log files (LOG).
Event Type – Describes the nature of the event, e.g., file creation, process execution, or a website visit.
User & Hostname – Useful when investigating multi-user systems.
Filename & Path – Identifies where the file resides in the system.
Notes & Extra Fields – May contain additional insights depending on the data source.
Filtering and Data Reduction: The Key to Efficiency
With thousands of rows to sift through, filtering is your best friend. Here's how to break down the data efficiently:
1. Start with the Big Picture
Before zooming into specifics, look at broad trends. For example:
What are the peak activity hours?
Are there gaps in timestamps that indicate potential log tampering?
2. Use Color Coding and Sorting
Tools like Timeline Explorer automatically highlight different types of events (e.g., executed programs in red, file opens in green, and USB device activity in blue). Use this to your advantage to focus on suspicious patterns.
3. Leverage Advanced Search Techniques
Use CTRL-F for quick searches.
Use wildcards like % to find variations of keywords.
Apply column filters to hide non-essential data and zoom in on specific actions.
4. Pivot on Key Artifacts
Instead of getting lost in a sea of data, use key artifacts to guide your analysis:
RDP Sessions: Look at Windows event logs for suspicious remote access.
USB Activity: Filter by removable media insertion events to track external device usage.
Process Execution: Investigate software launches to detect malware or unauthorized tools.
5. Export and Annotate
Tag critical findings and export them for reports. Timeline Explorer allows tagging rows, which helps in organizing evidence for presentations or case documentation.
Beyond Spreadsheets: The Role of Specialized Tools
While CSV-based analysis is a good starting point, dedicated tools like Timeline Explorer offer significant advantages:
Multi-tab support: Analyze multiple timelines simultaneously.
Detailed Views: Double-click any row for a structured breakdown of event details.
Pre-set Layouts: Timeline Explorer provides optimized column layouts for different types of forensic investigations.
Pro Tips for Your First Timeline Analysis
Minimize Distractions – Hide unnecessary columns to maximize screen space.
Stay Organized – Label key findings and use tags to revisit them easily.
Use Comparative Analysis – If investigating multiple systems, compare hostnames and user activity.
Automate Where Possible – Scripts can help extract high-priority data points quickly.
Conclusion
Timeline analysis is an incredibly powerful forensic technique, but its effectiveness depends on how well you filter, categorize, and interpret the data. By mastering tools like log2timeline, psort, and Timeline Explorer, you can efficiently reconstruct digital events and uncover critical evidence. As you gain experience, you'll develop personal best practices and preferred filtering methods. The key is to approach each case systematically, focusing on high-value artifacts while avoiding data overload. Happy hunting!
------------------------------------------Dean-----------------------------------------------------------
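As referenced above, generating a super timeline with Plaso and reducing it with psort typically looks like the following (image and output names are hypothetical; log2timeline.py and psort.py are Plaso's standard entry points, and l2tcsv is one of psort's documented output formats):

log2timeline.py --storage-file timeline.plaso evidence.E01
psort.py -o l2tcsv -w supertimeline.csv timeline.plaso

psort also accepts event filters (for example, restricting output to a date range), so the CSV you hand to Timeline Explorer is already trimmed to the window of interest.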
- Understanding Filesystem Timestamps: A Practical Guide for Investigators
In the digital forensics world, understanding how timestamps work is crucial. Modern operating systems, with their complexity, make timestamp analysis both fascinating and challenging. Whether you're tracking file modifications, uncovering malware activity, or investigating lateral movement, timestamps serve as valuable clues.
How Timestamps Can Change Unexpectedly
Files don't always follow the expected timestamp update rules. Various software and system activities can modify timestamps, sometimes in ways that obscure forensic evidence. Here are some common offenders:
Microsoft Office Applications: These can update access times even when registry settings disable such changes.
Anti-Forensic & Malware Tools: Attackers use file system APIs to modify timestamps, making malicious files blend in.
Archiving Software: When extracting files from a ZIP or RAR archive, the modification time often reflects the originally archived date rather than when the file was actually unzipped.
Security Software & AV Scans: Some antivirus solutions update access timestamps during routine scans, making forensic analysis trickier.
Key Takeaway: Timestamps should never be interpreted in isolation. Always correlate with other evidence, such as logs and system events, to understand why a timestamp changed.
Timestamps Over the Network: A Hidden Trail
Did you know timestamps follow the same rules even when files are transferred over a network? This has major implications for forensic investigations.
Lateral Movement and Timestamps
When an attacker moves files across systems using SMB (Server Message Block), the modification time of the file remains the same, while a new creation time is assigned. This tells us two things:
The modification time predates the creation time—indicating a copy operation.
The creation timestamp on the target system is the exact moment the file was transferred.
Why This Matters
Pivot Points in Investigations: The creation time can serve as a reference to correlate with logs and execution events.
Detecting Lateral Movement: Attackers often use net use, WMI, or PsExec to copy and execute malware remotely. SMB traffic analysis (e.g., PCAP files) can reveal timestamps matching those in the filesystem.
Registry Clues: The MountPoints2 key in NTUSER.DAT can help identify locally and remotely mounted volumes, shedding light on attacker activity.
Key Takeaway: Identifying files where the modification time predates the creation time can uncover unauthorized file transfers and lateral movement techniques.
Deciphering Timeline Analysis: The "MACB" Model
When analyzing a timeline, you'll encounter different timestamp types represented by the "MACB" notation:
M – Modified: Content of the file changed.
A – Accessed: The file was read or executed.
C – Metadata Changed: File attributes or permissions were altered.
B – Birth: The file's creation time.
Example: Understanding a Timeline Entry
Let's say you analyze C:\akash.exe and see these entries:
2025-02-17 16:20:37 m.c. C:\akash.exe
2025-02-17 16:25:12 .a.b C:\akash.exe
What This Means:
The first line (m.c.) shows that modification and metadata change occurred at 16:20:37.
The second line (.a.b) tells us the file was accessed and created (copied) at 16:25:12.
Conclusion? The file landed on this system at 16:25:12, yet its content was last modified at 16:20:37. The modification predates the creation, confirming the file existed elsewhere before being copied to the target machine.
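A tiny helper makes the MACB notation above explicit (a minimal sketch; the four flag positions follow the M, A, C, B order used in these entries):

def decode_macb(flags):
    # flags is a four-character string such as 'm.c.' or '.a.b'
    names = ['modified', 'accessed', 'metadata changed', 'born (created)']
    return [name for ch, name in zip(flags, names) if ch != '.']

print(decode_macb('m.c.'))   # ['modified', 'metadata changed']
print(decode_macb('.a.b'))   # ['accessed', 'born (created)']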
Common Timestamp Combinations
m.cb – Modified, metadata changed, born (created)
.a.. – Accessed only
mac. – Modified, accessed, metadata changed
Key Takeaway: Timeline analysis isn't just about reading timestamps—it's about understanding why those timestamps exist and what they reveal about past activities.
Challenges in Timestamp Forensics
Overwritten Evidence: Timestamps get updated with new modifications, erasing past data. You only see the latest modification, not the full history.
Time Skew Issues: If a system's clock was incorrect or tampered with, timestamps could be misleading.
File System Differences: NTFS timestamps differ from FAT32, ext4, and other filesystems, so always consider the OS and format.
Final Thoughts: The Investigator's Approach
To master timestamp forensics, you need more than just theoretical knowledge—you need an investigative mindset.
Correlate with Logs & Events: Match file timestamps with Windows Event Logs, Sysmon, and execution artifacts.
Leverage Registry Artifacts: MountPoints2, shellbags, and recent file lists provide extra context.
Test Your Hypotheses: If something doesn't add up, replicate it in a controlled environment.
By understanding how timestamps behave—and how they can be manipulated—you can uncover hidden traces left by attackers. Keep practicing, keep investigating, and timestamps will become one of your most valuable forensic tools.
------------------------------------------------Dean-----------------------------------------------------
🔍 Want to Learn More? Explore forensic tools like Plaso, Timesketch, and Velociraptor to take your timeline analysis skills to the next level!
Velociraptor: https://www.cyberengage.org/courses-1/mastering-velociraptor%3A-a-comprehensive-guide-to-incident-response-and-digital-forensics
Plaso: https://www.cyberengage.org/post/a-deep-dive-into-plaso-log2timeline-forensic-tools
https://www.cyberengage.org/post/running-plaso-log2timeline-on-windows
- Understanding Filesystem Timelines in Digital Forensics
Updated on 17 Feb 2025
When it comes to digital forensics, one of the most valuable tools in an investigator's arsenal is the filesystem timeline. This technique allows forensic analysts to reconstruct events by examining file metadata, helping to determine when files were created, modified, accessed, or deleted.
What is a Filesystem Timeline?
A filesystem timeline is a chronological record of file and directory activities within a given storage volume. It includes both allocated and unallocated metadata structures, which means it can provide insights into deleted or orphaned files as well. Different filesystems store timestamps in unique ways, but most record four essential time values:
M (Modification Time): When the file's content was last changed.
A (Access Time): The last time the file was opened or accessed.
C (Change Time): When the metadata of the file (like permissions, ownership, or name) was altered.
B (Birth Time or Creation Time): When the file was initially created on the system.
Supported Filesystems for Timeline Analysis
Modern forensic tools can parse timelines from various filesystem types, including:
NTFS (Windows)
FAT12/16/32 (older Windows systems, external storage devices)
EXT2/3/4 (Linux)
ISO9660 (CD/DVD media)
HFS+ (Mac systems)
UFS1 & UFS2 (Unix-based systems)
NTFS Timestamps – The Gold Standard in Windows Forensics
The NTFS filesystem, used in most Windows environments, maintains four key timestamps (MACB). However, two of them often confuse beginners:
Change Time (C): Updated when a file is renamed, permissions change, or ownership is modified.
Access Time (A): Historically unreliable, as Windows has altered how frequently it updates access times, even delaying updates by up to an hour or disabling them altogether in some versions.
For practical forensic work, focusing on the Modification (M) and Creation (B) times is usually the best approach, as they are more reliable indicators of file activity.
The Importance of Time Formats
One of the most crucial factors in forensic timeline analysis is understanding how different filesystems store timestamps:
NTFS timestamps are stored in UTC, meaning they remain consistent regardless of time zone changes or daylight savings.
FAT timestamps use local time, which can lead to inconsistencies when analyzing files across different locations.
Additionally, NTFS uses a high-resolution 64-bit FILETIME structure, which counts time in 100-nanosecond intervals since January 1, 1601 (UTC). In contrast, UNIX systems count seconds since January 1, 1970.
How Actions Affect Timestamps
Different file actions impact timestamps in various ways. Here are some key forensic takeaways:
File Creation: All four timestamps (MACB) are set at the time of creation.
File Modification: Updates the M (modification), A (access), and C (metadata change) timestamps.
File Rename/Move (on the same volume): Only the C timestamp is updated.
File Deletion: No timestamps are updated (Windows doesn't maintain a deletion timestamp).
File Copying: The copied file retains the M timestamp from the original but receives a new B (creation) timestamp, making it possible to detect copied files by spotting instances where the modification date is older than the creation date.
Command Line vs. GUI Moves: Interestingly, moving a file via the command line can produce different timestamp behaviors compared to using drag-and-drop in the Windows GUI.
The Challenges of Windows Version Differences
Different Windows versions handle timestamps in slightly different ways.
For example:
Windows Vista disabled access time updates (later re-enabled in Windows 10 and 11).
Windows 10 and 11 timestamp behaviors are largely similar, but forensic experts should always test assumptions on a similar system before drawing firm conclusions.
Practical Takeaways for Investigators
Prioritize M and B timestamps. They are the most consistent and useful in tracking file activity.
Be cautious with A and C timestamps. These can be misleading due to system behaviors and version differences.
Recognize copied files. If a file's modified date is older than its creation date, it was likely copied from another source (a small script after this article automates the check).
Validate your findings. If timestamps play a crucial role in your investigation, test your hypothesis on a similar system to confirm expected behaviors.
Final Thoughts
Filesystem timelines are an incredibly powerful tool in digital forensics. Understanding how different filesystems handle timestamps, recognizing anomalies, and testing assumptions can make all the difference in an investigation.
-------------------------------------------------Dean----------------------------------------------------
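Here is the copied-file heuristic from the takeaways above as a small Python sketch for a live Windows system. The directory is hypothetical, and note that st_ctime means creation time only on Windows (on Linux it is the metadata-change time, so this check is Windows-specific):

import os

root = r'C:\Users\Public'   # hypothetical directory to sweep
for dirpath, _dirs, files in os.walk(root):
    for fname in files:
        full = os.path.join(dirpath, fname)
        try:
            st = os.stat(full)
        except OSError:
            continue
        # On Windows, st_ctime is the creation (birth) time; modified-before-created
        # is the classic signature of a file copied from elsewhere.
        if st.st_mtime < st.st_ctime:
            print(full)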
- Mastering Timeline Analysis: Unraveling Digital Events with Forensic Precision
Tracking down malicious activity in a digital environment can feel overwhelming. Modern systems generate an endless stream of logs, timestamps, and events—making it difficult to separate suspicious activity from normal operations. However, by using timeline analysis, we can cut through the noise and identify key events that may indicate an intrusion, unauthorized access, or data exfiltration.
Why Timeline Analysis Matters
Every system generates logs that track user actions, system processes, and interactions with files. Think of these as footprints in the digital sand. However, attackers often try to erase logs, modify timestamps, or use "living-off-the-land" techniques—methods that blend their activity with normal system operations. This is where timeline analysis shines—it reconstructs past events, correlates different data points, and identifies inconsistencies.
Understanding Noise vs. Signal
When conducting a timeline investigation, you must recognize that most events are unrelated. Imagine listening to multiple music genres simultaneously—jazz, rock, and classical. At first, it sounds chaotic, but with practice, you can isolate specific melodies. Similarly, timeline analysis helps us filter out background system noise and focus on user activity or intrusions. A key principle here is contextual analysis—the ability to differentiate between normal behavior and anomalies. For example, a SYSTEM process opening a web browser is suspicious, whereas a user logging in and accessing files is expected behavior.
Key Concepts in Timeline Forensics
Pivot Points: Every investigation needs a starting place, such as a suspicious file, an unusual login time, or a flagged process. Once we identify a pivot point, we work outward—analyzing what happened before and after that event.
Temporal Proximity: Events rarely happen in isolation. Looking at what occurred immediately before and after an incident helps piece together a clearer picture of what transpired.
Super Timelines vs. Targeted Timelines: Super timelines aggregate all available logs, registry changes, browser history, and system events into one massive dataset. While thorough, they can be overwhelming. Targeted timelines focus on specific artifacts, making analysis more manageable and efficient.
Tools of the Trade
Forensic analysts rely on powerful tools to extract and analyze timeline data:
Plaso (log2timeline.py): A tool for creating comprehensive super timelines by extracting data from multiple sources.
https://www.cyberengage.org/post/a-deep-dive-into-plaso-log2timeline-forensic-tools
https://www.cyberengage.org/post/running-plaso-log2timeline-on-windows
MFTECmd: Extracts filesystem metadata and analyzes timestamps for file access and modifications.
https://www.cyberengage.org/post/mftecmd-mftexplorer-a-forensic-analyst-s-guide
KAPE & Timeline Explorer: Useful for extracting logs and visualizing timeline data in an interactive format.
Aurora IR & Velociraptor: Open-source tools designed for incident response and timeline analysis.
How to Conduct Timeline Analysis
Define the Scope: Determine the timeframe of the incident to reduce data overload.
Extract Data: Use tools like Plaso, The Sleuth Kit, or KAPE to pull logs, registry modifications, browser activity, and event logs.
Filter and Organize: Remove unrelated data to highlight suspicious activity by de-duplicating logs and applying keyword filters (a small pandas sketch follows this article).
Analyze: Investigate relationships between artifacts, identify anomalies, and correlate events to reconstruct the sequence of actions.
Report Findings: Document findings clearly, highlighting key events, attack vectors, and any indicators of compromise (IOCs).
Real-World Application
Imagine investigating a suspected data breach. Your starting point (pivot) is a flagged user login at an unusual time. By analyzing system logs, registry changes, and event timestamps, you notice the user executed PowerShell scripts shortly after logging in. Further analysis reveals that they accessed confidential documents and transferred them to an external USB drive. This sequence of events confirms data exfiltration.
Final Thoughts
Timeline analysis is one of the most powerful forensic techniques available. It allows us to reconstruct events, pinpoint security incidents, and understand attacker behavior. Whether investigating malware infections, unauthorized access, or data theft, mastering timeline analysis is crucial to uncovering the truth hidden within digital artifacts. With practice and the right tools, you'll be able to navigate complex datasets and make sense of even the most chaotic digital landscapes. Keep learning, stay curious, and always look for the hidden patterns that reveal the bigger picture.
----------------------------------------------------Dean----------------------------------------------------
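As a sketch of the filter-and-organize step above, a super timeline CSV can be reduced with pandas. The column names ('date', 'desc') follow Plaso's l2tcsv layout, and the keyword is a hypothetical pivot; adjust both to your own export:

import pandas as pd

df = pd.read_csv('supertimeline.csv', dtype=str, on_bad_lines='skip')

# Keep a window of interest and drop exact duplicate rows.
# (String comparison works here only because the dates share month and year;
# parse to datetimes for anything broader.)
df = df[(df['date'] >= '01/10/2025') & (df['date'] <= '01/17/2025')].drop_duplicates()

# Pivot on a keyword, e.g. a suspicious binary, across all event descriptions.
hits = df[df['desc'].str.contains('dean.exe', case=False, na=False)]
hits.to_csv('pivot_dean_exe.csv', index=False)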
- Baseline Analysis in Memory Forensics: A Practical Guide
Introduction to Baseline Analysis in Digital Forensics

Baseline analysis is an essential technique in digital forensics and incident response, allowing analysts to efficiently identify anomalies in large datasets. At its core, baseline analysis involves comparing a suspect dataset with a "known good" dataset to detect outliers. This approach is particularly useful in memory forensics, where analysts must sift through hundreds of processes, drivers, and services to identify malicious activity.

One powerful tool that leverages baseline analysis for memory forensics is Memory Baseliner, developed by Csaba Barta. This tool integrates with Volatility 3 to streamline comparisons between a suspect memory image and a baseline memory image, helping analysts quickly filter out known good items and focus on potential threats.

------------------------------------------------------------------------------------------------------------

The Need for Baseline Analysis in Memory Forensics

Windows memory is complex, often containing over a hundred processes, each with numerous associated objects. Even seasoned professionals can struggle to pinpoint malware hidden within the sheer volume of data. A baseline memory image from a clean system allows for direct comparison, making it easier to isolate unusual artifacts.

By feeding Volatility with both a suspect memory image and a baseline image, Memory Baseliner enables forensic analysts to:

Quickly filter out known good artifacts.
Identify new or uncommon processes, drivers, and services.
Stack multiple images to determine the least frequently occurring artifacts.

This approach reduces the dataset to review, making investigations more efficient.

------------------------------------------------------------------------------------------------------------

How Memory Baseliner Works

Memory Baseliner covers four types of memory objects (processes, their loaded DLLs, drivers, and Windows services) through three analysis modes:

Processes and associated DLLs (-proc)
Drivers (-drv)
Windows Services (-svc)

To perform baseline analysis, two memory images must be provided:

Baseline Image (-b): A clean system memory dump.
Suspect Image (-i): The compromised system's memory dump.

python3 baseline.py -b baseline.raw -i suspect.raw -o output.txt

This command compares the suspect memory image against the baseline, saving results to output.txt for further analysis.

A useful option is --showknown, which outputs both known and unknown items, allowing for flexible filtering in spreadsheet tools.

Key output details include:

Process Name & Command Line
Parent Process Details
Loaded DLLs
Import Table Hashes
Known/Unknown Status (whether the item was in the baseline)
Frequency of Occurrence (baseline vs. suspect image)

These data points help analysts identify anomalies that might indicate malware presence.

------------------------------------------------------------------------------------------------------------

Stacking for Least Frequency of Occurrence Analysis

Stacking is another powerful feature of Memory Baseliner that analyzes multiple memory images to detect rare artifacts. Since malware-related items tend to be less common across systems, identifying low-frequency occurrences can highlight suspicious activity.

Stacking Example

python3 baseline.py -d memory_images_folder -procstack -o stacked_output.txt

Here, the tool scans multiple images in the memory_images_folder directory, identifying the least frequently occurring processes. By focusing on rare executables, DLLs, drivers, or services, analysts can reduce the dataset and prioritize investigative leads. However, false positives may still exist, requiring manual review. The sketch below illustrates the counting behind this technique.
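Stacking is easy to reason about once you see the counting it performs. The following is a minimal Python sketch of least-frequency-of-occurrence analysis, not Memory Baseliner's actual code; the image names and process lists are made up for illustration.

from collections import Counter

# Hypothetical process listings extracted from three memory images.
images = {
    "host1.raw": ["svchost.exe", "lsass.exe", "explorer.exe"],
    "host2.raw": ["svchost.exe", "lsass.exe", "explorer.exe"],
    "host3.raw": ["svchost.exe", "lsass.exe", "updater32.exe"],  # the odd one out
}

# Count how many images each process name appears in.
counts = Counter()
for procs in images.values():
    counts.update(set(procs))  # set() so duplicates within one image count once

# Sort ascending: the rarest items float to the top of the review queue.
for name, freq in sorted(counts.items(), key=lambda kv: kv[1]):
    print(f"{freq}/{len(images)}  {name}")

Run against a fleet of similar systems, updater32.exe (present on one host out of many) is exactly the kind of low-frequency lead this technique is designed to surface.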
------------------------------------------------------------------------------------------------------------

Speeding Up Analysis with JSON Baselines

One challenge of Memory Baseliner is its processing time. Large memory images can take up to 15 minutes to analyze. To optimize this, the tool allows users to create and reuse JSON baseline files.

JSON Baseline Usage

Create a JSON baseline:

python3 baseline.py -b baseline.raw --jsonbaseline baseline.json --savebaseline

Load the JSON baseline for faster analysis:

python3 baseline.py -i suspect.raw --jsonbaseline baseline.json --loadbaseline -o output.csv

By leveraging JSON files, analysts can bypass re-analysis of the baseline memory image, significantly speeding up the comparison process.

------------------------------------------------------------------------------------------------------------

So far we have covered the theory; now let's move on to practical execution and analysis, so you won't hit the errors I did. To make your experience smoother, here is how to install the script.

Setting Up Memory Baseliner in WSL

Before we dive in, let's set up the tool properly.

Clone the repository into your WSL environment:

git clone https://github.com/csababarta/memory-baseliner.git

Navigate to the cloned directory and move the scripts into the Volatility 3 folder:

mv memory-baseliner/*.py ~/Memorytool/volatility3/

Verify the setup by running:

python3 baseline.py

If it runs without errors, you're good to go!

-------------------------------------------------------------------------------------------------------------

Using Memory Baseliner: Practical Examples

Now that we're set up, let's start running some analysis. Because the paths below contain an apostrophe (Akash's), wrap them in double quotes so the shell parses them correctly.

Step 1: Process Baselining

To dump all processes and compare them against a baseline:

python3 baseline.py -proc --state -b "/mnt/c/Users/Akash's/Downloads/20250213/20250213.mem" -i "/mnt/c/Users/Akash's/Downloads/20250213Horizon/20250213.mem" --showknown -o "/mnt/c/Users/Akash's/Downloads/proc_all.txt"

Step 2: Driver Baselining

To dump all loaded drivers:

python3 baseline.py -drv -b "/mnt/c/Users/Akash's/Downloads/20250213/20250213.mem" -i "/mnt/c/Users/Akash's/Downloads/20250213Horizon/20250213.mem" --showknown -o "/mnt/c/Users/Akash's/Downloads/driv-all.txt"

Step 3: Service Baselining

To analyze running services:

python3 baseline.py -svc --state -b "/mnt/c/Users/Akash's/Downloads/20250213/20250213.mem" -i "/mnt/c/Users/Akash's/Downloads/20250213Horizon/20250213.mem" --showknown -o "/mnt/c/Users/Akash's/Downloads/svc-all.txt"

------------------------------------------------------------------------------------------------------------

Converting Output for Better Analysis

By default, the tool outputs pipe-separated (|) text files, which aren't ideal for analysis in tools like Timeline Explorer or Excel. To convert them to CSV:

sed 's/|/,/g' "/mnt/c/Users/Akash's/Downloads/svc-all.txt" > "/mnt/c/Users/Akash's/Downloads/svc-all.csv"

Do the same for the other files (proc_all.txt, driv-all.txt). A Python alternative that copes with embedded commas follows.
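A quick caveat on the sed approach: if a field (such as a process command line) already contains commas, blindly swapping pipes for commas produces a malformed CSV. Here is a small Python alternative using the csv module, which quotes such fields properly; the file names are placeholders.

import csv

src_path = "svc-all.txt"   # pipe-separated output from baseline.py
dst_path = "svc-all.csv"

with open(src_path, newline="", encoding="utf-8", errors="replace") as src, \
     open(dst_path, "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src, delimiter="|")
    writer = csv.writer(dst)  # quotes any field containing commas for us
    for row in reader:
        writer.writerow(row)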
------------------------------------------------------------------------------------------------------------

Key Analysis Techniques

Once you have the data, here's how to make sense of it:

Process Baselining (-proc)

This generates a lot of information. To narrow the data down to just process names, filter for .exe in the DLL NAME column; this works because the executable binary (.exe) also appears in the loaded DLL list. Combine this with PROCESS STATUS=UNKNOWN to quickly identify processes that were not present in the original baseline image.

If you want to investigate loaded DLLs, filter for DLL STATUS=UNKNOWN and focus on DLLs with the lowest frequency of occurrence in the IMAGE FoO column. If a DLL appears in many processes (i.e., has a high occurrence rate), it is less likely to be malicious.

The --cmdline option forces a comparison of the full process command line in addition to the process name. This helps detect anomalies, such as the 32-bit version of an application running even though the system typically uses the 64-bit version.

You can also compare process owners (--owner) and import hashes (--imphash). However, these comparisons might be too restrictive unless your baseline image is very similar to the suspect image.

Driver Baselining (-drv)

If your baseline image is a close match, you should see only a few new drivers added to the system. Focus on STATUS=UNKNOWN entries first. Review the PATH column to check whether any drivers are loaded from unusual locations outside the standard \Windows\System32\Drivers\ and \Windows\System32\ paths.

Import hashes (ImpHash) can often be calculated for drivers present in memory. For deeper analysis, add the --imphash comparison option to detect variations of known drivers.

Service Baselining (-svc)

The STATE column shows whether a service was running. As a first step, filter for SERVICE_RUNNING to focus on active malware.

The --state option allows you to compare service configurations. This helps detect services that were disabled in the baseline but enabled in the suspect image, a common persistence tactic used by malware. It can also reveal services that were disabled in the suspect image but should be enabled (e.g., Windows updates or security software).

Malware attempting to maintain persistence often uses SERVICE_AUTO_START; filtering for this value can help identify potential threats. Some malware executes only once and then stops; these may use different start types, such as SERVICE_DEMAND_START, and might appear as SERVICE_STOPPED. To get a complete picture, examine all UNKNOWN services, but segmenting the data in different ways can make anomalies more obvious.

Most Windows services run under built-in accounts like LOCAL SERVICE or the computer account (HOSTNAME$). Look for services running under user accounts, as these could indicate unauthorized activity.

-------------------------------------------------------------------------------------------------------------

Practical Tips for Using Memory Baseliner

Use tailored baseline images: A baseline from a similar system build reduces noise.
Filter results in Excel: Use UNKNOWN status and .exe filters to highlight suspicious processes (a scripted version follows these tips).
Leverage JSON baselines: Saves time on repeat analyses.
Validate findings with additional tools: Use Volatility's malfind or yarascan plugins for deeper malware analysis.
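If you prefer scripting the filters over Excel, here is a minimal pandas sketch of the process-baselining triage described above. The column names (DLL NAME, PROCESS STATUS, IMAGE FoO) follow the labels used in this article and are assumptions; check the header row of your own converted CSV and adjust.

import pandas as pd

# Load the converted CSV (see the conversion step earlier).
df = pd.read_csv("proc_all.csv")

# Keep rows where the loaded module is the .exe itself (i.e., a process
# name) and the process was not present in the baseline image.
procs = df[
    df["DLL NAME"].str.lower().str.endswith(".exe", na=False)
    & (df["PROCESS STATUS"] == "UNKNOWN")
]

# Rarest first: a low frequency of occurrence is the most interesting lead.
print(procs.sort_values("IMAGE FoO").head(20))

The same pattern works for the driver and service outputs; swap in the relevant status and path columns.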
------------------------------------------------------------------------------------------------------------

Conclusion

Baseline analysis is a crucial technique in memory forensics, enabling rapid identification of suspicious activity by filtering known good artifacts. Memory Baseliner simplifies this process, providing efficient comparisons between suspect and clean memory images. It is a powerful addition to any forensic analyst's toolkit: by integrating it into investigations, analysts can significantly reduce data review time and enhance their ability to detect stealthy malware infections.

---------------------------------------------Dean---------------------------------------------