
- Evidence Collection in Linux Forensics (Disk + Memory Acquisition)
Hey everyone! Today, we're going to dive into a super important topic when it comes to Linux forensics — evidence collection. We'll cover the classic tools like dd, dcfldd, and dc3dd, and also talk about modern memory acquisition methods and a very cool script called UAC. Let's get right into it!

Disk Imaging Tools: dd, dcfldd, and dc3dd

When you're doing any kind of forensic work, the first rule is: capture an exact copy of the original data. In Linux, we have some legendary tools for this — and the best part? They're super easy to use once you get the hang of it!

1. dd – The Classic One

You might think "dd" stands for something, but it doesn't officially mean anything! It's a foundational UNIX tool for copying and converting files. Almost every Linux or UNIX-like system has it installed by default — making it a go-to for forensic investigators. It's often used to create bit-by-bit images of disks (i.e., exact copies).

Example command:
dd if=/dev/sda of=/path/to/image.dd bs=4M

if = input file (your source device)
of = output file (where you want to save the image)
bs = block size (common values: 1M or 4M)

Quick Tip: If you use /dev/sda as input, you capture the entire disk, including all partitions. If you use something like /dev/sda3, you're only capturing a specific partition. You can check mounted drives using:
df -h
(lsblk is also handy; it lists all block devices, including unmounted ones.)

And when naming your images, you'll often see extensions like .dd, .raw, or .img — they're all pretty standard.

2. dcfldd – Upgraded dd for Forensics

dcfldd is basically an enhanced version of dd, built by the U.S. Department of Defense Computer Forensics Lab (cool, right?). It adds features super useful for investigators:
On-the-fly hashing (SHA256, SHA1, etc.)
Status output (you see progress!)
Splitting output into multiple smaller files

Example command:
dcfldd if=/dev/sda of=/path/to/image.dd bs=4M hash=sha256 hashwindow=1G

hash=sha256 will hash the image during acquisition.
hashwindow=1G means it creates a hash after every 1GB chunk.

3. dc3dd – The Newest and Most Advanced

dc3dd is another evolution, developed by the U.S. Department of Defense Cyber Crime Center (DC3). It's in the same family as dcfldd, with even more features:
Better logging
Drive wiping and pattern writing (if needed)
Detailed forensic reporting

Example command:
dc3dd if=/dev/sda of=/path/to/image.dd log=/path/to/logfile.txt hash=sha256 hlog=/path/to/hashlog.txt

This will:
Capture the image.
Log everything.
Hash the image and save the hash to a separate file.

Quick Summary:

Tool    | Highlight
dd      | Basic and universal
dcfldd  | Adds hashing and better status reporting
dc3dd   | Full forensic features with detailed logging

Important: Across all three tools, the if and of parameters stay the same — so once you learn one, you can easily switch to the others!

------------------------------------------------------------------------------------------------------------

Linux Memory Acquisition: Capturing the Volatile Data

Now, let's move on to memory acquisition — another critical part of forensics. Memory holds running processes, network connections, encryption keys, and a lot of other sensitive stuff that disappears if the machine is powered off.

Old School Method: In the early days, people used dd to dump memory from /dev/mem or /dev/kmem. But now, we have much better tools!

Modern Tool: LiME (Linux Memory Extractor)

LiME is specifically designed for live memory acquisition on Linux machines.
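To give you an idea of what that looks like in practice, here is a minimal LiME sketch. It assumes you've already built the kernel module for the target's exact kernel version; the module filename and output path below are illustrative:

# Load the LiME module; it writes the capture to "path" in the chosen "format"
sudo insmod ./lime-$(uname -r).ko "path=/mnt/usb/memory.lime format=lime"

# Unload the module once the capture completes
sudo rmmod lime

The format=lime option produces LiME's native format, which memory analysis tools like Volatility can read directly.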
You can find it here: 🔗 LiME GitHub Repository

It allows you to grab a memory image without shutting down the system — which is super important in real investigations.

Another Option: AVML (Acquire Volatile Memory for Linux)

Built by Microsoft, AVML is a super lightweight tool for memory captures on Linux. You can grab it here: 🔗 AVML GitHub Repository

------------------------------------------------------------------------------------------------------------

Extra Goodie: Using the UAC Script for Artifact Collection!

If you've followed my macOS forensic series, you already know about UAC (Unix-like Artifacts Collector) — and good news: UAC supports Linux too!
🔗 UAC GitHub Repository

Here's how UAC works:
Enumerates available system tools.
Loads the uac.conf configuration file.
Builds a list of artifacts to collect.
Collects data (files, hashes, timestamps).
Creates a single output archive and hashes it.
Generates a full acquisition log.

Quick How-To for UAC on Linux

First, download and unzip UAC:
tar zxvf uac.tar.gz

Inside the unzipped directory, you'll find multiple folders. The profiles folder is important — it contains YAML files that define what artifacts will be collected.

List available profiles:
./uac --profile list

Run UAC to collect everything (using the full profile):
sudo ./uac -p full /path/to/output/folder

✅ Done! Now you have a full snapshot of the system's forensic artifacts.

What's inside the output?
A bodyfile — a text file with all the filesystem metadata (useful for timeline creation).
A Live_Response folder — containing processes, network connections, user accounts, and much more.
.stderr.txt files — if any command threw an error, it's logged here.

You can easily open and analyze these outputs on Linux or even Windows (with Notepad).

Wrapping Up

Evidence collection is the foundation of any good forensic investigation. Tools like dd, dcfldd, dc3dd, LiME, AVML, and UAC make it much easier to capture, preserve, and analyze critical data. Whether you're imaging a disk or grabbing volatile memory, remember:
👉 Accuracy and proper documentation are everything in forensics!

-----------------------------------------Dean------------------------------------------------------
- Creating a Timeline for Linux Triage with fls, mactime, and Plaso (Log2Timeline)
Building a timeline during forensic investigations is super important — it helps you see what happened and when. Today, I'll walk you through two simple but powerful ways to create timelines:
Using fls + mactime
Using Plaso / Log2Timeline (psteal, log2timeline, psort)

Don't worry — I'll explain everything in a very simple way, just like we're talking casually!

--------------------------------------------------------------------------------------------------------

🛠 Method 1: Using fls and mactime for a Filesystem Timeline

First things first: make sure the tool is installed. If not, you can install it easily:
sudo apt install sleuthkit

The Sleuth Kit package gives you useful forensic tools like fls, mactime, icat, and more.

Step 1: Create a Body File with fls

Now, let's create the timeline body file:
fls -r -m "/" /mnt/c/Users/Akash's/Downloads/image.dd > /mnt/c/Users/Akash's/Downloads/timeline.body

What's happening here?
-r → Recursively walk through all directories and files.
-m "/" → Prefix each path in the output with / (the mount point of the image).
/mnt/.../image.dd → This is your disk image.

👉 Combining -r and -m "/", we tell fls: "Hey, start from root and go deep into everything inside."

Tip: Check your .body output — it should look clean and pipe-delimited (| characters). If it looks good, you're all set for the next step!

Step 2: Create a CSV Timeline with mactime

Now let's process the body file and create a readable timeline:
mactime -b /mnt/c/Users/Akash's/Downloads/timeline.body -d -y > /mnt/c/Users/Akash's/Downloads/timeline.csv

What do the options mean?
-b → Body file input.
-d → Output in comma-delimited format (for spreadsheets).
-y → Display dates in ISO 8601 format (year first).

Optional: You can also specify a different time zone with -z (sticking to UTC is generally recommended):
mactime -b file.body -d -y -z Europe/Berlin

Or even specify a date range if you want:
mactime -b timeline.body -d -y 2025-04-02..2025-04-22 > timeline.csv

Step 3: Analyze the Timeline

Use Timeline Explorer (Eric Zimmerman's free tool) to open and analyze your CSV file. It's one of the easiest ways to slice and dice timeline data visually! You can even turn on hidden columns like UID, GID, and Permissions by right-clicking and choosing "Column Chooser."

Note: Since I'm running on an ext4 filesystem, I'm able to see creation/birth times too.

👉 Important: fls gives you a filesystem timeline only (file creation, modification, access, and metadata changes).

--------------------------------------------------------------------------------------------------------

🧠 Method 2: Creating a Timeline Using Plaso (Log2Timeline)

If you want deeper timelines including event logs, browser history, and way more artifacts — use Plaso. I've already made two detailed guides on Plaso for Windows if you want to dive even deeper. Links below! 😉

Running Plaso/Log2Timeline on Windows
https://www.cyberengage.org/post/running-plaso-log2timeline-on-windows
A Deep Dive into Plaso/Log2Timeline Forensic Tools
https://www.cyberengage.org/post/a-deep-dive-into-plaso-log2timeline-forensic-tools

Anyway, let's jump into it.

Option 1: Easy Way — Using psteal.py

Let's run everything in a single command:
psteal.py --source /mnt/c/Users/Akash's/Downloads/image.dd -o dynamic -w /mnt/c/Users/Akash's/Downloads/plasotimeline.csv

What this does:
Runs log2timeline + psort automatically.
Saves output as a nicely formatted CSV (plasotimeline.csv).

You can use .vmdk virtual machine images too:
psteal.py --source /path/to/your.vmdk -o dynamic -w /path/to/output.csv

Super clean and fast!
Option 2: Manual Way (Better Control)

Want to control everything yourself? Here's how:

Step 1: Parse the Image with log2timeline.py
log2timeline.py --storage-file timeline.plaso /path/to/image.dd
timeline.plaso is the storage file that saves extracted events.

Step 2: Check Metadata with pinfo.py
pinfo.py timeline.plaso
See event counts, sources, time ranges, and other goodies inside the .plaso file.

Step 3: Create Timeline Output with psort.py
psort.py -o dynamic -w timeline.csv timeline.plaso
This command sorts the events and outputs them nicely to a CSV!

--------------------------------------------------------------------------------------------------------

💬 But wait… Why Manual Parsing?

You might ask — if psteal.py is so easy, why bother with manual steps? Here's the thing:
Manual parsing lets you use powerful filters.
You can selectively extract events, artifacts, or specific activities.
It's way more flexible for bigger/messier investigations.

🎯 Artifact Filtering with Plaso (Advanced)

Let's say you want to pull only Bash shell history. Here's how you can do that:

Step 1: Download the Artifacts Repository
From: https://github.com/ForensicArtifacts/artifacts
Inside, you'll find tons of .yaml files under the /data folder. Each YAML file defines different forensic artifacts!

Step 2: Run log2timeline with an Artifact Filter
log2timeline.py --storage-file test.plaso /path/to/image.vmdk --artifact-filters BashShellHistoryFile

👉 Tip: The names come from the name field inside those YAML definitions — so if you wonder "where did BashShellHistoryFile come from?" — now you know. 😄

Step 3: Run pinfo against the created plaso file
pinfo.py /path/to/outputfile.plaso

Step 4: Run psort against the created plaso file
psort.py -o dynamic -w /mnt/c/Users/Admin/Downloads/test.csv /mnt/c/Users/Admin/Downloads/test.plaso

----------------------------------------------------------------------------------------------------------

Using a Custom Filter File

You can also create a mini YAML filter file like this:

description: LinuxSysLogFiles
type: include
path_separator: '/'
paths:
- '/var/log/syslog*'

And then run:
log2timeline.py --storage-file test3.plaso /path/to/image.vmdk --filter-file /path/to/your_custom.yaml

Common Issues

Sometimes you may face weird errors when using artifact filters directly with the downloaded .yaml files. If that happens, create your own YAML and use --filter-file instead.

Pro Tip: Always create a full Plaso storage file first, and then filter during psort instead of during log2timeline. This gives you more flexibility later!

--------------------------------------------------------------------------------------------------------

🛠 Bonus: Narrowing Timelines with psort

You can narrow results easily after timeline creation:

Slice Around a Specific Time
psort.py -o dynamic -w timeline.csv timeline.plaso --slice 2025-04-23T22:00:00+00:00
Default slice = 5 minutes before and after.

Date Range Filter
psort.py -o dynamic -w timeline2.csv timeline.plaso "date > '2025-04-01 23:59:59' and date < '2025-04-23 00:00:00'"
This will output events only within your specified date window.

--------------------------------------------------------------------------------------------------------

🚀 Conclusion

For simple filesystem timelines → fls + mactime works great.
For full system artifact timelines → Plaso/Log2Timeline is the best.
Recommendation: Always create a full .plaso file, then slice and filter later using psort.py.
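To make that recommendation concrete, here is a sketch of the parse-once, filter-many workflow (paths are illustrative, and the filters reuse the exact syntax shown above):

# 1. Parse everything once, with no filtering at collection time
log2timeline.py --storage-file full.plaso /path/to/image.dd

# 2. Produce as many filtered views as you need from the same storage file
psort.py -o dynamic -w april.csv full.plaso "date > '2025-04-01 00:00:00' and date < '2025-04-23 00:00:00'"
psort.py -o dynamic -w incident_window.csv full.plaso --slice 2025-04-23T22:00:00+00:00

One expensive log2timeline run, many cheap psort runs. That's the whole payoff.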
Thanks for sticking with me through this article! See you in the next one — stay curious and keep exploring!
----------------------------------------Dean---------------------------------------------------------
- Digital Forensics (Part 2): The Importance of Rapid Triage Collection - KAPE vs FTK Imager
In the fast-evolving world of digital forensics, time is critical. Traditional methods of acquiring full disk images are becoming increasingly impractical due to the sheer size of modern storage devices. The reality is that 99% of the necessary evidence typically exists within just 1% of the acquired data. Instead of waiting hours for a full disk image, focusing on this crucial 1% can significantly speed up investigations.

Why Rapid Triage Collection Matters
Saves Time – Collecting only essential forensic artifacts allows investigators to start analyzing data sooner.
Reduces Storage Needs – Full disk images consume massive amounts of storage, whereas triage collection focuses only on critical data.
Enhances Efficiency – Investigators can prioritize relevant information and streamline the investigative process.

Key Artifacts to Collect During Triage
To ensure effective triage, forensic analysts should focus on specific files and artifacts that provide the most insight. These include:

File System & Activity Logs
$MFT (Master File Table) – Contains metadata about every file and folder on the system.
$LogFile & USN Journal – Record changes such as file creation, modification, and deletion.

Windows Registry Hives
SAM – Stores user account information.
SYSTEM – Contains system configuration details.
SOFTWARE – Holds installed software and system settings.
DEFAULT, NTUSER.DAT & USRCLASS.DAT – User-specific settings and configurations.
AMCACHE.HVE – Tracks executed programs.

System & User Activity Logs
Event Logs (.evtx) – Track system and user activities.
Other Log Files – Includes setup logs, firewall logs, and web server logs.
Prefetch Files (.pf) – Evidence of executed programs, including access history.
Shortcut Files (.lnk) – Indicate files and directories opened by the user.
Jump Lists – Collections of shortcut files that reveal frequently accessed files and directories.

Check out the article below; it contains detailed analysis of almost all these artifacts:
https://www.cyberengage.org/courses-1/windows-forensic-artifacts

User-Specific Data
Recent Folder & Subfolders – Stores recent document access history.
AppData Folder – Contains browsing history, cookies, and cached files.
Pagefile.sys & Hiberfil.sys – Can contain remnants of past user activity stored in virtual memory.

Specialized Artifacts for Advanced Investigations
Certain artifacts provide deeper insight into a user's actions and past activity, even if data has been deleted.

Volume Shadow Copies
What It Is: A point-in-time backup of an NTFS volume.
Why It's Useful: Helps recover deleted files, registry hives, and past system states.
Location: C:\System Volume Information
Recommended Tools: KAPE, VSCMount, Shadow Explorer.

ShellBags
What It Is: Tracks user navigation through directories, including removable storage and remote servers.
Why It's Useful: Helps reconstruct user activity even if the files/folders no longer exist.
Location: Registry keys within NTUSER.DAT and USRCLASS.DAT.
Recommended Tools: ShellBags Explorer, SBECmd.

Triage Tools for Efficient Collection
Forensic professionals can utilize powerful tools to automate and streamline triage collection:
FTK Imager – Extracts files by extension.
LECmd – Parses .lnk files.
JLECmd, JumpList Explorer – Extract jump list data.
PECmd – Analyzes prefetch files.
KAPE – Rapid collection of forensic artifacts (see the example run right after this list).
Shadow Explorer – Recovers files from volume shadow copies.
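To give you a feel for what "rapid collection" looks like in practice, here is a hedged sketch of a typical KAPE triage run. The target name and paths are illustrative, and the targets actually available depend on your KAPE installation:

# Collect a standard triage target set from the live C: drive to an external disk
# --vss additionally processes volume shadow copies, if present
kape.exe --tsource C: --tdest E:\triage --target !SANS_Triage --vss

A compound target like this pulls the $MFT, registry hives, event logs, prefetch, LNK files, and jump lists in minutes, which is exactly the crucial ~1% discussed above.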
-------------------------------------------------------------------------------------------------------------

When dealing with digital evidence, one of the most critical steps is proper acquisition. This ensures that investigators can analyze data without tampering with the original evidence. Two powerful tools for forensic acquisition are FTK Imager and KAPE. Each serves a different purpose, and understanding their strengths helps streamline forensic investigations.

Why Imaging Matters in Digital Forensics
In digital forensics, it's generally not advisable to work directly on original evidence. Instead, investigators create forensic images—bit-by-bit copies of a device—to analyze while preserving the integrity of the original data. However, imaging takes time, and sometimes investigators must balance speed with thoroughness. This is where triaging becomes an essential technique.

Acquisition Using FTK Imager
FTK Imager is a well-known forensic imaging tool used to create full disk images, memory dumps, and file captures while maintaining forensic integrity. The step-by-step guide for FTK Imager-based imaging is available in a detailed PDF document on my website. You can download it from the Resume section under the document name "FTK Imager Based Imaging".

Acquisition Using KAPE
KAPE (Kroll Artifact Parser and Extractor) is a rapid forensic triage tool that can collect targeted artifacts from a live system or a forensic image. Unlike FTK Imager, which captures everything, KAPE focuses on extracting critical forensic artifacts such as:
Event logs
Registry hives
Browser history
User activity logs
https://www.cyberengage.org/courses-1/kape-unleashed%3A-harnessing-power-in-incident-response

KAPE is also useful for remote forensic collection, making it highly efficient for Incident Response (IR) cases. You can find my complete article on KAPE acquisition, analysis, and IR cases on my website, which includes detailed screenshots.

Triage vs. Full Imaging: When to Use What?
A key forensic question is whether to triage first or perform a full disk image before analysis. The decision depends on time constraints and urgency.
If time is not an issue, creating a full forensic image first is the best practice. This ensures every piece of data is preserved for in-depth analysis.
If speed is critical, such as in incident response cases, triaging first with KAPE allows investigators to gather key forensic artifacts quickly.
A balanced approach involves first running KAPE for rapid data collection and then starting full disk imaging with FTK Imager. This way, analysis can begin while the full image is still being created.

How to Balance Speed and Completeness?
Use a write blocker when dealing with original media to prevent accidental modifications.
Run KAPE first to quickly extract key forensic data (~1% of the total data that is most relevant to investigations).
Start full imaging with FTK Imager while simultaneously analyzing the KAPE-collected data.
By the time imaging is complete, investigators may already have leads from the extracted artifacts. This win-win approach ensures rapid initial analysis while maintaining forensic integrity.

Final Thoughts
Both FTK Imager and KAPE are invaluable forensic tools. FTK Imager provides a complete forensic image, while KAPE allows for fast triage and targeted artifact collection. The right tool depends on the specific case, but combining both strategically helps investigators work efficiently without compromising forensic standards.
For a detailed walkthrough of these processes, check out my full documentation on FTK Imager and KAPE on my website! ----------------------------------------------Dean--------------------------------------------
- Disk Imaging (Part 1): Memory Acquisition & Encryption Checking
Imagine you need to make a perfect copy of everything on a hard drive—not just the files you see, but also hidden system data, partitions, and even deleted files that might still be recoverable. This is where disk imaging comes in! Whether you're working in digital forensics, IT, or just want to back up your system, disk imaging is important.

What is Disk Imaging?
Disk imaging is the process of creating an exact, bit-for-bit copy of a storage device (like a hard drive or SSD) and saving it as a file. Think of it as taking a snapshot of your entire drive, capturing everything from active files to hidden system data. This is different from just copying files, as it preserves the structure and details of the original disk.

However, in some cases, creating an exact duplicate isn't always possible.
SSDs (Solid-State Drives) may not allow precise duplication due to how they handle data storage.
Bad sectors (damaged parts of a hard drive) might prevent some data from being copied, leaving gaps in the image file.

How Does Disk Imaging Work?
The disk imaging process involves three key components:
The Source Drive – This is the drive you want to copy.
A Write Blocker – A tool that prevents any accidental changes to the source drive while imaging.
Imaging Software – The program that reads the source drive and creates an image file.

Choosing the Right Image Format
When creating a disk image, you'll typically save it in one of two formats:
E01 (Expert Witness Format) – The most popular choice because it includes compression, making the file smaller while keeping all the data intact.
DD (RAW format) – A bit-for-bit copy with no compression, meaning it takes up more space but remains a direct replica.

Some of the most widely used disk imaging tools include:
FTK Imager
X-Ways Imager
Guymager
DD (a classic command-line tool)

Steps to Create a Disk Image
Connect the source drive to your computer using a write blocker.
Start the imaging software and select the source drive.
Choose a destination location where the image file will be saved.
Select the format (E01 or DD) based on your needs.
Start the imaging process and wait for completion.

Once finished, most imaging software (except DD) generates a log file. This report contains:
Drive details (size, sector count, etc.)
A hash value (used to verify data integrity)
Any errors, such as unreadable sectors

Hardware vs. Software Imaging
While the above method uses software-based imaging (requiring a computer and write blocker), another option is hardware-based imaging.

Hardware Imaging Devices
A hardware imager is a standalone device that combines the functions of a computer, write blocker, and imaging software in one unit. These devices:
Are faster and more efficient for large-scale imaging
Minimize errors and risks of accidental modifications
Can save images to another hard drive or even a network location (if supported)
However, be careful not to mix up the source and destination drives! Formatting the wrong drive could lead to irreversible data loss.

How Long Does Imaging Take?
Disk imaging can take several hours, depending on:
The size of the drive
How much data is stored on it
The speed of the connection (USB, SATA, or network transfer)
While waiting, many forensic analysts take advantage of this time to review key data (a process called rapid triage), helping to identify important leads before the full image is ready.

Live vs. Dead Imaging: What's the Difference?
Live Imaging – Done while the system is still running.
This is useful when you need to capture volatile data like running processes, open network connections, or system logs.
Dead Imaging – Performed after powering down the system. This is the traditional approach and is often used for full disk acquisitions.

Why Live Imaging Matters
A running system provides valuable forensic insights, such as:
What applications are currently running
Connected external devices (USBs, external drives, etc.)
Potential signs of tampering or malicious activity
If the system is off, you won't get this real-time data. But if it's on, documenting its current state before imaging is crucial.

Old vs. Modern Forensic Acquisition Methods
In the past, forensic specialists followed a "dead box" approach, where the computer was shut down before data collection. This was because:
RAM (temporary memory) was small and not often considered valuable.
Encryption was rare, making it easy to access data even after shutting down.
However, today's machines often use encryption and security measures like TPM (Trusted Platform Module), making live imaging more important than ever. If you shut down an encrypted device, the data could be permanently locked.

How Were Systems Handled in the Past?
If it was a regular computer (not a server), forensics experts would unplug it directly.
If it was a server, they would shut it down properly to avoid issues with RAID configurations or system failures.

--------------------------------------------------------------------------------------------------------

Live Response
When dealing with a running system, the way you collect data can significantly impact an investigation. Unlike a powered-off system, where everything is static, a running machine holds volatile data that can be lost if not captured correctly. Live response is the process of collecting critical data from a system that is still powered on. This includes memory (RAM), active processes, network connections, and encryption states.

Step 1: Document the System's Status
Before interacting with the machine, it's essential to document everything:
What's displayed on the screen?
Are any applications open?
Are there external devices connected?
Is the system asleep or in hibernation mode?
Many computers may appear off when they are just in sleep mode. A simple press of the spacebar or mouse movement can wake them up. Also, check for indicator lights on the computer case—these can show that the system is still running.

Step 2: Determine the Order of Volatility
Volatile data disappears quickly once the system is shut down. This means you need to collect the most fragile information first. The order of volatility in a forensic investigation is as follows:
Dump Memory (RAM) – This contains running programs, network sessions, user activity, passwords, and even malware that only exists in memory.
Check for Encryption – If encryption is present, shutting down the system could permanently lock the data.
Perform Triage Collection – Extract key artifacts from the live system for quick analysis while the full forensic image is created.

Step 3: Dump Memory (RAM)
RAM is one of the richest sources of forensic data, but also the most fragile. If the computer is turned off before capturing RAM, this data is gone forever.

💡 What can be found in RAM?
Running processes
Open files and directories
Network connections
Chat conversations
Encryption keys
Malware that exists only in memory

How to Capture RAM?
There are several tools available for memory acquisition, with Windows systems having more options than Macs. Before starting, ensure the system is disconnected from all networks (Ethernet and Wi-Fi) to prevent remote interference.

To capture RAM:
✅ Use a USB drive or external SSD with forensic tools installed
✅ Store the memory dump on a fast external drive to speed up the process
✅ Use specialized tools like Volatility to analyze memory contents later

Important Considerations:
Mac computers are more difficult to analyze due to fewer available tools.
Laptops should be plugged in to prevent power loss during acquisition.
Be careful with encryption keys—they often exist in RAM and can be retrieved before shutdown.

Step 4: Check for Encryption
Encryption can be a major roadblock if not handled properly. Many modern computers use full-disk encryption with tools like:
BitLocker (Windows)
VeraCrypt
PGP Encryption
If the system is still running, the encrypted data is often accessible. The best approach is to create a logical volume image while the machine is still running. This ensures that decrypted data is preserved.

💡 If encryption is present:
✔️ Image the drive before shutting down
✔️ Extract encryption keys from memory (if possible)
✔️ If no encryption is detected, proceed with normal disk imaging

Step 5: Perform Triage Collection
While waiting for full disk imaging to complete, triage collection can provide fast insights. Using tools like KAPE, forensic examiners can extract:
Browser history
User activity logs
Recently opened files
System logs
This allows investigators to identify leads early without waiting hours for a complete forensic image.

Step 6: The Reality of Live Data Collection
Interacting with a running system always leaves some trace. The key is to minimize changes and document everything.
💡 Common mistakes:
Shutting the system down too early and losing RAM data
Forgetting to disable network access, allowing remote tampering
Using slow USB drives that take too long to capture memory

Why RAM Collection Matters More Than Ever
With modern encryption and cloud-based applications, RAM is now more valuable than ever in forensic investigations. Unlike 15 years ago, when most data was stored on hard drives, today's machines:
✔️ Have 8GB, 16GB, or even 32GB of RAM (containing a huge amount of data)
✔️ Store passwords, decryption keys, and session data in memory
✔️ Run software that only exists in RAM (fileless malware)

Step 7: Storage and Transfer of Memory Dumps
Since memory dumps can be large, choosing the right storage device is critical. A solid-state external hard drive is the best choice due to high-speed data transfer.

Final Step: Document Everything!
Since live response actively changes system data, it's crucial to:
📌 Take photos or videos of each step
📌 Write detailed notes on what actions were taken
📌 Record timestamps for each forensic operation

--------------------------------------------------------------------------------------------------------

Live Response Tools
When performing live forensics on a running system, one of the biggest challenges is introducing your tools without altering or corrupting evidence. While it may seem simple—just plug in a USB drive and start collecting data—there are several critical factors to consider.

Key Questions to Ask Before Deploying Live Response Tools
Before introducing any tools into a system, ask yourself:
✅ How much space will I need? (Memory dumps and disk images can be large.)
✅ How should my external drive be formatted?
(NTFS for Windows, exFAT for cross-compatibility.)
✅ What resources are available? (Are USB ports, network storage, or optical drives an option?)
✅ Can I trust the software already on the target system? (Always bring your own trusted binaries.)
✅ Are there any environmental restrictions? (Some locations, such as government facilities, may restrict USB devices.)
✅ Do I have a backup plan? (If my primary tool fails, do I have an alternative?)

Choosing the Right External Storage
Since live forensics often involves capturing large amounts of data (such as RAM dumps or forensic images), using a high-quality external storage device is crucial.
💡 Best Practices for External Storage Devices:
✔️ Use a large-capacity, high-quality external SSD for faster read/write speeds.
✔️ Format the drive as NTFS for Windows systems or exFAT for cross-platform compatibility.
✔️ Always document the details of the device before use.

Tracking USB Devices with NirSoft USBDeview
To maintain a proper chain of custody, document the details of your external storage using a tool like NirSoft USBDeview. This allows you to:
Record the make, model, and serial number of your USB device.
Include this information in your forensic reports for future reference.

Where Should You Store Collected Data?
One of the biggest logistical challenges in live response is deciding where to store the collected data. This depends on:
The size of the storage device you're imaging.
The amount of memory on the system.
The number of devices you need to process.

Storage Recommendations:
✅ External SSDs – The preferred option, but always bring more space than you think you'll need. If you estimate needing 1TB, bring 4TB—unexpected extra data is common!
✅ Network Storage (Less Optimal) – If an external drive isn't an option, a network share may work, but consider security risks (who else has access?).
✅ Chain of Custody Considerations – Keep strict control over the storage device to prevent tampering or unauthorized access.

Selecting the Right Live Response Tools
Once you have a storage device ready, the next step is choosing and deploying the right tools for live response. Your toolkit should include:
🔹 Memory collection tools (e.g., DumpIt, Belkasoft RAM Capturer, FTK Imager)

Command Line vs. GUI Tools
When performing live forensics, minimizing system impact is critical. Using command-line (CLI) tools instead of graphical user interface (GUI) tools can help:
✔️ Reduce memory usage
✔️ Minimize system modifications
✔️ Prevent unnecessary process execution

Top Memory Collection Tools for Live Forensics

1. DumpIt (by Comae Technologies)
Pros:
✅ Simple command-line tool with minimal system impact
✅ Can be executed without additional arguments for quick memory dumps
✅ Allows file compression to save space
Cons:
❌ Compressed files may not be compatible with all memory analysis tools
💡 Usage: To capture memory using DumpIt, simply execute:
DumpIt /OUTPUT
If run without arguments, DumpIt will prompt for confirmation before proceeding. The collected memory file will automatically be named with the machine name and timestamp.

2. Belkasoft RAM Capturer
Pros:
✅ Minimal GUI interface, reducing system modifications
✅ Uses a kernel-mode driver to bypass anti-forensic techniques
✅ Available in 32-bit and 64-bit versions to minimize unnecessary code execution
Cons:
❌ Requires administrator privileges
💡 Usage:
Launch Belkasoft RAM Capturer.
Select an output folder for the memory dump.
Click "Capture!" to start memory acquisition.
3. FTK Imager
Pros:
✅ Well-known forensic tool with wide industry adoption
✅ Can capture both memory and full disk images
✅ Provides verification logs for integrity checks
Cons:
❌ Older versions (pre-3.0.0) operate in user mode, which may limit access to certain memory areas
❌ May not detect advanced malware hiding in kernel memory
💡 Important Note: If using FTK Imager, update to version 3.0.0 or later to ensure kernel-level access to all memory areas.

Final Considerations: Ensuring a Secure and Effective Live Response
🔹 Plan ahead – Know the environment and what resources are available.
🔹 Minimize system impact – Use command-line tools whenever possible.
🔹 Document everything – Keep detailed records of every action taken.
🔹 Secure collected data – Store forensic images and memory dumps on encrypted, controlled-access storage.
🔹 Always have a backup plan – If one tool fails, be ready with an alternative.

--------------------------------------------------------------------------------------------------------

Handling Encrypted Drives
Encryption presents a major challenge in digital forensics. While forensic imaging techniques typically allow investigators to access data on a storage device, encryption software like BitLocker, VeraCrypt, and PGP can make this data completely inaccessible without the proper decryption key.

What Happens When a Drive is Encrypted?
If encryption is enabled, imaging the physical volume (even with a write blocker) only captures the encrypted data, which is useless without the decryption key. This is especially problematic if the device is turned off because, in many cases, powering down the system can permanently lock the data.

💡 Key Takeaways:
✔️ If encryption is detected, do NOT shut down the system before performing a live capture.
✔️ If the system is running, logical imaging may allow access to decrypted data.
✔️ Failing to check for encryption before imaging can result in lost evidence.

How to Detect Encryption on a Running System
To determine whether a system is using encryption, forensic analysts use specialized tools that can scan for encryption signatures. One such tool is Encrypted Disk Detector (EDD) from Magnet Forensics.

🔍 Using EDD to Identify Encrypted Volumes
EDD is a command-line tool that checks local physical drives for encryption software, including:
BitLocker (Windows)
VeraCrypt & TrueCrypt
PGP® (Pretty Good Privacy)
Checkpoint, Sophos, and Symantec encrypted volumes

💡 How EDD Works
I have created a complete article on EDD; do check it out and you will learn how to use the tool:
https://www.cyberengage.org/post/exploring-magnet-encrypted-disk-detector-eddv310
EDD does not locate encrypted container files that are not mounted, but other forensic tools can assist with that.

Handling VeraCrypt and TrueCrypt Encryption
VeraCrypt is the successor to TrueCrypt, and both function similarly:
🔹 Users create an encrypted container that appears as a mounted drive.
🔹 Files stored inside are inaccessible without a password or keyfile.
🔹 A hidden partition can be created within the primary encrypted volume.

Detecting VeraCrypt/TrueCrypt Artifacts
If a container is currently mounted, EDD will detect and flag it. However, once unmounted, traces of its existence may be deleted from the system.

💡 Registry Analysis for VeraCrypt/TrueCrypt
Older versions of these tools left traces in the Windows Registry under:
HKEY_LOCAL_MACHINE\SYSTEM\MountedDevices
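If you want to eyeball that key yourself on a live Windows box, a quick query from an elevated command prompt looks like this (a minimal sketch; the entries are volume GUIDs and device mappings, so interpreting them still takes some care):

reg query HKLM\SYSTEM\MountedDevices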
Older versions left artifacts even after unmounting. Newer versions delete traces after unmounting (though remnants may still exist).

Pro Tip: Finding Encrypted Containers
Since encrypted containers store a large amount of data, they tend to be some of the biggest files on the system. You can identify them by:
✅ Scanning for large, unexplained files on the system.
✅ Ignoring system files like pagefile.sys and hiberfil.sys.
✅ Checking recently accessed files for unusual activity.

BitLocker Encryption: Challenges & Solutions
BitLocker is Microsoft's built-in encryption tool, included with Windows Enterprise, Pro, and Ultimate editions.
💡 How BitLocker Works:
✔️ Uses AES encryption (128-bit or 256-bit).
✔️ Can be enabled via Group Policy (common in corporate environments).
✔️ Requires a password, PIN, or recovery key to unlock data.

The Biggest Forensic Challenge with BitLocker
If a BitLocker-encrypted drive is removed from the original computer, the data is completely inaccessible without the recovery key. However, if the system is still running, forensic analysts can bypass encryption and extract data while it remains unlocked.

Two Ways to Handle BitLocker-Protected Drives
🔹 Option 1: Live Logical Imaging
If the system is running, image the logical drive instead of the physical disk. This ensures you capture decrypted data.
🔹 Option 2: Recover BitLocker Keys
BitLocker requires users to save a recovery key to a separate drive or print it. In corporate settings, IT administrators may have stored recovery keys via Group Policy.

Best Practices for Handling Encrypted Systems
🔹 Always check for encryption before shutting down the system.
🔹 If encryption is detected, prioritize live imaging.
🔹 Use tools like EDD to scan for encryption software.
🔹 Look for large container files if encryption is suspected.
🔹 Consult Group Policy settings for corporate BitLocker deployments.

--------------------------------------------------------------------------------------------------------

Wrapping Up
Digital forensic acquisition is as much about strategy and preparation as it is about technical execution. Whether capturing volatile memory, imaging a disk, or handling encrypted data, the right approach can mean the difference between retrieving crucial evidence or losing it forever. By following best practices, using trusted tools, and adapting to evolving challenges, forensic investigators can ensure data integrity, accuracy, and reliability in every case they handle. 🚀
---------------------------------------------Dean-------------------------------------------
- Extracting Memory Objects with MemProcFS/Volatility3/Bstrings: A Practical Guide
----------------------------------------------------------------------------------------------------
I already have in-depth articles on MemProcFS, bstrings, and Volatility 3; do check those out to learn each tool in depth! Links below.

MemProcFS
https://www.cyberengage.org/post/memprocfs-memprocfs-analyzer-comprehensive-analysis-guide
Volatility 3
https://www.cyberengage.org/post/step-by-step-guide-to-uncovering-threats-with-volatility-a-beginner-s-memory-forensics-walkthrough
Strings/Bstrings
https://www.cyberengage.org/post/memory-forensics-using-strings-and-bstrings-a-comprehensive-guide
----------------------------------------------------------------------------------------------------
Today we'll do a kind of comparison. Let's get started!
------------------------------------------------------------------------------------------------------------
When analyzing a system's memory, you're often looking for key artifacts like suspicious processes, DLLs, drivers, or even cached files. These could be crucial for forensic investigations, malware analysis, or troubleshooting. With MemProcFS, extracting these objects becomes incredibly simple—just like browsing files in a regular folder.
------------------------------------------------------------------------------------------------------------
MemProcFS

Why Extract Memory Objects?
Think of RAM as a goldmine of real-time data. Anything that has happened on a system—running programs, opened documents, registry changes, and even deleted files—can still be floating around in memory. If you know where to look, you can extract critical pieces of evidence, such as:
Running processes and their memory sections
Loaded DLLs and executables
Cached documents and registry hives
The NTFS Master File Table (which contains a list of all files on disk)
Active Windows services
With MemProcFS, all of these objects can be accessed like regular files, making extraction quick and hassle-free.
------------------------------------------------------------------------------------------------------------
Navigating Memory Objects in MemProcFS
MemProcFS organizes memory data in a virtual folder structure, making it intuitive to browse and extract files. Here's how you can locate key objects:

Processes and Memory Sections
You can find process-related data under:
M:\name\powershell.exe-5352\ (organized by process name)
M:\pid\7164\ (organized by process ID)
These folders contain everything from heaps and memory dumps to loaded DLLs.

DLLs and Executables
The modules folder holds DLLs and executables loaded into memory. Each DLL or executable is stored as pefile.dll, allowing you to extract and analyze it.

Tracking Memory Sections
The vmemd folder helps you track specific memory regions linked to suspicious activities.
The heaps folder is useful for finding private memory allocations, where processes store sensitive data.
The minidump folder provides a snapshot of process memory, including both code and data.

Drivers and System Modules
Most kernel drivers can be found under the System process folder (M:\pid\4\modules\).
Some graphics drivers (Win32k) reside in the CSRSS.exe process, though they're rarely useful for most investigations.
------------------------------------------------------------------------------------------------------------
Extracting and Analyzing Memory Objects
MemProcFS makes extraction as simple as copying a file. You can:
Open memory sections in a hex editor for low-level analysis.
Extract strings from executables to identify potential malware behavior.
Upload a suspicious DLL or EXE to VirusTotal for threat intelligence.
Open DLLs in a disassembler to inspect their functionality.
Run an antivirus scan—though it's best to copy the file first, as security tools may quarantine it.
Pro Tip: If a tool fails to open a virtual file, try copying it to a local folder first.
------------------------------------------------------------------------------------------------------------
Handling Terminated Processes
Not seeing a process under M:\name or M:\pid? It might have exited before you started your analysis. By default, MemProcFS doesn't display terminated processes since their memory can be incomplete or corrupted. However, you can enable this feature by modifying:
M:/config/config_process_show_terminated.txt
Change the value to 1, and MemProcFS will attempt to reconstruct folders for terminated processes.
------------------------------------------------------------------------------------------------------------
Volatility 3
You might be wondering why the dedicated dumping plugins disappeared in Volatility 3. The truth is—they haven't! The functionality is still there; it's just been integrated into the standard plugins with an additional --dump option.

Key Changes in Volatility 3
The --dump option: If a plugin supports dumping memory objects, you'll see this option in the plugin help.
Output folder (-o) parameter: This replaces Volatility 2's --dump-dir= and is crucial when extracting drivers, DLLs, and other artifacts to keep things organized.
Parameter Order Matters: Unlike Volatility 2, where things were more flexible, Volatility 3 requires -o to come before the plugin, while plugin-specific options like --pid and --dump come after.

Extracting Executables
To extract suspicious processes from memory, use the windows.pslist --dump plugin. By default, it dumps all processes in the EPROCESS list, but you can narrow it down using --pid.
Command:
python3 vol.py -f memory.img -o output-folder windows.pslist --dump
For terminated or unlinked processes, use windows.psscan --dump, which replaces the old procdump workflow from Volatility 2.

Extracting DLLs
If you need to pull DLLs from memory, windows.dlllist --dump is your go-to plugin. It extracts all DLLs by default, but filtering by --pid is a good practice to avoid unnecessary files.
Command:
python3 vol.py -f memory.img -o output-folder windows.dlllist --pid 1040 --dump
The equivalent Volatility 2 plugin was dlldump.

Extracting Drivers
When analyzing potentially malicious drivers, use windows.modules --dump. If you need to go deeper and retrieve unloaded or unlinked drivers, windows.modscan --dump is the way to go.
Command:
python3 vol.py -f memory.img -o output-folder windows.modules --dump
In Volatility 2, this was handled by moddump.

Important Notes:
No Guarantees on Data Availability: Some memory objects might be paged out, making extraction incomplete.
Including Page Files Helps: If possible, analyze the page file to recover missing artifacts.

Process Memory Extraction
Dumping process memory is trickier than extracting files. Process memory contains both code (executable sections) and data (buffers, command-line inputs, PowerShell scripts, etc.).
Tools for Dumping Process Memory:
windows.pslist --dump: Extracts executable code, similar to Volatility 2's procdump.
windows.memmap --dump: Dumps all memory-resident pages, capturing both code and data (like Volatility 2's memdump).
MemProcFS: Creates a pefile.dll representing the executable part of a process and a minidump.dmp file containing key process memory sections.
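For example, to capture both code and data for one suspicious process with Volatility 3, a hedged sketch (the PID, image, and folder names are illustrative):

# Dump every memory-resident page of PID 1040 into output-folder
python3 vol.py -f memory.img -o output-folder windows.memmap --pid 1040 --dump

Expect a large output file: memmap grabs data pages (buffers, command lines, scripts) as well as executable code.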
------------------------------------------------------------------------------------------------------------
Strings/Bstrings

Searching for Artifacts in Memory Dumps
One of the most effective forensic techniques is string searching, which helps identify artifacts like IP addresses, domains, malware commands, and user credentials. Here's how to do it:

Using strings (Linux)
strings -a -t d memory.img > strings.txt
strings -a -t d -e l memory.img >> strings.txt
sort strings.txt > sorted_strings.txt

Using grep (for targeted searches)
grep -i "search_term" sorted_strings.txt

Using bstrings.exe (Windows/Linux)
Eric Zimmerman's bstrings is a great alternative that extracts ASCII and Unicode strings simultaneously and can even perform the initial searches for you.
Commands:
bstrings -f memory.img -m 8                # Extracts strings of length 8+
bstrings -f memory.img --ls search_term    # Searches for a specific term
bstrings -f memory.img --lr ipv4           # Uses a built-in regex to find IPv4 addresses
------------------------------------------------------------------------------------------------------------
MemProcFS vs. Volatility
While MemProcFS makes memory analysis incredibly convenient, it's not a one-size-fits-all solution. Volatility is another powerful tool that provides more in-depth forensic capabilities, such as:
Advanced memory carving techniques
More detailed malware analysis
Reconstruction of deleted or hidden processes
For best results, combine both tools—use MemProcFS for quick and easy extraction, and Volatility for deeper analysis.
------------------------------------------------------------------------------------------------------------
Wrapping Up
Memory forensics can be overwhelming, but tools like MemProcFS simplify the process. By treating memory like a file system, it allows you to quickly extract key artifacts, analyze suspicious activity, and uncover critical forensic evidence. Whether you're investigating malware, troubleshooting system crashes, or performing digital forensics, MemProcFS gives you the power to dig deep into memory with ease.
---------------------------------------------Dean----------------------------------------------
- Understanding Userland Hooks and Rootkits in Real-World Investigations
Security improvements have made kernel rootkit techniques like IDT (Interrupt Descriptor Table) and SSDT (System Service Descriptor Table) hooks much harder for attackers to pull off. So, they've started looking for new ways—or sometimes, going back to old ones. One such method is userland hooking, which works in user mode instead of the kernel. That makes it less powerful but also harder to detect, because legitimate applications use similar techniques all the time.

What Are Userland Hooks?
Userland hooks are ways for malware (or legit software) to intercept function calls in a program. Here are the two main types:
Import Address Table (IAT) Hooks: This is a simple trick where malware changes a function's address in a process's IAT, redirecting it to malicious code.
Inline/Trampoline Hooks: These modify the actual function code by inserting a jump instruction that leads to malware instead. These hooks are sneaky because they don't mess with the IAT directly and only modify the function in memory.
For attackers, userland hooks are useful because they let them manipulate programs without triggering deep security defenses. Plus, they don't have to mess with the kernel, where modern security tools are always watching.

Detecting Malicious Hooks
Using tools like Volatility's apihooks plugin, you can find these hooks. But be warned—tons of legit software also uses hooks, so you'll need to separate the suspicious ones from the normal ones. Some DLLs that are known to legitimately hook functions include:
• setupapi.dll
• mswsock.dll
• sfc_os.dll
• adsldpc.dll
• advapi32.dll
• secur32.dll
• ws2_32.dll
• iphlpapi.dll
• ntdll.dll
• kernel32.dll
• user32.dll
• gdi32.dll
Knowing which DLLs are commonly used by legitimate applications helps filter out false positives.
-------------------------------------------------------------------------------------------------------------
Digging Deeper with Driver Analysis
Malware often loads drivers to maintain deeper access to the system. Rootkits, in particular, use drivers to hide malicious activity. The Volatility modules plugin helps list loaded drivers, while modscan scans memory for additional, possibly hidden drivers. These tools help you spot suspicious drivers, even if they've been unlinked from active memory structures.
---------------------------------------------------------------------------------------------------------------
Identifying malicious drivers can be tricky—there are too many of them, and most analysts aren't familiar with what should or shouldn't be present. To make things easier, I will use a tool called Memory Baseliner. I have written a complete article on Memory Baseliner, so I won't re-explain the tool here; check out the link below.
https://www.cyberengage.org/post/baseline-analysis-in-memory-forensics-a-practical-guide

Finding Suspicious Drivers
In this case, I used Memory Baseliner with the --showknown and --imphash options. Here's why:
--showknown displays both known (baseline) and unknown (new) drivers.
--imphash calculates the import hash of drivers, making it easier to spot changes between versions.
A close-matching baseline image usually means only a handful of new drivers will appear. Focus on those first, especially ones outside the usual \Windows\System32\Drivers and \Windows\System32 directories.
---------------------------------------------------------------------------------------------------------------
As an investigator, your job is to identify what's normal and what's suspicious.
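For reference, invoking those plugins in Volatility 2 looks roughly like this (the image name and profile are illustrative; substitute the profile that matches your memory image):

# Scan for userland API hooks (add -p <PID> to limit the scan to one process)
python2 vol.py -f memory.img --profile=Win7SP1x64 apihooks
# List loaded drivers, then scan for unlinked or hidden ones
python2 vol.py -f memory.img --profile=Win7SP1x64 modules
python2 vol.py -f memory.img --profile=Win7SP1x64 modscan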
Tools like Volatility’s apihooks, modules, modscan, and moddump are great for uncovering these threats. Remember: Just because something is hooked doesn’t mean it’s bad. Look for unknown modules or drivers in memory. Compare findings with clean systems to reduce false positives. --------------------------------------------------------------------------------------------------------------- Rootkits: A Rare but Serious Threat Rootkits are designed to hide processes, files, registry keys, and network artifacts. They aren’t as common as code injection, but they’re harder to detect. Volatility 2 has some of the best memory analysis plugins for detecting rootkits by identifying process hooking and unlinking. But here’s the challenge: Many legitimate applications (like antivirus tools) use hooking too. Only a tiny percentage of malware actually use rootkits. Because of this, rootkit detection isn’t the first step in memory analysis. Instead, analysts typically discover them through: Suspicious processes Unexpected network connections Unknown drivers By the time you’re hunting for a rootkit, you’ve probably already found other indicators of compromise. The goal at this stage is to gather more evidence to understand the attack. ------------------------------------------------------------------------------------------------------------- Wrapping Up Memory Baseliner is a powerful tool for reducing the noise in memory analysis and quickly identifying unknown drivers. As attackers continue to exploit vulnerabilities in drivers, security teams must stay ahead by integrating tools like Volatility, and Memory Baseliner into their workflows. Keep hunting, keep learning, and most importantly—stay curious! ----------------------------------------------------------Dean-------------------------------------------
- Understanding Rootkits: The Ultimate Cybersecurity Nightmare and Direct Kernel Object Manipulation
Rootkits have been keeping cybersecurity pros up at night for years. These sneaky pieces of malware can hide deep inside a system, making their presence nearly impossible to detect using regular security tools. They can mask processes, files, registry entries, and even network connections, making incident response a real headache.

So, what exactly is a rootkit? Think of it like a magician's trick—it diverts attention and manipulates what you see. A rootkit alters the system's usual flow, rerouting commands and data to conceal itself. In some cases, rootkits can give attackers full control over a system while remaining completely invisible. This is why traditional antivirus tools often fail against them.
-----------------------------------------------------------------------------------------------------------
How Do You Detect Rootkits?
Since rootkits are built to hide, finding them requires an approach beyond standard security tools. The best way to detect them is through memory analysis and offline disk forensics. Every action a rootkit takes—executing code, establishing network connections, or installing drivers—leaves some trace. The key is knowing where to look.

The Evolution of Rootkits
Rootkits have evolved over time, moving from basic techniques to highly sophisticated methods:
Userland Rootkits – These operate in user space, where regular applications run. They modify processes by hooking the Import Address Table (IAT) or patching code in memory to redirect execution. These are easier to detect but still effective at avoiding basic security tools.
Kernel Rootkits – These are much more dangerous. They manipulate core system structures like the Interrupt Descriptor Table (IDT), System Service Descriptor Table (SSDT), and IRPs (I/O Request Packets) to stay hidden. Since they work at the kernel level, they are harder to detect and remove. Microsoft has implemented security measures like PatchGuard and Driver Signature Enforcement to counter them.
Bootkits – These take things to the next level by attacking the system before the operating system even loads. A bootkit can completely take over a system by running it inside a malicious hypervisor, similar to old-school Master Boot Record (MBR) attacks, but with modern complexity.
Firmware and Hardware Rootkits – These are the latest breed and the hardest to remove. They embed themselves in system firmware, meaning they persist even after formatting the disk or reinstalling the OS. The good news? They're still rare, but as cybersecurity defenses improve, attackers are shifting toward these more advanced techniques.
-------------------------------------------------------------------------------------------------------------
How to Detect and Remove Rootkits
Since rootkits are masters of deception, you need specialized tools to uncover them. Volatility, a popular memory forensics tool, offers several plugins to detect different types of rootkit activity:
apihooks – Identifies userland hooks in the IAT and inline functions.
idt, ssdt, and driverirp – Audit kernel structures commonly targeted by rootkits.
psxview – Cross-checks process listings from multiple sources to find hidden processes.
modules and modscan – Identify suspicious kernel modules and drivers.

Plugins in Volatility 3:
ssdt: Supported as windows.ssdt.SSDT.
driverirp: Supported as windows.driverirp.DriverIrp.
psxview: Supported as windows.psxview.PsXView.
modules: Supported as windows.modules.Modules.
modscan: Supported as windows.modscan.ModScan.
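Invoking those Volatility 3 versions is straightforward, since no profile is required (the framework resolves symbols automatically; the image name below is illustrative):

python3 vol.py -f memory.img windows.ssdt
python3 vol.py -f memory.img windows.modscan
python3 vol.py -f memory.img windows.psxview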
----------------------------------------------------------------------------------------------------------------

By understanding classic rootkit detection methods, security analysts can better prepare for emerging threats. And who knows? With the right skills, you might just be the one to discover a brand-new rootkit in the wild!

----------------------------------------------------------------------------------------------------------------

Understanding Direct Kernel Object Manipulation (DKOM) in the Real World

Direct Kernel Object Manipulation (DKOM) is one of the stealthiest techniques used by rootkits to hide malicious activity in an operating system. Think of it as a hacker sneaking into a party and removing their name from the guest list, yet still roaming around unnoticed.

What is DKOM?

As the name suggests, DKOM allows malware to tamper with kernel objects directly in memory. These changes never touch the disk, making detection incredibly difficult. Traditional security tools rely on standard kernel structures to track processes and drivers, so tampering with those structures can make certain activity invisible to them.

How DKOM Works

One of the most common DKOM tricks is unlinking a process from the standard kernel process list. For instance, let’s say a malicious process called attacker.exe is running. Normally, tools like tasklist.exe, Sysinternals’ pslist.exe, or even forensic tools like Volatility’s pslist would be able to detect it. But if an attacker uses DKOM to remove attacker.exe from the process list, it will keep running while most system monitoring tools fail to see it.

To put it simply, DKOM exploits how the system keeps track of running processes and drivers, removing malicious entries from standard monitoring methods without actually stopping them.

----------------------------------------------------------------------------------------------------------------

Detecting DKOM with Volatility’s psxview

Even though DKOM is designed to be stealthy, it’s not completely undetectable. Volatility’s psxview plugin is one of the best tools to catch DKOM-based process hiding. Instead of relying on just one method to list processes, psxview cross-checks multiple sources to identify discrepancies.

akashpatel@Akash-Laptop:~/Memorytool/volatility3$ python3 vol.py -f /mnt/c/Users/Akash\'s/Downloads/solarmarker/solarmarker.img windows.psxview -R > /mnt/c/Users/Akash\'s/Downloads/psxr.txt

Here’s how:
pslist – Reads the EPROCESS doubly linked list (the standard process tracking method).
psscan – Scans the entire memory image for EPROCESS structures, including unlinked ones.
thrdproc – Looks at all running threads and maps them back to processes.
pspcid – Uses the PspCid table, another kernel object that tracks processes.
csrss – Uses Windows’ csrss.exe process to track child processes.
session – Lists processes based on user logon sessions.
deskthrd – Checks desktop-associated threads for process tracking.

The psxview output gives a “True” or “False” indication for each of these checks. If a process is missing from pslist but appears in other sources like psscan, it’s a red flag that something is trying to hide.

-------------------------------------------------------------------------------------------------------------

You might ask me, “Dean, if psscan can find hidden processes, why bother with psxview?” Because comparison is key! A process appearing in low-level scans but missing from high-level system calls suggests manipulation.
Without this comparison, a hidden process might look like any other legitimate one.

-------------------------------------------------------------------------------------------------------------

We will talk more about rootkits and detection in upcoming articles, so stay connected. Happy hunting, and see you in the next one!

----------------------------------------------------------------------------------------------------------

Final Thoughts

DKOM remains one of the most effective ways malware hides in a system, and while modern security features help mitigate its impact, it’s still a viable attack technique in many environments. Using tools like psxview in Volatility provides a solid method for uncovering hidden processes and detecting rootkits in memory. The key takeaway? If something doesn’t show up where it should, dig deeper!

----------------------------------------Dean----------------------------------------------------
- Using Pattern of Life (APOLLO) for macOS Investigation
When investigating macOS, one of the most valuable sources of forensic data is the knowledgeC.db database. This database logs a wide range of activities related to application usage, media playback, device status, and user interactions.

-------------------------------------------------------------------------------------------------------------

Application Usage Tracking

Apps Used on macOS

The knowledgeC.db database stores details about application usage. Location:

macOS: ~/Library/Application Support/Knowledge/knowledgeC.db

Data recorded includes:
Start and end times of app usage
Bundle ID of the application
Time spent in seconds and minutes
Launch reason
Day of the week
GMT offset
Entry creation timestamp

The best tool to extract this is the mac4n6 APOLLO module (a simplified example query also appears after the Device Status section below):
https://github.com/mac4n6/APOLLO/blob/master/modules/knowledge_app_inFocus.txt

Application Intents

Beyond app usage, knowledgeC.db records contextual data in the form of Intents, which include:
Start and end times
App name and Bundle ID
Intent verb and action class (i.e., what the app was doing)
Device ID (hardware UUID) for tracking synced activity across iCloud devices
Contact details and contextual information
More granular data stored in serialized plist files, such as direct-messaging activity in apps like Twitter

Use the query/module below:
https://github.com/mac4n6/APOLLO/blob/master/modules/knowledge_app_intents.txt

-------------------------------------------------------------------------------------------------------------

Media Tracking: What’s Playing on the Device

Forensic analysis of media playback on macOS is also possible via knowledgeC.db, which logs details like:
Start and end times
Usage duration in seconds
Bundle ID of the media-playing app
Metadata including album, artist, title, and duration
Device output details (e.g., MAC addresses of audio output devices)

Use the query/module below:
https://github.com/mac4n6/APOLLO/blob/master/modules/knowledge_audio_media_nowplaying.txt

-------------------------------------------------------------------------------------------------------------

Device Status Monitoring

Locked and Plugged-In Status

We can determine when a device was locked and whether it was plugged into power using knowledgeC.db.

Query for locked device:
https://github.com/mac4n6/APOLLO/blob/master/modules/knowledge_device_locked.txt

Query for device plugged in:
https://github.com/mac4n6/APOLLO/blob/master/modules/knowledge_device_pluggedin.txt
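All of these knowledgeC.db modules follow the same pattern under the hood. As a simplified illustration (a sketch only; the real APOLLO modules join additional tables, and column names can drift between OS versions), the application-in-focus stream can be pulled straight from the ZOBJECT table, converting Apple’s Mac-epoch timestamps along the way:

sqlite3 knowledgeC.db "SELECT datetime(ZSTARTDATE + 978307200, 'unixepoch') AS start_time, datetime(ZENDDATE + 978307200, 'unixepoch') AS end_time, ZVALUESTRING AS bundle_id, (ZENDDATE - ZSTARTDATE) AS usage_seconds FROM ZOBJECT WHERE ZSTREAMNAME = '/app/inFocus' ORDER BY ZSTARTDATE;"

The 978307200 offset converts Mac epoch (seconds since 2001-01-01) to Unix epoch, the same conversion you will need for most Apple databases in this article.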
-------------------------------------------------------------------------------------------------------------

Volume and Battery Level

Using CurrentPowerlog.PLSQL, investigators can track the battery status and volume levels of macOS and iOS devices.

macOS: /private/var/db/powerlog/Library/BatteryLife/ (and the /Archives directory)

Query for powerlog_battery_level:
https://github.com/mac4n6/APOLLO/blob/master/modules/powerlog_battery_level.txt
Query for powerlog_device_volume:
https://github.com/mac4n6/APOLLO/blob/master/modules/powerlog_device_volume.txt

-------------------------------------------------------------------------------------------------------------

Call and Camera Status

For those examining call activity or camera usage, Powerlog maintains records of:
Whether the front or rear camera was in use
Ongoing call statuses

macOS: /private/var/db/powerlog/Library/BatteryLife/ (and the /Archives directory)

Query for powerlog_camera_state:
https://github.com/mac4n6/APOLLO/blob/master/modules/powerlog_camera_state.txt
Query for powerlog_incallservice:
https://github.com/mac4n6/APOLLO/blob/master/modules/powerlog_incallservice.txt

-------------------------------------------------------------------------------------------------------------

Health Data Tracking

Heart Rate Monitoring

The healthdb_secure.sqlite database, available in an encrypted backup or via a physical device dump, logs heart rate data collected via Apple Watch.

Query for health_heart_rate:
https://github.com/mac4n6/APOLLO/blob/master/modules/health_heart_rate.txt

Steps and Distance Data

This same database also records step count and distance traveled, which can be useful for understanding movement patterns.

Query for health_distance:
https://github.com/mac4n6/APOLLO/blob/master/modules/health_distance.txt
Query for health_steps:
https://github.com/mac4n6/APOLLO/blob/master/modules/health_steps.txt

-------------------------------------------------------------------------------------------------------------

Other Key Data Sources

Passcode Unlock and AirDrop Activity

The Aggregate Dictionary (ADDataStore.db) stores device activity logs for up to a week, including:
Methods used to unlock a device
Changes in passcode settings
AirDrop activity, including files sent

Query for aggregate_dictionary_scalars:
https://github.com/mac4n6/APOLLO/blob/master/modules/aggregate_dictionary_scalars.txt

Frequent and Significant Locations

Apple devices track Frequent Locations under System Services. These logs store routine location data to assist with features like traffic predictions. While all location data is tracked, only the most frequently visited places appear in the user-facing settings.

Find this setting under: Settings → Privacy → Location Services → System Services → Significant Locations

-------------------------------------------------------------------------------------------------------------

Significant Locations (macOS 10.13+)

Where is the Data Stored?

On macOS 10.13 and newer, significant location data is stored in:
/private/var/folders/.../com.apple.routined/Cache/ (macOS - appears encrypted)

Key Databases

The following databases contain location information:
Cloud[-V2].sqlite – Stores long-term visit records
Cache.sqlite – Holds granular location data for approximately one week
Local.sqlite – Another data store, though its specific purpose may vary

A major change in iOS 11 introduced a new format for storing routine location data, making analysis different from previous versions.

Wi-Fi Location Data (locationd)

Apart from significant locations, macOS also tracks cellular and Wi-Fi access points.
These records can be found in:

Wi-Fi Location Data

Wi-Fi-related data is stored in:
macOS: /private/var/folders/zz/.../cache_encryptedA.db
Retention: ~4 days
Stored Information: Timestamp, MAC address, channel, and location coordinates

Wi-Fi location tracking works in the background, meaning the user does not need to connect to an access point for their device to log nearby Wi-Fi networks.

-------------------------------------------------------------------------------------------------------------

I have shown you the queries one by one above, but you can also run the APOLLO tool once and get all of the output together. Let’s understand how.

-------------------------------------------------------------------------------------------------------------

APOLLO (Apple Pattern of Life Lazy Output’er)

APOLLO is a powerful open-source tool designed to analyze Apple’s pattern-of-life data:
Easy SQL-based analysis for various Apple devices and OS versions
Works with multiple platforms, including iOS, macOS, Android, and Windows
Fast correlation of location data for forensic investigations

📌 GitHub Repository: https://github.com/mac4n6/APOLLO

First clone the repository, then install simplekml; once that is done, run the command below:

python3 apollo.py extract -o sql -p apple -v 11 -k ./module /

-------------------------------------------------------------------------------------------------------------

Other Useful Forensic Tools

Apart from APOLLO, several other tools can assist in extracting and analyzing iOS and macOS location data:
iLEAPP (iOS Logs, Events, and Properties Parser) – Open-source tool for iOS forensics (GitHub)
Magnet Axiom – Commercial tool for mobile and computer forensics
Cellebrite Physical Analyzer & Inspector – Industry-standard tools for mobile device analysis

-------------------------------------------------------------------------------------------------------------

Final Thoughts

Understanding Apple’s significant location data and how it is stored can provide critical insights during forensic investigations. With the right tools, investigators can extract granular movement data, identify key locations, and correlate cellular and Wi-Fi records to build a comprehensive timeline of device activity.

As Apple continues to update its security and data encryption methods, forensic experts must stay current with the latest tools and methodologies to ensure efficient and accurate analysis.

----------------------------------------Dean---------------------------------------------------------
- Analyzing Safari Browser, Apple Mail Data, and Recents Database Artifacts on macOS
Safari, the default web browser for Apple devices, leaves behind various artifacts that can be useful for forensic analysis. These artifacts store information such as browsing history, session details, cached files, and thumbnails of visited websites. Understanding where and how Safari stores data on macOS can help investigators retrieve valuable insights.

Key Safari Data Locations

Safari stores different types of data across macOS. Below are the primary locations where forensic artifacts can be found:

macOS Locations:
~/Library/Safari/
~/Library/Containers/com.apple.Safari/

These directories contain various types of browser-related data, including:
Browsing history
Cache files
Session information
Tab snapshots and thumbnails
Downloaded files
Cookies

------------------------------------------------------------------------------------------------------

Safari Browsing History

Safari tracks user browsing activity in a SQLite database file called History.db (an example query for this database appears a little further below):

macOS: ~/Library/Safari/History.db

Retention Period:
macOS: Stores history for up to a year (default) but can be configured to retain data for a shorter period (one month, two weeks, one week, or one day).

Tracking iCloud-Synced Browsing Activity

If Safari history is synced across devices using iCloud, the origin field in History.db indicates where the page was visited:
0 – Visited on this device
1 – Visited on another iCloud-connected device

Two primary tables within History.db store crucial data:
history_items – Stores URLs, domains, and visit counts.
history_visits – Contains visit timestamps (Mac epoch format) and webpage titles.

------------------------------------------------------------------------------------------------------

Safari Session Data

Safari maintains session-related information that helps reconstruct a user’s last browsing session.

macOS: LastSession.plist (~/Library/Safari/LastSession.plist)
Stores tab history in a binary plist format. If unencrypted, it contains tab identifiers, webpage titles, and URLs.

----------------------------------------------------------------------------------------------------------

Safari Thumbnails and Snapshots

Safari captures tab snapshots and thumbnails to provide a visual representation of open webpages.

macOS Snapshots: ~/Library/Containers/com.apple.Safari/Data/Library/Caches/com.apple.Safari/TabSnapshots/Metadata.db
Stores cached tab screenshots along with metadata. Each snapshot has a UUID, which links it to its associated screenshot file.

-------------------------------------------------------------------------------------------------------------

Cloud-Synced Safari Tabs

Safari allows users to sync their open tabs across multiple Apple devices via iCloud. The CloudTabs.db file stores this information.

macOS: ~/Library/Safari/CloudTabs.db

Each record in this database includes:
Hostname of the device where the tab is open
A list of currently open non-private tabs
A last-modification timestamp indicating when the tab data was last updated

Additional metadata about synced tabs can be found in:
macOS: ~/Library/Containers/com.apple.Safari/Data/Library/Preferences/ByHost/com.apple.Safari..plist
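Most of these databases use Mac-epoch timestamps, so a little SQL goes a long way. As a sketch (assuming a working copy of History.db on an analysis machine; the table and column names are from the commonly documented schema and may shift between Safari versions):

sqlite3 History.db "SELECT datetime(v.visit_time + 978307200, 'unixepoch') AS visit_time_utc, i.url, v.title FROM history_visits v JOIN history_items i ON i.id = v.history_item ORDER BY v.visit_time DESC LIMIT 25;"

This lists the 25 most recent visits with human-readable UTC timestamps, joining each visit record back to its URL.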
-------------------------------------------------------------------------------------------------------------

Safari Cache and Cached Data

Cached data can provide insights into previously visited web pages, even if they are no longer stored in history.

Cache Database Location:
macOS: ~/Library/Containers/com.apple.Safari/Data/Library/Caches/com.apple.Safari/

The Cache.db SQLite database holds cached files and metadata:
cfurl_cache_response table – Stores cache metadata, including URL and timestamps.
cfurl_cache_receiver_data table – Contains the actual cached files.

While newer macOS versions store less cache in Cache.db, more recent data may be available in the WebKit cache system.

-------------------------------------------------------------------------------------------------------------

Safari WebKit Cache: Understanding Cached Data

Safari uses the WebKit cache to store cached website data:

macOS: ~/Library/Containers/com.apple.Safari/Data/Library/Caches/com.apple.Safari/WebKitCache/

The WebKit cache contains different types of data, including:
Records directory: Stores cached data for each website visit.
SubResources: Contains a list of cached items linked to a specific website visit.
Resources directory: Stores actual cached content such as images, scripts, and HTML pages.
Blobs directory: Stores additional cached media files that are too large to fit in a single resource file.

Correlation of Cached Files

All WebKit cached items can be correlated using 20-byte SHA1 hash filenames. For example, if a user visits cyberengage.org to view an article, the SubResources file will list cached items such as images and scripts. These cached items can be matched to their corresponding data in the Resources directory using the embedded SHA1 hashes.

-------------------------------------------------------------------------------------------------------------

Key Safari Browser Artifacts for Investigation

Beyond cached data, Safari stores valuable information in several key files:

1. Safari Configuration and Recent Searches
File: com.apple.Safari.plist (~/Library/Preferences/)
Contents: Stores Safari’s configuration settings and a list of recent searches performed by the user.

2. Cookies Storage
File: Cookies.binarycookies
Contents: Stores cookies in a proprietary binary format.
Parsing Tools: Open-source scripts like Safari-Binary-Cookie-Parser can be used to extract cookie data.
Note: Other applications using WebKit (such as Twitter’s in-app browser) may also store cookies in a similar manner.

3. Bookmarks and Browsing History
macOS: Bookmarks.plist (/Users/deanwinchester/Library/Safari)
Recently Closed Tabs: Stored in RecentlyClosedTabs.plist, which keeps track of tabs recently closed by the user.

4. Download History
Safari keeps a record of downloaded files in Downloads.plist (/Users/deanwinchester/Library/Safari), but this data is automatically deleted after one day by default.

macOS Download History Details:
DownloadEntryIdentifier: Unique identifier for each download.
DownloadEntryURL: URL the file was downloaded from.
DownloadEntryPath: File location on the system (usually in ~/Downloads).
Timestamps: Records the start and completion time of the download.
DownloadEntryBookmark: Bookmark BLOB.
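Since most of these artifacts are binary plists, plutil makes for quick triage on an analysis Mac (shown against the default live paths; point it at your exported copies instead when working from an image):

plutil -p ~/Library/Safari/Downloads.plist
plutil -p ~/Library/Safari/RecentlyClosedTabs.plist

Remember that Downloads.plist is purged after a day by default, so an empty file is not necessarily suspicious.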
-----------------------------------------------------------------------------------------------------------------

Apple Mail

Apple Mail, the default email client for macOS, stores a wealth of information about email accounts, messages, and attachments.

Apple Mail Data Locations

macOS Mail Storage

On macOS, Apple Mail data is stored in the following locations:
~/Library/Mail/ – Primary storage for all email messages and metadata.
~/Library/Containers/com.apple.mail/ – Contains additional application-specific data.

Each version of macOS assigns a version number to its Mail directory:
macOS 10.13 – V5
macOS 10.14 – V6
macOS 10.15 – V7, and so on.

Each email account has a dedicated GUID directory, which can be correlated using the Accounts3.sqlite or Accounts4.sqlite databases.

Types of Apple Mail Data

Apple Mail stores various types of data that can provide insights into email activity:
Accounts – Information about configured email accounts.
Cached Messages and Attachments – Locally stored copies of emails and their attachments.
Envelope Index – A database containing metadata about emails.
Mail Downloads – Attachments saved by the user.

Understanding Mailbox Structures

Each email account has multiple mailboxes corresponding to different folders:
Inbox
Sent Messages
Drafts
Deleted Messages
Junk
Notes
User-created mailboxes

Mailboxes are stored as .mbox directories within the user’s Mail directory.
Example paths: ~/Library/Mail/V#/GUID/Inbox.mbox, Sent Messages.mbox, etc.
The .mboxCache.plist file contains details about mailbox organization.
Email messages are stored as .emlx files within the Messages directory inside .mbox folders.

Email Messages and Attachments

Apple Mail stores individual emails as .emlx files, which contain:
Plaintext email headers and body content.
An embedded property list with metadata.

Attachments are handled in two ways:
Quick Look viewing: Temporarily stored in ~/Library/Mail Downloads/ or ~/Library/Containers/com.apple.mail/Data/Library/Mail Downloads/.
Saved attachments: Stored in the ~/Downloads directory.

Metadata for downloaded attachments includes extended attributes, such as quarantine information, which tracks when and how a file was downloaded.

Envelope Index: The Email Metadata Database

The Envelope Index SQLite database (found in MailData) indexes Apple Mail messages and includes:
Addresses table: Stores indexed email addresses and contact names.
Attachments table: Lists email attachments.
Mailboxes table: Stores mailbox details, including message counts.
Messages table: Contains metadata such as sender, recipient, subject, timestamps, and read status.
Subjects table: Stores email subjects.
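As a sketch of how those tables fit together (run from a directory holding an exported copy of the Envelope Index file; the schema drifts between macOS versions, so verify the column names, and note that these timestamps are typically Unix epoch rather than the Mac epoch used elsewhere in this article):

sqlite3 "Envelope Index" "SELECT datetime(m.date_received, 'unixepoch') AS received, a.address AS sender, s.subject FROM messages m LEFT JOIN addresses a ON a.ROWID = m.sender LEFT JOIN subjects s ON s.ROWID = m.subject ORDER BY m.date_received DESC LIMIT 20;"

This pulls the 20 most recently received messages with the sender and subject resolved through the addresses and subjects lookup tables.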
-------------------------------------------------------------------------------------------------------------------

Recents Database

Apple devices store a wealth of user interaction data to enhance user experience and functionality. One such data source is the Recents database, which keeps track of recent interactions across various applications, including email, phone calls, and messages. This data can be valuable for both forensic investigations and general system understanding.

Where is the Recents Database Stored?

macOS: ~/Library/Containers/com.apple.corerecents.recentsd/Data/Library/Recents/

What Information Does the Recents Database Contain?

The Recents database logs interactions with various applications, helping track recent activities such as:
Associated applications: Identifies which app (Mail, Messages, Phone, etc.) was used.
Contacts & locations: Stores recent interactions with contacts or locations.
Timestamps: Logs the last few instances of communication or activity.
Additional metadata: Stores various keys and values related to interactions.

------------------------------------------------------------------------------------------------------------------

Conclusion

Safari, Apple Mail, and the Recents database store a vast amount of information that can be crucial in forensic investigations. From cached web pages to download history and email metadata, analyzing these artifacts can provide valuable insights. By understanding where and how this data is stored, forensic experts can uncover hidden user activity, track browsing habits, and retrieve valuable evidence during investigations.

---------------------------------------Dean--------------------------------------------------
- Understanding macOS App Preference Files, Most Recently Used (MRU) Files, Shared File Lists, and Account Artifacts for Digital Forensics
When analyzing applications on macOS, understanding where configuration files, databases, and caches are stored is crucial. These files can provide insights into user activity, preferences, and even location data.

Application Configuration Files

Application configuration files store essential settings, preferences, and permissions. These are typically found in .plist files, which use the reverse DNS format (e.g., net.whatsapp.WhatsApp.plist).

Locations for configuration files: (~ means the user’s home directory)
macOS:
~/Library/Preferences/
~/Library/Containers/...//.../Preferences/

These files store user-defined settings for applications, making them an essential resource in forensic investigations.

------------------------------------------------------------------------------------------------------------

App Databases and Other Files

Many applications store user-generated data, logs, and proprietary files in SQLite databases or other structured file formats.

Locations for app databases:
macOS:
~/Library/
~/Library/Application Support/
~/Library/Containers/...

These databases often contain crucial data such as messages, login details, and activity logs, depending on the application.

------------------------------------------------------------------------------------------------------------

Application Cache Files

Caches store temporary data to improve app performance. Although they are less persistent, they can sometimes hold valuable forensic evidence.

Locations for cache files:
macOS:
~/Library/Caches/
~/Library/Containers/...//.../Cache/

Some applications use caches to store location-related data. A good example is the Cache.db file in the Spotlight app.

-------------------------------------------------------------------------------------------------------------

Application Transparency, Consent, and Control (TCC)

Applications on macOS require user permission to access system resources like the camera, microphone, and location. These permissions are stored in the TCC.db SQLite database.

TCC Database Locations:
User-level: ~/Library/Application Support/com.apple.TCC/TCC.db
System-level: /Library/Application Support/com.apple.TCC/TCC.db

macOS Privacy Settings: TCC.db Analysis

For macOS 11 and later, the auth_value column replaces the older allowed column:
0 = Not allowed
2 = Allowed

Here you can find records of applications that have been granted access to system files (a quick sqlite3 example follows below).

Location Services Authorization (clients.plist)

Location permissions for applications are stored in clients.plist. On macOS, this file is found at:
/private/var/db/locationd/clients.plist

This file tracks which apps have requested location access, making it useful in forensic investigations involving location data.
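For quick triage, the user-level TCC database can be dumped with sqlite3 (a sketch; on a live system, reading TCC.db itself requires Full Disk Access, and on macOS 10.x you would query the older allowed column instead of auth_value):

sqlite3 ~/Library/Application\ Support/com.apple.TCC/TCC.db "SELECT service, client, auth_value, datetime(last_modified, 'unixepoch') AS modified FROM access;"

The access table’s last_modified column is a plain Unix timestamp, which makes it handy for timelining permission grants.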
-------------------------------------------------------------------------------------------------------------

Most Recently Used (MRU) Files

When investigating macOS forensics, understanding the Most Recently Used (MRU) files and Shared File Lists (SFL) is essential. These artifacts provide valuable insights into user activity, such as recently opened documents, accessed folders, and used applications.

1. Microsoft Office 365 MRU Storage

Microsoft Office 365 applications maintain their own MRU lists in a specific location:
Location: ~/Library/Containers/com.microsoft./Data/Library/Preferences/com.microsoft..securebookmarks.plist
Structure: Each Office application has a separate plist file containing MRUs.
Data stored:
Document paths
Bookmark data (BLOBs)
Unique identifiers (UUIDs)
Last accessed timestamps

Unlike native macOS MRUs, which typically retain only the last 10 items, Microsoft Office applications store significantly more historical data.

2. macOS Finder Recent Folders

Finder keeps track of recently accessed folders within a specific plist file:
Location: ~/Library/Preferences/com.apple.finder.plist
Key: FXRecentFolders
Structure:
The plist contains folder names and bookmark BLOBs.
The first entry (Item 0) is the most recent, while Item 9 is the oldest.
The GUI order may differ from the plist contents, making direct plist analysis more accurate.

3. Application-Specific Recent Documents

macOS applications store recent document lists using the Shared File List (SFL) format.
Location: ~/Library/Application Support/com.apple.sharedfilelist/com.apple.LSSharedFileList.ApplicationRecentDocuments/
File format: .sfl2
Data stored: Recently accessed documents per application
Examples:
com.apple.LSSharedFileList.RecentApplications.sfl2
com.apple.LSSharedFileList.RecentDocuments.sfl2
com.apple.LSSharedFileList.RecentHosts.sfl2
com.apple.LSSharedFileList.RecentServers.sfl2

4. Understanding NSKeyedArchiver Binary Plist Files

Shared File List (SFL) files use the NSKeyedArchiver format, which is a binary plist structure. These files store serialized data, making them slightly more complex to parse.

Key characteristics:
File extension: .sfl or .sfl2 (since macOS High Sierra 10.13)
Stored data:
$version
$objects
$archiver (value: NSKeyedArchiver)
$top (root of the plist structure)

Parsing binary plists: Forensic analysts can use the plutil command:
plutil -p <file.sfl2>
This converts the binary plist into more readable JSON-style output.

5. Extracting Bookmark Data

Bookmarks in macOS serve as references to files or directories, similar to Windows LNK files. The bookmark data starts with the book (0x626F6F6B) header and contains:
File path information
Volume name (e.g., Macintosh HD)
Volume GUID (Globally Unique Identifier)

MacMRU Python script (to run on a live system or mounted image):
https://github.com/mac4n6/macMRU-Parser

-------------------------------------------------------------------------------------------------------

Account Artifacts

Where macOS Stores Account Information

macOS stores account configurations in SQLite databases and plist (property list) files. These files store details about email, calendar, and other connected services.

Account Databases:
macOS 10.11: ~/Library/Accounts/Accounts3.sqlite
macOS 10.12+: ~/Library/Accounts/Accounts4.sqlite

Quick Triage with Plist Files

For a quick analysis, investigators can check the com.apple.accounts.exists.plist file located at:
/preferences/SystemConfiguration/ (accessible via backups, file system extractions, or physical images)

This plist file provides an overview of the types of accounts configured on a device. It contains two key values:
Exists: Indicates whether an account type (e.g., Google, Exchange, iCloud) is present.
Count: Shows the number of accounts for a particular type.

-------------------------------------------------------------------------------------------------------------

Exploring the Accounts3.sqlite & Accounts4.sqlite Databases

These SQLite databases track user-configured accounts and store credentials, descriptions, and identifiers. Investigators can extract useful information from the following tables:

1. ZACCOUNTTYPE Table
This table contains the types of accounts configured on the device. Important fields include:
Z_PK: Primary key (identification number for each account type)
ZACCOUNTTYPEDESCRIPTION: Description of the account type (e.g., Google, Exchange, iCloud)

2. ZACCOUNT Table
This table stores details for individual user accounts. Key fields include:
Z_PK: Primary key
ZUSERNAME: Account username
ZACCOUNTDESCRIPTION: More specific account description
ZPARENTACCOUNT: Parent account type (if applicable)
ZDATE: Timestamp of account creation (Mac epoch format)
ZIDENTIFIER: Globally unique identifier (GUID) for the account
ZKEY & ZVALUE: Configuration key-value pairs (e.g., email servers, ports, authentication settings)
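Putting the two tables together (a sketch; I am assuming ZACCOUNT carries a ZACCOUNTTYPE foreign key pointing at ZACCOUNTTYPE.Z_PK, which holds for the versions I have seen, but Core Data schemas do drift):

sqlite3 ~/Library/Accounts/Accounts4.sqlite "SELECT t.ZACCOUNTTYPEDESCRIPTION AS type, a.ZUSERNAME, a.ZACCOUNTDESCRIPTION, datetime(a.ZDATE + 978307200, 'unixepoch') AS created FROM ZACCOUNT a LEFT JOIN ZACCOUNTTYPE t ON t.Z_PK = a.ZACCOUNTTYPE;"

Note the familiar Mac-epoch conversion on ZDATE.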
-------------------------------------------------------------------------------------------------------------

Forensic Tools for Analyzing macOS

On a macOS analysis host, forensic examiners can use:
sqlite3 / SQLite viewers – Database inspection
Xcode – Viewing plist files
Virtual machines – Controlled analysis environments

-------------------------------------------------------------------------------------------------------------

Conclusion

Understanding how macOS stores data is crucial for digital forensic investigations. By analyzing SQLite databases and plist files, investigators can uncover valuable details about user accounts, authentication methods, and linked services. With the right tools and techniques, forensic professionals can extract and interpret this information effectively, aiding in cybercrime investigations and incident response.

----------------------------------------------Dean--------------------------------------------------
- macOS: Tracking User Activity, Autoruns, the Application-Level Firewall, and Forensic Insights
When investigating a macOS system, understanding user accounts, logins, privilege escalations, and screen activity is crucial. Whether you're a forensic analyst, IT administrator, or cybersecurity enthusiast, knowing where to look can make all the difference.

-------------------------------------------------------------------------------------------------------------

🔍 Where Are User Accounts Stored?

User accounts and related settings are stored in plist files, which are the backbone of macOS configurations. Key locations include:
/private/var/db/dslocal/nodes/Default/users/.plist → Stores detailed user account info.
/Library/Preferences/com.apple.preferences.accounts.plist → Contains system-wide account preferences.
/Library/Preferences/com.apple.loginwindow.plist → Tracks login window settings and user preferences.

These files can help identify active, deleted, or even hidden user accounts on a system.

-------------------------------------------------------------------------------------------------------------

🏠 User Logins & Logouts: Who’s Been Using the System?

Tracking user sessions helps determine who has accessed the system and when. macOS users can log in through multiple methods:
Login Window → The standard graphical login.
Local Terminal → Using the built-in Terminal.
SSH → Remote access via OpenSSH.
Screen Sharing → Apple’s built-in VNC solution.

🔹 How to Find Login & Logout Events

Each login process is labeled USER_PROCESS, and logouts are marked DEAD_PROCESS. These events are logged in system.log, Apple System Logs (ASL), and Unified Logs.

Examples:

GUI Login (system.log and ASL; also BSM):
Feb 22 15:02:47 Mac loginwindow[95]: USER_PROCESS: 95 console

Terminal Login (10.12 system.log and Unified):
Feb 22 15:29:37 Deans-Mac login[1860]: USER_PROCESS: 1860 ttys000

SSH Login (10.12 system.log and Unified):
Feb 22 16:29:37 sshd[1831]: USER_PROCESS: 842 ttys002

Screen Sharing (Unified):
screensharingd: Authentication: SUCCEEDED :: User Name: deanwinchester :: Viewer Address: 192.168.1.1

By analyzing these logs, you can determine whether an unauthorized user accessed the system remotely or via screen sharing.

-------------------------------------------------------------------------------------------------------------

🔓 macOS Screen Unlock Events

Even if a user is already logged in, it’s useful to track whether the screen was locked or unlocked. This can indicate when someone was actively using the system.

🔹 Find Screen Lock & Unlock Events

Use the following command:
log show --predicate 'eventMessage contains "com.apple.sessionagent.screenIs"'

Locked screen: com.apple.sessionagent.screenIsLocked
Unlocked screen: com.apple.sessionagent.screenIsUnlocked

🔹 How Was the System Unlocked?

While knowing whether the screen was locked or unlocked is good, sometimes you want to know how the system was unlocked. macOS logs the unlock method, and tracking these entries helps confirm whether the legitimate user accessed the system or someone bypassed authentication.
We can use the query below:

log show --predicate 'eventMessage contains "LWScreenLockAuthentication" and (eventMessage contains "| Verifying" or eventMessage contains "| Using")'

Regular password:
• "Verifying using PAM configuration screensaver"

Touch ID:
• "Using localAuthentication hints"
• "Using hint-provided username"
• "Verifying using PAM configuration screensaver_la"

Auto Unlock with Apple Watch:
• "Using continuity hints"
• "Using hint-provided username"
• "Verifying using PAM configuration screensaver_aks"

-------------------------------------------------------------------------------------------------------------

🔥 Privilege Escalation: sudo & su Commands

Privilege escalation is a key indicator of potential misuse or malicious activity. The sudo and su commands allow users to execute root-level actions.

🔹 How to Detect Privilege Escalation

Use this command to filter the logs:
log show --predicate '(process == "su" or process == "sudo") and eventMessage contains "tty"'

🔹 What to Look For
Terminal window used
Current directory
User account performing the action
Command executed

--------------------------------------------------------------------------------------------------------------------

Ever wondered why some applications launch automatically when you start your Mac?

What Are Autoruns?

Autoruns are mechanisms that allow applications and services to start automatically when you boot up your Mac or log in. While legitimate applications use these to enhance user experience (like iCloud syncing or antivirus tools), malicious software can exploit them to maintain persistence on your machine.

Common Autorun Locations on macOS

1. Login Items (macOS 10.13+)

Login Items are programs that launch when a user logs into the system via the graphical interface (GUI). These can be managed through System Preferences > Users & Groups > Login Items, but not all of them are visible there. Some are stored in system files, making them harder to detect.

📂 Where to find them?
~/Library/Application Support/com.apple.backgroundtaskmanagementagent/backgrounditems.btm (user)
.app/Contents/Library/LoginItems/

💡 Did you know? Login Items are similar to the Windows HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run registry key!

2. Launch Agents (For Users)

Launch Agents are background processes that start when a user logs in. They can interact with the user session and sometimes have a graphical interface.

📂 Where to find them?
/System/Library/LaunchAgents/
/Library/LaunchAgents/
~/Library/LaunchAgents/

🚨 Red flag: Unusual or unknown files in these directories can be a sign of malware!

3. Launch Daemons (For System-Wide Services)

Launch Daemons are similar to Launch Agents but run at the system level, meaning they start before any user logs in and do not interact directly with the user.

📂 Where to find them?
/System/Library/LaunchDaemons/
/Library/LaunchDaemons/

💡 Fun fact: Apple’s periodic maintenance scripts, which clean logs and optimize system performance, run as Launch Daemons!

-------------------------------------------------------------------------------------------------------------------------

How Attackers Exploit Autoruns

Malware authors love autoruns because they enable persistent infections. Some common techniques include:
Placing malicious files in LaunchAgents or LaunchDaemons.
Using hidden login items that don’t appear in System Preferences.
Modifying existing system files to automatically execute malware.
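Before reaching for a dedicated tool, a quick manual sweep of the persistence directories listed above costs nothing (the plist name in the second command is a hypothetical example):

ls -la /Library/LaunchAgents/ /Library/LaunchDaemons/ ~/Library/LaunchAgents/
plutil -p ~/Library/LaunchAgents/com.example.suspicious.plist

Sort by modification time and eyeball anything you cannot tie to software you know is installed; the ProgramArguments key in each plist tells you exactly what gets executed.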
🔎 How to Detect Suspicious Autoruns?

One of the best tools to inspect autoruns on macOS is KnockKnock by Objective-See. It scans your system for persistently installed software, helping you identify unwanted or malicious programs.

👉 Download KnockKnock

-----------------------------------------------------------------------------------------------------------------------------

Application-Level Firewall (ALF): Your First Line of Defense

Unlike traditional firewalls that manage network traffic at the packet level, macOS uses an Application-Level Firewall (ALF) to control inbound connections for specific applications. ALF determines which apps can receive incoming connections based on their identity.

How to Access and Configure ALF

Go to System Settings: Navigate to System Preferences > Security & Privacy > Firewall.
Enable the firewall: If it’s not turned on, click Turn On Firewall.
Customize firewall options: Click Firewall Options to fine-tune the settings. Here, you’ll see:
Allow signed software – Lets macOS automatically allow incoming connections for trusted applications.
Enable Stealth Mode – Prevents your device from responding to network probes like ping requests, making it less detectable online.
Manually configure app access – Choose which applications can or cannot accept incoming connections.

Under the Hood: The ALF Configuration File

For those who like to dig deeper, ALF’s settings are stored in a property list file located at:
/Library/Preferences/com.apple.alf.plist

Here are some key parameters:
globalstate: 1 = firewall enabled, 0 = firewall disabled
allowsignedenabled: 1 = allow signed software, 0 = block all by default
stealthenabled: 1 = stealth mode on, 0 = stealth mode off

If you’re a power user, you can inspect or tweak these settings using the plutil command in Terminal.

-----------------------------------------------------------------------------------------------------------------------------

Final Thoughts

macOS hides a wealth of forensic data. Whether you're a security professional, a digital forensic analyst, or just a power user, understanding these artifacts can give you a deeper grasp of what’s happening under the hood. 🚀

-----------------------------------------------------------------------------------------------------------------------------
- macOS System Artifacts: macOS Finder, GUI Configurations, Time Changes, Bluetooth, Printing, and Sharing
macOS Finder Preferences

Location: ~/Library/Preferences/com.apple.finder.plist

Finder is the macOS equivalent of Windows Explorer, providing access to files, directories, applications, and networks. The Finder sidebar is customizable and includes:
Favorites: Displays user directories like Documents, Downloads, Pictures, and Music.
Locations: Shows mounted drives such as Macintosh HD, USBs, and disk images (DMGs).

The com.apple.finder.plist file stores various user preferences, such as:
✅ Showing mounted servers and hard drives
✅ Column view preferences
✅ Secure empty trash settings
✅ X and Y coordinates of GUI elements

These settings provide insight into a user’s workflow, such as frequently accessed directories and how organized they are.

-------------------------------------------------------------------------------------------------------------

Saved Application State – Reopen Apps After Restart

Locations:
📌 Legacy macOS: ~/Library/Saved Application State/
📌 Sandboxed apps: ~/Library/Containers//Data/Library/Application Support//Saved Application State/

Since macOS 10.7, the "resume" feature allows applications to reopen exactly as they were before a system reboot or app exit. Each app’s saved state is stored in a dedicated folder named .savedState, containing:
windows.plist (holds window names and positions)
data.data (encrypted session data, including opened files, URLs, and commands)

Forensic analysts often examine windows.plist files to uncover recently accessed files, websites visited in Safari, or commands executed in Terminal (e.g., sudo, ssh). Even Microsoft Office keeps a history of opened documents here!

-------------------------------------------------------------------------------------------------------------

Understanding macOS Time & Date Settings

macOS stores time zone and system time preferences in multiple files:
🕒 /etc/localtime – System-wide time zone setting (a symlink)
🕒 .GlobalPreferences.plist – Stores user-specific time settings
🕒 com.apple.timezone.auto.plist – Controls automatic time zone detection

These files help macOS maintain accurate timestamps for file modifications, notifications, and system events.

-------------------------------------------------------------------------------------------------------------

Time Changes & Location-Based Adjustments

Ever wondered how macOS adjusts time zones when you travel? The system relies on location services and network lookups to update time settings automatically.

How it works:
macOS uses location services (if enabled) and network-based lookups to detect where the system is and adjusts the time zone accordingly.
The first network connection after traveling triggers a time zone update.
Time zone settings are stored in com.apple.timezone.auto.plist at /Library/Preferences/.
If location services are disabled, macOS relies on timestamps from system logs (/var/log/*) and the symlink at /etc/localtime.

How to investigate:
Look for logs mentioning "locationd" (the location services daemon) or "timezoned" (the time zone daemon).
Analyze system log timestamps for sudden jumps.
Check /etc/localtime symlink updates, which indicate manual time zone changes.
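A quick way to see which zone a system (or a mounted image) is set to is to read that symlink directly; the output shown is illustrative:

ls -l /etc/localtime
# e.g. /etc/localtime -> /var/db/timezone/zoneinfo/Europe/London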
-------------------------------------------------------------------------------------------------------------

Tracking Bluetooth Devices on macOS

Bluetooth activity can be a goldmine for forensic investigations, revealing which devices were connected and when.

Where macOS Stores Bluetooth Data:
User-specific devices: ~/Library/Preferences/ByHost/com.apple.Bluetooth..plist
System-wide devices: /Library/Preferences/com.apple.Bluetooth.plist (organized by Bluetooth MAC address)

Understanding Timestamps:
Last used time: Found under RecentDevices (user) or LastQueryUpdate & LastServicesUpdate (system).
First used time: LastNameUpdate (system). However, renaming a device (e.g., AirPods) can reset this timestamp.

Logs to check: bluetoothd logs provide additional details.

Forensic considerations:
Devices can be removed from the cache, making real-time analysis crucial.
Apple ecosystem devices (AirPods, iPads, etc.) may connect automatically through Continuity, even if never manually paired.

-------------------------------------------------------------------------------------------------------------

macOS Printing Artifacts: What’s Left Behind?

Every print job leaves digital footprints in multiple locations on a macOS system.

Key files to examine:
Printer settings: /Library/Preferences/org.cups.printers.plist
Printer configurations: /etc/cups/printers.conf and /etc/cups/ppd/ (PPD files store printer capabilities)

Print job metadata: Stored in /private/var/spool/cups/
Print control files (c#####) contain:
Printer name
User account that printed the job
Job name (file/document title)
Application used (e.g., Safari, Word)
Print data files (d#####-001) store the actual content (usually as PDFs).

-------------------------------------------------------------------------------------------------------------

macOS Sharing Preferences: What’s Accessible?

Sharing settings determine what resources are accessible on a Mac. Even if features are currently disabled, historical data can reveal past configurations.

Where to look:

Main settings file: /private/var/db/com.apple.xpc.launchd/disabled.plist
1 (true) = service disabled
0 (false) = service enabled

Important bundle IDs (if the bundle ID for a service does not appear in this list at all, the service was likely never toggled and therefore never enabled):
com.apple.screensharing → Screen Sharing
com.openssh.sshd → Remote Login (SSH)

File Sharing Data:
Located in /private/var/db/dslocal/nodes/Default/sharepoints/
Shows shared folders, permissions, and network access settings; each shared folder (for example, a folder named "test") appears as its own entry.
Look for services like com.apple.smbd (SMB file sharing) or com.apple.AppleFileServer (AFP file sharing).

Forensic takeaways:
Even if a service is currently disabled, historical configurations may indicate past activity.
Files shared over the network might still be accessible through logs or cached settings.
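These toggles are easy to check with plutil, as elsewhere in this article (run against the live file with sudo, or against a copy exported from an image):

sudo plutil -p /private/var/db/com.apple.xpc.launchd/disabled.plist

Match the output against the bundle IDs above: true means explicitly disabled, false means enabled, and a missing key suggests the service was never toggled.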
-------------------------------------------------------------------------------------------------------------

Understanding macOS Screen Sharing

macOS comes with a built-in Screen Sharing application that allows users to remotely access another Mac using the VNC (Virtual Network Computing) protocol. Unlike regular applications found in /Applications, this utility is tucked away in /System/Library/CoreServices/Applications. It can be incredibly useful for troubleshooting, remote assistance, or managing multiple machines.

When a user enables Screen Sharing or Remote Management in the Sharing preferences pane, macOS generates a file called com.apple.RemoteManagement.plist in /Library/Preferences/. This file stores configuration settings that determine how remote connections are handled.

-------------------------------------------------------------------------------------------------------------

VNC Access and Credentials

If VNC access is enabled, another important file comes into play:
/Library/Preferences/com.apple.VNCSettings.txt

This file contains an XOR-encrypted password used for VNC authentication.

Script to recover the password:

cat com.apple.VNCSettings.txt | perl -wne 'BEGIN { @k = unpack "C*", pack "H*", "1734516E8BA8C5E2FF1C39567390ADCA"}; chomp; @p = unpack "C*", pack "H*", $_; foreach (@k) { printf "%c", $_ ^ (shift @p || 0) }; print "\n"'

-------------------------------------------------------------------------------------------------------------

Tracking SSH Connections: The known_hosts File

For users who prefer command-line remote access, macOS also supports SSH (Secure Shell). Outbound SSH connections are recorded in ~/.ssh/known_hosts (while inbound key-based access is governed by ~/.ssh/authorized_keys). The known_hosts file logs previously accessed remote machines using a combination of IP addresses, hostnames, and public keys.

However, if the HashKnownHosts setting is enabled in /etc/ssh/ssh_config, this data is stored in a hashed format, making it difficult to recover the original hostname or IP address.

-------------------------------------------------------------------------------------------------------------

Terminal Command History: The Hidden Treasure

macOS keeps track of commands executed in the Terminal through history files stored in each user’s home directory:
~/.bash_history (for older macOS versions and users still using bash)
~/.zsh_history (default shell starting from macOS Catalina 10.15)

These plaintext files log user-entered commands, which can provide valuable insight into:
Applications and scripts the user executed
Privilege escalation attempts (e.g., sudo usage)
Accessed files, directories, and mounted volumes
Remote systems or networks the user interacted with

Key considerations:
The history file is not updated in real time; it is only written when the shell session ends.
Commands lack timestamps.

Live response tip: You can view an active session’s history using the command:
history
This displays the command history for the currently logged-in user across open Terminal windows.

-------------------------------------------------------------------------------------------------------------

Session-Based History: ~/.zsh_sessions/.history

With the introduction of zsh in macOS 10.15, Apple also added session-based history under ~/.zsh_sessions/. Each session gets a unique GUID-based history file containing the executed commands. Unlike .zsh_history, these session files carry useful file-system timestamps, and they are only deleted after a few weeks. However, like .zsh_history, they are only written when the Terminal session is closed.

-------------------------------------------------------------------------------------------------------------

Final Thoughts

macOS hides a wealth of forensic data in plain sight. Whether you're a security professional, a digital forensic analyst, or just a power user, understanding these artifacts can give you a deeper grasp of what’s happening under the hood. 🚀

--------------------------------------------------Dean------------------------------------------------------





