
  • Volatility Plugins — Plugin windows.ssdt Let’s Talk About it

    Now we’re stepping into kernel territory. And once malware gets here, things get serious. One of the biggest wins for kernel malware is SSDT hooking. If you understand this, you understand how rootkits control the entire system.

What Is the SSDT
The System Service Descriptor Table (SSDT) is basically a lookup table used by the Windows kernel. When a process asks Windows to do something like open a file, read registry data, enumerate processes, or allocate memory, the kernel looks up the corresponding system function in the SSDT and jumps to that code. Each SSDT entry is just a pointer to kernel code.

How SSDT Hooking Works
An SSDT hook does something very simple — and very dangerous: it replaces one or more SSDT pointers and redirects them to malicious code. That code runs before, or instead of, the legitimate kernel function. And because this lives in the kernel: it affects every process, it’s system-wide, and no user-mode security tool is safe. That’s why attackers love it.

Why Attackers Don’t Use It Everywhere
SSDT hooking comes with a big risk. On 64-bit Windows, PatchGuard (Kernel Patch Protection) watches critical structures like the SSDT, IDT, GDT, and kernel code regions. If PatchGuard detects tampering, it doesn’t log an alert. It blue-screens the system on purpose. So yes — SSDT hooks still exist, but they’re much rarer today. That said… attackers always find ways around protections. So we can’t ignore SSDT analysis.

Finding SSDT Hooks with Volatility
Volatility 3 gives us a plugin called windows.ssdt. Important thing to know: this plugin does not only show suspicious entries. It dumps everything. And that’s a lot. Modern Windows systems can have 1,500+ SSDT entries, so filtering is mandatory.

What the ssdt Plugin Shows
For each SSDT entry, Volatility gives: Table Entry (Index), Function Offset (Address), Function Owner (Module, i.e. which kernel module owns it), and Function Name (Symbol). That second-to-last column, the owning module, is the key.

The Simple Rule That Catches Rootkits
On a clean Windows system, SSDT entries should point to only two modules: ntoskrnl.exe and win32k.sys. That’s it. So our job becomes very easy: show me SSDT entries that point anywhere else. Anything outside those two is immediately suspicious.

Filtering the Noise (The Practical Way)
We pipe the output and remove known-good entries:
python3 vol.py -f /mnt/c/Users/Akash/Downloads/laptop.raw windows.ssdt | egrep -v '(ntoskrnl.exe|win32k.sys)'
Now instead of thousands of lines… you get only the interesting stuff. Small note: some systems use ntkrnlpa.exe or ntkrnlmp.exe, so adjust your filter if you ever hit that edge case.

Logical Step: Dump the Driver
Finding the hook is just step one. The next step is to extract the malicious driver from memory. Volatility gives us moddump (Volatility 2) and modules (Volatility 3); a command sketch follows right after this post. Once dumped: reverse engineer it, extract IOCs, identify capabilities, confirm attribution.
-------------------------------------------------------------------------------------------------------------
Final Takeaway
SSDT hooking is powerful, dangerous, and rare (but real). PatchGuard made it harder — not impossible. And when attackers do use it, memory forensics exposes them very clearly. If a kernel function suddenly belongs to a random driver… that driver has some explaining to do.
-------------------------------------------------Dean-----------------------------------------------------
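Quick follow-up on that "Dump the Driver" step: a minimal sketch, assuming a Volatility 3 build whose windows.modules plugin supports --dump (older setups would use the Volatility 2 moddump plugin with -D instead). The driver name evil.sys and the output folder are made-up placeholders:

python3 vol.py -f /mnt/c/Users/Akash/Downloads/laptop.raw windows.modules > modules.txt
# note the suspicious module you spotted in the SSDT owner column (hypothetical name below)
grep -i "evil.sys" modules.txt
# dump the listed kernel modules to a folder, then hash the one you care about for IOC lookups
python3 vol.py -f /mnt/c/Users/Akash/Downloads/laptop.raw -o dumped_drivers windows.modules --dump
sha256sum dumped_drivers/*.sys

From there it goes to strings, YARA, and a disassembler, which is exactly the reverse-engineering step described above.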

  • Volatility Plugins — Plugin windows.ldrmodules Let’s Talk About it

    This plugin is honestly one of the best examples of why Volatility still matters in memory forensics. Why? Because instead of trusting a single data source, ldrmodules does something very smart —it cross-checks multiple memory structures and looks for inconsistencies. And malware absolutely hates consistency. What ldrmodules Is Actually Doing When we’re looking for suspiciously loaded code inside a process, there isn’t just one  place to look. Windows tracks loaded DLLs in multiple ways, and ldrmodules compares all of them. Every process has a Process Environment Block (PEB ) .Inside the PEB, Windows maintains three linked lists that track loaded DLLs: InLoadOrderModule InInitializationOrderModule InMemoryOrderModule In a normal process, all three lists usually contain the same DLLs, just ordered differently. Now here’s where it gets interesting. ldrmodules doesn’t stop there. It also looks at the VAD tree , which tracks image-mapped memory pages — basically what is actually mapped into memory . So now we can ask a very powerful question: Does what Windows thinks  is loaded match what is actually  mapped in memory? When the answer is “no” — that’s where malware shows up. What Information ldrmodules Gives Us For each process, the plugin shows: PID  – Process ID Process  – Process name Base  – Base address from the VAD tree InLoad  – Present in PEB InLoad list InInit  – Present in PEB InInitialization list InMem  – Present in PEB InMemory list MappedPath  – File path from disk (from the VAD) A legitimate DLL  should: be marked True  in all PEB lists have a valid MappedPath on disk Example Look: Anything that breaks this pattern deserves attention. Important: False Positives Are Normal Before you panic over every “False”, keep these in mind: Process executable The process binary itself (like lsass.exe) will always show False in InInit . This is expected and normal. Special files Files like: .mui .fon .winmd .msstyles are mapped into memory but are not real DLLs, so they won’t appear in PEB lists. Legit DLLs not yet loaded Sometimes DLLs are mapped but not actually used yet — they’ll show in VAD but not in PEB. SysWOW64 weirdness Volatility 3 sometimes marks SysWOW64 DLLs as FALSE. This is likely due to 32-bit tracking differences. That’s why I always run ldrmodules in both Volatility 2 and 3 when possible. The Biggest Red Flag: Empty MappedPath This is huge — so don’t miss it. If a DLL has no MappedPath : it was not loaded from disk it was not loaded via LoadLibrary Windows has no idea where it came from That almost always means DLL injection . Even if the DLL still appears in one or more PEB lists. How Malware Hides DLLs: PEB Unlinking Because the PEB lives in userland, malware doesn’t need kernel access to mess with it. Administrator rights are enough. A common trick is unlinking a DLL from one or more PEB lists: Process keeps running normally DLL stays in memory Tools like Task Manager and dlllist don’t see it From the outside, it looks clean. But… The VAD tree still knows the truth. So when ldrmodules finds: memory-mapped executable code that is missing from PEB lists …it becomes very obvious something is wrong. 
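Before going deeper into how the plugin reasons, here is how I usually run it in practice. Treat it as a hedged sketch, since the column layout can shift slightly between Volatility 3 releases and the PID below is just an example:

python3 vol.py -f /mnt/c/Users/Akash/Downloads/laptop.raw windows.ldrmodules > ldrmodules.txt
# quick pass: anything missing from at least one PEB list
grep -i "False" ldrmodules.txt | less
# then zoom into a single suspicious process
python3 vol.py -f /mnt/c/Users/Akash/Downloads/laptop.raw windows.ldrmodules --pid 1640

The True/False columns and the MappedPath still need to be read by eye; that part stays manual.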
How ldrmodules Thinks Instead of trusting the PEB, ldrmodules does this: Walk the VAD tree Find image-mapped executable memory Compare each entry with: InLoad InInit InMem Show mismatches as True / False This gives you a visual way to spot: unlinked DLLs injected memory reflective loaders process hollowing Normal Example (What “Good” Looks Like) A clean entry from wininit.exe might show: DLL: \Windows\System32\iertutil.dll InLoad = True InInit = True InMem = True MappedPath = present This is exactly what we expect. Zeus Banking Trojan Example Now let’s look at malware. Zeus injects itself into almost every running process . In ldrmodules, we see an entry with: InLoad = False InInit = False InMem = False MappedPath = empty This tells us: code exists in memory Windows loader never loaded it VAD has executable memory no backing file on disk Classic injected code. Process Executable Entry — Don’t Get Confused You’ll often see the first entry representing the process itself: \WINDOWS\system32\svchost.exe It will: be present in InLoad and InMem show False in InInit This is normal  and expected. Stuxnet & Process Hollowing Stuxnet takes things further using process hollowing . In an lsass.exe , ldrmodules shows: entries with no MappedPath missing from multiple PEB lists One section: not in any list → injected DLL Another section: missing only from InInit → looks normal BUT still has no MappedPath This is the key clue. The original lsass.exe image was unmapped and replaced entirely in memory. Since this all happened in RAM: no file on disk no mapped path nothing for AV to scan Exactly how hollowing is supposed to work. Cross-Checking With malfind If you compare: Base address from ldrmodules Base address reported by malfind You’ll see both tools pointing to the same suspicious memory regions. Two tools. Two data sources. Same conclusion. That’s confidence. Advanced Trick: ldrmodules vs dlllist One last powerful trick: dlllist → reads PEB (easy to manipulate) ldrmodules → reads VAD (kernel-side, harder to fake) If base addresses don’t match between the two? That’s a strong indicator of process hollowing. Final Thought Attackers can: unlink DLLs avoid LoadLibrary never touch disk hollow processes But they can’t escape memory structures entirely. As long as: code must execute memory must be executable Plugins like ldrmodules  will always give you something to pull on. --------------------------------------------------------Dean-----------------------------------
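As a practical add-on to the ldrmodules-vs-dlllist trick above, here is a hedged sketch of comparing the two views of one process. The PID is an example, and the awk column numbers assume the base address is the third column in both outputs; adjust them if your Volatility build prints columns differently:

python3 vol.py -f /mnt/c/Users/Akash/Downloads/laptop.raw windows.dlllist --pid 868 > dlllist_868.txt
python3 vol.py -f /mnt/c/Users/Akash/Downloads/laptop.raw windows.ldrmodules --pid 868 > ldrmodules_868.txt
# pull out the base addresses from each view and diff them
awk '{print $3}' dlllist_868.txt | sort -u > peb_bases.txt
awk '{print $3}' ldrmodules_868.txt | sort -u > vad_bases.txt
diff peb_bases.txt vad_bases.txt

Bases that appear in only one view (especially a mismatched main image base) are exactly the process-hollowing indicator described above, and a good reason to pivot to malfind next.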

  • Volatility Plugins — Plugin windows.malfind Let’s Talk About it

    Let’s get into the second plugin, windows.malfind — my favorite plugin when I want to quickly spot weird injected memory in a process.

What malfind Actually Does
malfind looks for two suspicious things inside process memory:
1. The memory region is executable → PAGE_EXECUTE_READWRITE or similar permissions. This is already a red flag because legit apps rarely need RWX memory.
2. The memory region is NOT mapped to a file on disk → meaning the process has code in memory that didn’t come from an EXE or DLL on the system.
If malfind finds both together… boom! You have a potential injected section. And if you include the dump option (--dump in Volatility 3, --dump-dir in Volatility 2), malfind will dump that entire memory section into files so you can reverse, scan, or investigate further.

What malfind Shows You
Volatility 3 gives you useful columns like: PID, Process Name, Start/End VPN (basically the memory range), Protection (this is where RWX shows up), PrivateMemory (0 = mapped, 1 = private → injected stuff is usually private), CommitCharge, and Tag (pool tag). But honestly, the real magic is the hex + assembly dump of the first 64 bytes. This part tells you everything if you know what you’re looking at.

Checking for Real Code (The Final Injection Test)
malfind purposely doesn’t confirm whether the bytes are actual code — that’s the analyst’s job. So your last check is: does the memory actually contain real executable code? You look for:
MZ header (4D 5A) → means a PE file (EXE/DLL) was injected reflectively.
Function prologue patterns → if you see EBP and ESP being set up, it often indicates that memory which traditionally should not be executable (such as the stack or heap) now contains real code. On x86 that looks like PUSH EBP / MOV EBP, ESP. On x64 it’s a bunch of pushes: push rbp / mov rbp, rsp / push rdi / push rsi.
If instead the preview is just zeroes disassembled as add [rax], al → probably a false positive.
malfind gives a lot of false positives, so this final step is critical.

Another Example: OneDrive False Positive
malfind flagged a section inside OneDrive.exe. It looked scary because the permissions were RWX. But the assembly dump was: 00 00 00 00 00 00 ... and the disassembler tried to interpret it as: add [rax], al, add [rax], al, add [rax], al ... That means garbage. Not real code. Not injection. So this is one of those common false positives that malfind throws around.

The Evolution of Code Injection: LoadLibrary → Reflective Injection → Header Cleaning
LoadLibrary: originally, malware injected code by making the target process call LoadLibrary on a DLL that already sat on disk. Easy to detect, easy to monitor. Then came frameworks like Metasploit using reflective DLL injection, which bypasses the Windows loader APIs completely. Tools like malfind were built specifically to catch reflective injection — and they did a brilliant job. So attackers adapted again. Malware started wiping its PE headers. The CoreFlood botnet cleared the first 4096 bytes of its injected DLL. APT29 and APT32 use Cobalt Strike loaders that erase headers after loading. Some newer families pad their shellcode deeper in the page so the first 64 bytes look harmless. This is a huge problem because malfind only shows you a tiny preview (64 bytes). If the malicious code sits after the preview window, the output looks like a false positive.

Bypassing RWX Detection (Attackers Getting Clever)
malfind heavily relies on detecting EXECUTE_READWRITE memory.
So, malware authors figured out an elegant bypass: allocate memory as READWRITE, write the payload into it, then flip permissions to EXECUTE_READ only at execution time.

PEB Manipulation — A Common Target for Stealth
Because the Process Environment Block (PEB) is userland and easy to modify, attackers love tampering with it: unlinking entries from the loaded DLL lists, rewriting DLL names or paths, masquerading the process name, or using PowerShell to patch fields directly. If your detection tool trusts the PEB blindly… you’re already in trouble.

Advanced Techniques: Stomping, Patching & Camouflage
Newer malware families move beyond simple injections:
Module Stomping: overwrite a legitimate DLL in memory with malicious code.
Patching Legitimate Code: replace functions inside real modules with attacker logic.
These make the injected payload look like it belongs to a real module.

How We Fight Back: Dump Everything
When in doubt, dump the whole region. Using --dump with malfind gives you a full memory section, not just the misleading 64-byte preview. Once dumped, you can: run strings, scan with antivirus, use YARA rules, load it into IDA/Ghidra, and compare with known-good modules. Reverse engineering is the most accurate option — but also the most expensive. And yes, attackers know this.

Reality Check: Not All Malware Uses Fancy Techniques
Don’t overthink it. Most real-world injections are still: reflective DLL injection, process hollowing, remote thread injection, and shellcode in RWX pages. Why? Because these techniques are simple, tested, reliable, and still bypass many defenses. So while we prepare for the future, remember the majority of cases you’ll see are still detectable using standard techniques.
-------------------------------------------------------------------------------------------------------------
The Memory Layout Matters (And Helps You Catch Stealthy Stuff)
Windows process memory has three major areas:
1. Private Memory: allocated via VirtualAlloc; contains stack, heap, app data; usually READWRITE; should NEVER contain real executable code. If you see RX or RWX here → high suspicion.
2. Shareable (Mapped) Memory: contains mapped files like .dat, .mui; mostly READONLY; rarely executable.
3. Image Memory (Executable Section): the legit place for EXEs and DLLs; usually EXECUTE_READ or EXECUTE_WRITECOPY; rarely RWX.
Important insight from Forrest Orr’s memory research on normal RWX rates:
Private memory: 0.24%
Shareable memory: 0.014%
Image memory: 0.01%
RWX almost NEVER occurs in legitimate scenarios. Even RX pages outside of image memory should raise eyebrows.

Why This Matters for Injection Detection
Most attacks — hollowing, reflective injection, manual mapping — end up placing executable code in the wrong part of memory:
Hollowing → executable code ends up in private memory
Doppelganging → ends up in shareable memory
Reflective DLL → RWX pages created temporarily
Your job is to understand: where should executable code exist, and where should it NEVER exist? If you master this, you’ll detect even advanced injection without needing heavy automation. Because memory forensics is about recognizing what is normal — and what isn’t.
-------------------------------------------------------------------------------------------------------------
Final Thoughts (Important)
Malfind is still one of the best first-pass detectors for injections, even though attackers have more sophisticated tricks today.
Use malfind to get your first leads, then: dump the section, scan it, reverse engineer it, compare patterns, and check the surrounding processes (a command sketch of that workflow follows right after this post). And always remember: malfind does NOT confirm the injection — YOU do. Your eyes + pattern recognition = the real detection engine.
---------------------------------------------Dean---------------------------------------------
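Here is roughly what that follow-up workflow looks like on the command line. Treat it as a hedged sketch: the output folder and YARA rule file are placeholders, and Volatility 2 users would pass --dump-dir to malfind instead of --dump:

python3 vol.py -f /mnt/c/Users/Akash/Downloads/laptop.raw -o malfind_out windows.malfind --dump
# quick triage of the dumped regions: PE files vs raw shellcode blobs
file malfind_out/*
# look for C2 URLs, pipe names, user agents, config strings
strings -a -n 8 malfind_out/* | less
# optional scans, if the tools are installed; my_rules.yar is whatever rule set you use
clamscan malfind_out/
yara -r my_rules.yar malfind_out/

Anything that still looks like real code after this goes into IDA or Ghidra, as described above.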

  • Volatility Plugins — Plugin windows.handles Let’s Talk About it

    So yeah… I know I already wrote a bunch of blogs on memory forensics — Volatility step‑by‑step, code injection, rootkits, all of that. And you might be wondering: “Bro, why are we still talking about memory forensics?” Well… because some Volatility plugins are actually important , a bit tricky, and very underrated. Everyone knows the basics like psscan , pslist , dlllist , etc. If not — go check my earlier guide, I won’t repeat the boring stuff here. https://www.cyberengage.org/courses-1/mastering-memory-forensics%3A-in-depth-analysis-with-volatility-and-advanced-tools In this series, we’re going deeper  — focusing on the plugins that help you confirm attacks, pivot to real artifacts, and catch sneaky malware behavior. And today’s topic? Process Handles — A Small Thing That Reveals BIG Clues Handles are basically pointers to objects that a process is using — files, registry keys, named pipes, mutants, events, threads, sections… everything. Sounds cool, right? But here’s the problem: There are SO MANY handles. And 90% of them are boring: unnamed internal things that tell you nothing. But the remaining 10%? That’s where investigations become fun. Most of the time, handles won’t start  your investigation. You usually look at handles after you’ve already identified a suspicious process.But once you’re there, handles can help you: Confirm your suspicion Find related malware components Discover persistence Spot communication channels (e.g., C2 pipes) Catch rare artifacts like injected DLLs Volatility Plugin: windows.handles.Handles By default, this plugin prints every handle from every process. python3 vol.py -f /mnt/c/Users/Akash/Downloads/laptop.raw windows.handles So yeah, it’s a mess. That's why: ✔ If you know the suspicious PID → always use --pid Example: python3 vol.py -f /mnt/c/Users/Akash/Downloads/laptop.raw windows.handles --pid < > Example 1 — DLL Handle That Should Not  Be There So imagine you’re investigating a Shell . You trace it back to its parent process — Example w3wp.exe You start checking its file/registry handles. Most are normal… until one jumps out: A DLL referenced in the handles table That's weird because: DLLs are usually listed in the PEB And appear in dlllist , not in handles So seeing a DLL via handles is suspicious by itself One little handle uncovered the entire Shell chain. This is why handles matter. Example 2 — Infostealer Investigation This is a fun one. During the investigation, Suspicious processes: powershell.exe This is classic LOLBins used heavily by modern malware. PowerShell PowerShell had large number of handles… yeah, typical PowerShell behaviour. After filtering unnamed handles, and keeping only File + Registry ones, you brought it down to 100 handles: Still many, but manageable. Inside all this noise was: A randomly named registry key Sitting somewhere under HKCM\CLASSES\... At first look it blends in, but it was actually the persistence: Random registry names Storing fileless scripts Classic modern persistence trick A lot of people miss this because it looks  normal. But handles will pick it up easily. Handles help you rewind the malware timeline. That’s why they’re gold. Named Pipes — The Secret Communication Channels Okay, named pipes are literally everywhere in attack frameworks. Tools like: PsExec Metasploit Trickbot HyperStack Empire Covenant Cobalt Strike … all use named pipes  to communicate quietly. 
Because unlike sockets, pipes: Don’t show up in netstat Blend into normal system behaviour Work over SMB Carry less operational noise And attackers can name them anything . Some pipes look normal Some pipes embed IPs or PIDs Some pipes follow known patterns For example, PsExec  uses names like: \Device\NamedPipe\psexecsvc---stdout Cobalt Strike uses: MSSE-****-server \\.\pipe\****** If the operator doesn't change defaults → instant IOC. And yeah, handles can catch them. Just filter for File handles containing “pipe”. python3 vol.py -f /mnt/c/Users/Akash/Downloads/laptop.raw windows.handles | grep -i pipe Mutants (Mutexes) — Malware’s Way of Saying “I Was Here First” Mutants (mutexes) are used by malware to avoid reinfecting a system. Malware sets a mutex →Before infecting again, it checks if that mutex exists. That makes mutexes: ✔ Perfect indicators of compromise ✔ Unique to specific malware families ✔ Easy to scan in memory python3 vol.py -f /mnt/c/Users/Akash/Downloads/laptop.raw windows.handles | egrep "PID|Mutant" Final Thoughts (and why this plugin matters so much) Handles might look boring at first, but they give you: Persistence clues Malware DLL loading artefacts Malicious named pipes (C2 comms) Mutexes that identify malware families File paths and registry entries to pivot on 90% of the time you find nothing .That 10%? You find GOLD. ----------------------------------------------Dean------------------------------------------------------
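As a practical add-on to the filtering ideas above, here is the rough command flow once you already have a suspicious PID. It is a hedged sketch: the PID is an example, and the object-type names you grep for (File, Key, Mutant) should be double-checked against what your Volatility 3 build actually prints:

python3 vol.py -f /mnt/c/Users/Akash/Downloads/laptop.raw windows.handles --pid 4321 > handles_4321.txt
# keep only file and registry handles; most unnamed internal handles drop out here
egrep "File|Key" handles_4321.txt | less
# named pipes: possible C2 or lateral-movement channels
grep -i "pipe" handles_4321.txt
# mutex names to compare against known malware families
grep -i "Mutant" handles_4321.txt

Whatever survives this filtering is the 10% worth reading line by line.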

  • Memory Forensic vs EDR – Talk

    If you look at how cybersecurity has evolved over the past few years, one thing becomes very clear: we finally have the horsepower to see what’s actually happening on our systems in real time. Thanks to cheaper storage, faster processing, and advances in forensics, we can now monitor both live and historical activity like never before. And that visibility isn’t just for show — we can act on it, whether automatically or manually, before attackers get too comfortable. A big part of this change is due to a new generation of Endpoint Detection and Response (EDR) tools. BTW, I have written complete series on SentinelOne and Carbon Black. If you want, you can check out the links below:
SentinelOne https://www.cyberengage.org/courses-1/mastering-sentinelone%3A-a-comprehensive-guide-to-deep-visibility%2C-threat-hunting%2C-and-advanced-querying
Carbon Black https://www.cyberengage.org/courses-1/carbon-black-edr
Continuing where we left off: these solutions don’t just sit and wait for an alert. They use pattern recognition, heuristics, and machine learning on the back end to automatically block suspicious actions. But what really sets EDR apart is that it supports both detection and response. That means security teams aren’t just watching attacks happen — they can dig into the data, hunt for threats at scale, perform historical searches, and quickly understand how far an intrusion reaches. Here’s why this matters: once you identify an indicator of attack, being able to look backward in time and see where that same behavior occurred across the network can drastically reduce the time it takes to contain a threat. It also makes life more difficult for attackers, because their methods and patterns get exposed.
-------------------------------------------------------------------------------------------------------------
Why Memory Matters More Than Ever
Modern threats aren’t playing by the old rules. Attackers are moving away from traditional, file-based techniques because they know security tools are watching disk activity. Instead, many attacks now live in memory — the rise of “fileless” malware is a perfect example. That means in-memory detection is no longer optional. It’s critical. EDR tools focus heavily on memory analysis and event tracing, which allows them to catch malicious activity involving PowerShell, WMI, code injection, obfuscation, and other stealthy techniques. Because many EDR platforms have kernel-level access, they can see details that traditional antivirus tools would miss. Some common data points EDR tools capture include: process information, Windows API usage, command line history, process handles and execution tracing, suspicious thread creation and memory allocation, DLL injection and rootkit techniques, and host network activity.
-------------------------------------------------------------------------------------------------------------
EDR vs Memory Forensics — They’re Not the Same
It’s important not to confuse EDR with full forensic tools. No one can capture every event across every system all the time — it would be impossible. A single device can log millions of events daily. For instance, Sysinternals Process Monitor can detect over 1,000 events per second, while an EDR system might intentionally limit itself to around 20–30 events per minute to avoid slowing the machine down. EDR tools focus on scale and practicality. They record a carefully chosen list of data points rather than everything under the sun.
You typically can’t customize that list, but you get lightweight coverage across the entire environment. On the other hand, forensic tools aim for completeness. They capture entire memory and disk images, helping analysts tell the full story of an attack — including activity that may have happened before EDR was installed. That’s why EDR should be seen as a supplement, not a replacement, for: network monitoring, SIEM log collection, and deep memory/disk forensics. EDR is great for real-time detection and quick investigation, but when you need deep answers, forensic tools are still king.
-------------------------------------------------------------------------------------------------------------
Tools Are Only As Good As the Analyst
People constantly talk up artificial intelligence, automation, and machine learning in security — and those technologies absolutely help — but they aren’t magic. At the end of the day, human analysts are still essential. Analysts need to understand how attackers think, what techniques they use, and how to connect data from multiple sources. Strong memory forensics and process analysis skills make EDR dramatically more effective. When you know what “normal” looks like, it becomes much easier to spot what doesn’t belong. The truth is, traditional forensics might eventually uncover everything EDR can reveal, but it would take much longer. EDR brings everything together in one place, speeding up identification of malicious activity and helping analysts make faster decisions. The goal is simple: use powerful tools, but keep strong foundational knowledge. That foundation is what lets you sort normal behavior from abnormal and respond confidently when something looks off.
--------------------------------------------Dean-----------------------------------------------------------

  • Deep Dive: How Dropzone AI Investigates Alerts (Example Explained)

    In the previous article, I explained the Dropzone AI dashboard and overall features. Now, let’s get into the real action — how Dropzone actually investigates an alert , using Panther  as the example. Let’s begin. How Alerts Flow From Panther → Dropzone Let’s say you’ve integrated: Panther data source Panther alert source This means: Every alert Panther generates will be picked up automatically by Dropzone. No manual work. No need to forward anything. Dropzone grabs the alert → starts investigation immediately. Alright, good. ----------------------------------------------------------------------------------------------------------- Dropzone Picks the Alert & Starts Investigation Once Dropzone receives the alert, it begins analysis instantly. After a short time (usually under a minute), the AI spits out: Full investigation Conclusion (benign, suspicious, or malicious) Summary Top findings And here’s the important part: If Dropzone marks an alert as “Benign,” 99.9% of the time it is actually benign. So you can safely close it. Also important: If you close the alert in Dropzone, it gets closed in Panther as well. Any note or comment you add → also gets added to Panther. Super useful. ----------------------------------------------------------------------------------------------------------- Opening the Alert – The Investigation Screen When you open the alert, this is what you see (I had to hide sensitive info in the real screenshot): Alert Summary AI Conclusion Top Findings Links to original tool (Panther button on the left side)[ It’s extremely clean and simple. From here, you have 2 choices as an analyst: Method 1: Investigate manually in Panther Click the Panther button → open the alert in Panther → do your normal manual investigation. Method 2: Use Dropzone’s full investigation This is the easier method, and I’ll explain it below. After reviewing the alert: Select your conclusion (benign/suspicious/malicious) Add your comment Save That’s it. Alert will close in both systems. ----------------------------------------------------------------------------------------------------------- Findings Tab – The Brain of Dropzone The Findings  tab is where you see exactly what questions the AI asked  during its investigation. Dropzone literally interrogates the alert: “Has this IP been used before?” “Is this user associated with risky activity?” “Does this resemble past malicious alerts?” “Is this event common for this device/user?” “Any MITRE technique indicators?” Every question → AI’s response → Final verdict. You can click on each question to see: What Dropzone checked What information it used Why it reached the verdict This transparency is what makes Dropzone so powerful. It’s like sitting inside the brain of an analyst. ----------------------------------------------------------------------------------------------------------- Evidence Locker – All Evidence in One Place This is basically a collection of everything Dropzone used during the investigation. Examples: IPInfo lookup results Geo location ASN Device/user history File reputation Prior alerts Previous analyst comments Other tools' data You can click View Response  to see details. Dropzone also checks previous 5 alerts related to this case. It looks at: What analysts concluded What comments they left Whether those alerts were benign or malicious This is where your past decisions matter — Dropzone keeps learning from your behaviour. 
I call this “AI learning from the past”  😂 ----------------------------------------------------------------------------------------------------------- Remediation Tab – What To Do If Malicious If the alert is malicious, Dropzone provides: Recommendations Steps to take What should be contained What needs review At top, you’ll see Containment Actions . Dropzone currently supports automatic remediation for a few integrated apps (for example: Microsoft Defender, Okta, etc.) If enabled: AI will automatically isolate the machine or disconnect the user if the alert is malicious. This is insanely powerful. ----------------------------------------------------------------------------------------------------------- Change Log – Timeline of Everything The Change Log tab shows a timeline: When alert came in When investigation started Any status change Analyst comments If alert was closed If remediation was triggered It’s a clean, readable timeline. ----------------------------------------------------------------------------------------------------------- Second Method of Investigation – “Ask a Question” Earlier, I mentioned two ways to investigate: 1. Manual investigation using Panther 2. Using Dropzone findings Here’s the third  and easiest  method: Just ask Dropzone a question. Example questions: “Has this IP been used by any other user?” “Is this user agent associated with this IP anywhere in our logs?” “Show me all events linked to this process.” Just type your question like human language → Dropzone runs a full investigation across all logs and gives you the answer. No querying. No log diving. No manual searching. One of my favourite features. ----------------------------------------------------------------------------------------------------------- False Positive Example – Benign Alert Let’s take another example: MFA disabled by user. Normally this may look suspicious. But in reality, this was normal behaviour. Dropzone marked it as benign , and if you read the investigation, it clearly tells you why. Here you don’t need to investigate — just approve and close. ----------------------------------------------------------------------------------------------------------- Why AI Saves Massive Time (Realistic SOC Example) Let’s say 10 alerts arrive: 9 are false positives 1 is true positive A human analyst — even a very good one — will take about 45 minutes  to investigate all 10. There is also the chance they may get tired or distracted and miss the true positive. Now compare this with Dropzone: It will mark 9 alerts as benign  within 10 minutes It will highlight 1 true positive  clearly You just review the 1 true positive Approve the 9 benign ones This saves enormous time. This is why AI is becoming a huge part of SOC. Not because it replaces jobs, but because: It removes the noise so analysts can focus on the real threats. ----------------------------------------------------------------------------------------------------------- Final Thoughts You’ve now seen exactly how a Dropzone AI alert looks, how it thinks, how it asks questions, and how to review it. In the next article, I will show: Sentinel One, CrowdStrike Example --------------------------------------------------Dean---------------------------------------------------- Check out next article below: https://www.cyberengage.org/post/dropzone-ai-final-conclusion-what-all-these-examples-really-show

  • Dropzone AI Final Conclusion – What All These Examples Really Show

    Now that I’ve shown you investigations from Panther — I think you can clearly see what Dropzone AI is actually doing behind the scenes. No matter which security tool generates the alert: Dropzone picks it up instantly, investigates it faster than any human, asks all the important questions automatically, pulls evidence from everywhere, checks historical behaviour, compares with analyst verdicts, correlates with the MITRE framework, and finally gives you a clear conclusion. All of this happens in seconds, not minutes — and definitely not hours. This is why I keep saying: AI is already transforming the SOC team, whether someone believes it or not. Look at the examples again:
✔ SentinelOne → Investigation + Findings + Remediation
✔ CrowdStrike → Investigation + Findings
✔ Microsoft Sentinel → Investigation + Findings
✔ Splunk → Investigation + Evidence Locker + Findings
Different tools, different alert types… But Dropzone handles all of them with the same speed, same accuracy, and same style.
-----------------------------------------------------------------------------------------------------------
Why This Matters (Even if People Don’t Want to Hear It)
Let’s be honest: most SOC analysts today spend 70% of their time doing routine triage, repeating basic checks, searching logs, and closing false positives. This is exactly the work that AI automates perfectly. And when AI can: analyze 10 alerts in 2 minutes, mark 9 as benign, show you only the real threat, pull evidence from all tools, provide ready-made conclusions, recommend remediation actions, and even perform automated remediation… then the role of a SOC analyst changes forever. It’s not about “AI replacing jobs.” It’s about AI replacing the boring part of the job, and you focusing on real incident response. But people who refuse to learn these tools? Those are the ones AI will replace.
-----------------------------------------------------------------------------------------------------------
My Final Advice to Every SOC Analyst / IR Engineer
If you take away one thing from all these examples, let it be this:
👉 Start learning how to work WITH AI, not against it.
👉 Become the person who understands AI-driven investigations.
👉 Learn how to verify AI decisions, not manually do everything.
👉 Focus on deeper skills: threat hunting, forensics, malware analysis.
AI is not taking your job. AI is taking your old job. Your new job is to supervise, validate, and respond — not chase false positives. Dropzone is just one example. So the smart move? Start upgrading your skills now.
------------------------------------------------Dean------------------------------------------

  • Dropzone AI Dashboard & Investigation Overview

    Your SOC, but finally without the headache. In the previous article, I talked about how AI is changing SOC operations forever — especially tools like Dropzone AI  that automate full investigations. If you ask me which tools I enjoy working with the most, I will always say CrowdStrike , SentinelOne , and Forensic tools . But recently, one tool has impressed me so much that I genuinely feel like every SOC team should see it at least once. And that tool is Dropzone AI . This Article part is all about how Dropzone actually looks and feels  when you use it every day. --------------------------------------------------------------------------------------------------------- The Dropzone Dashboard When you open Dropzone AI, the first thing you see is the Dashboard . And trust me — I love simple dashboards . SentinelOne has one of the best UIs , and Dropzone follows the same philosophy: clean, clear, and not overloaded. The dashboard lets you filter investigations by: Conclusion  (Benign / Suspicious / Malicious) Priority Status Source This is super helpful when you have multiple log sources connected. You can instantly see: Where alerts are coming from Which tools are generating noise Which sources need tuning How Dropzone is handling everything in real-time Lifetime Metrics You also get three very important metrics: ✔ Lifetime Investigations Total number of investigations Dropzone has done for your environment. ✔ Lifetime Median TTI TTI = Time to Investigate . Humans take 30–90 minutes per alert. Dropzone does this in under 20 minutes . ✔ Time Saved This is your “why am I not doing 24/7 shifts anymore” metric. This is the reason I say AI kills alert fatigue . Response Metrics (My favourite) This section shows: The time between the event happening and Dropzone completing the investigation. This is 🔥.Because humans simply cannot  operate with this speed or consistency — especially at 3 A.M. And if you want the best results? Make sure you ingest all logs . Dropzone correlates telemetry across tools — EDR, Identity, SIEM, Cloud — and then produces a final conclusion. More logs = more accurate investigations. Finally, the dashboard also includes: MITRE ATT&CK Correlation For every alert, Dropzone maps it to relevant MITRE techniques. This is extremely helpful for understanding attacks at a glance. --------------------------------------------------------------------------------------------------------- Fleet Dashboard — One Console for All Clients This is a new feature and honestly a game changer for MSSPs. If you manage many clients with different: Domains Tenants Log sources Alert volumes You don’t need to jump into each one separately. The Fleet Dashboard  shows: Total investigations per client Priority breakdown Status breakdown High-level overview of all environments Think of it as a master SOC console . Important: You can only see dashboards here — not  individual alerts. To analyze alerts, you still open that specific client’s workspace. There’s also a search bar on top: Just type the client name → instantly jump into their console. --------------------------------------------------------------------------------------------------------- Investigation Tab — The Heart of Dropzone This is where the magic happens. 
Whenever an alert comes in (CrowdStrike, SentinelOne, Panther, etc.): Dropzone picks it up → Triaged It starts investigation → Running It finishes and gives a conclusion → Benign / Suspicious / Malicious It categorizes into → Urgent / Notable / Informational And then it’s your job to review it. The Review Workflow Once Dropzone gives its conclusion: If you agree→ Approve the review→ Alert moves to Reviewed If you don’t agree→ Add your own analysis→ Change the category (e.g., benign) The right side shows: Queued  alerts Running  investigations Stopped  analysis (if you manually stop one) You don’t have to babysit anything. It runs automatically in the background. This UI is very  clean — honestly easier than CrowdStrike. Dropzone feels more like SentinelOne: simple, smooth, functional. --------------------------------------------------------------------------------------------------------- Ask a Question — AI Threat Hunting for Humans This is hands down one of the best features. You can ask Dropzone anything in human language, such as: “Was this IP seen with any other user?” “Did this hash appear anywhere else in the last 30 days?” “Show me all failed logins from this user across all sources.” Dropzone will go through every  integrated data source: SIEM EDR Identity logs Cloud logs Network logs …and give you a correct answer in under a minute. I tested it. I checked the logs manually. It was 100% correct. Search Result This feature alone saves hours of manual threat hunting. --------------------------------------------------------------------------------------------------------- Context Memory — The Brain of the SOC This part makes Dropzone feel less like a tool and more like a human teammate. This is one feature I truly love. Dropzone remembers your actions and your environment context. Example: You have a user who usually works in Europe but is traveling to the USA for 7 days. You simply write this in human language: “akash@cyberengage is traveling to the USA for 7 days. Login from USA is expected.” Dropzone stores this. Now, for the next 7 days, if the user logs in from the US, the alert will be marked benign automatically. It learns from: Your comments Your decisions Your organization context Seriously… this is next-level SOC automation. And the best part? If you mark the same alert type false positive  10 times→ Dropzone will automatically mark it benign next time. In 20,000+ investigations I observed, Dropzone never missed a true positive . It only produces false positives occasionally, and those are labeled benign — which you can simply approve. --------------------------------------------------------------------------------------------------------- Settings — Custom Strategies, Integrations, and Response Actions Let’s go through the most useful settings. Custom Strategies Think of these like “If this happens → do this” rules, but in AI style. Example 1: EICAR Test File If you often run EICAR tests: You can create a strategy: “If alert contains EICAR hash → mark as benign.” Next time the EICAR test runs? Dropzone auto-marks it benign. Example 2: Critical Assets If you have crown jewels (domain controllers, VIP laptops, financial systems): Create a strategy: “Always mark alerts from this asset as suspicious.” That way, analysts always review them — no risk. Integrations Dropzone supports easy, one-click integrations with tools like: SentinelOne CrowdStrike Microsoft Defender 365 Panther Okta Slack AWS GCP Azure…and many more. 
There are three parts:
✔ Connected Apps: which tools you’ve connected.
✔ Data Sources: where logs are coming from.
✔ Alert Sources: which alerts Dropzone should pick up and investigate.
If Alert Source is not enabled, Dropzone won’t triage alerts — it will only analyze data.
---------------------------------------------------------------------------------------------------------
Response Actions
This is basically notifications & automation. You can configure Dropzone to send updates to: Slack, Teams, email, custom scripts, webhooks. Examples: “Send me a Slack message when a malicious investigation is completed.” “Trigger a script whenever Dropzone starts analyzing a new alert.” This means you don’t have to keep Dropzone open 24/7.
---------------------------------------------------------------------------------------------------------
Automatic Remediation
This is extremely powerful. If Dropzone is integrated with tools like Okta or Microsoft Defender, automatic remediation actions can be taken based on Dropzone’s conclusion. Or you can trigger remediation manually from the investigation page — without opening the original tool.
---------------------------------------------------------------------------------------------------------
What’s Next? Alert Analysis
I know this is the part everyone is waiting for. In the next article, I will show you: 🔥 Real investigation examples 🔥 CrowdStrike alert → Dropzone output 🔥 SentinelOne alert → Dropzone reasoning
------------------------------------Dean-----------------------------------------------------------
Check Out next article below: https://www.cyberengage.org/post/deep-dive-how-dropzone-ai-investigates-alerts-panther-example-explained

  • Is AI Coming for SOC Jobs? A Real Talk + My First Look at Dropzone AI

    Let’s be honest for a second. I’ve been in forensics and incident response long enough to see the cybersecurity world change fast — but nothing is shaking things up more than AI inside SOCs . And no matter how many people say “ AI won’t take jobs, it will only assist us, ”  the reality I’m seeing in the field is completely different. I’m on calls with security teams, MSSPs, product vendors… and the pattern is the same everywhere: 🔥 Tasks that used to require 20–30 analysts are now being done by 3–4 people — with AI doing all the heavy lifting. 🔥 Threat hunting, alert triage, correlation, enrichment, reporting — all automated. 🔥 24/7 monitoring with no night shifts… because AI doesn’t need sleep. You can ignore it. You can debate it. But you cannot  deny it. AI is already replacing a huge portion of SOC work. So when I say “AI is coming for SOC jobs,”  trust me — this is not fear. This is observation. I personally know teams handling 50 clients with just four analysts , because the AI platform they’re using handles all investigations automatically. This is where the world is going. But okay… let’s pause the rant for a moment. Because today I want to talk about one specific tool that made me smile and feel sad at the same time: ----------------------------------------------------------------------------------------------------- Dropzone AI — The SOC Analyst That Never Sleeps Before I jump in: I need to say something I deeply hate about this industry… The 24/7 SOC Problem I’ve done 24/7 work. You’ve probably done it too. I don’t need to explain how mentally and physically draining it is. Once, I asked my manager: “Why do analysts in Asia have to do 24/7 shifts? Why can’t we do a follow-the-sun model if you already have offices in Europe and the US?” The manager told me: 👉 “India is cheaper. 👉 Other countries’ labor policies won’t allow that. 👉 India’s policies allow it.” And that answer stuck with me. Why should people in one country sacrifice their health and family time just because it’s cheap ? Anyway, that’s a topic for another day. Let’s jump back to Dropzone AI — because this is exactly the kind of tool that makes 24/7 SOCs unnecessary. ----------------------------------------------------------------------------------------------------- Alert Fatigue: The Problem Dropzone Is Trying to Solve If you’ve ever worked in a SOC, you already know: The alert volume is insane. According to research: 90%  of SOCs are drowning in false positives and backlog 80%  of analysts feel they can’t keep up Humans naturally start ignoring alerts when there are too many Attackers actively use this fatigue to slip in quietly False positives are the biggest enemy. When 98 out of 100 alerts are useless, the brain learns to ignore them — and the dangerous ones hide among the noise. This is where tools like Dropzone AI enter the game. ----------------------------------------------------------------------------------------------------- AI SOC Analysts: What They Really Are Let me break it down simply: A normal SIEM tells you:👉 “Hey, something suspicious happened. Good luck.” A SOAR platform automates a workflow you already built manually. But an AI SOC analyst doesn’t just raise alerts — it conducts the entire investigation by itself. According to the 2025 AI SOC Market report: A typical SOC sees ~960 alerts daily 40% never get investigated 66% of SOCs cannot keep up 70% of analysts leave within 3 years  due to pressure This is the crisis. 
AI SOC analysts solve this by doing what humans don’t have time to do: they run end-to-end investigations like a real analyst:
✔ Pull evidence from EDR, SIEM, Identity, Cloud
✔ Correlate data across platforms
✔ Analyze lateral movement, patterns, anomalies
✔ Summarize everything in a human-readable narrative
✔ Provide recommendations
✔ Do all of this in parallel — infinitely
What takes a human 60–90 minutes, Dropzone AI does in 3–10 minutes. No playbooks. No rules. No babysitting. It reasons through the problem like an actual analyst.
-----------------------------------------------------------------------------------------------------
SOAR vs AI SOC Analyst (The Real Difference)
People confuse these two a lot, so let me clear it up: SOAR = static. It plays back predefined steps. If the workflow breaks, the SOAR breaks. AI SOC Analyst = dynamic. It investigates like a human, adapts based on findings, and requires zero playbooks. In simple words: SOAR follows a recipe. An AI SOC analyst cooks based on whatever is in the kitchen.
-----------------------------------------------------------------------------------------------------
Human Analyst vs AI SOC Analyst — A Fair Comparison
Here’s the truth nobody wants to say out loud:
Alert Processing: 25–40 min per alert (human) vs 3–10 min per alert (AI)
Availability: 8 hours + breaks vs 24/7/365
Daily Capacity: 10–20 deep investigations vs unlimited
Consistency: varies with mood and fatigue vs 100%
Learning Curve: 6–12 months vs instant
Investigation Depth: deep for selected alerts vs deep for every alert
Cost: $75k–150k per year vs a subscription
Yes — it is expensive. But not more expensive than hiring a 20-person SOC team. Especially in India 😅
-----------------------------------------------------------------------------------------------------
Why Dropzone AI Got My Attention
Because this tool actually works. It takes the alerts from: CrowdStrike, SentinelOne, Panther, SIEMs (Splunk, Microsoft Sentinel), identity platforms, cloud logs… and turns them into full investigation reports. No nonsense. No fluff. Actual DFIR-style analysis. But before I show you the investigations and output (especially for CrowdStrike and SentinelOne), I want to start with the dashboard. That will be in the next article.
-----------------------------------------------------------------------------------------------------
Final Thoughts (For Now)
AI is not the enemy. But pretending that AI isn’t replacing jobs is just denial. The industry is changing. The SOC model is changing. The skillset needed is changing. Instead of competing against AI, the smart move is to work with it. This article is just Part 1. Next up: 👉 Dropzone AI Dashboard Deep Dive 👉 Real alert investigations 👉 CrowdStrike + SentinelOne examples 👉 How it handles correlation and storytelling
-------------------------------------------Dean-------------------------------------------------------
Check Out next article below: https://www.cyberengage.org/post/dropzone-ai-dashboard-investigation-overview

  • SentinelOne Series: The SSO Workaround You’ll Actually Thank Me For

    Hey everyone! Welcome back to another post in my SentinelOne series — if you haven’t checked out the earlier ones, I recommend scrolling back and giving them a read. https://www.cyberengage.org/courses-1/mastering-sentinelone%3A-a-comprehensive-guide-to-deep-visibility%2C-threat-hunting%2C-and-advanced-querying%22 Today, I’m here to share something different — a real-world workaround that helped me fix an interesting SSO problem with SentinelOne.
---------------------------------------------------------------------------------------------------------
The Background
So here’s the situation. There are two kinds of SentinelOne setups you can find yourself in:
Scenario 1: You Own the SentinelOne Server
You’re the big boss here — you bought the SentinelOne server or have full global-level access on the console. You can create tenants, sites, roles, policies — the whole thing. If you’re in this camp, this trick won’t apply to you (and you’ll soon see why). You already have the keys to the kingdom.
Scenario 2: You Don’t Have Global Access
Now this is where the magic happens. Let’s say you don’t own the SentinelOne backend. Instead, you asked SentinelOne (or your MSSP) to create an account for you. So they set you up as a tenant on one of their servers, and you start deploying your client environments under your account. Nothing complex so far — right? Easy stuff.
---------------------------------------------------------------------------------------------------------
The Challenge
Now imagine this — I’ve got a client who manages about seven of their own clients under my SentinelOne account, and here’s where things start getting interesting: each of those seven clients uses their own SSO (Single Sign-On) setup. Sounds simple, right? But here’s the real issue: SentinelOne doesn’t make it easy to handle multiple SSO configurations under one parent structure. If my client and each of their clients all have different identity providers (like Okta, Azure AD, Ping, etc.), things quickly become messy when it comes to access and visibility. The client wants their IT team to access the specific client sites they’re responsible for — but they should not see the SentinelOne sites or endpoints that I (as the managed security provider) monitor and manage for them. So the problem was clear: how can I let their IT team log in through their own SSO, get to their client’s SentinelOne site, and still keep my internal SentinelOne environment completely hidden from them? That’s where the workaround comes in. 😉 In short — you want separation between authentication and visibility.
---------------------------------------------------------------------------------------------------------
The Workaround I Used
Alright, here’s what I did (and honestly, it worked beautifully).
1. Created a new site in SentinelOne called something like: 👉 Cyberengage Auth Passthrough. This site has zero endpoints — it’s purely there to authenticate users through SSO.
2. Configured SSO on this site.
3. Deleted all local SentinelOne accounts for the managed IT users from the parent tenant. We don’t want anyone logging in locally anymore — SSO all the way.
4. Then, had each managed IT user log in via SSO to the new Auth Passthrough site. Once they were authenticated through that SSO site, I reassigned them roles for the client sites they actually needed access to.
And just like that… 🎯 Employees can’t see any endpoints of my internal site. Managed IT can access the client sites they support.
Everyone uses SSO — clean and compliant.
---------------------------------------------------------------------------------------------------------
Why This Works
Think of it this way — I separated authentication (the SSO part) from data access (the endpoints). The Auth Passthrough site acts as a secure door — people come through it, prove who they are via SSO, and then I decide which rooms (client sites) they can enter after that. This setup keeps SentinelOne access organized, auditable, and most importantly — isolated per client.
---------------------------------------------------------------------------------------------------------
The End Result
✅ All users authenticate via SSO.
✅ No local accounts left to manage.
✅ Employees can’t see internal endpoints.
✅ Managed IT can view and manage the client monitoring I am responsible for.
✅ No global-level permissions needed.
(If you want to sanity-check the scoping, see the quick API sketch at the end of this post.)
It’s a simple but powerful design when you’re operating inside someone else’s SentinelOne tenant and need a clear boundary between teams and clients.
---------------------------------------------------------------------------------------------------------
Final Thoughts
Sometimes SentinelOne setups aren’t one-size-fits-all — especially when you’re working under another organization’s infrastructure or managing multiple clients. You’ve got to be creative to make SSO, visibility, and role-based access all play nice together. This workaround gave us that perfect balance. If you’re struggling with multi-client SSO management in SentinelOne, try this approach — it might just save you a lot of headache (and support tickets 😉).
--------------------------------------------Dean-------------------------------------------------------
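One optional sanity check after a change like this is to list what a scoped user can actually see via the SentinelOne management API. This is a hedged sketch from memory of the v2.1 API (the console URL and token are placeholders; verify the exact endpoint and header against your console’s API documentation):

# run with an API token generated for one of the managed-IT users
curl -s -H "Authorization: ApiToken $S1_API_TOKEN" \
  "https://your-console.sentinelone.net/web/api/v2.1/sites?limit=100" | python3 -m json.tool | grep -i '"name"'

The site names that come back should be only the client sites you assigned; none of your internal ones should appear.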

  • Carving Hidden Evidence with Bulk Extractor: The Power of Record Recovery

    Before diving in, I’d like to highlight a comprehensive series I’ve created on Data Carving — feel free to check it out via the link below. https://www.cyberengage.org/courses-1/data-carving%3A-advanced-techniques-in-digital-forensics
---------------------------------------------------------------------------------------------------------
If you’ve been in digital forensics long enough, you’ve probably heard about Bulk Extractor — the legendary tool that can scan through massive amounts of data and pull out meaningful information like emails, IPs, URLs, and even credit card numbers in record time. But what if I told you there’s an upgraded version that goes beyond basic carving — one that digs deep into the very record structures of Windows file systems and event logs?
Let’s talk about Bulk Extractor with Record Carving (bulk_extractor-rec).
-----------------------------------------------------------------------------------------------------------
Why Bulk Extractor (and this fork) Matters
Traditional carving tools (like PhotoRec, Scalpel, or Foremost) are great for recovering deleted files. But they usually focus on whole files — not the records inside them.
https://www.kazamiya.net/en/bulk_extractor-rec
bulk_extractor-rec, on the other hand, looks for specific forensic record types — and this is a game-changer. Why? Because it can pull out the small but crucial artifacts that tell us what happened on a system, even when the original files are gone.
Here’s what it can recover:
EVTX logs — Windows Event Log chunks
NTFS MFT records — metadata for files and folders
$UsnJrnl:$J — change journal entries (fantastic for timeline work)
$LogFile — transactional logs that reveal filesystem changes
$INDEX_ALLOCATION records (INDX) — directory index data
utmp records — Unix/Linux login/logout records
Now, those first five are gold for Windows forensics. These are exactly the artifacts you need to reconstruct activity, detect tampering, or trace attacker movements — especially when original logs or MFT files have been partially overwritten.
-----------------------------------------------------------------------------------------------------------
The Smart Part: Record Reconstruction
Here’s what I really love about bulk_extractor-rec: it doesn’t just rip out raw data — it tries to rebuild valid structures. For example, when it carves out Windows Event Log chunks, it doesn’t just dump fragments. It rebuilds them into valid .evtx files that you can directly open in tools like Event Log Explorer or Eric Zimmerman’s EvtxECmd. That means your recovered logs can be parsed just like normal event logs.
This saves hours of manual hex editing or XML parsing — and makes this fork incredibly practical during investigations.
-----------------------------------------------------------------------------------------------------------
Working with NTFS Artifacts
When carving NTFS-related artifacts (like MFT or USN records), Bulk Extractor outputs two main files:
A clean file with all valid records (for example, MFT or UsnJrnl-J)
A _corrupted file with invalid or partial records that didn’t pass integrity checks
You can feed the valid ones straight into MFTECmd or similar tools for easy parsing (there’s an end-to-end example a little further down). The corrupted ones can still contain useful fragments.
-----------------------------------------------------------------------------------------------------------
Performance and Speed
Bulk Extractor is known for one thing — speed. It’s multi-threaded, and it scans data recursively rather than stopping at the surface — it digs into compressed containers too. Even better, it can process hibernation files (prior to Windows 8) automatically — which often contain tons of evidence about user sessions.
-----------------------------------------------------------------------------------------------------------
Focusing on Unallocated Space
When I’m investigating, I often want to focus carving on unallocated space — that’s where deleted or lost records usually live. Since Bulk Extractor isn’t filesystem-aware (by design), I use another tool — blkls from The Sleuth Kit — to extract just the unallocated clusters first.
Here’s how that works:
blkls image.dd > image.unallocated
This command dumps all the unallocated data into a new file, ready to be carved by Bulk Extractor. You can even extract slack space (the tiny gaps between files) using the -s switch — useful when you want to catch small remnants left behind by deleted files.
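To tie the whole flow together, here is a rough end-to-end sketch. The scanner names come from the bulk_extractor-rec documentation and the output file name follows the naming described above, so treat both as assumptions and confirm them with bulk_extractor -h and a quick look at your output directory before scripting around them.

# carve record types out of the unallocated extract produced by blkls
bulk_extractor -o be_out -e evtx -e ntfsmft -e ntfsusn -e ntfslogfile -e ntfsindx image.unallocated

Then parse the reconstructed MFT records with Eric Zimmerman’s MFTECmd (adjust the path to wherever be_out landed):

MFTECmd.exe -f C:\cases\be_out\MFT --csv C:\cases --csvf carved_mft.csv

If you only care about the record scanners, you can switch off defaults you don’t need with -x <scanner>, and the rebuilt .evtx output drops straight into EvtxECmd the same way.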
-----------------------------------------------------------------------------------------------------------
Alternatives & Complements
As I always say, no single tool does it all (especially if we are using open source) — and that’s totally fine. I often combine bulk_extractor-rec with other tools to maximize recovery:
Joakim Schicht’s NTFS Tools – specialized parsers and carvers for $MFT, $LogFile, and $UsnJrnl
https://github.com/jschicht
EVTXtract (by Willi Ballenthin) – carves EVTX records in raw XML format (great for deep event log recovery)
https://github.com/williballenthin/EVTXtract
One gives you structured .evtx logs, and the other gives you raw XML records — a powerful combo!
-----------------------------------------------------------------------------------------------------------
Final Thoughts
If you’ve never tried Bulk Extractor with Record Carving, you’re missing out on one of the most efficient ways to dig deep into deleted or fragmented forensic artifacts. It’s fast, multi-threaded, reconstructs readable logs, and supports critical NTFS and EVTX records — all in one go. And best of all? It’s free and open-source.
--------------------------------------------------Dean-----------------------------------------------------

  • Every forensic investigator should know these common antiforensic wipers

    Everyone who does digital forensics has seen wipers. The funny part is that attackers and careless admins both sometimes want files gone. Tools that overwrite/delete files — “wipers” — are common and can hide evidence. SDelete (a Sysinternals tool signed by Microsoft) is famous because it can slip past some whitelisting and looks “legit” on a system. But SDelete is only the tip of the iceberg — there are other tools, and each leaves its own marks. Knowing those marks helps you figure out what happened even when the file contents are gone.
------------------------------------------------------------------------------------------------------------
The main players
Here are the common tools investigators run into:
SDelete (Sysinternals) — overwrites file clusters and free space. Popular because admins use Sysinternals and the signature makes it look benign.
BCWipe (commercial) — very thorough, has features to wipe MFT records, slack space, and other NTFS artifacts. Commercial product; a trial exists.
Eraser (open source) — long-lived tool. Renames files many times (seven by default), overwrites clusters, etc.
cipher.exe (built-in Windows) — intended for EFS tasks, but /w: can wipe free space. Very stealthy because it’s a system binary.
------------------------------------------------------------------------------------------------------------
What these tools try to hide — and what they often fail to hide
Wipers attempt to remove traces of a file. But Windows and NTFS create lots of metadata, logs, and side-files that are harder to fully erase. From an investigator’s point of view, the goal is often just to prove the file existed and that wiping happened — not necessarily to recover the original content.
Commonly left-behind evidence includes:
USN Journal ($UsnJrnl) entries — rename, delete, data-overwrite, stream changes. Wipers produce many USN events if they rename/overwrite repeatedly.
NTFS LogFile ($LogFile) — sometimes contains original file names or operations even when MFT entries are gone.
MFT records ($MFT) — deleted or overwritten MFT entries, reused MFT entry numbers (tools may create new files using the same MFT index).
Prefetch / evidence-of-execution — prefetch files and other execution traces often show the wiper ran.
ADS (Zone.Identifier) — some tools (e.g., Eraser) leave alternate data streams like Zone.Identifier intact, which can reveal source URLs or original filenames.
Temporary directories / filenames — e.g., EFSTMPWP left behind by cipher.exe, or ~BCWipe.tmp created by BCWipe.
Odd timestamps — some wipers zero timestamps (e.g., set to the Windows epoch, Jan 1, 1601), which looks suspicious.
Large flurries of rename / DataOverwrite / DataExtend events — a pattern of many sequential operations in a short time window.
------------------------------------------------------------------------------------------------------------
Short tool profiles + investigator takeaways
SDelete
What it does: Overwrites clusters and free space.
Why it’s sneaky: Signed Sysinternals binary → looks legitimate.
Look for: USN and MFT evidence of overwrites, plus prefetch/execution traces for sdelete.exe.
BCWipe
What it does: Commercial, deep-wiping features (MFT, slack, NTFS $LogFile features advertised).
Real behavior: Very noisy in NTFS journals — lots of renames, data-overwrite events, and creative file/directory creation to overwrite metadata (e.g., ~BCWipe.tmp, ... filenames).
Look for: ~BCWipe.tmp directories, massive $UsnJrnl activity in a short time, entries that show rename → overwrite → delete sequences, prefetched BCWipe executables.
Eraser
What it does: Open-source, renames files repeatedly (7 passes by default), overwrites clusters.
Quirks: Sometimes leaves the Zone.Identifier ADS behind; renames and timestamp zeroing (Jan 1, 1601) are common.
Look for: Repeated rename patterns in USN, the C (change) time updated while other timestamps are zeroed, leftover ADS pointing to the original download URL.
cipher.exe
What it does: Windows built-in — /w: wipes free space by creating temporary files to overwrite free clusters.
Quirks: Leaves a directory named EFSTMPWP at the root (observed persisting across reboots in many tests), creates many fil*.tmp files while running.
Look for: The EFSTMPWP directory, temporary fil*.tmp files, prefetch entries showing cipher ran (and Windows event traces of disk activity).
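For reference, here’s roughly what these tools look like when they’re actually run, which helps when you’re matching prefetch names or command lines recovered from EDR telemetry. The SDelete switches below (passes, recurse, clean free space) are from memory of the current Sysinternals build, so treat this as a sketch and confirm with sdelete -? on the binary you find.

sdelete64.exe -p 3 -s C:\Users\<user>\Documents
sdelete64.exe -c C:
cipher.exe /w:C:

The first overwrites a directory tree with three passes, the second overwrites free space on C:, and the cipher command is the built-in equivalent that leaves the EFSTMPWP folder and fil*.tmp temporaries described above.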
------------------------------------------------------------------------------------------------------------
Example artifact patterns to search for
You can use these heuristics in your triage/scripting/searches (see the sketch right after this list):
Search the USN journal for many sequence-like events (rename → data overwrite → delete) within seconds — suspicious.
Look for directory names and temporary filenames: ~BCWipe.tmp, BCW, filXXXX.tmp.
Check prefetch for unexpected executables: sdelete*.pf, bcwipe*.pf, eraser*.pf, cipher*.pf.
Scan for Zone.Identifier ADS on recently deleted files (it may include the original download URL or filename).
Find files with zeroed timestamps (e.g., Jan 1, 1601) — a potential sign of Eraser or other timestamp wiping.
Look for an MFT entry number reused by a later file or directory — it indicates the original MFT record was targeted and may have been overwritten.
Parse $LogFile (the transaction log) for entries that mention original file names even when $MFT shows deletion.
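Here’s a minimal sketch of that first heuristic, assuming you’ve parsed $UsnJrnl:$J with Eric Zimmerman’s MFTECmd; the column and reason names (UpdateReasons, RenameNewName, DataOverwrite, DataExtend) are what I recall MFTECmd emitting, so verify them against your own CSV before building filters on top of this. Adjust the E:\C\ path to wherever your evidence is mounted.

MFTECmd.exe -f E:\C\$Extend\$J --csv C:\temp --csvf usn.csv

Then, from WSL or any Linux box, pull out the noisy update reasons:

egrep -i 'RenameNewName|DataOverwrite|DataExtend' /mnt/c/temp/usn.csv | less

A healthy system generates these events too; what stands out is hundreds of them hitting the same parent directory within a few seconds — exactly the rename → overwrite → delete burst pattern described above.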
------------------------------------------------------------------------------------------------------------
Investigator workflow
Snapshot everything (image the volume) — you need a forensically sound copy.
Parse the MFT and USN — a timeline representation is crucial. Many wipers create big bursts in the USN journal that are easy to see in a timeline.
Check $LogFile and shadow copies — sometimes these hold remnants of filenames or older versions.
Search ADS — Zone.Identifier can unexpectedly reveal the original source/location.
Look for prefetch and execution evidence — often the wiper executable will leave a prefetch or service entry.
Remember SSD caveats — wear-leveling and TRIM can make complete overwrites unreliable on SSDs; artifacts can be missing or inconsistent.
Correlate with logs — application logs, Windows event logs, and backup logs can confirm when delete/wipe activity occurred.
------------------------------------------------------------------------------------------------------------
Caveats and testing notes (be honest about limits)
Tests often assume the active file clusters were overwritten — but you can’t always prove every copy was overwritten (especially on SSDs).
Some wipers advertise wiping certain structures (like $LogFile), but testing showed mixed results — so always verify with artifacts rather than relying on vendor claims.
------------------------------------------------------------------------------------------------------------
Short example: cipher.exe /w:C: — what to expect
If someone runs cipher.exe /w:C: after deleting files:
You may see EFSTMPWP at the C:\ root.
Temporary fil####.tmp files are created and deleted during the run.
There is no direct evidence of which files were wiped (cipher writes free space), but you can correlate deletion times from USN/MFT earlier in the timeline to guess what got targeted.
Prefetch and process execution traces will show cipher.exe ran.
------------------------------------------------------------------------------------------------------------
Wrap-up — final thought
Wipers try to erase content, but they often leave stories. The job of a forensic examiner is to read those stories in metadata, journals, and side-files. Look for patterns — rapid renames, heaps of USN events, leftover temp folders, strange timestamps, MFT reuse, and ADS — and you’ll often reconstruct what happened even when the file is gone.
-------------------------------------Dean---------------------------------------------------------------
