
Search Results


  • Dropbox Forensic Investigations: Logs, Activity Tracking, and External Sharing

    Dropbox presents significant challenges for forensic investigations due to encrypted databases, limited endpoint logs, and obfuscated external IPs. However, with the right approach, investigators can extract valuable metadata, user activity records, and external sharing reports.

🚀 Key Topics Covered:
✅ Extracting Dropbox metadata from local databases
✅ Using SQLECmd to automate SQLite analysis
✅ Tracking user actions via cloud activity logs
✅ Investigating file sharing and external access

--------------------------------------------------------------------------------------------------------

1️⃣ Dropbox Local Artifacts: Databases & Metadata Files
🔍 Where Does Dropbox Store Metadata Locally?

File/Database → Location → Purpose
info.json → %LocalAppData%\Dropbox\ → Dropbox configuration & sync folder location
.dropbox.cache → %UserProfile%\Dropbox\ → Cached & staged file versions
aggregation.dbx → %LocalAppData%\Dropbox\instance<#> → Recent file updates (JSON format)
home.db → %LocalAppData%\Dropbox\instance<#> → Tracks Dropbox file changes (Server File Journal)
sync_history.db → %LocalAppData%\Dropbox\instance<#> → Upload/download activity
nucleus.sqlite3 → %LocalAppData%\Dropbox\instance<#>\sync → List of local & cloud-only files

📌 Forensic Use:
✅ Identify Dropbox folder locations & linked accounts
✅ Recover deleted/staged files from .dropbox.cache
✅ Reconstruct file modification history using home.db

--------------------------------------------------------------------------------------------------------

2️⃣ Automating Dropbox Analysis with SQLECmd
🔍 What is SQLECmd?
SQLECmd is an open-source forensic tool created by Eric Zimmerman to automate SQLite database parsing. It utilizes map files to identify Dropbox, Google Drive, and other forensic databases, automatically extracting file activity, timestamps, and metadata.

What did I do? I used gkape to extract all Dropbox-related files.

📍 Example: Running SQLECmd on Dropbox Data
SQLECmd.exe -d "C:\Users\Akash's\Incident response Dropbox" --csv .

📌 How It Works:
🔹 -d : Specifies the directory to scan (Dropbox data folder)
🔹 --csv . : Saves results as CSV files in the current directory

📌 Forensic Use:
✅ Quickly extract metadata from Dropbox SQLite databases
✅ Identify synced, modified, and deleted files
✅ Analyze file movement within Dropbox folders
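💡 Side note: before running automated parsers, you can sanity-check these databases manually with the sqlite3 command-line shell. A minimal sketch (the path is an example; table layouts vary by Dropbox client version, so list the tables rather than assuming a schema):

sqlite3 "C:\Cases\Dropbox\instance1\sync_history.db" ".tables"
sqlite3 "C:\Cases\Dropbox\instance1\sync_history.db" ".schema"

If the file turns out to be encrypted (common with newer Dropbox clients), sqlite3 will refuse to open it as a database, which is itself a useful finding to note.

--------------------------------------------------------------------------------------------------------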
1️⃣ Dropbox Logging: Free vs. Business Tiers
🔍 Comparing Activity Logs Across Dropbox Tiers

Feature → Basic (Free) → Dropbox Business
File Add/Edit/Delete Logs → ❌ No logs → ✅ Yes
File Download & Upload Logs → ❌ No logs → ✅ Yes
User Login & Session History → ✅ Limited → ✅ Full IP & Geolocation
External File Sharing Reports → ❌ No → ✅ Yes
Export Logs to CSV → ❌ No → ✅ Yes
API Access for Logs → ❌ No → ✅ Yes

📌 Forensic Use:
✅ Track file modifications & deletion history
✅ Identify suspicious logins based on IP & location
✅ Monitor shared links for data exfiltration

--------------------------------------------------------------------------------------------------------

2️⃣ Accessing Dropbox Logs via the Admin Console
🔍 Steps to Retrieve Logs:
1️⃣ Log in to the Dropbox Admin Console
2️⃣ Navigate to Reports > Activity Logs
3️⃣ Use Filters to narrow results by user, file, folder, or event type
4️⃣ Click "Create Report" to export logs in CSV format

📌 Forensic Use:
✅ Track who accessed or modified sensitive files
✅ Identify suspicious external IP addresses
✅ Monitor deleted files & restoration attempts

--------------------------------------------------------------------------------------------------------

3️⃣ Investigating IP Addresses & Geolocation Data
🔍 Analyzing IP Logs for Unauthorized Access
Dropbox logs user IP addresses and device locations, which can help track unauthorized logins.
⚠ Limitations: Dropbox obfuscates some external IP addresses, making it difficult to identify non-employee access.

4️⃣ Tracking External File Sharing & Anonymous Links
🔍 Dropbox Business "External Sharing" Report
Dropbox tracks files shared outside the organization, but free users lack visibility into external recipients.

5️⃣ Advanced Filtering for Dropbox Logs
🔍 Filtering Logs for Specific Investigations
Dropbox allows filtering logs by various criteria, improving forensic analysis. Key Filters for Investigation:

Filter → Use Case
Date Range → Identify activity before & after an incident
User → Track a specific employee's Dropbox usage
File/Folder Name → Find modifications to critical documents
Event Type → Focus on file downloads, sharing, or deletions

-------------------------------------------------------------------------------------------------------------

Before leaving, I want to point out that in forensics, not everything is a piece of cake—there are limitations. The same goes for Dropbox, so let's talk about them.

Understanding Dropbox Event Logging
All Dropbox users, regardless of their plan, have access to basic event logging through the "Events" section. However, users with Business or Advanced Business plans have access to more extensive logging, which is particularly valuable in forensic investigations.

What Does Dropbox Log?
Administrators of Advanced Business plans can track detailed user activity, including:
✔ File-level events – Adding, downloading, editing, moving, renaming, and deleting files.
✔ Sharing actions – Shared folder creation, shared link creation, Paper doc sharing, and permission changes.
✔ Access tracking – Internal and external interactions with shared files and folders.

These logs can be exported in CSV format, allowing investigators to filter data more effectively and analyze additional fields, such as IP addresses. Logs can be retained for years, making them a valuable resource for forensic analysis. However, new event entries may take up to 24 hours to appear.
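To make those CSV exports easier to triage, here is a minimal PowerShell sketch. The file name and the 'Event type' and Time column names are assumptions for illustration; check the header row of your actual export and adjust accordingly:

Import-Csv .\dropbox_activity_export.csv |
    Where-Object { $_.'Event type' -like '*download*' } |
    Sort-Object Time |
    Export-Csv .\downloads_only.csv -NoTypeInformation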
Limitations and Blind Spots in Dropbox Logging
While Dropbox's cloud logging is valuable, it is important to recognize its limitations:
🔹 Limited endpoint visibility – Actions performed on locally stored Dropbox files may not be logged. For example, if a user copies a file from the Dropbox folder to their desktop or an external USB device, Dropbox may not record this activity.
🔹 Synchronization tracking challenges – While Dropbox logs when an unauthorized device connects and authenticates, it does not always track what files were synchronized to that device.
🔹 Difficulty reconstructing deleted files – Dropbox logs make it challenging to determine what files were once in a folder after they are deleted. However, Dropbox's versioning feature can sometimes help retrieve previous versions of a file.

Due to these blind spots, forensic investigators should not rely solely on cloud logs. Instead, combining cloud logs with endpoint forensic analysis (such as examining sync databases and local metadata) provides a more complete picture.

Best Practices for Dropbox Forensics
Since breaches and data theft are inevitable, proactive measures are necessary:
✔ Test forensic scenarios – Simulating real-world incidents can help determine the exact scope of logging available in your environment.
✔ Export and analyze logs regularly – Using CSV exports allows deeper filtering and historical tracking.
✔ Correlate with endpoint forensics – Combining Dropbox logs with local forensic evidence (if available) can help bridge information gaps.

While Dropbox logging isn't perfect, it is still a crucial tool for digital investigations. By understanding its capabilities and limitations, forensic analysts can make informed decisions when investigating incidents involving Dropbox.

-------------------------------------------------------------------------------------------------------

Conclusion
Dropbox forensics is a crucial aspect of modern investigations, as cloud storage plays a key role in how users store, access, and share files. By analyzing local sync folders, logs, SQLite databases, and API activity, forensic analysts can reconstruct file movements, modifications, deletions, and access history with precision. As cloud storage becomes an integral part of personal and corporate data management, the ability to track and analyze Dropbox activity is essential for digital forensics, cybersecurity, and incident response. Staying updated on Dropbox forensic techniques ensures that investigators can effectively follow digital trails and uncover critical evidence.

🚀 Keep exploring, stay curious, and refine your forensic skills—because digital evidence is everywhere! 🔍

🎯 Next Up: Box Forensics – Investigating Cloud Storage Security 🚀

Complete Series Below:
https://www.cyberengage.org/courses-1/mastering-cloud-storage-forensics%3A-google-drive%2C-onedrive%2C-dropbox-%26-box-investigation-techniques

  • OneDrive Forensics: Investigating Cloud Storage on Windows Systems

    Microsoft OneDrive is one of the most widely used cloud storage services, thanks to its default integration in Windows and its enterprise adoption via Microsoft 365. Understanding OneDrive forensic artifacts is crucial for investigations involving data exfiltration, insider threats, or deleted cloud files.

We will cover:
✅ How to locate and analyze OneDrive data on a Windows system
✅ Key forensic artifacts, including logs, databases, and registry entries
✅ How to determine OneDrive activity, authentication, and file synchronization history
✅ How OneDrive’s new sync model affects forensic investigations
✅ Tracking cloud-only files & deleted data
✅ Using OneDrive’s forensic artifacts to recover missing evidence

----------------------------------------------------------------------------------------------------------

1️⃣ Locating OneDrive Files on a Windows System
By default, synced OneDrive files are stored in: %UserProfile%\OneDrive
💡 Important: If a user changes the default storage location, the original OneDrive folder remains empty. The true OneDrive folder location can be found in the Windows registry.

Registry Key to Identify OneDrive Folder Location
NTUSER\Software\Microsoft\OneDrive\Accounts\Personal
This key contains:
UserFolder → The actual OneDrive sync folder location
cid/UserCid → A unique Microsoft Cloud ID
UserEmail → The email used for the Microsoft account
LastSignInTime → Last authentication timestamp (Unix epoch format)
💡 Why This Matters: If OneDrive is enabled, this registry key must exist. Investigators can track user activity even if OneDrive files have been moved or deleted.

----------------------------------------------------------------------------------------------------------

2️⃣ Analyzing OneDrive File Metadata & Sync Database
OneDrive stores metadata and sync information in: %UserProfile%\AppData\Local\Microsoft\OneDrive\settings
This folder contains key artifacts, including:
📌 SyncEngineDatabase.db (Main OneDrive Database)
Tracks both local and cloud-only files
Lists file names, folder structure, and metadata
Provides timestamps for file sync operations
💡 Why This Matters: Even cloud-only files (not stored locally) are recorded here. Investigators can track deleted or moved files that no longer exist on the device.

----------------------------------------------------------------------------------------------------------

3️⃣ OneDrive Logs: Tracking Uploads, Downloads, & File Changes
OneDrive keeps detailed logs of file sync activities in: %UserProfile%\AppData\Local\Microsoft\OneDrive\logs
These logs store up to 30 days of data and record:
✅ File uploads & downloads
✅ File renames & deletions
✅ Shared file access events
💡 Forensic Insight: Log files can reveal file activity, even if the user deleted local copies. Timestamps in .odl logs can correlate file transfers with other system activity.
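💡 Quick check: on a live system logged in as the user of interest, the registry values above can be pulled with built-in tooling (for an offline NTUSER.DAT you would load the hive first). A minimal sketch, with example values:

reg query "HKCU\Software\Microsoft\OneDrive\Accounts\Personal" /v UserFolder
reg query "HKCU\Software\Microsoft\OneDrive\Accounts\Personal" /v LastSignInTime

Since LastSignInTime is stored as Unix epoch seconds, it can be converted in PowerShell:

[DateTimeOffset]::FromUnixTimeSeconds(1700000000).UtcDateTime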
----------------------------------------------------------------------------------------------------------

4️⃣ OneDrive for Business: Additional Registry Artifacts
Users with OneDrive for Business (Microsoft 365) will have a separate registry key:
NTUSER\Software\Microsoft\OneDrive\Accounts\Business1
This key includes:
UserFolder → Location of the root of OneDrive local file storage
UserEmail → Email tied to the Microsoft cloud account
LastSignInTime → Date and time of last authentication (Unix epoch time)
ClientFirstSignInTimestamp → Time of first authentication of the account (Unix epoch time)
SPOResourceID → SharePoint URL for the OneDrive instance
💡 Why This Matters: Business OneDrive accounts store work-related data—a key forensic focus. The SPOResourceID can link OneDrive for Business files to a SharePoint instance.

----------------------------------------------------------------------------------------------------------

5️⃣ Investigating Shared Files & Synced Data from Other Users
OneDrive supports file sharing and folder synchronization across multiple accounts. Shared folders are tracked under:
NTUSER\Software\Microsoft\OneDrive\Accounts\Personal\Tenants
NTUSER\Software\Microsoft\OneDrive\Accounts\Business1\Tenants
This key logs shared folders synced to OneDrive. It tracks files shared via Microsoft Teams & SharePoint.
💡 Forensic Insight: Shared folders may not be stored in the default OneDrive folder. Investigators should check all Tenant folders to avoid missing critical evidence.

----------------------------------------------------------------------------------------------------------

6️⃣ SyncEngines Key: Advanced OneDrive Tracking
A final high-value artifact for OneDrive investigations is:
NTUSER\Software\SyncEngines\Providers\OneDrive
It contains:
MountPoint → Local file storage location (useful for tracking shared folders)
UrlNamespace → Specifies whether the folder belongs to OneDrive, SharePoint, or Teams
LastModifiedTime → The last time the folder was updated
💡 Why This Matters: Identifies all folders being synced, even if they are not in the default OneDrive location. Correlates data across Microsoft cloud services (OneDrive, Teams, SharePoint).
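💡 Enumeration sketch: each synced location typically sits under its own subkey of the provider key, so on a live system you can list them all at once. A minimal PowerShell sketch (assumes the current user's loaded hive; for an offline image, load NTUSER.DAT into a temporary hive first):

Get-ChildItem "HKCU:\Software\SyncEngines\Providers\OneDrive" |
    ForEach-Object { Get-ItemProperty $_.PSPath | Select-Object MountPoint, UrlNamespace, LastModifiedTime }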
----------------------------------------------------------------------------------------------------------

7️⃣ Tracking OneDrive Web Access (Cloud-Only Activity)
If a user accessed OneDrive through a web browser (instead of the local app), artifacts may appear in:
Browser History (Edge, Chrome, Firefox)
Windows Event Logs
Cloud Access Logs (if available from Microsoft 365)
OneDrive web access URLs look like this:
https[:]//onedrive.live.com/?cid=310ff47e40c97767&id=310ff47e40c97767!145750
💡 Forensic Insight: The cid value in the URL matches the UserCid in registry keys—helpful for tracking multiple accounts. The id parameter refers to specific files or folders accessed via the web client.

----------------------------------------------------------------------------------------------------------

🛑 Key Challenges in OneDrive Forensics
🚨 1. Cloud-Only Files May Not Be Stored Locally
Files accessed via "Files on Demand" may never be fully downloaded. Investigators must analyze metadata & sync logs to track cloud-only data.
🚨 2. Remote Deletions Can Hide Evidence
Files deleted in OneDrive sync across all devices. Investigators may need Volume Shadow Copies or Microsoft 365 logs to recover data.
🚨 3. Personal & Business OneDrive Accounts Can Be Mixed
Users often log into both accounts on the same system. Check registry keys to differentiate personal vs. business data.

----------------------------------------------------------------------------------------------------------

OneDrive as a Crucial Forensic Artifact
Microsoft OneDrive leaves behind substantial forensic evidence, even for files that no longer exist locally. We will explore more about OneDrive in the next article (Advanced OneDrive Forensics: Investigating Cloud-Only Files & Synchronization), so stay tuned! See you in the next one.

Complete Series Below:
https://www.cyberengage.org/courses-1/mastering-cloud-storage-forensics%3A-google-drive%2C-onedrive%2C-dropbox-%26-box-investigation-techniques
--------------------------------------------Dean-------------------------------------------------

  • OAlerts.evtx — The Hidden Microsoft Office Evidence Log

    Most people have never heard of it. But when someone opened a suspicious file, deleted emails to cover their tracks, or tried to access an encrypted document they weren't supposed to — Office quietly wrote it all down.

---------------------------------------------------------------------------------------------------------

Wait, What Even Is OAlerts?
Okay, let me start with a question. You know when you're about to close a Word document and it hasn't been saved, and that little popup appears saying "Do you want to save changes to Document1?" — that annoying box that's interrupted everyone's day a thousand times? Right. Turns out every single time that box appears, Windows writes a note about it. Name of the file. Timestamp. What the message said. Everything.

That's OAlerts.evtx in a nutshell. Every time Microsoft Office shows the user a dialog box — any application, any alert — the contents of that dialog get logged in a custom Windows Event Log file called OAlerts.evtx. It's been there since Office 2010, and most investigators completely miss it.

📁 Location: C:\Windows\System32\winevt\Logs\OAlerts.evtx

You can open it directly in Windows Event Viewer — just search "event viewer" in the Start menu, navigate to Applications and Services Logs, and find OAlerts. Every single event in this log has the same Event ID: 300. That's it. Just one ID. The application name and the message content sit inside the event description, which is the part worth reading.

---------------------------------------------------------------------------------------------------------

Why Does This Matter Forensically?
Here's the thing — most forensic artifacts tell you what files exist. Shellbags show you folders that were browsed. LNK files show you files that were opened. Jump Lists show recently accessed documents. These are all useful. But these artifacts do not reliably show content changes. That gap is a real problem when you're trying to prove someone tampered with data.

OAlerts fills that gap in a really specific way. Because the "unsaved changes" dialog only appears when there are unsaved changes, seeing that event in the log is evidence that the file was opened and modified. The name of the file is recorded verbatim in the log entry. That's far more than most artifacts give you.

One more thing worth calling out: it doesn't matter where the file lives. Local drive, USB stick, network share — if Office shows a dialog about it, OAlerts records it. That means you can catch file activity on removable media that other artifacts might miss entirely.

---------------------------------------------------------------------------------------------------------

The Scenarios You'll Actually Encounter
Let's go through the real situations where OAlerts becomes useful. These aren't edge cases — I've ordered them by how often they come up in investigations.

---------------------------------------------------------------------------------------------------------

Real Examples — Let's Look at Actual Events
This is where it gets interesting. Let me show you three events you might encounter — the kind that show up in real investigations. Notice how the log records the dialog message word for word. Whatever Windows showed the user on screen is exactly what ends up in the log.

Example 1 — Someone opened something they shouldn't have
This one is a classic. Someone on the system had a document called "handles.xlsx" open in Word. They closed it without saving. Word showed the standard "save changes?" dialog — and OAlerts faithfully recorded the entire thing, including the filename. Now we know this document existed on this machine, was opened in Word, was modified (because unsaved changes existed), and the user interacted with it at this exact timestamp.

Example 2 — Someone emptied their email trash
Outlook is one of the most forensically opaque applications in the Office suite — there aren't many artifacts that track what a user actually did inside it. OAlerts is one of the few exceptions. When a user right-clicks their Deleted Items folder and chooses "Empty Folder", Outlook asks for confirmation first. That confirmation dialog — and the fact it was triggered — goes straight into OAlerts.

Worth noting: OAlerts doesn't record which user account triggered the event. The log entry itself doesn't have user identity. But you can cross-reference it with Windows Security Event Log logon events (4624/4648) around the same timestamp to work out who was active on the machine at that moment.

Example 3 — Wrong password on an encrypted document
This one is particularly interesting for insider threat investigations. When someone tries to open a password-protected Office document and enters the wrong password, Word shows an error dialog. And yes — OAlerts records that too. You'll see the filename and a note that the password was incorrect. This could mean the document was encrypted and the person trying to open it wasn't supposed to have access.

---------------------------------------------------------------------------------------------------------

At a Glance — Common Events You'll See
Here's a quick reference of the most common scenarios you'll encounter in this log, what they look like in the event description, and what each one tells you forensically.

--------------------------------------------------------------------------------------------------------

Connecting OAlerts to the Bigger Picture
OAlerts doesn't tell you the whole story on its own — but it connects really well with other artifacts. Here's how I think about combining it with other evidence sources. Think of it this way: OAlerts tells you what happened (a specific file was modified, emails were deleted, a bad password was entered). The Security Event Log tells you who did it (which account was logged in). LNK files and Jump Lists tell you where the file lived (the full path on disk). Together they build a timeline that's hard to dispute.

---------------------------------------------------------------------------------------------------------

How to Collect It During an Investigation
Collecting OAlerts.evtx is the same as collecting any other Windows event log. The file lives at C:\Windows\System32\winevt\Logs\OAlerts.evtx. On a live system you can copy it with administrative rights. On a forensic image you just navigate to that path within the image and extract it.

For parsing, my favorite option is Eric Zimmerman's EvtxECmd.exe — it will parse the log cleanly into CSV, and you can open the output in Timeline Explorer, which means OAlerts events slot right into the same workflow as your other SRUM and event log data. Log Parser and PowerShell's Get-WinEvent both work too.
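A minimal sketch of both approaches (paths are examples):

EvtxECmd.exe -f "C:\Cases\OAlerts.evtx" --csv "C:\Cases\Out"

Or, triaging an extracted copy directly in PowerShell:

Get-WinEvent -Path "C:\Cases\OAlerts.evtx" | Where-Object Id -eq 300 | Select-Object TimeCreated, ProviderName, Message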
---------------------------------------------------------------------------------------------------------

Quick Reference
--------------------------------------------Dean----------------------------------------------------

  • SRUM-DUMP v3: A Practical Guide to Windows Forensics with the New GUI and Features

    Intro
In previous articles we covered ESEDatabaseView for raw database exploration and SrumECmd for fast command-line parsing:
https://www.cyberengage.org/post/how-to-use-srumecmd-to-parse-and-analyze-srudb-dat-files
https://www.cyberengage.org/post/examining-srum-with-esedatabaseview

This article introduces a fourth approach: SRUM-DUMP v3. Version 3 is a significant redesign from 2.6. If you want to learn how version 2.6 works, check out the article below:
https://www.cyberengage.org/post/making-sense-of-srum-data-with-srum_dump-tool

The old single-dialog interface is gone. In its place is a three-step GUI wizard, a JSON configuration system, a "dirty words" feature for keyword highlighting, built-in Volume Shadow Copy support for locked live-system files, and full command-line support for automated workflows. If you used the old version and haven't upgraded, the new interface looks entirely different.

-------------------------------------------------------------------------------------------------------

Section 1 — What Changed from 2.6 to 3.2
The 2.6 interface was a single dialog — you selected files, clicked OK, and got a spreadsheet. Fast and simple, but no opportunity to guide the analysis. Version 3 rebuilds the workflow around a three-step process that adds two important new concepts: the configuration file and dirty words.

The configuration file (srum_dump_config.json) is generated after the tool's first pass through the database. It lists every process path, user SID, and network interface found — before the full analysis runs. This lets you see exactly what's in the database before committing to the full extraction. You can rename entries to be more readable, and flag specific strings for highlighting.

The dirty words feature lets you define keywords that will be colour-coded in the output. Any string matching cmd.exe, powershell.exe, a specific malware filename, a suspect username, or a suspicious network name will be highlighted in the colour you specify. This means when you open the output spreadsheet, your points of interest are already visually flagged — you don't have to manually scan thousands of rows.

The biggest operational change is locked file handling. SRUM-DUMP 3 can extract SRUDB.dat from a live system through Volume Shadow Copies without requiring manual esentutl repair first. And if the database is corrupt, there are now two ESE parsing engines to try — pyesedb and dissect — and switching between them is a single flag change.

-------------------------------------------------------------------------------------------------------

Section 2 — The Three-Step GUI Wizard
Download the prebuilt executable from the Releases page at github.com/MarkBaggett/srum-dump. No installation is required — just run the executable. The interface opens on a three-step wizard.

Step 1 is file selection. You choose an empty output directory first, then select SRUDB.dat. On a live system you'll find it at C:\Windows\System32\sru\srudb.dat, and administrative privileges are required. If those files are locked by the OS — which is normal on a running system — SRUM-DUMP will extract them through Volume Shadow Copies automatically. You don't need to manually copy or repair the database first. You can also optionally provide the SOFTWARE registry hive, which enables automatic resolution of interface LUIDs to human-readable SSID network names.

Step 2 is configuration review. After the initial analysis pass, the tool generates srum_dump_config.json and opens it for editing.
This is where SRUM-DUMP v3 fundamentally differs from version 2.6. Before the full extraction runs, you can see every process, user, and network in the database. You can rename entries to make them more readable, and you can define dirty words to highlight during analysis.

Step 3 is execution. Click Confirm, then Continue. A progress dialog appears and the Close button is disabled until the analysis completes. The output — an Excel spreadsheet with one tab per SRUM table — is written to your output directory. Open it and your dirty word matches are already highlighted in the colours you defined.

-------------------------------------------------------------------------------------------------------

Section 3 — The Configuration File and dirty_words
The configuration file is the most powerful new feature in version 3. It's generated automatically after the first analysis pass and saved as srum_dump_config.json in your output directory. Think of it as a manifest — before the full extraction runs, SRUM-DUMP has already identified every process path, user SID, and network interface in the database and listed them here.

The dirty_words section is where you define keywords to colour-highlight in the output. Any string you add — a process name, a username, a network name — will be changed to the specified colour wherever it appears in the spreadsheet.

"dirty_words": {
    "cmd.exe": "highlight-red",
    "powershell.exe": "highlight-yellow",
    "suspicious_process": "general-red-bold"
}

Available colours include highlight-red, highlight-yellow, and general-red-bold. Note that dirty words do have a processing cost — adding many of them will increase analysis time. Use them for your top suspects rather than broad filters.

The strings section is also editable. Each AppID and UserID in the database is listed with its resolved string. If you know the username behind a SID, or want to flag a network with a label like "SuspectWifi", you can modify the strings here before the final run. String modifications don't carry the same performance cost as dirty words, so use them freely.

A practical workflow: after the initial pass generates the config file, scan the process list for anything unusual before adding dirty words. If you're looking for a specific piece of malware, add its filename. If you know a suspect username, add that. If a particular network is under scrutiny, add that too. Then run the full analysis — the output will have your suspects pre-highlighted, saving significant manual review time on large datasets.

-------------------------------------------------------------------------------------------------------

Section 4 — Command-Line Usage
SRUM-DUMP 3 includes full command-line support, making it suitable for scripted workflows, automated collection pipelines, and integration with tools like KAPE. Every option available in the GUI is also available as a CLI flag.

The --NO_CONFIRM flag (-q) skips the configuration review dialog, which is what you want for automated processing. The --ESE_ENGINE flag is particularly useful when dealing with corrupt databases — if the default pyesedb engine fails or produces incomplete output, switching to dissect sometimes recovers data the other engine cannot. The --OUTPUT_FORMAT flag lets you choose CSV instead of XLS, which is useful when piping output into other tools. All flags use double-dash for long form and single-dash for short form. The input file flag is --SRUM_INFILE or -i. Output directory is --OUT_DIR or -o.
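Putting those flags together, a hedged example of an automated run (the executable name varies by release, so check your download; the paths are examples):

srum_dump.exe -i "C:\Cases\SRUDB.dat" -o "C:\Cases\Out" -q --ESE_ENGINE dissect --OUTPUT_FORMAT csv

Here -q suppresses the configuration review for unattended use, and the engine and format values are the ones named above.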
-------------------------------------------------------------------------------------------------------

Section 5 — Reading the Output: Three Key Tables
The Excel spreadsheet output contains one tab per SRUM table. The three most important for investigation are Network Connectivity Usage, Network Data Usage, and Application Resource Usage.

Network Connectivity Usage
The Network Connectivity Usage table documents when the system connected to each network, how long each connection lasted, and which wireless interface was used. This is one of the best tables for establishing the physical location of a computer during a given time window.

Each row represents a SRUM entry. The timestamp column shows when the SRUM entry was recorded — typically at each hour boundary. The network interface column tells you the protocol used — Wireless 802.11 for WiFi. The network name column shows the resolved SSID if a SOFTWARE hive was provided. The connected time column shows how long that network had been connected at the time of the SRUM entry. The connect start time column shows when the connection originally began.

One important interpretation note: you will frequently see the same network appearing across two SRUM entries with the same ConnectStartTime. This is not two separate connections — it's a single connection spanning across two recording periods. When you see this pattern, take the ConnectStartTime as the beginning of the connection and the longest ConnectedTime value as the total duration. Adding the two time values together would overstate the connection length. Additionally, if two consecutive SRUM entries are more than 60 minutes apart (plus or minus 10 minutes), a system shutdown or hibernate almost certainly occurred during that gap. SRUM entries are typically recorded every hour, so a gap larger than that is meaningful.

-------------------------------------------------------------------------------------------------------

Network Data Usage
The Network Data Usage table records which specific applications were using each network during each SRUM recording period. Where the Connectivity table tells you which networks the system was connected to and when, this table tells you what was happening on those networks at the application level.

Each row shows an application path, the user SID responsible for running it, the network it communicated on, and the total bytes sent and received since the last SRUM entry. The application path is the full executable path, already resolved from AppID by the tool.

This table is particularly useful for identifying unusual data volumes. If you see a process transferring large amounts of data on a network that wouldn't normally have that traffic, or a process you wouldn't expect to be making network calls at all, those are your investigative leads. Filter by the AppId column to isolate specific applications across all recording periods and see their full network activity history.

One precision note: the byte counts represent raw data at the protocol level and include protocol overhead and compressed data. They may not match exactly to a file size you're trying to account for, but they are reliable for identifying relative volumes and patterns.

-------------------------------------------------------------------------------------------------------

Application Resource Usage
The Application Resource Usage table is the broadest of the three — it records all applications active during each SRUM recording period, not just those using the network.
This makes it the primary table for evidence of application execution. Each row records the full executable path, the user SID, the volume and directory, CPU cycle time for foreground and background processing, memory working set size, and foreground and background bytes read and written to disk. The timestamp represents the end of the 60-minute window during which that application was active.

For a forensic analyst, the most immediately useful fields are the application path, the user SID, and the timestamp. Together these establish what ran, under whose account, and during what hour. Cross-referencing with the Network Data Usage table for the same time window shows whether that application was also making network connections. The CPU and disk I/O fields require more research before they can be relied upon for definitive conclusions — the forensic community is still developing best practices for interpreting them. The execution evidence they provide, however, is solid.

-------------------------------------------------------------------------------------------------------

Conclusion + Quick Reference
SRUM-DUMP 3 takes what was already a capable forensic tool and adds the investigative workflow it was always missing. The configuration file gives you a preview of what's in the database before you commit to the full extraction. Dirty words mean your suspects are flagged before you open the spreadsheet. Volume Shadow Copy support removes the manual locked-file problem from the process entirely.

The three key tables remain the same as in version 2.6 — Network Connectivity Usage for location and connection timeline, Network Data Usage for application-level bandwidth and exfiltration indicators, and Application Resource Usage for comprehensive execution evidence.

SRUM-DUMP is available free. It is the fourth tool in this series covering SRUM analysis, alongside ESEDatabaseView, SrumECmd, and the broader raw database exploration approaches documented in earlier articles.
----------------------------------------------------Dean----------------------------------------------------
Check out the series link below:
https://www.cyberengage.org/courses-1/srum%3A-unveiling-insights-for-digital-investigations

  • How to Use SrumECmd to Parse and Analyze SRUDB.dat Files

    Intro
The Windows operating system maintains various logs and databases for performance monitoring, user activity tracking, and resource usage statistics. One such database is the SRUDB.dat file — the System Resource Usage Database. For forensic analysis, performance troubleshooting, and security auditing, parsing and analyzing this database can provide valuable insights.

Eric Zimmerman's tool SrumECmd (currently v0.5.0.1) is designed to facilitate extraction and analysis of data from SRUDB.dat. It parses the database, optionally cross-references the SOFTWARE registry hive to resolve network names, and outputs clean CSV files ready for analysis in Timeline Explorer.

One underappreciated forensic advantage: SrumECmd can surface evidence of applications that no longer exist on disk. Because SRUM records execution data independently of whether the executable is still present, deleted malware or removed tools still leave traces in the database. If it ran, SRUM recorded it.

Section 1 — Prerequisites
Before you start, ensure you have the following:
SrumECmd — Download from Eric Zimmerman's official toolkit at ericzimmerman.github.io. Use the .NET 6 or .NET 9 version. Extract to a convenient location such as C:\Tools\ZimmermanTools\. No installation required.
SRUDB.dat — Located at C:\Windows\System32\sru\SRUDB.dat. On a live running system this file is locked by Windows — you cannot copy it with File Explorer. Use a forensic imager or live triage tool. Always work on copies, never originals.
SOFTWARE hive — Located at C:\Windows\System32\config\SOFTWARE. This is optional but strongly recommended. Without it, L2ProfileId values in the network tables remain as raw numbers and network names won't be resolved. Collect it at the same time as SRUDB.dat.
KAPE (Optional) — Available free for non-commercial use from Kroll at kroll.com. Automates the entire collection and parsing workflow and handles locked file access on live systems.

Section 2 — Running SrumECmd: Verified Command Syntax
There are two primary modes — file mode (-f) and directory mode (-d). Use -f when you know the exact path to your copied SRUDB.dat. Use -d when pointing at a folder, which is the mode used with KAPE — it recursively scans the directory and locates both SRUDB.dat and the SOFTWARE hive automatically. You only need one or the other, not both. In either case, --csv is required and must be a full path in double quotes. The -r flag for the SOFTWARE hive is optional but strongly recommended for any network investigation — without it, you'll have unresolved numeric network IDs throughout your output. Open the command prompt as Administrator before running. Both modes are shown below.
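A sketch of both modes, reconstructed from the flags described above (the paths are examples):

File mode: SrumECmd.exe -f "C:\Cases\SRUDB.dat" -r "C:\Cases\SOFTWARE" --csv "C:\Cases\Out"
Directory mode: SrumECmd.exe -d "C:\Cases\Triage" --csv "C:\Cases\Out"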
Section 3 — Dirty Database: Official Repair Process
When you run SrumECmd against a SRUDB.dat copied from a live machine, you will almost certainly encounter this error: "Error processing file! Message: Object reference not set to an instance of an object." This means the database is dirty — it wasn't cleanly shut down before you copied it. This is the norm in incident response, not an exception. The fix is a two-command esentutl repair process.

Section 4 — Output Files and Data Retention
After a successful run, navigate to your output directory. You'll find several CSV files — one per SRUM table parsed. Open them in Timeline Explorer for filtering, date ranges, and column pivoting.

Data retention is not uniform across tables. Application and process data is kept for approximately 30 days. Network usage data is retained for approximately 60 days. This difference matters for your collection timeline — network evidence may extend twice as far back as application evidence, so don't assume they share the same window.

The application resource usage CSV is particularly valuable for malware hunting. SRUM records application execution independently of whether the file still exists on disk. Deleted executables still appear with their full path, the user SID that ran them, and execution timestamps. This is frequently the only remaining evidence of tools that were run and then cleaned up.

Section 5 — KAPE Integration (Corrected Syntax)
The correct KAPE syntax uses two separate flags: --tdest for where collected raw files are saved, and --mdest for where parsed module output is written.
.\kape.exe --tsource C: --tdest "F:\Output for testing" --target SRUM --gui
Additionally, there is a second KAPE module worth knowing about: PowerShell_SrumECmd_SRUM-RepairAndParse. This module uses a PowerShell script (SRUM-Repair.ps1) to automatically handle the dirty state repair and then run SrumECmd in a single step. This is particularly useful when you know your source database is likely dirty and you want to automate the entire collect → repair → parse workflow.

Section 6 — Correlating with Other Windows Artifacts
SrumECmd CSV output becomes significantly more valuable when you correlate it against other Windows artifacts in Timeline Explorer. No single artifact tells the whole story — building a reliable timeline means cross-referencing multiple sources.

Prefetch is the most natural first check. If AppResourceUseInfo shows an executable running at a specific time, Prefetch should confirm the same execution. Discrepancies between the two are worth investigating — if SRUM has it but Prefetch doesn't, the executable may have been run in a way that bypasses Prefetch tracking.

UserAssist in the registry records interactive application launches via Explorer. If SRUM shows execution but UserAssist has no record, the application was likely launched programmatically rather than by the user clicking on it directly — relevant for distinguishing manual from automated actions.

Event Logs (specifically logon events 4624 and 4634) let you confirm which user accounts were active during time windows that show up in SRUM. Combining these with the UserId field in SRUM output builds a strong attribution chain.

Browser history and web artifacts explain the destination of high-volume network transfers recorded in SRUM network tables. If SRUM shows a browser process sending 800MB outbound, browser history tells you where it went.

Conclusion + Quick Reference
Eric Zimmerman's SrumECmd is a powerful tool for parsing and analyzing SRUDB.dat files, providing detailed insights into system resource usage and user activity. Whether you use it standalone or integrate it with KAPE for automated workflows, SrumECmd can significantly enhance your forensic and troubleshooting capabilities.
--------------------------------------------Dean--------------------------------------------------

  • Unpacking SRUM: The Digital Forensics Goldmine in Windows

    Updated on 25 Feb, 2026

Intro
In the previous article on SRUM we covered the basics — what the database is, why it matters for digital forensics, and a few real-world cases where it changed everything. But if you're doing serious incident response or forensic analysis, the basics only take you so far. The deeper you go into SRUM, the more useful it becomes — and the more edge cases you run into.

What does SRUM actually track? Where does the data live before it's written to the database? What changed between Windows 8 and Windows 11? Why do all your SRUM entries have the same timestamp — and what does that actually mean? And what do you do when the database file comes back corrupted? This article answers all of those questions. Let's get into the technical side of the System Resource Usage Monitor (SRUM), a powerful tool that has become a game-changer in digital forensic investigations.

Section 1 — What SRUM Actually Records (All Four Categories)
SRUM doesn't just log one thing — it maintains four distinct categories of data, each tracked separately and each telling a different part of the story. Understanding what lives in each category is key to knowing which one to focus on during an investigation. The most valuable for most investigations are the three big ones: what applications ran (and who ran them), what happened on the network, and how the system connected to networks. Energy usage rounds things out and is often overlooked, but it has its own unique value — especially when you need to prove whether a device was powered on and active during a specific window of time.

Section 1.1 — Accessing and Managing SRUM Data
Users can get a glimpse of SRUM data through the Task Manager's "App history" and "Details" tabs, showcasing performance statistics and approximately 30 days of historical data. However, a mere click on "Delete usage history" doesn't erase SRUM data immediately, which makes the data retention and purging policies worth investigating further.

Section 2 — Where the Data Lives and When It Gets Written
This is one of the most important technical details to get right — and one that trips up a lot of analysts on their first few SRUM investigations. SRUM doesn't write data instantly. It batches everything and flushes it out on a schedule.

In Windows 8 and 8.1, SRUM performance data was first staged in the Windows registry under HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SRUM, and then transferred into the SRUDB.dat database approximately once per hour, or when the system properly rebooted or shut down. Think of the registry as the short-term buffer.

Windows 10 and 11 changed this. Data is still written to SRUDB.dat every 60 minutes, but the registry staging step is largely gone. The SOFTWARE registry is now only referenced to look up the names of the SRUM database tables — the actual pre-write buffer doesn't live there anymore.

There's also an important quirk discovered in Windows 10 Version 2004 and carried through Windows 11: SRUDB.dat is not always written on shutdown. Testing showed that if a system is shut down twice within 10-minute intervals, data may not be flushed until the third reboot — specifically, once the system has been running for more than 60 minutes since the last SRUM entry. This matters a lot in live forensics scenarios where you're racing against a reboot.

Section 3 — How Long Does SRUM Keep Data?
The standard answer you'll hear is that SRUM keeps about 30 days of data — and that's largely true for most tables.
But the full picture is more nuanced, and there's one major exception that's incredibly useful.

For most SRUM tables, data is retained for 30 days under normal operation. It's not uncommon to find 60 days' worth of historical records, though — Windows doesn't always purge aggressively. Data beyond 60 days is typically gone. One important operational note: if a system is powered off for an extended period (weeks), when it boots back up, SRUM may immediately purge anything older than 30 days. So the clock on that older data can run out fast when a device comes back online.

The exception is the Energy Usage LT table — that "LT" stands for Long-Term. This table operates on a completely different retention schedule. In some cases, the Energy Usage LT table has contained data going back more than four years. The trade-off is that it only tracks high-level information: whether the system was running on AC or DC power, and for how long. But in the right case, four years of "was this laptop plugged in or on battery at this time" data can be surprisingly powerful.

There's also one more lifeline worth knowing about: SRUM is one of the artifacts that gets captured inside Volume Shadow Copies. If shadow copies are available on the system, you can potentially pull historical versions of the SRUDB.dat database from earlier points in time — effectively extending your forensic window even further.

Section 4 — Dealing With a Dirty or Corrupted SRUM Database
Here's a practical reality of incident response: most of the time, systems don't get cleanly shut down before investigators get to them. Someone pulls a power cable, a system crashes, or the machine is seized while running. This means the SRUDB.dat file you're working with may be in a "dirty" state — the ESE database wasn't properly closed and the file could be partially corrupted.

The good news is Windows has a built-in tool for exactly this situation: esentutl.exe. This utility handles defragmentation, recovery, integrity checking, data dumping, and repair of ESE databases. It's already on the machine — you just need to know the right commands.

There are two important rules when using esentutl to repair a SRUM database. First: always run the repair on the same version of Windows as the system the dirty database came from. ESE database formats have version differences, and repairing a Windows 11 database on a Windows 10 machine (or vice versa) can make things worse. Second: always check the database header first before attempting repair — this confirms it's actually dirty before you do anything to it.

If deleted records have been removed from the SRUM database, there's also a recovery path worth knowing about. A utility called EseCarve can potentially recover deleted entries from the ESE database file through carving techniques — the same general approach used for file carving during traditional forensics.
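A hedged sketch of that check-then-repair sequence using esentutl's standard switches (run from the folder holding your working copy of the database and its sru*.log files; paths are examples):

esentutl.exe /mh SRUDB.dat     (dump the header; look for "State: Dirty Shutdown")
esentutl.exe /r sru /i         (recovery pass using the "sru" log file prefix, ignoring missing logs)
esentutl.exe /p SRUDB.dat      (hard repair; a last resort, since it can discard damaged pages)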
Section 5 — SRUM Extensions: The Table GUIDs Explained
Under the registry key SOFTWARE\Microsoft\Windows NT\CurrentVersion\SRUM, there are three subkeys: Extensions, Parameters, and Telemetry. The Extensions subkeys are particularly useful for analysts because they map directly to the tables inside the SRUDB.dat database — each GUID corresponds to a specific table and tells you what kind of data to expect there. In Windows 10 (from version 1803 onwards) and Windows 11, there are nine extension subkeys.

Not all of them have equal forensic value — the research community consistently finds that the three most valuable tables are the Network Connectivity Usage Monitor, the Network Data Usage Monitor, and the Application Resource Usage Provider. These are the ones you should prioritize in an investigation.

Windows 11 made only one change to the table structure: the Energy Usage Provider table gained two additional columns — "Battery Count" and "Battery Charge Limit." Neither has proven particularly useful in analysis so far, but it's worth knowing they're there.

The SRUM extension subkeys have also evolved across Windows versions, particularly the Energy Estimation Provider, which changed its GUID three times between Windows 10 versions 1511, 1607, and 1803. If you're analyzing systems across different Windows versions, you need to be aware that the same table may be stored under different GUIDs depending on the OS version.

Section 6 — How SRUM Evolved Across Windows Versions
SRUM has been in constant evolution since it first appeared in Windows 8. For forensic analysts working across multiple systems running different versions of Windows, understanding these changes is genuinely important — what you find in SRUDB.dat on a Windows 8 machine looks different from what you'll find on a Windows 11 system, and the tools and techniques need to account for that.

Conclusion
SRUM is one of those artifacts that rewards the analyst who takes the time to understand it properly. The basics are straightforward — but once you dig into the write timing behavior, the table structure, the version differences, and the database repair workflow, it becomes a genuinely powerful tool in your arsenal.
---------------------------------------------------Dean------------------------------------------------

  • SRUM: The Digital Detective in Windows

    Intro In this article on SRUM we covered the basics — what the database is, why it matters for digital forensics, and a few real-world cases where it changed everything. But if you're doing serious incident response or forensic analysis, the basics only take you so far. The deeper you go into SRUM, the more useful it becomes — and the more edge cases you run into. What does SRUM actually track? Where does the data live before it's written to the database? What changed between Windows 8 and Windows 11? Why do all your SRUM entries have the same timestamp — and what does that actually mean? And what do you do when the database file comes back corrupted? This article answers all of those questions. Let's get into the technical side of SRUM. System Resource Usage Monitor (SRUM), a powerful tool that has become a game-changer in digital forensic investigations. Section 1 — What SRUM Actually Records (All Four Categories) SRUM doesn't just log one thing — it maintains four distinct categories of data, each tracked separately and each telling a different part of the story. Understanding what lives in each category is key to knowing which one to focus on during an investigation. The most valuable for most investigations are the three big ones : what applications ran (and who ran them), what happened on the network, and how the system connected to networks . Energy usage rounds things out and is often overlooked, but it has its own unique value — especially when you need to prove whether a device was powered on and active during a specific window of time. Section 1.1 - Accessing and Managing SRUM Data Users can get a glimpse of SRUM data through the Task Manager's "App history" and "Details" tabs, showcasing performance statistics and approximately 30 days of historical data. However, a mere click on "Delete usage history" doesn't erase SRUM data immediately, requiring further investigation into data retention and purging policies. Section 2 — Where the Data Lives and When It Gets Written This is one of the most important technical details to get right — and one that trips up a lot of analysts on their first few SRUM investigations. SRUM doesn't write data instantly. It batches everything and flushes it out on a schedule . In Windows 8 and 8.1 , SRUM performance data was first staged in the Windows registry under HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SRUM, and then transferred into the SRUDB.dat database approximately once per hour , or when the system properly rebooted or shut down . Think of the registry as the short-term buffer. Windows 10 and 11 changed this. Data is still written to SRUDB.dat every 60 minutes, but the registry staging step is largely gone. The SOFTWARE registry is now only referenced to look up the names of the SRUM database tables — the actual pre-write buffer doesn't live there anymore. There's also an important quirk discovered in Windows 10 Version 2004 and carried through Windows 11: SRUDB.dat is not always written on shutdown. Testing showed that if a system is shut down twice within 10-minute intervals, data may not be flushed until the third reboot — specifically, once the system has been running for more than 60 minutes since the last SRUM entry. This matters a lot in live forensics scenarios where you're racing against a reboot. Section 3 — How Long Does SRUM Keep Data? The standard answer you'll hear is that SRUM keeps about 30 days of data — and that's largely true for most tables. 
But the full picture is more nuanced, and there's one major exception that's incredibly useful. For most SRUM tables, data is retained for 30 days under normal operation . It's not uncommon to find 60 days worth of historical records though — Windows doesn't always purge aggressively. Data beyond 60 days is typically gone. One important operational note: if a system is powered off for an extended period (weeks), when it boots back up, SRUM may immediately purge anything older than 30 days. So the clock on that older data can run out fast when a device comes back online. The exception is the Energy Usage LT table — that "LT" stands for Long-Term. This table operates on a completely different retention schedule. In Some cases, the Energy Usage LT table has contained data going back more than four years. The trade-off is that it only tracks high-level information: whether the system was running on AC or DC power, and for how long. But in the right case, four years of "was this laptop plugged in or on battery at this time" data can be surprisingly powerful. There's also one more lifeline worth knowing about: SRUM is one of the artifacts that gets captured inside Volume Shadow Copies . If shadow copies are available on the system, you can potentially pull historical versions of the SRUDB.dat database from earlier points in time — effectively extending your forensic window even further. Section 4 — Dealing With a Dirty or Corrupted SRUM Database Here's a practical reality of incident response: most of the time, systems don't get cleanly shut down before investigators get to them. Someone pulls a power cable, a system crashes, or the machine is seized while running. This means the SRUDB.dat file you're working with may be in a "dirty" state — the ESE database wasn't properly closed and the file could be partially corrupted. The good news is Windows has a built-in tool for exactly this situation: esentutl.exe . This utility handles defragmentation, recovery, integrity checking, data dumping, and repair of ESE databases. It's already on the machine — you just need to know the right commands. There are two important rules when using esentutl to repair a SRUM database. First: always run the repair on the same version of Windows as the system the dirty database came from. ESE database formats have version differences, and repairing a Windows 11 database on a Windows 10 machine (or vice versa) can make things worse. Second: always check the database header first before attempting repair — this confirms it's actually dirty before you do anything to it. If deleted records have been removed from the SRUM database, there's also a recovery path worth knowing about. A utility called EseCarve can potentially recover deleted entries from the ESE database file through carving techniques — the same general approach used for file carving during traditional forensics. Section 5 — SRUM Extensions: The Table GUIDs Explained Under the registry key SOFTWARE\Microsoft\Windows NT\CurrentVersion\SRUM, there are three subkeys: Extensions, Parameters, and Telemetry. The Extensions subkeys are particularly useful for analysts because they map directly to the tables inside the SRUDB.dat database — each GUID corresponds to a specific table and tells you what kind of data to expect there. In Windows 10 (from version 1803 onwards) and Windows 11, there are nine extension subkeys. 
Section 5 — SRUM Extensions: The Table GUIDs Explained

Under the registry key SOFTWARE\Microsoft\Windows NT\CurrentVersion\SRUM, there are three subkeys: Extensions, Parameters, and Telemetry. The Extensions subkeys are particularly useful for analysts because they map directly to the tables inside the SRUDB.dat database — each GUID corresponds to a specific table and tells you what kind of data to expect there. In Windows 10 (from version 1803 onwards) and Windows 11, there are nine extension subkeys.

Not all of them have equal forensic value — the research community consistently finds that the three most valuable tables are the Network Connectivity Usage Monitor, the Network Data Usage Monitor, and the Application Resource Usage Provider. These are the ones you should prioritize in an investigation.

Windows 11 made only one change to the table structure: the Energy Usage Provider table gained two additional columns — "Battery Count" and "Battery Charge Limit." Neither has proven particularly useful in analysis so far, but it's worth knowing they're there.

The SRUM extension subkeys have also evolved across Windows versions — the Energy Estimation Provider in particular changed its GUID three times, across Windows 10 versions 1511, 1607, and 1803. If you're analyzing systems across different Windows versions, you need to be aware that the same table may be stored under different GUIDs depending on the OS version.

Section 6 — How SRUM Evolved Across Windows Versions

SRUM has been in constant evolution since it first appeared in Windows 8. For forensic analysts working across multiple systems running different versions of Windows, understanding these changes is genuinely important — what you find in SRUDB.dat on a Windows 8 machine looks different from what you'll find on a Windows 11 system, and your tools and techniques need to account for that.

Conclusion

SRUM is one of those artifacts that rewards the analyst who takes the time to understand it properly. The basics are straightforward — but once you dig into the write timing behavior, the table structure, the version differences, and the database repair workflow, it becomes a genuinely powerful tool in your arsenal.

---------------------------------------------------Dean------------------------------------------------

  • Examining SRUM with ESEDatabaseView

Updated on 26 Feb, 2026

If you want to know more about SRUM, do check out my previous articles!
https://www.cyberengage.org/post/srum-the-digital-detective-in-windows

Intro

You've heard about SRUM. You know it tracks application usage, network activity, and energy data going back 30 days. But knowing it exists and actually getting useful evidence out of it are two different things. In this article we're going to walk through the whole process from scratch.

We're using ESEDatabaseView by NirSoft because it's free, doesn't require installation, and gives you direct access to every table in the raw database. It's the best tool for understanding what SRUM actually contains before you move on to automated parsers.

Section 1 — The Full Workflow at a Glance

Before we go step by step, here's the full picture. There are six stages to a SRUM analysis using ESEDatabaseView. Each one feeds into the next, and skipping any of them — especially the repair check — can cause problems further down the line.

Section 2 — Step 1 & 2: Collect the Files and Check for Dirty State

The SRUM database lives at C:\Windows\System32\sru\SRUDB.dat. On a live running system, Windows has this file locked — you can't just browse to it and copy it with File Explorer. You'll need a forensic imaging tool, Kape, a live triage script, or a tool like SRUM-DUMP that handles the locked file access for you.

You also need a second file: the SOFTWARE registry hive at C:\Windows\System32\config\SOFTWARE. You'll need this later to resolve network names from the L2ProfileId values you'll find in the database. Grab both at the same time.

Once you have your copies, the very first thing to do before opening ESEDatabaseView is check whether the database was closed cleanly. ESE databases can end up in a "dirty" state if the system was powered off abruptly, crashed, or was seized while running — which happens a lot in real incident response. If you try to open a dirty database in ESEDatabaseView, you'll either get an error or see incomplete, unreliable data. Check first, always.
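As a sketch, here's a Kape pull that grabs both files in one pass — assuming the stock KapeFiles targets are present (collected files mirror the source tree under the destination) — followed by the esentutl header check covered in the previous article:

kape.exe --tsource C: --tdest D:\triage --target SRUM,RegistryHives
esentutl.exe /mh D:\triage\C\Windows\System32\sru\SRUDB.dat     (a clean database reports "State: Clean Shutdown")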
Section 3 — Step 3: Opening SRUDB.dat in ESEDatabaseView

Download ESEDatabaseView from NirSoft — it's a small portable executable, no installation needed. Run it as administrator. To open the database: File → Open → navigate to your SRUDB.dat copy → click Open. The tool will load the database and show you a table browser.

By default it opens to a system table called MSysObjects — this is just the database's internal index of all its own tables, and you don't need to worry about it. What you do want to look at is the combo box just below the toolbar, which lists every table in the database. Click on it and you'll see all the tables available — typically 16 in a modern Windows 10 or 11 SRUDB.dat. Each table has a GUID as its name. Most of them won't mean anything at first glance, but you'll quickly learn the three or four that matter.

The column headers in each table are the field names, and you can click any header to sort by that column — useful when you're trying to find entries for a specific time window or a specific application. One thing to keep in mind as a beginner: the data in some fields looks like raw numbers or index values. That's intentional — the database stores references, not human-readable values. Part of what we're going to cover is exactly how to decode those references into something meaningful.

Section 4 — Step 4: Reading the Network Data Usage Table

Select {973F5D5C-1D90-4944-BE8E-24B94231A174} from the table dropdown — this is the Network Data Usage Monitor, and it's usually where you start. Each row in this table represents one application's network activity during one recorded hour.

The columns you'll immediately notice are things like AppID, UserId, BytesSent, BytesRecvd, InterfaceLuid, and L2ProfileId. Some of these are immediately meaningful — BytesSent and BytesRecvd are exactly what they sound like. Others need decoding. AppID is just a number — an index that points to another table. UserId is a numeric reference too. L2ProfileId is a number that refers to a specific Wi-Fi or network profile. InterfaceLuid encodes the type of network interface (Wi-Fi, Ethernet, or something like Point-to-Point Protocol).

Don't be put off by the raw numbers. The next few steps show you exactly how to decode each one into something human-readable. Think of this table as the hub — everything else connects back to it.

Section 5 — Step 5: Correlating Fields Across Tables

This is the step where everything comes together. You have two things to resolve from the network table: what application AppID 3621 actually is, and what network L2ProfileId 268435458 actually refers to. Here's how to do both.

To decode an AppID: switch to the SruDbIdMapTable in the table dropdown. Find the row where IdIndex equals your AppID — in this case 3621. The IdBlob column in that row contains a Unicode string (UTF-16 Little Endian) with the full path to the executable. That's your application. If you see C:\Windows\SysWOW64\audiodg.exe, you now know exactly what was generating that network traffic.

To decode an L2ProfileId: this one requires leaving ESEDatabaseView and opening the SOFTWARE registry hive. Open it in Registry Explorer (or regedit if you're working on a live system), navigate to SOFTWARE\Microsoft\WlanSvc\Interfaces\, find the GUID that has a Profiles subkey, and browse the profile entries until you find one whose ProfileIndex value matches your L2ProfileId. Then expand that profile → MetaData → look at the Channel Hints value. That hex value decodes to the human-readable network name — the actual Wi-Fi SSID.

Once you have both pieces, go back to your original network table row and you can now read the full picture: audiodg.exe transferred data inbound and outbound on the [network name] network at [time and date], under user account ID 2275.

Section 6 — Step 6: Export and Build Your Timeline

Once you've done your correlation work, the last step is getting the data out of ESEDatabaseView into a format you can actually work with and report from. In ESEDatabaseView, go to File → Export Current Table. You can export as a comma-delimited text file (CSV) or tab-delimited. Export the tables you've been working with — at minimum the Network Data Usage table and the SruDbIdMapTable. Then open them in Excel or your preferred spreadsheet tool.

In Excel, you can do the AppID resolution yourself using VLOOKUP — put the SruDbIdMapTable on one sheet, then use VLOOKUP to pull the IdBlob (executable path) into the network table using AppID as the key. Do the same for your network name resolutions. Now every row in your network table has a human-readable application name and a human-readable network name instead of raw index numbers.

--------------------------------------------------------Dean--------------------------------------------

  • Hidden in Plain Sight: How Attackers Weaponize Alternate Data Streams to Hide Malware

A while back I wrote about how Windows uses Alternate Data Streams to tag files downloaded from the internet — that Zone.Identifier trick that quietly labels your files as "came from the web." A lot of people found it interesting, because it's one of those Windows features that silently runs in the background and most users never think about.

But here's the thing about ADS that I didn't cover in that article — and honestly, it's the part that should make defenders a little nervous: the exact same feature that Microsoft uses to label your downloads? Attackers use it to hide malware. And they've been doing it for years — targeting major organizations, hiding ransomware payloads, and evading security tools — all inside a feature built right into Windows.

So if you read the first article and thought "huh, cool Windows feature" — this one's the darker chapter. Let's talk about how attackers actually weaponize ADS.

Section 1 — Quick Recap: What Is ADS Again?

Quick refresher before we get into the attack stuff. Every file on an NTFS volume (which is basically every modern Windows system) can carry more than one "stream" of data. You have the main data stream — that's the file content you normally see and interact with. But NTFS allows additional streams to be attached to the same file, hidden under a colon syntax like this: filename.txt:hiddenstream

Most Windows applications, Windows Explorer, and a lot of security tools only look at the primary data stream. The hidden streams? Completely invisible to them. You can't see their size in Explorer, they don't show up in a normal DIR listing, and they travel with the file if you copy it on NTFS. That last part is key.

Zone.Identifier is the legitimate example — Windows writes it automatically when you download a file. But the exact same mechanism works for an attacker who wants to tuck a malicious executable inside what looks like a completely harmless text file. MITRE ATT&CK tracks this as T1564.004 — Hide Artifacts: NTFS File Attributes.

Section 2 — Four Ways Attackers Actually Use This

So how does this actually show up in real attacks? There are four main things attackers do with ADS, and they often chain them together in the same campaign. Here's how each one works in practice.

Section 3 — Executing Payloads with Windows' Own Tools (LOLBAS)

This is where it gets really uncomfortable for defenders. Hiding a file in an ADS is one thing — but attackers don't even need a separate dropper to execute it. Windows ships with a long list of native binaries that will happily run content directly from an alternate data stream. The LOLBAS project (Living Off the Land Binaries and Scripts) documented a whole category of these, and it's a wild read.

The idea of LOLBAS is simple: if you can make a legitimate, signed Windows tool do your dirty work, you blend in perfectly. No sketchy executables, no unsigned code. Just Windows doing what Windows does — except the attacker is the one pulling the strings.

The classic example is rundll32.exe. This is the standard Windows utility for loading DLL files. Normally harmless. But if you point it at an ADS path, it'll execute whatever DLL is hiding in that stream — and to most security tools, all they'll see is rundll32 running, which is totally normal. Same story with wscript, certutil, bitsadmin, and others. They all have documented capability to interact with ADS.
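To make that concrete, here's a minimal sketch following the patterns the LOLBAS project documents — the file names, paths, and DLL entry point are all hypothetical:

type payload.dll > C:\Temp\notes.txt:hidden.dll           (tuck a DLL into a stream on a harmless-looking file)
rundll32.exe C:\Temp\notes.txt:hidden.dll,EntryPoint      (execute the DLL straight out of the stream)
wscript.exe //e:vbscript C:\Temp\notes.txt:script.vbs     (same trick with a script engine)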
Section 4 — Real Malware That Did This

This isn't theoretical. MITRE ATT&CK lists over a dozen named malware families that used ADS in real campaigns. Here are three of the most well-known examples — and they're a good reminder that this technique has been used by everyone from sophisticated APTs to big ransomware operations.

Section 5 — Okay, How Do We Actually Find These?

So now that we know what's possible, the obvious question is: how do defenders catch this? The good news is there are several ways to detect ADS abuse, both on live systems and during forensic analysis. The key is knowing where to look — because default Windows tooling doesn't make it easy.

The two built-in Windows commands that surface ADS data are dir /r in CMD, and Get-Item with the -Stream parameter in PowerShell. These will show you streams you wouldn't normally see. For forensic analysis, Sysinternals Streams.exe is the go-to tool for getting a clean list of non-standard streams across a directory.

On the detection and hunting side, MITRE's CAR analytics and tools like Sysmon give you command-line argument visibility — which is where ADS execution leaves its traces. When rundll32 or wscript gets called with a path containing a colon followed by a stream name, that's your indicator. Normal, legitimate calls to these tools don't look like that.

Conclusion

The thing that makes ADS such an effective attacker technique is the same thing that made Zone.Identifier interesting in the first article — it's hidden in plain sight. The file is right there on the filesystem. You can see it, you can open it, everything looks normal. The malicious content is just... attached to it in a place most people never think to look.

The good news is that with the right tooling — Sysmon, EDR with command-line visibility, or forensic tools that parse the MFT properly — ADS abuse leaves traces. But the bigger takeaway is this: the security gap here isn't really technical — it's awareness. Most security teams know about ADS, but how many have actually tuned their detection rules for it? How many have checked whether their EDR surfaces ADS execution events? If the answer is "not sure," that's worth a few hours of your time to find out. Because if ransomware groups like ALPHV and WastedLocker are using this technique in real campaigns against real companies, you can bet the less-famous threat actors are too.

---------------------------------------------Dean-----------------------------------------------------------

  • Volume Shadow Copies: The Hidden Evidence Goldmine You Need to Know About

Updated 22 Feb, 2026 v2

Section 1 — Why Attackers Can't Always Hide Their Tracks

When a sophisticated attacker gets into a system, one of the first things they think about is cleanup. We're talking file wipers, free-space wipers, deleted archive files — the whole nine yards. Say they used a privilege escalation tool to move through the network. Before they leave, they'll try to wipe that tool so nobody finds it. Same goes for those .rar archives they used to bundle up stolen data before exfiltrating it — gone.

The problem for them (and the good news for us) is that Windows has been quietly taking snapshots of the system in the background the whole time. Even if an attacker nukes a file, there's a decent chance a copy of it is sitting in a volume shadow snapshot from a few hours or days earlier. That's the whole game here.

To know more about anti-forensic wipers, link below:
https://www.cyberengage.org/post/every-forensic-investigator-should-know-these-common-antiforensic-wipers

Section 2 — What Even Is a Volume Shadow Copy?

Let's back up a second. Volume Shadow Copies (VSCs) are point-in-time snapshots of your file system, managed by the Volume Shadow Copy Service (VSS). This thing has been around since Windows XP — though back then it was called System Restore points and it was a lot more limited. Starting with Vista and Server 2008, Microsoft upgraded it significantly. Instead of just backing up a handful of key system files, VSS started capturing near-complete snapshots of the entire volume. That's a huge deal for forensics — we're talking about recovering deleted executables, DLLs, drivers, registry files, and event logs the attacker deleted. Basically rewinding the whole system to a previous state.

The way it works under the hood is called copy-on-write (COW). Whenever something gets written to disk, VSS first saves a backup copy of those data blocks before letting the new data overwrite them. These backed-up blocks are stored in 16KB chunks inside the System Volume Information folder, tracked by a catalog file named with a specific GUID.

Section 3 — The ScopeSnapshots Problem

Here's where things get a little annoying. Starting with Windows 8, Microsoft introduced a feature called ScopeSnapshots, which is enabled by default on Windows 8, 8.1, 10, and 11. When this is turned on, volume snapshots only capture files "relevant for system restore" — which basically brings us back to the limited Windows XP era. Files on the user's desktop, random directories, stuff an attacker might leave behind? Potentially not captured.

The good news: Windows Server platforms still use the full snapshot functionality — so if you're analyzing a server (which is often the most critical machine in an intrusion), you're in good shape. And on client systems you can disable ScopeSnapshots with a registry tweak: set the ScopeSnapshots DWORD value to 0 under HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SystemRestore, then reboot.

Also worth knowing — there's a small exclusion list at HKLM\SYSTEM\CurrentControlSet\Control\BackupRestore\FilesNotToSnapshot for files VSS won't capture. The hibernation file and page file are typically excluded too, though some have found them present in certain cases — so don't write them off completely.

Section 4 — Listing Available Shadow Copies

First thing you want to do on a live Windows machine is see what shadow copies are actually available. Open Command Prompt as Administrator and run the command below — replace C: with whatever drive you're targeting.
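The built-in vssadmin utility does the job:

vssadmin list shadows /for=C: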
The output will show each shadow copy with its volume name, the originating machine, and — most importantly — the creation timestamp. That timestamp is how you figure out which snapshot might contain the evidence you're after.

Section 5 — Accessing Shadow Copies from a Live System

If you're working on a live machine and want to browse a shadow copy, symbolic links are your friend. Create one with mklink, pointing at the shadow copy device path from the vssadmin output — for example, mklink /d C:\shadow_copy1 \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\ (the trailing backslash matters). Once you've created the link, navigate to that folder in File Explorer or Command Prompt. It'll look just like a regular directory, but you're actually browsing the snapshot from that point in time. This is a quick way to pull files that have since been deleted or modified on the live system.

Section 6 — Analyzing Shadow Copies from a Disk Image

For critical systems — patient zero, the executive's laptop, whatever the main target was — you're going to want a full disk image. That way you have everything, and you're not touching the live system any more than necessary. Here's where the real forensic tools come in.

Option 1: Arsenal Image Mounter

Arsenal Image Mounter does something clever — it uses a driver to make the disk image look like a real physical SCSI drive to Windows. Once Windows thinks it's a real disk, it automatically exposes all the volume shadow copies on it. Note: FTK Imager's mount feature does NOT expose VSCs to the OS, which is why Arsenal is the go-to here.

Option 2: libvshadow

When you need to work without relying on Windows, the libvshadow tools from Joachim Metz are fantastic. The two main tools are vshadowinfo, which lists the shadow copies inside an image, and vshadowmount, which exposes each one as a raw disk image you can then mount read-only for analysis.

Option 3: Kape — link below
https://www.cyberengage.org/post/volume-shadow-copy-with-kape

Section 7 — Timeline Analysis with log2timeline

Here's where things get really powerful. If you're building a forensic timeline, log2timeline.py has built-in support for VSS. When you point it at a disk image, it'll prompt you to include shadow copies and let you pick which ones — none, some, or all.

The big challenge with VSS timeline analysis is duplicate data — the same event log entry might show up across five different snapshots. That's where psort's deduplication saves you. It filters out identical entries across snapshots so you're not drowning in noise.

To learn more about using Plaso / log2timeline, link below:
https://www.cyberengage.org/post/a-deep-dive-into-plaso-log2timeline-forensic-tools

Section 8 — What This Means for an Investigation

To put it simply — volume shadow copies can completely change the outcome of a case. Deleted tools, earlier versions of registry hives and event logs, archives staged for exfiltration and then wiped — all of it can realistically come back out of a snapshot. Even when a machine has been thoroughly wiped, shadow copies often survive, because attackers either don't know about them, don't have time to clear them, or can't access them without administrative tools that would leave their own traces. Their oversight is your advantage.

Conclusion

Volume Shadow Copies are one of those features that exist quietly in the background, doing their job whether anyone pays attention or not. For forensic analysts, that's a gift. They give us a time machine — imperfect, yes, especially on modern Windows client systems with ScopeSnapshots enabled — but powerful enough to recover evidence that attackers thought was gone forever.

---------------------------------------------Dean-----------------------------------------------------------

  • Tycoon Nation: How Commoditised AiTM Kits Are Owning Microsoft 365

Unlike Google-targeted attacks, the Microsoft 365 PhaaS ecosystem is well-documented, heavily researched — and quietly industrialised. Here's the full picture, from kit purchase to BEC payout.

Business email compromise used to require skill. Attackers needed to understand Exchange internals, craft convincing social engineering at scale, and know how to quietly live inside a compromised tenant without triggering alerts. That skillset still exists — but it's no longer required. Today you can rent it for $120.

The Microsoft 365 PhaaS ecosystem is, frankly, mature. While Google-targeting kits are underreported and likely circulating in the same underground markets, the M365 side has been thoroughly catalogued by threat researchers at Sekoia, Proofpoint, Barracuda, Sygnia, Invictus IR, and Microsoft's own Defender team. What has emerged is a portrait of an industrialised attack supply chain that makes sophisticated MFA bypass accessible to any moderately motivated criminal with a Telegram account and a few hundred dollars in Bitcoin. This article documents how these kits work, what attackers do once inside, and — critically — what forensic artefacts they leave behind, because the most repeatable attacks leave the most repeatable evidence.

The Kit: Tycoon 2FA

Tycoon 2FA is the dominant player. First observed in August 2023 by Sekoia researchers, it emerged as an evolution of an earlier kit called Dadsec OTT — the Tycoon developer likely forked that codebase and extended it with AiTM-specific capabilities. It is sold via Telegram through a channel called the "Saad Tycoon Group", advertising ready-to-use Microsoft 365 and Gmail phishing pages, attachment templates, and access to an administration panel that lets customers monitor ongoing campaigns in real time.

Pricing starts at $120 for a 10-day window, scaling upward depending on the top-level domain and kit features selected — typically maxing out around $320. Payment is via Bitcoin. By mid-2024, the operator's wallet had logged more than 1,800 transactions, with cumulative revenues estimated at over $394,000. This is not a hobby project. It is a running business with active product development: a major updated version was released in March 2024 with enhanced obfuscation and anti-detection capabilities, followed by another significant update in November 2024 specifically designed to defeat inspection by security tooling.

How the Attack Works: The Kill Chain

The attack is an Adversary-in-the-Middle operation. Unlike traditional phishing that captures static credentials and codes, an AiTM kit inserts a reverse proxy between the victim and Microsoft's real authentication infrastructure. The victim's browser is talking to a pixel-perfect Microsoft login page — which is, technically, real, because all traffic is being relayed through the proxy. MFA is not broken; it is completed legitimately by the victim, and the authenticated session cookie produced by that successful MFA challenge is captured by the proxy in real time.

The Inbox Rule: The Most Important Forensic Artefact

If there is one finding that IR practitioners should prioritise in any M365 compromise, it is the inbox rule created immediately after session takeover. This is documented extensively across independent IR firms' caseloads — Invictus IR, Sygnia, Microsoft Defender researchers, and Huntress have all highlighted it — and it is operationally deliberate. The attacker's goal with these rules is simple: the victim must not know the account is compromised.

A rule that deletes all incoming email, or silently moves security alert messages to a folder the victim never opens, can buy days of undetected access. In the Microsoft-documented energy sector campaign, the attacker's rule was specific: delete all incoming emails and mark all messages as read, eliminating visual cues of new activity.

The hidden rule problem is particularly insidious. Attackers have learned that rules created through MAPI manipulation — rather than the standard Outlook or OWA interface — do not appear in the Exchange admin center's rules list. Standard client-side auditing misses them entirely. MFCMAPI or PowerShell with the -IncludeHidden flag is required to surface them — a fact many incident responders don't encounter until they're deep into a case, wondering why a mailbox appears clean despite clear signs of compromise.
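In Exchange Online PowerShell that check is one line — the mailbox address here is a placeholder:

Get-InboxRule -Mailbox victim@contoso.com -IncludeHidden | Format-List Name,Enabled,Priority,Description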
Why MFA Didn't Stop It — And What Would

The most common client reaction to these incidents is disbelief. MFA was enabled. The user completed the challenge. How is there a compromise?

The answer is that AiTM attacks do not attack MFA — they work around it by stealing the output of a successful MFA session (the cookie) rather than trying to intercept or defeat the MFA mechanism itself. Once a session cookie is obtained, it represents an authenticated, trusted browser session. Microsoft's infrastructure sees it as a legitimate continuation of a session that was properly MFA-verified. Changing the password after the fact does not help: the cookie was issued before the password change, and Microsoft's session management does not automatically invalidate cookies when passwords are reset unless administrators explicitly revoke all active sessions.

"Password resets alone are insufficient — impacted organizations must ensure that they have revoked active session cookies and removed attacker-created inbox rules." — Microsoft Defender Security Research Team, January 2026

The only authentication mechanisms technically resistant to AiTM attacks are FIDO2 hardware security keys and passkeys. Both bind the authentication response cryptographically to the legitimate origin domain using the WebAuthn standard. A proxy server relaying traffic from a phishing domain cannot forge this binding — the cryptographic assertion will fail if the origin doesn't match. TOTP codes, SMS OTPs, and Authenticator push notifications are all susceptible, because they produce portable, origin-agnostic proofs that a proxy can relay unchanged.

The Anti-Analysis Arms Race

What makes Tycoon 2FA and its competitors genuinely sophisticated is not the AiTM technique itself — that has been publicly documented and implementable via open-source tools like Evilginx for years. It is the anti-analysis layer that now ships as a standard product feature.

The March 2024 update introduced heavily obfuscated JavaScript with dynamic code generation that alters its structure on each execution, defeating signature-based detection. The November 2024 update specifically targeted the tooling security researchers use to analyse phishing pages — blocking developer tool shortcuts, detecting debugger attachment, preventing element inspection, and redirecting to legitimate decoy sites when automated analysis is detected. Backend validation ensures phishing payloads only execute if a specific server response value is returned, meaning URL scanners that don't fully emulate the authentication flow receive only a clean redirect.
The Multi-Org Cascade: When One Compromise Becomes Ten

One of the more alarming real-world patterns, documented by both Sygnia and Microsoft's Defender team, is the cascading multi-organisation spread that can result from a single AiTM compromise. The attacker, once inside a victim account, harvests the victim's recent email contacts and threads. Phishing emails sent from the compromised account to those contacts arrive from a trusted domain with legitimate email authentication. Each recipient who clicks and completes MFA yields another compromised session. Each of those victims' contacts becomes the next target pool.

In the energy sector campaign documented by Microsoft in January 2026, a single initial compromise spawned a chain of AiTM attacks across multiple distinct organisations. The attack was specifically designed to abuse SharePoint file-sharing links — because a link to a shared file in SharePoint looks inherently legitimate, even to security-aware users. The phishing campaign from just one compromised user sent over 600 emails targeting contacts both inside and outside the victim's organisation.

What to Look For: IR Triage Checklist

For practitioners responding to a suspected M365 AiTM compromise, the following artefacts are the highest-priority items in the Unified Audit Log and Entra ID sign-in logs. Confirm the UAL is enabled first — query Get-AdminAuditLogConfig | Format-List UnifiedAuditLogIngestionEnabled — because without it, forensic reconstruction is severely limited.

Sign-in logs should be examined for the originating IP and ASN of the first post-compromise session: expect a VPS provider, often in a jurisdiction inconsistent with the victim's normal login pattern. The timestamp delta between the phishing email being clicked, the MFA completion, and the attacker's VPS login is often under five minutes in automated kit operations. UAL operations to search include New-InboxRule, Set-Mailbox, UpdateInboxRules, and MailItemsAccessed — the last being critical for understanding what the attacker read before the compromise was detected.

Remediation must include explicit session revocation — not just a password reset. All active refresh tokens for the compromised account must be revoked via Entra ID (formerly Azure AD), and all inbox rules should be audited and removed, including those hidden from standard views. MFA method changes made by the attacker during the compromise window should also be reviewed and rolled back.
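A hedged starting point for both steps — assuming the Exchange Online and Microsoft Graph PowerShell modules are connected, with the account and date range as placeholders:

Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date) -UserIds victim@contoso.com -Operations "New-InboxRule","UpdateInboxRules","MailItemsAccessed" -ResultSize 5000

Revoke-MgUserSignInSession -UserId victim@contoso.com     (invalidates the account's refresh tokens, so stolen sessions can't renew)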
Bottom Line

Microsoft 365 AiTM attacks via PhaaS toolkits are no longer an emerging threat — they are the dominant mode of MFA bypass against enterprise Microsoft environments. Tycoon 2FA alone has been tied to over 64,000 documented incidents, operates across more than 1,100 domains, and generated nearly $400,000 in revenue before many organisations had updated their defensive playbooks to account for session-cookie theft as distinct from credential theft.

The key shift in posture required is treating post-authentication session management as a security control in its own right. FIDO2 mandates for high-value accounts eliminate the AiTM vector entirely. Conditional access policies that continuously evaluate session legitimacy — not just at login — reduce attacker dwell time when cookies are stolen. And inbox rule monitoring in the UAL, correlated with anomalous sign-in events, gives defenders the best forensic hook into detecting kit-based operations, because the most automated attacks are also the most consistent.

The kit economy has made this easy to deploy. Defenders need to make it hard to survive.

--------------------------------------------------Dean-------------------------------------------

If you want to check out the related article on Gmail PhaaS, link below:
https://www.cyberengage.org/post/the-gmail-phaas-playbook-anatomy-of-a-repeat-offender
---------------------------------------------------------------------------------------------------

  • The Gmail PhaaS Playbook: Anatomy of a Repeat Offender

After seeing more than a dozen Gmail account-compromise incidents, a pattern has emerged that is too consistent to be coincidental. The victim receives a legitimate-looking Google MFA prompt on their mobile device, accepts it thinking nothing of it, and their account is silently handed to an attacker sitting on a VPS somewhere overseas. Within hours — sometimes minutes — the hijacked inbox becomes a launchpad, blasting hundreds of phishing emails to the victim's contact list. The kill chain is almost identical across every case. Same hosting providers, same post-compromise behaviour, same evasion technique.

This article documents what I've observed in the field, and makes the case that these campaigns are being powered by a commoditised, black-market Phishing-as-a-Service (PhaaS) toolkit — the Google-targeting cousin of well-documented Microsoft 365 kits like Tycoon 2FA.

How the Attack Works

The attack flow is a textbook Adversary-in-the-Middle (AiTM) operation. Rather than breaking encryption or exploiting a Google vulnerability, the attacker positions a reverse proxy server between the victim and Google's real login page. The victim authenticates — for real — and the proxy captures their live session cookie. MFA is never "bypassed" in the traditional sense; the user completes it legitimately, and the attacker simply steals the authenticated session that results.

The Mailer-Daemon Block: An Underappreciated Tell

The single most distinctive indicator in these cases — and the one I've rarely seen documented specifically for Gmail AiTM campaigns — is the deliberate blocking of mailer-daemon@googlemail.com immediately prior to the outbound spam run.

When an email fails to deliver, Google's mail delivery subsystem sends a Non-Delivery Report (NDR), or "bounce," back to the sending address from mailer-daemon@googlemail.com. In a bulk phishing operation sending to hundreds of targets, a significant proportion of those addresses will be invalid, dormant, or protected by spam filters — generating a flood of bounce-back messages into the victim's inbox. These bounces are a bright red flag. A victim seeing their inbox fill with hundreds of delivery failures for emails they never sent would immediately know something is wrong, and would likely raise the alarm or change their credentials before the phishing campaign reaches full effect.

By creating the block rule first, the attacker buys time. The victim's inbox appears normal. No bounces arrive. The campaign runs undetected for longer. This is not a casual decision — it's an operationally deliberate step that indicates the actor understands the detection risk and has scripted a mitigation into their workflow.
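One quick way to hunt for that tell during triage is the Gmail API's filters endpoint — a sketch assuming you've obtained a delegated OAuth token for the mailbox (the $token variable is a placeholder):

Invoke-RestMethod -Uri "https://gmail.googleapis.com/gmail/v1/users/me/settings/filters" -Headers @{ Authorization = "Bearer $token" }

In the cases I've worked, the attacker's rule typically surfaces here as a filter whose criteria target mail from mailer-daemon@googlemail.com with a trash or archive action — and its creation timestamp is the anchor for the rest of your timeline.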
This Is a PhaaS Toolkit — Not a Solo Operator

The uniformity across unrelated cases is the giveaway. Independent victims, different organisations, different time periods — yet the same hosting providers, the same post-compromise playbook, the same sequencing. This is not the signature of a creative threat actor adapting their approach. It is the signature of a product.

The Microsoft 365 side of this problem is well-documented. Tycoon 2FA, first surfaced by Sekoia researchers in late 2023, is the most prominent example: a fully commercialised AiTM PhaaS platform sold via Telegram for as little as $120 for a 10-day phishing window. It targets both Microsoft 365 and Gmail accounts, operates across over 1,100 domains, and had generated more than $394,000 in Bitcoin transactions by mid-2024 alone. It is actively maintained, with regular updates to improve the evasion and obfuscation of its phishing pages.

The Google-specific variant I've encountered in the field bears the same hallmarks of a packaged kit: automated steps, consistent infrastructure choices, and pre-built post-compromise actions (like the mailer-daemon filter) that no manual operator would apply identically across ten separate victim accounts. The most likely explanation is that there is a Google-focused PhaaS toolkit — or a Google module within an existing one — circulating in black-market channels that has simply received less public research attention than the Microsoft 365-focused kits.

Why MFA Didn't Stop This

A common client reaction when these incidents are presented is disbelief that MFA "failed." It didn't fail — it was bypassed elegantly. The AiTM technique doesn't attack MFA at the protocol level. It weaponises the user's trust in their own device notification and the real-time nature of the proxy relay. The victim completes a genuine MFA challenge against the real Google infrastructure. The attacker simply intercepts what that authentication produces: a session cookie representing an already-authenticated session. Stolen cookies allow attackers to replay a session and maintain access even if credentials are subsequently changed, because the session was established with valid MFA consent.

The only authentication methods that are technically resistant to AiTM are FIDO2 hardware keys and passkeys, both of which bind the authentication response cryptographically to the legitimate origin domain — something a proxy cannot forge. Traditional TOTP codes, SMS codes, and push-notification approvals are all susceptible.

What IR Reports Should Document

If you're handling a similar case, the following artefacts are the most valuable to preserve and document. The timeline of Gmail filter creation (found in Gmail's audit logs or via the Google Workspace Admin Console) is critical — the timestamp of the mailer-daemon block rule relative to the first anomalous login and the first outbound phishing email establishes the operator's automated playbook. Login IP addresses and ASN data will likely cluster around a small set of VPS providers; cross-case correlation on these is highly productive for attribution and for building shared IoC sets. Sent-mail folder content — if not deleted — reveals phishing template design, which can often be matched to known PhaaS kit templates. And device approval logs will show the precise moment the victim accepted the fraudulent MFA prompt, which is useful both for forensic reconstruction and for explaining the compromise to the victim.

The good news, if there is any, is that the repeatability of this attack pattern means that once you've worked one case thoroughly, you have a reliable template for the next. The bad news is that the repeatability also means the toolkit is stable, functional, and being actively used at scale.

Bottom Line

Gmail-targeted AiTM attacks are being conducted with the same tooling discipline seen in the well-documented Microsoft 365 PhaaS ecosystem. The specific post-compromise behaviour of blocking bounce notifications before a bulk phishing blast is a repeatable, operational artefact of an automated kit — not improvised tradecraft.
Security teams responding to Gmail BEC incidents should treat this pattern as a reliable indicator of kit-based attacks, add the mailer-daemon filter check to their standard Gmail triage checklist, and escalate intelligence on hosting providers and infrastructure to contribute to broader community detection efforts. Phishing-as-a-Service has lowered the floor for conducting sophisticated MFA-bypassing campaigns to the price of a budget software subscription. The Google ecosystem deserves the same research scrutiny the Microsoft 365 PhaaS space has received. Hopefully, public documentation of these field patterns will accelerate that work. -------------------------------------------Dean----------------------------------------------------
