
- System Configuration: Reading the Machine's Own Biography
Before you chase a single artifact, before you open a single log file, you need to answer a deceptively simple question: what exactly is this machine? Not in a philosophical sense. In a very practical one. What version of Windows is it running? How long has it been running? What's it called? What time zone does it think it's in? These aren't glamorous questions, but getting them wrong — or skipping them entirely — will quietly poison the rest of your investigation.

Think of system configuration forensics as writing the opening chapter of a case file. You're establishing the scene before anything else happens.

---------------------------------------------------------------------------------------------------

Start Here: Operating System Version

The first registry key worth visiting on any Windows examination is:

SOFTWARE\Microsoft\Windows NT\CurrentVersion

This key is your quick snapshot of the current OS state — version, build number, and the timestamp of the most recent major update. It's also where a very common misconception lives, so let's address it immediately: the InstallDate and InstallTime values here reflect the last major update, not necessarily the original installation of the operating system. If someone installed Windows three years ago and updated it last month, this key shows last month. To go further back, you need to dig deeper.

-------------------------------------------------------------------------------------------------------

Walking the Update History

The Source OS key is where the full story lives. Each time Windows goes through a major update or upgrade cycle, it stamps the previous state into a separate subkey here — preserving the version, build, and install time of that snapshot. By iterating through every Source OS subkey and combining it with the CurrentVersion data, you can reconstruct the entire update biography of the machine. When was Windows first installed? What version? When was it upgraded? How many times?

This matters more than it might seem. If you're chasing an artifact that only exists in Windows 10 build 19041 and above, knowing that the system was running an earlier build during the period of interest changes everything. You can't find what didn't exist yet.

One important caution: the timestamp embedded in each Source OS subkey name — the (Updated on...) part — does not reliably match the InstallDate or InstallTime values inside that key. Microsoft's own update process is multi-stage, involving downloads, backups, and sometimes multiple reboot cycles that can span days. Different timestamps get recorded at different stages of that process. Standardize on InstallDate/InstallTime — they match what Windows' own systeminfo command reports, making cross-verification straightforward.
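To make this concrete, here's a minimal sketch using the open-source python-registry library against exported SOFTWARE and SYSTEM hives. The image paths are hypothetical, and it assumes the Source OS snapshots sit under the SYSTEM hive's Setup key, where they are commonly documented:

```python
# pip install python-registry
from datetime import datetime, timezone
from Registry import Registry

def epoch_utc(value):
    # InstallDate is a Unix epoch stored as a DWORD
    return datetime.fromtimestamp(value, tz=timezone.utc)

software = Registry.Registry(r"C:\cases\img\Windows\System32\config\SOFTWARE")
cv = software.open("Microsoft\\Windows NT\\CurrentVersion")
print("Current OS:", cv.value("ProductName").value(),
      "build", cv.value("CurrentBuild").value())
# Remember: this is the last major update, not the original install
print("Last major update:", epoch_utc(cv.value("InstallDate").value()))

# Each "Source OS (Updated on ...)" subkey preserves one prior OS state
system = Registry.Registry(r"C:\cases\img\Windows\System32\config\SYSTEM")
for key in system.open("Setup").subkeys():
    if key.name().startswith("Source OS"):
        print(key.name(), "->",
              key.value("ProductName").value(),
              "build", key.value("CurrentBuild").value(),
              "installed", epoch_utc(key.value("InstallDate").value()))
```

Cross-check the InstallDate output against what systeminfo reports whenever you have access to the live system.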
-------------------------------------------------------------------------------------------------------

Control Sets: Which Configuration Is Actually Active?

Here's a concept that confuses almost everyone the first time: the SYSTEM hive doesn't store its configuration data at a single fixed path. It uses something called control sets. A control set is essentially a complete snapshot of system configuration — drivers, services, boot settings, all of it. Historically, Windows kept multiple control sets as recovery backups. If a bad driver crashed the system, you could boot into the LastKnownGood control set and recover.

Modern Windows (post-Win7) has largely moved away from keeping multiple copies, but the architecture remains — and several critical registry paths require you to know which control set is currently active before you can navigate to them.

-------------------------------------------------------------------------------------------------------

The Computer Name — Boring Until It Isn't

Recording the hostname feels almost too simple to mention. But skip it and you'll regret it. Windows Event Logs, network logs, and a surprising number of other artifacts tag their entries by hostname rather than IP address. If you're correlating a suspicious event across multiple log sources and you don't know what the machine is called, you'll spend time chasing ghosts. The computer name lives at:

SYSTEM\<ControlSet00X>\Control\ComputerName\ComputerName

Note that <ControlSet00X> needs to be substituted with the actual control set you identified in the previous step — usually ControlSet001. It's also a useful sanity check: verifying the hostname early confirms you're examining the right machine, which matters more than it sounds when you have strict authorization boundaries on what you're allowed to examine.

-------------------------------------------------------------------------------------------------------

Time Zones: The Silent Killer of Case Timelines

This is the part of system configuration forensics where experienced analysts get genuinely opinionated — and rightfully so. Most Windows timestamps are stored in UTC. NTFS file timestamps, registry Last Write Times, Event Logs — all UTC. This is excellent news because it means you can correlate artifacts across different systems and different geographic locations without any conversion math.

But some artifacts aren't in UTC. They're stored in local time. Antivirus logs are a notorious offender. Application logs from poorly-written software. Various third-party tools. If you don't know what time zone the system was set to, you can't convert those outliers to UTC — and a misaligned timestamp in a timeline can send an entire investigation in the wrong direction.

-------------------------------------------------------------------------------------------------------

The Pro Tip You Shouldn't Skip

There's one piece of advice buried in this topic that's worth repeating in bold: set your forensic analysis machine's time zone to UTC before you begin any examination. Seriously. Just do it as a standing policy. The danger isn't that you'll misread a single timestamp — it's that an event log viewer or artifact parser will silently convert times for you, and you'll never know it happened. You'll build a timeline that's off by exactly one time zone offset, and the resulting confusion will cost you hours at best, and a false conclusion at worst.

Work in UTC. Report in local time. Never the other way around.

-------------------------------------------------------------------------------------------------------

Putting It Together: The System Configuration Checklist

These aren't standalone facts to collect and forget. They form a foundation that everything else in your investigation rests on. Get the OS version wrong and you'll misinterpret artifacts that changed behavior between builds. Miss a time zone and your entire timeline shifts. Skip the computer name and you'll spend time correlating logs from the wrong machine. Ignore the control set and you'll navigate to the wrong registry path and wonder why a key doesn't exist. The short sketch below pulls the control set, hostname, and time zone in one pass.
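A minimal sketch, again with python-registry and a hypothetical SYSTEM hive path: resolve the active control set from the Select key, then read the hostname and time zone through it.

```python
from Registry import Registry

system = Registry.Registry(r"C:\cases\img\Windows\System32\config\SYSTEM")

# SYSTEM\Select\Current names the active control set (usually 1)
current = system.open("Select").value("Current").value()
cs = "ControlSet%03d" % current

name_key = system.open(cs + "\\Control\\ComputerName\\ComputerName")
print("Hostname:", name_key.value("ComputerName").value())

tz = system.open(cs + "\\Control\\TimeZoneInformation")
print("Time zone:", tz.value("TimeZoneKeyName").value())  # Vista and later
# ActiveTimeBias is the minutes to ADD to local time to reach UTC,
# stored as an unsigned DWORD, so reinterpret it as signed 32-bit
bias = tz.value("ActiveTimeBias").value()
if bias >= 0x80000000:
    bias -= 0x100000000
print("ActiveTimeBias (minutes):", bias)
```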
Do these steps first, document the results, and every subsequent phase of the investigation will be anchored to solid, verified facts. The unglamorous work always pays dividends later.

-----------------------------------------------Dean----------------------------------------------------

Full Series Below: https://www.cyberengage.org/courses-1/mastering-windows-registry-forensics%3A
- SAM Hive: The Registry Knows Who You Are
Every investigation eventually comes back to the same question: who was actually sitting at that keyboard? You can find the most damning files, the most suspicious network connections, the most carefully hidden evidence — but none of it means much until you can tie it to a specific person. That's where the SAM hive comes in. It's Windows' own internal roster of every local account on the machine, and it's usually one of the first stops in any serious forensic examination.

Think of it as the HR department of your operating system. It knows who works there, when they showed up last, how many times they've tried and failed to badge in, and exactly what level of access they have.

----------------------------------------------------------------------------------------------------------

Why User Profiling Comes First

Before you chase artifacts, before you dig into execution history or browser forensics, you need to know who you're looking for. This sounds obvious, but it has a very practical implication that trips up newer analysts: many Windows artifacts don't use usernames. They use RIDs. The Recycle Bin folder structure? RID. Certain Event Log entries? RID. The BAM registry key? RID. If you haven't mapped usernames to their corresponding Relative Identifiers early in the investigation, you'll find yourself staring at numbers that point to a person you haven't identified yet.

The SAM hive solves this problem completely — and gives you a lot more besides.

----------------------------------------------------------------------------------------------------------

What the SAM Actually Stores

----------------------------------------------------------------------------------------------------------

Three Reasons the SAM Hive Is Always Worth Checking

First — RID mapping. Say you're looking at a Recycle Bin folder named $RECYCLE.BIN\S-1-5-21-XXXXXXXX-1001. That 1001 at the end is a RID. Without the SAM telling you that RID 1001 belongs to akash, that folder is just a number. With it, you have a name.

Second — account profiling. The login statistics alone can tell a story. An account that's only logged in twice ever could be a ghost account created for a specific purpose. An account showing hundreds of failed login attempts screams brute force. An admin account with a last login from two years ago — probably irrelevant. An admin account with a login from last Tuesday that nobody mentioned? Very relevant. The SAM gives you the context to ask the right questions.

Third — the built-in Administrator account. Every Windows machine has one. Most organizations disable it. If the SAM shows it has an active login count and a recent last login, that's a flag worth pulling on — especially in intrusion cases where attackers love to abuse built-in accounts that sometimes get overlooked by monitoring tools.
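If you want to script that RID-to-username mapping, here's a minimal sketch with python-registry against an exported SAM hive (path hypothetical). It leans on a quirk that SAM parsers commonly rely on: under the Users\Names key, each username subkey's default value stores the account's RID in its value-type field, and python-registry labels that default value "(default)":

```python
from Registry import Registry

sam = Registry.Registry(r"C:\cases\img\Windows\System32\config\SAM")
names = sam.open("SAM\\Domains\\Account\\Users\\Names")
for user in names.subkeys():
    # The default value's *type* field doubles as the account's RID
    rid = user.value("(default)").value_type()
    print(f"{user.name():<20} RID {rid}")
```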
----------------------------------------------------------------------------------------------------------

The Cloud Account Wrinkle

Here's something that trips up analysts who haven't encountered it before: Microsoft cloud accounts behave differently in the SAM, and the differences matter. When a user logs in with a Microsoft account (their Outlook or Hotmail email address) instead of a traditional local account, Windows still creates a SAM entry — but it doesn't populate it the same way.

Example:

The InternetUserName value is the smoking gun here. If it's present, the account is cloud-linked — and that has downstream implications for the entire investigation. Cloud accounts are tied to OneDrive, SharePoint, browser sync, cross-device history. A cloud account isn't just a login on this one machine. It's a thread that potentially connects to an entire ecosystem of synced data elsewhere.

----------------------------------------------------------------------------------------------------------

Beyond Local Accounts: The ProfileList

The SAM is excellent — but it only covers local accounts. In any enterprise environment, you'll also be dealing with domain accounts, and those don't live in the SAM. They live on the domain controller. What does live on the endpoint, however, is a key called ProfileList — and it's the bridge between local and domain account identification. I don't have a real example to show :(

----------------------------------------------------------------------------------------------------------

ProfileList vs SAM: Know the Difference

The SAM and ProfileList solve related but different problems. Here's how to think about each one:

SAM gives you depth on local accounts — rich login statistics, group membership, cloud linkage. If you want to know everything about a local user's habits on this specific machine, go there first.

ProfileList gives you breadth — it casts a wider net and catches both local and domain accounts that have ever sat down at this machine. No deep statistics, but an invaluable roster of everyone who's had a genuine interactive session.

Use them together. Map out the full account landscape with ProfileList first, then go deep on relevant local accounts with the SAM. For domain accounts, ProfileList is just your starting point — the real detail lives on the domain controller, which is a separate investigation entirely.

One important caveat: ProfileList's Last Write timestamp is notoriously unreliable. Operating system updates have a tendency to touch these keys, resetting the timestamp to something meaningless. Don't build a timeline argument around it.

----------------------------------------------------------------------------------------------------------

The Analyst Mindset

What makes SAM-based account profiling powerful isn't any single data point — it's the combination of them. A last login time by itself is a fact. A last login time on an account that theoretically hasn't been used in three years, with a logon count of two, at 3am, from an admin account nobody mentioned? That's a story.

Let the data ask the questions. The SAM will give you plenty of material to work with.

------------------------------------------Dean----------------------------------------------------------

Full Series: https://www.cyberengage.org/courses-1/mastering-windows-registry-forensics%3A
- The Registry's Dirty Little Secret: Transaction Logs
So you've pulled the registry hives off a suspect machine. You've loaded them into your forensic tool. You're feeling good. Timestamps are lining up, keys are telling their stories, and you're building a solid picture of what happened. And then you realize you might be missing the most recent — and most critical — data entirely.

Welcome to the world of registry transaction logs. The part of Windows forensics that quietly humbles analysts who think grabbing the hive files is enough.

-----------------------------------------------------------------------------------------------------------

Why Windows Doesn't Write Everything Immediately

Here's the thing about the Windows Registry: it's constantly being modified. Every app launch, every setting tweak, every USB plug-in — the registry is getting poked hundreds of times a day. If Windows wrote every single one of those changes directly to the hive file on disk in real time, your storage would be thrashing non-stop and your system would feel sluggish. So Windows does what any sensible system does — it cheats a little. It caches registry writes in two places: system memory first, then transaction log files on disk, and only eventually flushes all of that into the actual hive file. This process is called a hive flush, and it's the source of a genuinely important forensic blind spot.

-----------------------------------------------------------------------------------------------------------

The Windows 8 Plot Twist

Up until Windows 8, this caching behavior was relatively predictable. But researchers discovered that, starting with Windows 8, Microsoft changed the flushing behavior so that temporary data is routinely written to the transaction logs first — and the primary hive file is only updated when one of three things happens:

This is genuinely elegant from a performance standpoint. Fewer disk writes, snappier system. But from a forensics standpoint? It creates a gap — sometimes a significant one.

-----------------------------------------------------------------------------------------------------------

The "Dirty Hive" Problem

When a hive hasn't been fully flushed, it's called a dirty hive. And here's where analysts can unknowingly shoot themselves in the foot: if you pull a registry hive from a machine that was running actively — maybe it crashed, maybe it was seized mid-session — and you only analyze the hive file, you could be missing the most recent hour (or more) of activity. The freshest, most forensically relevant data might only exist in the .LOG1 and .LOG2 files sitting right next to the hive.

-----------------------------------------------------------------------------------------------------------

The Tool Problem Nobody Talks About Enough

Here's the uncomfortable truth: many forensic registry tools don't check for dirty hives. They load the hive file, show you what's there, and never once mention that the .LOG1 file sitting right next to it might contain newer, critical data. That's a real problem. An analyst who doesn't know to look will build their timeline from incomplete data — and potentially miss the exact activity that matters most. The most recent action a user took before a machine was seized is exactly what you'd want to know. And it's exactly what lives in the transaction logs.

The gold standard tool for registry forensics — Registry Explorer by Eric Zimmerman — does this right. It detects dirty hives and prompts you to load the corresponding log files before proceeding. That's the behavior every tool should have. If yours doesn't do this, consider it a gap in your workflow.
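You can also check for a dirty hive yourself. A minimal sketch, assuming a raw hive file at a hypothetical path: the regf base block stores a primary sequence number at offset 0x4 and a secondary at 0x8, and a mismatch means the last write was never fully flushed into the hive:

```python
import struct

def hive_is_dirty(hive_path):
    """Compare the two sequence numbers in the regf base block.

    Windows increments the primary sequence number before a write and
    the secondary after the write completes, so primary != secondary
    means the hive is dirty and the .LOG1/.LOG2 files may hold newer data.
    """
    with open(hive_path, "rb") as f:
        header = f.read(12)
    if header[:4] != b"regf":
        raise ValueError("not a registry hive: bad signature")
    seq1, seq2 = struct.unpack_from("<II", header, 4)
    return seq1 != seq2

# Example (hypothetical path):
# print(hive_is_dirty(r"C:\cases\img\Windows\System32\config\SYSTEM"))
```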
-----------------------------------------------------------------------------------------------------------

The Naming Convention You Need to Memorize

Transaction logs follow a dead-simple naming pattern and sit right next to their hive file. Easy to spot once you know what you're looking for:

-----------------------------------------------------------------------------------------------------------

The One Rule to Walk Away With

This entire topic boils down to one rule that every forensic analyst should have tattooed on the back of their hand: Always collect the hive files and the transaction logs. Always.

It's not optional. It's not a nice-to-have. If you're doing triage collection on a live or recently-seized system and you only grab NTUSER.DAT without ntuser.dat.LOG1 and ntuser.dat.LOG2, you may be handing over an incomplete picture — and the missing slice might be exactly the hour that matters most.

Windows is quietly optimizing for performance. Your job is to make sure that optimization doesn't quietly optimize away your evidence. The registry doesn't lie. But if you don't collect all of it, you might only be hearing half the story.

-------------------------------------------------------Dean-----------------------------------------

Complete Series Below https://www.cyberengage.org/courses-1/mastering-windows-registry-forensics%3A
- The Windows Registry: The Black Box Flight Recorder of Your PC
You know those crime shows where the detective walks into a room and somehow reads the entire history of what happened just by looking around? That's basically what a forensic analyst does with the Windows Registry — except instead of a crime scene, it's your computer, and instead of cigarette ash and broken glass, it's a labyrinth of cryptic keys, timestamps, and nested data.

The Registry isn't something most people ever think about. It sits silently in the background, humming away, keeping meticulous notes on everything. Every app you installed. Every device you plugged in. Every setting you changed at 2am when you were tweaking your PC and probably shouldn't have been. It's all in there. So let's pull back the curtain.

--------------------------------------------------------------------------------------------------

What Even Is the Registry?

Think of the Registry as Windows' own personal diary — obsessively detailed, never forgets a thing, and absolutely judgemental. It's a massive hierarchical database that stores configuration data for the operating system, your hardware, every piece of software installed, and every user who's ever touched the machine. When your computer boots up, Windows doesn't just "wake up" — it consults the Registry obsessively. What drivers do I need? Which services should start? What's the desktop wallpaper supposed to be? All of it lives in the Registry.

--------------------------------------------------------------------------------------------------

The Core Hives — Where the Good Stuff Lives

The Registry is divided into chunks called hives. Think of them like filing cabinets, each responsible for a different department of your system.

--------------------------------------------------------------------------------------------------

But Wait — Every User Has Their Own Registry Too

Here's where things get genuinely interesting. Beyond the system-wide hives, Windows keeps a personal registry for every user account on the machine. This is where forensics analysts basically strike gold. Your user hives remember what files you opened, what you searched for, which USB drives you plugged in, which websites you visited through certain apps. It's your digital shadow — and it follows you everywhere.

--------------------------------------------------------------------------------------------------

Timestamps: The Registry Never Forgets When

Now here's the part that should make you sit up straight: every single registry key has a Last Write Time stamped on it — and unlike a lot of other Windows artifacts, this timestamp is stored in UTC and is remarkably reliable. What does that mean practically? It means a forensic analyst can tell you that at exactly 01:39:35 UTC on January 30th, 2016 something changed in your startup programs list. Maybe malware snuck itself in. Maybe you installed a new app. The registry doesn't care why it happened — it just dutifully wrote down when.

The kicker? Windows' own Registry Editor — regedit.exe — doesn't even show you these timestamps. They're completely hidden from regular users. You need specialized forensic tools to surface them.

Here's what gets really spicy: when a value is added, changed, or deleted, the parent key's timestamp updates. So even if someone deletes a suspicious entry, the timestamp on the key above it will betray the fact that something changed at that exact moment. The cover-up leaves evidence of the cover-up.
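To surface those hidden timestamps yourself, here's a minimal sketch with the open-source python-registry library against an exported NTUSER.DAT. The path is hypothetical, and the Run key is just the classic autostart example from above:

```python
# pip install python-registry
from Registry import Registry

ntuser = Registry.Registry(r"C:\cases\img\Users\akash\NTUSER.DAT")
run = ntuser.open("Software\\Microsoft\\Windows\\CurrentVersion\\Run")

# key.timestamp() is the key's Last Write Time, already in UTC
print("Run key last written (UTC):", run.timestamp())
for v in run.values():
    print("  ", v.name(), "=", v.value())
```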
--------------------------------------------------------------------------------------------------

The Deleted Registry: Forensics' Hidden Goldmine

This is where things get into true crime territory. When someone deletes a registry key, Windows doesn't actually scrub it from existence. It just marks that space as "unallocated" — exactly like deleting a file. The data sits there, perfectly intact, waiting to be found by anyone with the right tools.

Privacy cleaner apps love to target the registry. They'll nuke entire key branches trying to erase evidence of what a user was doing. But here's the irony: deleting those keys is itself evidence. Forensic tools like Registry Explorer can detect these deleted-but-still-present keys and display them with an "X" marker — showing the analyst exactly what was wiped and when. So the person who ran Privacy Cleaner Pro to cover their tracks? They didn't just fail to erase evidence — they created new evidence. The absence of keys that should always exist is its own red flag. And underneath those deleted keys, often the original data is completely recoverable.

----------------------------------------------------------------------------------------------------------

Live vs. Offline: Two Different Worlds

One last thing worth understanding — the registry looks different depending on how you're looking at it. When you're doing live forensics on a running machine, you see these four root keys through regedit. But serious analysts almost never work that way — they pull the actual hive files from disk and load them into tools like Registry Explorer or Arsenal Registry Recon. Why? Because those tools surface the hidden timestamps, expose deleted keys, and decode data that regedit simply glosses over. It's the difference between reading a newspaper's headline and reading the full classified report underneath.

----------------------------------------------------------------------------------------------------------

The Takeaway

The Windows Registry is, without exaggeration, one of the most information-dense artifacts on any Windows machine. It's not glamorous. Most users never open it. But for a forensic analyst — or for anyone trying to understand what really happened on a system — it's the closest thing to a complete activity log that Windows silently maintains. Every key tells a story. Every timestamp is a witness. And every deleted entry that's still quietly sitting in unallocated space? That's a confession waiting to be found.

The registry doesn't judge. It just remembers.

------------------------------------------Dean-----------------------------------------------------------

Complete Series Below: https://www.cyberengage.org/courses-1/mastering-windows-registry-forensics%3A
- Enabling Auditing, Logging, and Log Explorer in Google Cloud
(How logs are generated, why they matter, and how investigators actually use them)

Big picture

Before you can analyze logs, you need to understand where logs even come from in Google Cloud. Google Cloud generates logs in two fundamental ways:

Platform-level Audit Logs → logs generated automatically by Google Cloud itself
Application / workload logs → logs generated by what you run (VMs, apps, network traffic, etc.)

From a DFIR point of view:

Audit Logs tell you "what changed in the cloud control plane"
Application logs tell you "what happened inside workloads"

You almost always need both during an incident.

------------------------------------------------------------------------------------------------

Platform Audit Logs – what Google logs for you

Audit Logs record actions like:

Who logged in
Who created / modified / deleted resources
Who changed IAM permissions
Which actions were denied by policy

These logs are generated by Google Cloud, not by your apps.

Why this matters in practice

Audit Logs are:

Hard for attackers to tamper with
Centralized
Often the first place you detect compromise

If IAM abuse, privilege escalation, or lateral movement happens — 👉 Audit Logs are your ground truth.

------------------------------------------------------------------------------------------------

Enforcing logging at the Organization level

Concept

Google Cloud lets you enforce audit logging:

At Organization
At Folder
At Project

Logging rules flow top-down. If something is enforced at the Org level:

Projects cannot disable it
They can only add more logging

------------------------------------------------------------------------------------------------

Example (real-world)

An organization enforces Admin Write logs at the Org level. This means:

Every admin-level change is logged
No project owner can turn it off
Even compromised Owner accounts still generate logs

This is critical for post-compromise investigations.

------------------------------------------------------------------------------------------------

Audit log types you must understand (not all logs are equal)

Required Log Bucket (most important)

These logs:

Cannot be disabled
Are stored for 400 days
Are free
Cover high-value security events

Includes:

Admin Activity Logs
System Events
Login events
Access Transparency logs

👉 From an investigator's perspective: this is your "black box recorder."

------------------------------------------------------------------------------------------------

Default Log Bucket

These logs:

Often capture denied actions
Are stored 30 days for free
Cost money if retained longer

Why denied logs matter:

Brute-force attempts
Repeated IAM failures
Early-stage recon attempts

In real incidents, the successful login might be one event — the denied attempts tell the full story.

------------------------------------------------------------------------------------------------

Exempted Users – useful but dangerous

Concept

Google Cloud allows exempted users: their actions are not logged.

Why this exists

Some service accounts generate massive noise
Cost and signal-to-noise ratio matter

DFIR risk

If misused:

An attacker may intentionally target exempted accounts
Blind spots are created in audit trails

👉 As an investigator, always ask: "Which accounts are exempted from logging — and why?"

------------------------------------------------------------------------------------------------

Cost model (what actually costs money)

Google Cloud logging costs are based on two things, not one:
1. Log Ingestion

Logs entering the logging system
50 GiB per project is free
Required logs do NOT count toward this

2. Log Storage

How long logs are retained
Default bucket: 30 days free
Required bucket: 400 days free

Key insight: you don't usually pay because you log too much. You pay because you retain logs too long.

For incident response:

Short retention = cheaper
Long retention = better historical visibility

This is a risk vs cost decision, not just a technical one.

------------------------------------------------------------------------------------------------

Accessing logs – where investigations actually happen

Log Explorer (Google's built-in "SIEM-lite")

Google significantly upgraded Log Explorer, and today it behaves very much like:

Splunk
ELK
Chronicle-style query systems

How investigators use Log Explorer

1. Scope

Defines where you're searching:

Project
Folder
Entire Org (if permissions allow)

In real investigations: start broad → narrow down. Scope mistakes = missed evidence.

2. Query Builder

Uses structured, SQL-like queries. You typically hunt for:

IAM permission changes
Service account usage
API key creation
actAs events
Login anomalies

Very similar mental model to ELK, Splunk, and SOF-ELK timelines.

3. Results

Each log entry:

Is collapsed by default
Must be expanded for full context

Important fields often hidden until expanded:

Caller IP
Principal email
Authentication method
Resource name
Permission granted or denied

------------------------------------------------------------------------------------------------

Investigator mindset shift (important)

Traditional IR: "Logs come from servers"
Cloud IR: "Logs come from the control plane"

If you only look at VM logs and ignore Audit Logs:

You miss IAM abuse
You miss lateral movement
You miss Org takeover paths

------------------------------------------------------------------------------------------------

Query Builder – what it really does

Concept

Log Explorer's Query Builder is not magic. It's simply a UI-assisted way of writing structured queries against JSON logs. You:

Pick a resource type
Narrow it down using resource labels
Add fields relevant to that resource
Set a time range

The UI then converts your selections into the underlying query syntax.

👉 Important mindset: Log Explorer will only search what you explicitly ask for, and only inside the selected scope.

Practical implication (DFIR)

If:

You forget to include the right resource type
Or your scope is wrong (wrong project / folder)
Or your time range is too small

Then events do not "not exist" — you just didn't ask correctly. This is a very common cloud IR mistake.

------------------------------------------------------------------------------------------------

Resource-based searching (why it feels backward)

Concept

Google Cloud logs are resource-centric, not user-centric. So instead of "Show me everything user X did", you often start with "Show me everything that happened to resource Y".

Example 1:

resource.type="gcs_bucket"
resource.labels.bucket_name="securitz"
resource.labels.location="us-east1"

Example 2:

resource.type="audited_resource"
resource.labels.method="google.login.LoginService.riskySensitiveActionAllowed"
resource.labels.service="login.googleapis.com"

Why this is powerful for investigations

This matches Google Cloud's IAM model:

Permissions are attached to resources
Members are granted access by the resource owner

So if a bucket, VM, or project was abused: 👉 start with the resource, then pivot to the actor.
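The same resource-centric filters can also be run programmatically. A minimal sketch with the google-cloud-logging client: the project and bucket names are hypothetical, and the payload field names follow the standard audit-log schema:

```python
# pip install google-cloud-logging
from google.cloud import logging

client = logging.Client(project="my-ir-project")
flt = (
    'resource.type="gcs_bucket" '
    'resource.labels.bucket_name="security-evidence" '
    'timestamp>="2024-01-01T00:00:00Z"'
)
# Iterate newest-first over matching audit log entries
for entry in client.list_entries(filter_=flt, order_by=logging.DESCENDING):
    payload = entry.payload or {}
    auth = payload.get("authenticationInfo", {})
    print(entry.timestamp, auth.get("principalEmail"),
          payload.get("methodName"))
```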
------------------------------------------------------------------------------------------------

Time range – not just a filter

Concept

Time range is part of the query logic, not just a display option. You can:

Search seconds, minutes, hours, days
Use custom absolute ranges (incident window)

Investigation workflow

A common IR pattern:

Start with a tight time window (alert timestamp)
Validate suspicious activity
Expand the time window without changing the query
Watch how activity builds up before and after the incident

Log Explorer keeps previously matched results visible when expanding time — this helps you see progression, not just isolated events.

------------------------------------------------------------------------------------------------

Results view – summary vs evidence

Concept

The default results pane:

Shows a condensed summary
Hides most fields

This is intentional — logs are JSON and very verbose.

Investigator reality

The real evidence is always inside the expanded event:

principalEmail
callerIp
userAgent
timestamp
serviceName
methodName

You rarely care about every field. You care about: who, from where, did what, to which resource, and when.

------------------------------------------------------------------------------------------------

JSON structure – why queries feel "long"

Concept

Google Cloud logs are structured JSON. That means fields are nested and you must specify full paths. Example:

resource.labels.method="google.login.LoginService.riskySensitiveActionAllowed"

Practical tip (this saves time)

If you already found a relevant event:

Expand it
Click a field (e.g., principalEmail)
Select "Show matching entries"

Log Explorer automatically:

Adds the correct field path
Adds the value
Updates your query

This avoids syntax mistakes and speeds up hunting.

------------------------------------------------------------------------------------------------

How investigators actually build queries

You rarely write one "perfect" query upfront. Real workflow:

Broad resource-based query
Identify suspicious event
Pivot using fields from that event
Narrow down to: user, IP, service account, API method
Expand time window
Repeat

This is iterative threat hunting, not static searching.

------------------------------------------------------------------------------------------------

Logging pipeline – what happens behind the scenes

Conceptual flow

Every log follows the same path:

Generated (platform or workload)
Sent to the Google Cloud Logging API
Passed through Log Sinks
Either stored, exported, or dropped

This happens before you ever see the log in Log Explorer.

------------------------------------------------------------------------------------------------

Log Sinks – control points (and blind spots)

Concept

Log Sinks exist at:

Project level
Organization level

They decide:

Which logs are kept
Which logs are excluded
Where logs are sent (storage, SIEM, Pub/Sub)

DFIR relevance

If a log does not appear:

It may have been excluded
It may have been routed elsewhere
It may have been dropped by design

During investigations, always confirm:

Sink configuration
Exclusion rules
Retention settings

Missing logs ≠ attacker tampering (most of the time).
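Confirming the sink layer is scriptable too. A minimal sketch, again with google-cloud-logging and a hypothetical project, that lists every sink with its routing filter so you can spot exclusions and redirections before declaring logs "missing":

```python
# pip install google-cloud-logging
from google.cloud import logging

client = logging.Client(project="my-ir-project")
for sink in client.list_sinks():
    # Each sink routes matching entries to a destination
    print(sink.name, "->", sink.destination)
    print("  filter:", sink.filter_ or "(all logs)")
```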
------------------------------------------------------------------------------------------------

Exclusions – useful but dangerous

Concept

Exclusions reduce noise:

Ignore repetitive service account actions
Reduce cost
Improve signal quality

Investigation risk

Over-aggressive exclusions can:

Remove early attacker recon
Hide lateral movement
Remove failed attempts that give context

Good practice: exclude volume, not security-relevant behavior.

------------------------------------------------------------------------------------------------

Takeaway

Google Cloud Log Explorer queries are built around resource-centric, JSON-structured logs that require investigators to think differently than traditional user-based logging models. By starting with affected resources, iteratively refining queries using nested fields, and understanding how time ranges and log sinks influence visibility, analysts can reconstruct attacker behavior across projects and organizational boundaries. Effective investigations rely not on writing perfect queries upfront, but on pivoting through relevant fields and understanding where logs may be excluded or redirected within the logging pipeline.

------------------------------------------Dean------------------------------------------------
- Service Accounts in Google Cloud
The core idea

In Google Cloud, Service Accounts are identities for machines, not humans. They are used by resources like VMs, Cloud Functions, Kubernetes, etc. to talk to other Google Cloud services. Unlike AWS (where users can directly generate API keys), Google Cloud forces you to use Service Accounts when you want:

Programmatic access
Static credentials
Non-interactive authentication

So: 👉 if code needs access, it almost always runs as a Service Account.

----------------------------------------------------------------------------------------------

What actually happens when you create a VM

When you create a VM, Google Cloud automatically creates (or assigns) a Service Account. That Service Account:

Appears in IAM
Can be granted roles just like a user
Is used by the VM to access other resources (Storage, APIs, databases)

You can delete this Service Account from IAM — but then your VM will break if it needs to talk to anything else. Best practice in reality: don't delete it — restrict its permissions.

----------------------------------------------------------------------------------------------

The real danger: Basic Roles (Owner & Editor)

On paper

Google Cloud has Basic Roles:

Viewer
Editor
Owner

They were created early on to make things "easy".

In practice (this is where things go wrong)

The Editor role is dangerously powerful. An account with Editor can:

Modify resources
Create API keys
Use actAs to impersonate other accounts
Create credentials for other accounts

Key insight: Editor ≈ Owner (from an attacker's point of view).

----------------------------------------------------------------------------------------------

Why attackers love Editor accounts

If a threat actor compromises any account with Editor:

They can create an API key
They can impersonate (actAs) higher-privileged accounts
They can effectively privilege-escalate to Owner

This is not theoretical — it's abused in real incidents.

----------------------------------------------------------------------------------------------

Real-world lateral movement scenario

Let's walk through the actual attack flow.

Step 1 – Environment setup (normal behavior)

Infra team creates a Development Project
VMs are deployed for developers
Each VM has a Service Account

Everything is isolated. Looks safe.
Step 2 – Developer needs storage (very common)

Developer needs persistent storage
Creates a Cloud Storage Bucket
Grants the VM's Service Account Editor access to the bucket

From the developer's perspective: "It works, job done."

Step 3 – Credentials exposure

One of the following happens:

Service Account key committed to GitHub
Credentials stored in code
VM is compromised and the metadata server is abused

Now the attacker has: 👉 Service Account credentials with Editor permissions.

Step 4 – Privilege escalation

With Editor access, the attacker can:

Create new API keys
Impersonate other IAM accounts
Take over Owner accounts in the same project

Step 5 – Organization-level impact

If any Org-level bound account exists in that project, the attacker can:

Impersonate it
Escalate to the Organization
Gain control over all projects, all resources — the entire cloud environment

Single Service Account compromise → full Org takeover.

----------------------------------------------------------------------------------------------

Why this is hard to detect (investigation challenge)

The IAM visibility problem

In Google Cloud, resource owners decide access, and there's no single place that shows: "What does this account have access to across the Org?" This creates:

Hidden trust relationships
Accidental cross-project access
Silent privilege escalation paths

----------------------------------------------------------------------------------------------

Investigator & Defender mindset

When you're investigating or hardening Google Cloud, red flags to look for:

Service Accounts with Editor role
Shared Service Accounts across projects
Exposed Service Account keys
Unexpected actAs activity
API keys created by non-human identities

----------------------------------------------------------------------------------------------

Defensive mindset shift

❌ "Editor is fine for dev"
✅ "Editor is a privilege escalation waiting to happen"

----------------------------------------------------------------------------------------------

One-line summary

In Google Cloud, Service Accounts with Editor permissions act as silent trust bridges — once compromised, they enable privilege escalation, lateral movement, and even full organization takeover without deploying malware.

-----------------------------------------------Dean---------------------------------------------
- Detecting Time Manipulation in Windows — You Don't Always Need Full Forensics
Okay so if you've been following along, I've already written about timestomping and time manipulation from a forensics angle — both for Linux and Windows. Links below if you missed those:

Linux: https://www.cyberengage.org/post/timestomping-in-linux-techniques-detection-and-forensic-insights
Windows: https://www.cyberengage.org/post/anti-forensics-timestomping

But today I want to talk about something a little different. What if you didn't have to go full forensics mode to catch this? What if Windows logs already told you everything you needed? Spoiler: they do. If they're still there.

---------------------------------------------------------------------------------------------------

First — Why Does System Time Even Matter?

Let me set the scene. Timestomping and clock manipulation are some of the oldest anti-forensic tricks in the book. The idea is simple — if you can control what time the system thinks it is, you can control what timestamps get written to files, logs, and artifacts. Need a document to look like it was created last week? Roll the clock back, create the file, roll it forward. Done.

Now here's something interesting that doesn't get talked about enough — this technique isn't just about covering tracks after the fact. There's actually a documented attack where pushing the system clock forward can be used to evade certain EDR alerts. Let that sink in. Time manipulation as an active evasion technique, not just a cleanup step.

This is why, starting with Windows 10, Microsoft restricted system time changes to administrators only. Regular users can be granted the right explicitly, but by default — they can't touch the clock. That was a genuinely good security win.

---------------------------------------------------------------------------------------------------

Windows Time Service — The Baseline You Need to Know

Before you go hunting for suspicious time changes, you need to understand what normal looks like. Windows runs the Windows Time Service by default. It connects to external NTP servers at regular intervals and makes small automatic adjustments to keep things accurate. These normal adjustments get logged too — so not every time-related event you see is an attacker. Here's how you tell the difference:

NTP automatic adjustments → logged under the SYSTEM or LOCAL SERVICE account, small time deltas, svchost.exe process
User-initiated changes → logged under an actual user account, usually much larger time jumps, and on Windows 10+ you'll see SystemSettingsAdminFlows.exe as the process — because admin rights are required and that's the process that handles it

That process name is actually a really clean indicator. If you see a time change event and the process is SystemSettingsAdminFlows.exe tied to a user account? That's a human doing it, not the system.

---------------------------------------------------------------------------------------------------

The Event IDs You Actually Need

Let's get to the practical part. There are two main places Windows records time changes and two event IDs that matter most.

---------------------------------------------------------------------------------------------------

The Audit Policy Catch — Read This Carefully

Here's something that catches a lot of people out. Event ID 4616 in the Security log is genuinely the most readable and informative event for time changes — but it only gets written if the Security State Change audit policy is enabled. If it's not turned on, you won't see it. Period.
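If you're working from an offline Security.evtx rather than a live viewer, here's a minimal sketch with the open-source python-evtx library (the file path is hypothetical) that pulls just the 4616 events and the fields discussed above:

```python
# pip install python-evtx
from xml.etree import ElementTree as ET
from Evtx.Evtx import Evtx

NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

with Evtx(r"C:\cases\img\Windows\System32\winevt\Logs\Security.evtx") as log:
    for record in log.records():
        root = ET.fromstring(record.xml())
        if root.findtext("e:System/e:EventID", namespaces=NS) != "4616":
            continue
        data = {d.get("Name"): d.text
                for d in root.findall("e:EventData/e:Data", NS)}
        # A user account + SystemSettingsAdminFlows.exe = human clock change
        print(data.get("SubjectUserName"), data.get("ProcessName"),
              data.get("PreviousTime"), "->", data.get("NewTime"))
```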
This is why Event ID 1 in the System log matters so much. It doesn't depend on your audit policy. Since Windows 8, it reliably records time changes including the responsible account. It's noisier — Event ID 1 is used for a lot of things — but it's always there.

Lesson: check your audit policy. If Security State Change isn't enabled for success events, fix that now, before you need it.

---------------------------------------------------------------------------------------------------

The Time Zone Problem — And a Clever Trick

Here's something that gets almost no attention — detecting time zone changes is actually harder than detecting clock changes. Some Windows versions will log a time zone change in the System log with Event ID 1, but here's the bizarre part — they won't tell you what time zone was selected. Just that something changed. Not great.

Remember Event ID 6013 — the daily system uptime event? If you look at the raw XML of that event, it contains the current system time zone. So while you won't catch every time zone change in real time, you get one snapshot per day embedded in an event that fires automatically. Stack those up over time and you can track zone drift across a timeline.

-----------------------------------------------------------------------------------------------------------

Putting It Together — What to Look For

When you're investigating a potential time manipulation incident, here's the mental checklist:

Start with Event ID 4616 in the Security log if your audit policy is enabled. It's clean, readable, and tells you exactly what changed. Look at the account name — if it's a user account and not SYSTEM, that's your first flag. Then check the process — if you see SystemSettingsAdminFlows.exe, a human touched the clock.

Cross-reference with Event ID 1 in the System log. You should see matching entries. If you see an Event ID 1 time change but no corresponding 4616, that tells you the audit policy wasn't enabled — important context for your investigation.

Then go pull the Event ID 6013 entries across the relevant time window and check the raw XML for time zone values. If the zone is shifting around, that's another red flag.

The size of the time change matters too. A few seconds or minutes? Probably NTP drift correction. Ten days backwards? Someone did that on purpose.

-----------------------------------------------------------------------------------------------------------

The Hard Truth — Logs Have Limits

I want to be straight with you here. All of this only works if the logs are still there. A smart attacker who knows what they're doing will clear the event logs. If that happens, Windows log analysis won't save you — you need to fall back on the traditional digital forensics approach I covered in the previous articles. Filesystem metadata, $MFT analysis, prefetch, shellbags — the full toolkit.

Logs are fast and accessible. Forensics is thorough. Use both. And if you find cleared logs, that itself is a significant indicator — Event ID 1102 (Security log cleared) or Event ID 104 (System log cleared) will tell you someone tried to clean up.

-------------------------------------------Dean-------------------------------------------------------
- Identity and Access Management in Google Cloud
When setting up Google Cloud, one of the first and most important decisions an organization must make is how authentication and user management will be handled. Google Cloud provides two primary, native approaches for managing identities and authentication: Cloud Identity and Google Workspace.

Cloud Identity is Google Cloud's standalone IAM service and is typically used when an organization does not rely on Google Workspace for email and collaboration. Google Workspace, on the other hand, extends beyond IAM to include services such as Gmail, Drive, and Calendar, while also acting as a centralized identity provider for Google Cloud.

-------------------------------------------------------------------------------------------------------------

Google Workspace and Google Cloud Organization

When Google Workspace is linked to Google Cloud, the primary domain used in Google Workspace becomes the Google Cloud Organization. This linkage is important because it establishes a single authoritative identity source across both platforms.

Core IAM Building Blocks in Google Cloud

Google Cloud's IAM system is built on three foundational components:

Members
Roles
Policies

Together, these components define who can access resources, what actions they can perform, and where those permissions apply. For DFIR practitioners, IAM is one of the most critical evidence sources. Nearly every meaningful action in Google Cloud — creating resources, modifying configurations, accessing data — requires IAM authorization and therefore leaves IAM-related audit trails.

-------------------------------------------------------------------------------------------------------------

Members: Who Is Requesting Access?

A Member represents an identity that can be granted permissions. Members are treated as objects within Google Cloud and can include:

Individual users
Groups
Service accounts
Users or groups from Google Workspace
Users or service accounts from other Google Cloud organizations
Even external Gmail accounts

Members are typically identified by email address, although permissions can also be assigned at the domain level when managing very large groups of users. From an investigation standpoint, understanding Members is essential because attackers often abuse service accounts, compromised user credentials, or overly broad group memberships to gain persistent access.

-------------------------------------------------------------------------------------------------------------

Roles: What Actions Are Allowed?

A Role is a collection of permissions grouped together to define what actions a Member can perform. Google Cloud provides three types of roles, each serving a different purpose.

Basic Roles

Basic Roles were introduced early in Google Cloud's development and include:

Owner
Editor
Viewer

These roles are broad and convenient but often grant far more permissions than necessary.

Predefined Roles

Predefined Roles are created and maintained by Google Cloud and provide fine-grained permission control. Each predefined role is tailored to a specific service or function, allowing organizations to apply the principle of least privilege more effectively. From a security and DFIR perspective, predefined roles are preferred because:

Permissions are well-documented
Scope is limited and predictable
Over-privileging is easier to detect

Custom Roles

Custom Roles allow organizations to create their own roles by selecting specific permissions. These roles are useful when predefined roles are either too broad or too restrictive.
However, custom roles introduce complexity. Google Cloud has hundreds of individual permissions, and incorrectly assembling them can result in unintended access.

-------------------------------------------------------------------------------------------------------------

Permissions: The Smallest Unit of Access

Permissions are the most granular level of access control in Google Cloud. Each permission corresponds to a specific action on a resource, such as:

Listing storage buckets
Creating or deleting storage objects
Viewing encryption keys

Individually, permissions are rarely useful. Their real value comes from being grouped into roles, which are then applied through policies.

-------------------------------------------------------------------------------------------------------------

Policies: Where Access Is Enforced

A Policy binds together:

One or more Members
A Role (which contains permissions)
A specific Resource

Policies are applied to resources, not to Members. This is a fundamental concept that often confuses administrators and investigators alike. Unlike some other cloud platforms, you cannot ask Google Cloud "what permissions does this user have?" Instead, you must ask "who has access to this resource?" because permissions are evaluated at the resource level.

-------------------------------------------------------------------------------------------------------------

One of the biggest mental shifts people struggle with in Google Cloud is how permissions actually work. Most of us come from a world like Microsoft Active Directory, where identity management is user-centric. You click on a user, check their group memberships, and boom — you more or less know what they can access. Google Cloud flips this model on its head. In Google Cloud, permissions are applied to Resources, not to Members. That single design decision changes everything — especially for incident response. This means you cannot simply query a user and say: "Show me everything this user has access to." That view doesn't exist in a clean, centralized way.

Why This Becomes a DFIR Problem

Now imagine you're in the middle of an incident. You suspect a user account is compromised. In an on-prem AD environment, you'd immediately:

Check the user
Review group memberships
Identify privilege escalation paths

In Google Cloud, that approach doesn't work. Instead, investigators are forced to lean heavily on logs:

Audit logs
Access logs
IAM policy change logs

You're essentially reconstructing permissions based on behavior, not configuration alone. That's why logging becomes far more critical in Google Cloud investigations than many teams expect.

-------------------------------------------------------------------------------------------------------------

The Upside: Resource-Focused Investigations

While this model feels painful at first, it does have a huge investigative advantage. If you already know what was abused — say:

A storage bucket
A VM
A logging sink
A project

Then Google Cloud's model actually helps you. You can:

Go directly to the Resource
Inspect the IAM Policy attached to it
See every Member who has access
See exactly what level of access they have

So instead of asking "What can this user access?" you ask "Who can access this thing that was abused?" That shift is incredibly powerful during scoping and impact analysis.
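That "who can access this thing?" question maps directly to a single API call. A minimal sketch with google-cloud-storage — the project and bucket names are hypothetical — dumping the IAM policy bound to an abused bucket:

```python
# pip install google-cloud-storage
from google.cloud import storage

client = storage.Client(project="my-ir-project")
bucket = client.bucket("abused-bucket")
# requested_policy_version=3 includes conditional role bindings
policy = bucket.get_iam_policy(requested_policy_version=3)
for binding in policy.bindings:
    print(binding["role"])
    for member in binding["members"]:
        print("   ", member)
```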
-------------------------------------------------------------------------------------------------------------

Cross-Organization Access: Where It Gets Messy

Here's where things get uncomfortable. Google Cloud allows Members from one Organization to have access to Resources in another Organization. Two key problems come out of this:

Your users might have access to external resources you know nothing about
If your user accesses a Resource in another Organization, the logs live over there — not with you

So during an investigation, you might see:

Suspicious authentication
API usage
Token activity

…but no corresponding resource access logs, because the activity occurred in another Organization entirely. This is one of the hardest things to explain to management during an incident: "Yes, our account did it — but the evidence is stored in someone else's cloud."

-------------------------------------------------------------------------------------------------------------

Service Accounts: Same Problem, Bigger Impact

Now let's make it even more interesting. Resources in Google Cloud — like VMs — often need to access other Resources. They don't do this as users. They do it using Service Accounts. For example:

A VM needs to read/write to a Storage Bucket
The VM is assigned a Service Account
The Storage Bucket has a Policy allowing that Service Account access

From an IAM perspective, Service Accounts are Members. Which means all the same permission challenges apply. If a Service Account is compromised:

You still can't easily list everything it has access to
You still have to investigate from the Resource side
You still rely heavily on logs

This is why Service Account abuse is so dangerous in cloud incidents — they're quiet, persistent, and often over-privileged.

-------------------------------------------------------------------------------------------------------------

Grouping Still Exists — Just Depends Where You Do It

Google Cloud does support grouping (thankfully), but where you create those groups depends on your IAM setup.

If you're using Cloud Identity only (IDaaS) → groups are created in the Google Cloud IAM console
If you're using Google Workspace → groups are created in the Workspace Admin console

Functionally, the idea is the same:

Users are added to groups
Groups are assigned Roles
Roles are enforced via Policies on Resources

For enterprises, Workspace-based groups are usually cleaner because:

Identity already exists
Group lifecycle is managed centrally
DFIR teams can reuse security-focused group structures

-------------------------------------------------------------------------------------------------------------

One Critical Requirement People Miss

If you're linking Google Workspace with Google Cloud, there is one non-negotiable requirement: you must create the gcp-organization-admins group in Google Workspace and place your Organization Admins inside it. Without this group:

Google Cloud will not properly link to Workspace
IAM inheritance breaks
Administrative visibility becomes inconsistent

Everything else builds on top of this foundation.

-------------------------------------------------------------------------------------------------------------

DFIR Takeaway

Google Cloud IAM isn't worse than traditional IAM — it's different.

You investigate Resources, not users
Logs matter more than static permission views
Service Accounts deserve as much scrutiny as human users
Cross-Organization access can hide evidence
Group design directly impacts incident response speed

Once this model clicks, Google Cloud investigations start to feel methodical instead of chaotic.
-------------------------------------------------------------------------------------------------------------
Why IAM Matters So Much for DFIR
IAM is not just an access control mechanism — it is a primary evidence source. It reveals:
- How attackers gained access
- What permissions they abused
- Whether misconfigurations enabled lateral movement
- How persistence was established
--------------------------------------------Dean-----------------------------------------------------------
- Meet the CE SentinelOne Assistant — I Built It for Myself, But You Can Try It Too
⚡ CE S1 Assistant
So, Why Did I Build This?
Let me be real with you — I built this tool for myself. That's it. No grand master plan, no startup pitch deck. Just a guy who got tired of the same problem every single time he opened SentinelOne Deep Visibility.
If you've ever used Deep Visibility, you know exactly what I'm talking about. You get an alert, you need to hunt across your endpoints fast, and you open that query box... and then you're sitting there trying to remember the exact field name. Is it src.process.name or dns? Does the operator use contains or matches? One wrong character and your query returns absolutely nothing.
S1QL — SentinelOne's query language — is powerful. Really powerful. But it's also very specific. It takes months to get comfortable with the syntax, and even then you're constantly checking the documentation for edge cases. I'd find myself spending more time formatting the query than actually thinking about the threat.
So I thought: what if I could just describe what I'm looking for in plain English and get a production-ready query back? No syntax memorisation. No documentation diving. Just say what you need and get a query you can copy straight into the console.
That's the CE S1 Assistant. That's why it exists.
-------------------------------------------------------------------------------------------------------------
What Is It, Exactly?
The CE S1 Assistant is a web-based tool that lives at https://s1copilot.onrender.com/
It does one job and it does it well: it helps security analysts write better SentinelOne Deep Visibility queries, faster. It has three main modes for generating queries:
1. Natural Language to S1QL — You type what you want in plain English. The tool gives you a working S1QL query. Done.
2. Threat URL to IOC Hunt Query — You paste a threat intelligence article URL. The tool reads the entire article, pulls out every IOC it can find, and builds a multi-layered detection query automatically.
3. Direct IOC Input — You paste hashes, IPs, or domains directly. You get an exact-match detection query back.
On top of query generation, the tool also has a live threat intelligence dashboard that pulls from eight industry feeds — so you have context before you even start hunting. But let me walk you through each feature properly.
-------------------------------------------------------------------------------------------------------------
The Natural Language Query Generator
This is the main event. The feature I use the most, and honestly the reason the whole tool exists.
You type something like: "show me all unsigned processes that ran from AppData in the last hour" and the tool generates a complete, valid PowerQuery with the correct field names, operators, filters, and output columns.
Or maybe you're thinking bigger: "find ransomware behaviour on Windows endpoints".
Akira ransomware detection with behavior
It handles that too. It knows what behaviours typically indicate ransomware — file encryption patterns, shadow copy deletion, ransom note creation — and builds a query that covers those angles.
What makes this actually reliable and not just a party trick is that it's built on a deep knowledge base of SentinelOne's exact field schema — over 80 validated field names. It knows S1QL-specific operators. It understands platform differences — Windows paths vs macOS paths vs Linux paths. And it avoids the common PowerQuery pitfalls that trip people up.
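To give a feel for the output, here's roughly the shape of query that "unsigned processes from AppData" prompt produces. This is my own hand-written approximation, not the tool's verbatim output, and the field names (src.process.verifiedStatus in particular) should be checked against the schema in your own console:

    event.type = "Process Creation" AND src.process.image.path contains "\AppData\" AND src.process.verifiedStatus != "verified"

Depending on the query language version, those backslashes may need escaping ("\\AppData\\") — which is exactly the kind of pitfall the tool is meant to absorb for you.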
You can also throw IOCs into the natural language input — it'll combine them with your behavioural description and give you a single query that covers everything. Super useful when you have partial intelligence and a hunch.
-------------------------------------------------------------------------------------------------------------
Threat URL to IOC Hunt Query
This is the feature that I'm honestly most proud of, and I think it's what sets this tool apart from anything else out there.
Here's the scenario. A new threat report drops — maybe from CISA, maybe from a vendor blog, maybe from a researcher on Twitter. You read through it, you manually copy the IOCs into a spreadsheet, you format them into S1QL queries, you double-check the syntax... and 45 minutes later you finally have something you can run.
With the CE S1 Assistant, you just paste the URL. That's it. The tool fetches the article, reads every section including tables, code blocks, appendices, and footnotes, and extracts every confirmed IOC it can find:
- SHA256, SHA1, and MD5 hashes
- IP addresses — C2 servers, download servers
- Domains and URLs
- Malware-specific file paths
- Process names and command line patterns
- Even registry keys for Windows persistence
Then it builds a multi-layered IOC hunt query covering all detection angles: process hashes, file hashes, network connections, DNS requests, URL access, file path creation, and command line execution. Each block is commented so you know exactly what each section is catching.
I tested it against the SparkRAT threat intelligence report from hunt.io, and the results were impressive — it correctly extracted 3 SHA256 hashes, 4 C2 IPs, 18 C2 domains, 6 malware-specific file paths, 4 process names, and 2 command line IOCs. It built a production-ready 7-block detection query with zero manual input from me, along the general lines of the sketch below.
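For readers who haven't seen a layered IOC hunt query before, here's a heavily simplified sketch of the structure — two blocks instead of seven, placeholder IOCs instead of real ones, and field names you should verify against your own console's schema:

    Block 1 — known-bad hashes on process or file events:
    src.process.image.sha256 in ("<sha256-1>", "<sha256-2>") OR tgt.file.sha256 in ("<sha256-1>", "<sha256-2>")

    Block 2 — C2 network infrastructure:
    OR dst.ip.address in ("<c2-ip-1>", "<c2-ip-2>") OR event.dns.request contains "<c2-domain>"

The real output chains more blocks of the same shape (URLs, file paths, command lines), each labelled so you know what it catches.
-------------------------------------------------------------------------------------------------------------
The Threat Intelligence Dashboard
Before you hunt, you need context. What's active right now? What CVEs are being exploited in the wild? What C2 infrastructure is live? The CE S1 Assistant pulls live threat intelligence from eight industry sources and surfaces it right in the dashboard:
- CISA KEV for known exploited vulnerabilities
- AlienVault OTX for community threat reports
- MalwareBazaar for malware samples
- ThreatFox for IOCs from active threat actors
- Feodo Tracker for botnet C2 infrastructure
- URLhaus for malicious URLs
- MITRE ATT&CK for technique mapping
- IPsum + C2-Tracker for high-confidence malicious IPs
All feeds sync automatically every 24 hours, and you can trigger a manual sync any time. The part I really like is that every threat entry in the dashboard has a one-click "Generate Query" button — so you see a threat, you click the button, and you've got a hunt query ready to go. I'm adding more feeds and working on integrating my own intelligence so I can connect it all together.
-------------------------------------------------------------------------------------------------------------
The Query Library — 70 Prebuilt Queries
Not every hunt starts from scratch. Sometimes you just need a solid starting point — a known-good query for a common scenario that you can tweak for your environment. The Query Library has 70 curated, validated S1QL queries covering the most common threat hunting scenarios across all major platforms. Windows stuff like credential access, lateral movement, and privilege escalation. macOS persistence via LaunchAgents, TCC bypass, keychain access.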
Linux cron persistence, reverse shells, rootkit indicators. Defence evasion techniques like PowerShell abuse, LOLBins, AMSI bypass. And exfiltration patterns like DNS exfil, cloud upload detection, and C2 beaconing.
Every query is categorised, tagged, and ready to copy. If you're just getting started with threat hunting in SentinelOne, this library alone will save you weeks.
-------------------------------------------------------------------------------------------------------------
Custom Rule Generator (STAR Rules)
Beyond ad-hoc hunting, SentinelOne has STAR rules — detection rules that run continuously and trigger responses in the console. You can also use them for hunting. They use a different syntax from PowerQuery, which means you need to learn yet another format. You can give IOCs directly to the CE S1 Assistant and it'll create a custom rule, or you can use the same output for Deep Visibility hunting — your choice.
-------------------------------------------------------------------------------------------------------------
Query History
Every query you generate gets saved to the history log with full context — the original input, the generated query, MITRE ATT&CK technique tags, severity rating, token usage, and estimated cost. You can review, copy, and reuse past queries without regenerating them. It's one of those features that sounds small until you've been using the tool for a week and you're constantly going back to previous queries.
-------------------------------------------------------------------------------------------------------------
Who Is This For?
Like I said — I built this for myself first. Every time I needed to write a query, I was tired of digging through documentation to find the right field format. Now I just describe what I want and get a query back. But if you're any of these people, it'll probably help you too:
- Experienced threat hunters — You know exactly what you're looking for, but S1QL syntax slows you down. This lets you hunt at the speed of thought.
- Junior SOC analysts — You understand threats conceptually, but you haven't had time to master S1QL yet. Now you can generate queries on day one and learn from the output.
- Incident responders — A new threat report drops and you need a detection query in minutes, not hours. Paste the URL, get the query, start hunting.
-------------------------------------------------------------------------------------------------------------
What's Coming Next
I'm working on a query feedback system so analysts can report issues directly and I can fix things from the backend. There's also a syntax validator in the works that will check query structure before you paste into SentinelOne. And I'm planning multi-instance sync so you can share query history and threat intel across deployments. I've got a lot of ideas. It's going to keep getting better.
-------------------------------------------------------------------------------------------------------------
Want to Try It?
The tool does have running costs, so I'm not leaving it wide open — but if you want to test it, just ping me on LinkedIn. I'll create an account for you so you can give it a spin. I've got a lot of friends who use SentinelOne, and anyone who wants to try it is welcome. No catch, no sales pitch. I just want feedback from real analysts who use S1 every day.
The tool is live at: https://s1copilot.onrender.com/
-------------------------------------------------------------------------------------------------------------
About Cyberengage
So why the name Cyberengage? Honestly, it's because of my website — I was solving a problem for myself and figured other people might find it useful too. Cyberengage is my platform for practical knowledge. Not theoretical. Not 200-page whitepapers. Actual information anyone can use to get started with security today. The CE S1 Assistant is my first major project. If you want to follow along, check out https://www.cyberengage.org/. And if you have ideas for tools you wish existed, I'm always listening.
--------------------------------------------------Dean-----------------------------------------------
Look, I'm not going to pretend this tool is perfect — no tool is. But I've worked hard to get it as close as possible. You might get false positives or queries that need tweaking. That's normal. You narrow those down, adjust the filters, and you're good. The goal was never to replace your judgment — it's to save you the 20 minutes you'd spend fighting syntax so you can focus on the actual hunt.
- How a Single Behavioral Indicator in SentinelOne Uncovered a Full Infostealer Attack
Okay, I know — another SentinelOne article. But hear me out. What I'm about to show you changed how I think about detection engineering, and I genuinely can't stop thinking about it.
If you've been following this series, you already know I covered the Detection Center in the last article: https://www.cyberengage.org/post/sentinelone-detection-center-library-rules-emerging-threats-and-what-it-all-actually-means
Go check that one out if you haven't — link at the top. But today? We're going somewhere slightly different. We're talking about Indicators — and specifically, why they might be one of the most underrated features SentinelOne quietly ships with every agent deployment.
-----------------------------------------------------------------------------------------------------------
Let's Start With What You Already Know
If you've spent any time with SentinelOne, you know about Deep Visibility. That's the data lake where S1 stores everything the agent captures — every process creation, network connection, file event — retained for as long as your subscription allows. It's basically a time machine for your endpoints.
You also know S1 has detection engines running under the hood. We touched on those in my SentinelOne series. But here's the thing I want to highlight today: those engines aren't just detecting threats — they're also tagging events with metadata. Specifically, they're attaching what SentinelOne calls Behavioral Indicators.
-----------------------------------------------------------------------------------------------------------
So What? Why Should I Care?
Here's the thing most people miss: you don't need a hash, a domain, or an IP address to write a detection rule. You can write a STAR Custom Rule using just an indicator name.
I know what you're thinking — "that's going to fire everywhere, false positives galore." And yes, you'll need to tune it. But let me show you how powerful this actually is with a real-world example.
That single line. That's it. No hashes, no IPs, no paths. Just an indicator name that SentinelOne's engine already stamps on suspicious events in Deep Visibility.
-----------------------------------------------------------------------------------------------------------
How This Actually Caught an Infostealer
This is where it gets real. A massive thank you to my friend Jeremy Jethro. He's the reason I'm writing this article.
An alert triggered. On the surface it looked completely unremarkable — Visual Studio activity, something most analysts would glance at and close as a false positive. But the alert was triggered based on a behavioral indicator, not a signature. And Jeremy being Jeremy, he didn't just close it. He dug. What he found underneath was a full infostealer execution chain. Here's a sanitized summary of what the timeline looked like:
The entire above chain — discovered because of one behavioral indicator that an analyst almost dismissed as a false positive.
-----------------------------------------------------------------------------------------------------------
The Real Lesson Here
SentinelOne — like every EDR — will miss detections sometimes. That's just reality. But here's the thing people get wrong: a missed detection doesn't mean missed data. Deep Visibility is capturing everything, every second, and the agent is silently tagging behavioral activity the whole time. Your job as a detection engineer or threat hunter isn't to wait for an alert to fire. It's to build rules that surface what the engines are already seeing.
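To make that concrete, here's the shape such a rule condition takes. This is a hedged sketch with a placeholder indicator name, not Jeremy's actual rule; real indicator names are visible on tagged events in Deep Visibility, and tuning usually means excluding the processes that legitimately trigger them:

    indicator.name = "<BehavioralIndicatorName>"

    (tuned version, excluding a known-good process)
    indicator.name = "<BehavioralIndicatorName>" AND src.process.name != "devenv.exe"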
Behavioral indicator-based STAR rules are exactly how you do that. One note: there is a delay in STAR Custom Rule detection. I've written a full breakdown of that elsewhere, but the delay doesn't mean you ignore it. It means you account for it.
-----------------------------------------------------------------------------------------------------------
- Browser Forensics Just Got Way Easier — And It's Free
Okay let me be real with you for a second. Browser forensics done manually? It's a pain. You're digging through SQLite databases, remembering artifact locations, writing queries — and if you're doing it with free tools, it only gets worse. I actually built a full series on how to do this manually if you want to go deep on it — link here: https://www.cyberengage.org/courses-1/introducing%3A-browser-forensics-%E2%80%93-your-ultimate-guide-to-manual-analysis
But today? I found a tool that makes all of that dramatically simpler. And it pairs beautifully with KAPE, which if you know me, you know I love.
-------------------------------------------------------------------------------------------------------
Step 1 — Collect Your Artifacts With KAPE
Before the tool does anything, you need to actually collect the browser artifacts off the system. KAPE handles this perfectly.
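If you're running PowerShell, the collection is a one-liner. A hedged sketch: the WebBrowsers target is a standard KAPE target, but the destination folder here is my own choice, and you'll want an elevated prompt:

    .\kape.exe --tsource C: --tdest C:\Triage\WebBrowsers --target WebBrowsers

If you prefer the GUI, even easier — just tick WebBrowsers as your target.
-------------------------------------------------------------------------------------------------------
Step 2 — Meet the Tool
Drum roll... 🥁
https://github.com/acquiredsecurity/forensic-webhistory
That's it. That's the tool. And I genuinely love it. What makes it special? You can run it on Windows, WSL, Linux, or Mac. Doesn't matter where the evidence came from — Mac, Windows, whatever — the tool just reads the SQLite files directly. Cross-platform by nature.
-------------------------------------------------------------------------------------------------------
Running It on Windows
Download the executable, run as Administrator, and you get this menu:
Select 1, point it at your KAPE output folder, choose where you want results saved, hit enter — done. About a minute later you have a clean Excel output. That's it. No SQL queries, no manual path hunting.
Output! (Analyse all the output with another favourite tool of mine, Timeline Explorer)
-------------------------------------------------------------------------------------------------------
What Browsers Does It Support?
-------------------------------------------------------------------------------------------------------
Bonus — Parsing Mac Evidence on WSL
This is where it gets cool. I had a Mac artifact set collected using UAC (Unix-like Artifacts Collector) and I wanted to parse it on Windows via WSL2. Here's the exact command I ran:
MAC output in excel
-------------------------------------------------------------------------------------------------------
Why I Actually Like This Tool
Look, paid tools like Magnet AXIOM or Cellebrite make this trivial — but they cost money, sometimes a lot of it. This tool gives you clean Excel output, covers every major browser, runs cross-platform, and pairs with KAPE out of the box. For anyone doing DFIR on a budget or just learning the craft, this is genuinely one of the best free tools out there right now. Go try it. You'll get it immediately.
-------------------------------------------------------------------------------------------------------
If this helped — share it, react, drop a comment. More coming.
-----------------------------------------------Dean----------------------------------------------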
- SentinelOne Detection Center — Library Rules, Emerging Threats, and What It All Actually Means
Okay so if you've been following this SentinelOne series, you know we've covered a lot of ground. Complete series: https://www.cyberengage.org/courses-1/mastering-sentinelone%3A-a-comprehensive-guide-to-deep-visibility%2C-threat-hunting%2C-and-advanced-querying
But this one is genuinely exciting — SentinelOne just dropped something that takes a big burden off security teams, especially those who don't have the time or expertise to write custom detection rules from scratch. It's called the Detection Center, and the headline feature is the Library tab — a collection of pre-built detection rules created by the SentinelOne research team that you can switch on immediately. No query writing. No logic to figure out. Just activate and go.
Before we dive in: I researched this beyond the documentation — things that weren't immediately obvious from it. The answers cleared up a lot of confusion, and I'm including everything I learned throughout this article. So this isn't just a feature walkthrough — it's the feature walkthrough plus the answers you'd get if you spent 60 minutes researching or asking the support team.
-----------------------------------------------------------------------------------------------------------
What Is the Detection Center?
The Detection Center is the new unified home for all your detection rules — both the ones you write yourself and the ones SentinelOne's research team maintains. You get to it from the sidebar: click Detections.
It has two tabs:
- Custom tab — this is where your own rules live. Everything you've built, everything you're managing yourself. You can view, edit, create, and manage rules here.
- Library tab — this is the new bit. Pre-built, advanced detection rules from the SentinelOne research team, ready to activate.
The full-screen view shows you each rule's name, description, severity, MITRE tactics, data source, category, status, and when it last triggered an alert.
One important thing to understand up front: the Detection Center is available in the Singularity Operations Center (SOC) interface only. If you're still on the legacy Management Console, you won't see the Library tab there. More on why that matters in a moment.
-----------------------------------------------------------------------------------------------------------
The Question I Researched — Console Availability
When I first saw that library rules were only in the SOC interface, my immediate question was: why? And more importantly — if I enable rules in SOC, do they do anything if I'm still partly using the legacy console? Here's exactly what I found:
-----------------------------------------------------------------------------------------------------------
How Library Rules Are Different From Your Existing Detection Engines
This was probably the most important question I had. SentinelOne already has detection engines — behavioral AI, static AI, reputation — so what exactly do these library rules add?
The answer is that they're entirely separate. The engines run automatically on endpoint activity using SentinelOne's core AI. Library rules are query-based — they look at your telemetry data (stored in Singularity Data Lake) and fire when a specific set of conditions is met. You're essentially telling the platform "alert me whenever X, Y, and Z happen together." They don't replace the engines.
They sit alongside them and expand what you can detect — especially for scenarios the engines weren't built for, like cloud activity, identity events, or very specific behavioral patterns that require correlating multiple data points.
-----------------------------------------------------------------------------------------------------------
The Three Categories of Library Rules
Not all library rules behave the same way. SentinelOne has split them into three enablement categories, and understanding this is important before you start activating things.
- Auto enabled by default — these are turned on across all environments automatically. You don't need to do anything. You can disable them if they're not right for you, and your opt-out choice will be remembered even after platform updates.
- Disabled by default — available in the library, but you have to manually switch them on. These are typically more specialised rules that don't make sense for every environment.
- Emergency detection — this category is activated immediately in response to global outbreaks. If there's a major zero-day or widespread attack campaign happening, SentinelOne can push these out automatically.
Example: Emergency detection
-----------------------------------------------------------------------------------------------------------
Most Important: Emerging Threat and Core Rule Labels
Inside the library you'll see two label types on certain rules: Emerging Threat and Core. Here's what they mean.
Core rules are rules that SentinelOne recommends for most environments — broadly applicable, well-tested detections. Emerging Threat rules are specifically about evolving cyberattacks — newer tactics, active campaigns, things that are happening right now rather than established patterns.
You can bulk-activate rules by label using the Automatic Detections by type button in the top-right corner of the Detections dashboard. Click it, select the labels you want, and hit Save. When an alert is triggered by one of these labelled rules, the label appears next to the alert name in the Alerts page.
One display note from the documentation: due to space constraints in the UAM drawer, only a single label is shown even if a rule has both. Emerging Threat takes display priority over Core.
-----------------------------------------------------------------------------------------------------------
Activity Logs — What Gets Recorded When You Change Rules
Every time you enable or disable Emerging Threat or Core rules at any scope level, an activity log entry is generated. This matters for audit trails and change management.
-----------------------------------------------------------------------------------------------------------
My Question About EDR-Specific Rules
Here's something that genuinely confused me when I first looked at the library. A lot of the rules are for CloudTrail, Okta, and other non-endpoint sources. So I wanted to know: is there a way to filter down to just the rules that are relevant to traditional EDR? Their answer was simple and useful:
-----------------------------------------------------------------------------------------------------------
New Workflow Features — Alert Simulation and Multi-Instance View
Two more features worth calling out quickly.
Alert Simulation lets you test a rule against recently ingested data before activating it in your live environment. You can see what alerts would have fired without triggering any actual responses or mitigations. It's available for rules using Query Language 2.0.
Rules with protected logic or scheduled intervals won't show the simulation option.
Multi-instance view lets you open multiple rules side-by-side in floating panels for comparison. You can resize panels, keep them minimised at the bottom of the screen, and copy details without losing your place in the main rule list.
Example:
-----------------------------------------------------------------------------------------------------------
Quick Reference
------------------------------------------------------Dean--------------------------------------------------
Why This Feature Is a Big Deal
Here's my honest take on this — and the reason I wanted to write about it.
Most security teams have at least a few people who are great at responding to incidents but don't have the time or background to sit down and write detection logic from scratch. Custom rules in any platform require you to understand the query language, know what telemetry fields to look for, understand what a "normal" baseline looks like, and then figure out how to express a threat pattern in code. That's a lot to ask.
What SentinelOne has done with the Library is essentially say — we'll do that part for you. Their research team is tracking emerging threats full time. When a new attack pattern shows up in the wild, they can push a rule to your environment automatically. You don't have to read the threat intel report, understand the technique, write the query, test it, and deploy it. It's already there.
For smaller teams, for analysts who are more blue team than threat hunter, and for organisations that want solid detection coverage without hiring a dedicated detection engineer — this is genuinely useful. The Emerging Threat category especially. The whole point is that SentinelOne's researchers are watching the threat landscape so you don't have to react from scratch every time something new hits.
This is the kind of feature that makes a real difference — not just on paper, but on a Tuesday afternoon when something new is spreading and you already have coverage before you've even heard about it.
------------------------------------------------------Dean--------------------------------------------------