
Search Results


  • Memory Forensic vs EDR – Talk

    If you look at how cybersecurity has evolved over the past few years, one thing becomes very clear: we finally have the horsepower to see what's actually happening on our systems in real time. Thanks to cheaper storage, faster processing, and advances in forensics, we can now monitor both live and historical activity like never before. And that visibility isn't just for show — we can act on it, whether automatically or manually, before attackers get too comfortable. A big part of this change is due to a new generation of Endpoint Detection and Response (EDR) tools. By the way, I have written complete series on SentinelOne and Carbon Black. If you want, you can check out the links below:

    SentinelOne: https://www.cyberengage.org/courses-1/mastering-sentinelone%3A-a-comprehensive-guide-to-deep-visibility%2C-threat-hunting%2C-and-advanced-querying
    Carbon Black: https://www.cyberengage.org/courses-1/carbon-black-edr

    Continuing where we left off: these solutions don't just sit and wait for an alert. They use pattern recognition, heuristics, and machine learning on the back end to automatically block suspicious actions. But what really sets EDR apart is that it supports both detection and response. That means security teams aren't just watching attacks happen — they can dig into the data, hunt for threats at scale, perform historical searches, and quickly understand how far an intrusion reaches. Here's why this matters: once you identify an indicator of attack, being able to look backward in time and see where that same behavior occurred across the network can drastically reduce the time it takes to contain a threat. It also makes life more difficult for attackers, because their methods and patterns get exposed.

    -------------------------------------------------------------------------------------------------------------

    Why Memory Matters More Than Ever

    Modern threats aren't playing by the old rules. Attackers are moving away from traditional, file-based techniques because they know security tools are watching disk activity. Instead, many attacks now live in memory — the rise of "fileless" malware is a perfect example. That means in-memory detection is no longer optional. It's critical. EDR tools focus heavily on memory analysis and event tracing, which allows them to catch malicious activity involving PowerShell, WMI, code injection, obfuscation, and other stealthy techniques. Because many EDR platforms have kernel-level access, they can see details that traditional antivirus tools would miss. Some common data points EDR tools capture include:

    Process information
    Windows API usage
    Command line history
    Process handles and execution tracing
    Suspicious thread creation and memory allocation
    DLL injection and rootkit techniques
    Host network activity

    -------------------------------------------------------------------------------------------------------------

    EDR vs Memory Forensics — They're Not the Same

    It's important not to confuse EDR with full forensic tools. No one can capture every event across every system all the time — it would be impossible. A single device can log millions of events daily. For instance, Sysinternals Process Monitor can capture over 1,000 events per second, while an EDR system might intentionally limit itself to around 20–30 events per minute to avoid slowing the machine down. EDR tools focus on scale and practicality. They record a carefully chosen list of data points rather than everything under the sun.
    You typically can't customize that list, but you get lightweight coverage across the entire environment. On the other hand, forensic tools aim for completeness. They capture entire memory and disk images, helping analysts tell the full story of an attack — including activity that may have happened before EDR was installed. That's why EDR should be seen as a supplement, not a replacement, for:

    Network monitoring
    SIEM log collection
    Deep memory/disk forensics

    EDR is great for real-time detection and quick investigation, but when you need deep answers, forensic tools are still king.

    -------------------------------------------------------------------------------------------------------------

    Tools Are Only As Good As the Analyst

    I talk a lot about artificial intelligence, automation, and machine learning in security — and those technologies absolutely help — but they aren't magic. At the end of the day, human analysts are still essential. Analysts need to understand how attackers think, what techniques they use, and how to connect data from multiple sources. Strong memory forensics and process analysis skills make EDR dramatically more effective. When you know what "normal" looks like, it becomes much easier to spot what doesn't belong. The truth is, traditional forensics might eventually uncover everything EDR can reveal, but it would take much longer. EDR brings everything together in one place, speeding up identification of malicious activity and helping analysts make faster decisions. The goal is simple: use powerful tools, but keep strong foundational knowledge. That foundation is what lets you sort normal behavior from abnormal and respond confidently when something looks off.

    --------------------------------------------Dean-----------------------------------------------------------
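    To make the list of EDR data points above a little more concrete, here is a minimal PowerShell sketch of the kind of process and command-line telemetry an agent records continuously — pulled manually here as a one-off snapshot. The property names are standard Win32_Process fields; an EDR would enrich and stream this automatically rather than sample it on demand.

    # One-off snapshot of process + command-line telemetry
    # (the sort of data an EDR agent collects continuously in the background)
    Get-CimInstance Win32_Process |
        Select-Object ProcessId, ParentProcessId, Name, CommandLine, CreationDate |
        Sort-Object CreationDate -Descending |
        Select-Object -First 20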

  • Deep Dive: How Dropzone AI Investigates Alerts (Example Explained)

    In the previous article, I explained the Dropzone AI dashboard and overall features. Now, let’s get into the real action — how Dropzone actually investigates an alert , using Panther  as the example. Let’s begin. How Alerts Flow From Panther → Dropzone Let’s say you’ve integrated: Panther data source Panther alert source This means: Every alert Panther generates will be picked up automatically by Dropzone. No manual work. No need to forward anything. Dropzone grabs the alert → starts investigation immediately. Alright, good. ----------------------------------------------------------------------------------------------------------- Dropzone Picks the Alert & Starts Investigation Once Dropzone receives the alert, it begins analysis instantly. After a short time (usually under a minute), the AI spits out: Full investigation Conclusion (benign, suspicious, or malicious) Summary Top findings And here’s the important part: If Dropzone marks an alert as “Benign,” 99.9% of the time it is actually benign. So you can safely close it. Also important: If you close the alert in Dropzone, it gets closed in Panther as well. Any note or comment you add → also gets added to Panther. Super useful. ----------------------------------------------------------------------------------------------------------- Opening the Alert – The Investigation Screen When you open the alert, this is what you see (I had to hide sensitive info in the real screenshot): Alert Summary AI Conclusion Top Findings Links to original tool (Panther button on the left side)[ It’s extremely clean and simple. From here, you have 2 choices as an analyst: Method 1: Investigate manually in Panther Click the Panther button → open the alert in Panther → do your normal manual investigation. Method 2: Use Dropzone’s full investigation This is the easier method, and I’ll explain it below. After reviewing the alert: Select your conclusion (benign/suspicious/malicious) Add your comment Save That’s it. Alert will close in both systems. ----------------------------------------------------------------------------------------------------------- Findings Tab – The Brain of Dropzone The Findings  tab is where you see exactly what questions the AI asked  during its investigation. Dropzone literally interrogates the alert: “Has this IP been used before?” “Is this user associated with risky activity?” “Does this resemble past malicious alerts?” “Is this event common for this device/user?” “Any MITRE technique indicators?” Every question → AI’s response → Final verdict. You can click on each question to see: What Dropzone checked What information it used Why it reached the verdict This transparency is what makes Dropzone so powerful. It’s like sitting inside the brain of an analyst. ----------------------------------------------------------------------------------------------------------- Evidence Locker – All Evidence in One Place This is basically a collection of everything Dropzone used during the investigation. Examples: IPInfo lookup results Geo location ASN Device/user history File reputation Prior alerts Previous analyst comments Other tools' data You can click View Response  to see details. Dropzone also checks previous 5 alerts related to this case. It looks at: What analysts concluded What comments they left Whether those alerts were benign or malicious This is where your past decisions matter — Dropzone keeps learning from your behaviour. 
I call this “AI learning from the past”  😂 ----------------------------------------------------------------------------------------------------------- Remediation Tab – What To Do If Malicious If the alert is malicious, Dropzone provides: Recommendations Steps to take What should be contained What needs review At top, you’ll see Containment Actions . Dropzone currently supports automatic remediation for a few integrated apps (for example: Microsoft Defender, Okta, etc.) If enabled: AI will automatically isolate the machine or disconnect the user if the alert is malicious. This is insanely powerful. ----------------------------------------------------------------------------------------------------------- Change Log – Timeline of Everything The Change Log tab shows a timeline: When alert came in When investigation started Any status change Analyst comments If alert was closed If remediation was triggered It’s a clean, readable timeline. ----------------------------------------------------------------------------------------------------------- Second Method of Investigation – “Ask a Question” Earlier, I mentioned two ways to investigate: 1. Manual investigation using Panther 2. Using Dropzone findings Here’s the third  and easiest  method: Just ask Dropzone a question. Example questions: “Has this IP been used by any other user?” “Is this user agent associated with this IP anywhere in our logs?” “Show me all events linked to this process.” Just type your question like human language → Dropzone runs a full investigation across all logs and gives you the answer. No querying. No log diving. No manual searching. One of my favourite features. ----------------------------------------------------------------------------------------------------------- False Positive Example – Benign Alert Let’s take another example: MFA disabled by user. Normally this may look suspicious. But in reality, this was normal behaviour. Dropzone marked it as benign , and if you read the investigation, it clearly tells you why. Here you don’t need to investigate — just approve and close. ----------------------------------------------------------------------------------------------------------- Why AI Saves Massive Time (Realistic SOC Example) Let’s say 10 alerts arrive: 9 are false positives 1 is true positive A human analyst — even a very good one — will take about 45 minutes  to investigate all 10. There is also the chance they may get tired or distracted and miss the true positive. Now compare this with Dropzone: It will mark 9 alerts as benign  within 10 minutes It will highlight 1 true positive  clearly You just review the 1 true positive Approve the 9 benign ones This saves enormous time. This is why AI is becoming a huge part of SOC. Not because it replaces jobs, but because: It removes the noise so analysts can focus on the real threats. ----------------------------------------------------------------------------------------------------------- Final Thoughts You’ve now seen exactly how a Dropzone AI alert looks, how it thinks, how it asks questions, and how to review it. In the next article, I will show: Sentinel One, CrowdStrike Example --------------------------------------------------Dean---------------------------------------------------- Check out next article below: https://www.cyberengage.org/post/dropzone-ai-final-conclusion-what-all-these-examples-really-show

  • Dropzone AI Final Conclusion – What All These Examples Really Show

    Now that I’ve shown you investigations from Panther  — I think you can clearly see what Dropzone AI is actually doing behind the scenes. No matter which security tool generates the alert: Dropzone picks it up instantly Investigates it faster than any human Asks all the important questions automatically Pulls evidence from everywhere Checks historical behaviour Compares with analyst verdicts Correlates with MITRE framework And finally gives you a clear conclusion All of this happens in seconds , not minutes — and definitely not hours. This is why I keep saying: AI is already transforming the SOC team, whether someone believes it or not. Look at the examples again: ✔ SentinelOne → Investigation + Findings + Remediation Conclusion Findings: Remediation ✔ CrowdStrike → Investigation + Findings Conclusion Findings: ✔ Microsoft Sentinel → Investigation + Findings Conclusion Findings: ✔ Splunk → Investigation + Evidence Locker + Findings Conclusion Evidence Locker Findings: Different tools, different alert types…But Dropzone handles all of them with the same speed, same accuracy, and same style. ----------------------------------------------------------------------------------------------------------- Why This Matters (Even if People Don’t Want to Hear It) Let’s be honest: Most SOC analysts today spend 70% of their time doing: Routine triage Repeating basic checks Searching logs Closing false positives This is exactly the work that AI automates perfectly . And when AI can: Analyze 10 alerts in 2 minutes Mark 9 as benign Show you only the real threat Pull evidence from all tools Provide ready-made conclusions Recommend remediation actions Even perform automated remediation …then the role of a SOC analyst changes forever. It’s not about “AI replacing jobs.” It’s about AI replacing the boring part of the job , and you focusing on real incident response. But people who refuse to learn these tools? Those are the ones AI will replace. ----------------------------------------------------------------------------------------------------------- My Final Advice to Every SOC Analyst / IR Engineer If you takeaway one thing from all these examples, let it be this: 👉 Start learning how to work WITH AI, not against it. 👉 Become the person who understands AI-driven investigations. 👉 Learn how to verify AI decisions, not manually do everything. 👉 Focus on deeper skills: threat hunting, forensics, malware analysis. AI is not taking your job. AI is taking your old  job. Your new  job is to supervise, validate, and respond — not chase false positives. Dropzone is just one example. So the smart move? Start upgrading your skills now. ------------------------------------------------Dean------------------------------------------

  • Dropzone AI Dashboard & Investigation Overview

    Your SOC, but finally without the headache. In the previous article, I talked about how AI is changing SOC operations forever — especially tools like Dropzone AI  that automate full investigations. If you ask me which tools I enjoy working with the most, I will always say CrowdStrike , SentinelOne , and Forensic tools . But recently, one tool has impressed me so much that I genuinely feel like every SOC team should see it at least once. And that tool is Dropzone AI . This Article part is all about how Dropzone actually looks and feels  when you use it every day. --------------------------------------------------------------------------------------------------------- The Dropzone Dashboard When you open Dropzone AI, the first thing you see is the Dashboard . And trust me — I love simple dashboards . SentinelOne has one of the best UIs , and Dropzone follows the same philosophy: clean, clear, and not overloaded. The dashboard lets you filter investigations by: Conclusion  (Benign / Suspicious / Malicious) Priority Status Source This is super helpful when you have multiple log sources connected. You can instantly see: Where alerts are coming from Which tools are generating noise Which sources need tuning How Dropzone is handling everything in real-time Lifetime Metrics You also get three very important metrics: ✔ Lifetime Investigations Total number of investigations Dropzone has done for your environment. ✔ Lifetime Median TTI TTI = Time to Investigate . Humans take 30–90 minutes per alert. Dropzone does this in under 20 minutes . ✔ Time Saved This is your “why am I not doing 24/7 shifts anymore” metric. This is the reason I say AI kills alert fatigue . Response Metrics (My favourite) This section shows: The time between the event happening and Dropzone completing the investigation. This is 🔥.Because humans simply cannot  operate with this speed or consistency — especially at 3 A.M. And if you want the best results? Make sure you ingest all logs . Dropzone correlates telemetry across tools — EDR, Identity, SIEM, Cloud — and then produces a final conclusion. More logs = more accurate investigations. Finally, the dashboard also includes: MITRE ATT&CK Correlation For every alert, Dropzone maps it to relevant MITRE techniques. This is extremely helpful for understanding attacks at a glance. --------------------------------------------------------------------------------------------------------- Fleet Dashboard — One Console for All Clients This is a new feature and honestly a game changer for MSSPs. If you manage many clients with different: Domains Tenants Log sources Alert volumes You don’t need to jump into each one separately. The Fleet Dashboard  shows: Total investigations per client Priority breakdown Status breakdown High-level overview of all environments Think of it as a master SOC console . Important: You can only see dashboards here — not  individual alerts. To analyze alerts, you still open that specific client’s workspace. There’s also a search bar on top: Just type the client name → instantly jump into their console. --------------------------------------------------------------------------------------------------------- Investigation Tab — The Heart of Dropzone This is where the magic happens. 
Whenever an alert comes in (CrowdStrike, SentinelOne, Panther, etc.): Dropzone picks it up → Triaged It starts investigation → Running It finishes and gives a conclusion → Benign / Suspicious / Malicious It categorizes into → Urgent / Notable / Informational And then it’s your job to review it. The Review Workflow Once Dropzone gives its conclusion: If you agree→ Approve the review→ Alert moves to Reviewed If you don’t agree→ Add your own analysis→ Change the category (e.g., benign) The right side shows: Queued  alerts Running  investigations Stopped  analysis (if you manually stop one) You don’t have to babysit anything. It runs automatically in the background. This UI is very  clean — honestly easier than CrowdStrike. Dropzone feels more like SentinelOne: simple, smooth, functional. --------------------------------------------------------------------------------------------------------- Ask a Question — AI Threat Hunting for Humans This is hands down one of the best features. You can ask Dropzone anything in human language, such as: “Was this IP seen with any other user?” “Did this hash appear anywhere else in the last 30 days?” “Show me all failed logins from this user across all sources.” Dropzone will go through every  integrated data source: SIEM EDR Identity logs Cloud logs Network logs …and give you a correct answer in under a minute. I tested it. I checked the logs manually. It was 100% correct. Search Result This feature alone saves hours of manual threat hunting. --------------------------------------------------------------------------------------------------------- Context Memory — The Brain of the SOC This part makes Dropzone feel less like a tool and more like a human teammate. This is one feature I truly love. Dropzone remembers your actions and your environment context. Example: You have a user who usually works in Europe but is traveling to the USA for 7 days. You simply write this in human language: “akash@cyberengage is traveling to the USA for 7 days. Login from USA is expected.” Dropzone stores this. Now, for the next 7 days, if the user logs in from the US, the alert will be marked benign automatically. It learns from: Your comments Your decisions Your organization context Seriously… this is next-level SOC automation. And the best part? If you mark the same alert type false positive  10 times→ Dropzone will automatically mark it benign next time. In 20,000+ investigations I observed, Dropzone never missed a true positive . It only produces false positives occasionally, and those are labeled benign — which you can simply approve. --------------------------------------------------------------------------------------------------------- Settings — Custom Strategies, Integrations, and Response Actions Let’s go through the most useful settings. Custom Strategies Think of these like “If this happens → do this” rules, but in AI style. Example 1: EICAR Test File If you often run EICAR tests: You can create a strategy: “If alert contains EICAR hash → mark as benign.” Next time the EICAR test runs? Dropzone auto-marks it benign. Example 2: Critical Assets If you have crown jewels (domain controllers, VIP laptops, financial systems): Create a strategy: “Always mark alerts from this asset as suspicious.” That way, analysts always review them — no risk. Integrations Dropzone supports easy, one-click integrations with tools like: SentinelOne CrowdStrike Microsoft Defender 365 Panther Okta Slack AWS GCP Azure…and many more. 
    There are three parts:

    ✔ Connected Apps — which tools you've connected.
    ✔ Data Sources — where logs are coming from.
    ✔ Alert Sources — which alerts Dropzone should pick up and investigate.

    If Alert Source is not enabled, Dropzone won't triage alerts — it will only analyze data. (Alert Source — SentinelOne configuration example.)

    ---------------------------------------------------------------------------------------------------------

    Response Actions

    This is basically notifications & automation. You can configure Dropzone to send updates to: Slack, Teams, Email, custom scripts, webhooks. Examples: "Send me a Slack message when a malicious investigation is completed." "Trigger a script whenever Dropzone starts analyzing a new alert." This means you don't have to keep Dropzone open 24/7.

    ---------------------------------------------------------------------------------------------------------

    Automatic Remediation

    This is extremely powerful. If Dropzone is integrated with tools like Okta or Microsoft Defender, automatic remediation actions can be taken based on Dropzone's conclusion. Or you can trigger remediation manually from the investigation page — without opening the original tool.

    ---------------------------------------------------------------------------------------------------------

    What's Next? Alert Analysis

    I know this is the part everyone is waiting for. In the next article, I will show you:

    🔥 Real investigation examples
    🔥 CrowdStrike alert → Dropzone output
    🔥 SentinelOne alert → Dropzone reasoning

    ------------------------------------Dean-----------------------------------------------------------

    Check out next article below: https://www.cyberengage.org/post/deep-dive-how-dropzone-ai-investigates-alerts-panther-example-explained

  • Is AI Coming for SOC Jobs? A Real Talk + My First Look at Dropzone AI

    Let’s be honest for a second. I’ve been in forensics and incident response long enough to see the cybersecurity world change fast — but nothing is shaking things up more than AI inside SOCs . And no matter how many people say “ AI won’t take jobs, it will only assist us, ”  the reality I’m seeing in the field is completely different. I’m on calls with security teams, MSSPs, product vendors… and the pattern is the same everywhere: 🔥 Tasks that used to require 20–30 analysts are now being done by 3–4 people — with AI doing all the heavy lifting. 🔥 Threat hunting, alert triage, correlation, enrichment, reporting — all automated. 🔥 24/7 monitoring with no night shifts… because AI doesn’t need sleep. You can ignore it. You can debate it. But you cannot  deny it. AI is already replacing a huge portion of SOC work. So when I say “AI is coming for SOC jobs,”  trust me — this is not fear. This is observation. I personally know teams handling 50 clients with just four analysts , because the AI platform they’re using handles all investigations automatically. This is where the world is going. But okay… let’s pause the rant for a moment. Because today I want to talk about one specific tool that made me smile and feel sad at the same time: ----------------------------------------------------------------------------------------------------- Dropzone AI — The SOC Analyst That Never Sleeps Before I jump in: I need to say something I deeply hate about this industry… The 24/7 SOC Problem I’ve done 24/7 work. You’ve probably done it too. I don’t need to explain how mentally and physically draining it is. Once, I asked my manager: “Why do analysts in Asia have to do 24/7 shifts? Why can’t we do a follow-the-sun model if you already have offices in Europe and the US?” The manager told me: 👉 “India is cheaper. 👉 Other countries’ labor policies won’t allow that. 👉 India’s policies allow it.” And that answer stuck with me. Why should people in one country sacrifice their health and family time just because it’s cheap ? Anyway, that’s a topic for another day. Let’s jump back to Dropzone AI — because this is exactly the kind of tool that makes 24/7 SOCs unnecessary. ----------------------------------------------------------------------------------------------------- Alert Fatigue: The Problem Dropzone Is Trying to Solve If you’ve ever worked in a SOC, you already know: The alert volume is insane. According to research: 90%  of SOCs are drowning in false positives and backlog 80%  of analysts feel they can’t keep up Humans naturally start ignoring alerts when there are too many Attackers actively use this fatigue to slip in quietly False positives are the biggest enemy. When 98 out of 100 alerts are useless, the brain learns to ignore them — and the dangerous ones hide among the noise. This is where tools like Dropzone AI enter the game. ----------------------------------------------------------------------------------------------------- AI SOC Analysts: What They Really Are Let me break it down simply: A normal SIEM tells you:👉 “Hey, something suspicious happened. Good luck.” A SOAR platform automates a workflow you already built manually. But an AI SOC analyst doesn’t just raise alerts — it conducts the entire investigation by itself. According to the 2025 AI SOC Market report: A typical SOC sees ~960 alerts daily 40% never get investigated 66% of SOCs cannot keep up 70% of analysts leave within 3 years  due to pressure This is the crisis. 
    AI SOC analysts solve this by doing what humans don't have time to do: they run end-to-end investigations like a real analyst:

    ✔ Pull evidence from EDR, SIEM, Identity, Cloud
    ✔ Correlate data across platforms
    ✔ Analyze lateral movement, patterns, anomalies
    ✔ Summarize everything in a human-readable narrative
    ✔ Provide recommendations
    ✔ Do all of this in parallel — infinitely

    What takes a human 60–90 minutes, Dropzone AI does in 3–10 minutes. No playbooks. No rules. No babysitting. It reasons through the problem like an actual analyst.

    -----------------------------------------------------------------------------------------------------

    SOAR vs AI SOC Analyst (The Real Difference)

    People confuse these two a lot, so let me clear it up: SOAR = static. It plays back predefined steps; if the workflow breaks, the SOAR breaks. AI SOC Analyst = dynamic. It investigates like a human, adapts based on findings, and requires zero playbooks. In simple words: SOAR follows a recipe. An AI SOC analyst cooks based on whatever is in the kitchen.

    -----------------------------------------------------------------------------------------------------

    Human Analyst vs AI SOC Analyst — A Fair Comparison

    Here's the truth nobody wants to say out loud:

    Aspect | Human SOC Analyst | AI SOC Analyst
    Alert Processing | 25–40 min per alert | 3–10 min per alert
    Availability | 8 hours + breaks | 24/7/365
    Daily Capacity | 10–20 deep investigations | Unlimited
    Consistency | Varies with mood, fatigue | 100%
    Learning Curve | 6–12 months | Instant
    Investigation Depth | Deep for selected alerts | Deep for every alert
    Cost | $75k–150k per year | Subscription

    Yes — it is expensive. But not more expensive than hiring a 20-person SOC team. Especially in India 😅

    -----------------------------------------------------------------------------------------------------

    Why Dropzone AI Got My Attention

    Because this tool actually works. It takes the alerts from: CrowdStrike, SentinelOne, Panther, SIEMs (Splunk, Microsoft Sentinel), identity platforms, cloud logs …and turns them into full investigation reports. No nonsense. No fluff. Actual DFIR-style analysis. But before I show you the investigations and output (especially for CrowdStrike and SentinelOne), I want to start with the dashboard. That will be in the next article.

    -----------------------------------------------------------------------------------------------------

    Final Thoughts (For Now)

    AI is not the enemy. But pretending that AI isn't replacing jobs is just denial. The industry is changing. The SOC model is changing. The skillset needed is changing. Instead of competing against AI, the smart move is to work with it. This article is just Part 1. Next up:

    👉 Dropzone AI Dashboard Deep Dive
    👉 Real alert investigations
    👉 CrowdStrike + SentinelOne examples
    👉 How it handles correlation and storytelling

    -------------------------------------------Dean-------------------------------------------------------

    Check out next article below: https://www.cyberengage.org/post/dropzone-ai-dashboard-investigation-overview

  • SentinelOne Series: The SSO Workaround You’ll Actually Thank Me For

    Hey everyone! Welcome back to another post in my SentinelOne series — if you haven't checked out the earlier ones, I recommend scrolling back and giving them a read. https://www.cyberengage.org/courses-1/mastering-sentinelone%3A-a-comprehensive-guide-to-deep-visibility%2C-threat-hunting%2C-and-advanced-querying%22

    Today, I'm here to share something different — a real-world workaround that helped me fix an interesting SSO problem with SentinelOne.

    ---------------------------------------------------------------------------------------------------------

    The Background

    So here's the situation. There are two kinds of SentinelOne setups you can find yourself in:

    Scenario 1: You Own the SentinelOne Server
    You're the big boss here — you bought the SentinelOne server or have full global-level access on the console. You can create tenants, sites, roles, policies — the whole thing. If you're in this camp, this trick won't apply to you (and you'll soon see why). You already have the keys to the kingdom.

    Scenario 2: You Don't Have Global Access
    Now this is where the magic happens. Let's say you don't own the SentinelOne backend. Instead, you asked SentinelOne (or your MSSP) to create an account for you. So they set you up as a tenant on one of their servers, and you start deploying your client environments under your account. Nothing complex so far — right? Easy stuff.

    ---------------------------------------------------------------------------------------------------------

    The Challenge

    Now imagine this — I've got a client who manages about seven of their own clients under my SentinelOne account, and here's where things start getting interesting: each of those seven clients uses their own SSO (Single Sign-On) setup. Sounds simple, right? But here's the real issue: SentinelOne doesn't make it easy to handle multiple SSO configurations under one parent structure. If my client and each of their clients all have different identity providers (like Okta, Azure AD, Ping, etc.), things quickly become messy when it comes to access and visibility. The client wants their IT team to access the specific client sites they're responsible for — but they should not see the SentinelOne sites or endpoints that I (as the managed security provider) monitor and manage for them. So the problem was clear: how can I let their IT team log in through their own SSO, get to their client's SentinelOne site, and still keep my internal SentinelOne environment completely hidden from them? That's where the workaround comes in. 😉 In short — you want separation between authentication and visibility.

    ---------------------------------------------------------------------------------------------------------

    The Workaround I Used

    Alright, here's what I did (and honestly, it worked beautifully).

    Created a new site in SentinelOne called something like: 👉 Cyberengage Auth Passthrough. This site will have zero endpoints — it's purely there to authenticate users through SSO.
    Configured SSO on this site.
    Deleted all local SentinelOne accounts for the managed IT users from the parent tenant. We don't want anyone logging in locally anymore — SSO all the way.
    Then, had each managed IT user log in via SSO to the new Auth Passthrough site.
    Once they were authenticated through that SSO site, I reassigned them roles for the client sites they actually needed access to.

    And just like that… 🎯 Employees can't see any internal endpoints. Managed IT can access the client sites they support.
    Everyone uses SSO — clean and compliant.

    ---------------------------------------------------------------------------------------------------------

    Why This Works

    Think of it this way — I separated authentication (the SSO part) from data access (the endpoints). The Auth Passthrough site acts as a secure door — people come through it, prove who they are via SSO, and then I decide which rooms (client sites) they can enter after that. This setup keeps SentinelOne access organized, auditable, and most importantly — isolated per client.

    ---------------------------------------------------------------------------------------------------------

    The End Result

    ✅ All users authenticate via SSO.
    ✅ No local accounts left to manage.
    ✅ Employees can't see internal endpoints.
    ✅ Managed IT can view and manage the client monitoring I am responsible for.
    ✅ No global-level permissions needed.

    It's a simple but powerful design when you're operating inside someone else's SentinelOne tenant and need a clear boundary between teams and clients.

    ---------------------------------------------------------------------------------------------------------

    Final Thoughts

    Sometimes SentinelOne setups aren't one-size-fits-all — especially when you're working under another organization's infrastructure or managing multiple clients. You've got to be creative to make SSO, visibility, and role-based access all play nice together. This workaround gave us that perfect balance. If you're struggling with multi-client SSO management in SentinelOne, try this approach — it might just save you a lot of headache (and support tickets 😉).

    --------------------------------------------Dean-------------------------------------------------------

  • Carving Hidden Evidence with Bulk Extractor: The Power of Record Recovery

    Before diving in, I’d like to highlight a comprehensive series I’ve created on Data Carving—feel free to check it out via the link below. https://www.cyberengage.org/courses-1/data-carving%3A-advanced-techniques-in-digital-forensics --------------------------------------------------------------------------------------------------------- If you’ve been in digital forensics long enough, you’ve probably heard about Bulk Extractor  — the legendary tool  that can scan through massive amounts of data and pull out meaningful information like emails, IPs, URLs, and even credit card numbers in record time. But what if I told you there’s an upgraded version that goes beyond basic carving — one that digs deep into the very record structures  of Windows file systems and event logs? Let’s talk about Bulk Extractor with Record Carving (bulk_extractor-rec)   ----------------------------------------------------------------------------------------------------------- Why Bulk Extractor (and this fork) Matters Traditional carving tools (like PhotoRec, Scalpel, or Foremost) are great for recovering deleted files . But they usually focus on whole files — not the records inside them. https://www.kazamiya.net/en/bulk_extractor-rec bulk_extractor-rec , on the other hand, looks for specific forensic record types  — and this is a game-changer. Why? Because it can pull out the small but crucial artifacts that tell us what happened on a system, even when the original files are gone. Here’s what it can recover: EVTX logs  — Windows Event Log chunks NTFS MFT records  — metadata for files and folders $UsnJrnl:$J  — change journal entries (fantastic for timeline work) $LogFile  — transactional logs that reveal filesystem changes $INDEX_ALLOCATION  records (INDX) — directory index data utmp records  — Unix/Linux login/logout records Now, those first five are gold for Windows forensics. These are exactly the artifacts you need to reconstruct activity, detect tampering, or trace attacker movements — especially when original logs or MFT files have been partially overwritten. ----------------------------------------------------------------------------------------------------------- The Smart Part: Record Reconstruction Here’s what I really  love about bulk_extractor-rec : it doesn’t just rip out raw data — it tries to rebuild valid structures . For example, when it carves out Windows Event Log chunks, it doesn’t just dump fragments. It rebuilds them into valid .evtx files that you can directly open in tools like Event Log Explorer  or Eric Zimmerman’s EvtxECmd . That means your recovered logs can be parsed just like normal event logs . This saves hours of manual hex editing or XML parsing — and makes this fork incredibly practical during investigations. ----------------------------------------------------------------------------------------------------------- Working with NTFS Artifacts When carving NTFS-related artifacts (like MFT or USN records), Bulk Extractor outputs two main files: A clean file with all valid records (for example, MFT or UsnJrnl-J) A _corrupted file with invalid or partial records that didn’t pass integrity checks You can feed the valid ones straight into MFTECmd  or similar tools for easy parsing. The corrupted ones can still contain useful fragments . ----------------------------------------------------------------------------------------------------------- Performance and Speed Bulk Extractor is known for one thing — speed . 
    It's multi-threaded, and it doesn't just read surface data — it digs into compressed containers too. Even better, it can process hibernation files (prior to Windows 8) automatically — which often contain tons of evidence about user sessions.

    Focusing on Unallocated Space

    When I'm investigating, I often want to focus carving on unallocated space — that's where deleted or lost records usually live. Since Bulk Extractor isn't filesystem-aware (by design), I use another tool — blkls from The Sleuth Kit — to extract just the unallocated clusters first. Here's how that works:

    blkls image.dd > image.unallocated

    This command dumps all the unallocated data into a new file, ready to be carved by Bulk Extractor. You can even extract slack space (the tiny gaps between files) using the -s switch — useful when you want to catch small remnants left behind by deleted files. (There's a short command recap at the end of this post.)

    -----------------------------------------------------------------------------------------------------------

    Alternatives & Complements

    As I always say, no single tool does it all (especially if we are using open source) — and that's totally fine. I often combine bulk_extractor-rec with other tools to maximize recovery:

    Joakim Schicht's NTFS Tools – specialized parsers and carvers for $MFT, $LogFile, and $UsnJrnl: https://github.com/jschicht
    EVTXtract (by Willi Ballenthin) – carves EVTX records in raw XML format (great for deep event log recovery): https://github.com/williballenthin/EVTXtract

    One gives you structured .evtx logs, and the other gives you raw XML records — a powerful combo!

    -----------------------------------------------------------------------------------------------------------

    Final Thoughts

    If you've never tried Bulk Extractor with Record Carving, you're missing out on one of the most efficient ways to dig deep into deleted or fragmented forensic artifacts. It's fast, multi-threaded, reconstructs readable logs, and supports critical NTFS and EVTX records — all in one go. And best of all? It's free and open-source.

    --------------------------------------------------Dean-----------------------------------------------------
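    As promised above, here's a short command recap of the unallocated-space workflow. Treat it as a minimal sketch: blkls comes from The Sleuth Kit, and for bulk_extractor only the standard -o (output directory) option is shown — scanner names and defaults differ between the mainline build and the bulk_extractor-rec fork, so check your build's help output before relying on specific scanners.

    # 1. Pull only the unallocated clusters out of the image (The Sleuth Kit)
    blkls image.dd > image.unallocated

    # Optionally grab slack space too (-s) to catch small remnants of deleted files
    blkls -s image.dd > image.slack

    # 2. Carve records from the unallocated data with bulk_extractor / bulk_extractor-rec
    #    -o sets the output directory; run the tool with -h to list available scanners
    bulk_extractor -o be_output image.unallocated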

  • Every forensic investigator should know these common antiforensic wipers

    Everyone who does digital forensics has seen wipers. Funny part is attackers and careless admins both sometimes want files gone . Tools that overwrite/delete files — “wipers” — are common and can hide evidence. SDelete (a Sysinternals tool signed by Microsoft) is famous because it can slip past some whitelisting and looks “legit” on a system. But SDelete is only the tip of the iceberg — there are other tools and each leaves its own marks. Knowing those marks helps you figure out what happened  even when the file contents are gone. ------------------------------------------------------------------------------------------------------------ The main players Here are the common tools investigators run into: SDelete (Sysinternals)  — overwrites file clusters and free space. Popular because admins use Sysinternals and signatures make it look benign. BCWipe (commercial)  — very thorough, has features to wipe MFT records, slack space, and other NTFS artifacts . Commercial product; trial exists. Eraser (open source)  — long-lived tool. Renames files many times (seven by default), overwrites clusters , etc. cipher.exe (built-in Windows)  — intended for EFS tasks but /w:  can wipe free space. Very stealthy because it’s a system binary. ------------------------------------------------------------------------------------------------------------ What these tools try  to hide — and what they often fail to hide Wipers attempt to remove traces of a file. But Windows and NTFS create lots of metadata, logs, and side-files that are harder to fully erase. From an investigator’s point of view, the goal  is often just to prove the file existed  and that wiping happened  — not necessarily to recover the original content. Commonly left-behind evidence includes: USN Journal ($UsnJrnl) entries  — rename, delete, data-overwrite, stream changes. Wipers produce many USN events if they rename/overwrite repeatedly. NTFS LogFile ($LogFile)  — sometimes contains original file names or operations even when MFT entries are gone. MFT records ($MFT)  — deleted or overwritten MFT entries, reused MFT entry numbers (tools may create new files using same MFT index). Prefetch / evidence-of-execution  — prefetch files and other execution traces often show the wiper ran. ADS (Zone.Identifier)  — some tools (e.g., Eraser) leave alternate data streams like Zone.Identifier intact, which can reveal source URLs or original filenames. Temporary directories / filenames  — e.g., EFSTMPWP left behind by cipher.exe, or ~BCWipe.tmp created by BCWipe. Odd timestamps  — some wipers zero timestamps (e.g., set to Windows epoch Jan 1, 1601) which looks suspicious. Large flurries of rename / DataOverwrite / DataExtend events  — pattern of many sequential operations in a short time window. ------------------------------------------------------------------------------------------------------------ Short tool profiles + investigator takeaways SDelete What it does:  Overwrites clusters and free space. Why it’s sneaky:  Signed Sysinternals binary → looks legitimate. Look for:  USN and MFT evidence of overwrites and prefetch/execution traces for sdelete.exe. BCWipe What it does:  Commercial, deep-wiping features (MFT, slack, NTFS $LogFile features advertised). Real behavior:  Very noisy in NTFS journals — lots of renames, data-overwrite events, and creative file/directory creation to overwrite metadata (e.g., ~BCWipe.tmp, ... filenames). 
Look for:  ~BCWipe.tmp directories, massive $UsnJrnl activity in a short time, entries that show rename → overwrite → delete sequences, prefetched BCWipe executables. Eraser What it does:  Open-source, renames files repeatedly (7 passes by default), overwrites clusters. Quirks:  Leaves Zone.Identifier ADS sometimes; renames and timestamp zeroing (Jan 1, 1601) are common. Look for:  Repeated rename patterns in USN, C (change ) time updated but other times zeroed, leftover ADS pointing to original download URL. cipher.exe What it does:  Windows built-in — /w: wipes free space by creating temporary files to overwrite free clusters. Quirks:  Leaves a directory named EFSTMPWP at the root (observed persisting across reboots in many tests), creates many fil.tmp files while running. Look for:  EFSTMPWP directory, temporary fil*.tmp files, prefetch entries showing cipher run (and Windows event traces of disk activity). ------------------------------------------------------------------------------------------------------------ Example artifact patterns to search for: You can use these heuristics in your triage/scripting/search: Search USN journal for many sequence-like events (rename → data overwrite → delete) within seconds — suspicious. Look for directory names and temporary filenames: ~BCWipe.tmp, BCW, filXXXX.tmp. Check prefetch for unexpected executables: sdelete.exe, bcwipe*.pf, eraser*.pf, cipher*.pf. Scan for Zone.Identifier ADS on recently deleted files (may include original download URL or filename). Find files with timestamps set to zeroed timestamps — a potential sign of Eraser or timestamp wiping. Look for an MFT entry number reused by a later file or directory — indicates the original MFT record was targeted and may have been overwritten. Parse $LogFile (transaction log) for strange entries that mention original file names even when $MFT shows deletion. ------------------------------------------------------------------------------------------------------------ Investigator workflow Snapshot everything  (image the volume) — you need a forensically sound copy. Parse the MFT and USN  — timeline representation is crucial. Many wipers create big bursts in the USN journal that are easy to see in a timeline. Check $LogFile and shadow copies  — sometimes these hold remnants of filenames or older versions. Search ADS  — Zone.Identifier can unexpectedly reveal original source/location. Look for prefetch and execution evidence  — often the wiper executable will leave a prefetch or service entry. Remember SSD caveats  — wear-leveling and TRIM can make complete overwrites unreliable on SSDs; artifacts can be missing or inconsistent. Correlate with logs  — application logs, Windows event logs, and backup logs can confirm when delete/wipe activity occurred. ------------------------------------------------------------------------------------------------------------ Caveats and testing notes (be honest about limits) Tests often assume the active file clusters were overwritten — but you can’t always prove every  copy was overwritten (especially on SSDs). Some wipers advertise wiping certain structures (like $LogFile), but testing showed mixed results — so always verify with artifacts rather than relying on vendor claims. ------------------------------------------------------------------------------------------------------------ Short example: cipher.exe /w:C: — what to expect If someone runs cipher.exe /w:C: after deleting files: You may see EFSTMPWP at C:\ root. 
Temporary fil####.tmp files created and deleted during the run. No direct evidence of which files were wiped (cipher writes free space), but you can correlate deletion times from USN/MFT earlier in the timeline to guess what got targeted. Prefetch and process execution traces will show cipher.exe ran. ------------------------------------------------------------------------------------------------------------ Wrap-up — final thought Wipers try to erase content , but they often leave stories . The job of a forensic examiner is to read those stories in metadata, journals, and side-files. Look for patterns — rapid renames, heaps of USN events, leftover temp folders, strange timestamps, MFT reuse, and ADS — and you’ll often reconstruct what happened even when the file is gone. -------------------------------------Dean---------------------------------------------------------------
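    To make a couple of the hunting heuristics from this post easy to try, here's a minimal PowerShell sketch — run it against a mounted working copy of the evidence, never the original. It only covers two patterns (leftover Zone.Identifier alternate data streams and timestamps zeroed to the Windows epoch); the USN, $MFT, and $LogFile checks are better done with dedicated parsers such as MFTECmd. The path is a hypothetical example.

    # Mounted working copy of the evidence (hypothetical path)
    $root = 'E:\case01\mounted'

    # 1. Files that still carry a Zone.Identifier ADS — may reveal the original download URL
    Get-ChildItem -Path $root -Recurse -File -ErrorAction SilentlyContinue | ForEach-Object {
        if (Get-Item -LiteralPath $_.FullName -Stream Zone.Identifier -ErrorAction SilentlyContinue) {
            [pscustomobject]@{
                File = $_.FullName
                Zone = Get-Content -LiteralPath $_.FullName -Stream Zone.Identifier -Raw
            }
        }
    }

    # 2. Files with timestamps zeroed to the Windows epoch (Jan 1, 1601) — a known Eraser quirk
    $epoch = [datetime]'1601-01-02'
    Get-ChildItem -Path $root -Recurse -File -ErrorAction SilentlyContinue |
        Where-Object { $_.CreationTimeUtc -lt $epoch -or $_.LastWriteTimeUtc -lt $epoch } |
        Select-Object FullName, CreationTimeUtc, LastWriteTimeUtc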

  • Sublime Just Got Even Smarter: Automatic Calendar Event Deletion Is Here

    If you’ve been following me for a while, you already know how much I love Sublime . It’s one of those tools that just keeps getting better — feature after feature, update after update — all with one goal: making email security effortless . And today, they’ve released something that’s honestly a game changer . The Hidden Threat: Malicious Calendar Invites We’ve all seen those sketchy calendar invites — you know, the ones that magically appear on your calendar even though you never accepted them. Sometimes they’re phishing attempts. Sometimes they’re just spam. Either way, they’re becoming a growing problem across both Google Workspace  and Microsoft 365 . Attackers have learned that not every threat has to come through your inbox. Many organizations still allow automatic calendar event additions, and that’s where trouble starts . A single malicious invite can trick users into clicking phishing links or even expose them to malware. That’s why this new Sublime feature is such a big deal. ------------------------------------------------------------------------------------------------------------- What’s New: Automatic Calendar Event Deletion Sublime’s new Automatic Calendar Event Deletion  feature can now automatically delete malicious or unwanted calendar events  whenever you: Quarantine a message, Move it to spam, or Send it to the trash. In short, if a bad email comes in with a sneaky calendar invite attached — Sublime wipes that calendar entry for you. That’s brilliant. One less attack vector to worry about. ------------------------------------------------------------------------------------------------------------- Why It’s So Useful Think about this: You receive a spam or phishing email that includes a calendar invite. Even if you delete the email, that event might still sit  on your calendar, waiting for someone to click it later. Now, Sublime closes that gap completely. When you remove or quarantine the email, the related calendar event goes with it. This is exactly the kind of intelligent automation that saves time and  improves security posture at the same time. ------------------------------------------------------------------------------------------------------------- How to Join the Public Beta This feature is currently in public beta , and getting it enabled is pretty simple. You just need to update the Sublime app permissions to include Calendar access , then notify the Sublime team. (I’ll update this once it goes live — currently, it’s in beta mode.) 
Here’s a quick breakdown based on your setup: For Google Workspace (Cloud-Managed) Log in to your admin console: https://admin.google.com Go to Security → Access and Data Control → API Controls Scroll down to Domain-wide delegation  and click Manage domain-wide delegation Click Add new Enter the Client ID: 112905660299333414135 Add this scope: https://www.googleapis.com/auth/calendar.events Wait up to 24 hours for the changes to apply For Google Workspace (Self-Managed) Add the Calendar API scope to your existing domain-wide delegation client Enable the Google Calendar API in your project Wait up to 24 hours for propagation For Microsoft 365 (Cloud-Managed) Visit your organization-specific authorization link Approve the consent request that includes “ Read and write calendars ” You’ll be redirected back to Sublime’s homepage Wait up to 24 hours for it to take effect For Microsoft 365 (Self-Managed) Add the Calendars.ReadWrite permission to your Azure AD app Grant admin consent Allow up to 24 hours for changes Once you’re done, just let the Sublime team know so they can enable the feature in your environment. That’s it — no further configuration needed. (Note: Restoring deleted calendar events isn’t supported yet, but they’ve confirmed it’s coming soon. For now, users can re-RSVP to a restored message to re-add the event.) ------------------------------------------------------------------------------------------------------------- Bonus Tips: Strengthen Your Calendar Security While Sublime’s update adds a huge layer of protection, there are still a few best practices every organization should follow. For Google Workspace: Go to Apps → Google Workspace → Calendar → Advanced Settings Under “Add invitations to my calendar” , choose one of: “Invitations from known senders,” or “Invitations users have responded to via email.” This prevents random senders from silently placing meetings on your calendar. For Microsoft 365: Use PowerShell to stop your system from auto-accepting events. Run: Set-CalendarProcessing -Identity -AutomateProcessing None This disables the “Calendar Attendant” that automatically processes invites — giving you more control. ------------------------------------------------------------------------------------------------------------- My Take: Honestly, this update just shows how Sublime really listens to real-world threats . Attackers keep finding new ways to sneak in — even through calendars — and Sublime is always one step ahead. Automatic Calendar Event Deletion might sound like a small addition, but in practice, it can save analysts hours of investigation time and prevent users from walking right into phishing traps. I’ve said it before and I’ll say it again — this is why I love Sublime. If you’re managing email security, you should definitely check this feature out. ------------------------------------------------------------------------------------------------------------- Final Thought Cybersecurity isn’t just about blocking emails anymore. It’s about protecting every single touchpoint  — even your calendar. And with this new feature, Sublime just made that a whole lot easier. -------------------------------------------------Dean---------------------------------------------------
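    One note on the Microsoft 365 tip above: Set-CalendarProcessing needs a mailbox identity to run against. Here's a slightly fuller sketch using the Exchange Online PowerShell module — the mailbox address is just a hypothetical example, and it's worth checking the current value before changing it.

    # Requires the ExchangeOnlineManagement module
    Connect-ExchangeOnline

    # Check how the Calendar Attendant currently handles invites for one (example) mailbox
    Get-CalendarProcessing -Identity user@example.com | Select-Object Identity, AutomateProcessing

    # Stop automatic processing of incoming invites for that mailbox
    Set-CalendarProcessing -Identity user@example.com -AutomateProcessing None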

  • Tracking Lateral Movement: PowerShell Remoting, WMIC, Explicit Credentials, NTLM Relay Attacks, Credential Theft and Reuse (Event IDs)

    Welcome back, folks! If you’ve been following this series, I’ve already covered how attackers move laterally using things like named pipes, scheduled tasks, services, and registry modifications and more .Now it’s time to unpack some classic but still dangerous  remote execution tricks — and how to actually hunt them down using Windows logs. ------------------------------------------------------------------------------------------------------------- PowerShell Remoting & WMIC — Attackers’ Favorite “Admin Tools” Here’s the deal : not every network logon (Event ID 4624 , Type 3 ) means RDP or SMB. Sometimes, those logons come from administrative tools  being misused for remote execution — particularly PowerShell Remoting , WMIC , or WinRS . Attackers love these tools because: They’re already installed  on almost every Windows machine. They blend in perfectly with normal IT activity. And they use legitimate protocols  (WinRM or RPC), which defenders often ignore. WMIC (Windows Management Instrumentation Command-Line) The WMIC /node:  command lets you run commands remotely using RPC. When someone runs this: wmic /node:cyberengage.svr process call create "cmd.exe /c C:\Public\HackBloodHound.exe" Windows creates a WmiPrvSE.exe  process on cyberengage.svr to execute that command. Detection tip: If you see WmiPrvSE.exe  spawned unexpectedly — especially running a strange command or launching tools like PowerShell, cmd.exe, or unknown binaries — that’s a huge red flag. Log relationships to remember: Event ID 4624  (Type 3) → Remote network logon Parent process:  WmiPrvSE.exe Child process:  The command being executed (cmd.exe, powershell.exe, etc.) Use Sysmon Event ID 1 (Process Creation)  or 4688 (Security Log)  to tie it all together. PowerShell Remoting (WinRM) PowerShell remoting uses the WinRM  service to execute PowerShell commands on remote systems. When an attacker runs: Enter-PSSession -ComputerName cyberengage.svr -Credential Administrator or Invoke-Command -ComputerName cyberengage.svr -ScriptBlock { Start-Process C:\Public\HackBloodHound.exe } On the target endpoint , you’ll see WSMProvHost.exe  kick in. That’s the host process responsible for remote PowerShell sessions. Detection tip: In your EDR or Sysmon data, look for WSMProvHost.exe  as a parent of suspicious child processes (like cmd.exe, powershell.exe, rundll32.exe, etc.). Log indicators: Event ID 4624 , Logon Type 3 (network logon) Parent process:  WSMProvHost.exe Sysmon Event ID 1  → shows the actual command line executed remotely. WinRS (Windows Remote Shell) This one’s often overlooked but used heavily by adversaries. winrs.exe works over the same WinRM  protocol as PowerShell remoting — but it directly runs programs instead of PowerShell commands. Example: winrs -r:cyberengage.svr "C:\Public\HackBloodHound.exe" This will launch WinrsHost.exe  on cyberengage.svr, which spawns cmd.exe → executes the malicious payload. Detection stack: Event ID 4624 , Logon Type 3 Process chain:  svchost.exe → WinrsHost.exe → cmd.exe → HackBloodHound.exe Sysmon Event ID 1  to capture command-line parameters. 
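    Before moving on, here's a minimal PowerShell sketch of the parent-process hunting idea above. It assumes Sysmon is installed and writing to its default operational channel; it pulls recent process-creation events (Sysmon Event ID 1) and keeps those whose parent is WmiPrvSE.exe, WSMProvHost.exe, or WinrsHost.exe — tune the event count and filters for your environment.

    # Hunt for processes spawned by WMI / WinRM host processes (Sysmon Event ID 1)
    Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-Sysmon/Operational'; Id = 1 } -MaxEvents 5000 |
        Where-Object { $_.Message -match 'ParentImage:.*\\(WmiPrvSE|WSMProvHost|WinrsHost)\.exe' } |
        ForEach-Object {
            [pscustomobject]@{
                Time   = $_.TimeCreated
                Detail = ($_.Message -split "`r?`n" |
                          Where-Object { $_ -match '^(Image|CommandLine|ParentImage):' }) -join ' | '
            }
        } |
        Format-Table -AutoSize -Wrap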
-------------------------------------------------------------------------------------------------------------
Explicit Credentials — Watching Attackers Switch Keys
One of the most underrated event types: Event ID 4648 — "A logon was attempted using explicit credentials."
Here's what that means: Someone (or something) explicitly provided a username/password to run a command — instead of using cached credentials from their current logon session.
So when attackers use tools like:
runas /user:Administrator cmd.exe
psexec -u cyberengage.org\user -p Welcome123 \\cyberengage.svr cmd.exe
or use Cobalt Strike modules that specify credentials — you'll get a 4648 event.
What makes this event gold for defenders is that it's logged on the source system — the machine the attacker is coming from, not just the one they're moving to. That means you can finally track the attack chain backwards — see where lateral movement originated.

How to Investigate 4648s
- 4624 Logon Type 9 = Successful logon with explicit credentials
- 4648 = "Tool used explicit credentials" (even if it's the same user account)
If you see a 4648 → look at:
- The "Target Server" field — if it shows localhost, it's inbound; if it shows another host, it's outbound.
- The username and process that initiated it.
- The timestamp — match it against process creation or PowerShell logs.
Pro tip: Filter out the noise (computer accounts, M365 services, etc.) — what remains is almost always either:
- Admins doing maintenance, or
- Attackers moving laterally
-------------------------------------------------------------------------------------------------------------
NTLM Relay Attacks — Spotting the Subtle Network Trick
Now, for the fun part. NTLM relay attacks don't "crack" passwords — they just reuse authentication requests to trick another system into accepting them.
So what happens in the logs? Event ID 4624 on the target server will show:
- Workstation Name: the client whose authentication was relayed
- Source Network Address: the IP of the host that actually made the connection (the relay), not that client
That mismatch is your giveaway. This "split identity" is a strong sign of NTLM relay in action.
To confirm:
- Correlate the IPs — are the workstation name and source IP inconsistent?
- If DHCP is used, grab DHCP lease logs to confirm which IP belongs to which device.
- NTLM relay attacks often accompany SMB traffic anomalies (e.g., access to ADMIN$ or IPC$ shares).
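Pulling 4648s off a suspected source host is simple enough to script. Below is a minimal triage sketch, assuming you can read the Security log on that host; the three-day window and the computer-account filter are placeholders to adjust:

# Minimal 4648 triage sketch (run on the suspected source system)
Get-WinEvent -FilterHashtable @{
    LogName   = 'Security'
    Id        = 4648
    StartTime = (Get-Date).AddDays(-3)
} | ForEach-Object {
    $x = [xml]$_.ToXml()
    $d = @{}
    $x.Event.EventData.Data | ForEach-Object { $d[$_.Name] = $_.'#text' }
    [pscustomobject]@{
        Time         = $_.TimeCreated
        RunningAs    = $d['SubjectUserName']
        CredsUsed    = $d['TargetUserName']
        TargetServer = $d['TargetServerName']
        Process      = $d['ProcessName']
    }
} | Where-Object { $_.CredsUsed -notmatch '\$$' }   # drop computer accounts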
-------------------------------------------------------------------------------------------------------------
Combined Recap:
Technique | Key Parent Process | Log/Event IDs | Detection Clue
WMIC | WmiPrvSE.exe | 4624 (Type 3), Sysmon 1 | Suspicious child processes
PowerShell Remoting | WSMProvHost.exe | 4624, Sysmon 1 | PowerShell remote commands
WinRS | WinrsHost.exe | 4624 | Command execution via WinRM
Explicit Credentials | varies | 4648, 4624 Type 9 | Source-based credential use
NTLM Relay | N/A | 4624 | Workstation name ≠ IP address
-------------------------------------------------------------------------------------------------------------
Credential Theft and Reuse
Credential theft and reuse attacks often exploit weaker encryption types and legacy authentication protocols (NTLM) to move laterally through a Windows domain. Key detection points lie in Kerberos event IDs (4768, 4769) and NTLM authentication logs (4624, 4776).

1. Abuse of Weak Kerberos Encryption (RC4-HMAC-MD5)
Attackers often force the use of weaker encryption types to speed up offline password cracking or perform "Overpass-the-Hash" (pass-the-key) attacks.
Common Scenarios:
Attack Type | Description | Key Event ID(s) | Detection Indicator
Kerberoasting | Attackers request service tickets encrypted with weak RC4-HMAC-MD5 to brute-force service account passwords offline. | 4769 | Encryption Type: 0x17 or 0x18
Overpass-the-Hash | Attackers use a stolen NT hash to request a TGT using RC4 encryption. | 4768 | Encryption Type or (post-Jan 2025) Session Type: 0x17 / 0x18

Key Log Artifacts:
- Event ID 4769 – Service Ticket Request → Ticket Encryption Type: 0x17 (RC4-HMAC-MD5)
- Event ID 4768 – TGT Request → Ticket Encryption Type: 0x17, Session Encryption Type: 0x17
(The post-Jan 2025 patch introduces more fields for encryption visibility.)

Why RC4 Matters
- RC4-HMAC-MD5 (0x17) is a legacy encryption type.
- AES128 (0x11) or AES256 (0x12) are the defaults for modern environments.
- Seeing frequent 0x17 or 0x18 tickets → highly suspicious, unless legacy systems exist.
Defender Tip: Hunt for Event IDs 4768/4769 where Encryption Type = 0x17 or 0x18. Filter out legacy systems, then review recent TGS/TGT requests by privileged or service accounts.

2. NTLM and Pass-the-Hash Detection
Even with Kerberos as the default protocol, NTLMv2 authentication still appears — especially in legacy or IP-based connections. Attackers exploit NTLM through pass-the-hash, relay, or forced authentication attacks.
Detection via Logs:
Log Type | Event ID | Description
Account Logon | 4776 | NTLMv2 authentication attempt
Logon Success (Network) | 4624 | Check for Authentication Package: NTLM and Package Name (NTLM only): NTLM V2

Normal vs Suspicious:
- Normal: Kerberos authentication seen (Package Name: -)
- Suspicious: NTLMv2 used on systems that normally use Kerberos (e.g., sudden NTLMv2 activity on domain controllers or file servers)

Sample 4624 Log (NTLMv2)
Detailed Authentication Information:
Logon Process: NtLmSsp
Authentication Package: NTLM
Package Name (NTLM only): NTLM V2
Key Length: 128

Hunt Strategy:
- Look for unusual NTLMv2 authentications in Event IDs 4624 / 4776.
- Correlate with 4648 (Explicit Credentials) or 4624 Logon Type 9 to trace the origin.
- Watch for sudden NTLMv2 spikes or logons to unfamiliar hosts.

3. Post-Jan 2025 Microsoft Patch — What Changed?
Microsoft's January 2025 update enhanced Event ID 4768 and 4769 logs:
- Added new fields: Session Encryption Type, Pre-Authentication Encryption Type, Long-Term Key Type visibility
- Enables defenders to differentiate client-supported encryption vs DC-issued encryption.
- Greatly improves detection of RC4 downgrade or forced-weak encryption scenarios.
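Before the SIEM queries below, here is a host-level version of the same hunt that you can run directly on a domain controller (or against forwarded DC logs) with nothing but Get-WinEvent. A minimal sketch, assuming Kerberos ticket auditing is enabled and you have rights to the Security log:

# Minimal sketch: weak-encryption (RC4) Kerberos ticket requests on a DC
Get-WinEvent -FilterHashtable @{
    LogName   = 'Security'
    Id        = 4768, 4769
    StartTime = (Get-Date).AddDays(-1)
} | ForEach-Object {
    $x = [xml]$_.ToXml()
    $d = @{}
    $x.Event.EventData.Data | ForEach-Object { $d[$_.Name] = $_.'#text' }
    [pscustomobject]@{
        Time       = $_.TimeCreated
        EventId    = $_.Id
        Account    = $d['TargetUserName']
        Service    = $d['ServiceName']
        Encryption = $d['TicketEncryptionType']
        ClientIP   = $d['IpAddress']
    }
} | Where-Object { $_.Encryption -in '0x17', '0x18' }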
4. Other Lateral Movement Indicators
Technique | Event ID | What to Look For
Credential switching (RunAs) | 4648 | Logs explicit use of credentials; indicates lateral move origin
Logon Type 9 (NewCredentials) | 4624 | Indicates session initiated with explicit credentials
Delegation abuse | 4624 + abnormal access patterns | Delegated service accounts connecting to new/unexpected systems
Coercion/NTLM Relay | 4624 | Mismatch between Workstation Name and Source Network Address

Quick Hunt Queries (SIEM Examples)
Kerberoasting
SecurityEvent
| where EventID == 4769
| where TicketEncryptionType in ("0x17", "0x18")
| project TimeGenerated, TargetUserName, ServiceName, TicketEncryptionType
Overpass-the-Hash
SecurityEvent
| where EventID == 4768
| where SessionEncryptionType in ("0x17", "0x18") or TicketEncryptionType in ("0x17", "0x18")
| project TimeGenerated, TargetUserName, IpAddress, Computer
NTLMv2 Usage
SecurityEvent
| where EventID == 4624 and AuthenticationPackageName == "NTLM"
| project TimeGenerated, TargetUserName, IpAddress, WorkstationName
-------------------------------------------------------------------------------------------------------------
Summary
Attack Type | Key Event IDs | Indicator | What It Means
Kerberoasting | 4769 | RC4-HMAC-MD5 (0x17) | Weak encryption used for service tickets
Overpass-the-Hash | 4768 | RC4-HMAC-MD5 (0x17) | TGT requested using NT hash
Pass-the-Hash | 4624, 4776 | NTLMv2 logons | Reuse of stolen NTLM hash
Credential Switching | 4648, 4624 (Type 9) | Explicit credentials | Lateral movement initiation
NTLM Relay | 4624 | Hostname-IP mismatch | Relayed authentication
-------------------------------------------------------------------------------------------------------------
Bonus: Abuse of Administrative Credentials & Tools
Once attackers compromise high-privileged accounts like Domain Admins or service accounts, they effectively inherit legitimate administrative rights — becoming "unpaid administrators." They can now:
- Control much of the environment (domain, servers, endpoints).
- Use legitimate tools for remote management and execution such as RDP, VNC, PowerShell, PsExec, WMIC, and Group Policy.
- Use patch management and software deployment tools to push malicious payloads.
Detection & Defense
- Restrict and monitor accounts used for deployment.
- Use unique accounts (not Domain Admins) for patching.
- Limit deployment windows (detect off-hour use).
- Maintain decoy/test systems to log and analyze deployment activities.
- Watch for unexpected GPO changes or new deployment tasks.

Lateral Movement via Vulnerability Exploitation
When credentials aren't available or remote access is blocked, attackers turn to exploiting vulnerabilities to move laterally.
Trends
- Vulnerability exploitation is on the rise — both for initial access and lateral movement.
- Zero-days are increasingly used by state-sponsored actors.
Detection Methods
- Crash / Exploit Detection: Event logs showing crashes or memory corruption; Microsoft Exploit Guard / antivirus telemetry.
- Process Creation Monitoring (Event ID 4688): Detect abnormal parent-child process chains (e.g., IIS worker spawning cmd.exe). Watch for code injection, new handles, and unusual command shells.
- Application control / EDR logs.
- Threat intelligence to track newly exploited vulnerabilities.
- Memory forensics for hidden or injected processes.
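The parent-child check mentioned above is easy to prototype against the Security log. A minimal sketch, assuming process creation auditing is enabled (ParentProcessName only appears on Windows 10 / Server 2016 and later, and the parent/child lists here are illustrative, not exhaustive):

# Minimal sketch: web/Office parents spawning shells via 4688
$suspectParents  = 'w3wp.exe', 'httpd.exe', 'winword.exe', 'excel.exe'
$suspectChildren = 'cmd.exe', 'powershell.exe', 'pwsh.exe', 'rundll32.exe', 'certutil.exe'

Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4688; StartTime = (Get-Date).AddDays(-1) } |
ForEach-Object {
    $x = [xml]$_.ToXml()
    $d = @{}
    $x.Event.EventData.Data | ForEach-Object { $d[$_.Name] = $_.'#text' }
    $parent = if ($d['ParentProcessName']) { Split-Path $d['ParentProcessName'] -Leaf } else { '' }
    [pscustomobject]@{
        Time   = $_.TimeCreated
        Parent = $parent
        Child  = Split-Path $d['NewProcessName'] -Leaf
        User   = $d['SubjectUserName']
    }
} | Where-Object { $suspectParents -contains $_.Parent -and $suspectChildren -contains $_.Child }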
-------------------------------------------------------------------------------------------------------------
Wrapping up:
Effective lateral movement detection hinges on visibility, context, and restraint of privilege. Attackers exploit legitimate pathways; defenders must therefore combine behavioral monitoring, account segregation, and timely patching to break the chain before impact.
-------------------------------------------Dean----------------------------------------------------------

  • Tracking Lateral Movement — Named Pipes, Scheduler, Services, Registry, and DCOM (Event IDs)

    Hey — today we're unpacking lateral movement. Think of it like this: an attacker already got a foothold in your network and now wants to move sideways to more valuable systems. In this article I'll try to show you the common ways they do that, what Windows logs to watch for, and practical detective steps you can take right now.
-------------------------------------------------------------------------------------------------------------
Why this matters
Once an attacker can move laterally, they can reach domain controllers, file servers, backup systems, or any asset you care about. Detecting lateral movement early can stop a breach from becoming a full-blown incident.

Start point: the logon event that often tells the story — Event ID 4624 (Logon Type 3)
When someone authenticates remotely to a Windows host (SMB, named pipes, remote service calls, PsExec, scheduled tasks, etc.), Windows commonly records Event ID 4624 with Logon Type = 3 (Network). That single event is often the first hint something happened on the target system.
What to watch for:
- A strange source computer or IP doing a network logon to a host that normally doesn't see it.
- An account authenticating from an unexpected system (service accounts from a workstation, users from servers).
- Unusual time-of-day for an account or a burst of network logons from the same source.
Quick detective mindset:
- Find 4624 / LogonType 3 on the target machine.
- Note the Account Name, Logon ID and Caller Computer / Source IP.
- Correlate with surrounding events: process creation, PowerShell logs, Sysmon network events, scheduled task events.
- Don't assume a 4624 = malicious. Many normal operations use this type of logon. Context is everything.
-------------------------------------------------------------------------------------------------------------
Common techniques that produce 4624 Type 3 (and what extra artifacts they leave)
Here are typical ways attackers use network logons and what you can look for around them.
- Network share access (SMB, port 445): Look for Event IDs related to file share access (if auditing is enabled). Attackers often mount shares to copy tools or exfiltrate files.
- Named pipes / RPC (port 135, 445): Correlate network logs showing RPC or SMB with suspicious services/processes.
- Remote scheduled tasks / Task Scheduler: Scheduled task creation events, task run events, or suspicious schtasks command lines in process creation logs.
- Remote service execution (PsExec, sc.exe): Process creation for psexec.exe, sc.exe remote service installs, or any service creation events. Check the Service Control Manager logs.
- PowerShell remoting, WinRM (port 5985/5986): PowerShell logs, WinRM session events, or Event ID 4648 (explicit credentials) near a 4624.
- WMI remote execution (wmic /node): WMI operation events, suspicious wmic command lines in process creation logs.

Analyze network connections and system activity — where to get signal
Successful TCP/UDP connections by themselves are usually not logged — they're too noisy — but these sources can give you the visibility you need:
- Sysmon (Event ID 3) — if you run Sysmon and enable network connection logging, you get process-to-remote-host mapping (gold).
- Host-based firewall logs — Windows Filtering Platform and firewall logs can show successful connections (if enabled).
- Security log: Event ID 5156 — Windows Filtering Platform allowed a connection.
- EDR/XDR — many EDRs provide both process telemetry and network context; use process launch + network data together.
How to use them together:
- Start from the 4624 Type 3 event.
- Look at Sysmon or firewall logs to see which remote port and process were involved.
- Check process creation logs, PowerShell logs, and the Security log around the same timestamp for suspicious activity (scripts, encoded commands, task creation).
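To make that first pivot concrete, here is a minimal sketch that lists recent network logons on a host and flags sources you haven't baselined; the baseline hostnames are placeholders, so swap in the systems you actually expect to see:

# Minimal sketch: recent 4624 Logon Type 3 events, flagged against a baseline
$baseline = 'ADMIN-JUMP01', 'BACKUP-SRV01'    # hypothetical expected sources

Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4624; StartTime = (Get-Date).AddDays(-1) } |
ForEach-Object {
    $x = [xml]$_.ToXml()
    $d = @{}
    $x.Event.EventData.Data | ForEach-Object { $d[$_.Name] = $_.'#text' }
    if ($d['LogonType'] -eq '3') {
        [pscustomobject]@{
            Time        = $_.TimeCreated
            Account     = $d['TargetUserName']
            Workstation = $d['WorkstationName']
            SourceIP    = $d['IpAddress']
            AuthPackage = $d['AuthenticationPackageName']
            LogonId     = $d['TargetLogonId']   # pivot value for later correlation
        }
    }
} | Where-Object { $baseline -notcontains $_.Workstation }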
-------------------------------------------------------------------------------------------------------------
File shares — a favorite lateral movement highway
Mounting and using network shares is one of the easiest ways to move laterally or stage data.
Windows events for share auditing:
- 5140 — network share was accessed (gives share name, server path, source IP, account). This event is created when the session is established — not for every file access.
- 5145 — detailed file share auditing (records access to specific files/folders) — very informative but noisy. Use this only for very sensitive shares or short bursts of investigation.
- 5142–5144 — share created/modified/deleted.
Important note about 5140: The Accesses field will often say ReadData (or ListDirectory) even if later the user wrote or deleted files. 5140 records the initial access granted when the share session started.
Practical steps:
- If you can, enable strategic share auditing on sensitive file servers (5140 + selective 5145).
- When you see 5140 for a suspicious source, follow the account's Logon ID across the security log to find what else it touched.
- Look for sudden creation of new shares (5142) or the use of admin shares (like C$) from non-admin hosts.
-------------------------------------------------------------------------------------------------------------
Detection tips you can use today
- Baseline normal: map which systems normally connect to your servers and from what accounts. Anything outside that baseline is suspicious.
- Flag 4624 Type 3 where the Caller Computer is not in your baseline or the account is unexpected for that host.
- Look for 4624 Type 3 followed quickly by process creation events launching admin tools (PsExec, wmiprvse making child processes, schtasks, sc.exe, net.exe).
- Monitor for service creation, scheduled task creation, or new remote services started from non-admin systems.
- If you have Sysmon: monitor which Image opens network connections (Event 3) — e.g., cmd.exe, powershell.exe, wmic.exe, psexec.exe.
- On file servers: enable Object Access > Audit File Share (ID 5140) for strategic monitoring; enable detailed file share auditing (ID 5145) only for critical folders.
Common pitfalls — what to avoid when investigating
- Assuming every 4624 Type 3 is bad — many business processes use network logons. Use context (account role, time, source host).
- Relying on a single log source — attackers leave breadcrumbs in many places; correlate across logs.
- Enabling noisy auditing wide-open — 5145 and verbose Sysmon network logging can generate mountains of data. Apply selectively or with filters.
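If you have enabled share auditing on a file server, a quick way to spot who is touching which shares is to stack 5140 events by account, source, and share. A minimal sketch, assuming "Audit File Share" is enabled and you can read the Security log:

# Minimal sketch: summarize share access (5140) by account, source IP, and share
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 5140; StartTime = (Get-Date).AddDays(-1) } |
ForEach-Object {
    $x = [xml]$_.ToXml()
    $d = @{}
    $x.Event.EventData.Data | ForEach-Object { $d[$_.Name] = $_.'#text' }
    [pscustomobject]@{
        Time     = $_.TimeCreated
        Account  = $d['SubjectUserName']
        SourceIP = $d['IpAddress']
        Share    = $d['ShareName']
        LogonId  = $d['SubjectLogonId']   # follow this ID across the Security log
    }
} | Group-Object Account, SourceIP, Share |
    Sort-Object Count -Descending |
    Select-Object Count, Name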
-------------------------------------------------------------------------------------------------------------
Named pipes — what they are and why attackers love them
Think of a named pipe like a little memory-backed mailbox processes use to chat with each other. Pipes can be local (two processes on the same machine) or remote (a process on another machine reads/writes the pipe over SMB). Windows exposes remote pipes through the special IPC$ share — so when you see IPC$ traffic, named pipes might be involved.
Why this matters to defenders: Attackers use named pipes to hide communications inside normal SMB traffic (TCP 445). Instead of opening a weird port, they piggyback on SMB — which often looks routine in network telemetry.
Windows telemetry you can use
Named pipes are noisy in normal operation, so context is key. Useful events and sources:
- Sysmon Event ID 17 — a named pipe was created by a process (gives you the process that created it).
- Sysmon Event ID 18 — a named pipe connection occurred (shows which pipe was accessed and when).
- Security / System logs — IPC$ access will show up as a network share access (similar to file share events).
How to use them together:
- Start with suspicious IPC$/SMB activity (or a 4624 network logon tied to a workstation).
- Look at Sysmon Event 18 around the same time to get the pipe name.
- Use Sysmon Event 17 to find which process created that pipe on the target host.
- Correlate with process creation, command lines, or network activity to decide whether it's malicious.
-------------------------------------------------------------------------------------------------------------
Scheduled tasks — silent persistence and remote execution
Scheduled tasks are a favorite for attackers because they can be created remotely — which often generates a network logon on the target (so it still shows up as that 4624 Type 3 behavior we talked about). But Windows gives you great signals about tasks — if you enable the logging.
Key logs to turn on:
- Task Scheduler / Operational (Microsoft-Windows-TaskScheduler/Operational) — excellent for creation and execution entries. Events here persist longer and are easier to hunt through than Security log entries.
- Security log scheduled task events (when object auditing is enabled) — these include:
  4698 — scheduled task created
  4699 — scheduled task deleted
  4700 / 4701 — task enabled/disabled
  4702 — task updated
Task Scheduler operational events you'll see:
- 106 — task created (shows registering user and task name)
- 200 / 201 — task executed / completed (these often contain the actual command path that ran)
Why remote task creation is important: Tasks created remotely will usually be accompanied by a 4624 Type 3 logon on the host around the same timestamp. That pairing is a very useful signal to automate hunting on.
-------------------------------------------------------------------------------------------------------------
When analysts think about lateral movement, scheduled tasks and malicious services often go hand-in-hand. Attackers use them to execute commands remotely, maintain persistence, and bypass login-based detection. Luckily, Windows leaves behind rich forensic artifacts — if you know where to look.
Task Scheduler v1.2 — Modern Task Artifacts (Vista and Later)
Starting with Windows Vista and Server 2008, Microsoft introduced a new scheduled task format (v1.2). The new tasks are XML-based, human-readable, and stored without any file extension.
Where to Find Them
Folder | Description
C:\Windows\System32\Tasks | Standard location for 64-bit task files
C:\Windows\SysWOW64\Tasks | Rare — tasks created by 32-bit code (worth checking for anomalies)
Each file's name matches the task name, and its contents describe who created it, what it runs, and under which account.
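A minimal sketch for sweeping that task store on a live host and pulling the registration details out of each XML file; run it elevated, and note the specific tags are broken down in the next section:

# Minimal sketch: extract registration details from Task Scheduler v1.2 XML files
Get-ChildItem 'C:\Windows\System32\Tasks' -Recurse -File |
ForEach-Object {
    try {
        [xml]$task = Get-Content $_.FullName -Raw
        [pscustomobject]@{
            TaskPath = $_.FullName
            Author   = $task.Task.RegistrationInfo.Author
            Date     = $task.Task.RegistrationInfo.Date
            Command  = $task.Task.Actions.Exec.Command
            RunAs    = $task.Task.Principals.Principal.UserId
        }
    } catch { }   # skip files that aren't valid XML
} | Where-Object { $_.Command } | Sort-Object Date -Descending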
What Information You Get (from the XML)
Inside each XML file, key elements reveal attacker actions and context:
Tag | Description
<RegistrationInfo> / <Date> | Shows the date/time and account that registered the task
<Author> | Includes the hostname and username that created the task — crucial for spotting remote scheduling
<Triggers> | Defines when and how often the task runs (e.g., once, hourly, at logon)
<Command> (under <Actions>/<Exec>) | Contains the command path or script executed
<Principal> / <UserId> | Identifies the user account used to run the command

Why This Matters in Lateral Movement
Remote scheduled tasks are a common tactic. While event logs (like 4698 or 106) don't clearly state whether a task was scheduled remotely, the Author tag in the XML file does. If you see a hostname or domain account in <Author>, it's almost certainly remote.
Even if the attacker deletes the task, the XML file may remain or be recovered forensically from disk or Volume Shadow Copies. Better yet — if the same malicious task name appears across several systems, it can map attacker propagation across your network.

Task Scheduler v1.0 — Legacy Artifacts (XP / 2003)
Older systems use .job files stored under C:\Windows\Tasks. These binary-format jobs contain:
- Registration date/time
- User account
- Command path
- Execution timestamp
They're created by at.exe and schtasks.exe on XP/2003 systems. Even on newer OS versions, .job files may appear for backward compatibility, giving you a second artifact to pivot from if attackers forget to delete both versions.
-------------------------------------------------------------------------------------------------------------
Windows Services — Another Common Lateral Movement Vector
Just like scheduled tasks, services are used for persistence and remote execution — often seen when attackers deploy PsExec, SCShell, or custom service installers.
Key System Log Event IDs (Service Control Manager)
Event ID | Description | Why it matters
7034 | Service crashed unexpectedly | May reveal instability caused by injected malware
7035 | Service sent a Start/Stop control | Traces the start/stop command
7036 | Service started or stopped | Confirms the actual operation
7040 | Service start type changed | Detects persistence via Boot/Auto-start configuration
7045 | New service installed (Windows 2008 R2+) | Excellent signal for new service-based malware
EID 7045 is particularly powerful — each new service installation generates one. Even transient services (like PsExec) produce 7045 entries, making them easy to track across hosts.
Security Log — Event ID 4697
If "Audit Security System Extension" is enabled, Event ID 4697 will appear in the Security log for new service installations. While it may list SYSTEM as the account, it provides the start type and correlates nicely with 7045 events.
Pro tip: Use both 4697 (Security) and 7045 (System) to get the who + when + what of any new service creation.
-------------------------------------------------------------------------------------------------------------
Abusing Windows Services
Services in Windows are like small background workers — they run quietly without user interaction, handling updates, drivers, or system tasks. But attackers love them because services can start code with high privileges and even survive reboots. So when you see a "new service installed" event on a host you didn't expect, your alarm bells should go off.
What to Hunt For
The two golden event IDs for service creation are:
- EID 4697 (Security log) — "A service was installed in the system."
- EID 7045 (System log) — "A service was installed."
Before Windows 10, these were clean, high-signal events.
If one popped up, something new was created — and you'd investigate. But then Microsoft introduced Per-User Services in Windows 10. These are like lightweight user-specific instances that start when a user logs in — and unfortunately, they flood your logs with hundreds of "new" service events.
So now your logs might look like:
OneSyncSvc_52a78dec
WpnUserService_4g4y
BluetoothUserService_0a3c
Looks legit, right? That's the problem. Attackers can easily hide behind that chaos. For example, naming their malicious service something like OneSyncSvc_52a78dec and blending right in.
Smarter Filtering
Don't just filter out every service with an underscore — that's dangerous. Instead:
- Filter by ServiceFileName, not by service name.
- Create an ignore list for known legitimate binaries, like: C:\Windows\System32\svchost.exe -k ASUSSystemAnalysis
- Focus on EID 7045 (System log) instead of 4697 — it's less noisy and usually doesn't log those per-user services.
Bottom line: If you see an unexpected service installed and the binary path points to Temp, AppData, or a random directory — that's your sign.
-------------------------------------------------------------------------------------------------------------
Remote Registry — The Sneaky Lateral Move
Here's something you'll see often in real intrusions: attackers using the reg command to make changes on another system's registry. Yup, the same reg add command you use locally can modify a remote machine if they have credentials and the Remote Registry service running.
🔍 What You'll See in Logs
When this command runs, several artifacts light up:
Event Log | Event ID | Description
Security | 4624 (Logon Type 3) | Network logon from the attacker system
Security | 5140 | IPC$ access (named pipe communication)
System | 7036 | Remote Registry service start/stop
If you're auditing file shares, 5140 is pure gold — it confirms the named pipe connection (like \PIPE\winreg). If not, 4624 + 7036 can still tell the story.
Registry Timestamp
The modified registry key also updates its LastWrite timestamp.
Example workflow:
- Spot suspicious remote logons (4624 Type 3).
- Check the timeframe in Registry Explorer.
- Sort by Last Write Time to see what changed.
- If a new value appears under Run with a weird executable path — that's your persistence clue.
Analyst Tip
Attackers often choose "subtle misspellings" for filenames. So when you see something like C:\Windows\System32\svchos1.exe, ask yourself — since when did Windows start naming files like that?
-------------------------------------------------------------------------------------------------------------
DCOM Abuse — Old Tech, New Tricks
This is one of those "been around forever but still dangerous" technologies. DCOM lets one system create or control a code component on another system over the network. Attackers use this for lateral movement — without dropping any new binaries.
What to Look For
When a COM object is instantiated remotely:
- The DcomLaunch service spins up the associated process (e.g., mmc.exe).
- That process becomes a child of the svchost.exe process that hosts the DcomLaunch service.
So if you're threat hunting:
- Look for processes like mmc.exe, excel.exe, or outlook.exe being launched by svchost.exe -k DcomLaunch.
- Check the logs for EID 4624 (Type 3) — remote network logons.
- Correlate with EID 4672 — "Special privileges assigned" (this usually means admin rights).
- Then review process creation logs (Sysmon Event 1 or Security EID 4688) for binaries spawned right after.
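That parent-child relationship is easy to check with Sysmon data. A minimal sketch, assuming Sysmon process-creation logging is enabled; legitimate DCOM activity will show up too, so treat hits as leads rather than verdicts:

# Minimal sketch: processes spawned by the DcomLaunch service host
Get-WinEvent -FilterHashtable @{
    LogName   = 'Microsoft-Windows-Sysmon/Operational'
    Id        = 1
    StartTime = (Get-Date).AddDays(-7)
} | ForEach-Object {
    $x = [xml]$_.ToXml()
    $d = @{}
    $x.Event.EventData.Data | ForEach-Object { $d[$_.Name] = $_.'#text' }
    if ($d['ParentCommandLine'] -match 'svchost\.exe.*-k\s+DcomLaunch') {
        [pscustomobject]@{
            Time        = $_.TimeCreated
            Image       = $d['Image']
            CommandLine = $d['CommandLine']
            User        = $d['User']
        }
    }
}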
Common Noise (and How to Filter It)
Two DCOM-related system events are worth noting:
- EID 10036 — DCOM error (can reveal attacker testing or failure).
- EID 10016 — frequent in normal operations; only useful if you filter by user, SID, or time.
Pro tip: most malware devs test DCOM methods by trial and error. Those errors generate 10036s — if you see a cluster of them before a suspicious logon, you've probably caught them mid-experiment.
-------------------------------------------------------------------------------------------------------------
🧭 Quick Recap — Detection Summary
Technique | Key Events | Hunt Focus
Service Abuse | 4697, 7045 | Unexpected service install, odd binary path
Remote Registry | 4624, 5140, 7036 | Remote logon + Run key changes
DCOM Abuse | 4624, 4672, 4688, 10036 | DcomLaunch svchost spawning child processes, privileged remote logon
-------------------------------------------------------------------------------------------------------------
Why This Matters
These techniques aren't theoretical — they're used daily by ransomware operators, red teams, and even internal IT tools. But the difference between legit and malicious is all about context. If you tune your detections around these three — services, registry, and DCOM — you'll catch the kind of lateral movement that slips past surface-level monitoring.
----------------------------------------------------Dean------------------------------------------------
We will continue this in the next article.

  • PowerShell Logging: Making the Invisible Visible

    If you've worked in cybersecurity for a while, you know one truth: PowerShell is both a friend and a foe. Administrators love it because it makes automation simple. Attackers love it because it makes exploitation simple. From credential theft to data exfiltration, lateral movement, and even memory-only malware — PowerShell can do it all. So, the real question is not whether PowerShell is being used, but how and by whom. That's where PowerShell logging comes into play — your digital microscope into what's really happening behind that blue window.
----------------------------------------------------------------------------------------------------------
The Problem — Power Without Visibility
Older versions of PowerShell were like stealth bombers — powerful and almost invisible. Before version 5, investigators had very little to work with; attackers could run entire malicious frameworks without leaving much of a trace. Then came PowerShell v5, and finally, we got some serious logging features. Now we can see what's being executed, which modules are being loaded, and even the contents of the script itself.
But here's the catch: most of this logging is not enabled by default, and attackers know it. That's why it's your job to understand it, enable it, and use it wisely.
The Key Event IDs You Must Know
Think of these as your "eyes and ears" inside PowerShell:
Event ID | Log Type | What It Shows | Why It Matters
4103 | Module Logging | Captures module and pipeline output | Great for spotting command sequences and variables
4104 | Script Block Logging | Captures the full script content | Lets you see exactly what was executed (with deobfuscation!)
400 & 800 | Legacy PowerShell Logs | Engine/session start (400) and pipeline execution details (800) | Still useful for older systems or downgrade attacks
WinRM/Operational | Remoting Logs | Tracks inbound/outbound PS remoting | Crucial for identifying remote PowerShell abuse
Logs to remember:
- Microsoft-Windows-PowerShell/Operational.evtx → for PowerShell v5
- Microsoft-Windows-PowerShellCore/Operational.evtx → for PowerShell Core (v6, v7)
----------------------------------------------------------------------------------------------------------
The Power of Script Block Logging (EID 4104)
Event ID 4104 is where the magic happens. It records the entire script block that was executed — whether typed in manually, run from a file, or even built dynamically in memory.
What's even better? Windows automatically flags potentially malicious script blocks as "Warning" events under EID 4104 — even if script block logging isn't fully enabled. That's right — Microsoft built in a safety net. If a suspicious command like Invoke-Expression, DownloadString, or FromBase64String runs, it gets logged automatically. So, even in unprepared environments, you might still get lucky.
----------------------------------------------------------------------------------------------------------
Why Attackers Still Get Away — The Downgrade Trick
Attackers know PowerShell v5+ logs everything, so they often downgrade their sessions to PowerShell v2 — which has no useful logging at all.
You can spot this behavior in the legacy logs:
- Look in Windows PowerShell.evtx for Event ID 400 with EngineVersion=2.0 or HostVersion=2.0
- Also, watch for command-line indicators like: powershell -Version 2
Best defense? Disable or uninstall PowerShell v2 entirely. It's outdated, insecure, and only helps attackers stay invisible.
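Here is a minimal sketch of that downgrade hunt against the classic "Windows PowerShell" log: it simply looks for Event ID 400 entries that report a version 2 engine or host.

# Minimal sketch: spot PowerShell downgrade attempts (EID 400, engine/host v2)
Get-WinEvent -FilterHashtable @{ LogName = 'Windows PowerShell'; Id = 400 } |
Where-Object { $_.Message -match 'EngineVersion=2\.' -or $_.Message -match 'HostVersion=2\.' } |
Select-Object TimeCreated, Id,
    @{ n = 'Detail'; e = { ($_.Message -split "`n" |
        Where-Object { $_ -match 'EngineVersion|HostApplication' }) -join '; ' } }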
----------------------------------------------------------------------------------------------------------
Decoding the Attacker's Syntax
Attackers rarely use PowerShell the normal way. They love stealthy parameters that hide windows, bypass policies, and run scripts directly from memory.
Common flags and their purpose:
Parameter | What It Does | Why It's Dangerous
-WindowStyle hidden | Hides the PowerShell window | Runs silently in the background
-NoProfile | Skips user profile scripts | Avoids detection via profile hooks
-NonInteractive | Disables prompts | Prevents blocking
-ExecutionPolicy Bypass | Disables script execution restrictions | Runs unsigned or malicious scripts
-EncodedCommand | Runs a Base64-encoded script | Obfuscates the real command
Invoke-Expression (IEX) | Executes arbitrary code | Common in download cradles
----------------------------------------------------------------------------------------------------------
Quick Wins for Analysts
When triaging PowerShell activity, start with these "low-hanging fruit" checks:
- Search for suspicious keywords: download, Invoke-Expression, FromBase64String, WebClient, rundll32, Start-BitsTransfer, Invoke-WmiMethod
- Filter by Event ID: 4103 → see what modules and variables were used; 4104 → see the full script, including deobfuscated code
- Look for "Warning" in 4104 — these are Microsoft's auto-flagged malicious patterns.
- Check for Remoting (WinRM): Event ID 6 — Destination host, IP, and logged-on user; Event ID 91 — Session creation; Event ID 168 — Authenticating user
Remember — even legitimate scripts can look suspicious. Don't just hunt for what was executed — understand why it was executed.
----------------------------------------------------------------------------------------------------------
PowerShell Core (v6 & v7) — The Next Chapter
PowerShell Core is cross-platform now (Windows, Linux, macOS) — built on .NET Core. It doesn't replace PowerShell v5; it coexists with it. That means:
- Separate installation
- Separate Group Policies
- Separate logs
If both versions exist on a system, you must enable logging on both. Attackers already know this — they may run scripts through PowerShell 7 to bypass your v5 monitoring.
----------------------------------------------------------------------------------------------------------
Practical Tip: Centralize, Don't Silo
The biggest mistake I see? Logs stay on endpoints until it's too late. Forward PowerShell logs, Security logs (4688), and Defender logs (1116–1119) to a central SIEM or log collector. Once an attacker wipes or disables logging locally, your central copy will be the only evidence left.
----------------------------------------------------------------------------------------------------------
The Art of Obfuscation
Attackers now rely heavily on obfuscation — the practice of making a script unreadable to humans and confusing to machines. If you've ever seen something like ${#/~) or $a+$b+$c forming into a download cradle, that's obfuscation at work.
One of the most famous tools behind this is Invoke-Obfuscation, created by Daniel Bohannon. It showed the world just how easily PowerShell scripts could be twisted into unrecognizable forms while still running perfectly. Since then, threat actors and cybercrime groups have taken this concept to a new level, turning PowerShell into a weapon that can blend right into legitimate system activity. The scary part? A heavily obfuscated script can look like complete gibberish but still download malware or run a tool like Mimikatz in the background.
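Even obfuscated scripts usually end up in the 4104 log once they hit the engine, so a keyword sweep over that log is a cheap first pass. A minimal sketch of the "quick win" search described above, assuming Script Block Logging (or at least the built-in Warning-level auto-logging) has produced events on the host:

# Minimal sketch: keyword sweep across 4104 script block events
$keywords = 'Invoke-Expression', 'IEX', 'FromBase64String', 'DownloadString',
            'WebClient', 'Start-BitsTransfer', 'Invoke-WmiMethod'
$pattern  = ($keywords | ForEach-Object { [regex]::Escape($_) }) -join '|'

Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-PowerShell/Operational'
    Id      = 4104
} | Where-Object { $_.Message -match $pattern } |
    Select-Object TimeCreated, LevelDisplayName,
        @{ n = 'Snippet'; e = { $_.Message.Substring(0, [Math]::Min(200, $_.Message.Length)) } }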
The good news is that defenders have caught up. Windows 10 introduced AMSI (Antimalware Scanning Interface), which allows security tools to inspect scripts as they're executed — even if they're obfuscated. This means we can now catch malicious activity closer to the source.
----------------------------------------------------------------------------------------------------------
Understanding PowerShell Logging
PowerShell isn't just powerful for attackers — it's also a goldmine for forensic analysts. The tool leaves behind several kinds of logs, and each tells a different part of the story.
PowerShell Transcript Logs
Transcript logs are like a screen recorder for the PowerShell terminal. They capture exactly what was typed and what the system replied with. That includes both the commands (inputs) and their results (outputs). These logs are incredibly useful because they give you the complete picture — especially in scenarios where an attacker runs tools like Mimikatz to dump credentials. Script Block Logging might tell you what script was executed, but Transcript Logging shows what actually happened, including whether the attack succeeded.
Transcript logs are saved as text files, usually under: C:\Users\<username>\Documents\
But storing logs in user folders is risky — attackers can easily delete or modify them. That's why you should redirect them to a write-only network share or a secure folder with limited permissions.
You can enable transcription via Group Policy under:
Computer Configuration > Administrative Templates > Windows Components > Windows PowerShell > Turn on PowerShell Transcription
It's lightweight, easy to enable, and provides incredible visibility with minimal storage cost.
Script Block Logging
Script Block Logging focuses on the code itself — the PowerShell commands and functions that get executed, even if they're hidden or dynamically generated. In combination with transcript logs, Script Block Logging helps you catch both the "what" and the "how" of PowerShell activity.
PSReadLine History
Starting with PowerShell v5, Microsoft introduced something even simpler — a local history file called ConsoleHost_history.txt. If you've used Linux before, this is similar to .bash_history. It records the last 4,096 commands typed into the PowerShell console (but not the output). The file is stored here:
%UserProfile%\AppData\Roaming\Microsoft\Windows\PowerShell\PSReadline\
This is often the first artifact investigators check when trying to reconstruct what an attacker was doing manually. However, there are two caveats:
- Attackers can easily delete or modify it.
- It only records interactive console sessions, not background or remote executions.
Even so, it's incredibly handy — especially because it's enabled by default.
----------------------------------------------------------------------------------------------------------
Why This Matters for Defenders
Every PowerShell log type provides a different piece of the puzzle:
Log Type | Captures | Typical Use Case
Script Block Logging | Code that was executed | Detect obfuscated or malicious scripts
Transcript Logging | Commands + Output | Reconstruct what happened in a session
PSReadLine History | Last 4,096 typed commands | Identify manual actions or testing
When combined, they can help analysts piece together how an attacker moved through a system, what tools they executed, and what data they may have accessed.
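During triage it's handy to grab every local profile's PSReadLine history in one pass. A minimal sketch (run elevated; the path is the default location described above):

# Minimal sketch: collect PSReadLine console history for each local user profile
Get-ChildItem 'C:\Users' -Directory | ForEach-Object {
    $history = Join-Path $_.FullName 'AppData\Roaming\Microsoft\Windows\PowerShell\PSReadline\ConsoleHost_history.txt'
    if (Test-Path $history) {
        [pscustomobject]@{
            User         = $_.Name
            LastModified = (Get-Item $history).LastWriteTime
            LineCount    = (Get-Content $history).Count
            LastCommands = (Get-Content $history -Tail 5) -join ' | '
        }
    }
}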
----------------------------------------------------------------------------------------------------------
Wrapping Up
PowerShell is not the enemy — blindness is. And PowerShell logging isn't just a feature — it's a window into the attacker's mind. When properly logged, PowerShell becomes one of your most valuable forensic resources. You can see who ran what, when they ran it, and what exactly executed in memory.
----------------------------------------------Dean---------------------------------------------------
