
- Handling Incident Response: A Guide with Velociraptor and KAPE
Over the past three years, I've written numerous articles on forensic tools and incident response (IR). This time, I want to take a step back and focus on how to handle an incident investigation. This guide specifically highlights incident response workflows using Velociraptor and KAPE. If you're looking for the forensic side of investigations, check out my other articles, or let me know and I'll create one soon!

For those unfamiliar, I've written a series of articles diving deep into Velociraptor, from configuration to advanced usage. You can find them here:
https://www.cyberengage.org/post/exploring-velociraptor-a-versatile-tool-for-incident-response-and-digital-forensics
https://www.cyberengage.org/post/setting-up-velociraptor-for-forensic-analysis-in-a-home-lab
https://www.cyberengage.org/post/navigating-velociraptor-a-step-by-step-guide

Now, let's dive into incident response without overcomplicating things.
-------------------------------------------------------------------------------------------------------------
Why Velociraptor's Labels Matter

One of Velociraptor's standout features is Labels, which play a critical role in investigations. They help you categorize, organize, and quickly identify relevant endpoints. While I can't show you live client data for privacy reasons, I'll provide detailed examples to help you understand the process. Remember, this article assumes you've read my previous Velociraptor articles; they provide the foundational knowledge you'll need here.
-------------------------------------------------------------------------------------------------------------
Scenario: Investigating a Phishing Attack

Imagine you're responding to an incident involving a client with 100 endpoints.

Attack Overview:
- The client fell victim to a phishing email.
- An attachment in the email was opened, initiating the attack.
- The client has isolated the environment by cutting off external connectivity.
Their key questions (before forensics) are:
- How many users opened the attachment?
- What files were created, and where?
- Which endpooints were infected?

The client doesn't have EDR or SIEM tools. Not ideal, but in the real world this happens more often than you'd think.
-------------------------------------------------------------------------------------------------------------
Deploying Velociraptor Agents

First, configure your Velociraptor server (refer to my previous articles for detailed steps). Provide the client with the necessary token and instructions to roll out Velociraptor using GPO (Group Policy Objects).

Key Points:
- Velociraptor isn't a typical agent-based EDR. It doesn't modify the system drastically, making it less intrusive and easier to handle.
- Once the client deploys the agents across endpoints, all devices will begin appearing in your Velociraptor console.
-------------------------------------------------------------------------------------------------------------
Automating Labels for Large Environments

Let's say the client has rolled out Velociraptor to 100 endpoints. Manually assigning labels to each endpoint is impractical. Instead, you can automate the process:
1. Click the eye icon in Velociraptor.
2. Search for Server.Monitor.Autolabeling.Clients.
3. Launch the rule.

With this rule enabled, Velociraptor will automatically assign labels to new clients as they connect, streamlining your workflow.
-------------------------------------------------------------------------------------------------------------
Investigating the Malicious Attachment

The client informs you of the attachment name (e.g., 123.ps1). Your goal is to determine:
- How many endpoints have this file.
- The file's location on each endpoint.

Here's how to proceed:

Step 1: Create a Hunt
- Navigate to the Hunts section.
- Use the FileFinder artifact to configure the hunt.
Configuration Example: If you're looking for 123.ps1, set the search parameter accordingly.

Step 2: Launch the Hunt

Once launched, Velociraptor will search for the specified file across all endpoints. You can view the results under the Notebook tab.

Improving Readability of Results

By default, the output may not be user-friendly, especially if it contains 20-30 artifacts. To make the data more readable:
1. Click the edit icon in the Notebook.
2. Paste the following query:

SELECT Fqdn, OSPath, BTime AS CreatedTime, MTime AS ModifiedTime
FROM source(artifact="Windows.Search.FileFinder")
LIMIT 50

3. Run the query, and you'll see a neatly formatted output (see the screenshot below for how much cleaner the view becomes).
-------------------------------------------------------------------------------------------------------------
Labeling Infected Endpoints

Let's say you identify 20 infected endpoints out of 100. To make tracking easier, label these endpoints as Phishing. Here's the query to do so:

SELECT Fqdn, OSPath, BTime AS CreatedTime, MTime AS ModifiedTime,
  label(client_id=ClientId, labels=['Phishing'], op='set') AS SetLabel
FROM source(artifact="Windows.Search.FileFinder")

This automatically assigns the Phishing label to all 20 infected devices, simplifying your investigation. (Screenshots: before and after running the notebook query.)

Once you have identified the 20 endpoints that opened the file or downloaded the attachment, the next step is to determine which users those endpoints are associated with for further investigation. There are multiple ways to accomplish this.
You can either ask the client which endpoint belongs to which user, or use a Velociraptor live query:
1. Create a hunt scoped to only the endpoints labeled Phishing earlier (this is where labels become truly useful: instead of running a hunt against every endpoint, you run it only on the labelled ones).
2. Select the hunt you want to run, in this case Windows.Sys.AllUsers.
3. Launch the hunt, and you'll see which user each endpoint belongs to (this user information will be useful in our next hunt).

Once you run this hunt, use a notebook to extract a list of all affected users and their respective laptops. This initial step helps you identify around 20 laptops belonging to users who potentially acted as "patient zero."
-------------------------------------------------------------------------------------------------------------
Tracing Lateral Movement

Next, we investigate whether these 20 users logged into other laptops beyond their assigned ones. To do this, launch a hunt using Windows Event Logs: RDP Authentication. While configuring the hunt, use a regular expression (regex) to include the usernames of the 20 suspected users. The screenshot above shows a single user; to add multiple users, use a regex like:

.*(Dean\.amberose|hana\.wealth|Chad\.seen|jaye\.Ward).*

This pattern helps track these users across multiple endpoints. However, this step may produce a large dataset with many false positives. In my test, a single run of the query returned 60 results; run the same query across 20 endpoints in a real scenario and the output becomes overwhelming. To refine the results, analyze the output in a notebook.

Minimizing False Positives

To reduce noise, use a carefully crafted query.
For example:

SELECT EventTime, Computer, Channel, EventID, UserName, LogonType, SourceIP, Description, Message, Fqdn
FROM source(artifact="Windows.EventLogs.RDPAuth")
WHERE (
    (UserName =~ "akash" AND NOT Computer =~ "Akash-Laptop")
    OR (UserName =~ "hana.wealth" AND NOT Computer =~ "Doctor")
)
AND NOT EventID = 4634 -- Exclude logoff events
AND NOT (Computer =~ "domaincontroller" OR Computer =~ "exchangeserver" OR Computer =~ "fileserver")
ORDER BY EventTime

This query excludes routine logons to systems like domain controllers or file servers, focusing on suspicious activity. Modify it further to suit your environment.

If needed, run additional hunts, such as UAL (User Access Logs) for servers; Velociraptor ships a hunt for this as well. By analyzing these logs, you can map which accounts accessed which systems, providing insight into lateral movement on servers too. Use this information to update labels, marking newly suspected endpoints for further investigation. If you want to learn more about UAL and how to parse and analyze it, check out my article below:
https://www.cyberengage.org/post/lateral-movement-user-access-logging-ual-artifact
-------------------------------------------------------------------------------------------------------------
Hunting for Suspicious Processes and Services

To understand the attack's scope and detect malicious activity, examine the processes and services running across all endpoints. Services can be covered with a similar hunt; here I'll demonstrate the process side in practice.

Automated Process Hunting

We will run this process hunt in two ways.

First, using the hunt itself without a notebook: run a hunt using Windows.System.Pslist. When configuring parameters, check the option to focus on "untrusted authenticated code." This flags processes not signed by trusted authorities and provides their hash values.
Second, run the hunt first (without the untrusted-authenticated-code box checked) and then use a notebook. Running the same hunt as before without that option, I got 291 processes on a single endpoint; run it across 100 endpoints and the analysis becomes unmanageable. Worry not: with this second method, there's a notebook query to make analysis easy.

Query (for a more detailed approach, use this in a notebook):

SELECT Name, Exe, CommandLine, Hash.SHA256 AS SHA256, Authenticode.Trusted, Username, Fqdn, count() AS Count
FROM source()
WHERE Authenticode.Trusted = "untrusted" // unsigned binaries
// List of environment-specific processes to exclude
AND NOT Exe = "C:\\Program Files\\filebeat-rss\\filebeat.exe"
AND NOT Exe = "C:\\Program Files\\winlogbeat-rss\\winlogbeat.exe"
AND NOT Exe = "C:\\macfee\\macfee.exe"
AND NOT Exe = "C:\\test\\bin\\python.exe"
// Stack for prevalence analysis
GROUP BY Exe
// Sort results ascending
ORDER BY Count

Output after running the above notebook: only 3 detections, with clean output.
------------------------------------------------------------------------------------------------------------
Hunting for Suspicious Processes with an Automated VirusTotal Scan

Imagine you've scanned 100 endpoints and discovered 50 untrusted processes. Checking their hashes manually would be frustrating and time-consuming. Here's how to simplify this. Keep in mind that you first have to run the hunt as above; once you have output, use the query below in a notebook to automate the analysis with VirusTotal.
Use the following query in a notebook to cross-reference file hashes with VirusTotal, reducing manual overhead:

// Get a free VirusTotal API key
LET VTKey <= "your_api_key_here"

// Build the list of untrusted processes
LET Results = SELECT Name, CommandLine, Exe, Hash.SHA256 AS SHA256, count() AS Count
FROM source()
WHERE Authenticode.Trusted = "untrusted"
AND SHA256 // only entries with SHA256 hashes
// Exclude environment-specific processes
AND NOT Exe = "C:\\Sentinelone\\sentinel.exe"
GROUP BY Exe, SHA256

// Combine with VirusTotal enrichment
SELECT *, {SELECT VTRating FROM Artifact.Server.Enrichment.Virustotal(VirustotalKey=VTKey, Hash=SHA256)} AS VTResults
FROM foreach(row=Results)
WHERE Count < 5
ORDER BY VTResults DESC

Outcome: After running the query, you get VirusTotal results with file ratings, making it easier to prioritize your efforts. No more manual hash-checking!
------------------------------------------------------------------------------------------------------------
Tracing Parent Processes

Once you've identified malicious processes, the next step is to trace their origins. Here's how:

Set Up a Parent Process Hunt: Suppose you've identified these malicious processes:
- IGCC.exe
- WidgetService.exe
- IGCCTray.exe

Use the Generic.System.PsTree hunt to map their parent processes. Configure the parameters by adding the malicious processes in regex format, like this:

.*(IGCCTray.exe|WidgetService.exe|IGCC.exe).*

Outcome: The output shows the process call chain, helping you identify the parent processes and their origins. This insight is crucial for understanding how attackers gained initial access and moved laterally within the network.
------------------------------------------------------------------------------------------------------------
Investigating Persistence Mechanisms

Persistence is a common tactic used by attackers to maintain access. Let's focus on startup items.
Startup Items Hunt: Use the Windows.Sys.StartupItems hunt. Running it on 100 endpoints can generate a huge amount of data; a single endpoint yielded 22 startup items (screenshot below), and across 100 endpoints the dataset becomes unmanageable.

Filter Common False Positives: To narrow down the results, create a notebook query that excludes files or paths the client uses in their environment, is aware of, or that can reasonably be assumed legitimate (McAfee, OneDrive, VMware, and so on):

LET Results = SELECT count() AS Count, Fqdn, Name, OSPath, Details
FROM source(artifact="Windows.Sys.StartupItems")
// Exclude common false positives
WHERE NOT OSPath =~ "vmware-tray.exe"
AND NOT OSPath =~ "desktop.ini"
AND NOT (Name =~ "OneDrive" AND OSPath =~ "OneDrive" AND Details =~ "OneDrive")
// Stack and filter results
GROUP BY Name, OSPath, Details

SELECT * FROM Results
WHERE Count < 10
ORDER BY Count

Outcome: The refined output is structured, significantly reducing the data volume and allowing you to focus on potential threats. For example, the filtered results might now show only 15 entries instead of hundreds, which you can narrow down further.
------------------------------------------------------------------------------------------------------------
Documentation Is Key

Throughout the process:
- Document all malicious processes, paths, infected endpoints, and related findings.
- Organize your notes for efficient forensic investigation and reporting.
------------------------------------------------------------------------------------------------------------
Investigating Scheduled Tasks

Scheduled tasks often serve as a persistence mechanism for attackers. Here's how to analyze them efficiently with Velociraptor: use the Windows.System.TaskScheduler/Analysis artifact to collect scheduled task data.
Once the data is collected, run the following query to exclude known legitimate entries from your environment:

Query:

LET Results = SELECT OSPath, Command, Arguments, Fqdn, count() AS Count
FROM source(artifact="Windows.System.TaskScheduler/Analysis")
WHERE Command AND Arguments
AND NOT Command =~ "ASUS"
AND NOT (Command = "C:\\Program Files (x86)\\Common Files\\Adobe\\AdobeGCClient\\AGCInvokerUtility.exe" OR OSPath =~ "Adobe")
AND NOT Command =~ "OneDrive"
AND NOT OSPath =~ "McAfee"
AND NOT OSPath =~ "Microsoft"
GROUP BY OSPath, Command, Arguments

SELECT * FROM Results
WHERE Count < 5
ORDER BY Count // sorts ascending

Outcome: By running this query, you'll exclude known false positives (e.g., ASUS, Adobe, OneDrive), significantly reducing the dataset and narrowing your focus to potentially suspicious tasks.

Environment-Specific Adjustments: Tailor the query to your environment by adding more exclusions based on legitimate scheduled tasks in your network.
------------------------------------------------------------------------------------------------------------
Analyzing Autoruns

Autorun entries are another common place where attackers seek persistence. Here's how to analyze them efficiently: use the Windows.Sysinternals.Autoruns artifact in Velociraptor to gather autorun data across endpoints, then refine the results with a notebook query. Autorun entries often generate a large amount of data.
Use the following query in a notebook to focus on suspicious entries:

Query:

LET Results = SELECT count() AS Count, Fqdn, Entry, Category, Profile, Description,
  `Image Path` AS ImagePath, `Launch String` AS LaunchString, `SHA-256` AS SHA256
FROM source()
WHERE NOT Signer AND Enabled = "enabled"
GROUP BY ImagePath, LaunchString

SELECT * FROM Results
WHERE Count < 5 // return entries present on fewer than 5 systems
ORDER BY Count

Outcome: This query filters out signed entries and narrows the results, allowing you to focus on anomalies while discarding likely false positives.

Customization: Like the scheduled task query, modify this one to include exclusions specific to your environment for more accurate results.
------------------------------------------------------------------------------------------------------------
Document Everything

Keep a record of all suspicious entries, including file paths, hashes, and the endpoints where they were found. This documentation is essential for both immediate remediation and forensic reporting.

Iterate and Adjust

Each organization has unique software and configurations. Continuously refine your queries to adapt to legitimate processes and new threats.
------------------------------------------------------------------------------------------------------------
So far, we've gathered substantial data from scheduled tasks and autorun entries and identified potentially malicious artifacts. Now, let's take it a step further to ensure that no other endpoints in the environment are compromised.

Identifying Additional Compromised Endpoints

Once we have identified malicious files or processes from our analysis, the next step is to ensure they aren't present on any other endpoints. We'll use the Windows.Search.FileFinder artifact to search for the malicious file names across all endpoints. This is the same artifact we used previously, but now we'll populate it with the suspicious file paths or names identified in the earlier stages.
Example paths (for demonstration purposes):

Launch the Hunt: Run the hunt across 100 endpoints or more to check whether the identified malicious files exist elsewhere.

Reviewing the Output: Once the hunt completes, you'll see a detailed list of endpoints where these files are found. If the files are present on other endpoints, label those endpoints as "compromised" or "attacked" for further investigation.

Labeling Compromised Endpoints: Use the following query to label endpoints automatically:

SELECT Fqdn, OSPath, BTime AS CreatedTime, MTime AS ModifiedTime,
  label(client_id=ClientId, labels=['Phishing'], op='set') AS SetLabel
FROM source(artifact="Windows.Search.FileFinder")
------------------------------------------------------------------------------------------------------------
Next Steps Based on Findings

- If no additional compromised endpoints are found, you can move forward with the analysis of the initially identified endpoints.
- If more compromised endpoints are identified, label them and consider isolating or rebuilding them to eliminate the risk of reinfection.
------------------------------------------------------------------------------------------------------------
YARA Scans for Advanced Threat Detection

Once you've identified the potentially malicious files and endpoints, the final step is to run a YARA rule scan across the environment. This helps detect specific malware families or identify links to Advanced Persistent Threat (APT) groups.

Running a YARA Hunt: Use the Windows.Detection.Yara.Process artifact for this hunt.

Configuring Parameters:
- If you don't provide a custom YARA rule, Velociraptor will default to scanning for Cobalt Strike indicators.
- To run a specific YARA rule (e.g., for detecting APT activity), upload the rule or provide its URL in the configuration (see the screenshot for an example of adding a custom rule URL).

Launching the YARA Scan: Once configured, launch the hunt.
Velociraptor will scan all endpoints and flag any files or processes matching the specified YARA rules.

Reviewing the Results:
- If hits are detected, you can identify the malware family or APT group involved based on the rule that triggered.
- If no hits are found, you can confirm that the environment is clean for the specified indicators.
-----------------------------------------------------------------------------------------------------------
Now that you've identified infected endpoints and labeled "patient zero," it's time to move to the triage, containment, and recovery phases.

KAPE Triage Imaging

The next logical step is to capture a triage image of the compromised endpoints. This allows you to collect crucial artifacts for further investigation.

Triage Imaging via Velociraptor: Velociraptor simplifies this process by allowing you to run a KAPE targets collection directly on the infected endpoint.
- Create a hunt to initiate the KAPE targets collection, targeting the artifacts needed for forensic analysis.
- Collect key forensic artifacts such as registry hives, event logs, and file system metadata.
- Ensure the image is stored securely for further examination.

Manual Imaging (Optional): If Velociraptor isn't an option, you can run KAPE manually on the infected machine to create a comprehensive triage image.

Quarantining Infected Endpoints

Once the imaging process is complete, it's critical to keep the compromised systems isolated, or to isolate them now if that hasn't been done yet, to prevent further spread or communication with potential Command and Control (C2) servers.

Using Velociraptor for Quarantine: Velociraptor can quarantine endpoints by blocking all network communications except to the Velociraptor server. Create a hunt to execute the quarantine action. This ensures the endpoint cannot communicate externally while remaining accessible for analysis.

Benefits of Quarantine:
- Prevents lateral movement within the network.
- Ensures minimal disruption to the ongoing investigation.

Recovery and Reimaging

After quarantining the compromised endpoints:

Reimage the Systems: Reimaging cleans the endpoint, restoring it to a known-good state. Deploy it back into the production environment only after ensuring the threat is eradicated.

Forensic Analysis (Optional): If deeper investigation is required, forensic specialists can analyze the collected artifacts.
- Velociraptor for Forensics: Velociraptor supports advanced forensic capabilities, allowing you to parse and analyze collected data.
- Manual Analysis: Some professionals, myself included, prefer using tools like KAPE and parsing artifacts manually for an in-depth understanding of the attack.

Additional Hunting (Optional)

Before wrapping up, you can perform further hunting on the infected endpoints to gather more details about the attack. For example:
- Command History: Identify commands executed on the endpoints, such as psexec or PowerShell commands, to understand the attacker's actions.
- Network Activity: Investigate network connections to detect communication with suspicious IPs or domains.
- Persistence Mechanisms: Look for persistence techniques like registry changes or scheduled tasks.

Velociraptor offers an array of artifacts and queries for such investigations. Explore these capabilities to uncover additional insights.
-----------------------------------------------------------------------------------------------------------
Final Thoughts

With the steps outlined, you've gone through a comprehensive process to identify, contain, and recover from an endpoint compromise. From advanced hunting to quarantining infected systems, Velociraptor proves to be a powerful tool for incident response.

While this article doesn't delve into detailed forensic analysis, it's worth noting that Velociraptor can handle a wide range of forensic tasks. You can collect, parse, and analyze artifacts directly within the platform, making it an all-in-one solution for responders.
For those who prefer hands-on forensic work, tools like KAPE and manual parsing remain excellent options.

What's Next?

This article is just the beginning. Velociraptor offers many more possibilities for proactive hunting and investigation. Experiment with its capabilities to uncover hidden threats in your environment. Stay tuned for the next article, where we'll dive deeper into another exciting topic in cybersecurity. Until then, happy hunting! 🚀

Dean
- Prefetch Analysis with PECmd and WinPrefetchView
Windows Prefetch is a critical forensic artifact that helps track program execution history. While Prefetch files can be manually analyzed, forensic tools like PECmd (by Eric Zimmerman) and WinPrefetchView (by NirSoft) simplify and enhance the analysis process.

We will cover:
✅ How PECmd extracts and formats Prefetch data
✅ How to analyze Prefetch files using WinPrefetchView
✅ Best practices for interpreting Prefetch execution timestamps
-------------------------------------------------------------------------------------------------------------
Using PECmd to Analyze Prefetch Files

PECmd is a powerful command-line tool for parsing Prefetch files, extracting valuable metadata, and generating structured reports.

1️⃣ Analyzing a Single Prefetch File (-f option)

To extract detailed metadata from a single .pf file, run:

PECmd.exe -f C:\Windows\Prefetch\example.exe-12345678.pf

This outputs:
- Executable Name & Path
- Prefetch Hash & File Size
- Prefetch Version
- Run Count (how many times the application was executed)
- Last Execution Timestamp(s)
  - Windows 7 and earlier: 1 timestamp
  - Windows 8+: up to 8 execution timestamps

💡 Timestamp Validation: The last run time should match the last modified timestamp of the .pf file. Subtract ~10 seconds for accuracy when using file system timestamps.
-------------------------------------------------------------------------------------------------------------
2️⃣ Batch Processing: Parsing an Entire Prefetch Folder (-d option)

To process all Prefetch files in a directory:

PECmd.exe -d G:\G\Windows\prefetch --csv "E:\Output for testing" --csvf Prefetch.csv

This generates two output files:
1️⃣ CSV Report: Contains execution details for all parsed Prefetch files. Useful for filtering by run count or searching for specific applications.
2️⃣ Timeline View: Extracts all embedded execution timestamps from Prefetch files, providing a chronological list of program executions that helps correlate events.
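The ~10-second adjustment mentioned above is simple enough to sketch in Python. This is an illustrative helper, not part of PECmd; the function name is my own:

```python
from datetime import datetime, timedelta, timezone

def estimate_last_run(pf_mtime: datetime, lag_seconds: int = 10) -> datetime:
    """Estimate an application's last launch from its .pf file's last
    modified time: Windows writes the Prefetch file roughly ten seconds
    after the program starts, so subtract that lag."""
    return pf_mtime - timedelta(seconds=lag_seconds)

# A .pf file modified at 14:30:10 UTC implies a launch near 14:30:00 UTC.
mtime = datetime(2024, 5, 1, 14, 30, 10, tzinfo=timezone.utc)
print(estimate_last_run(mtime))  # 2024-05-01 14:30:00+00:00
```

Treat the result as an approximation for cross-checking against the embedded timestamps, not as a precise launch time.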
-------------------------------------------------------------------------------------------------------------
Using WinPrefetchView for GUI-Based Analysis

WinPrefetchView (by NirSoft) provides a graphical interface for analyzing Prefetch data.

How to Use WinPrefetchView
1️⃣ Open WinPrefetchView
2️⃣ Go to Options > Advanced Options
3️⃣ Select the Prefetch folder (C:\Windows\Prefetch\ or one from a forensic image)
4️⃣ Click OK to parse the Prefetch files

📌 Key Features:
✅ Displays Run Count, Last Run Time, and File References
✅ Extracts up to 8 execution timestamps
✅ Lists files accessed by the application within the first 10 seconds of execution

🚀 Takeaway: Prefetch file references can reveal hidden malware, deleted tools, or important user actions that might otherwise go undetected.
-------------------------------------------------------------------------------------------------------------
Best Practices for Prefetch Analysis

🔍 1. Prioritize Prefetch Collection
Running forensic tools on a live system creates new Prefetch files, potentially overwriting older evidence. Collect Prefetch files before executing forensic tools.

🔍 2. Cross-Reference Prefetch Data
Combine Prefetch analysis with:
- UserAssist (tracks GUI-based program executions)
- AmCache (records detailed program metadata)
- BAM/DAM (tracks recent executions)

🔍 3. Look for Anomalous Prefetch Files
Multiple Prefetch files for the same executable but with different hashes may indicate:
- Malware running from multiple locations
- Renamed executables attempting to evade detection

🔍 4. Ensure Timestamps Are Interpreted Correctly
Convert Windows FILETIME timestamps properly. Keep your forensic VM in UTC time to prevent automatic time conversions by analysis tools.
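Since the raw timestamps inside a .pf file are Windows FILETIMEs (100-nanosecond intervals since 1601-01-01 UTC), here is a minimal sketch of the conversion the tools above perform. Keeping everything timezone-aware avoids the local-time pitfalls just mentioned:

```python
from datetime import datetime, timedelta, timezone

# Windows FILETIME epoch: 1601-01-01 00:00:00 UTC
EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_utc(filetime: int) -> datetime:
    """Convert a FILETIME (100-ns ticks since 1601) to an aware UTC
    datetime, with no local-timezone conversion applied."""
    return EPOCH_1601 + timedelta(microseconds=filetime // 10)

print(filetime_to_utc(0))           # 1601-01-01 00:00:00+00:00
print(filetime_to_utc(10_000_000))  # 1601-01-01 00:00:01+00:00 (one second of ticks)
```

Any analysis script that renders these values in local time silently shifts your timeline, which is exactly why the best practice above recommends keeping the forensic VM in UTC.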
-------------------------------------------------------------------------------------------------------------
Final Thoughts: Mastering Prefetch Analysis with PECmd & WinPrefetchView

PECmd and WinPrefetchView are essential tools for extracting, organizing, and analyzing Windows Prefetch data.

💡 Key Takeaways:
✅ PECmd is ideal for batch processing and timeline analysis.
✅ WinPrefetchView provides a user-friendly interface for reviewing Prefetch files.
✅ Prefetch timestamps help reconstruct program execution history, even for deleted applications.
✅ File references inside Prefetch files can reveal hidden malware or deleted forensic evidence.

🚀 If you're investigating program execution on a Windows system, Prefetch analysis should be one of your first steps! 🔍
-----------------------------------------Dean-----------------------------------------------
- Windows Prefetch Files: A Forensic Goldmine for Tracking Program Execution
Windows Prefetch is one of the most valuable forensic artifacts for tracking program execution history. By analyzing Prefetch files, investigators can determine which applications were run, when they were executed, how often they were used, and even which files and directories they accessed.

We'll explore:
✅ What Prefetch is and how it works
✅ Where to find Prefetch files
✅ How to extract and interpret Prefetch data
✅ Best practices for forensic investigations
-------------------------------------------------------------------------------------------------------------
What Is Prefetch and How Does It Work?

Windows Prefetching is a performance optimization feature that preloads frequently used applications into memory to speed up their execution. When a program is launched for the first time, Windows creates a .pf (Prefetch) file for it. Each .pf file contains:
✅ The name and path of the executed application
✅ How many times it has been executed
✅ The last execution time
✅ Up to 8 previous execution timestamps (Windows 8 and later)
✅ Referenced files and directories the application accessed

💡 Key Insight: If a Prefetch file exists for an application, it proves that the program was executed at least once on the system.
-------------------------------------------------------------------------------------------------------------
Where Are Prefetch Files Stored?

On Windows workstations (not servers), Prefetch files are stored in:

C:\Windows\Prefetch\

📌 File Naming Format (ApplicationName-HASH.pf), for example:

7ZFM.EXE-56DE4F9A.pf

- The ApplicationName is the name of the executable.
- The HASH is a hexadecimal representation of the executable's full path.

💡 Pro Tip: If you find multiple Prefetch files with the same executable name but different hashes, it means the program was executed from multiple locations, potentially indicating malware or unauthorized software.
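The naming convention above is easy to exploit programmatically. Here is a short Python sketch (helper names are my own) that splits a Prefetch file name into its executable name and path hash, then flags executables seen with more than one hash, the "same name, different locations" pattern described above:

```python
from collections import defaultdict

def parse_prefetch_name(pf_name: str) -> tuple[str, str]:
    """Split 'NAME.EXE-HASH.pf' into (executable name, path hash)."""
    stem = pf_name[:-3] if pf_name.lower().endswith(".pf") else pf_name
    exe, _, path_hash = stem.rpartition("-")
    return exe, path_hash

def multi_location_exes(pf_names: list[str]) -> dict[str, list[str]]:
    """Return executables that appear with more than one path hash,
    i.e. were run from more than one location."""
    seen = defaultdict(set)
    for name in pf_names:
        exe, path_hash = parse_prefetch_name(name)
        seen[exe].add(path_hash)
    return {exe: sorted(hashes) for exe, hashes in seen.items() if len(hashes) > 1}

print(parse_prefetch_name("7ZFM.EXE-56DE4F9A.pf"))  # ('7ZFM.EXE', '56DE4F9A')
print(multi_location_exes([
    "MIMIKATZ.EXE-AAAA1111.pf",
    "MIMIKATZ.EXE-BBBB2222.pf",
    "7ZFM.EXE-56DE4F9A.pf",
]))  # {'MIMIKATZ.EXE': ['AAAA1111', 'BBBB2222']}
```

Run against a directory listing of C:\Windows\Prefetch\, this kind of stacking quickly surfaces renamed or relocated binaries.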
-------------------------------------------------------------------------------------------------------------
How Many Prefetch Files Are Stored?

- Windows 7 and earlier → stores up to 128 Prefetch files
- Windows 8, 10, and 11 → stores up to 1,024 Prefetch files

📌 Important Note: Older Prefetch files are deleted as new ones are created, meaning execution history may be lost over time.
-------------------------------------------------------------------------------------------------------------
Understanding Prefetch Execution Timestamps

💡 How to determine the first and last execution time:

| Timestamp Type | Meaning | Accuracy Considerations |
|---|---|---|
| File Creation Date | First recorded execution of the application | Only accurate if the .pf file was never deleted due to aging out |
| File Last Modified Date | Last recorded execution of the application | Subtract ~10 seconds for accuracy |
| Embedded Timestamps (Windows 8+) | Last 8 execution times | Most reliable for tracking multiple executions |

📌 Important Note: If an application is executed again after its Prefetch file has aged out, a new .pf file is created, making it look like the application was first executed at a later date than it actually was.
-------------------------------------------------------------------------------------------------------------
Why Prefetch Files Are Crucial in Digital Forensics

✅ 1. Tracking Program Execution
- Prefetch proves a specific application was run on the system.
- Even if an application was deleted, its Prefetch file may still exist as evidence.

✅ 2. Identifying Suspicious Activity
- If you find a Prefetch file for malware or hacking tools (mimikatz.exe, nc.exe), it indicates they were executed.
- Finding multiple Prefetch files for the same executable in different locations suggests a renamed or relocated executable, which is common in malware evasion techniques.

✅ 3.
Detecting Unauthorized Software & Insider Threats
If a user claims they never used a VPN, but a Prefetch file for NordVPN.exe exists, this contradicts their claim.
✅ 4. Establishing a Timeline of Events
Prefetch timestamps can help reconstruct a timeline of when certain applications were executed relative to an incident.
-------------------------------------------------------------------------------------------------------------
Limitations of Prefetch Analysis
⚠️ 1. Prefetch Is Disabled on Some Systems
Windows Server OS does not use Prefetch. Some Windows 7+ systems with SSDs may have Prefetch disabled.
📌 Check Registry Settings to See If Prefetch Is Enabled: SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters
Audit the EnablePrefetcher value:
0 → Disabled
1 → Application launch prefetching enabled
2 → Boot prefetching enabled
3 → Both application launch & boot prefetching enabled (default)
⚠️ 2. Prefetch Does Not Prove Successful Execution
A .pf file is created even if the program failed to execute properly. Cross-check with other artifacts (UserAssist, BAM/DAM, AmCache) for confirmation.
⚠️ 3. Prefetch Files Are Limited in Number
Older Prefetch files are deleted when the limit is reached. If an app was used long ago, its Prefetch file may no longer exist.
-------------------------------------------------------------------------------------------------------------
Best Practices for Prefetch Analysis
🔍 1. Prioritize Prefetch Collection
Live response tools create new Prefetch files, potentially overwriting older forensic evidence. Collect Prefetch data before running analysis tools.
🔍 2. Cross-Reference Other Execution Artifacts
Compare Prefetch data with: UserAssist, AmCache, BAM/DAM.
🔍 3. Look for Anomalous Prefetch Files
Multiple Prefetch files for the same application but with different hashes may indicate suspicious execution paths.
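The EnablePrefetcher value-to-meaning mapping can be wrapped in a tiny helper for report scripts. This is a minimal sketch of my own; the winreg read is shown only as a comment because it requires a live Windows system:

```python
# Values of EnablePrefetcher under
# SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters
ENABLE_PREFETCHER = {
    0: "Disabled",
    1: "Application launch prefetching enabled",
    2: "Boot prefetching enabled",
    3: "Both application launch & boot prefetching enabled (default)",
}

def interpret_enable_prefetcher(value):
    """Translate the raw registry DWORD into the meaning listed above."""
    return ENABLE_PREFETCHER.get(value, f"Unknown value: {value}")

# On a live Windows box the value could be read roughly like this
# (not executed here):
#   import winreg
#   key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
#       r"SYSTEM\CurrentControlSet\Control\Session Manager"
#       r"\Memory Management\PrefetchParameters")
#   value, _ = winreg.QueryValueEx(key, "EnablePrefetcher")

print(interpret_enable_prefetcher(3))
```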
-------------------------------------------------------------------------------------------------------------
Final Thoughts: Prefetch Is an Essential Artifact for Execution Tracking
Windows Prefetch files are one of the most reliable ways to track program execution. They provide timestamps, execution counts, and file access details that are crucial in forensic investigations.
💡 Key Takeaways:
✅ Prefetch proves an application was executed, even if it was later deleted.
✅ Windows 8+ Prefetch files store up to 8 execution timestamps, making them invaluable for tracking repeat usage.
✅ Prefetch files can reveal unauthorized or malicious software execution.
✅ Cross-check Prefetch data with other execution artifacts (UserAssist, BAM/DAM, AmCache) for accuracy.
🚀 If you're investigating program execution on a Windows system, Prefetch analysis should be at the top of your forensic checklist! 🔍
-------------------------------------------------Dean-----------------------------------------------
- SentinelOne Threat Hunting Series P3: Must-Have Custom Detection Rules
In this article, we continue exploring the power of SentinelOne’s custom detection rules to enhance control over your environment's security. Below are more custom detection rules tailored for advanced threat detection, covering scenarios such as remote desktop activity, SMB connections, PowerShell misuse, and suspicious file transfers.
21. RDP Session Start Events with Non-Local Connections
Rule: event.type == "Process Exit" AND src.process.cmdline contains:anycase("mstsc.exe") OR (event.type == "Process Creation" AND src.process.cmdline contains:anycase("mstsc.exe") AND !(src.ip.address matches:anycase("0.0.0.0/8", "127.0.0.0/8", "169.254.0.0/16")))
Description: Detects RDP session initiation using the mstsc.exe process from non-local IP addresses, highlighting potential unauthorized remote connections.
22. Creation of Processes Related to Remote Desktop Tools and Protocols
Rule: event.type == "Process Creation" AND !(src.ip.address matches:anycase("0.0.0.0/8", "127.0.0.0/8", "169.254.0.0/16")) AND src.process.cmdline contains:anycase("mstsc", "vnc", "ssh", "teamviewer", "anydesk", "logmein", "chrome remote desktop", "splashtop", "gotomypc", "parallels access")
Description: Monitors the creation of processes linked to remote access tools while excluding certain IP ranges, which could indicate suspicious remote activity.
23. SMB Connections Indicating Lateral Movement
Rule: event.type == "IP Connect" AND event.network.direction == "INCOMING" AND event.network.protocolName == "smb" AND dst.port.number == 445
Description: Flags SMB connections over port 445, commonly used for lateral movement in network compromises.
24.
BitsTransfer Activity
Rule: event.type == "Process Creation" AND tgt.process.cmdline contains:anycase("BitsTransfer") AND tgt.file.extension in:anycase("ps1", "bat", "exe", "dll", "zip", "rar", "7z", "tar")
Description: Monitors the use of BitsTransfer to download or upload files, a technique often used to evade detection in malicious activities.
25. PowerShell Web Request
Rule: event.type == "Process Creation" AND tgt.process.displayName == "Windows PowerShell" AND (tgt.process.cmdline contains:anycase("Invoke-WebRequest", "iwr", "wget", "curl", "Net.WebClient", "Start-BitsTransfer"))
Description: Detects PowerShell commands that perform web requests, which may indicate data exfiltration or malicious script downloads.
26. Suspicious File Uploads to Cloud Services
Rule: event.category == "url" AND url.address matches("https?://(?:www\\.)?(?:dropbox\\.com|drive\\.google\\.com|onedrive\\.live\\.com|box\\.com|mega\\.nz|icloud\\.com|mediafire\\.com|pcloud\\.com)") OR (event.category == "url" AND event.url.action == "PUT" AND url.address matches("https?://(?:www\\.)?(?:dropbox\\.com|drive\\.google\\.com|onedrive\\.live\\.com|box\\.com|mega\\.nz|icloud\\.com|mediafire\\.com|pcloud\\.com)"))
Description: Detects upload attempts to cloud storage platforms, which could signify data exfiltration efforts.
Share your email and details, and I’ll help craft the perfect rule for your needs. See you soon! 👋
Thank you so much for staying with me throughout this complete series on SentinelOne. It has always been a pleasure writing and sharing knowledge so others can benefit. With this final article, I wrap up my coverage of SentinelOne, at least until I receive further requests to explore more on this topic. For now, I'll be shifting my focus to other articles and new areas of research. Stay curious, keep learning, and as always, take care. See you soon! 🚀
- SentinelOne Threat Hunting Series P2: Must-Have Custom Detection Rules
In this article, we continue exploring the power of SentinelOne’s custom detection rules to enhance control over your environment's security. These rules allow you to define specific conditions for detecting and responding to potential threats, giving you the flexibility to act beyond built-in detections.
11. Mimikatz (Reg Add with Process Name)
Rule: tgt.process.name == "powershell.exe" AND (registry.keyPath == "SYSTEM\\CurrentControlSet\\Services\\mimidrv" OR tgt.process.cmdline contains:anycase("MISC::AddSid", "LSADUMP::DCShadow", "SEKURLSA::Pth", "CRYPTO::Extract")) AND (file.name in:anycase("vaultcli.dll", "samlib.dll", "kirbi"))
Description: Detects malicious registry modifications associated with Mimikatz. The rule identifies suspicious PowerShell activity and DLL manipulations indicative of credential dumping or lateral movement.
12. MimikatzV (Behavior-Based)
Rule: event.type == "Behavioral Indicators" AND indicator.name in:matchcase("Mimikatz", "PrivateKeysStealAttemptWithMimikatz") OR (event.type == "File Creation" AND tgt.file.path matches(".*\\mimikatz.*", ".*\\sekurlsa.*", ".*\\mimidrv.*", ".*\\mimilib.*")) OR (event.type == "Threat Intelligence Indicators" AND tiIndicator.malwareNames contains:anycase("Mimikatz"))
Description: A behavior-based rule for detecting Mimikatz activity by monitoring file creation, threat intelligence indicators, and behavioral signs linked to credential theft.
13. Disable Veeam Backup Services V2
Rule: tgt.process.cmdline contains:anycase("net.exe stop veeamdeploysvc", "vssadmin.exe Delete Shadows", "vssadmin.exe delete Shadows /All /Quiet", "wmic shadowcopy delete")
Description: Flags attempts to disable Veeam Backup services, commonly used by attackers to disrupt data recovery processes during ransomware campaigns.
14.
Mimikatz Executables
Rule: tgt.file.path contains:anycase("mimikatz.exe", "mimikatz", "mimilove.exe", "mimilove", "mimidrv.sys", "mimidrv", "mimilib.dll", "mimilib", "mk.7z")
Description: Detects the presence of Mimikatz executables or libraries, identifying potential tool deployment for credential harvesting.
15. Rclone (You can use other tools like Mega.io or FileZilla as well)
Rule: src.process.name in:matchcase("rclone.exe", "rclone.org", "Rclone.exe") AND event.dns.request == "rclone.org" OR tgt.process.cmdline contains:anycase("rclone") OR src.process.displayName contains:anycase("rclone") OR src.process.cmdline contains:anycase("rclone")
Description: Monitors activity related to Rclone, a legitimate tool often abused for exfiltrating data to cloud storage services.
16. NTDSUtil
Rule: event.type == "Process Creation" AND ((tgt.process.cmdline contains:anycase("copy ") AND (tgt.process.cmdline contains:anycase("\\Windows\\NTDS\\NTDS.dit") OR tgt.process.cmdline contains:anycase("\\Windows\\System32\\config\\SYSTEM "))) OR (tgt.process.cmdline contains:anycase("save") AND tgt.process.cmdline contains:anycase("HKLM\\SYSTEM "))) OR (tgt.process.name == "ntdsutil.exe" AND tgt.process.cmdline contains:anycase("ac i ntds")) OR (tgt.process.name == "mklink.exe" AND tgt.process.cmdline contains:anycase("HarddiskVolumeShadowCopy"))) AND !(src.process.cmdline contains:anycase("Get-psSDP.ps1")) OR (src.process.cmdline contains:anycase("ntdsutil") AND src.process.cmdline contains:anycase("ifm")) OR (tgt.process.cmdline contains:anycase("ntdsutil") AND tgt.process.cmdline contains:anycase("ifm"))
Description: Targets suspicious usage of NTDSUtil to access Active Directory databases and other sensitive registry keys, a technique used in domain compromises.
17.
CURL Connecting to IPs
Rule: src.process.cmdline contains:matchcase("curl.exe") AND event.network.direction == "OUTGOING" AND dst.ip.address matches("^((?!10\\.).)*$") AND dst.ip.address matches("^((?!172\\.1[6-9]\\.).)*$") AND dst.ip.address matches("^((?!172\\.2[0-9]\\.).)*$") AND dst.ip.address matches("^((?!172\\.3[0-1]\\.).)*$")
Description: Detects CURL network connections to non-local IP addresses, helping to identify potential data exfiltration attempts.
18. Admin$hare Activity (Cobalt Strike - Service Install Admin Share)
Rule: src.process.cmdline contains:matchcase("\\127.0.0.1\\ADMIN$") AND src.process.cmdline contains:matchcase("cmd.exe /Q /c")
Description: Identifies suspicious activity targeting the ADMIN$ share, often used by tools like Cobalt Strike for lateral movement.
19. RDP Detection (Any Port)
Rule: event.type == "IP Connect" AND event.network.direction == "INCOMING" AND src.process.cmdline contains:anycase("-k NetworkService -s TermService") AND src.ip.address matches("\\b(?!10|192\\.168|172\\.(2[0-9]|1[6-9]|3[0-1])|(25[6-9]|2[6-9][0-9]|[3-9][0-9][0-9]|99[1-9]))[0-9]{1,3}\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)") AND src.ip.address != "127.0.0.1"
Description: Monitors incoming RDP connections, highlighting unusual or unauthorized attempts to access the environment.
20. RDP Detection (Port 3389)
Rule: dst.port.number == 3389 AND event.network.direction == "INCOMING" AND src.ip.address matches("\\b(?!10|192\\.168|172\\.(2[0-9]|1[6-9]|3[0-1])|(25[6-9]|2[6-9][0-9]|[3-9][0-9][0-9]|99[1-9]))[0-9]{1,3}\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)") AND src.ip.address != "127.0.0.1"
Description: Focused detection of RDP activity on the standard port 3389, which is commonly targeted in brute-force attacks.
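As an aside, the private-range exclusions that rules 17, 19, and 20 express with long regexes are easier to reason about in a general-purpose language. Here is a hedged Python sketch of the equivalent logic (this is not SentinelOne query syntax; it's just a way to sanity-check which addresses such a rule should match):

```python
import ipaddress

def is_external(ip: str) -> bool:
    """True if the address is routable on the internet, i.e. not RFC 1918
    private, loopback, or link-local. This is the set of addresses the
    regex-based exclusions in the rules above are trying to isolate."""
    addr = ipaddress.ip_address(ip)
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)

assert not is_external("10.1.2.3")      # RFC 1918
assert not is_external("172.16.0.5")    # RFC 1918
assert not is_external("192.168.30.1")  # RFC 1918
assert not is_external("127.0.0.1")     # loopback
assert is_external("8.8.8.8")           # public
```

If you export matching events for review, a helper like this makes it easy to double-check that a rule's regex did not accidentally match (or miss) a range.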
Share your email and details, and I’ll help craft the perfect rule for your needs. See you soon! 👋
- SentinelOne Threat Hunting Series P1: Must-Have Custom Detection Rules
In this three-part series, we’ll explore custom rules for enhanced threat detection and hunting in SentinelOne. These rules leverage STAR (Storyline Active Response) custom detection rules to proactively identify malicious activities and enhance security posture. If you need any rules tailored to your environment, feel free to email me via the Contact Us page with your requirements, and I'll be happy to create them for you!
Part 1: Top 10 Must-Have Rules for Threat Hunting
1. Delete Shadow Volume Copies
Purpose: Detects attempts to delete shadow copies, a common tactic used by ransomware operators to prevent file recovery.
Rule: tgt.process.cmdline matches("vssadmin\\.exe Delete Shadows","vssadmin\\.exe delete Shadows /All /Quiet")
2. Suspect Volume Shadow Copy Behavior Detected
Purpose: Identifies attempts to access sensitive files from shadow copies.
Rule: tgt.process.cmdline contains:anycase("HarddiskVolumeShadowCopy") AND ( tgt.process.cmdline contains:anycase("ntds\\ntds.dit") OR tgt.process.cmdline contains:anycase("system32\\config\\sam") OR tgt.process.cmdline contains:anycase("system32\\config\\system")) AND !( src.process.name == "windows\\system32\\esentutl.exe" OR src.process.publisher in:matchcase("Veritas Technologies LLC", "Symantec Corporation"))
3. Impact - Shadow Copy Delete Via WMI/CIM Detected
Purpose: Flags deletion of shadow copies using WMI or CIM commands.
Rule: tgt.process.cmdline contains:anycase("win32_shadowcopy") AND ( tgt.process.cmdline contains:anycase("Get-WmiObject") OR tgt.process.cmdline contains:anycase("Get-CimInstance") OR tgt.process.cmdline contains:anycase("gwmi") OR tgt.process.cmdline contains:anycase("gcim")) AND ( tgt.process.cmdline contains:anycase("Delete") OR tgt.process.cmdline contains:anycase("Remove"))
4. Suspect Symlink to Volume Shadow Copy Detected
Purpose: Detects creation of symlinks to shadow copies for unauthorized access.
Rule: tgt.process.cmdline contains:anycase("mklink") AND tgt.process.cmdline contains:anycase("HarddiskVolumeShadowCopy")
5. Disable/Delete Microsoft Defender AV Using PowerShell
Purpose: Monitors attempts to disable Microsoft Defender via PowerShell commands.
Rule: tgt.process.cmdline contains:anycase("powershell Set-MpPreference -DisableRealtimeMonitoring $true") OR tgt.process.cmdline contains:anycase("sc stop WinDefend") OR tgt.process.cmdline contains:anycase("sc delete WinDefend")
6. Disable Windows Defender
Purpose: Detects various attempts to disable Microsoft Defender features.
Rule: tgt.process.cmdline contains:anycase("Set-MpPreference") AND ( tgt.process.cmdline contains:anycase("-DisableArchiveScanning") OR tgt.process.cmdline contains:anycase("-DisableAutoExclusions") OR tgt.process.cmdline contains:anycase("-DisableBehaviorMonitoring") OR tgt.process.cmdline contains:anycase("-DisableBlockAtFirstSeen") OR tgt.process.cmdline contains:anycase("-DisableCatchupFullScan") OR tgt.process.cmdline contains:anycase("-DisableCatchupQuickScan") OR tgt.process.cmdline contains:anycase("-DisableEmailScanning") OR tgt.process.cmdline contains:anycase("-DisableRealtimeMonitoring"))
7. Disable Windows Defender Via Registry Key
Purpose: Flags registry key changes disabling Defender.
Rule: tgt.process.cmdline contains:anycase("reg\\ add") AND tgt.process.cmdline contains:anycase("\\SOFTWARE\\Policies\\Microsoft\\Windows Defender") AND ( tgt.process.cmdline contains:anycase("DisableAntiSpyware") OR tgt.process.cmdline contains:anycase("DisableAntiVirus"))
8. Disable Windows Defender Signature Updates
Purpose: Detects attempts to disable Defender signature updates.
Rule: tgt.process.cmdline contains:anycase("Remove-MpPreference") OR tgt.process.cmdline contains:anycase("set-mppreference") AND ( tgt.process.cmdline contains:anycase("HighThreatDefaultAction") OR tgt.process.cmdline contains:anycase("SevereThreatDefaultAction"))
9.
SVCHOST Spawned by Unsigned Process
Purpose: Flags instances of svchost.exe being launched by unsigned processes.
Rule: src.process.publisher == "Unsigned" AND tgt.process.name == "svchost.exe"
10. Mimikatz via PowerShell
Purpose: Detects the execution of Mimikatz scripts or commands using PowerShell.
Rule: src.process.parent.cmdline contains:anycase("Invoke-Mimikatz.ps1", "Invoke-Mimikatz") AND tgt.process.name == "powershell.exe"
Closing Note
Stay tuned for more custom threat-hunting rules and best practices in the next articles of this series! If you have specific rule requirements or ideas, feel free to reach out through the Contact Us section. Share your email and details, and I’ll help craft the perfect rule for your needs. See you soon! 👋
Dean
- Streamlining USB Device Identification with a Single Script
Identifying and analyzing USB device details can be a tedious and time-consuming task. It often requires combing through various system registries and logs to gather information about connected USB devices. As a cybersecurity professional, having an efficient way to automate this process can save valuable time and reduce errors.
In this blog, I will share a script that simplifies the task of identifying USB device details. This script gathers all the necessary information in one go, making the process more efficient. Additionally, you can find this script integrated into my endpoint data capture tool, which is detailed in my previous blog. The script is also available on the resume page of my portfolio.
USB Device Information
Before diving into the script, let’s look at the kind of information we aim to extract:
Serial Number: Unique identifier for the USB device.
Friendly Name: User-friendly name of the USB device.
Mounted Name: Drive letter assigned to the USB device.
First Time Connection: Timestamp of the first connection.
Last Time Connection: Timestamp of the last connection.
VID: Vendor ID of the USB device.
PID: Product ID of the USB device.
Connected Now: Indicates if the device is currently connected.
User Name: The username that initiated the connection.
DiskID: Unique identifier for the disk.
ClassGUID: Class GUID of the device.
VolumeGUID: Volume GUID of the device (if available).
If you run the script in PowerShell, you will get output like below:
If you run my script, which you can find on the resume page, you will get output like below:
Update on Script: https://www.linkedin.com/feed/update/urn:li:activity:7284276306349871106/
Conclusion
Identifying USB details can indeed be a hectic task when done manually by digging through system registries. However, with the help of automation scripts like the one shared above, the process can become much more manageable and efficient.
Akash Patel
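For readers curious about the kind of parsing such a script performs, here is a small, self-contained Python sketch (illustrative only, not the script from the resume page) that extracts the VID, PID, and serial number from a typical USB device instance string of the sort found under the Enum\USB registry key. The example instance string is hypothetical:

```python
import re

def parse_usb_instance(instance_id: str) -> dict:
    """Parse a USB device instance string such as
    'USB\\VID_0781&PID_5567\\4C530001234567890123' into VID, PID,
    and serial number. Real registry entries vary, so treat this
    format as illustrative."""
    m = re.search(r"VID_([0-9A-Fa-f]{4})&PID_([0-9A-Fa-f]{4})\\?(.*)", instance_id)
    if not m:
        return {}
    return {
        "VID": m.group(1).upper(),
        "PID": m.group(2).upper(),
        "SerialNumber": m.group(3) or None,
    }

print(parse_usb_instance(r"USB\VID_0781&PID_5567\4C530001234567890123"))
```

The VID can then be looked up against a vendor ID list to identify the manufacturer, which is one of the steps the full script automates.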
- SentinelOne (P10 - New SentinelOne Console): A Practical Guide and Practical Training
As promised, let’s dive into the new SentinelOne console and its features. Here's an overview of what the updated interface looks like:
Dashboard
The front-page dashboard in the new console is intuitive and visually appealing. While it doesn’t require much explanation, I highly encourage you to explore it. Even if you don’t end up using SentinelOne, experiencing this console once will showcase how flawlessly it operates, with proper configuration, of course.
Purple AI Tab
This tab is dedicated to Purple AI, a feature aimed at enhancing automation and operational efficiency.
Alerts Tab
In the updated console, the Alerts tab combines the previously separate Alerts and Threats tabs. You now get all alert details in one place, simplifying threat management.
Exposure Tab
The Exposure tab now includes:
ISPM (Identity Security Posture Management) details under the Misconfiguration section.
Application vulnerabilities listed under the Vulnerabilities section, providing a centralized view of risks.
Event Search Tab
This tab provides Deep Visibility, a feature carried over from the previous console, enabling you to dive deep into historical event data for advanced investigations.
Inventory Tab
Assets: Displays endpoints with SentinelOne installed.
Identity Endpoints: If you’ve configured Active Directory (AD) as part of ISPM, domain controllers or identity endpoints should appear here.
Applications: Houses the application inventory, listing all apps detected within the environment.
New Graph Query Feature
A notable new addition is the ability to build custom graphs:
Custom Queries: Build your own queries for tailored insights.
Query Builder Examples: Use pre-designed examples as starting points.
Library: Access a library of prebuilt queries for quicker analysis.
Activities Tab
This tab contains the Activity Log, which records everything that happens in the console, a carryover from the previous logs feature.
RemoteOps Tab This tab is similar to the Automation feature from the older console, allowing remote operations and task execution. Agent Management Tab The Agent Management tab replaces the old Sentinels tab and provides similar functionalities, along with additional sub-tabs for more granular management. Reports and Policies Tabs The final two tabs: Reports: For generating and viewing detailed reports. Policy and Settings: Offers comprehensive configuration options for policies and other settings. Conclusion That wraps up the overview of the new SentinelOne console. It’s packed with updates and improved functionality. "Thanks for sticking with me on this journey through SentinelOne! It’s truly an incredible platform that combines power, simplicity, and innovation. Whether you're new to it or a seasoned user, SentinelOne has something to wow everyone. Stay tuned for more updates as we continue exploring its awesome features together!"
- Tracing Reused $MFT Entry Paths: Recovering Deleted File Paths Forensically with CyberCX UsnJrnl Rewind
Hey there! If you’ve been following my articles, you might already know the answer to this question. But let me ask it again: If we have $MFT, why do we need $UsnJrnl?
Understanding the Difference Between $MFT and $UsnJrnl
While the $MFT (Master File Table) gives you a snapshot of the file system at specific points in time, the $UsnJrnl ($J) keeps a detailed record of file system changes over time.
Tracking Subtle Changes
Exfiltration often involves small but significant actions: modifying, renaming, or deleting files. These actions may not always be captured by the $MFT, but $UsnJrnl logs them in detail, which is crucial for uncovering sophisticated exfiltration techniques.
Example: Let’s say an attacker creates a ZIP file to exfiltrate data. The $MFT will log the creation of the ZIP file. The $UsnJrnl, however, will document every step: adding files to the ZIP, zipping the data, renaming the file, and moving it.
------------------------------------------------------------------------------------------------------------
This answers the initial question, but let’s raise a new one.
What Happens When MFT Entries Are Reused?
Here’s the scenario:
A file is created, and its details are stored in the $MFT with a sequence number and file record.
The file is deleted, and while $UsnJrnl logs this event, the $MFT entry becomes available for reuse.
When a new file is created, it might reuse the same MFT entry number (with an incremented sequence number).
As $UsnJrnl/$J doesn’t track full file paths but instead logs file names, entry numbers, and sequence numbers, a question arises: If a file's $MFT record is removed or reused by another file, how can you reconstruct the original file path using $MFT and $J?
Screenshot of $J
Forensic tools often correlate $UsnJrnl with $MFT to reconstruct file paths, but reused MFT entries can complicate this process.
------------------------------------------------------------------------------------------------------------
Okay, let’s go through a practical example so you can understand easily.
Example: Recovering the Path
Files Used:
$MFT parsed file: mftOutput.csv
$UsnJrnl parsed file: jOutput.csv
Observations:
Let’s choose a file named creds.txt.txt.
In the $UsnJrnl:$J file, creds.txt.txt was identified with:
Entry Number: 1124
Sequence Number: 4
Update Reason: File Delete | Close (this update reason means the file was deleted and its $MFT file record became available for reuse)
Searching by file name in the $MFT file revealed that no file with the name creds.txt.txt exists.
Searching for Entry Number 1124 in the $MFT file revealed that the entry had been reused. The sequence numbers confirmed it had been overwritten four times, with the current file being log.old. This reuse makes it impossible to locate the deleted file's path directly in the $MFT.
------------------------------------------------------------------------------------------------------------
Solution: Using CyberCX UsnJrnl Rewind
Research and tooling from CyberCX come to the rescue. They developed a script called UsnJrnl Rewind, which correlates $MFT and $UsnJrnl:$J data to reconstruct deleted file paths, even for entries that have been reused.
Steps to Use:
Clone the tool from the GitHub repository: CyberCX UsnJrnl Rewind
Set up the environment (e.g., WSL or Linux).
Run the script with the $MFT and $UsnJrnl parsed files as inputs:
python usnjrnl_rewind.py -m MFT_Output.csv -u UsnJrnl_Output.csv output-path
The tool produces two outputs:
NTFS.sqlite
USNJRNL.fullpath
------------------------------------------------------------------------------------------------------------
Verifying the Results
Open the USNJRNL.fullpath file to locate the path of creds.txt.txt.
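The core correlation idea can be sketched in a few lines of Python: group parsed $J records by MFT entry number and order them by sequence number, so each "life" of the record becomes visible. The records below are synthetic and the column set is simplified; a real path reconstruction also needs parent-entry references, which is exactly what UsnJrnl Rewind resolves for you:

```python
from collections import defaultdict

def entry_lifecycle(usn_records, entry_number):
    """Reconstruct the lives of one MFT entry from parsed $J records.
    Each record is a dict with 'entry', 'seq', 'name', 'reason'.
    Returns {sequence: [(name, reason), ...]} ordered by sequence."""
    lives = defaultdict(list)
    for rec in usn_records:
        if rec["entry"] == entry_number:
            lives[rec["seq"]].append((rec["name"], rec["reason"]))
    return dict(sorted(lives.items()))

# Synthetic records for entry 1124: creds.txt.txt lived and died at
# sequence 4, then the entry was reused for a new file.
records = [
    {"entry": 1124, "seq": 4, "name": "creds.txt.txt", "reason": "File Create"},
    {"entry": 1124, "seq": 4, "name": "creds.txt.txt", "reason": "File Delete | Close"},
    {"entry": 1124, "seq": 5, "name": "log.old", "reason": "File Create"},
]
for seq, events in entry_lifecycle(records, 1124).items():
    print(seq, events)
```

Running this against a full parsed $J export shows at a glance how many times an entry was recycled and which file name belonged to each sequence number.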
Additionally, you can trace the file record's lifecycle:
Sequence 1: Overflowset → Deleted
Sequence 2: NewTextDocument.txt → Deleted
Sequence 3: log.old~rf14 → Deleted
Sequence 4: log → Currently active on the system.
And there you have it! This research has given us valuable insights into forensic investigations. With that, we wrap up this article. See you in the next one. Until then, take care and goodbye!
----------------------------------------------Dean----------------------------------------------------
- Lateral Movement Analysis: Using Chainsaw, Hayabusa, and LogParser for Cybersecurity Investigations
A few days ago, I received a request through my website from someone working on an incident response case. He mentioned a situation involving 20 endpoints and asked if there was a quicker way to identify lateral movement: specifically, whether users on those endpoints had attempted to log in to other systems and whether those attempts were successful. He was manually analyzing logs from all 20 endpoints, which was understandably time-consuming and inefficient.
Around the same time, another cybersecurity professional with more than 20 years of experience reached out, seeking an easy way to identify lateral movement and asking me to teach him how to analyze it. Frankly, I was surprised. With such extensive experience and a high-level role, I didn’t expect lateral movement analysis to be a pain point for him.
This got me thinking: while most professionals understand what lateral movement is, identifying it during an investigation remains challenging. Lateral movement analysis requires a deep understanding of logs, artifacts, and various attack vectors, which can seem daunting even for seasoned Incident Response (IR) and Digital Forensics & Incident Response (DFIR) practitioners.
Inspired by these requests, I decided to simplify things with this article. Today, I'll discuss three tools that make lateral movement analysis much easier:
Chainsaw
Hayabusa
LogParser
If you're unfamiliar with these tools, I’ve already written detailed guides explaining how they work, which commands to use, and what to expect from them. You can check out the following posts:
Hayabusa: A Powerful Log Analysis Tool for Forensics and Threat Hunting
Chainsaw: Streamlining Log Analysis for Enhanced Security Insights
Microsoft’s Log Parser (Bonus File Included)
However, today we’ll focus solely on using these tools for lateral movement analysis.
-------------------------------------------------------------------------------------------------------------
What Is Lateral Movement?
I won’t delve too much into the basics since most of you already know what lateral movement is. If you don’t, I recommend checking out my article: Understanding Lateral Movement in Cybersecurity. It covers everything you need to know about manually analyzing lateral movement using the registry, Event IDs, and the filesystem. It’s a great foundation to build your skills before diving into automated tools.
-------------------------------------------------------------------------------------------------------------
Chainsaw: Simplifying Lateral Movement Analysis
Let’s dive into the first tool, Chainsaw, and see how it simplifies log analysis for lateral movement. I’ll demonstrate how to run a single command in PowerShell and let Chainsaw do the heavy lifting.
Command:
PS E:\Log Analysis Tools\chainsaw_all_platforms+rules+examples\chainsaw> .\chainsaw_x86_64-pc-windows-msvc.exe hunt -r rules\ "E:\Output for testing\logs 123\log123"
Chainsaw immediately starts hunting through the logs, isolating critical events that may indicate lateral movement. Below are some screenshots and insights from my analysis.
Screenshot 1: User Remote Access
Chainsaw identified that the user remotely accessed a system (XSPACE2197) using the IP 192.168.30.1. While this could be legitimate, you’d need to verify it by asking the user or checking the context. What’s impressive here is how easily Chainsaw pinpoints such activities, saving you time compared to manual log analysis.
Screenshot 2: RDP Logoff Events
Chainsaw highlights RDP logoff events. Although these don’t directly indicate lateral movement, they’re worth noting because attackers often move between systems via RDP and log off after completing their actions.
Screenshot 3: Potential RDP Attack
In this example, Chainsaw identified events showing successful RDP logins, session connections, and disconnections. Here’s what stood out:
The user is Jean-Luc, and the IP 192.168.30.1 suggests activity within a trusted network.
While these behaviors may seem normal, it’s crucial to confirm whether this activity aligns with the user’s routine. Chainsaw’s ability to filter relevant data means you don’t need to sift through mountains of logs manually. It automates the heavy lifting, allowing you to focus on deeper investigation and validation.
-------------------------------------------------------------------------------------------------------------
A Word of Caution
While tools like Chainsaw automate much of the analysis, manual log analysis remains an essential skill for any cybersecurity professional. Automated tools like Magnet Axiom and FTK are great, but understanding the underlying artifacts (e.g., $J, $MFT) and how to analyze them manually is what truly sets you apart as a forensic investigator.
-------------------------------------------------------------------------------------------------------------
Hayabusa: Lateral Movement Analysis Made Easy
Hayabusa has quickly become one of the most reliable tools in my log analysis arsenal. It’s versatile, efficient, and saves an incredible amount of time, especially when detecting lateral movement. Let’s dive into how you can leverage Hayabusa specifically for lateral movement detection.
Highlights of Hayabusa in Action
1. When you feed your logs to Hayabusa, it doesn’t just dump all events on you. Instead, it filters and categorizes them into a manageable set of results. For instance:
Input Logs: 12,776 events.
Filtered Suspicious Events: 1,338 events.
2. Hayabusa categorizes the events by their severity:
Critical
High
Medium
Low
Each category is color-coded, making it visually intuitive to spot critical issues at a glance.
Detecting Lateral Movement
Hayabusa flagged potential lateral movement activities. Here’s how it looked:
In the screenshots below, I’ve highlighted key areas you should focus on when looking for lateral movement indicators.
While there are other important artifacts, such as service installations, I’m providing a basic overview of what to watch for when analyzing logs for lateral movement. We started with 12,776 events; after running Hayabusa, we’re left with only 1,338 hits. If you take a look at the output in the command prompt, you might think, "This analysis isn't for me. How can I make this more manageable and effective?" Don’t worry—I’ve got you covered! There’s a streamlined way to analyze this data, and I’ll show you exactly how to make sense of it more efficiently. Simplifying the Analysis Process Analyzing these hits manually through the command prompt can quickly become cumbersome. So, let me share a personal method I use to simplify the process: First, I extract all the data from Hayabusa into a CSV file, using the following command: PS E:\Log Analysis Tools\hayabusa-2.17.0-win-x64> .\hayabusa-2.17.0-win-x64.exe csv-timeline -d "E:\Output for testing\logs 123\log123" -o output.csv Next, I use Timeline Explorer to open the CSV file. It’s a fantastic tool for navigating through the extracted timeline data and helps you easily pinpoint areas of interest. Let’s say I want to focus on a specific lateral movement indicator, like "PSExec Lateral Movement." Simply search for it in the CSV, and you’ll get the results—just like the screenshot below. Suppose I want to investigate "Logon (Network)" events. I can search for that in the CSV and quickly get all the details I need, including the type of logon, user, source computer, and source IP. The ease of this process cannot be overstated. Why Hayabusa Is My Tool of Choice If I had to choose one tool for log analysis, it would always be Hayabusa. Its seamless integration with timeline data and ability to quickly filter through vast amounts of log data makes it the best choice for me. Much like KAPE, Hayabusa has become an indispensable part of my inventory.
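If you prefer to stay in PowerShell, the CSV-plus-Timeline-Explorer workflow can also be scripted. Below is a minimal sketch for triaging the Hayabusa CSV from the command line. Note the column names (Timestamp, Computer, Level, RuleTitle, Details) and the severity values ("crit", "high") reflect recent Hayabusa CSV output, but they can change between versions — check your own CSV header before relying on this.

```powershell
# Sketch: quick triage of a Hayabusa CSV without opening Timeline Explorer.
# Column names and Level values are assumptions based on recent Hayabusa
# versions -- verify against your CSV header first.
$hits = Import-Csv .\output.csv

# Keep only high/critical findings whose rule title hints at lateral movement
$hits |
    Where-Object { $_.Level -in @('crit', 'high') -and
                   $_.RuleTitle -match 'Lateral|PsExec|RDP|Logon' } |
    Select-Object Timestamp, Computer, RuleTitle, Details |
    Sort-Object Timestamp |
    Format-Table -AutoSize
```

This is handy for a first pass on a remote box; for deeper pivoting, Timeline Explorer’s filtering and grouping are still more comfortable.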
I’ve been using it for the past three years, and it has come a long way, becoming more efficient and accurate with each iteration. A big thank you to Yamato Security Group in Japan for creating a tool that truly makes the work of forensic and IR professionals much easier. ------------------------------------------------------------------------------------------------------------- Log Parser: Building Precise Detection Queries Recently, I was reflecting on tools like Hayabusa and Chainsaw and wondered: "Is it possible to build custom queries with exceptions to make detections more precise?" While these tools are fantastic, they don’t offer much flexibility for creating or applying exceptions directly. That sparked a thought: why not explore Log Parser for this purpose? Log Parser uses a SQL-like language, which sets it apart from the other tools. It provides the flexibility to build highly tailored queries, including exceptions, which can significantly enhance detection precision. A Quick Note on Log Parser If you're unfamiliar with Log Parser, I’ve written a complete article detailing its features, commands, and use cases. The link is available at the top of this post—feel free to check it out! Objective: Detecting Lateral Movement For this experiment, I’ll focus on detecting lateral movement using logs. The primary log source will be security.evtx. There are already ready-made Log Parser commands attached to my earlier article, Microsoft’s Log Parser (Bonus File Included), but I felt the need to create new, customized ones. Let’s get started! Setup and Approach I’ve collected the logs from one endpoint into a single directory. We’ll primarily work with SQL queries to parse and analyze the data. Below are the commands I crafted as a starting point: (Worry not—at the end, I’ll attach a text file containing all the queries so you can modify them as per your needs.) 1.
Query to Detect RDP Logins (Event ID 4624) Purpose: RDP is frequently used for lateral movement by attackers to access remote systems. You can use Event ID 4624 (successful logon) to track RDP logins. Usage: This query will show when a user logs in via Remote Desktop, which can be indicative of lateral movement across machines. 2. Query to Detect Lateral Movement via SMB (Event ID 5140) Purpose: Lateral movement often involves SMB (Server Message Block) to access shared resources across systems. Event ID 5140 indicates that a network share was accessed. Monitoring this can help detect unauthorized lateral movement attempts. Expected Output: The query will show network share access by users, which could be an indicator of lateral movement. 3. Query to Detect Credential Dumping Tools (Event ID 4688) Purpose: Malicious actors often use tools such as Mimikatz or LaZagne to dump credentials during lateral movement. Monitoring process creation events (Event ID 4688) for signs of such tools is important. Expected Output: This query will help detect the execution of credential dumping tools used for lateral movement. 4. Query to Detect Abnormal WMI Activity (Event ID 5858) Purpose: WMI (Windows Management Instrumentation) is commonly used by attackers for lateral movement. Event ID 5858, found in the Microsoft-Windows-WMI-Activity/Operational log, records WMI query activity and errors. Monitoring it can help detect malicious use of WMI for lateral movement. Expected Output: This query will show WMI operations initiated by users, which could indicate lateral movement across systems. 5. Query to Detect Multiple Failed Logins (Event ID 4625) Purpose: When attackers attempt to move laterally, they often try brute-forcing credentials. Monitoring for multiple failed logon attempts can help detect these attempts. Expected Output: This query will show multiple failed logon attempts, which may indicate brute-force or credential stuffing attacks. 6.
Logon and Logoff Activity (Event IDs 4624 and 4634) Purpose: These queries track user logons and logoffs, filtering by remote logon types like RDP (Logon Type 10), network logons (Logon Type 3), and other types like unlocking workstations or batch jobs. The goal is to detect unauthorized access or lateral movement, including identifying unusual source IPs and workstations. 7. Process Creation (Event ID 4688): Purpose: This query identifies the creation of processes that may be indicative of malicious activity, such as cmd.exe, powershell.exe, wmic.exe, and net.exe. These processes are frequently used for lateral movement, privilege escalation, and other post-exploitation activities. Analyzing the associated command lines and parent processes can reveal suspicious actions. 8. Remote and Network Logons: Purpose: This query monitors network logons (such as those using SMB) and RDP logons (Logon Type 10). It helps track remote access, particularly focusing on user-driven activities and excluding system accounts. This is particularly useful for detecting lateral movement and unauthorized logins from unexpected locations. 9. Service Creation: Purpose: This query tracks suspicious service installations that could indicate malicious activity. For example, the creation of services associated with tools like PsExec, powershell.exe, and cmd.exe may point to attackers maintaining persistence or executing lateral movement within a network. While the rules mentioned earlier offer a great starting point for detecting suspicious behavior using Log Parser, you can further fine-tune these queries to create more customized detections. As you can see, it’s quite simple to adapt and refine these rules to better fit your needs. With the combination of Log Parser, process creation analysis, and remote logon detection, you have a powerful toolset to detect lateral movement in your environment. I've added the rules in a text file and attached it for you.
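To make the shape of these rules concrete, here is an illustrative sketch of what query #1 (RDP logons via Event ID 4624) can look like in Log Parser 2.2. This is not the author’s attached query — the log path is a placeholder, and the EXTRACT_TOKEN positions in the Strings field (5 = target account, 8 = logon type, 18 = source IP) can shift between Windows versions, so verify them against one of your own 4624 events first.

```powershell
# Illustrative Log Parser 2.2 query: successful RDP logons (4624, type 10).
# Token indices and the evtx path are assumptions -- validate before use.
.\LogParser.exe -i:EVT -o:CSV "
  SELECT TimeGenerated,
         EXTRACT_TOKEN(Strings, 5, '|')  AS TargetAccount,
         EXTRACT_TOKEN(Strings, 18, '|') AS SourceIP
  FROM 'E:\CollectedLogs\Security.evtx'
  WHERE EventID = 4624
    AND EXTRACT_TOKEN(Strings, 8, '|') = '10'" -stats:OFF
```

Adding an exception is then just another predicate, e.g. `AND EXTRACT_TOKEN(Strings, 18, '|') <> '192.168.30.1'` to suppress a known-good admin workstation — which is exactly the flexibility Chainsaw and Hayabusa lack.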
You may notice that some rules appear to be duplicates. If these aren’t relevant to your needs, feel free to leave them out or modify them to suit other detections. I’ve provided them just for your reference—please don’t get upset with me! LOL. Also, keep in mind that due to Windows updates, some rules might stop working after a few months. Make sure to double-check and test the rules before running them against your logs. ------------------------------------------------------------------------------------------------------------- Managing Logs from Multiple Workstations Now, let’s take this to the next level. Imagine you have 10 workstations, each generating its own security logs, and you collect these logs into separate folders. Instead of manually running the same command for each folder, which can be time-consuming and inefficient, I’ve got a solution that will make your life easier. Here’s how it works: The script I’ve developed will automatically scan multiple folders or subdirectories in a specified root path. It will identify the Security.evtx logs within each folder, apply your customized detection rules, and generate the output, all with minimal effort on your part. This method streamlines the process and significantly reduces the complexity of manually executing commands for each log file. A Simple, Effective Approach This approach ensures that you can scale your detection efforts across many workstations without being bogged down by repetitive tasks. It’s quick, efficient, and easy to implement—exactly what you need when working with large datasets or multiple machines. ------------------------------------------------------------------------------------------------------------- Get Involved! If you have any queries, suggestions, or additional detection methods you'd like to share, feel free to reach out! I’m always looking for ways to improve and collaborate. Your ideas could be featured in a future post, and you’ll be credited as the creator!
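For reference, the multi-folder workflow described earlier can be sketched in a few lines of PowerShell. This is a minimal sketch, not the author’s actual script — it assumes each workstation’s logs live under `<root>\<hostname>\Security.evtx` and that LogParser.exe and a query file (here called rules.sql, with a `%EVTX%` placeholder for the log path) sit next to the script.

```powershell
# Minimal sketch of the multi-folder workflow (assumed layout:
# $Root\<hostname>\Security.evtx; rules.sql uses %EVTX% as a placeholder).
$Root   = 'E:\CollectedLogs'
$OutDir = 'E:\LogParserOutput'
New-Item -ItemType Directory -Path $OutDir -Force | Out-Null

Get-ChildItem -Path $Root -Recurse -Filter 'Security.evtx' | ForEach-Object {
    $station = $_.Directory.Name    # folder name doubles as the hostname
    # Substitute this log's full path into the query template
    $query = (Get-Content .\rules.sql -Raw) -replace '%EVTX%', $_.FullName
    # One CSV of hits per workstation
    .\LogParser.exe -i:EVT -o:CSV $query -stats:OFF |
        Out-File (Join-Path $OutDir "$station.csv")
}
```

The same loop works for any of the queries above — only the template file changes.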
See you in the next one, where we’ll dive deeper into more exciting techniques and tools. Stay tuned! -------------------------------------------Dean-----------------------------------------------------
- SentinelOne (P9 - Settings): A Practical Guide / Practical Training
The Settings section in the SentinelOne Console is your central hub for configuration and management. Here's a detailed breakdown of its features with examples and practical insights: 1. Configuration The Configuration tab provides an overview of licenses and key settings. Licenses : See which features you have paid for, such as Remote Ops Forensic or Network Discovery . Other Settings : Adjust session timeouts, password expiration policies, and more. 2. Notifications As the name suggests, this feature allows you to set up alerts. Example: You can configure an email notification to be sent whenever a threat is detected or if someone uninstalls an agent. Customizable Events : Alerts for detection, policy violations, and endpoint changes. 3. Users Here, you can create and manage users with specific roles. Example: SOC Team Role : Restrict permissions to prevent them from uninstalling agents. IR Team Role : Allow broader control, such as agent uninstallation. 4. Integrations This section enables the setup of third-party integrations for SMTP , Syslog , or SSO (Single Sign-On) . Features : View and edit integrations. Add or delete integrations to streamline your workflow with external tools. 5. Policy Override This feature lets you temporarily override security policies for specific endpoints. Real-Life Scenario : A new testing agent triggered false positives, quarantining files (e.g., Excel files). The SOC team was overwhelmed by the alerts. Solution: The policy was switched to "Detect Only" mode, stopping file quarantine. SentinelOne support provided a policy override , resolving the issue without reverting the agent. 6. Accounts Manage accounts for different clients, ensuring flexibility in a multi-client environment. 7. Sites Create and organize Sites within your account hierarchy for better management. 8. Locations Dynamic Policy Application adjusts protection based on network location. 
Example Features : Stricter policies on untrusted networks (e.g., public Wi-Fi). Define trusted networks by IP ranges, DNS servers, or gateway IPs. Pro Tip: SentinelOne’s flexibility in settings allows organizations to adapt quickly to unique challenges, such as managing alerts, integrating external tools, or handling network-based policies. The Policy Override feature, in particular, can be a lifesaver during unexpected situations like false positives. ------------------------------------------------------------------------------------------------------------- Wrapping Up SentinelOne: Transitioning to the Newer Console That’s a wrap for exploring SentinelOne’s older console! As a heads-up, SentinelOne has rolled out a newer console with updated features and a refreshed UI. While the newer version offers more functionalities, it might feel slightly more complex initially. My Advice: If you’re just starting out with SentinelOne: Begin with the older console : It’s simpler and provides a solid foundation. Transition to the newer console once you’re comfortable. Stay Connected: Thanks for sticking around! If you found this guide helpful and want to stay updated: Bookmark this website for easy access to more articles. Sign up for notifications on my website to get updates on the latest guides, tips, and tutorials. More insights on SentinelOne’s newer console and advanced features are coming in the next article—stay tuned! 🚀
- SentinelOne (P8- SentinelOne Automation) :Guide / Training to Forensic Collection, KAPE Integration, Running Script and Incident Response
SentinelOne’s DFIR capabilities are a standout feature, making it a must-have tool for forensic analysts. Let me walk you through how this tool becomes a forensic haven for DFIR professionals. ------------------------------------------------------------------------------------------------------------- Why SentinelOne Excels in DFIR Imagine you’ve identified an alert—perhaps a hack tool followed by lateral movement. After isolating the endpoint, the question arises: What’s next? Deep Analysis Options: Logs from SentinelOne’s console provide immediate insights. Use Deep Visibility to explore connections and processes. Advanced Forensics: Beyond log analysis, SentinelOne allows you to collect: Entire disk images. Crucial forensic artifacts like $MFT, $J, Prefetch, and more. This flexibility elevates it above other tools, providing unparalleled forensic depth. ------------------------------------------------------------------------------------------------------------- I won’t go into details about what the $MFT (Master File Table), $J, Prefetch, or Timeline are, or how to parse them. For an in-depth understanding, you can explore the dedicated articles available on my website under the "Courses" tab. Website Link:- https://www.cyberengage.org/courses-1 ------------------------------------------------------------------------------------------------------------- As mentioned earlier, I promised to explain how to collect and review logs before diving into in-depth forensic analysis. Let’s go over the process for gathering logs. Collect Logs Fetching Logs from the Console: Navigate to Endpoints in SentinelOne. Select the endpoint you want to investigate. Click Actions, then select Fetch Logs. Where to Find Logs: Wait 5–10 minutes for the logs to upload. Go to the Activity Tab to download the logs. What’s Inside the Logs? When you extract the ZIP file, you’ll find the following: Sentinel Agent Logs: Contain information about the endpoint's activities.
Platform Folder: EventViewer Folder: Includes key logs like: Application System Security Hardware Event Kernel Event Note: SentinelOne does not pull all logs—it focuses on these critical ones. Misc Folder: Contains a wealth of valuable information for Incident Response (IR) professionals. While SentinelOne does not fetch all logs via Event Viewer, the data within the Misc folder can compensate with its extensive details. ------------------------------------------------------------------------------------------------------------- That's all for logs. I won't delve into log analysis here. If you're interested, I have detailed articles on log analysis using different tools available under the Tool Hub section of my website. These resources will guide you in analyzing logs effectively. ------------------------------------------------------------------------------------------------------------- Now, let’s move on to automation. In the Automation section, you'll find three tabs: Remote Ops, Task Management, and Tasks. Let’s begin with Remote Ops to give you a clear understanding of how it works and its relation to the other two tabs. Remote Ops Creating a New Operation: Start by clicking the + Create New button. Selecting an Option: You'll be prompted to choose between uploading a custom script or creating a forensic profile. Let’s explore the Forensic Profile option first. Creating a Forensic Profile: You can collect various artifacts, such as registry data, event logs, and even memory dumps. The platform supports creating forensic profiles for Windows, Linux, and Mac endpoints, which is incredibly versatile. Once you’ve saved your forensic profile, you’ll see it listed as created and ready for use. Example outputs for forensic profiles: Windows: Select and gather registry hives, event logs, and memory images. Linux: Collect configuration files, log files, and process information. Mac: Retrieve system logs, kernel events, and user profiles.
Uploading Custom Scripts: If you already have specific scripts prepared (e.g., using tools like KAPE), you can upload them here for execution. I’ll provide more details about using custom scripts like KAPE later, but for now, let’s focus on running the forensic profile to demonstrate the output. Let’s dive into the steps for running a forensic collection, one of my favorite features of SentinelOne. Here’s how it works: Steps to Run the Forensic Collection Start the Collection: Go to the Sentinels Tab in the console. Select the endpoint you want to investigate. Click on Actions , then Search for Forensic Collection . Choose the forensic profile you created earlier and hit Run Collection . Monitor the Task: Head over to the Task Tab to track the status. Initially, it will show as Pending , but within 2–3 minutes, it will switch to In Progress . Once the collection is complete, you’ll see it listed under the Completed section. Download the Results: Click on Download Files to grab the collected data. Typically, the entire process takes just 10–15 minutes. That’s incredibly fast for a forensic workflow! What Do You Get in the Output? When the process completes, you’ll get a wide range of valuable forensic artifacts, including: $MFT (Master File Table) $J (Journal) UserAssist (recent applications used) Prefetch Files PowerShell History In short, you’re handed a complete forensic package— raw and parsed data that’s ready for analysis. ------------------------------------------------------------------------------------------------------------- Why I Love This Feature Here’s why I think SentinelOne excels in forensic collection: You get original, unparsed artifacts like $MFT and $J , which you can analyze deeply. It also provides parsed data in JSON format , which is great for users who prefer structured outputs. You’re not limited to Windows —this works seamlessly for Linux and macOS too. 
My Personal Take While I appreciate the JSON files SentinelOne generates, I’ll admit they’re not my favorite format to work with. JSON can be challenging to analyze directly, so I usually stick to my trusted tools like Timeline Explorer and KAPE for parsing and analysis. For instance, I’ll take the original $MFT file and parse it using KAPE, which makes the data much easier to work with. Similarly, for jumplists and shellbags, I prefer analyzing them manually after extraction. That said, this feature is a game-changer for anyone comfortable with JSON or text-based formats. If you’re like me and have your favorite tools, you can still extract the raw data and analyze it your way. ------------------------------------------------------------------------------------------------------------- Running a Script in SentinelOne: Step-by-Step Now that we’ve covered forensic profiles, let’s move on to running scripts on endpoints. For this example, we’ll use the PSRecon.ps1 script , which is freely available on GitHub: PSRecon on GitHub . Running scripts through SentinelOne is incredibly straightforward. Here’s how you can do it: Uploading the Script Upload the Script: Navigate to the Automation Tab . Click + Create New and select Upload New Script . Give your script a name (e.g., "PSRecon Script") and upload the script file. Confirm Upload: Once uploaded, you’ll see the script listed in your repository, ready to be executed. Running the Script on an Endpoint Initiate the Script Execution: Go to the Sentinels Tab and select the target endpoint. Click on Actions , then Search for Run Script . Choose the uploaded script from the list. Select Output Location: Specify where you want the output to be saved. I recommend always selecting Sentinel One Cloud for easier access and retrieval. Tracking and Retrieving Output Monitor the Task: Head to the Task Tab to check the status of the script. Initially, it will show as Pending , then move to In Progress . 
Retrieve the Output: Once completed, you’ll find the task listed under the Completed section. Simply download the output files. The Result And that’s it! Within minutes, you’ll have the output generated by your script. For PSRecon, this means detailed system information neatly organized for analysis. Why This is Amazing Running scripts like this through SentinelOne is incredibly efficient: No need for direct access to the endpoint. Simple, centralized execution. Automated output retrieval. It’s a game-changer for incident response and forensic investigations. Whether you’re running PSRecon or any other script, SentinelOne makes it a breeze. ------------------------------------------------------------------------------------------------------------- Using KAPE with SentinelOne: Step-by-Step Guide One of my favorite features of SentinelOne is how seamlessly it integrates with tools like KAPE (Kroll Artifact Parser and Extractor). Let’s dive into how to set this up, whether or not you have an SFTP server for artifact storage. Overview In SentinelOne, to run KAPE, you need: A script to invoke KAPE. KAPE itself, zipped with the required script. (Very Important!) Two Scenarios Without an SFTP Server: Artifacts are stored locally on the endpoint, and you’ll need the client to share the output manually. With an SFTP Server: Artifacts are uploaded directly to the SFTP server for easy access. Scenario 1: Without SFTP Server Script 1: run.ps1 This script invokes another script (NoSFTPserver.ps1) from the SentinelOne environment. Script 2: invoke.ps1 This script runs KAPE, specifying the collection and output paths. What It Does: Runs KAPE using the specified compound or target. Saves the collected artifacts as a .zip file in C:\output. Prepare KAPE Package Place invoke.ps1 inside the KAPE folder. Zip the entire KAPE folder, including the script. Upload the Scripts Go to SentinelOne → Automation → RemoteOps → Upload Script. Upload the run.ps1 script.
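Since the invoke.ps1 described above isn’t reproduced in text here, the following is a rough sketch of what it can contain. The target name (KapeTriage) and output path are my assumptions, not the author’s exact script — adjust them to your KAPE package. The `--tsource`, `--tdest`, `--target`, and `--zip` switches are standard KAPE command-line options.

```powershell
# Hedged sketch of invoke.ps1 -- paths and the KapeTriage target are
# assumptions; the zipped KAPE folder is extracted next to this script.
$kape = Join-Path $PSScriptRoot 'kape.exe'

# Collect triage artifacts from C: and zip them into C:\output
& $kape --tsource C: --tdest C:\output --target KapeTriage --zip triage

# Scenario 2 (with an SFTP server): append KAPE's SFTP upload switches,
# e.g.:
# & $kape --tsource C: --tdest C:\output --target KapeTriage --zip triage `
#     --scs sftp.example.com --scp 22 --scu collector --scpw '<password>'
```

The commented lines preview the Scenario 2 variant discussed next; the server, user, and password values are placeholders.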
Run the Script Navigate to Sentinels → Endpoints. Select the target endpoint. Go to Actions → Run Script. Select the run.ps1 script and execute it. Artifact Retrieval: The artifacts will be saved in C:\output (on the client endpoint). Ask the client to share these files with you for analysis. Scenario 2: With SFTP Server If you have an SFTP server, you only need to modify the invoke.ps1 script to include the SFTP upload parameters. Modified invoke.ps1 Script Add the below parameters to the script: Additional Parameters: --scs [server]: SFTP server address. --scp 22: Port (default is 22 for SFTP). --scu [user]: Username. --scpw [pwd]: Password. ------------------------------------------------------------------------------------------------------------- Why This Is Great Ease of Use: Uploading and running scripts in SentinelOne is straightforward and efficient. Flexibility: Works for Windows, macOS, and Linux endpoints. Customizable: You can use or modify scripts as needed for your specific requirements. If you want to learn more about KAPE itself, including its detailed functionality, check out my articles under the Tool Hub tab on my website. https://www.cyberengage.org/services-9 ------------------------------------------------------------------------------------------------------------- Automation with SentinelOne: Streamlining Artifact Collection After a Malicious Alert Imagine this scenario: an attack is detected on a server protected by SentinelOne. With prepared automation, you don't waste a single moment. As soon as the malicious alert is triggered, SentinelOne automatically executes a script like PSRecon or KAPE to collect forensic artifacts. The Power of Automation in Incident Response Key Benefits Time Efficiency: No manual intervention is required to initiate artifact collection. Complete Artifact Coverage: Immediate collection ensures no critical data is lost or overwritten. Faster Analysis: You get the artifacts right away for deeper investigation.
Customizable Workflows: You can configure scripts tailored to your investigation needs. Setting Up Automation in SentinelOne Step 1: Go to the Marketplace Navigate to the SentinelOne Marketplace in the console. Search for the Remote Ops Automation package. Click Install. Step 2: Configure the Automation Trigger Define when the automation should run. Example: Trigger the automation when an alert is marked as “True Positive.” Select the script (ID) to be executed. You can use the psrecon.ps1 (ID) script (or any custom script). Step 3: Specify the Output Location Output will be available in the Activity Tab of the SentinelOne console. Scripts can save the output locally (on the endpoint) or transfer it to an SFTP server, depending on your script configuration (like KAPE). Example: PSRecon Automation How It Works Malicious Alert Detected: SentinelOne flags a suspicious activity. (An analyst determines it is a true positive and marks the threat accordingly.) Automation Triggered: Your PSRecon script automatically runs on the affected endpoint. Artifacts Collected: All artifacts (like registry, event logs, and more) are gathered without delay. Output Location: Download the artifacts from the Activity Tab of the console or from the SFTP server. Results After automation: Artifacts are readily available: Download them directly from the Activity Tab. Faster Analysis: The immediate availability of artifacts speeds up the forensic process, letting you focus on understanding and mitigating the attack. ------------------------------------------------------------------------------------------------------------- Why SentinelOne is Amazing The seamless integration of automation with tools makes SentinelOne a game-changer. Instead of wasting time setting up manual artifact collection, everything happens instantly and efficiently. With SentinelOne, incident response is not just about detection; it's about taking immediate action to gather critical evidence and enabling rapid analysis.
See how awesome this is? Automation + Artifact Collection = Total Control! ------------------------------------------------------------------------------------------------------------- Stay tuned for the next article, where we’ll dive into the last article of this series—a truly exciting topic! Until then, keep learning and growing. See you soon! 😊 -------------------------------------------------------------------------------------------------------------






