
- OneDrive Forensics: Investigating Cloud Storage on Windows Systems
Microsoft OneDrive is one of the most widely used cloud storage services, thanks to its default integration in Windows and its enterprise adoption via Microsoft 365. Understanding OneDrive forensic artifacts is crucial for investigations involving data exfiltration, insider threats, or deleted cloud files. We will cover:
✅ How to locate and analyze OneDrive data on a Windows system
✅ Key forensic artifacts, including logs, databases, and registry entries
✅ How to determine OneDrive activity, authentication, and file synchronization history
✅ How OneDrive's new sync model affects forensic investigations
✅ Tracking cloud-only files & deleted data
✅ Using OneDrive's forensic artifacts to recover missing evidence
----------------------------------------------------------------------------------------------------------
1️⃣ Locating OneDrive Files on a Windows System
By default, synced OneDrive files are stored in: %UserProfile%\OneDrive
💡 Important: If a user changes the default storage location, the original OneDrive folder remains empty. The true OneDrive folder location can be found in the Windows registry.
Registry Key to Identify OneDrive Folder Location: NTUSER\Software\Microsoft\OneDrive\Accounts\Personal
This key contains:
UserFolder → The actual OneDrive sync folder location
cid/UserCid → A unique Microsoft Cloud ID
UserEmail → The email used for the Microsoft account
LastSignInTime → Last authentication timestamp (Unix epoch format)
💡 Why This Matters: If OneDrive is enabled, this registry key must exist. Investigators can track user activity even if OneDrive files have been moved or deleted.
----------------------------------------------------------------------------------------------------------
2️⃣ Analyzing OneDrive File Metadata & Sync Database
OneDrive stores metadata and sync information in: %UserProfile%\AppData\Local\Microsoft\OneDrive\settings
This folder contains key artifacts, including:
📌 SyncEngineDatabase.db (Main OneDrive Database)
Tracks both local and cloud-only files
Lists file names, folder structure, and metadata
Provides timestamps for file sync operations
💡 Why This Matters: Even cloud-only files (never stored locally) are recorded here. Investigators can track deleted or moved files that no longer exist on the device.
----------------------------------------------------------------------------------------------------------
3️⃣ OneDrive Logs: Tracking Uploads, Downloads, & File Changes
OneDrive keeps detailed logs of file sync activities in: %UserProfile%\AppData\Local\Microsoft\OneDrive\logs
These logs store up to 30 days of data and record:
✅ File uploads & downloads
✅ File renames & deletions
✅ Shared file access events
💡 Forensic Insight: Log files can reveal file activity even if the user deleted local copies. Timestamps in .odl logs can correlate file transfers with other system activity.
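Before moving on to the Business-side artifacts, here is a quick way to pull the section 1️⃣ registry values on a live system. This is a minimal sketch of my own, not an official OneDrive or Microsoft API: it assumes Python 3 running as the logged-on user (so the NTUSER hive is mounted as HKEY_CURRENT_USER), the value names follow the list above, and the epoch conversion assumes LastSignInTime is stored in seconds. For an offline image, load the user's NTUSER.DAT into a hive viewer or a registry-parsing library and read the same value names.

import winreg
from datetime import datetime, timezone

KEY = r"Software\Microsoft\OneDrive\Accounts\Personal"  # NTUSER\Software\... on an offline hive

def read_value(key, *names):
    """Return the first value that exists under any of the given names, else None."""
    for name in names:
        try:
            return winreg.QueryValueEx(key, name)[0]
        except FileNotFoundError:
            continue
    return None

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY) as k:
    user_folder = read_value(k, "UserFolder")       # actual sync folder location
    user_cid    = read_value(k, "cid", "UserCid")   # Microsoft Cloud ID (value name varies)
    user_email  = read_value(k, "UserEmail")        # account email
    last_signin = read_value(k, "LastSignInTime")   # Unix epoch (assumed to be seconds)

print("Sync folder:", user_folder)
print("UserCid:    ", user_cid)
print("UserEmail:  ", user_email)
if last_signin is not None:
    print("LastSignIn: ", datetime.fromtimestamp(int(last_signin), tz=timezone.utc))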
----------------------------------------------------------------------------------------------------------
4️⃣ OneDrive for Business: Additional Registry Artifacts
Users with OneDrive for Business (Microsoft 365) will have a separate registry key: NTUSER\Software\Microsoft\OneDrive\Accounts\Business1
This key includes:
UserFolder → Location of the root of OneDrive local file storage
UserEmail → Email tied to the Microsoft cloud account
LastSignInTime → Date and time of last authentication (Unix epoch time)
ClientFirstSignInTimestamp → Time of first authentication of the account (Unix epoch time)
SPOResourceID → SharePoint URL for the OneDrive instance
💡 Why This Matters: Business OneDrive accounts store work-related data, a key forensic focus. The SPOResourceID can link OneDrive for Business files to a SharePoint instance.
----------------------------------------------------------------------------------------------------------
5️⃣ Investigating Shared Files & Synced Data from Other Users
OneDrive supports file sharing and folder synchronization across multiple accounts. Shared folders are tracked under:
NTUSER\Software\Microsoft\OneDrive\Accounts\Personal\Tenants
NTUSER\Software\Microsoft\OneDrive\Accounts\Business1\Tenants
These keys log shared folders synced to OneDrive and track files shared via Microsoft Teams & SharePoint.
💡 Forensic Insight: Shared folders may not be stored in the default OneDrive folder. Investigators should check all Tenant folders to avoid missing critical evidence.
----------------------------------------------------------------------------------------------------------
6️⃣ SyncEngines Key: Advanced OneDrive Tracking
A final high-value artifact for OneDrive investigations is: NTUSER\Software\SyncEngines\Providers\OneDrive
It contains:
MountPoint → Local file storage location (useful for tracking shared folders)
UrlNamespace → Specifies whether the folder belongs to OneDrive, SharePoint, or Teams
LastModifiedTime → The last time the folder was updated
💡 Why This Matters: Identifies all folders being synced, even if they are not in the default OneDrive location. Correlates data across Microsoft cloud services (OneDrive, Teams, SharePoint).
----------------------------------------------------------------------------------------------------------
7️⃣ Tracking OneDrive Web Access (Cloud-Only Activity)
If a user accessed OneDrive through a web browser (instead of the local app), artifacts may appear in:
Browser history (Edge, Chrome, Firefox)
Windows Event Logs
Cloud access logs (if available from Microsoft 365)
OneDrive web access URLs look like this: https[:]/onedrive.live.com/?cid=310ff47e40c97767&id=310ff47e40c97767!145750
💡 Forensic Insight: The cid value in the URL matches the UserCid in the registry keys, helpful for tracking multiple accounts. The id (resource ID) parameter refers to specific files or folders accessed via the web client.
----------------------------------------------------------------------------------------------------------
🛑 Key Challenges in OneDrive Forensics
🚨 1. Cloud-Only Files May Not Be Stored Locally
Files accessed via "Files on Demand" may never be fully downloaded. Investigators must analyze metadata & sync logs to track cloud-only data.
🚨 2. Remote Deletions Can Hide Evidence
Files deleted in OneDrive are removed across all synced devices. Investigators may need Volume Shadow Copies or Microsoft 365 logs to recover data.
🚨 3. Personal & Business OneDrive Accounts Can Be Mixed
Users often log into both accounts on the same system.
Check the registry keys to differentiate personal vs. business data.
----------------------------------------------------------------------------------------------------------
OneDrive as a Crucial Forensic Artifact
Microsoft OneDrive leaves behind substantial forensic evidence, even for files that no longer exist locally. We will explore more about OneDrive in the next article (Advanced OneDrive Forensics: Investigating Cloud-Only Files & Synchronization), so stay tuned! See you in the next one.
--------------------------------------------Dean-------------------------------------------------
- Mastering JLECmd for Windows Jump List Forensics
Windows Jump Lists are a goldmine for forensic investigators, offering detailed insights into file access, user activity, and application usage. To efficiently analyze these artifacts, JLECmd, developed by Eric Zimmerman, provides comprehensive parsing of Jump List data, ensuring no valuable evidence is overlooked.
-------------------------------------------------------------------------------------------------------------
📁 Understanding Jump Lists: AutomaticDestinations vs. CustomDestinations
Jump Lists are stored in a user's Recent folder, but there are two different types:
Automatic Jump Lists: stored in AutomaticDestinations; metadata includes MRU order, timestamps, LNK files, and file paths; forensic value: high (detailed tracking).
Custom Jump Lists: stored in CustomDestinations; metadata is limited to concatenated LNK files; forensic value: moderate (useful but lacks MRU order).
🔹 Automatic Jump Lists are system-generated for frequently used applications.
🔹 Custom Jump Lists are application-defined and may store favorites, pinned items, or recent actions.
Since Automatic Jump Lists contain far more forensic data, they are prioritized in most investigations.
-------------------------------------------------------------------------------------------------------------
🛠 How to Use JLECmd for Jump List Analysis
1️⃣ Parsing a Single Jump List
JLECmd.exe -f "G:\C\Users\Akash's\AppData\Roaming\Microsoft\Windows\Recent\AutomaticDestinations\1c7a9be1b15a03ba.automaticDestinations-ms"
🚀 Use Case: If investigating whether Microsoft Word 2016 opened a sensitive file, JLECmd reveals when it was last accessed and from which system location.
2️⃣ Running JLECmd on an Entire User's Recent Folder
To extract ALL Jump Lists for a user, run:
JLECmd.exe -d G:\C\ --csv "E:\Output for testing\Website investigation" -q --csvf jlcmd.csv
🚀 Use Case: In a data theft investigation, sorting by last accessed timestamps may uncover unauthorized file access from network shares or external USB devices.
-------------------------------------------------------------------------------------------------------------
Single File Output Analysis: Key Points from JLECmd Output
AppID Identification: The top-left section of the output shows the AppID and its description. If no match is found, it may return Unknown AppID, requiring manual inference.
DestList Information (Metadata): Automatic Jump Lists include metadata like the expected vs. actual number of entries. Discrepancies between these values may indicate missing or uncorrelated entries. The DestList version changes across Windows versions, requiring updates to forensic tools.
DestList Entries (Timestamps & Interaction Tracking): Created time is linked to the Birth DROID timestamp (often before the actual file creation) and can be ignored. Last modified time is more relevant, as it tracks the last access of a file or URL (very important). Newer Jump Lists include an interaction count that records file openings.
Deep Parsing with JLECmd: By default, JLECmd limits displayed .lnk data. Using --fd enables full .lnk details (timestamps, paths, volume info). The --dumpTo option extracts shell items into individual .lnk files for deeper analysis.
Automatic vs. Custom Jump Lists: Automatic Jump Lists contain DestList data, timestamps, and interaction counts. Custom Jump Lists store fewer details and lack DestList information.
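Before we look at the multi-file output in detail: if Excel is not handy, the CSV produced by the -d run above can also be triaged with a short script. This is a minimal sketch of my own, not part of JLECmd; it assumes Python with pandas installed and uses the column names discussed in the next section (check the header row of your own CSV, since names can differ between JLECmd versions).

import pandas as pd

# Load the CSV produced by: JLECmd.exe -d G:\C\ --csv <outdir> -q --csvf jlcmd.csv
df = pd.read_csv(r"E:\Output for testing\Website investigation\jlcmd.csv")

# Keep the investigation-relevant columns if present (names may vary slightly by version)
wanted = ["AppIdDescription", "MRU", "LastModified", "Path", "InteractionCount",
          "TargetCreated", "TargetModified", "FileSize", "DriveType",
          "VolumeSerialNumber", "LocalPath"]
timeline = df[[c for c in wanted if c in df.columns]].copy()

# Build a simple "last opened" timeline, newest first
if "LastModified" in timeline.columns:
    timeline["LastModified"] = pd.to_datetime(timeline["LastModified"], errors="coerce")
    timeline = timeline.sort_values("LastModified", ascending=False)

timeline.to_csv("jumplist_timeline.csv", index=False)
print(timeline.head(20))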
-------------------------------------------------------------------------------------------------------------
Multiple File Output Analysis (recommendation: open the CSV in Excel, it is easier to analyze there)
Important columns to keep for the investigation: AppId, AppIdDescription, MRU, LastModified (also called Last Opened), Path, InteractionCount, TargetCreated, TargetModified, FileSize, DriveType, VolumeSerialNumber, LocalPath
Last Opened timestamp: Jump List metadata (when the file was last opened according to the Jump List)
Target Created/Modified: NTFS (file system) metadata (when the file was originally created/modified)
-------------------------------------------------------------------------------------------------------------
Extracting Detailed LNK Data with JLECmd
By default, JLECmd does not parse most .lnk details during single Jump List parsing. For example, an Automatic Jump List may include hundreds of .lnk files, which can be overwhelming.
How to Extract Full LNK Data
Use the --fd option in JLECmd to parse full shell item information, including:
Target timestamps
File size and attributes
Absolute path and volume details
Extra block information
Command: .\JLECmd.exe -f "G:\C\Users\Akash's\AppData\Roaming\Microsoft\Windows\Recent\AutomaticDestinations\1c7a9be1b15a03ba.automaticDestinations-ms" --fd | more
Due to the large amount of data, redirect the output to a text file or HTML for better readability.
Command: .\JLECmd.exe -f "G:\C\Users\Akash's\AppData\Roaming\Microsoft\Windows\Recent\AutomaticDestinations\fb3b0dbfee58fac8.automaticDestinations-ms" --fd --html "E:\Output for testing\Website investigation\out.html" -q
To extract all .lnk files for a particular Automatic Destinations file, use the --dumpTo option. This allows you to analyze them with other forensic tools.
Command: .\JLECmd.exe -f "G:\C\Users\Akash's\AppData\Roaming\Microsoft\Windows\Recent\AutomaticDestinations\fb3b0dbfee58fac8.automaticDestinations-ms" --dumpTo "E:\Output for testing\Website investigation"
Once you open the output folder you will see all the extracted .lnk files, allowing you to analyze them with any tool, such as LNK Tool.
-------------------------------------------------------------------------------------------------------------
The best alternative tool for analyzing Jump Lists and .lnk files, also created by Eric Zimmerman, is JumpList Explorer (JLE).
Why Use JumpList Explorer? Unlike JLECmd, which requires command-line parsing, JumpList Explorer provides a graphical interface that makes it easier to understand and analyze Jump List data. Clicking on any LNK entry shows its details in the bottom-right pane. If you need a GUI-based tool for easier .lnk and Jump List analysis, JumpList Explorer is the best option! 🚀
-------------------------------------------------------------------------------------------------------------
🚀 Quick Reference: Essential JLECmd Commands
JLECmd.exe -f [JumpListPath] → Parse a single Jump List
JLECmd.exe -d [RecentFolder] --csv/json/html [OutputDir] → Parse all Jump Lists for a user
JLECmd.exe -f [JumpListPath] --fd → Extract full LNK (shell item) data
JLECmd.exe -f [JumpListPath] --dumpTo [Folder] → Extract all shell items as individual LNK files
-------------------------------------------------------------------------------------------------------------
🚀 Get Started with JLECmd Today!
🔹 Download JLECmd as part of the Zimmerman Tools 🔹 Test it on a sample Jump List to see how much forensic evidence you can extract! Need help with a Jump List investigation? Let me know! I’m here to guide you through it. 🔍🚀 ----------------------------------------Dean----------------------------------------------
- Windows LNK Files: A Hidden Treasure for Forensic Investigators
When investigating digital forensics on a Windows system, LNK (shortcut) files serve as one of the most valuable sources of user activity. Even if a user never explicitly creates a shortcut, Windows does, automatically tracking files, folders, and devices accessed. These artifacts are incredibly useful for proving file access, tracking external devices, and even recovering traces of deleted files.
---------------------------------------------------------------------------------------------------------
What Are LNK Files and Why Do They Matter?
LNK files, or Windows shortcuts, are automatically created when a user opens, interacts with, or saves a file. Unlike regular files that contain user data, LNK files store metadata about the original file, including:
✅ Full file path – The original location of the accessed file
✅ Timestamps – When the file was first accessed, last accessed, and modified
✅ Volume information – Drive letter, network path, and even USB device details
✅ File extension and type – Identifies the kind of file opened
✅ MAC address (for network shares) – Provides forensic evidence of file access across shared drives
These characteristics make LNK files a goldmine for forensic analysts, especially in cases where users have deleted files, accessed removable media, or interacted with files stored on network shares.
---------------------------------------------------------------------------------------------------------
Where Are LNK Files Stored?
LNK files are stored in each user's "Recent" folder, automatically tracking recent file activity. Their locations differ slightly based on Windows versions:
📌 Windows 7, 8, 10, 11:
C:\Users\<username>\AppData\Roaming\Microsoft\Windows\Recent\
C:\Users\<username>\AppData\Roaming\Microsoft\Office\Recent\ (Office-specific shortcuts)
📌 Windows XP:
C:\Documents and Settings\<username>\Recent\
These Recent folders store shortcuts for non-executable files, including documents, images, and media files. However, command-line access does not generate LNK files, making them primarily useful for GUI-based user actions.
---------------------------------------------------------------------------------------------------------
How LNK Files Help in Forensics
1️⃣ Proving File Access (Even if Deleted)
One of the biggest forensic advantages of LNK files is that they persist even after the original file is deleted.
🚀 Example: A user opens "akash.docx" from a USB drive. Even if the user later deletes "akash.docx", the LNK file remains in the Recent folder. The LNK file contains USB details, proving that the file was accessed from external storage.
🔍 Forensic Insight: Investigators can reconstruct deleted file activity using LNK metadata.
---------------------------------------------------------------------------------------------------------
2️⃣ Tracking USB Devices and External Drives
When a file is opened from a USB drive or external storage, Windows not only creates an LNK file for the file but also for the parent folder on the device.
🚀 Example: A user accesses a folder from a USB drive (D:\data). An LNK file is generated for the folder itself. The metadata includes the USB device serial number and volume label.
🔍 Forensic Insight: This allows forensic analysts to determine which USB devices were used on a system, even if they are no longer connected.
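For quick triage on a live system, you can simply list what is sitting in the Recent folder before doing any deep parsing. This is a minimal sketch of my own, assuming Python 3; it only reads file-system timestamps of the shortcut files themselves (creation roughly corresponds to first access of the target, modification to the most recent access), and full parsing of the embedded metadata is covered in the LECmd article below.

import os
from pathlib import Path
from datetime import datetime, timezone

# Per-user Recent folder on Windows 7/8/10/11 (live system)
recent = Path(os.environ["APPDATA"]) / "Microsoft" / "Windows" / "Recent"

for lnk in sorted(recent.glob("*.lnk")):
    st = lnk.stat()
    created = datetime.fromtimestamp(st.st_ctime, tz=timezone.utc)   # ~ first access of the target
    modified = datetime.fromtimestamp(st.st_mtime, tz=timezone.utc)  # ~ most recent access
    print(f"{lnk.name:60} created={created:%Y-%m-%d %H:%M:%S} modified={modified:%Y-%m-%d %H:%M:%S}")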
---------------------------------------------------------------------------------------------------------
3️⃣ Understanding User Navigation and Folder Access
LNK files also provide information on folders frequently accessed by the user.
🚀 Example: A user accesses a folder containing illegal files (C:\open\tools). Even if no specific file is opened, an LNK file for the folder itself is created.
🔍 Forensic Insight: This helps track which folders a user frequently interacts with, even if no direct file evidence remains.
---------------------------------------------------------------------------------------------------------
Changes to LNK Files in Windows 10 & 11
Microsoft has made several updates to LNK file behavior, improving forensic usefulness:
1️⃣ LNK files are now created when a file is first saved (not just when opened). Before Windows 10, LNK files were only created after a file was opened. Now, saving a file using "Save As" generates an LNK file immediately.
2️⃣ More detailed folder tracking. If a user creates a new folder, LNK files are also created for its parent and grandparent folders.
3️⃣ LNK file storage limits have changed. Windows historically stored only 149 LNK files per user. In newer Windows 10/11 versions, 300+ LNK files can be found via forensic tools.
4️⃣ File extensions may now be included in LNK names. Example: secret.pdf.lnk (helpful for quick identification).
5️⃣ Multiple LNK files for the same folder can now exist. Instead of just tracking the first and last time a folder was accessed, Windows now creates new LNK files for repeated access, giving more timestamps to analyze.
🔍 Forensic Insight: These updates provide more data points for forensic analysts, making LNK files even more powerful for investigations.
---------------------------------------------------------------------------------------------------------
Best Practices for Investigating LNK Files
✅ Check unallocated space for deleted LNK files. Older LNK files may still be recoverable from disk slack space.
✅ Correlate LNK timestamps with system logs. Cross-check with Windows Event Logs and Prefetch data.
✅ Use forensic tools for deeper analysis. Tools like Eric Zimmerman's LECmd can extract and parse LNK metadata efficiently.
✅ Look for USB drive metadata in LNK files. This can help prove external storage use in data theft or insider threat cases.
✅ Use command-line tools to bypass GUI limitations. The Windows GUI hides extra LNK files, but they can still be accessed via the command line or forensic software.
---------------------------------------------------------------------------------------------------------
Conclusion
LNK files are one of the oldest yet most powerful forensic artifacts in Windows investigations. Whether you're tracking accessed files, proving USB activity, or reconstructing deleted evidence, these automatically generated shortcuts hold a wealth of forensic intelligence. With Windows 10 and 11 introducing new behaviors, investigators now have even more data points to work with, if they know where to look. A step-by-step guide to parsing LNK files follows in the next article.
--------------------------------------------------Dean-----------------------------------------------------
- LECmd: A Powerful Tool for Investigating LNK Files
This article was updated on 22 January 2025. When investigating user activity on a Windows system, LNK (shortcut) files serve as a vital source of evidence. However, analyzing them manually or with incomplete tools can result in missing key data. Enter LECmd (LNK Explorer Command Line Edition), a tool developed by Eric Zimmerman to fully decode and extract every bit of information from LNK files.
-----------------------------------------------------------------------------------------------------
Why LECmd? A Tool That Doesn't Hide Data
Many forensic tools process LNK files, but not all of them extract every available piece of metadata. Some tools selectively drop or ignore certain data structures without notifying the examiner. LECmd was created to ensure that all metadata from an LNK file is preserved and presented to the investigator, even if certain data structures appear irrelevant in most cases.
-----------------------------------------------------------------------------------------------------
What Does LECmd Extract from an LNK File?
LNK files contain a wealth of metadata about accessed files and folders. LECmd extracts and organizes this information into several key sections:
1️⃣ Header Information
The header contains essential details about the file, including:
✅ File Timestamps – Creation, modification, and last access times.
✅ File Attributes & Flags – File properties like hidden, system, or read-only status.
✅ File Path & Size – The original location of the file and its size.
✅ Working Directory & Relative Path – The folder the file was stored in and its location relative to system paths.
🔍 Forensic Insight: The creation time of an LNK file represents the first time a user accessed that file, while the modification time indicates the last time the file was opened.
On a live system (the same approach works for collected LNK files):
-----------------------------------------------------------------------------------------------------
2️⃣ Link Information
The Link Information section reveals how the file was accessed:
✅ Drive Type – Whether the file was on a local drive, removable USB, or network share.
✅ Volume Serial Number – Unique identifier for the storage device.
✅ UNC Path (if applicable) – Network location if the file was accessed via a shared drive.
🔍 Forensic Insight: If an LNK file points to a USB drive, forensic analysts can match the volume serial number with known USB devices to track data transfers.
-----------------------------------------------------------------------------------------------------
3️⃣ Target ID Information
This section contains shell items similar to those found in Windows ShellBags, including:
✅ Master File Table (MFT) Information – Links to the file's original NTFS metadata.
✅ Timestamps for Folders & Files – Indicates when each part of the file path was created and accessed.
🔍 Forensic Insight: The absolute path in this section can reconstruct the full location of a file or folder, even if it was moved.
---------------------------------------------------------------------------------------------------------
4️⃣ Extra Blocks Information
LNK files often contain additional undocumented metadata, stored in Extra Blocks. This data includes:
✅ Console Properties – Information about terminal activity.
✅ Property Store Structures – Additional file metadata, sometimes including user interaction details.
🔍 Forensic Insight: Some Extra Blocks store remnants of file paths or folder interactions, even if they are no longer in use.
---------------------------------------------------------------------------------------------------------
How to Use LECmd for Large-Scale Investigations
🔍 Parsing a Single LNK File
To extract all metadata from a single LNK file, use:
LECmd.exe -f "C:\Users\Akash's\AppData\Roaming\Microsoft\Windows\Recent\Microsoft Edge.lnk"
🔍 Forensic Insight: This command provides the most detailed breakdown of a single LNK file, useful when analyzing a specific file of interest.
---------------------------------------------------------------------------------------------------------
🔍 Parsing an Entire Directory of LNK Files
For bulk analysis, use the -d option to parse all LNK files in a folder:
LECmd.exe -d G:\G\Users --csv "E:\Output for testing" --csvf lnkfile.csv
🔍 Forensic Insight: This is the best method for quickly reviewing user activity, as it produces a CSV report containing timestamps, file paths, and device details.
---------------------------------------------------------------------------------------------------------
Using Timestamps to Uncover User Activity
LNK files contain two sets of timestamps:
1️⃣ Source Timestamps (LNK file timestamps) – Indicate when the shortcut was created or last updated (i.e., when the user first and last opened the file).
2️⃣ Target Timestamps (File metadata timestamps) – Indicate the original file's creation, modification, and last accessed times.
🔍 Forensic Insight: By comparing source and target timestamps, investigators can determine if a file was copied or moved.
🚀 Example: A file is copied from a USB drive (D:) to the local system (C:). The target creation timestamp on the C: drive will be newer than the target modification timestamp from the D: drive. This proves the file was copied from the USB drive rather than created locally.
---------------------------------------------------------------------------------------------------------
Example: Tracking USB File Transfers with LECmd
Imagine an employee is suspected of stealing company documents using a USB drive. Investigators could use LECmd to analyze their LNK files and reveal when and where files were accessed.
🚀 Case Study Walkthrough
1️⃣ Run LECmd on the suspect's user profile Recent folder.
2️⃣ Review the CSV output and look for references to the USB drive (e.g., D: or E:). The Target ID section may include a Volume Serial Number linked to a specific USB. Target Creation Timestamps may indicate when files were copied to the device.
3️⃣ Confirm that sensitive files were accessed just before removal of the USB. If LNK timestamps align with the suspect's departure time, the case for data theft strengthens.
---------------------------------------------------------------------------------------------------------
Conclusion: Why LECmd is a Must-Have Forensic Tool
LECmd provides deep insight into user activity on a Windows system. By fully decoding every piece of metadata from LNK files, investigators can:
✅ Track accessed files and folders
✅ Identify USB devices and removable media use
✅ Prove file movement and copying activity
✅ Analyze timestamps to reconstruct user actions
Whether conducting an insider threat investigation, data exfiltration case, or simply tracking user activity, LECmd is an essential tool for forensic professionals.
--------------------------------------------Dean------------------------------------------------------
Example of Output
Source File: G:\G\Users\Jean-Luc\Desktop\Microsoft Edge.lnk
Source Created: 24-03-2023 17:20 | Source Modified: 24-03-2023 17:22 | Source Accessed: 21-01-2025 19:41
Target Created: 11-04-2022 18:47 | Target Modified: 21-03-2023 18:47 | Target Accessed: 24-03-2023 17:22
Drive Type: Fixed storage media (Hard drive)
Target ID Absolute Path: This PC\C:\@shell32.dll,-21817\Microsoft\Edge\Application\msedge.exe
File Size: 4055968
Working Directory: C:\Program Files (x86)\Microsoft\Edge\Application
Volume Serial Number: 60562114
Local Path: C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe
Target MFT Entry Number: 0x18207
Machine ID: xspace2197
Machine MAC Address: 44:e5:17:ed:50:3e
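To apply the copy/move logic from the timestamp section above across an entire lnkfile.csv export, a short script can flag candidates automatically. This is a minimal sketch of my own, not part of LECmd: it assumes Python with pandas, and the column names mirror the fields shown in the example output (check your CSV header first, since naming and date format can vary between LECmd versions).

import pandas as pd

# Load the CSV produced by: LECmd.exe -d G:\G\Users --csv "E:\Output for testing" --csvf lnkfile.csv
df = pd.read_csv(r"E:\Output for testing\lnkfile.csv")

# Normalize header spelling ("Target Created" vs "TargetCreated") so the rest works either way
df.columns = [c.replace(" ", "") for c in df.columns]

for col in ("SourceCreated", "SourceModified", "TargetCreated", "TargetModified"):
    if col in df.columns:
        # dayfirst=True assumes DD-MM-YYYY as in the example output above
        df[col] = pd.to_datetime(df[col], errors="coerce", dayfirst=True)

# A target "created" after it was last "modified" is the classic copied-file indicator
if {"TargetCreated", "TargetModified"} <= set(df.columns):
    copied = df[df["TargetCreated"] > df["TargetModified"]]
    cols = [c for c in ("SourceFile", "LocalPath", "TargetCreated", "TargetModified",
                        "VolumeSerialNumber", "DriveType") if c in copied.columns]
    print(copied[cols].to_string(index=False))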
- Forensic Challenges in Cloud Storage Investigations
With businesses and individuals rapidly shifting their data to the cloud , digital forensic investigations have become more complex. Traditional endpoint analysis is no longer sufficient, as critical evidence often resides on third-party servers . The widespread adoption of cloud storage applications like OneDrive, Google Drive, Dropbox, and Box has introduced n ew security risks and forensic challenges . Investigators must now determine: ✅ What cloud applications are installed on a system ✅ Which user accounts were used for authentication ✅ What files exist locally and in the cloud ✅ How files have been uploaded, downloaded, or shared ✅ Whether deleted files can be recovered Why Cloud Storage Forensics Is Important Cloud storage services are often under-audited in enterprise environments , making them a prime target for: 🚨 Insider threats – Employees using personal accounts to exfiltrate company data 🚨 Cybercriminals – Hackers leveraging cloud storage for data theft or malware distribution🚨 Accidental data leaks – Sensitive files mistakenly shared or synced to personal devices ------------------------------------------------------------------------------------------------------------ Key Forensic Data from Cloud Storage Applications Cloud storage applications leave behind substantial forensic evidence on a user’s system. Below are the most critical artifacts to analyze: 1️⃣ Identifying Installed Cloud Applications & User Accounts The first step in an investigation is determining: Which cloud storage applications are installed Which user accounts are logged in Where cloud files are stored locally 💡 Why This Matters: Many organizations fail to monitor unauthorized cloud apps , allowing employees or attackers to store data outside of approved platforms . 2️⃣ Files Available Locally & in the Cloud Cloud storage services maintain databases that track: ✅ Files stored locally ✅ Files available only in the cloud ✅ Deleted files (sometimes recoverable) ✅ Files shared with the user from other accounts 💡 Why This Matters: These records can reveal data exfiltration attempts , hidden documents , or deleted evidence that might not be visible through normal file system analysis. 3️⃣ File Metadata (Timestamps, Hashes, & Paths) Most cloud storage applications track: ✅ File creation & modification times ✅ File size ✅ Full path location ✅ Cryptographic hashes (MD5, SHA1, or SHA256) 💡 Why This Matters: Tracking file metadata helps investigators identify when files were created, modified, or moved , even if they no longer exist on the local system. 4️⃣ File Transfer Logs (Uploads, Downloads & Synchronization) Cloud storage services track how files are transferred between devices . These logs help answer questions like: Was a file uploaded from this system to the cloud? Was a cloud-only file downloaded to this device? Was a file moved between different cloud folders? 💡 Why This Matters: This information is crucial in data breach investigations or insider threat cases to track file movements. 5️⃣ User Activity & Account Logs Some business-grade c loud storage applications provide detailed activity logs , including: ✅ When users log in & from what IP addresses ✅ What files they access, edit, or delete ✅ Which files were shared externally 💡 Why This Matters: This can reveal unauthorized access, suspicious downloads, or attempts to erase evidence . ------------------------------------------------------------------------------------------------------------ Forensic Challenges in Cloud Storage Investigations 🔴 1. 
Limited Local Evidence Many cloud files exist only in the cloud and are not stored locally unless synced . Investigators must rely on: Cloud provider logs (if accessible) Database files that track cloud-stored files "Files on Demand" cache (if available) 🔴 2. Data Commingling Between Personal & Business Accounts U sers often log into both personal and business cloud accounts on the same device, leading to data mixing . This complicates: Determining which account uploaded a file Investigating unauthorized transfers between accounts 🔴 3. Selective Sync & "Files on Demand" Features Newer cloud storage services do not automatically sync all files to a device . Instead, they provide on-demand access , meaning: The file is only downloaded when accessed Some files may never have existed locally Investigators must determine whether a file was ever present on the system or only stored in the cloud. 🔴 4. Remote Deletion of Evidence Cloud-stored files can be deleted remotely , meaning: The file is no longer accessible from the local system Investigators may need to request logs or backups from the cloud provider 🔴 5. Encryption & Secure Cloud Storage Some cloud storage solutions offer: ✅ End-to-end encryption (making file contents inaccessible to forensic tools) ✅ Zero-knowledge storage (where even the provider cannot access files) In such cases, investigators may need user credentials or court-ordered access to provider logs . ------------------------------------------------------------------------------------------------------------ Upcoming Cloud Storage Forensic Series In our next articles, we will deep-dive into forensic investigations for the most popular cloud storage platforms: 🔹 OneDrive Forensics 🔹 Google Drive Forensics 🔹 Dropbox Forensics 🔹 Box Cloud Storage Forensics ------------------------------------------------------------------------------------------------------------ Final Thoughts: Why Cloud Storage Forensics Matters Cloud storage has become a critical blind spot in forensic investigations . As more businesses and individuals move data to OneDrive, Google Drive, Dropbox, and Box , forensic professionals must adapt their techniques to: ✅ Track cloud-stored files, even if they are not locally available ✅ Investigate deleted cloud files & remote evidence ✅ Identify unauthorized cloud activity & data exfiltration attempts 🚀 Stay tuned for our next deep-dive article on OneDrive forensics! 🔍 ----------------------------------------------Dean----------------------------------------------------
- Handling Incident Response: A Guide with Velociraptor and KAPE
Over the past three years, I've created numerous articles on forensic tools and incident response (IR). This time, I want to take a step back and focus on how to handle an incident investigation. This guide will specifically highlight incident response workflows using Velociraptor and KAPE. If you're looking for the forensic part of investigations, check out my other articles, or let me know, and I'll create one soon! For those unfamiliar, I've written a series of articles diving deep into Velociraptor, from configuration to advanced usage. You can find those articles here:
https://www.cyberengage.org/post/exploring-velociraptor-a-versatile-tool-for-incident-response-and-digital-forensics
https://www.cyberengage.org/post/setting-up-velociraptor-for-forensic-analysis-in-a-home-lab
https://www.cyberengage.org/post/navigating-velociraptor-a-step-by-step-guide
Now, let's dive into incident response without overcomplicating things.
-------------------------------------------------------------------------------------------------------------
Why Velociraptor's Labels Matter
One of Velociraptor's standout features is Labels, which play a critical role in investigations. They help you categorize, organize, and quickly identify relevant endpoints. While I can't show you live client data due to privacy reasons, I'll provide detailed examples to help you understand the process. Remember, this article assumes you've read my previous Velociraptor articles; they provide the foundational knowledge you'll need here.
-------------------------------------------------------------------------------------------------------------
Scenario: Investigating a Phishing Attack
Imagine you're responding to an incident involving a client with 100 endpoints.
Attack Overview: The client fell victim to a phishing email. An attachment in the email was opened, initiating the attack. The client has isolated the environment by cutting off external connectivity.
Their key questions (before any forensic deep dive) are:
How many users opened the attachment?
What files were created, and where?
Which endpoints were infected?
The client doesn't have EDR or SIEM tools. Yes, it's not ideal, but in the real world, this happens more often than you'd think.
-------------------------------------------------------------------------------------------------------------
Deploying Velociraptor Agents
First, configure your Velociraptor server (refer to my previous articles for detailed steps). Provide the client with the necessary token and instructions to roll out Velociraptor using GPO (Group Policy Objects).
Key Points: Velociraptor isn't a typical agent-based EDR. It doesn't modify the system drastically, making it less intrusive and easier to handle. Once the client deploys the agents across endpoints, all devices will begin appearing in your Velociraptor console.
-------------------------------------------------------------------------------------------------------------
Automating Labels for Large Environments
Let's say the client has rolled out Velociraptor to 100 endpoints. Manually assigning labels to each endpoint is impractical. Instead, you can automate this process:
Click the eye icon in Velociraptor.
Search for Server.Monitor.Autolabeling.Clients.
Launch the rule.
With this rule enabled, Velociraptor will automatically assign labels to new clients as they connect, streamlining your workflow.
-------------------------------------------------------------------------------------------------------------
Investigating the Malicious Attachment
The client informs you of the attachment name (e.g., 123.ps1). Your goal is to determine:
How many endpoints have this file.
The file's location on each endpoint.
Here's how to proceed:
Step 1: Create a Hunt
Navigate to the Hunts section and use the FileFinder artifact to configure the hunt.
Configuration Example: If you're looking for 123.ps1, set the search parameter as shown below.
Step 2: Launch the Hunt
Once launched, Velociraptor will search for the specified file across all endpoints. You can view the results under the Notebook tab.
Output:
Improving Readability of Results
By default, the output may not be user-friendly, especially if it contains 20-30 artifacts. To make the data more readable:
Click the edit icon in the Notebook and paste the following query:
SELECT Fqdn, OSPath, BTime AS CreatedTime, MTime AS ModifiedTime FROM source(artifact="Windows.Search.FileFinder") LIMIT 50
Run the query, and you'll see a neatly formatted output. See the screenshot below; the view is much cleaner and easier to read.
-------------------------------------------------------------------------------------------------------------
Labeling Infected Endpoints
Let's say you identify 20 infected endpoints out of 100. To make tracking easier, label these endpoints as Phishing. Here's the query to do so:
SELECT Fqdn, OSPath, BTime AS CreatedTime, MTime AS ModifiedTime, label(client_id=ClientId, labels=['Phishing'], op='set') AS SetLabel FROM source(artifact="Windows.Search.FileFinder")
This automatically assigns the Phishing label to all 20 infected devices, simplifying your investigation.
Example from my case: before running the labeling query vs. after running the notebook.
Once you have identified the 20 endpoints that opened the file or downloaded the attachment, the next step is to determine which users these endpoints are associated with for further investigation. There are multiple ways to accomplish this: asking the client which user each endpoint belongs to, or using Velociraptor's live query method.
Running a live query: create a hunt scoped only to endpoints carrying the Phishing label (or whatever label you used previously, e.g., CrowdStrike). This is where labels become really useful: instead of running a hunt against all endpoints, we can run it only on the labelled ones.
Select the hunt you want to run, in this case Windows.sys.allusers.
Launch the hunt and you will see which user each endpoint belongs to (this user information will be useful in our next hunt).
Once you run this hunt, use a notebook to extract a list of all affected users and their respective laptops. This initial step helps you identify around 20 laptops belonging to users who potentially acted as "patient zero."
-------------------------------------------------------------------------------------------------------------
Tracing Lateral Movement
Next, we investigate whether these 20 users logged into other laptops beyond their assigned ones. To do this:
Launch a hunt using Windows Event Logs: RDP Authentication.
While configuring the hunt, use a regular expression (regex) to include the usernames of the 20 suspected users.
The example above (screenshot) is for a single user. To add multiple users, use a regex like the one below:
.*(Dean\.amberose|hana\.wealth|Chad\.seen|jaye\.Ward).*
This pattern helps track these users across multiple endpoints.
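If you have a long list of suspected users, building this regex by hand is error-prone. Here is a minimal Python helper of my own (not part of Velociraptor) that escapes each name and assembles the same pattern shown above; the sample log line in the sanity check is made up for illustration.

import re

# Usernames identified in the previous hunt (example values from this article)
users = ["Dean.amberose", "hana.wealth", "Chad.seen", "jaye.Ward"]

# re.escape turns "Dean.amberose" into "Dean\.amberose" so the dot is matched literally
pattern = ".*(" + "|".join(re.escape(u) for u in users) + ").*"
print(pattern)   # .*(Dean\.amberose|hana\.wealth|Chad\.seen|jaye\.Ward).*

# Quick sanity check against a sample log line before pasting the pattern into the hunt
assert re.match(pattern, "LogonType=10 UserName=CORP\\hana.wealth SourceIP=10.0.0.5")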
However, this step may produce a large dataset with many false positives. To refine the results, analyze the output in a notebook.
Output before running the notebook: as you can see in the screenshot, the query returned 60 results for a single user. That is why we need to minimize it; if you run the same query against 20 endpoints in a real scenario, the output becomes overwhelming.
Minimizing False Positives
To reduce noise, use a carefully crafted query. For example:
SELECT EventTime, Computer, Channel, EventID, UserName, LogonType, SourceIP, Description, Message, Fqdn
FROM source(artifact="Windows.EventLogs.RDPAuth")
WHERE (
  (UserName =~ "akash" AND NOT Computer =~ "Akash-Laptop")
  OR (UserName =~ "hana.wealth" AND NOT Computer =~ "Doctor")
)
AND NOT EventID = 4634 -- Exclude logoff events
AND NOT (Computer =~ "domaincontroller" OR Computer =~ "exchangeserver" OR Computer =~ "fileserver")
ORDER BY EventTime
This query excludes routine logons to systems like domain controllers or file servers, focusing on suspicious activity. Modify it further based on your environment to suit your needs.
Output after running the notebook:
If needed, run additional hunts, such as UAL (User Access Logs) for servers; you can use the corresponding hunt from Velociraptor. By analyzing these logs, you can map which accounts accessed which systems (servers included), providing insight into lateral movement. Use this information to update labels, marking newly suspected endpoints for further investigation. If you want to learn more about UAL and how to parse and analyse it, check out my article below:
https://www.cyberengage.org/post/lateral-movement-user-access-logging-ual-artifact
-------------------------------------------------------------------------------------------------------------
Hunting for Suspicious Processes and Services
To understand the attack's scope and detect malicious activities, examine the processes and services running across all endpoints. For services, use the corresponding hunt; for processes, I will demonstrate the practice below.
Let's start with processes:
Automated Process Hunting
We will run this process hunting in two ways. First, using the hunt itself without a notebook:
Run a hunt using Windows.System.Pslist. When configuring the parameters, check the option to focus on "untrusted authenticated code." This will flag processes not signed by trusted authorities, providing their hash values.
Second, run the hunt first (without the untrusted authenticated code box checked) and then use a notebook:
Let's run the same hunt as before, without the untrusted authenticated code box checked.
As soon as I ran the hunt I got 291 processes on a single endpoint. Now imagine running this hunt on 100 endpoints; the volume makes analysis painful.
Worry not: with this second method I have a notebook for you that makes the analysis easy.
Query: For a more detailed approach, use this query in a notebook:
SELECT Name, Exe, CommandLine, Hash.SHA256 AS SHA256, Authenticode.Trusted, Username, Fqdn, count() AS Count
FROM source()
WHERE Authenticode.Trusted = "untrusted" // unsigned binaries
// List of environment-specific processes to exclude
AND NOT Exe = "C:\\Program Files\\filebeat-rss\\filebeat.exe"
AND NOT Exe = "C:\\Program Files\\winlogbeat-rss\\winlogbeat.exe"
AND NOT Exe = "C:\\macfee\\macfee.exe"
AND NOT Exe = "C:\\test\\bin\\python.exe"
// Stack for prevalence analysis
GROUP BY Exe
// Sort results ascending
ORDER BY Count
Output after running the above notebook: only 3 detections, with a clean output.
-------------------------------------------------------------------------------------------------------------
Hunting for Suspicious Processes with an Automated VirusTotal Scan
Imagine you've scanned 100 endpoints and discovered 50 untrusted processes. Checking their hashes manually would be frustrating and time-consuming. Here's how to simplify this:
Output before VirusTotal automation:
Keep in mind that you first have to run the hunt as above; once you have the output, use the query/notebook below to automate the analysis with VirusTotal.
Use the following query (in a notebook) to cross-reference file hashes with VirusTotal, reducing manual overhead:
// Get a free VirusTotal API key
LET VTKey <= "your_api_key_here"
// Build the list of untrusted processes
LET Results = SELECT Name, CommandLine, Exe, Hash.SHA256 AS SHA256, count() AS Count
FROM source()
WHERE Authenticode.Trusted = "untrusted" AND SHA256 // only entries with SHA256 hashes
// Exclude environment-specific processes
AND NOT Exe = "C:\\Sentinelone\\sentinel.exe"
GROUP BY Exe, SHA256
// Combine with VirusTotal enrichment query
SELECT *, {SELECT VTRating FROM Artifact.Server.Enrichment.Virustotal(VirustotalKey=VTKey, Hash=SHA256)} AS VTResults
FROM foreach(row=Results)
WHERE Count < 5
ORDER BY VTResults DESC
Outcome: After running the query, you get VirusTotal results with file ratings, making it easier to prioritize your efforts. No more manual hash-checking!
------------------------------------------------------------------------------------------------------------
Tracing Parent Processes
Once you've identified malicious processes, the next step is to trace their origins. Here's how:
Set Up a Parent Process Hunt: suppose you've identified these malicious processes: IGCC.exe, WidgetService.exe, IGCCTray.exe.
Use the Generic.System.PsTree hunt to map their parent processes.
Configure the parameters by adding the malicious processes in a regex format like this:
.*(IGCCTray.exe|WidgetService.exe|IGCC.exe).*
In our case:
Outcome: The output will show the process call chain, helping you identify the parent processes and their origins. This insight is crucial for understanding how attackers gained initial access and their lateral movement within the network.
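Side note on the VirusTotal step above: if you ever need to spot-check a handful of hashes outside Velociraptor, the same lookup can be scripted. This is a minimal sketch of my own, assuming Python with the requests library and a free API key; it uses VirusTotal's v3 file-report endpoint with the x-apikey header, but verify your quota and the response fields against the current VirusTotal documentation before relying on it.

import requests

VT_KEY = "your_api_key_here"   # same free API key used in the VQL notebook above
HASHES = [
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # example SHA256
]

for sha256 in HASHES:
    r = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": VT_KEY},
        timeout=30,
    )
    if r.status_code == 404:
        print(sha256, "-> not known to VirusTotal")
        continue
    r.raise_for_status()
    stats = r.json()["data"]["attributes"]["last_analysis_stats"]
    print(sha256, "->", stats.get("malicious", 0), "engines flag this file as malicious")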
------------------------------------------------------------------------------------------------------------
Investigating Persistence Mechanisms
Persistence is a common tactic used by attackers to maintain access. Let's focus on startup items.
Startup Items Hunt: Running this hunt on 100 endpoints can generate a huge amount of data. Use the hunt Windows.Sys.StartupItems.
For instance, a single endpoint yielded 22 startup items (screenshot below), and across 100 endpoints the dataset becomes unmanageable.
Filter Common False Positives: To narrow down the results, create a notebook or query that excludes files or paths the client uses in their environment and is aware of, or that we can reasonably assume are legitimate (e.g., McAfee, OneDrive, VMware).
LET Results = SELECT count() AS Count, Fqdn, Name, OSPath, Details
FROM source(artifact="Windows.Sys.StartupItems")
// Exclude common false positives
WHERE NOT OSPath =~ "vmware-tray.exe"
AND NOT OSPath =~ "desktop.ini"
AND NOT (Name =~ "OneDrive" AND OSPath =~ "OneDrive" AND Details =~ "OneDrive")
// Stack and filter results
GROUP BY Name, OSPath, Details
SELECT * FROM Results WHERE Count < 10 ORDER BY Count
Output after running the above notebook:
Outcome: The refined output is structured, significantly reducing the data volume and allowing you to focus on potential threats. For example, the filtered results might now show only 15 entries instead of hundreds (and you can narrow those down further).
------------------------------------------------------------------------------------------------------------
Documentation Is Key
Throughout the process: document all malicious processes, paths, infected endpoints, and related findings. Organize your notes for efficient forensic investigation and reporting.
------------------------------------------------------------------------------------------------------------
Investigating Scheduled Tasks
Scheduled tasks often serve as a persistence mechanism for attackers. Here's how to analyze them efficiently using Velociraptor:
Use the Windows.System.TaskScheduler/Analysis artifact to collect scheduled task data. Once the data is collected, run the following query to exclude known legitimate entries from your environment:
Query:
LET Results = SELECT OSPath, Command, Arguments, Fqdn, count() AS Count
FROM source(artifact="Windows.System.TaskScheduler/Analysis")
WHERE Command AND Arguments
AND NOT Command =~ "ASUS"
AND NOT (Command = "C:\\Program Files (x86)\\Common Files\\Adobe\\AdobeGCClient\\AGCInvokerUtility.exe" OR OSPath =~ "Adobe")
AND NOT Command =~ "OneDrive"
AND NOT OSPath =~ "McAfee"
AND NOT OSPath =~ "Microsoft"
GROUP BY OSPath, Command, Arguments
SELECT * FROM Results WHERE Count < 5 ORDER BY Count // sorts ascending
Outcome: By running this query, you'll exclude known false positives (e.g., ASUS, Adobe, OneDrive), significantly reducing the dataset and narrowing your focus to potentially suspicious tasks.
Environment-Specific Adjustments: Tailor the query to your specific environment by adding more exclusions based on legitimate scheduled tasks in your network.
------------------------------------------------------------------------------------------------------------
Analyzing Autoruns with the Sysinternals Tool
Autorun entries are another common place where attackers establish persistence, and the Autoruns data can reveal them. Here's how to analyze it efficiently: use the Windows.Sysinternals.Autoruns artifact in Velociraptor to gather autorun data across endpoints.
Refine Results with a Notebook Query: Autorun entries often generate a large amount of data. Use the following query (in a notebook) to focus on suspicious entries:
Query:
LET Results = SELECT count() AS Count, Fqdn, Entry, Category, Profile, Description, `Image Path` AS ImagePath, `Launch String` AS LaunchString, `SHA-256` AS SHA256
FROM source()
WHERE NOT Signer AND Enabled = "enabled"
GROUP BY ImagePath, LaunchString
SELECT * FROM Results
WHERE Count < 5 // return entries present on fewer than 5 systems
ORDER BY Count
Outcome: This query filters out signed entries and narrows the results, allowing you to focus on anomalies while discarding likely false positives.
Customization: Like the scheduled task query, modify this query to include exclusions specific to your environment for more accurate results.
------------------------------------------------------------------------------------------------------------
Document Everything
Keep a record of all suspicious entries, including file paths, hashes, and the endpoints where they were found. This documentation is essential for both immediate remediation and forensic reporting.
Iterate and Adjust
Each organization has unique software and configurations. Continuously refine your queries to adapt to legitimate processes and new threats.
------------------------------------------------------------------------------------------------------------
So far, we've gathered substantial data from scheduled tasks and autorun entries and identified potential malicious artifacts. Now, let's take it a step further to ensure that no other endpoints in the environment are compromised.
Identifying Additional Compromised Endpoints
Once we have identified malicious files or processes from our analysis, the next step is to ensure they aren't present on any other endpoints. We'll use the Windows.Search.FileFinder artifact to search for the malicious file names across all endpoints. This is the same artifact we've used previously, but now we'll populate it with the suspicious file paths or names identified in the earlier stages. Example paths (for demonstration purposes):
Launch the Hunt: Run the hunt across 100 endpoints or more to check whether the identified malicious files exist elsewhere.
Reviewing the Output: Once the hunt completes, you'll see a detailed list of endpoints where these files are found. If the files are present on other endpoints, label those endpoints as "compromised" or "attacked" for further investigation.
Labeling Compromised Endpoints: Use the following query to label endpoints automatically:
Query: SELECT Fqdn, OSPath, BTime AS CreatedTime, MTime AS ModifiedTime, label(client_id=ClientId, labels=['Phishing'], op='set') AS SetLabel FROM source(artifact="Windows.Search.FileFinder")
------------------------------------------------------------------------------------------------------------
Next Steps Based on Findings:
If no additional compromised endpoints are found, you can move forward with the analysis of the initially identified endpoints.
If more compromised endpoints are identified, label them and consider isolating or rebuilding them to eliminate the risk of reinfection.
------------------------------------------------------------------------------------------------------------
YARA Scans for Advanced Threat Detection
Once you've identified the potentially malicious files and endpoints, the final step is to run a YARA rule scan across the environment.
This helps detect specific malware families or identify links to Advanced Persistent Threat (APT) groups.
Running a YARA Hunt
Use the Windows.Detection.Yara.Process artifact for this hunt.
Configuring Parameters: If you don't provide a custom YARA rule, Velociraptor will default to scanning for Cobalt Strike indicators. To run a specific YARA rule (e.g., for detecting APT activity), upload the rule or provide its URL in the configuration. Example of adding a custom rule URL:
Launching the YARA Scan: Once configured, launch the hunt. Velociraptor will scan all endpoints and flag any files or processes matching the specified YARA rules.
Reviewing the Results: If hits are detected, you can identify the malware family or APT group involved based on the rule triggered. If no hits are found, you can confirm that the environment is clean for the specified indicators.
-----------------------------------------------------------------------------------------------------------
Now that you've identified infected endpoints and labeled the "patient zero," it's time to move to the triage, containment, and recovery phases.
KAPE Triage Imaging
The next logical step is to capture a triage image of the compromised endpoints. This allows you to collect crucial artifacts for further investigation.
Triage Imaging via Velociraptor: Velociraptor simplifies this process by allowing you to run KAPE (Target files) directly on the infected endpoint. Create a hunt to initiate the KAPE (Targets) collection, targeting the relevant artifacts needed for forensic analysis. Collect key forensic artifacts such as registry hives, event logs, and file system metadata. Ensure the image is stored securely for further examination.
Manual Imaging (Optional): If Velociraptor isn't an option, you can run KAPE manually on the infected machine to create a comprehensive triage image.
Quarantining Infected Endpoints
Once the imaging process is complete, it's critical to keep the compromised systems isolated, or, if not already done, to isolate the endpoints from the network to prevent further spread or communication with potential Command and Control (C2) servers.
Using Velociraptor for Quarantine: Velociraptor can quarantine endpoints by blocking all network communications except to the Velociraptor server. Create a hunt to execute the quarantine action. This ensures the endpoint is unable to communicate externally while still being accessible for analysis.
Benefits of Quarantine: Prevents lateral movement within the network. Ensures minimal disruption to the ongoing investigation.
Recovery and Reimaging
After quarantining the compromised endpoints:
Reimage the Systems: Reimaging cleans the endpoint, restoring it to a known good state. Deploy it back into the production environment only after ensuring the threat is eradicated.
Forensic Analysis (Optional): If deeper investigation is required, forensic specialists can analyze the collected artifacts.
Velociraptor for Forensics: Velociraptor supports advanced forensic capabilities, allowing you to parse and analyze collected data.
Manual Analysis: Some professionals, like me, prefer using tools like KAPE, parsing artifacts manually for an in-depth understanding of the attack.
Additional Hunting (Optional)
Before wrapping up, you can perform further hunting on the infected endpoints to gather more details about the attack. For example:
Command History: Identify commands executed on the endpoints, such as PsExec or PowerShell commands, to understand the attacker's actions.
-----------------------------------------------------------------------------------------------------------
Final Thoughts
With the steps outlined, you’ve gone through a comprehensive process to identify, contain, and recover from an endpoint compromise. From advanced hunting to quarantining infected systems, Velociraptor proves to be a powerful tool for incident response.
While this article doesn’t delve into detailed forensic analysis, it’s worth noting that Velociraptor can handle a wide range of forensic tasks. You can collect, parse, and analyze artifacts directly within the platform, making it an all-in-one solution for responders. For those who prefer hands-on forensic work, tools like KAPE and manual parsing remain excellent options.
What’s Next?
This article is just the beginning. Velociraptor offers many more possibilities for proactive hunting and investigation. Experiment with its capabilities to uncover hidden threats in your environment. Stay tuned for the next article, where we’ll dive deeper into another exciting topic in cybersecurity. Until then, happy hunting! 🚀
Dean
- Prefetch Analysis with PECmd and WinPrefetchView
Windows Prefetch is a critical forensic artifact that helps track program execution history. While Prefetch files can be manually analyzed, forensic tools like PECmd (by Eric Zimmerman) and WinPrefetchView (by NirSoft) simplify and enhance the analysis process.
We will cover:
✅ How PECmd extracts and formats Prefetch data
✅ How to analyze Prefetch files using WinPrefetchView
✅ Best practices for interpreting Prefetch execution timestamps
-------------------------------------------------------------------------------------------------------------
Using PECmd to Analyze Prefetch Files
PECmd is a powerful command-line tool for parsing Prefetch files, extracting valuable metadata, and generating structured reports.
1️⃣ Analyzing a Single Prefetch File (-f option)
To extract detailed metadata from a single .pf file, run:
PECmd.exe -f C:\Windows\Prefetch\example.exe-12345678.pf
This outputs:
Executable Name & Path
Prefetch Hash & File Size
Prefetch Version
Run Count (how many times the application was executed)
Last Execution Timestamp(s): Windows 7 and earlier store 1 timestamp; Windows 8+ stores up to 8 execution timestamps
💡 Timestamp Validation: The last run time should match the last modified timestamp of the .pf file. Subtract ~10 seconds for accuracy when using file system timestamps.
-------------------------------------------------------------------------------------------------------------
2️⃣ Batch Processing: Parsing an Entire Prefetch Folder (-d option)
To process all Prefetch files in a directory:
PECmd.exe -d G:\G\Windows\prefetch --csv "E:\Output for testing" --csvf Prefetch.csv
This generates two output files:
1️⃣ CSV Report: Contains execution details for all parsed Prefetch files. Useful for filtering by run count or searching for specific applications (a short PowerShell sketch for triaging this CSV follows the WinPrefetchView section below).
2️⃣ Timeline View: Extracts all embedded execution timestamps from Prefetch files. Provides a chronological list of program executions, helping correlate events.
-------------------------------------------------------------------------------------------------------------
Using WinPrefetchView for GUI-Based Analysis
WinPrefetchView (by NirSoft) provides a graphical interface for analyzing Prefetch data.
How to Use WinPrefetchView
1️⃣ Open WinPrefetchView
2️⃣ Go to Options > Advanced Options
3️⃣ Select the Prefetch folder (C:\Windows\Prefetch\ or a folder exported from a forensic image)
4️⃣ Click OK to parse the Prefetch files
📌 Key Features:
✅ Displays Run Count, Last Run Time, and File References
✅ Extracts up to 8 execution timestamps
✅ Lists files accessed by the application within the first 10 seconds
🚀 Takeaway: Prefetch file references can reveal hidden malware, deleted tools, or important user actions that might otherwise go undetected.
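As mentioned above, here is a minimal PowerShell sketch for triaging the CSV produced by the -d run. The column names (ExecutableName, RunCount, LastRun, SourceFilename) match recent PECmd builds, but treat them as assumptions and verify them against your own output:

# Minimal sketch: load the PECmd CSV and surface rarely executed programs.
# Adjust the path and column names to match your PECmd version and output folder.
$pf = Import-Csv 'E:\Output for testing\Prefetch.csv'

# Programs executed only once or twice are often worth a closer look
$pf | Where-Object { [int]$_.RunCount -le 2 } |
    Select-Object ExecutableName, RunCount, LastRun, SourceFilename |
    Sort-Object LastRun -Descending |
    Format-Table -AutoSize

# Or hunt for one specific binary across the whole folder
$pf | Where-Object ExecutableName -like '*MIMIKATZ*'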
-------------------------------------------------------------------------------------------------------------
Best Practices for Prefetch Analysis
🔍 1. Prioritize Prefetch Collection
Running forensic tools on a live system creates new Prefetch files, potentially overwriting older evidence. Collect Prefetch files before executing forensic tools.
🔍 2. Cross-Reference Prefetch Data
Combine Prefetch analysis with: UserAssist (tracks GUI-based program executions), AmCache (records detailed program metadata), and BAM/DAM (tracks recent executions).
🔍 3. Look for Anomalous Prefetch Files
Multiple Prefetch files for the same executable but with different hashes may indicate: malware running from multiple locations, or renamed executables attempting to evade detection.
🔍 4. Ensure Timestamps Are Interpreted Correctly
Convert Windows FILETIME timestamps properly. Keep your forensic VM in UTC time to prevent automatic time conversions by analysis tools.
-------------------------------------------------------------------------------------------------------------
Final Thoughts: Mastering Prefetch Analysis with PECmd & WinPrefetchView
PECmd and WinPrefetchView are essential tools for extracting, organizing, and analyzing Windows Prefetch data.
💡 Key Takeaways:
✅ PECmd is ideal for batch processing and timeline analysis.
✅ WinPrefetchView provides a user-friendly interface for reviewing Prefetch files.
✅ Prefetch timestamps help reconstruct program execution history, even for deleted applications.
✅ File references inside Prefetch files can reveal hidden malware or deleted forensic evidence.
🚀 If you're investigating program execution on a Windows system, Prefetch analysis should be one of your first steps! 🔍
-----------------------------------------Dean-----------------------------------------------
- Windows Prefetch Files: A Forensic Goldmine for Tracking Program Execution
Windows Prefetch is one of the most valuable forensic artifacts for tracking program execution history. By analyzing Prefetch files, investigators can determine which applications were run, when they were executed, how often they were used, and even which files and directories they accessed.
We’ll explore:
✅ What Prefetch is and how it works
✅ Where to find Prefetch files
✅ How to extract and interpret Prefetch data
✅ Best practices for forensic investigations
-------------------------------------------------------------------------------------------------------------
What Is Prefetch and How Does It Work?
Windows Prefetching is a performance optimization feature that preloads frequently used applications into memory to speed up their execution. When a program is launched for the first time, Windows creates a .pf (Prefetch) file for it. Each .pf file contains:
✅ The name and path of the executed application
✅ How many times it has been executed
✅ The last execution time
✅ Up to 8 previous execution timestamps (Windows 8 and later)
✅ Referenced files and directories the application accessed
💡 Key Insight: If a Prefetch file exists for an application, it proves that the program was executed at least once on the system.
-------------------------------------------------------------------------------------------------------------
Where Are Prefetch Files Stored?
On Windows workstations (not servers), Prefetch files are stored in:
C:\Windows\Prefetch\
📌 File Naming Format: 7ZFM.EXE-56DE4F9A.pf
The ApplicationName is the name of the executable.
The HASH is a hash of the executable's full path, rendered in hexadecimal.
💡 Pro Tip: If you find multiple Prefetch files with the same executable name but different hashes, it means the program was executed from multiple locations, potentially indicating malware or unauthorized software.
-------------------------------------------------------------------------------------------------------------
How Many Prefetch Files Are Stored?
Windows 7 and earlier → Stores up to 128 Prefetch files
Windows 8, 10, and 11 → Stores up to 1,024 Prefetch files
📌 Important Note: Older Prefetch files are deleted as new ones are created, meaning execution history may be lost over time.
-------------------------------------------------------------------------------------------------------------
Understanding Prefetch Execution Timestamps
💡 How to determine the first and last execution time:
File Creation Date → First recorded execution of the application (only accurate if the .pf file was never deleted due to aging out)
File Last Modified Date → Last recorded execution of the application (subtract ~10 seconds for accuracy)
Embedded Timestamps (Windows 8+) → Last 8 execution times (most reliable for tracking multiple executions)
📌 Important Note: If an application is executed again after its original Prefetch file has aged out, a new .pf file is created, making it look like the application was first executed at a later date than it actually was. A small PowerShell sketch below illustrates these file-system approximations and also flags executables that appear with more than one hash.
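Here is that sketch, a minimal PowerShell snippet you can point at C:\Windows\Prefetch on a live system (elevated prompt) or at a folder exported from an image. The first/last-run columns are only the file-system approximations described above, not parsed Prefetch internals, and the name splitting assumes the standard NAME-HASH.pf format:

# Minimal sketch: approximate first/last run from .pf file-system timestamps,
# then flag executables that appear with more than one path hash.
$pf = Get-ChildItem 'C:\Windows\Prefetch' -Filter *.pf |
    Select-Object Name,
        @{ n = 'Executable';     e = { ($_.Name -split '-')[0] } },
        @{ n = 'FirstRunApprox'; e = { $_.CreationTime } },
        @{ n = 'LastRunApprox';  e = { $_.LastWriteTime.AddSeconds(-10) } }

# Same executable name with different hashes => run from more than one location
$pf | Group-Object Executable | Where-Object Count -gt 1 |
    ForEach-Object { $_.Group } | Format-Table -AutoSize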
-------------------------------------------------------------------------------------------------------------
Why Prefetch Files Are Crucial in Digital Forensics
✅ 1. Tracking Program Execution
Prefetch proves a specific application was run on the system. Even if an application was deleted, its Prefetch file may still exist as evidence.
✅ 2. Identifying Suspicious Activity
If you find a Prefetch file for malware or hacking tools (mimikatz.exe, nc.exe), it indicates they were executed. Finding multiple Prefetch files for the same executable in different locations suggests a renamed or relocated executable, which is common for malware evasion techniques.
✅ 3. Detecting Unauthorized Software & Insider Threats
If a user claims they never used a VPN, but a Prefetch file for NordVPN.exe exists, this contradicts their claim.
✅ 4. Establishing a Timeline of Events
Prefetch timestamps can help reconstruct a timeline of when certain applications were executed relative to an incident.
-------------------------------------------------------------------------------------------------------------
Limitations of Prefetch Analysis
⚠️ 1. Prefetch Is Disabled on Some Systems
Windows Server editions have Prefetch disabled by default. Some Windows 7+ systems with SSDs may also have Prefetch disabled.
📌 Check Registry Settings to See If Prefetch Is Enabled:
SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters
Audit the EnablePrefetcher value:
0 → Disabled
1 → Application launch prefetching enabled
2 → Boot prefetching enabled
3 → Both application launch & boot prefetching enabled (default)
⚠️ 2. Prefetch Does Not Prove Successful Execution
A .pf file is created even if the program failed to execute properly. Cross-check with other artifacts (UserAssist, BAM/DAM, AmCache) for confirmation.
⚠️ 3. Prefetch Files Are Limited in Number
Older Prefetch files are deleted when the limit is reached. If an app was used long ago, its Prefetch file may no longer exist.
-------------------------------------------------------------------------------------------------------------
Best Practices for Prefetch Analysis
🔍 1. Prioritize Prefetch Collection
Live response tools create new Prefetch files, potentially overwriting older forensic evidence. Collect Prefetch data before running analysis tools.
🔍 2. Cross-Reference Other Execution Artifacts
Compare Prefetch data with: UserAssist, AmCache, BAM/DAM.
🔍 3. Look for Anomalous Prefetch Files
Multiple Prefetch files for the same application but with different hashes may indicate suspicious execution paths.
-------------------------------------------------------------------------------------------------------------
Final Thoughts: Prefetch Is an Essential Artifact for Execution Tracking
Windows Prefetch files are one of the most reliable ways to track program execution. They provide timestamps, execution counts, and file access details that are crucial in forensic investigations.
💡 Key Takeaways:
✅ Prefetch proves an application was executed, even if it was later deleted.
✅ Windows 8+ Prefetch files store up to 8 execution timestamps, making them invaluable for tracking repeat usage.
✅ Prefetch files can reveal unauthorized or malicious software execution.
✅ Cross-check Prefetch data with other execution artifacts (UserAssist, BAM/DAM, AmCache) for accuracy.
🚀 If you're investigating program execution on a Windows system, Prefetch analysis should be at the top of your forensic checklist! 🔍
-------------------------------------------------Dean-----------------------------------------------
- SentinelOne Threat Hunting Series P3: Must-Have Custom Detection Rules
In this article, we continue exploring the power of SentinelOne’s custom detection rules to enhance control over your environment's security. Below are more custom detection rules tailored for advanced threat detection, covering various scenarios like remote desktop activity, SMB connections, PowerShell misuse, and suspicious file transfers. 21. RDP Session Start Events with Non-Local Connections Rule : event.type == "Process Exit" AND src.process.cmdline contains:anycase("mstsc.exe") OR (event.type == "Process Creation" AND src.process.cmdline contains:anycase("mstsc.exe") AND !(src.ip.address matches:anycase("0.0.0.0/8", "127.0.0.0/8", "169.254.0.0/16"))) Description : Detects RDP session initiation using the mstsc.exe process from non-local IP addresses, highlighting potential unauthorized remote connections. 22. Creation of Processes Related to Remote Desktop Tools and Protocols Rule : event.type == "Process Creation" AND !(src.ip.address matches:anycase("0.0.0.0/8", "127.0.0.0/8", "169.254.0.0/16")) AND src.process.cmdline contains:anycase("mstsc", "vnc", "ssh", "teamviewer", "anydesk", "logmein", "chrome remote desktop", "splashtop", "gotomypc", "parallels access") Description : Monitors the creation of processes linked to remote access tools while excluding certain IP ranges, which could indicate suspicious remote activity. 23. SMB Connections Indicating Lateral Movement Rule : event.type == "IP Connect" AND event.network.direction == "INCOMING" AND event.network.protocolName == "smb" AND dst.port.number == 445 Description : Flags SMB connections over port 445, commonly used for lateral movement in network compromises. 24. BitsTransfer Activity Rule : event.type == "Process Creation" AND tgt.process.cmdline contains:anycase("BitsTransfer") AND tgt.file.extension in:anycase("ps1", "bat", "exe", "dll", "zip", "rar", "7z", "tar") Description : Monitors the use of BitsTransfer to download or upload files, a technique often used to evade detection in malicious activities. 25. PowerShell Web Request Rule : event.type == "Process Creation" AND tgt.process.displayName == "Windows PowerShell" AND (tgt.process.cmdline contains:anycase("Invoke-WebRequest", "iwr", "wget", "curl", "Net.WebClient", "Start-BitsTransfer")) Description : Detects PowerShell commands that perform web requests, which may indicate data exfiltration or malicious script downloads. 26. Suspicious File Uploads to Cloud Services Rule : event.category == "url" AND url.address matches("https?://(?:www\\.)?(?:dropbox\\.com|drive\\.google\\.com|onedrive\\.live\\.com|box\\.com|mega\\.nz|icloud\\.com|mediafire\\.com|pcloud\\.com)") OR (event.category == "url" AND event.url.action == "PUT" AND url.address matches("https?://(?:www\\.)?(?:dropbox\\.com|drive\\.google\\.com|onedrive\\.live\\.com|box\\.com|mega\\.nz|icloud\\.com|mediafire\\.com|pcloud\\.com)")) Description : Detects upload attempts to cloud storage platforms, which could signify data exfiltration efforts. Share your email and details, and I’ll help craft the perfect rule for your needs. See you soon! 👋 Thank you so much for staying with me throughout this complete series on SentinelOne. It has always been a pleasure writing and sharing knowledge so others can benefit. With this final article, I wrap up my coverage on SentinelOne—until I receive further requests to explore more on this topic. For now, I'll be shifting my focus to other articles and new areas of research. Stay curious, keep learning, and as always, take care. See you soon! 🚀
- SentinelOne Threat Hunting Series P2: Must-Have Custom Detection Rules
In this article, we continue exploring the power of SentinelOne’s custom detection rules to enhance control over your environment's security. These rules allow you to define specific conditions for detecting and responding to potential threats, giving you the flexibility to act beyond built-in detections.
11. Mimikatz (Reg Add with Process Name)
Rule : tgt.process.name == "powershell.exe" AND (registry.keyPath == "SYSTEM\\CurrentControlSet\\Services\\mimidrv" OR tgt.process.cmdline contains:anycase("MISC::AddSid", "LSADUMP::DCShadow", "SEKURLSA::Pth", "CRYPTO::Extract")) AND (file.name in:anycase("vaultcli.dll", "samlib.dll", "kirbi"))
Description : Detects malicious registry modifications associated with Mimikatz. The rule identifies suspicious PowerShell activity and DLL manipulations indicative of credential dumping or lateral movement.
12. MimikatzV (Behavior-Based)
Rule : event.type == "Behavioral Indicators" AND indicator.name in:matchcase("Mimikatz", "PrivateKeysStealAttemptWithMimikatz") OR (event.type == "File Creation" AND tgt.file.path matches(".*\\mimikatz.*", ".*\\sekurlsa.*", ".*\\mimidrv.*", ".*\\mimilib.*")) OR (event.type == "Threat Intelligence Indicators" AND tiIndicator.malwareNames contains:anycase("Mimikatz"))
Description : A behavior-based rule for detecting Mimikatz activity by monitoring file creation, threat intelligence indicators, and behavioral signs linked to credential theft.
13. Disable Veeam Backup ServicesV2
Rule : tgt.process.cmdline contains:anycase("net.exe stop veeamdeploysvc", "vssadmin.exe Delete Shadows", "vssadmin.exe delete Shadows /All /Quiet", "wmic shadowcopy delete")
Description : Flags attempts to disable Veeam Backup services, commonly used by attackers to disrupt data recovery processes during ransomware campaigns.
14. Mimikatz Executables
Rule : tgt.file.path contains:anycase("mimikatz.exe", "mimikatz", "mimilove.exe", "mimilove", "mimidrv.sys", "mimidrv", "mimilib.dll", "mimilib", "mk.7z")
Description : Detects the presence of Mimikatz executables or libraries, identifying potential tool deployment for credential harvesting.
15. Rclone (you can adapt this rule for other tools such as MEGA or FileZilla as well)
Rule : src.process.name in:matchcase("rclone.exe", "rclone.org", "Rclone.exe") AND event.dns.request == "rclone.org" OR tgt.process.cmdline contains:anycase("rclone") OR src.process.displayName contains:anycase("rclone") OR src.process.cmdline contains:anycase("rclone")
Description : Monitors activity related to Rclone, a legitimate tool often abused for exfiltrating data to cloud storage services.
16.
NTDSUtil Rule : event.type == "Process Creation" AND ((tgt.process.cmdline contains:anycase("copy ") AND (tgt.process.cmdline contains:anycase("\\Windows\\NTDS\\NTDS.dit") OR tgt.process.cmdline contains:anycase("\\Windows\\System32\\config\\SYSTEM "))) OR (tgt.process.cmdline contains:anycase("save") AND tgt.process.cmdline contains:anycase("HKLM\\SYSTEM "))) OR (tgt.process.name == "ntdsutil.exe" AND tgt.process.cmdline contains:anycase("ac i ntds")) OR (tgt.process.name == "mklink.exe" AND tgt.process.cmdline contains:anycase("HarddiskVolumeShadowCopy"))) AND !(src.process.cmdline contains:anycase("Get-psSDP.ps1")) OR (src.process.cmdline contains:anycase("ntdsutil") AND src.process.cmdline contains:anycase("ifm")) OR (tgt.process.cmdline contains:anycase("ntdsutil") AND tgt.process.cmdline contains:anycase("ifm")) Description : Targets suspicious usage of NTDSUtil to access Active Directory databases and other sensitive registry keys, a technique used in domain compromises. 17. CURL Connecting to IPs Rule : src.process.cmdline contains:matchcase("curl.exe") AND event.network.direction == "OUTGOING" AND dst.ip.address matches("^((?!10\\.).)*$") AND dst.ip.address matches("^((?!172\\.1[6-9]\\.).)*$") AND dst.ip.address matches("^((?!172\\.2[0-9]\\.).)*$") AND dst.ip.address matches("^((?!172\\.3[0-1]\\.).)*$") Description : Detects CURL network connections to non-local IP addresses, helping to identify potential data exfiltration attempts. 18. Admin$hare Activity (Cobalt Strike - Service Install Admin Share) Rule : src.process.cmdline contains:matchcase("\\127.0.0.1\\ADMIN$") AND src.process.cmdline contains:matchcase("cmd.exe /Q /c") Description : Identifies suspicious activity targeting the ADMIN$ share, often used by tools like Cobalt Strike for lateral movement. 19. RDP Detection (Any Port) Rule : event.type == "IP Connect" AND event.network.direction == "INCOMING" AND src.process.cmdline contains:anycase("-k NetworkService -s TermService") AND src.ip.address matches("\\b(?!10|192\\.168|172\\.(2[0-9]|1[6-9]|3[0-1])|(25[6-9]|2[6-9][0-9]|[3-9][0-9][0-9]|99[1-9]))[0-9]{1,3}\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)") AND src.ip.address != "127.0.0.1" Description : Monitors incoming RDP connections, highlighting unusual or unauthorized attempts to access the environment. 20. RDP Detection (Port 3389) Rule : dst.port.number == 3389 AND event.network.direction == "INCOMING" AND src.ip.address matches("\\b(?!10|192\\.168|172\\.(2[0-9]|1[6-9]|3[0-1])|(25[6-9]|2[6-9][0-9]|[3-9][0-9][0-9]|99[1-9]))[0-9]{1,3}\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)") AND src.ip.address != "127.0.0.1" Description : Focused detection of RDP activity on the standard port 3389, which is commonly targeted in brute-force attacks. Share your email and details, and I’ll help craft the perfect rule for your needs. See you soon! 👋
- SentinelOne Threat Hunting Series P1: Must-Have Custom Detection Rules
In this three-part series, we’ll explore custom rules for enhanced threat detection and hunting in SentinelOne. These rules leverage STAR (Storyline Active Response), SentinelOne's custom detection rule capability, to proactively identify malicious activities and enhance security posture. If you need any rules tailored to your environment, feel free to email me via the Contact Us page with your requirements, and I'll be happy to create them for you!
Part 1: Top 10 Must-Have Rules for Threat Hunting
1. Delete Shadow Volume Copies
Purpose : Detects attempts to delete shadow copies, a common tactic used by ransomware operators to prevent file recovery.
Rule : tgt.process.cmdline matches("vssadmin\\.exe Delete Shadows","vssadmin\\.exe delete Shadows /All /Quiet")
2. Suspect Volume Shadow Copy Behavior Detected
Purpose : Identifies attempts to access sensitive files from shadow copies.
Rule : tgt.process.cmdline contains:anycase("HarddiskVolumeShadowCopy") AND ( tgt.process.cmdline contains:anycase("ntds\\ntds.dit") OR tgt.process.cmdline contains:anycase("system32\\config\\sam") OR tgt.process.cmdline contains:anycase("system32\\config\\system")) AND !( src.process.name == "windows\\system32\\esentutl.exe" OR src.process.publisher in:matchcase("Veritas Technologies LLC", "Symantec Corporation"))
3. Impact - Shadow Copy Delete Via WMI/CIM Detected
Purpose : Flags deletion of shadow copies using WMI or CIM commands.
Rule : tgt.process.cmdline contains:anycase("win32_shadowcopy") AND ( tgt.process.cmdline contains:anycase("Get-WmiObject") OR tgt.process.cmdline contains:anycase("Get-CimInstance") OR tgt.process.cmdline contains:anycase("gwmi") OR tgt.process.cmdline contains:anycase("gcim")) AND ( tgt.process.cmdline contains:anycase("Delete") OR tgt.process.cmdline contains:anycase("Remove"))
4. Suspect Symlink to Volume Shadow Copy Detected
Purpose : Detects creation of symlinks to shadow copies for unauthorized access.
Rule : tgt.process.cmdline contains:anycase("mklink") AND tgt.process.cmdline contains:anycase("HarddiskVolumeShadowCopy")
5. Disable/Delete Microsoft Defender AV Using PowerShell
Purpose : Monitors attempts to disable Microsoft Defender via PowerShell commands.
Rule : tgt.process.cmdline contains:anycase("powershell Set-MpPreference -DisableRealtimeMonitoring $true") OR tgt.process.cmdline contains:anycase("sc stop WinDefend") OR tgt.process.cmdline contains:anycase("sc delete WinDefend")
6. Disable Windows Defender
Purpose : Detects various attempts to disable Microsoft Defender features.
Rule : tgt.process.cmdline contains:anycase("Set-MpPreference") AND ( tgt.process.cmdline contains:anycase("-DisableArchiveScanning") OR tgt.process.cmdline contains:anycase("-DisableAutoExclusions") OR tgt.process.cmdline contains:anycase("-DisableBehaviorMonitoring") OR tgt.process.cmdline contains:anycase("-DisableBlockAtFirstSeen") OR tgt.process.cmdline contains:anycase("-DisableCatchupFullScan") OR tgt.process.cmdline contains:anycase("-DisableCatchupQuickScan") OR tgt.process.cmdline contains:anycase("-DisableEmailScanning") OR tgt.process.cmdline contains:anycase("-DisableRealtimeMonitoring"))
7. Disable Windows Defender Via Registry Key
Purpose : Flags registry key changes disabling Defender.
Rule : tgt.process.cmdline contains:anycase("reg\\ add") AND tgt.process.cmdline contains:anycase("\\SOFTWARE\\Policies\\Microsoft\\Windows Defender") AND ( tgt.process.cmdline contains:anycase("DisableAntiSpyware") OR tgt.process.cmdline contains:anycase("DisableAntiVirus"))
8. Disable Windows Defender Signature Updates
Purpose : Detects attempts to disable Defender signature updates.
Rule : tgt.process.cmdline contains:anycase("Remove-MpPreference") OR tgt.process.cmdline contains:anycase("set-mppreference") AND ( tgt.process.cmdline contains:anycase("HighThreatDefaultAction") OR tgt.process.cmdline contains:anycase("SevereThreatDefaultAction"))
9. SVCHOST Spawned by Unsigned Process
Purpose : Flags instances of svchost.exe being launched by unsigned processes.
Rule : src.process.publisher == "Unsigned" AND tgt.process.name == "svchost.exe"
10. Mimikatz via PowerShell
Purpose : Detects the execution of Mimikatz scripts or commands using PowerShell.
Rule : src.process.parent.cmdline contains:anycase("Invoke-Mimikatz.ps1", "Invoke-Mimikatz") AND tgt.process.name == "powershell.exe"
Closing Note
Stay tuned for more custom threat-hunting rules and best practices in the next articles of this series! If you have specific rule requirements or ideas, feel free to reach out through the Contact Us section. Share your email and details, and I’ll help craft the perfect rule for your needs. See you soon! 👋
Dean
- Streamlining USB Device Identification with a Single Script
Identifying and analyzing USB device details can be a tedious and time-consuming task. It often requires combing through various system registries and logs to gather information about connected USB devices. As a cybersecurity professional, having an efficient way to automate this process can save valuable time and reduce errors.
In this blog, I will share a script that simplifies the task of identifying USB device details. This script gathers all the necessary information in one go, making the process more efficient. Additionally, you can find this script integrated into my endpoint data capture tool, which is detailed in my previous blog. The script is also available on the resume page of my portfolio (a minimal sketch of the core registry query it builds on appears at the end of this post).
USB Device Information
Before diving into the script, let’s look at the kind of information we aim to extract:
Serial Number : Unique identifier for the USB device.
Friendly Name : User-friendly name of the USB device.
Mounted Name : Drive letter assigned to the USB device.
First Time Connection : Timestamp of the first connection.
Last Time Connection : Timestamp of the last connection.
VID : Vendor ID of the USB device.
PID : Product ID of the USB device.
Connected Now : Indicates if the device is currently connected.
User Name : The username that initiated the connection.
DiskID : Unique identifier for the disk.
ClassGUID : Class GUID of the device.
VolumeGUID : Volume GUID of the device (if available).
If you run the script in PowerShell, you will get output like the example below:
If you run my script, which you can find on the resume page, you will get output like the example below:
Update on Script: https://www.linkedin.com/feed/update/urn:li:activity:7284276306349871106/
Conclusion
Identifying USB details can indeed be a hectic task when done manually by digging through system registries. However, with the help of automation scripts like the one shared above, the process can become much more manageable and efficient.
Akash Patel
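For readers who want the general idea before grabbing the full script, here is a minimal, illustrative PowerShell sketch of the core USBSTOR enumeration. It only pulls the serial number and friendly name; the remaining fields listed above (drive letter, first/last connection times, VID/PID, connected user, and so on) come from additional sources, so treat this as a starting point rather than a replacement:

# Minimal sketch: enumerate USB mass-storage devices recorded under USBSTOR.
# Typical additional sources for the other fields include MountedDevices,
# setupapi.dev.log, and each user's MountPoints2 key; none of that is shown here.
$usbstor = 'HKLM:\SYSTEM\CurrentControlSet\Enum\USBSTOR'
Get-ChildItem $usbstor | ForEach-Object {
    $device = $_
    Get-ChildItem $device.PSPath | ForEach-Object {
        [pscustomobject]@{
            Device       = $device.PSChildName                      # e.g. Disk&Ven_...&Prod_...
            SerialNumber = $_.PSChildName                           # instance ID / serial number
            FriendlyName = (Get-ItemProperty $_.PSPath).FriendlyName
        }
    }
} | Format-Table -AutoSize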