
- Understanding Windows Services and Their Role in System Security
Windows Services are background processes that run independently of user interaction. They play a crucial role in maintaining system stability, handling network functions, and ensuring various OS components operate smoothly. Some essential services, such as the DHCP Client, Windows Event Log, Server, and Workstation services, start automatically during system boot and are critical for the operating system to function properly.

How Windows Services Work

Services in Windows can be implemented as standalone executables or as Dynamic Link Libraries (DLLs). To optimize resource usage, multiple service DLLs often run under a single process called svchost.exe. This is why, if you check Task Manager, you might see multiple instances of svchost.exe running simultaneously, each hosting a different service group.

Windows manages service configurations through the Windows Registry, specifically under the path:

HKLM\SYSTEM\CurrentControlSet\Services

This registry key contains detailed information about each service, including its name, display name, file path, start type, required privileges, and dependencies. The start type of a service determines when and how it runs:

0x00 (Boot Start) – Loaded by the boot loader, typically for device drivers.
0x02 (Automatic Start) – Launches at boot without user intervention.
0x03 (Manual Start) – Runs only when explicitly started by a user or another process.
0x04 (Disabled) – Cannot be started unless manually re-enabled.

Because services can start automatically, even before security tools like antivirus software load, they are often exploited by attackers as a persistence mechanism for malware.

-------------------------------------------------------------------------------------------------------------

Windows Services as a Persistence Mechanism

Since Windows services can launch automatically with system privileges, they are a favorite method for attackers to maintain access on a compromised system. Attackers can exploit services in several ways:

1. Creating a Malicious Service

With administrative privileges, an attacker can create a new service that launches malicious code at startup. This can be done using the built-in sc (Service Control) command:

sc create MaliciousService binPath= "C:\malware.exe" start= auto

This method ensures that the malware runs every time the system starts.

2. Replacing an Existing Service

Instead of creating a new service, attackers can modify an existing one. If a service is rarely used or disabled, they can change its executable path to point to a malicious file. To modify a service, an attacker may update its registry entry:

HKLM\SYSTEM\CurrentControlSet\Services\<ServiceName>\ImagePath

This is harder to detect than creating a completely new service.

3. Exploiting Service Recovery Options

Windows allows services to be restarted automatically if they fail. Attackers can exploit this by modifying the recovery settings so that when a service crashes, Windows executes a malicious file instead of restarting the service. For example, they can use a command like this to modify a service's recovery actions:

sc failure <ServiceName> reset= 86400 actions= run/6000 command= "C:\malware.exe"

If the targeted service crashes, Windows will execute malware.exe instead of restarting the service.

-------------------------------------------------------------------------------------------------------------

Detecting and Investigating Malicious Services

Security analysts can use several tools to detect suspicious services:

Sysinternals Autoruns – Lists all auto-starting services and executables.
SC command-line tool – Provides detailed information about services (sc queryex, sc qc, sc qprivs, sc qtriggerinfo).
Registry Analysis – Investigate the HKLM\SYSTEM\CurrentControlSet\Services key for unusual entries.
Event Logs – Unusual service crashes or modifications might indicate compromise.
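As a quick supplement to the tools above, here is a minimal PowerShell sketch that flags services whose ImagePath points outside the usual Windows directories. The path filter is just an illustrative heuristic, not a definitive detection rule:

Get-ChildItem 'HKLM:\SYSTEM\CurrentControlSet\Services' | ForEach-Object {
    $svc = Get-ItemProperty $_.PSPath -ErrorAction SilentlyContinue
    # Flag services whose binary lives outside common system locations
    if ($svc.ImagePath -and $svc.ImagePath -notmatch 'system32|Program Files') {
        [PSCustomObject]@{ Service = $_.PSChildName; ImagePath = $svc.ImagePath }
    }
}

Anything this turns up in a user-writable directory such as C:\Temp deserves a closer look.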
-------------------------------------------------------------------------------------------------------------

Kansa PowerShell Framework – The Get-SvcFail.ps1 script can collect information on service failure recovery settings. Even though Kansa itself is no longer actively supported, several of the scripts bundled with it still work beautifully.

Breaking Down the Script's Output:

ServiceName – The name of the service (e.g., AsusScreenXpertHostService, ASUSSoftwareManager, ASUSSwitch).
RstPeriod (Reset Period) – The time (in seconds) after which the failure counter resets. 86400 seconds = 24 hours, meaning that if a service doesn't fail within 24 hours, previous failure counts are reset.
RebootMsg – If specified, this contains a message shown to the user when a reboot is required. When empty, no message is set.
CmdLine – If a command should run upon failure, it is specified here. When empty, no command is executed.
FailAction1, FailAction2, FailAction3 – What happens on the first, second, and third (or subsequent) failures, respectively.

-------------------------------------------------------------------------------------------------------------

Conclusion

Windows Services are a fundamental part of the OS, but their ability to start automatically and run with high privileges makes them an attractive target for attackers. Understanding how services work, how they can be manipulated, and how to detect anomalies is crucial for maintaining system security. By leveraging built-in Windows tools and security best practices, defenders can identify and mitigate service-based threats before they lead to significant damage.

-------------------------------------------Dean------------------------------------------------------
- Understanding AutoStart Persistence in Windows: Key Locations and Detection Methods
Updated on 12 Feb, 2025

Windows provides numerous ways for applications—and unfortunately, malware—to persist on a system. These persistence mechanisms, officially known as AutoStart Extension Points (ASEPs), allow programs to execute automatically when a system boots or when a user logs in. While these features are essential for legitimate software, they are also frequently exploited by attackers to maintain access to compromised machines.

Why Are ASEPs Important?

The sheer number of ASEPs in Windows makes securing the system a challenge. Malicious programs can place references to themselves in various locations to ensure they run persistently. Many of these locations are found within the Windows Registry, offering a somewhat centralized place for forensic investigators to check. However, with hundreds of thousands of registry keys on a typical system, identifying malicious persistence is no small task.

Common Registry-Based AutoStart Locations

Among the many ASEPs available, the "Run" keys in the Windows Registry are the most commonly abused by attackers. These keys execute listed applications when a user logs into the system:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunOnce
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\Run

When an attacker inserts a reference to a malicious executable in one of these keys, it will launch every time the user logs in, providing persistent access.

The Userinit Key: A Lesser-Known but Dangerous ASEP

Another powerful ASEP is the Userinit key, located at:

HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\Userinit

This key typically contains a reference to userinit.exe, which is responsible for launching explorer.exe after a user logs in. However, an attacker can modify this key to append a malicious executable:

C:\Windows\system32\userinit.exe, C:\Temp\winsvchost.exe

By adding their payload here, attackers ensure that their malware is executed as soon as a user logs in, even before the desktop fully loads.

File System-Based AutoStart Locations

Attackers often abuse locations within the file system that do not require administrative privileges. One of the most effective and widely used methods is placing malicious shortcuts in the Startup folder:

%AppData%\Microsoft\Windows\Start Menu\Programs\Startup

Any executable or shortcut placed in this folder will launch automatically upon user login. This technique is commonly used in phishing attacks, where a malicious file is dropped into this directory, ensuring execution without the need for elevated permissions.

Detecting and Analyzing AutoStart Entries

Given the wide range of ASEPs, forensic analysts and incident responders rely on specialized tools to detect and analyze suspicious entries (a quick manual check is sketched after this list). Some of the best tools for this task include:

Registry Explorer – Allows deep exploration and analysis of Windows Registry hives, making it easier to locate malicious entries.
RegRipper – A powerful tool with plugins designed to extract known ASEPs quickly.
Autoruns – A Microsoft Sysinternals tool that provides a comprehensive view of all AutoStart locations on a system.
Kansa – A PowerShell framework useful for collecting registry and filesystem persistence indicators across multiple systems, enabling large-scale detection.
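For a fast manual triage of the locations discussed above, a few lines of PowerShell go a long way. This is a minimal sketch covering only the Run/RunOnce keys and the Userinit value; it is no substitute for a full ASEP sweep with Autoruns:

$runKeys = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Run',
           'HKCU:\Software\Microsoft\Windows\CurrentVersion\RunOnce',
           'HKLM:\Software\Microsoft\Windows\CurrentVersion\Run',
           'HKLM:\Software\Microsoft\Windows\CurrentVersion\RunOnce'
foreach ($key in $runKeys) {
    if (Test-Path $key) {
        # List each value name and the command it launches at logon
        (Get-Item $key).Property | ForEach-Object {
            "{0} -> {1}" -f $_, (Get-ItemProperty $key -Name $_).$_
        }
    }
}
# Userinit should normally contain only C:\Windows\system32\userinit.exe,
(Get-ItemProperty 'HKLM:\Software\Microsoft\Windows NT\CurrentVersion\Winlogon').Userinit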
As for tools: Kansa is out of date, and very few people still use it. If you're asking me what the best tool is—and whether I've written an article on parsing these locations automatically and analyzing them—my answer is: hell yeah! RECmd is the best tool for automating the process. Check out the article linked below!

https://www.cyberengage.org/post/uncovering-autostart-locations-in-windows

Conclusion

AutoStart Extension Points are a double-edged sword: they enable seamless operation of legitimate applications but also provide an easy way for malware to persist. Understanding the most commonly exploited ASEPs and utilizing forensic tools to monitor them can significantly improve security posture. Whether you're an incident responder, a forensic analyst, or an enthusiast looking to improve your cybersecurity knowledge, mastering ASEP analysis is a crucial skill in defending against persistent threats.

------------------------------------------Dean-------------------------------------------------------------
- String Searching with bstrings: Carving Files and Finding Hidden Data
Hi, everyone! Welcome to another article. If you've been following along, you know I've covered some amazing tools, including bstrings.exe, a powerful utility by Eric Zimmerman. I previously shared how to use this tool for string searches, and you can check out that guide here:

https://www.cyberengage.org/post/memory-forensics-using-strings-and-bstrings-a-comprehensive-guide

Today, we're revisiting bstrings, but with a new focus: file carving and detailed searches. Let's dive in!

-------------------------------------------------------------------------------------------------------------

Why String Searching Matters in Forensics

String searching is one of the most versatile forensic techniques. It's commonly used in tasks like:

Memory analysis
Reverse-engineering malware
Finding data in unallocated space or disk images

Why is it so effective? Many crucial pieces of evidence—like IP addresses, URLs, usernames, passwords, and file paths—are represented as strings. By searching for strings, you can uncover data in memory dumps, page files, or even entire disk images.

-------------------------------------------------------------------------------------------------------------

bstrings: A Power Tool for String Searching

bstrings, created by Eric Zimmerman, is an advanced tool for string searching. Originally a Windows-exclusive tool, it's now powered by .NET 6, making it faster and compatible with Linux. It stands out for its ability to:

Extract both ASCII and Unicode strings simultaneously.
Perform advanced searches using regular expressions (regex).
Use built-in regex patterns to search for common items like IP addresses, file paths, and URLs.

Here are some examples of bstrings in action:

Extract all strings with a minimum length of 8 characters:
bstrings.exe -f "E:\Output for testing\20250116.mem" -m 8

Search for a specific term in a file:
bstrings.exe -f "E:\Output for testing\20250116.mem" --ls <search term>

Use regex to find IPv4 addresses:
bstrings.exe -f "E:\Output for testing\20250116.mem" --lr ipv4

To see all available regex patterns, run:
bstrings.exe -p

-------------------------------------------------------------------------------------------------------------

Two Approaches to String Searching

In forensic investigations, string searching typically follows one of two methods:

1. Bit-by-Bit Searching

This approach scans data directly for specific terms or patterns. Tools like bstrings and hex editors are often used. While thorough, this method has a key limitation: if the term of interest sits inside a compressed file—such as Windows 10/11 pagefiles, Outlook OSTs and PSTs, and even Office documents (which are essentially zip-compressed XML files)—basic searching tools will be unable to find it. Specialized forensic tools can decompress some file types, but coverage is incomplete, especially for newer systems like Windows 10/11.

2. Indexed Searching

This method creates a searchable index of all strings in a dataset. Tools like Autopsy use indexing engines (e.g., Apache Solr) to process disk images, making subsequent searches fast. While indexing is efficient for large datasets, it has downsides:

Indexes can take a long time to build and require significant storage space.
Some strings or characters (e.g., "@") might be excluded for efficiency, complicating certain searches. For example, searching for an email address might require a workaround like proximity searches to account for missing characters.
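If you want to run several of the built-in patterns over the same image in one pass, a small PowerShell loop works well. This is a sketch; the pattern names beyond ipv4 are assumptions, so confirm them first with bstrings.exe -p:

$image = 'E:\Output for testing\20250116.mem'
foreach ($pattern in 'ipv4', 'ipv6', 'email') {   # verify pattern names with: bstrings.exe -p
    # Redirect each pattern's hits to its own text file for review
    bstrings.exe -f $image --lr $pattern > "hits_$pattern.txt"
}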
-------------------------------------------------------------------------------------------------------------

Comparing bstrings and Indexed Searches

To illustrate the differences, let's look at a scenario where we're searching for a BitLocker recovery key:

Using bstrings: A direct regex search across a memory image quickly identified a string in the recovery key format. While fast and efficient, it requires manual validation of results to rule out false positives.

bstrings.exe -f "E:\Output for testing\20250116.mem" --lr bitlocker

Using Autopsy: By indexing the disk image, multiple searches were performed almost instantly. While indexing took significant time, it allowed for comprehensive searches, including compressed files.

Both methods have their place, depending on the type of investigation and the tools available.

-------------------------------------------------------------------------------------------------------------

Why bstrings Is a Must-Have Tool

Here's what makes bstrings so valuable:

Speed and Efficiency: It processes data quickly, even for large datasets.
Advanced Regex Support: Built-in patterns save time and ensure accurate results.
Cross-Platform: Now compatible with Linux, it's more versatile than ever.
Customization: You can add your own regex patterns for unique searches.

-------------------------------------------------------------------------------------------------------------

Final Thoughts

String searching might not seem flashy, but it's one of the most reliable forensic techniques. Whether you're using bstrings for bit-by-bit searches or tools like Autopsy for indexed searching, the key is to match the method to your investigation's needs.

I highly recommend checking out the article on memory forensics using strings and bstrings. The reason is simple: it provides commands along with screenshots, making it easy to understand and follow.

https://www.cyberengage.org/post/memory-forensics-using-strings-and-bstrings-a-comprehensive-guide

If you haven't already, give bstrings a try. It's free, fast, and incredibly powerful—perfect for anyone looking to level up their forensic skills. As always, keep exploring and stay curious. See you in the next article!

--------------------------------------------------Dean---------------------------------------------------
- Remote Collections Artifacts Using KAPE including UNC and Over the Internet(ZeroTier)
If you've been following me, you already know how much of a fan I am of Eric Zimmerman's tool, KAPE. I've written several articles about it, including IR case studies, and I've even shown you how to use SentinelOne to run KAPE and collect artifacts from client desktops or transfer them to an SFTP server.

KAPE: https://www.cyberengage.org/courses-1/kape-unleashed%3A-harnessing-power-in-incident-response
SentinelOne: https://www.cyberengage.org/post/sentinelone-p8-sentinelone-automation-guide-training-to-forensic-collection-kape-integration

But why am I back talking about KAPE again? Well, today I want to introduce you to a few different ways you can use KAPE to collect artifacts efficiently. So, are you ready? Let's get started!

----------------------------------------------------------------------------------------------------------

Collecting Artifacts Using UNC Paths

One of the easiest ways to collect artifacts remotely is by using UNC (Universal Naming Convention) paths. This is especially useful for gathering data from systems within a corporate LAN or even over the internet.

What is a UNC Path?

A UNC path is a standardized way of specifying the location of files and folders on a network. Instead of using a drive letter like C:\, a UNC path looks something like this:

\\Artifact-PC\<drive or shared folder>

Using a UNC path, we can map a shared network drive and access it as if it were a local drive. This is super useful when working with KAPE to collect data remotely.

Setting Up a Shared Drive on the Target System

Before we can run KAPE remotely using UNC paths, we need to make sure the target system's drive is shared. Here's how to do it:

Go to the target system (the one you want to collect artifacts from) and open File Explorer.
Right-click on the C: drive and select Properties.
Navigate to the Sharing tab.
Click on Advanced Sharing.
Check the box Share this folder.
Click Permissions and grant Full Control to the necessary users.
Click OK, then Apply, then Close.

At this point, the target's C: drive is shared, and we can map it on our collection machine.

-------------------------------------------------------------------------------------------------------

Mapping the Shared Drive on the Collection System

Now that the share is ready, let's map the drive on the machine where we'll be running KAPE. There are two methods.

First method: Command prompt

Open Command Prompt or PowerShell as an administrator.

Test the network connection to the target machine:
ping <target>   (replace <target> with the target's IP address or hostname, e.g., Artifact-PC)

Map the shared drive to a local drive letter:
net use G: \\Artifact-PC\C$ /user:DOMAIN\Username

Here:
G: is the drive letter assigned to the mapped share.
\\Artifact-PC\C$ is the UNC path (replace Artifact-PC with the actual system name or IP; use the share name you created, e.g., \\Artifact-PC\C, if you shared the drive manually).
/user:DOMAIN\Username supplies credentials, for example /user:Artifact-PC\Admin.

Verify the drive mapping:
net use

At this point, our target's C: drive is mapped as G:, and we can use it in KAPE commands just like a local drive.
Second method: File Explorer

Go to This PC.
Click Map network drive.
Pick a drive letter and check both checkboxes, then enter the login credentials of the Artifact machine (e.g., Artifact-PC\Admin, then the password).

Once mapped, the drive appears under This PC. To learn more about this method, check out the great article below:

https://www.tomshardware.com/news/how-to-share-drives-windows-pc,36936.html

-------------------------------------------------------------------------------------------------------

Running KAPE with UNC Paths

Now comes the fun part—using KAPE to collect artifacts from the mapped drive.

Basic Command to Collect Artifacts

Once the drive is mapped, running KAPE is straightforward. Here's an example command that collects LNK files and jump lists from the remote system:

kape.exe --tsource G:\c --target Lnkfileandjumplists --tdest C:\kape_out\test
or
kape.exe --tsource \\Artifact-PC\C --target Lnkfileandjumplists --tdest C:\kape_out\test

Output: the extracted artifacts are saved to C:\kape_out\test on our local system.

Handling Locked Files & Limitations

While UNC paths are great, they have some limitations. One major issue is that Windows doesn't allow raw copying of locked files over UNC paths. For example, if you try to collect locked registry hives, KAPE will defer the copy and fail to retrieve them. Let's run the command against the mapped drive as before:

kape.exe --tsource G:\c --target RegistryHives --tdest C:\kape_out1\test
or
kape.exe --tsource \\Artifact-PC\C --target RegistryHives --tdest C:\kape_out1\test

The output shows the deferred copies are not supported over the UNC path.

Workarounds: use PsExec to run KAPE locally on the remote machine, or use a tool like SentinelOne (see the article linked at the top).

------------------------------------------------------------------------------------------------------------

Now the question is: what is the solution, or the better way to do it right?

The issue is that raw copying doesn't work well with protected system files. So, what if we flip the approach? Instead of pulling files directly, we process them on the target system and only transfer the results. That way, we're moving clean copies of the files, avoiding direct access issues. Plus, we're not actually installing or leaving KAPE on the target system. We execute kape.exe from a shared location on the collection system, keeping our footprint minimal. Sure, there'll still be execution artifacts—like Prefetch files and some registry entries—but we're not risking overwriting valuable data in unallocated space. That's a win.

The Setup

Step 1: Share the KAPE Folder from the Collection System
On your collection system (the machine where you want to store the data), share the folder or complete drive that contains kape.exe. This is important because we'll map it as a drive letter on the target system. Also, create a subfolder within the KAPE folder for storing collected data. This keeps things clean and avoids setting up multiple shares.

Step 2: Map the Shared Folder on the Target
On the target machine, use the net use command to map the shared KAPE folder from the collection system to a drive letter (see the sketch below). This step is basically the reverse of what you'd do if you were sharing the target's hard drive.

Step 3: Verify the Connection
Check the mapped drive on the target system to confirm the share is working properly. If all looks good, you're ready to run KAPE.
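Here is what steps 2 and 3 might look like on the target. The machine name, share name, and credentials are placeholders for illustration:

# On the target: map the collection system's shared KAPE folder as G:
net use G: \\Collection-PC\KAPE /user:Collection-PC\Admin

# Verify the mapping and confirm kape.exe is reachable
net use
dir G:\kape.exe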
Running KAPE to Collect Protected Registry Files

Now, let's run KAPE directly from the target, but send all collected files straight to the collection system—without touching the target's disk. Here's the trick: since kape.exe is running from the mapped drive, it doesn't need to be copied to the target.

kape.exe --tsource C: --target RegistryHives --tdest G:\Kape_Out\tdest --vss

The collected data is written over the UNC path to the collection system, not to the target's local storage. Even better, we can pull Volume Shadow Copies (VSCs) this way, too. The output lands on the collection system.

------------------------------------------------------------------------------------------------------------

If you remember, I spoke about using SFTP in the SentinelOne article. Well, there is a trade-off.

Watch Out! KAPE Can Write to the Target

While this approach generally keeps things clean, there are cases where KAPE temporarily writes to the target system, even if the final destination is elsewhere. Why? Because certain features—like sending data to an SFTP server or cloud storage—require temporary files before uploading. For example, if your command includes the --tdest or --mdest option pointing to a local target location, that means KAPE is writing files there first. Here's an example:

kape.exe --tsource C:\ --tdest C:\Temp\KAPE_Collection --target !SANS_Triage --scs [SERVER] --scp [PORT] --scu [USER] --scpw [PASSWORD] --vhdx

What's Happening Here?

The --tdest option is set to C:\Temp\KAPE_Collection, meaning files are temporarily stored on the target system.
Since we're using --vhdx (which packages the collection into a VHDX container) before the --scs transfer, these actions write to the target.
Once done, the files are sent to the SFTP server.

The Trade-off

Sometimes, we don't have a choice. The risk of temporary writes is outweighed by the benefit of getting crucial forensic data off the system ASAP. If you can't take a full forensic image, this might be the best option.

------------------------------------------------------------------------------------------------------------

KAPE Collection Over the Internet Using ZeroTier

Running KAPE from a network share and saving collected data to another network share is a simple but powerful technique. It works great when everything is within the same local network (LAN), but what if we need to collect data from a system across the internet or over a wide-area network (WAN)?

That's where ZeroTier One comes in. It's a free, open-source tool that allows us to create a Software-Defined Wide Area Network (SD-WAN) in just a few minutes. This means we can make our remote target system appear as if it's inside our LAN, making KAPE collection just as easy as if the system were sitting next to us.

Pre-Requisites

ZeroTier account: This will allow you to create a virtual network for your collection.
ZeroTier installed on both systems: You need to install ZeroTier on both the collection and target systems.
KAPE: The tool you'll use for the data collection. Ensure you have KAPE installed on the collection system.

Step-by-Step Guide

1. Create a ZeroTier Account

To get started, visit ZeroTier and create an account. Once you sign up, you'll have the ability to create networks. After signing up, you'll be taken to your ZeroTier dashboard. Click on the "Create a Network" button to set up your first virtual network. This network will enable you to connect multiple systems remotely, even across the internet. Note down your Network ID, as you'll need it when configuring the client software on your systems.
2. Download and Install ZeroTier on Both Systems

Now that you've created a ZeroTier account, you need to install ZeroTier on both your collection system (where you'll run KAPE) and the target system (from which you'll be collecting the data).

Download ZeroTier: Visit the ZeroTier downloads page and download the correct version for both systems (Windows, macOS, Linux, etc.).
Install ZeroTier: Run the installer on both systems, accepting all the default settings.

Once installed, you'll see the ZeroTier icon in the system tray on both systems. You're now ready to join the network.

3. Join Both Systems to the ZeroTier Network

Right-click the ZeroTier icon in the system tray and click Join Network. Enter the Network ID that you noted earlier and click Join. At this point, the system will attempt to connect to the ZeroTier network, but it won't be fully connected yet.

4. Authorize the Systems in ZeroTier

After both systems have attempted to join the ZeroTier network, you need to authorize them from the ZeroTier management console:

Log into the ZeroTier web console.
Navigate to My Networks and select the network ID you created.
Under the Members section, you'll see a list of systems attempting to join the network.
For each system, check the box next to its Node ID to authorize it to join the network.
Add a label for each system to easily identify which is the Collection System and which is the Target System.

5. Verify Connectivity Between Systems

To ensure that both systems are successfully connected to the ZeroTier network:

Open a Command Prompt (or PowerShell) on one system.
Try to ping the other system using its ZeroTier IP address (visible in the ZeroTier management console).
If the ping is successful, both systems are properly connected.

6. Map a Network Drive for Data Collection

Now that both systems are connected via ZeroTier, you can share folders and map drives just like on a local network:

Share a folder or drive on the collection system (where KAPE is installed) by right-clicking the folder and selecting Properties > Sharing > Share.
On the target system, map a network drive to the shared folder on the collection system:
Open This PC > Map Network Drive.
Enter the path to the shared folder on the collection system (e.g., \\<collection-system>\shared-folder).
Choose a drive letter (e.g., G:) for easy access.

7. Run the KAPE Collection

This runs KAPE on the target but saves the collected data directly to the collection system, ensuring no forensic evidence is overwritten on the remote system. Here's an example of the command:

kape.exe --tsource C: --target RegistryHives --tdest G:\Kape_Out\tdest --vss
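Pulling steps 5 through 7 together, a console session on the target might look like this. The ZeroTier IP, share name, and credentials are made-up placeholders; substitute the values from your own ZeroTier console:

# Step 5: confirm the collection system answers on its ZeroTier IP
ping 192.168.191.10

# Step 6: map the collection system's shared KAPE folder
net use G: \\192.168.191.10\KAPE /user:Collection-PC\Admin

# Step 7: run KAPE from the share, writing results back over it
G:\kape.exe --tsource C: --target RegistryHives --tdest G:\Kape_Out\tdest --vss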
-------------------------------------------------------------------------------------------------------------

Now you might ask me: "Dean, if we can use a UNC path, why do we need a ZeroTier account? Aren't both similar?" Great question! Let's break it down and compare these two methods.

Key Differences Between ZeroTier and UNC Paths

a. Network Setup
UNC Path: Requires direct network access (LAN, VPN, or port forwarding). Needs firewall and routing configurations (port 445 for SMB). Security risk if exposed over the internet.
ZeroTier: No need for port forwarding or complex firewall settings. Creates a private network automatically, making remote access simple. Works even if devices are behind firewalls or NAT.

b. Security
UNC Path: Exposing SMB ports on the internet is a major security risk. Requires VPN or encryption (SMBv3) to secure remote access.
ZeroTier: Fully encrypted network traffic by default. No need to open SMB ports to the internet. Better access control via ZeroTier's network management.

c. Flexibility and Remote Access
UNC Path: Works well within the same network or VPN. Requires IP/hostname visibility (challenging if behind NAT/firewall).
ZeroTier: Works globally with no complex VPN setup. Devices connect as if on the same local network, regardless of location. No dependency on public IPs or DNS resolution.

d. Ease of Use
UNC Path: Standard method for local file sharing but tricky over the internet. Needs VPN or port forwarding for remote use.
ZeroTier: Simple to set up—install and connect. No need for manual routing or firewall changes.

e. Performance
UNC Path: Performance depends on network setup and VPN overhead. SMB can suffer from latency issues over long distances.
ZeroTier: Uses peer-to-peer connections (when possible) for better speed. Lower latency and more stable performance.

Conclusion: Which One Should You Use?

Use a UNC path when both systems are on the same network or within a well-configured VPN. Use ZeroTier for secure, seamless, and easy remote access without complex configurations.

Why is ZeroTier the better choice for remote collections?
✅ No need for port forwarding or VPN setups.
✅ Stronger security with end-to-end encryption.
✅ Works anywhere, even behind firewalls or NAT.
✅ Simpler setup and management compared to UNC paths.

------------------------------------------------------------------------------------------------------------

Wrapping It Up

That's it! With this setup, you can conduct remote forensic collections using KAPE over the internet without writing any files to the target system. Want to try it out? Set up a ZeroTier account, spin up a couple of test VMs, and see how smoothly it works for yourself. Happy hunting!

-------------------------------------------Dean---------------------------------------
- File Carving: A Simple and Powerful Way to Recover Deleted Files
Have you ever accidentally deleted a file and thought it was gone forever? Luckily, techniques like file carving can help recover those files, even if they seem lost. File carving is one of the easiest and most effective ways to retrieve data, and it works by using something called file headers. Let's break it down in simple terms.

What Is File Carving?

Every file type—like photos, videos, or documents—has a unique "signature" at the beginning of the file called a header. Think of a file header as the file's ID card. It tells your computer what kind of file it is so the right program can open it. For example:

A JPEG photo starts with the header FF D8 FF.
A Windows executable file (.exe) starts with MZ (in hexadecimal: 0x4D 0x5A).

File carving tools scan a storage device looking for these unique headers. Once a header is found, the tool tries to extract the data that follows it to recover the file. This process works on hard drives, USB drives, memory cards, and even devices like Android phones—basically any storage medium.

Why Is File Carving So Useful?

File carving doesn't rely on the file system (the structure that organizes files on your device). This makes it powerful because it works even if:

The file system is corrupted or missing.
The drive has been reformatted.
There are no traditional file records left.

In short, file carving only cares about the raw data, not how the files were organized.

How File Headers Help in Recovery

File headers can range from a few bytes to dozens of bytes. The longer the header, the more accurate the detection, because longer headers reduce the chances of a false match. However, no method is perfect. File carving can sometimes produce false positives—the tool might think it found a file when it actually didn't. That's why testing the tool and ensuring it's looking for the correct file signatures is so important.

When Should You Use File Carving?

File carving is great for situations like:

Deleted Files: If a user has intentionally or accidentally deleted files, carving can help recover them.
Old Data: When the system activity happened long ago, file carving can uncover older data.
System Failures: If a hard drive was formatted or an operating system was reinstalled, carving can still find data.

Tools for File Carving: Why PhotoRec Stands Out

One of the best tools for file carving is PhotoRec, a free and open-source program that's been developed over the past 20 years. It's highly respected in the forensic community and consistently ranks as one of the top file recovery tools. If you're looking to understand how to run PhotoRec and what its output looks like, I recommend checking out the detailed article linked below.

https://www.cyberengage.org/post/digital-evidence-techniques-for-data-recovery-and-analysis

The Limitations of File Carving

While file carving is a powerful technique, it's not perfect:

False Positives: Shorter headers can sometimes match random data.
Fragmentation: Files stored in non-contiguous clusters may be difficult to fully recover.
Corrupted Files: If the recovered file is incomplete or the size is miscalculated, it may not work properly.

Despite these challenges, file carving remains one of the best options for recovering data, especially when no other methods are available.
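To make the header-scanning idea concrete, here is a minimal PowerShell sketch that looks for JPEG signatures (FF D8 FF) in a raw image. The path is a placeholder, and reading the whole file into memory only suits small test images; real carvers stream the data and also work out where each file ends:

$bytes = [System.IO.File]::ReadAllBytes('E:\evidence\usb-test.dd')   # hypothetical test image
for ($i = 0; $i -lt $bytes.Length - 2; $i++) {
    # A JPEG header is the three-byte sequence FF D8 FF
    if ($bytes[$i] -eq 0xFF -and $bytes[$i+1] -eq 0xD8 -and $bytes[$i+2] -eq 0xFF) {
        "Possible JPEG header at offset $i"
    }
}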
Conclusion: A Powerful Tool for Recovery

File carving is an incredible technique for recovering lost or deleted files. Tools like PhotoRec make it accessible to anyone, whether you're a forensic expert or just someone trying to recover a precious photo. Best of all, PhotoRec is free and open-source, so you can get started right away. Just remember, file carving takes time and isn't foolproof, but when it works, it feels like magic!

------------------------------------------------Dean---------------------------------------------------
- Metadata Investigation(Exiftool): A Powerful Tool in Digital Forensics
Metadata, often described as "data about data," is a treasure trove of hidden information embedded within files. While it's not something most users think about, metadata can provide critical evidence in digital investigations. From timestamps to GPS coordinates, metadata stays with a file no matter where it goes—even to removable drives or the cloud.

What Is Metadata, and Why Is It Important?

Many file formats, like Word documents, PDFs, and images, contain metadata. This data includes details such as:

Who created the file
When it was last modified
How long it was edited
Where it was created (e.g., GPS data in photos)

What makes metadata special is its resilience. Even if a file is carved out of unallocated space during data recovery, the metadata embedded within the file remains intact. This helps forensic analysts piece together the context of the file's history.

Real-World Cases Highlighting Metadata's Power

Metadata has played a crucial role in solving numerous cases. Here are some notable examples:

Intellectual Property Theft: A recovered Word document with embedded company information helped prove it originated at a competitor.
Military Vulnerabilities: Insurgents in Afghanistan targeted attack helicopters after extracting GPS coordinates from photos posted online by U.S. military personnel.

These examples show that metadata isn't just a technical curiosity—it's often the key to cracking high-stakes investigations.

Tools for Metadata Analysis: Exiftool

Thankfully, extracting metadata doesn't require expensive tools. Exiftool, created by Phil Harvey, is an open-source command-line tool that supports metadata extraction from nearly 180 file formats. Its flexibility and continuous updates make it an essential addition to any forensic toolkit.

Why Exiftool Stands Out:

Wide Format Support: Handles a broad range of file types, from Office documents to images and videos.
Detailed Metadata Extraction: Provides deep insights, including timestamps, creator information, and file modification history.
Free and Open Source: Accessible to anyone, from seasoned professionals to hobbyists.

Metadata in Action: A Case Study with Microsoft Office

Metadata varies by file type and the software used to create it. Let's take a closer look at a document. Some metadata fields you might find include:

Creator Name: Who originally created the file.
Last Modified By: The user who last edited the document.
Company Name: The organization tied to the document.
Edit Time and Revision Count: How long the document was worked on and how many changes were made.
Create and Modify Dates: Embedded timestamps that track when the file was created and last changed.

Example: Uncovering a Sabotage Incident

Modify Date vs. File Creation Date: The embedded Modify Date differed from the File Creation Date on the forensic system. This discrepancy suggested the document had been modified on a different system before being transferred. Such insights helped uncover malicious activity and track the attackers' actions.

Why Metadata Matters in Forensics

Metadata provides a layer of context that's hard to manipulate. While file system timestamps can be easily altered, embedded metadata follows the file and retains its integrity, offering:

Clues about file origin
Timelines of creation and modification
Links to individuals or organizations

For forensic analysts, metadata is often the linchpin in building a case.
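A few exiftool invocations cover most day-to-day needs. The file and folder names here are placeholders:

exiftool document.docx                     # dump every metadata field exiftool knows about
exiftool -time:all -a image.jpg            # show all timestamps, including duplicate tags
exiftool -gps:all image.jpg                # GPS-related tags only
exiftool -r -csv -gpslatitude -gpslongitude C:\Photos > gps.csv   # recurse a folder into CSV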
-------------------------------------------------------------------------------------------------------------

Image Metadata in Digital Forensics

It's no secret that images contain more than just pixels. Embedded metadata, including GPS coordinates, timestamps, and camera details, can reveal a lot about where and when a photo was taken. This hidden data has been a valuable tool in digital forensics for years, helping investigators track movements, verify evidence, and even uncover manipulation attempts.

How GPS Metadata Ends Up in Images

Most modern smartphones and digital cameras have the option to embed GPS coordinates in photos. While this feature is often turned off by default due to privacy concerns, many users enable it—sometimes without realizing the long-term implications. For example, a traveler might activate location tagging while on vacation to easily sort and upload their photos. But if they forget to turn it off, every picture they take afterward continues to record exact latitude and longitude. If these images are uploaded to certain platforms, their metadata might remain intact for anyone to extract.

Historically, social media sites have handled metadata differently:

Twitter used to retain image metadata for years but has since removed it upon upload.
Flickr still maintains metadata, making it a useful source for investigators.
Blogs and personal websites often store unaltered image files, preserving valuable metadata.

Since metadata presence varies, forensic analysts must examine each image individually to determine whether useful data is available.

What Can Image Metadata Reveal?

The Exif (Exchangeable Image File Format) standard stores a wealth of information in image files, including:

Camera make and model
Camera settings
Timestamps
Copyright information (sometimes identifying the owner of the camera)
Post-processing software (if the image was edited in Photoshop, Lightroom, etc.)
Thumbnail previews
GPS coordinates

Among these, GPS data is often the most valuable in forensic investigations. It can place a device at a specific location at a precise time, offering key evidence in criminal cases.

Metadata Can Be Manipulated – Proceed with Caution

While image metadata is a powerful tool, it is not foolproof. Metadata can be:

Removed using built-in smartphone settings or third-party tools.
Edited using Exif manipulation software like Exiftool (yes, the same tool used for analysis can also modify metadata).
Spoofed by altering a device's GPS settings or using software to fake a location.

So, how do investigators verify whether metadata has been tampered with?

Look for supporting evidence: Are there multiple images from the same location? Do timestamps match other records (e.g., phone logs, social media activity)?
Check for search history or installed tools: Has the person searched for ways to edit metadata? Are Exif editing apps installed?
Analyze multiple sources: Instead of relying on one file, cross-check data from different forensic artifacts (e.g., cloud backups, messaging apps, or system logs).

Digital forensics isn't just about finding a single piece of evidence—it's about building a strong case by layering multiple findings.

Final Thoughts

Metadata is a goldmine of information in digital forensics, offering insights that go far beyond surface-level data. Tools like Exiftool make it easy to extract and analyze metadata, empowering investigators to solve cases ranging from intellectual property theft to cyberattacks.
In the world of digital forensics, the smallest details can make the biggest difference. Keep digging—you never know what secrets an image might hold! --------------------------------------------------Dean------------------------------------------
- Metadata Recovery: Bringing Deleted Files Back to Life
When a file is deleted from a computer, it's not really gone. The data remains on the disk until something else overwrites it. This opens a window for forensic experts to recover these "lost" files.

What Is Metadata Recovery?

Metadata is like the "address book" of your computer's file system. It keeps track of information about files, like:

File name
Size
Location on the disk
Dates when the file was created, modified, or accessed

When a file is deleted, the metadata is often still there. Forensic tools can use this information to locate the file's data and attempt to restore it.

How Files Are Stored on Disk

Disks store files in small chunks called clusters. These clusters can either be:

Allocated: Used by a file
Unallocated: Marked as free, but possibly still containing leftover data from deleted files

When a file is deleted, its clusters are marked as unallocated. However, the actual data remains until new files overwrite those clusters. This means that forensic experts can recover the data if it hasn't been overwritten yet.

Two Ways to Recover Deleted Data

There are two main methods for recovering deleted files:

1. Metadata Recovery

This method is faster and more reliable. Forensic tools examine the metadata to find:

Where the file was stored
How big it is
What type of file it was

The tool then retrieves the data from the disk and reassembles the file. If the metadata hasn't been overwritten, recovery is usually successful.

2. Data Layer Recovery

If metadata is missing or damaged, tools can directly scan the unallocated clusters on the disk. They search for file signatures (unique patterns at the start of a file, called headers). For example:

A Windows executable file (.exe) starts with MZ (in hexadecimal: 0x4D 0x5A)
A JPEG image starts with FF D8 FF

This method, called file carving, can find files without metadata. However, it has some downsides:

It might produce false positives (random data that looks like a file).
It struggles with fragmented files (files stored in non-adjacent clusters).

Challenges in File Recovery

While metadata recovery is powerful, it's not foolproof:

If clusters are reused for new files, recovery fails.
Actions like formatting a drive can erase metadata entirely.
Data layer recovery can't always guess the exact size of a file, leading to partial or corrupted results.

Solid-state drives (SSDs) add another layer of difficulty. They use features like wear leveling, which spreads out data to extend the drive's life. This makes it harder to pinpoint and recover specific files.

Tools for Metadata Recovery

Several forensic tools make metadata recovery easy:

FTK Imager: A free tool that can identify deleted files and export them.
Autopsy: An open-source forensic suite with metadata recovery features.
The Sleuth Kit: A toolkit for forensic analysis, including a tool called tsk_recover for undeleting files.

These tools often highlight deleted files with symbols (e.g., a red "X") to indicate their unallocated status. Analysts can then attempt to recover them. FTK Imager uses metadata to retrieve file data, and in many cases you'll get fully intact files.

---------------------------------------------------------------------------------------------------------
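For the command-line route, The Sleuth Kit makes the two recovery approaches explicit. A quick sketch, with the image path assumed:

fls -rd E:\evidence\disk.dd                       # list deleted entries recursively via file system metadata
tsk_recover E:\evidence\disk.dd E:\recovered      # recover unallocated (deleted) files into a folder
tsk_recover -e E:\evidence\disk.dd E:\all_files   # -e exports allocated files as well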
Final Thoughts

Metadata recovery is an essential technique in digital forensics. It's a fast and reliable way to bring deleted files back to life, even when they seem lost. While it's not perfect—especially for fragmented or overwritten files—it's often the first step investigators take when analyzing a disk. With tools like FTK Imager and Autopsy, recovering deleted files is more accessible than ever.

-------------------------------------------Dean-----------------------------------------------------
- Volume Shadow Copy extraction with KAPE(including data/file recovery)
---------------------------------------------------------------------------------------------------------

Before we dive into today's discussion, I want to let you know there's already a comprehensive article available on extracting and examining Volume Shadow Copies for forensic analysis. The tools used in that case are symbolic links and Shadow Explorer. You can check out the detailed guide here:

https://www.cyberengage.org/post/extracting-examine-volume-shadow-copies-for-forensic-analysis

---------------------------------------------------------------------------------------------------------

When it comes to forensic investigations, Volume Shadow Copy (VSC) analysis can be a game-changer. This technique allows investigators to access previous states of a file system, giving them the ability to uncover data that may have been deleted, overwritten, or altered. However, one unique aspect of VSC analysis is that it typically requires access to a complete disk image. That's where tools like KAPE (Kroll Artifact Parser and Extractor) come in, offering a flexible solution when creating a full disk image isn't feasible.

Why Does VSC Analysis Need a Full Disk Image?

VSCs are essentially differential snapshots of a system, capturing only the changes made since the last snapshot. To rebuild a previous state of the volume, these snapshots need to be applied to the current state of the file system. This explains why VSC analysis traditionally requires access to the entire volume or disk image—without it, you can't reconstruct the full picture.

The Challenge of Limited Access

In many situations, acquiring a full disk image might not be practical. For example, the system might still be in active use, or there could be concerns about the time and storage needed for such a large acquisition. This is where KAPE steps in as a powerful alternative.

How KAPE Simplifies VSC Analysis

KAPE is designed to collect forensic data quickly and efficiently, and it includes a feature specifically for handling VSCs. When the "Process VSCs" option is enabled, KAPE takes the following steps:

Identify and Mount VSCs: KAPE detects any existing Volume Shadow Copies on the system.
Collect Evidence from VSCs: It performs the same data collection tasks on each VSC that it does on the current file system.
Deduplicate: To optimize efficiency, KAPE compares files in the VSCs to those on the current file system. If a file hasn't changed, it won't be collected again, saving both time and storage space.
Organize Results: The data from each VSC is neatly organized into folders corresponding to the VSC names, such as VSS11, VSS12, and so on. This naming convention aligns with Windows' internal numbering for VSCs, which increments as new snapshots are created.

Once you unzip the collected output and mount it, you'll see the same collection run against both the live volume (the C drive) and each shadow copy (e.g., VSS16), each in its own folder.

Advantages of KAPE for VSC Triage

Using KAPE for VSC analysis has several benefits (an example command follows this list):

Access to Historical Data: Even if older versions of files have been modified or deleted, KAPE ensures that you can still analyze them.
Flexible Triage Options: KAPE's ability to process VSCs during triage collection means you don't need a complete disk image to capture valuable historical data.
Time and Space Efficiency: The deduplication feature significantly reduces redundant data collection, making the process faster and less storage-intensive.
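On the command line, VSC processing is a single switch. A minimal sketch, with the target name and destination path assumed:

kape.exe --tsource C: --target RegistryHives --tdest D:\Kape_Out\tdest --vss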
Consider the Trade-offs

Longer Processing Times: Collecting data from multiple VSCs adds to the overall processing time.
Larger Triage File Sizes: Even with deduplication, the additional data collected from VSCs will increase the size of your output files.

When to Use VSC Processing

If you're unsure whether you'll need older versions of a file system, it's often better to err on the side of caution and enable the "Process VSCs" option during triage. The ability to access historical snapshots of data can provide critical insights that might otherwise be missed.

Conclusion

Volume Shadow Copy analysis is a powerful tool in the forensic investigator's arsenal, and KAPE makes it easier and more efficient than ever to access and analyze this data. By enabling the "Process VSCs" option, investigators can extract valuable historical data without the need for a full disk image, saving time and resources while uncovering key evidence. However, it's essential to weigh the trade-offs and plan accordingly to get the most out of this feature.

Keep learning, exploring, and experimenting with different tools. They all offer unique benefits and can deepen your forensic capabilities. See you in the next article!

----------------------------------------------Dean-------------------------------------------------
- Cloud Storage Affect on file Timestamps and collection with KAPE: A Forensic Guide
😂 The Final Cloud Storage Article – I Promise! ☁️

I know you all must be thinking, "Another cloud storage article?" But trust me, this is the last one (for now)! Hopefully, you're not too bored yet. 😆 Let's dive right in and wrap up this series with something insightful. Stay with me till the end—you won't want to miss this one! 🚀

Cloud storage has revolutionized how we access and synchronize files across multiple devices. Whether using Google Drive, Dropbox, Box, or OneDrive, users can seamlessly move data between desktops, laptops, and mobile devices. However, this convenience presents challenges for forensic investigators, particularly when analyzing file timestamps. Timestamps help determine when a file was created, modified, accessed, or deleted. But cloud synchronization can alter these timestamps, sometimes making forensic investigations more complex.

-------------------------------------------------------------------------------------------------------

Key Timestamps in Cloud Storage

Modification Time (Last Modified Date)
Creation Time (Date Created)
Access Time (Last Accessed Date)

While modification time is generally preserved across all devices, creation and access times behave differently depending on the cloud storage provider.

-------------------------------------------------------------------------------------------------------

How Different Cloud Providers Handle Timestamps

Each cloud provider treats file timestamps uniquely.

1. Modification Time
✅ Preserved across all devices – If you edit a file on one system, the change is reflected with the same timestamp on all synchronized devices.

2. Creation Time
🔹 OneDrive, Dropbox, and Box: When a file syncs to a new device, the creation time is reset to the synchronization time, meaning the original creation date is lost unless retrieved from the source device.
🔹 Google Drive for Desktop: Unlike other platforms, Google Drive preserves the original creation time across all synchronized devices.

3. Access Time
🔹 Google Drive for Desktop, Dropbox, and Box update access time when a file is opened, even though this behavior contradicts traditional filesystem norms.
🔹 Only Google Drive for Desktop ensures that the access time remains consistent across all devices.

-------------------------------------------------------------------------------------------------------

Challenges with Virtualized Filesystems in Cloud Storage

Many modern cloud storage services do not store all files locally. Services like Box Drive, OneDrive's "Files On-Demand," and Dropbox's "Smart Sync" create virtual filesystems, making forensic collection more difficult.

Virtual Filesystem Workarounds:

✔️ If analyzing a live system, use forensic tools like FTK Imager or KAPE to capture available files.
✔️ Be cautious—retrieving files may automatically download cloud-only files, potentially overwriting unallocated space.
✔️ When possible, forensic acquisition should include both local and cloud-based records for a complete picture.

-----------------------------------------------------------------------------------------------------

Forensic Best Practices for Cloud Storage Investigations

Cross-check timestamps – Compare filesystem timestamps with cloud metadata logs for discrepancies.
Identify virtual file behavior – Determine whether files are local or cloud-only.
Use forensic tools wisely – Applications like FTK Imager, KAPE, and specialized SQLite parsers can extract valuable timestamp data.
Capture logs from cloud services – Business-tier cloud storage often retains detailed logs of file access, downloads, and deletions.
Consider legal implications – Downloading cloud-only files during forensic analysis can alter data and potentially breach privacy regulations.

-------------------------------------------------------------------------------------------------------

Collection with KAPE

Cloud storage applications like OneDrive, Google Drive, Dropbox, and Box have become essential in modern computing, making them a goldmine of forensic evidence. Whether you're investigating data theft, unauthorized file access, or insider threats, these platforms can provide key insights. However, due to their on-demand file access and virtualized storage techniques, traditional forensic methods don't always work. This is where KAPE (Kroll Artifact Parser and Extractor) comes in—a powerful forensic tool that simplifies the acquisition and processing of forensic artifacts, including cloud storage metadata and user files.

-------------------------------------------------------------------------------------------------------------

Using KAPE

KAPE can:
✅ Extract metadata and files from cloud storage apps
✅ Work on live systems or forensic images
✅ Identify critical artifacts that standard imaging tools might miss

KAPE is scriptable and customizable, making it an invaluable tool for forensic investigators dealing with cloud storage investigations.

-------------------------------------------------------------------------------------------------------------

Using KAPE to Extract Cloud Storage Artifacts

KAPE relies on target files (.tkape files) to specify which artifacts should be collected. For cloud storage investigations, KAPE comes with predefined target files for:

OneDrive
Google Drive
Dropbox
Box Drive

These target files are further categorized into:

🔹 Metadata Targets – Collect metadata about cloud files (useful for tracking file access and modification).
🔹 UserFiles Targets – Capture actual files stored in the local cloud folder (be cautious, as this may trigger automatic downloads).

For a comprehensive collection, KAPE provides compound target files:

CloudStorage_Metadata.tkape – Captures metadata from all cloud storage apps.
CloudStorage_All.tkape – Collects both metadata and local files for a complete forensic snapshot.

⚠️ Important Note: If a file is cloud-only and not cached locally, it won't be collected by KAPE. However, attempting to collect user files may trigger downloads from the cloud, potentially overwriting unallocated space.
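In practice, a metadata-first collection keeps you on the safe side of the download problem. A sketch, with the destination path assumed:

kape.exe --tsource C: --target CloudStorage_Metadata --tdest D:\Kape_Out\Cloud
kape.exe --tsource C: --target CloudStorage_All --tdest D:\Kape_Out\Cloud   # may trigger cloud downloads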
-------------------------------------------------------------------------------------------------------------

Beyond KAPE: Other Key Cloud Storage Artifacts

While KAPE simplifies collection, forensic analysts should also explore other sources for cloud storage evidence:

1. Browser History and Cloud URLs
Most cloud applications have a web interface that leaves traces in browser history. URLs can provide insight into:
✅ Files viewed or edited online
✅ Deleted items and version history
✅ External file sharing
For example, OneDrive URLs include user IDs and document references, which can be cross-referenced with local metadata.

2. Windows Registry Entries
Cloud storage applications leave registry traces that can help reconstruct file activity. Searching for terms like:
OneDrive
Google Drive
Dropbox
Box Drive
…can uncover details about previously accessed cloud files, even if they are no longer present.

3. Windows LNK (Shortcut) Files
LNK files store metadata about files that were opened, even if they have since been deleted. A shortcut pointing to a OneDrive document proves that the file existed, even if it’s no longer in the cloud folder.

4. Cloud Storage Logs
Many cloud storage apps keep detailed logs in local directories.
Dropbox logs track file synchronization and deletions.
Box logs record detailed file access timestamps.
Google Drive logs store user interactions with cloud files.
These logs can help rebuild past file activity, even if the cloud account has changed or been deleted.

-------------------------------------------------------------------------------------------------------------

Best Practices for Cloud Storage Forensics

✔️ Prioritize metadata first – Avoid triggering downloads that overwrite evidence.
✔️ Use KAPE alongside traditional forensic tools – Combine it with FTK Imager, Autopsy, or X-Ways for deeper analysis.
✔️ Check browser history and registry keys – These often contain evidence that local cloud folders don’t.
✔️ Correlate timestamps across multiple sources – Cloud storage timestamps can differ from filesystem timestamps.
✔️ Be mindful of legal implications – Cloud files may be outside the scope of a forensic warrant.

-------------------------------------------------------------------------------------------------------------

Conclusion

Cloud storage investigations require a multi-layered approach. While KAPE makes extracting cloud storage artifacts fast and efficient, analysts should also examine browser history, registry entries, logs, and system artifacts to get a complete picture. Understanding how virtualized cloud filesystems work and knowing where to look for hidden evidence can make all the difference in a successful forensic investigation.

---------------------------------------Dean-------------------------------------------------------------
- Box Cloud Storage Forensic Investigations: Logs, Cached Files, and Metadata Analysis
Box is one of the most forensic-friendly cloud storage applications, offering extensive logging, locally cached files, and SQLite databases that track user activity and file metadata. This makes it a goldmine for forensic investigators looking to analyze user interactions, deleted files, and cloud-stored documents.

-------------------------------------------------------------------------------------------------------------

1️⃣ Box Local Artifacts: Logs, Databases, and Cache Files

🔍 Where Does Box Store Metadata and Logs Locally?

sync.db – %AppData%\Local\Box\Box\data\ – Tracks both locally cached & cloud-only files
streemsfs.db – %AppData%\Local\Box\Box\data\ – Lists cached files stored locally
metrics.db – %AppData%\Local\Box\Box\data\ – Stores Box login email & authentication details
Box_Streem logs – %AppData%\Local\Box\Box\logs\ – Detailed user activity logs (file access, sync)
.cache – %UserProfile%\Box\ – Stores offline & temp cached files

📌 Forensic Use:
✅ Recover cached files, including cloud-only items accessed offline
✅ Track user logins and Box authentication details
✅ Extract timestamps and SHA1 hashes for forensic verification

-------------------------------------------------------------------------------------------------------------

2️⃣ Understanding Box's NTFS Reparse Points & Virtual File System

🔍 How Does Box Handle Cloud-Only Files?
Box uses NTFS reparse points to create a virtualized file system, meaning:
Files appear local, but the actual content may be in the cloud.
Interacting with a file triggers a real-time download.
Box Drive contents won’t be visible in traditional forensic imaging.

📍 Forensic Implications:
🔸 Standard disk imaging won’t capture Box cloud files if they are not cached.
🔸 Investigators must access live systems or parse Box databases to extract metadata.
🔸 Offline files can be recovered from the cache directory.

-------------------------------------------------------------------------------------------------------------

3️⃣ Extracting Metadata from Box SQLite Databases

🔍 1. Analyzing sync.db: The Box File Tracker
Note: Deleted items in the Box Drive trash folder are not tracked in this database.

📍 Located at: %AppData%\Local\Box\Box\data\sync.db

name – Original filename
checksum – SHA1 hash of file
size – File size (bytes)
content_created_at – File creation time (Unix epoch)
content_updated_at – Last modification time (Unix epoch)
parent_item_id – Parent folder, which can be cross-referenced with the box_id field to find the folder name

Note: Timestamps do not appear to update as expected when interfacing with Box on the website via a browser. As an example, content_created_at and content_updated_at are both set to the original file modification time when the file is added via the browser. When using Windows File Explorer to interact with files, timestamps update as expected.

Useful fields from the local_item table:
inode – Universal ID assigned to the file, which can be useful for quickly matching with other databases like streemsfs.db

📌 Forensic Use:
✅ Identify cloud-only files and locally stored files
✅ Verify file integrity using SHA1 checksums
✅ Correlate file timestamps with user activity
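As a quick illustration, the sketch below pulls those columns out of an exported copy of sync.db and converts the epoch timestamps to UTC. The column names come from the list above, but the table name (box_item here) is an assumption; inspect the schema first:

import sqlite3
from datetime import datetime, timezone

def to_utc(epoch):
    # sync.db stores Unix epoch seconds; 0/NULL means no value recorded.
    return datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat() if epoch else None

# Work on an exported copy of the database, never the live file.
db = sqlite3.connect(r"C:\export\sync.db")
for name, sha1, size, created, updated in db.execute(
        "SELECT name, checksum, size, content_created_at, content_updated_at "
        "FROM box_item"):  # table name is an assumption -- verify with the schema
    print(name, sha1, size, to_utc(created), to_utc(updated))
db.close()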
🔍 2. Analyzing streemsfs.db: Tracking Locally Cached Files
Note: Both offline and online (cloud) items are tracked in this database, but deleted items in the Box Drive trash folder are not.

📍 Located at: %AppData%\Local\Box\Box\data\streemsfs.db

name – Cached file name
createdAtTimestamp – File creation time
modifiedAtTimestamp – Last modified time
accessedAtTimestamp – Last accessed time (when the file was opened from Box Drive)
markForOffline – Files marked for permanent offline use
inodeId – Identifier used to determine parent folders and as a foreign key in the cachefiles table
parentInodeId – inodeId of the parent folder
folderFetchTimestamp – When folder content was last synchronized with the cloud

Note: Timestamps do not appear to update as expected when interfacing with Box on the website via a browser. As an example, createdAtTimestamp, modifiedAtTimestamp, and accessedAtTimestamp are all set to the original file modification time, and accessedAtTimestamp does not update when the file is accessed solely via the browser. When using Windows File Explorer, timestamps update as expected in the database, with the exception of the access timestamp.

Important tables to look for: cachefiles, fsnodes

Useful fields from the cachefiles table:
cacheDataId – Filename within the Box cache folder of the locally saved file
size – File size of the cached file (in bytes)
inodeId – Identifier used as a foreign key in the fsnodes table
age – Time the file was cached; a "0" value means not yet cached (Unix epoch time)

Correlating this data between the tables one record at a time is a difficult task to do by hand, so we will use a freely available script called streemBOXlite. This can be particularly helpful if there are many files cached on the system. You should run this script using Python 3.

Command: akash@DESKTOP-DCLRDM4:/mnt/c/Users/Admin/Downloads$ python3 streemBOXlite.py -p /mnt/d/streem/ -c -o /mnt/c/Users/Admin/Downloads -v -q

📌 Forensic Use:
✅ Determine what files were accessed and stored locally
✅ Track offline file synchronization with the Box cloud
✅ Recover deleted or previously cached files
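The same correlation the script performs can also be sketched directly in SQL. A minimal example, assuming the name/timestamp fields live in fsnodes and the cache fields in cachefiles as described above:

import sqlite3

db = sqlite3.connect(r"C:\export\streemsfs.db")
rows = db.execute("""
    SELECT f.name, f.accessedAtTimestamp, c.cacheDataId, c.size
    FROM fsnodes AS f
    JOIN cachefiles AS c ON c.inodeId = f.inodeId
    WHERE c.age != 0    -- age = 0 means the content was never cached
""")
for name, atime, cache_id, size in rows:
    # cacheDataId is the filename to look for under %UserProfile%\Box\.cache
    print(name, atime, cache_id, size)
db.close()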
🔍 3. Extracting User Login & Activity from metrics.db

📍 Located at: %AppData%\Local\Box\Box\data\metrics.db

payload – Stores Box login email & user authentication data

📌 Forensic Use:
✅ Identify Box accounts linked to the system
✅ Correlate login activity with user behavior
✅ Investigate unauthorized access to enterprise Box accounts

-------------------------------------------------------------------------------------------------------------

4️⃣ Investigating Box Log Files (Box_Streem logs)

🔍 Box logs detailed user actions, including:
✅ File uploads & downloads
✅ Folder structure changes
✅ Synchronization errors
✅ Authentication attempts

📍 Log Location: %AppData%\Local\Box\Box\logs\

📌 Forensic Use:
✅ Track when files were accessed or modified
✅ Reconstruct user file activity timelines
✅ Identify anomalous file access patterns

-------------------------------------------------------------------------------------------------------------

5️⃣ Recovering Deleted & Cached Files from Box Drive

🔍 How to Recover Deleted Files from Box?
Locally Deleted Files → Found in the Recycle Bin
Box Cloud Trash → Retains deleted files for 30-120 days
Cached Files → Found in %UserProfile%\Box\.cache

📍 Forensic Strategy:
1️⃣ Extract file metadata from sync.db & streemsfs.db
2️⃣ Search .cache for offline versions of deleted files
3️⃣ Check Box logs for file deletion records
4️⃣ Retrieve deleted files from the Box cloud if enterprise logs are available

📌 Forensic Use:
✅ Recover previously cached Box files even if deleted from the cloud
✅ Identify sensitive documents removed from the system
✅ Correlate file deletion with user activity logs

-------------------------------------------------------------------------------------------------------------

🚀 Summary: Why Box is a Goldmine for Forensics

✔ Tracks file hashes (SHA1), timestamps, and offline/online status
✔ Maintains detailed logs for file access, sync, and user activity
✔ Stores locally cached files even if deleted from the cloud
✔ Allows forensic reconstruction of user interactions with cloud storage

As organizations increasingly rely on Box for cloud storage and collaboration, understanding Box forensics is essential for digital investigations. Box provides detailed activity logs, file versioning, and sharing records, which can help forensic analysts track user actions, detect unauthorized access, and reconstruct file history.

🔍 Stay proactive, test forensic scenarios, and refine your analysis techniques—because every digital action leaves a trace! 🚀
- Investigating Dropbox Forensics
Dropbox has long been a challenging cloud storage service to investigate due to encrypted databases, hidden caches, and complex storage mechanisms. However, recent changes in Dropbox’s architecture have introduced unencrypted metadata sources, making forensic analysis more effective.

🚀 Key Topics Covered:
✅ Locating and analyzing Dropbox metadata & configuration files
✅ Recovering deleted files from cache and database records
✅ Investigating Dropbox sync activity and user file interactions
✅ Extracting evidence from SQLite databases & JSON logs

-------------------------------------------------------------------------------------------------------------

1️⃣ Locating Dropbox Artifacts on Windows

📌 Primary Dropbox Data Locations

Local Dropbox Folder – %UserProfile%\Dropbox\ – Stores synced files
Configuration Files – %UserProfile%\AppData\Local\Dropbox\info.json – Contains Dropbox settings & sync path
Cache Folder – %UserProfile%\Dropbox\.dropbox.cache\ – Stores recently deleted & cloud-only files
Sync Databases – %UserProfile%\AppData\Local\Dropbox\instance1\ – Track file sync activity
Registry Keys – SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\SyncRootManager\Dropbox – Identify sync location & settings

📌 Forensic Use:
✅ Identify Dropbox usage even if uninstalled
✅ Recover deleted files from the cache folder
✅ Find local & cloud-only files

-------------------------------------------------------------------------------------------------------------

2️⃣ Extracting Dropbox Configuration Details

Located at %UserProfile%\AppData\Local\Dropbox\info.json, this JSON file stores:
✅ Sync folder path (customized storage location)
✅ Dropbox Team info (Enterprise accounts)
✅ Subscription type (Basic, Plus, Business, Enterprise)

📌 How to extract data:
1️⃣ Open the file with a JSON viewer
2️⃣ Search for the path, is_team, and subscription_type fields

📌 Forensic Use:
✅ Verify Dropbox usage & account type
✅ Identify business accounts with enhanced logging
✅ Locate all synced files on disk
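For repeatable triage, the same fields can be pulled out with a few lines of Python. A minimal sketch, assuming the default info.json location and the usual per-account-type layout (keys vary by client version, hence the defensive .get() calls):

import json
from pathlib import Path

# info.json is keyed by account type (e.g. "personal" or "business").
info_path = Path.home() / "AppData" / "Local" / "Dropbox" / "info.json"
config = json.loads(info_path.read_text(encoding="utf-8"))

for account_type, details in config.items():
    print("Account type :", account_type)
    print("Sync folder  :", details.get("path"))
    print("Team account :", details.get("is_team"))
    print("Subscription :", details.get("subscription_type"))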
-------------------------------------------------------------------------------------------------------------

3️⃣ Recovering Deleted & Cloud-Only Files

🔍 The .dropbox.cache Folder
📍 Location: %UserProfile%\Dropbox\.dropbox.cache\

🔍 Purpose:
✅ A hidden folder present in the root of the user's Dropbox folder
✅ Can contain copies of deleted files not yet purged from the local file store
✅ Caches cloud-only files accessed recently
✅ Cleared automatically every 3 days

📌 How to recover files:
1️⃣ Check file headers to identify file types
2️⃣ Use forensic tools (e.g., FTK Imager) to analyze deleted file remnants
3️⃣ Correlate timestamps with Dropbox logs to determine deletion events

-------------------------------------------------------------------------------------------------------------

4️⃣ Investigating File Sync & Modification History

🔍 The aggregation.dbx Database
📍 Location: %UserProfile%\AppData\Local\Dropbox\instance1\
✅ Tracks previous file updates to Dropbox storage
✅ Stores full path, timestamp, and user attribution

📌 Forensic Use:
✅ Identify files recently added or modified
✅ Determine who edited the file via the snapshot table (edited_by_me field)
✅ Recover deleted or renamed files

🛠 Parsing the Database:
1️⃣ Open with a SQLite viewer
2️⃣ Extract the recent table
3️⃣ Convert JSON entries for easy reading

-------------------------------------------------------------------------------------------------------------

5️⃣ Extracting File Metadata & Starred Items

🔍 The home.db Database
📍 Location: %UserProfile%\AppData\Local\Dropbox\instance1\

📌 Key Tables:
recents – server_path, timestamp – Last updated files
starred_items – server_path, is_starred, timestamp – Files marked as "important"
sfj_resources – server_path, server_fetch_timestamp – Tracks the last sync from the cloud

📌 Forensic Use:
✅ Track starred files (user-marked important files)
✅ Determine the last files synced from the cloud
✅ Recover previous versions of files
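A minimal query sketch for these three tables, run against an exported copy of home.db (table and column names are taken from the section above; confirm them against the live schema):

import sqlite3

db = sqlite3.connect(r"C:\export\home.db")
queries = {
    "recents": "SELECT server_path, timestamp FROM recents",
    "starred_items": "SELECT server_path, is_starred, timestamp FROM starred_items",
    "sfj_resources": "SELECT server_path, server_fetch_timestamp FROM sfj_resources",
}
for table, sql in queries.items():
    print(f"--- {table} ---")
    for row in db.execute(sql):
        print(row)
db.close()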
-------------------------------------------------------------------------------------------------------------

6️⃣ Investigating Dropbox Sync History

🔍 The sync_history.db Database
📍 Location: %UserProfile%\AppData\Local\Dropbox\instance1\
✅ Records uploads, downloads, deletions, and modifications
✅ Tracks changes made locally vs. changes from the cloud

📌 Key Fields in the sync_history Table:
file_event_type – Type of action (add, delete, edit)
direction – Upload = Local → Cloud, Download = Cloud → Local
local_path – Full file path
timestamp – Time of last activity
other_user – "1" indicates the file is owned by another user

📌 Forensic Use:
✅ Identify if a file was deleted locally or via the cloud
✅ Track external file sharing & downloads
✅ Determine if files were modified before deletion
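For example, delete events can be pulled out of an exported copy of the database with a short script. The table name follows the heading above, and the 'delete' event value is an assumption based on the field description; verify both against your own data:

import sqlite3

db = sqlite3.connect(r"C:\export\sync_history.db")
for event, direction, path, ts, other in db.execute(
        "SELECT file_event_type, direction, local_path, timestamp, other_user "
        "FROM sync_history WHERE file_event_type = 'delete' ORDER BY timestamp"):
    # direction indicates whether the delete originated locally (upload)
    # or was pushed down from the cloud (download)
    print(ts, direction, path, "other_user=" + str(other))
db.close()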
-------------------------------------------------------------------------------------------------------------

7️⃣ Recovering Hidden Dropbox Files

🔍 The nucleus.sqlite3 Database
📍 Location: %UserProfile%\AppData\Local\Dropbox\instance1\sync
✅ Stores the names of local & cloud-only files
✅ Tracks synced & unsynced files

📌 Key Tables:
local_tree – value – Files currently synced locally
synced_tree – value – Mirrors local_tree but with extra metadata
remote_tree – value – Tracks cloud-only files (not synced)

📌 Forensic Use:
✅ Identify files stored only in the cloud
✅ Recover filenames of deleted cloud files
✅ Determine the last known location of missing files

-------------------------------------------------------------------------------------------------------------

8️⃣ Extracting Thumbnails of Deleted Dropbox Images

🔍 The tray-thumbnails.db Database
📍 Location: %UserProfile%\AppData\Local\Dropbox\machine_storage
✅ Stores references to image files once present in Dropbox
✅ Includes metadata on deleted images

📌 Key Fields:
file_name – Name of the image file
timestamp – Time the thumbnail was created

📌 Forensic Use:
✅ Recover filenames of deleted images
✅ Identify when images were last accessed or modified
✅ Correlate with file sync logs for evidence reconstruction

-------------------------------------------------------------------------------------------------------------

Extracting Icon Information

🔍 The icon.db Database
📍 Location: %UserProfile%\AppData\Local\Dropbox\instance1\
✅ Stores generated icon information, including full file paths

📌 Key Fields:
file_name – Full file path
created_time – Likely the creation time of the icon, not the time an item was added to the store (Unix epoch time)

-------------------------------------------------------------------------------------------------------------

9️⃣ Investigating Dropbox Enterprise & Team Accounts

🔍 Dropbox Business & Enterprise accounts offer extended logging and audit trails.
✅ Track file sharing, modifications, and deletions
✅ Identify file downloads & external access

📌 Forensic Use:
✅ Monitor suspicious file transfers within teams
✅ Track shared links & external file access
✅ Recover deleted files from extended retention policies

🛠 How to Access Dropbox Business Logs:
1️⃣ Log in to the Dropbox Admin Console
2️⃣ Navigate to Reports > Activity Logs
3️⃣ Filter logs by event type (file downloaded, shared, deleted, etc.)
4️⃣ Export logs in CSV format for offline analysis

-------------------------------------------------------------------------------------------------------------

🔎 Summary & Forensic Workflow

✅ Step 1: Identify the Dropbox installation (check info.json, registry keys, and the instance1 folder).
✅ Step 2: Extract file metadata (home.db, aggregation.dbx).
✅ Step 3: Recover deleted files (.dropbox.cache, sync_history.db).
✅ Step 4: Track cloud-only & unsynced files (nucleus.sqlite3).
✅ Step 5: Extract icon information (icon.db).
✅ Step 6: Analyze Dropbox Business logs for enterprise investigations.

We will explore more about Dropbox in the next article (Dropbox Forensic Investigations: Logs, Activity Tracking, and External Sharing), so stay tuned! See you in the next one.
- Dropbox Forensic Investigations: Logs, Activity Tracking, and External Sharing
Dropbox presents significant challenges for forensic investigations due to encrypted databases, limited endpoint logs, and obfuscated external IPs. However, with the right approach, investigators can extract valuable metadata, user activity records, and external sharing reports.

🚀 Key Topics Covered:
✅ Extracting Dropbox metadata from local databases
✅ Using SQLECmd to automate SQLite analysis
✅ Tracking user actions via cloud activity logs
✅ Investigating file sharing and external access

--------------------------------------------------------------------------------------------------------

1️⃣ Dropbox Local Artifacts: Databases & Metadata Files

🔍 Where Does Dropbox Store Metadata Locally?

info.json – %AppData%\Local\Dropbox\ – Dropbox configuration & sync folder location
.dropbox.cache – %UserProfile%\Dropbox\ – Cached & staged file versions
aggregation.dbx – %AppData%\Local\Dropbox\instance<#> – Recent file updates (JSON format)
home.db – %AppData%\Local\Dropbox\instance<#> – Tracks Dropbox file changes (Server File Journal)
sync_history.db – %AppData%\Local\Dropbox\instance<#> – Upload/download activity
nucleus.sqlite3 – %AppData%\Local\Dropbox\instance<#>\sync – List of local & cloud-only files

📌 Forensic Use:
✅ Identify Dropbox folder locations & linked accounts
✅ Recover deleted/staged files from .dropbox.cache
✅ Reconstruct file modification history using home.db

--------------------------------------------------------------------------------------------------------

2️⃣ Automating Dropbox Analysis with SQLECmd

🔍 What is SQLECmd?
SQLECmd is an open-source forensic tool created by Eric Zimmerman to automate SQLite database parsing. It utilizes map files to identify Dropbox, Google Drive, and other forensic databases, automatically extracting file activity, timestamps, and metadata.

What I did: I used gkape to extract all Dropbox-related files.

📍 Example: Running SQLECmd on Dropbox Data

SQLECmd.exe -d "C:\Users\Akash's\Incident response Dropbox" --csv .

📌 How It Works:
🔹 -d : Specifies the directory to scan (the Dropbox data folder)
🔹 --csv . : Saves results as CSV files in the current directory

📌 Forensic Use:
✅ Quickly extract metadata from Dropbox SQLite databases
✅ Identify synced, modified, and deleted files
✅ Analyze file movement within Dropbox folders
--------------------------------------------------------------------------------------------------------

3️⃣ Dropbox Logging: Free vs. Business Tiers

🔍 Comparing Activity Logs Across Dropbox Tiers

File Add/Edit/Delete Logs – Basic (Free): ❌ No logs – Business: ✅ Yes
File Download & Upload Logs – Basic (Free): ❌ No logs – Business: ✅ Yes
User Login & Session History – Basic (Free): ✅ Limited – Business: ✅ Full IP & geolocation
External File Sharing Reports – Basic (Free): ❌ No – Business: ✅ Yes
Export Logs to CSV – Basic (Free): ❌ No – Business: ✅ Yes
API Access for Logs – Basic (Free): ❌ No – Business: ✅ Yes

📌 Forensic Use:
✅ Track file modifications & deletion history
✅ Identify suspicious logins based on IP & location
✅ Monitor shared links for data exfiltration

--------------------------------------------------------------------------------------------------------

4️⃣ Accessing Dropbox Logs via the Admin Console

🔍 Steps to Retrieve Logs:
1️⃣ Log in to the Dropbox Admin Console
2️⃣ Navigate to Reports > Activity Logs
3️⃣ Use filters to narrow results by user, file, folder, or event type
4️⃣ Click "Create Report" to export logs in CSV format

📌 Forensic Use:
✅ Track who accessed or modified sensitive files
✅ Identify suspicious external IP addresses
✅ Monitor deleted files & restoration attempts

--------------------------------------------------------------------------------------------------------

5️⃣ Investigating IP Addresses & Geolocation Data

🔍 Analyzing IP Logs for Unauthorized Access
Dropbox logs user IP addresses and device locations, which can help track unauthorized logins.
⚠ Limitations: Dropbox obfuscates some external IP addresses, making it difficult to identify non-employee access.

6️⃣ Tracking External File Sharing & Anonymous Links

🔍 Dropbox Business "External Sharing" Report
Dropbox tracks files shared outside the organization, but free users lack visibility into external recipients.

7️⃣ Advanced Filtering for Dropbox Logs

🔍 Filtering Logs for Specific Investigations
Dropbox allows filtering logs by various criteria, improving forensic analysis.

Key Filters for Investigation:
Date Range – Identify activity before & after an incident
User – Track a specific employee's Dropbox usage
File/Folder Name – Find modifications to critical documents
Event Type – Focus on file downloads, sharing, or deletions

-------------------------------------------------------------------------------------------------------------

Before leaving, I want to mention that in forensics, not everything is a piece of cake; there are limitations, and Dropbox is no exception. Let's talk about them.

Understanding Dropbox Event Logging

All Dropbox users, regardless of their plan, have access to basic event logging through the "Events" section. However, users with Business or Advanced Business plans have access to more extensive logging, which is particularly valuable in forensic investigations.

What Does Dropbox Log?
Administrators of Advanced Business plans can track detailed user activity, including:
✔ File-level events – Adding, downloading, editing, moving, renaming, and deleting files.
✔ Sharing actions – Shared folder creation, shared link creation, Paper doc sharing, and permission changes.
✔ Access tracking – Internal and external interactions with shared files and folders.

These logs can be exported in CSV format, allowing investigators to filter data more effectively and analyze additional fields, such as IP addresses. Logs can be retained for years, making them a valuable resource for forensic analysis. However, new event entries may take up to 24 hours to appear.
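Once a CSV export is in hand, filtering it is straightforward to script. A small sketch that surfaces delete events for one user; the column headers and the email address are assumptions, so match them to the headers in your own export first:

import csv

# Column headers here ("Time", "Member", "Event type", "IP address") are
# assumptions -- adjust them to your own Dropbox activity export.
with open("dropbox_activity.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if "delete" in (row.get("Event type") or "").lower() \
                and row.get("Member") == "j.doe@example.com":
            print(row.get("Time"), row.get("Event type"), row.get("IP address"))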
Limitations and Blind Spots in Dropbox Logging

While Dropbox's cloud logging is valuable, it is important to recognize its limitations:
🔹 Limited endpoint visibility – Actions performed on locally stored Dropbox files may not be logged. For example, if a user copies a file from the Dropbox folder to their desktop or an external USB device, Dropbox may not record this activity.
🔹 Synchronization tracking challenges – While Dropbox logs when an unauthorized device connects and authenticates, it does not always track what files were synchronized to that device.
🔹 Difficulty reconstructing deleted files – Dropbox logs make it challenging to determine what files were once in a folder after they are deleted. However, Dropbox's versioning feature can sometimes help retrieve previous versions of a file.

Due to these blind spots, forensic investigators should not rely solely on cloud logs. Instead, combining cloud logs with endpoint forensic analysis (such as examining sync databases and local metadata) provides a more complete picture.

Best Practices for Dropbox Forensics

Since breaches and data theft are inevitable, proactive measures are necessary:
✔ Test forensic scenarios – Simulating real-world incidents can help determine the exact scope of logging available in your environment.
✔ Export and analyze logs regularly – Using CSV exports allows deeper filtering and historical tracking.
✔ Correlate with endpoint forensics – Combining Dropbox logs with local forensic evidence (if available) can help bridge information gaps.

While Dropbox logging isn't perfect, it is still a crucial tool for digital investigations. By understanding its capabilities and limitations, forensic analysts can make informed decisions when investigating incidents involving Dropbox.

-------------------------------------------------------------------------------------------------------

Conclusion

Dropbox forensics is a crucial aspect of modern investigations, as cloud storage plays a key role in how users store, access, and share files. By analyzing local sync folders, logs, SQLite databases, and API activity, forensic analysts can reconstruct file movements, modifications, deletions, and access history with precision. As cloud storage becomes an integral part of personal and corporate data management, the ability to track and analyze Dropbox activity is essential for digital forensics, cybersecurity, and incident response. Staying updated on Dropbox forensic techniques ensures that investigators can effectively follow digital trails and uncover critical evidence.

🚀 Keep exploring, stay curious, and refine your forensic skills—because digital evidence is everywhere! 🔍

🎯 Next Up: Box Forensics – Investigating Cloud Storage Security 🚀