
- Part 3- Important Registries related to System configuration overview
8. Network profile keys – first and last time connected:

Windows XP: The Legacy of Wireless Zero Configuration
In the Windows XP era, the Wireless Zero Configuration (WZC) service was the backbone of wireless network management. Deep within the registry at SOFTWARE\Microsoft\WZCSVC\Parameters\Interfaces\{GUID} lies a goldmine of data. Here, the machine meticulously records its encounters with wireless access points, preserving SSIDs and timestamps of connection. These SSIDs serve as digital footprints, revealing the machine's proximity to specific locations and networks.

Windows 7-10: The Evolution of Network List Profiles
In later versions, the Network List Profiles took center stage. Each subkey beneath the following key, named with a GUID, encapsulates a network's name and type, delineated by hexadecimal values:
SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Profiles
Whether wireless (0x47), wired (0x06), or broadband (0x17), each network type leaves its mark, illuminating the user's connectivity landscape.

Decoding the Temporal Enigma: DateCreated and DateLastConnected
The DateCreated and DateLastConnected timestamps, stored as 128-bit SYSTEMTIME structures, hold the key to unraveling network chronicles. Using a tool such as DCode, these timestamps unveil the saga of network encounters, from the first connection to the latest.
CMD: reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Profiles"

9. Shares and offline locations:
System Hive: SYSTEM\CurrentControlSet\Services\lanmanserver\Shares\
CMD: reg query HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\Shares\

Detecting Open Shares: A Critical Investigation
The first step in examining file shares is detecting their presence on a machine. In many cases, users may inadvertently share their entire hard drive, unknowingly granting remote access to sensitive files.
Identifying these open shares is crucial in understanding how files may have appeared on a workstation, thereby mitigating potential arguments regarding unauthorized access or file manipulation.

Client-Side Caching (CSC): The Silent Culprit
A covert method of file exfiltration lies in the client-side caching (CSC) feature of Windows Offline Files. By enabling offline access to specific files, users can discreetly cache them on their system, allowing access regardless of network connectivity. This poses a significant challenge in detecting unauthorized file transfers, as cached files may go unnoticed by traditional monitoring methods. However, examining the CSCFlag options can provide insight into how folders are cached, shedding light on potential file exfiltration attempts. Windows Offline Files caches files in the directory C:\Windows\CSC.
• CSCFlag = 0: Default option; the user must specify which files they would like to be cached (manual caching).
• CSCFlag = 16: Automatic document caching — "All files and programs that users open from the shared folder are automatically available offline" with "Optimize for performance" unchecked.
• CSCFlag = 32: Automatic program caching. Same as above, but with "Optimize for performance" checked.
• CSCFlag = 48: Caching is disabled.
• CSCFlag = 2048: Default Win7-10 setting until the user disables "Simple File Sharing" or uses the "advanced" sharing options. It is also the default setting for "HomeGroup."

Key Data Fields: Unraveling the Mystery
Max Uses: Total number of connections allowed to a single share. Defaults to 4294967295, the maximum 32-bit unsigned value (effectively unlimited).
Path: Local path of the shared folder.
Permissions: The value can help determine how a share was created. 0 is the default, meaning the GUI or PowerShell created the share. For Win7-10, a value of 9 indicates the share was created via advanced file sharing, while a value of 63 indicates a command line created the share.
Type: Type of device or share accessed
• 0 = Disk Drive or Folder
• 1 = Printer
• 2 = Device
• 3 = IPC
• 2147483648 = Admin (Disk, Printer, Device, or IPC)

Will continue in next blog...................

Akash Patel
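The CSCFlag and Type values above can be decoded mechanically from a share's registry data. A minimal sketch (the lookup tables come from the lists above; the helper name is my own):

```python
# Decode LanmanServer share metadata (CSCFlag and Type) using the
# value tables described above. Unknown values are reported as such.

CSC_FLAGS = {
    0: "Manual caching (user selects files)",
    16: "Automatic document caching",
    32: "Automatic program caching",
    48: "Caching disabled",
    2048: "Default Win7-10 / Simple File Sharing / HomeGroup",
}

SHARE_TYPES = {
    0: "Disk Drive or Folder",
    1: "Printer",
    2: "Device",
    3: "IPC",
    2147483648: "Admin (Disk, Printer, Device, or IPC)",
}

def describe_share(csc_flag: int, share_type: int) -> str:
    caching = CSC_FLAGS.get(csc_flag, f"Unknown CSCFlag ({csc_flag})")
    kind = SHARE_TYPES.get(share_type, f"Unknown Type ({share_type})")
    return f"{kind}; caching: {caching}"

print(describe_share(48, 0))  # → Disk Drive or Folder; caching: Caching disabled
```

Feeding the values from `reg query HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\Shares\` through a helper like this makes triage of many shares quick and repeatable.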
- Part 2- Important Registries related to System configuration overview
5. NTFS last access time on/off

The Misconception: One common misconception about last access timestamps is that they solely indicate the last time a file was opened or accessed by a user. This oversimplification overlooks the fact that these timestamps can be updated for reasons other than user interaction. For instance, a file may have its last access timestamp modified simply by being "touched" by the system, without any actual opening or viewing by a user.

Variables Impacting Last Access Timestamps: Several variables can impact the accuracy and reliability of last access timestamps. One significant factor is the operating system's settings. Microsoft disabled updates to last access timestamps on NTFS in Windows Vista and subsequent versions to enhance performance. It's crucial to note that this setting only affects NTFS; other file systems such as exFAT and FAT continue to update access timestamps normally.

Granularity and Enabling Last Access Timestamps: Last access timestamps typically have loose granularity, often accurate only to within one hour. Users can enable last access timestamp updates if an application relies on them, but enabling this feature may carry performance implications and should be considered carefully based on the specific forensic scenario.

Importance in Forensic Analysis: Despite their limitations, last access timestamps can help investigators determine when files were touched by the system, shedding light on user activity and potential evidence trails.

System Hive: SYSTEM\CurrentControlSet\Control\FileSystem
Cmd: reg query HKLM\SYSTEM\CurrentControlSet\Control\FileSystem

6. Network interfaces:
This key contains a plethora of invaluable details, including TCP/IP configurations, IP addresses, gateways, and DHCP-related information. For machines configured with DHCP, it reveals the assigned IP address, subnet mask, and DHCP server's IP address.
Significance in Forensic Investigations: Network interface information plays a crucial role in cases involving network-based evidence. It provides investigators with essential insights into how a system was connected to a network, be it wired, wireless, 3G, or Bluetooth. Moreover, the interface GUID serves as a valuable identifier for correlating additional network profile data stored in other registry keys, enhancing the depth of investigation.

Exploring Historical IP Information: On Windows 7 through Windows 10 systems, multiple subkeys under each interface provide historical IP information. These records, stemming from DHCP assignments, offer insight into previous IP address assignments. While not exhaustive, they contribute valuable context to investigative analysis. The last connected IP for each interface is particularly noteworthy, as it is stored under the parent GUID key.

System Hive: SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces
Cmd: reg query HKLM\SYSTEM\ControlSet001\Services\Tcpip\Parameters\Interfaces

Usefulness
• Lists the network interfaces of the machine
• Can determine whether the machine has a static IP address or is configured by DHCP
• Ties the machine to network activity that was logged
• Provides the interface GUID for additional profiling of network connections

7. Historical networks – NetworkList keys:

Understanding NLA Functionality: Network Location Awareness (NLA) operates by aggregating network information for each network interface a PC is connected to and generating a globally unique identifier (GUID) for each network. These identifiers, known as network profiles, facilitate the application of appropriate firewall rules based on the network's characteristics. For instance, different firewall profiles may be applied for public, home, or managed networks, allowing tailored security configurations.

Forensic Significance of NLA: From a forensic standpoint, NLA presents a wealth of valuable information.
By accessing NLA records, investigators can obtain a list of all networks a machine has ever connected to, identified by their DNS suffixes. This capability is instrumental in identifying intranets and external networks, offering crucial context for investigative analysis.

Geo-Location Insights: One of the most compelling aspects of NLA for forensic investigators is its potential to provide geo-location insights. By examining the networks a device has connected to and the associated timestamps, investigators can infer the geographical locations where the device has been used. This information can be pivotal in reconstructing timelines, establishing alibis, or corroborating witness statements in digital investigations.

Registry Details: NLA-related information is primarily stored under specific registry locations:
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList
SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Signatures\Unmanaged
SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Signatures\Managed
Historical data, including connection times, can be found under the Cache key:
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Nla\Cache

Utilizing ProfileGuid: One challenge in NLA analysis is determining the first and last time a network was connected to. Investigators can overcome this obstacle by noting the ProfileGuid, the unique identifier associated with each network, and mapping it to the connection times stored under the Profiles key.

Usefulness
• Identifying intranets and networks that a computer has connected to is incredibly important
• First and last time a network connection was made
• This will also list any networks that have been connected to via a VPN
• The MAC address of the gateway associated with an SSID could be physically triangulated

Will continue in next post................

Akash Patel
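The connection timestamps stored under each NetworkList profile (the DateCreated and DateLastConnected values) are REG_BINARY blobs holding a 128-bit Windows SYSTEMTIME structure: eight little-endian 16-bit words for year, month, day-of-week, day, hour, minute, second, and millisecond. A minimal sketch of decoding one, assuming you have already exported the raw bytes (the function name is my own):

```python
import struct
from datetime import datetime

def decode_systemtime(blob: bytes) -> datetime:
    """Decode a 128-bit SYSTEMTIME blob: eight little-endian 16-bit
    words (year, month, day-of-week, day, hour, minute, second, ms)."""
    year, month, _dow, day, hour, minute, second, ms = struct.unpack("<8H", blob)
    return datetime(year, month, day, hour, minute, second, ms * 1000)

# Synthetic example: 2023-05-17 (a Wednesday, day-of-week 3) 14:30:05.250
blob = struct.pack("<8H", 2023, 5, 3, 17, 14, 30, 5, 250)
print(decode_systemtime(blob).isoformat())  # → 2023-05-17T14:30:05.250000
```

This removes the need to eyeball hex in tools that do not decode SYSTEMTIME natively; DCode performs the same conversion interactively.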
- Part 1- Important Registries related to System configuration overview
1. Identify the Windows version:
An investigator will often receive a disk image with no idea which specific Windows operating system version it holds. The Windows OS version is critical to ensuring you are accurately finding and utilizing the correct artifacts during your analysis. Directory paths, types of artifacts, and even default programs change based on the version and service pack of the Windows OS.
Software Hive: SOFTWARE\Microsoft\Windows NT\CurrentVersion\
Through cmd: reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion"

2. Identify the current control set:
A control set in the Windows Registry contains the system configuration settings needed to control system boot, including driver and service information. Typically, there are two ControlSets: ControlSet001 and ControlSet002. ControlSet001 represents the configuration used in the last successful boot, while ControlSet002 serves as a backup that can be used to recover from boot issues.
System hive: SYSTEM\Select
Command: reg query "HKLM\System\Select"
The Select key contains a REG_DWORD value named "Current," which indicates the number of the ControlSet that is currently active. By examining this value, you can identify which ControlSet is the "current" one. For example, if the Current value is set to 0x01 ("1"), then ControlSet001 is the registry path currently mapped to "CurrentControlSet" and should be examined in depth. Additionally, the "LastKnownGood" value in the Select key indicates which ControlSet is the snapshot of the last successful boot. If "LastKnownGood" is set to 0x01 ("1"), it means ControlSet001 represents the snapshot taken during the last successful boot.

3. Computer name:
The computer name is useful mainly for logging purposes and verification, but it should not go unnoticed.
SYSTEM hive: SYSTEM\CurrentControlSet\Control\ComputerName\ComputerName
Cmd: reg query "HKLM\SYSTEM\CurrentControlSet\Control\ComputerName\ComputerName"

4. Time zone information:
1. Registry Timestamps and Time Zones: While most registry timestamps and last write times are recorded in Coordinated Universal Time (UTC), the overall system time, including file system timestamps on FAT file systems, may be associated with the local time zone set in the Control Panel.
2. Changing the Time Zone: Users can easily change the time zone settings on their machines. This action updates the last write time of the registry key that stores the time zone information.
3. Recommendation to Use UTC: To maintain consistency and accuracy in forensic analysis, it's highly recommended to set the local analysis machine's time to UTC. This helps avoid unintentional biases introduced by forensic tools and minimizes the risk of misinterpreting time-related data.
4. Formulas for Time Conversion:
• UTC = Local Time + ActiveTimeBias
• Local Time = UTC - ActiveTimeBias
• Standard Time = Bias + StandardBias
• Daylight Time = Bias + DaylightBias
Time zone information is incredibly useful for correlation of activity:
• Internal log files and date/timestamps will be based on the system's time zone information
• You might have other network devices whose logs will need to be correlated to the time zone information collected here
System hive: SYSTEM\CurrentControlSet\Control\TimeZoneInformation
Cmd: reg query HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation

Will continue further in next blog.......

Akash Patel
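Since the bias values in TimeZoneInformation are stored in minutes, the conversion formulas above reduce to simple timedelta arithmetic. A small sketch, assuming an ActiveTimeBias already read from the key (300, i.e. UTC-5, is used as an example; the function names are my own):

```python
from datetime import datetime, timedelta

def local_to_utc(local_time: datetime, active_time_bias_minutes: int) -> datetime:
    # UTC = Local Time + ActiveTimeBias (bias is stored in minutes)
    return local_time + timedelta(minutes=active_time_bias_minutes)

def utc_to_local(utc_time: datetime, active_time_bias_minutes: int) -> datetime:
    # Local Time = UTC - ActiveTimeBias
    return utc_time - timedelta(minutes=active_time_bias_minutes)

# ActiveTimeBias = 300 → UTC-5 (e.g., U.S. Eastern Standard Time)
local = datetime(2024, 1, 15, 9, 0, 0)
print(local_to_utc(local, 300))  # → 2024-01-15 14:00:00
```

One caveat: the registry stores the bias as an unsigned DWORD, so for zones east of UTC the raw value must be reinterpreted as a signed 32-bit integer (e.g., 0xFFFFFFC4 is -60, not 4294967236).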
- Understanding Important Registries
1. MRU lists (most recently used lists)
Found in the NTUSER.DAT of a particular user (with Registry Explorer, in my case C:\Users\user\ntuser.dat). Look for LastVisitedMRU as well as RecentDocs.
Each MRU list maintains the order of the most recent additions to a registry key. This order can provide valuable insights into user activity, helping investigators understand the sequence in which data populated a specific key. The last write time of the key corresponds to the time the most recent entry in the MRU list was added. For example, the last write time of the key tracking a Microsoft Office .docx file might correspond to the time the file was last opened. The remaining values in the MRU list indicate the order of earlier activity, typically sorted from most recent to oldest.

2. Run key:
Online, via regedit: HKCU\Software\Microsoft\Windows\CurrentVersion\Run
Offline, via Registry Explorer: NTUSER.DAT\Software\Microsoft\Windows\CurrentVersion\Run

3. Deleted registry key values:
A privacy cleaner's leftovers can easily be viewed using Registry Explorer. Deleted keys and each of their subkeys remain visible, and in every case the original data could be recovered.

4. Collecting user information: SAM profiling of users/groups
(i) Username
(ii) RID
(iii) User login information
 - Last login
 - Last failed login
 - Login count
 - Password policy
 - Account creation time
(iv) Group information
 - Administrators
 - Users
 - Remote Desktop Users
When examining the SAM hive in Registry Explorer, we can easily locate the Relative Identifier (RID) associated with a user account, as well as other pertinent details. For example, we can identify the RID for an account like Guest as 501, which helps us track its activities on the system. Additionally, Registry Explorer provides insight into important timestamps, including the last login time and the time of the last password change.

Akash Patel
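In the raw SAM hive, each account's subkey under SAM\Domains\Account\Users is named with the RID in zero-padded hexadecimal, so mapping key names to RIDs is just a base conversion. A quick sketch (well-known RIDs: 500 = Administrator, 501 = Guest; regular local accounts are numbered from 1000 upward):

```python
def rid_from_key_name(key_name: str) -> int:
    """Convert a SAM Users subkey name (hex, e.g. '000001F5') to a RID."""
    return int(key_name, 16)

WELL_KNOWN_RIDS = {500: "Administrator", 501: "Guest"}

for name in ("000001F4", "000001F5", "000003E9"):
    rid = rid_from_key_name(name)
    label = WELL_KNOWN_RIDS.get(rid, "local user account")
    print(f"{name} -> RID {rid} ({label})")
# 000001F4 -> RID 500 (Administrator)
# 000001F5 -> RID 501 (Guest)
# 000003E9 -> RID 1001 (local user account)
```

Registry Explorer performs this mapping for you, but knowing the conversion helps when reviewing raw key names or tool output.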
- Understanding Registry Hive transaction logs
The Windows operating system caches writes to the registry in two locations. The first is in memory. The second is on disk, in the transaction log files, which are named after the hive they belong to (e.g., ntuser.dat.LOG1 and ntuser.dat.LOG2) and located in the same folder as the registry hive file.

Starting with Windows 8, Microsoft changed the way Windows permanently writes to the hive files. The transaction log files are used to cache writes to the registry before they are permanently written to the hive. A significant change in Windows 8.1 and above means the most recent activity, up to roughly the past hour, may exist only in the transaction log files and be missing from the registry hive file unless the transaction logs are parsed when you open the hive.

On Windows 8 and above, new data is written to the transaction log files, which are continually appended. Windows does not write to the core hive file immediately, but does so when the system is idle, at shutdown, or when an hour has passed since the last write to the primary hive file. This results in far fewer disk writes over time and has reportedly improved operating system performance by reducing continual writes to the registry hives.

It means that the most recent changes to the registry are likely located in the transaction log files and not found in the hive files you might be examining. Most registry forensic tools do not perform this check or alert you to this issue, and many do not take the data stored in the transaction log files into account at all. This is especially important if you are trying to track recent user or process interactions inside the Windows operating system.

Akash Patel
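A quick way to tell whether a hive has unreconciled writes is to compare the two sequence numbers in its REGF base block: the primary sequence number at offset 4 and the secondary at offset 8. If they differ, the hive is "dirty" and the .LOG1/.LOG2 files should be replayed before analysis. A minimal sketch (offsets per the published REGF format; the function name is my own):

```python
import struct

def hive_is_dirty(header: bytes) -> bool:
    """Check a registry hive's REGF base block: if the primary and
    secondary sequence numbers differ, pending writes live in the
    .LOG1/.LOG2 transaction logs and should be replayed first."""
    assert header[0:4] == b"regf", "not a registry hive"
    primary, secondary = struct.unpack_from("<II", header, 4)
    return primary != secondary

# Synthetic base block: primary=12, secondary=11 → dirty hive
fake_header = b"regf" + struct.pack("<II", 12, 11) + b"\x00" * 500
print(hive_is_dirty(fake_header))  # → True
```

Tools such as Registry Explorer replay the logs automatically when you open a dirty hive; this check simply makes the condition visible in your own tooling.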
- Understanding Registry:
Windows Registry Overview: The Windows registry is a crucial database storing system, software, hardware, and user configuration data.

Root Keys: It comprises four main root keys:
• HKEY_CLASSES_ROOT
• HKEY_CURRENT_USER (HKCU)
• HKEY_LOCAL_MACHINE (HKLM)
• HKEY_USERS

Offline Access: Registry files are typically located in %WINDIR%\system32\config, with hives like DEFAULT, SAM, SECURITY, SOFTWARE, and SYSTEM.

Hives and Contents: Each hive contains specific information:
SYSTEM Hive (HKLM): Hardware and service configurations. It also lists the majority of the raw device names for volumes and drives on the system, including USB keys.
SOFTWARE Hive: Application settings and configurations.
NTUSER.DAT Hive: User-specific configuration and environment settings, including a slew of identifiable data pertaining to user activity.
SAM Hive: Local user accounts and groups.
SECURITY Hive: Security information like password policies and group membership.
AMCACHE.HVE: Introduced in Windows 8, it tracks application compatibility and execution evidence, aiding in running older executables.

Backup hives: The RegIdleBackup task runs every 10 days on Vista, Win7, Win8, Win10, Server 2008, Server 2012, and Server 2016. It copies the SAM, DEFAULT, SYSTEM, SOFTWARE, and SECURITY hives to the %WinDir%\System32\Config\RegBack directory. This backup might contain residue that was cleared from the current hives. The task does not back up users' local NTUSER.DAT hives.

Note: Windows automatically creates backup copies of its registry hives periodically and stores them in the %SystemRoot%\System32\config\RegBack directory. However, this folder might be empty or not contain the most recent backups depending on system settings.

User registry hives: The Windows registry holds a wealth of user-specific information, offering insights into various aspects of user activity on the system.
It serves as a repository for recent actions performed by users, including accessed files, searched items, typed URLs, executed commands, and saved documents.

NTUSER.DAT hive: One of the primary components of the registry, it contains keys specific to each user profile. Mounted under HKEY_CURRENT_USER for the logged-on user, the NTUSER.DAT hive offers a comprehensive view of user-centric actions within the system.

UsrClass.dat hive: This hive, typically located at C:\Users\<username>\AppData\Local\Microsoft\Windows\UsrClass.dat, holds crucial information related to program execution and folder manipulation. It serves as the virtualized registry root for User Account Control (UAC), facilitating seamless user interactions with the system. Despite its virtualized nature, UsrClass.dat offers valuable clues about user activities, helping forensic analysts reconstruct user behavior patterns.

Tip: One notable aspect of UsrClass.dat is its association with ShellBags, registry keys that track the opening and closing of files and folders. By examining ShellBags entries, investigators can uncover evidence of file and folder interactions, shedding light on user activities and application usage patterns.

With Registry Explorer (by Eric Zimmerman), analysis becomes much easier.

Registry key last write time using Registry Explorer:
1. The registry tracks the last write time for every key on the system.
2. This timestamp, stored within the registry itself, indicates the last update of any key value and is typically displayed in Coordinated Universal Time (UTC).
3. The last write time is crucial for forensic investigations as it provides the timing of specific activities or events within the registry.
4. By correlating the last write time with other system data, such as user login times or file copy events, investigators can build a comprehensive timeline of user actions.
5. It's important to note that the last write time is updated whenever a value is added or updated within a key, and different keys may be updated at different points depending on the program's behavior.
6. Ensuring a clear understanding of whether timestamps are recorded in UTC or the local time zone is essential for accurate interpretation of forensic data. Failure to account for time zone discrepancies could lead to misinterpretation of critical evidence, potentially compromising the integrity of the investigation.

Will Continue in next blog.............................

Akash Patel
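Under the hood, key last write times are 64-bit FILETIME values: the number of 100-nanosecond intervals since 1601-01-01 UTC. Converting one to a readable UTC timestamp is straightforward; a small sketch (the function name is my own):

```python
from datetime import datetime, timedelta, timezone

EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_utc(filetime: int) -> datetime:
    """Convert a Windows FILETIME (100-ns ticks since 1601-01-01 UTC)
    to a timezone-aware UTC datetime."""
    return EPOCH_1601 + timedelta(microseconds=filetime // 10)

# 116444736000000000 ticks is exactly the Unix epoch, 1970-01-01 00:00:00 UTC
print(filetime_to_utc(116444736000000000).isoformat())  # → 1970-01-01T00:00:00+00:00
```

Keeping the result timezone-aware (UTC) avoids the local-time ambiguity warned about in point 6 above.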
- Extracting/Examine Volume Shadow Copies for Forensic Analysis
Introduction: In the realm of digital forensics, gaining insight into the changes made to files and volumes over time can be critical for uncovering evidence and understanding system activity. One powerful capability in this endeavor is the Volume Shadow Copy (VSC) feature found in modern Windows operating systems such as Windows Vista, Windows 7, Windows 8, and Windows Server 2008.

Understanding Volume Shadow Copies: Volume Shadow Copies are a feature of the Windows operating system that allows users to create snapshots, or copies, of files and folders at different points in time. These snapshots are created by the Volume Shadow Copy Service (VSS) and can be used to restore files to previous versions in the event of data loss or corruption. While VSCs were initially introduced with Windows XP and System Restore points, they evolved into a more robust feature with Vista and Server 2008, providing persistent snapshots of the entire volume.

Recovering Cleared Data: One of the key advantages of Volume Shadow Copies is their ability to recover data that has been deleted or modified, even if it has been wiped by attackers. By examining historical artifacts from earlier snapshots, forensic analysts can uncover evidence of malicious activities that may have been hidden or erased. This includes recovering deleted executables, DLLs, drivers, registry files, and even encrypted or wiped files.

Tools for Analyzing Volume Shadow Copies:
• VSC-Toolset
• Magnet Forensics (if still available)

Creating Volume Shadow Copies: Volume Shadow Copies can be created by various triggers, including scheduled system snapshots, software installation, and manual snapshots. System snapshots are scheduled to occur every 24 hours on Windows Vista and every 7 days on Windows 7, although the timing may vary based on system activity.

To obtain a list of the shadow copies:
Step 1: Open Command Prompt. Begin by opening Command Prompt with administrative privileges.
Step 2: Execute the vssadmin command. In the Command Prompt window, type the following command:
vssadmin list shadows /for=C:
Replace "C:" with the drive letter for which you want to list the available shadow copies.

Step 3: Review the output. Here are some key things to notice:
1. Shadow Copy Volume Name: The name of the shadow copy volume is crucial for examining the contents of that specific volume.
2. Originating Machine: If you have plugged in an NTFS drive from another shadow-copy-enabled machine, the originating machine's name will be listed.
3. Creation Time: Pay attention to the creation time. This timestamp indicates when the snapshot was created, helping you identify which shadow copy volume might contain the data you're interested in.

Leveraging Symbolic Links to Explore Shadow Copy Volumes: Administrators can utilize symbolic links to navigate and scan directories containing shadow copy volumes. This method provides a convenient way to access previous versions of files and directories directly from a live machine.

Step 1: Open an Administrator Command Prompt.
Step 2: Select a shadow copy volume. Refer to the output of the vssadmin command to identify the shadow copy volume you want to examine, chosen by the date and time of the snapshot you're interested in. In my example, vssadmin list shadows /for=C: reported 3 shadow copies, and I am going to use the third one.
Step 3: Create a symbolic link. In the Command Prompt window, execute the following command:
C:\> mklink /d C:\shadow_copy3 \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy3\
Replace "C:\shadow_copy3" with the directory path where you want to create the symbolic link. Ensure you include the trailing backslash on the shadow copy device path.
Step 4: Access the shadow copy volume. Once the symbolic link is created, you can navigate to the specified directory (e.g., C:\shadow_copy3) using File Explorer or the Command Prompt.
This directory now points to the selected shadow copy volume, allowing you to browse its contents as if it were a regular directory on your system.
Step 5: Retrieve files or directories. Utilize the symbolic link to access previous versions of files and directories stored in the shadow copy volume. This capability is particularly valuable for recovering files that may have been deleted, overwritten, or corrupted on the live system.

Examining/Extracting Volume Shadow data using ShadowExplorer:
Step 1: Mount the disk image in Arsenal Image Mounter in "Write Temporary" mode. Arsenal Image Mounter is necessary because FTK Imager's mount capability does not expose the Volume Shadow Copies (VSCs) to the underlying operating system. Open Arsenal Image Mounter --> Mount Image --> select the image --> choose "Write Temporary" --> OK.
Step 2: Launch ShadowExplorer as Administrator. It's important to run ShadowExplorer with administrator privileges to ensure that it can parse all the files and folders available to the analyst.
Step 3: Browse snapshots. ShadowExplorer provides a familiar Windows Explorer-like interface, making it easy to navigate through the available snapshots.
Step 4: Extract files. To extract files of interest, simply right-click on the file or folder you want and select "Export." This will allow you to save the selected files or folders to a location of your choice on your system.

Challenges and Considerations: While Volume Shadow Copies are a powerful tool for forensic analysis, there are some limitations to keep in mind. For example, the introduction of ScopeSnapshots in Windows 8 can reduce the forensic usefulness of VSCs by limiting the scope of volume snapshots to files relevant for system restore only. However, this feature can be disabled through registry settings on client systems, allowing forensic analysts to access more complete volume backups.
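On a client system you control, the ScopeSnapshots behavior can be turned off so that subsequent snapshots cover the full volume. A sketch of the commonly cited registry change — treat the exact key and value as an assumption to verify against current Microsoft documentation before relying on it, and note it affects only snapshots created after the change:

```shell
REM Disable ScopeSnapshots so that future VSCs capture the full volume.
REM Commonly cited location; verify on your Windows build before use.
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SystemRestore" /v ScopeSnapshots /t REG_DWORD /d 0 /f
```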
Conclusion: Volume Shadow Copies provide forensic analysts with a valuable resource for recovering deleted or modified data and uncovering evidence of malicious activities on compromised systems. By understanding how VSCs work and overcoming challenges such as ScopeSnapshots, forensic analysts can enhance their capabilities and improve their ability to conduct thorough investigations.
- Overview the Core Components of NTFS File System
The $MFT, $J, $LogFile, $T, and $I30 are all important components of the NTFS (New Technology File System) file system used in Windows operating systems.

$MFT (Master File Table):
Purpose: The $MFT, or Master File Table, serves as the central repository of metadata for all files and directories on an NTFS volume. It contains information such as file names, attributes, security descriptors, and data extents.
Structure: The $MFT is organized as a table of fixed-size entries, with each entry representing a file, directory, or metadata object. Each entry has a unique identifier known as the MFT record number (analogous to an inode number).
Location: The $MFT is located at a position recorded in the boot sector, typically near the beginning of the volume. It is crucial for the proper functioning of the file system and is allocated a portion of disk space during volume formatting.

$J (USN Journal):
Purpose: The $J complements the $LogFile in tracking file system activity. It records metadata changes made to files and directories, noting which file was affected and the reason for each change.
Functionality: Like the $LogFile, the $J logs changes that can be used to reconstruct recent activity. Because the $J records change events per file (create, delete, rename, data overwrite, and so on), it is especially valuable for building activity timelines.
Location: The $J is stored as an alternate data stream of the $UsnJrnl file in the volume's \$Extend directory, operating in conjunction with the $LogFile to provide comprehensive change tracking.

$LogFile:
Purpose: The $LogFile maintains a record of transactions performed on the file system, ensuring the integrity and consistency of data. It logs changes before they are committed, allowing for recovery in case of system crashes or failures.
Functionality: Whenever a modification is made to the file system, such as creating, deleting, or modifying a file, the operation is first logged in the $LogFile. This logged information can be used to reconstruct the file system state and recover data.
Redundancy: To prevent data loss, the $LogFile maintains redundant copies of critical information, enabling recovery even if the primary log becomes corrupted.

$T (Transaction):
Purpose: The $T, or transaction metadata file, is part of the transactional NTFS (TxF) feature introduced in Windows Vista and later versions. It stores metadata related to transactions, which are units of work performed on the file system.
Functionality: The $T file maintains information about transactions, such as transaction IDs, transaction state, and changes made during each transaction. This facilitates atomicity, consistency, isolation, and durability (the ACID properties) in file system operations.
Location: The transactional NTFS metadata is stored under the volume's \$Extend directory.

$I30 (Index Allocation):
Purpose: The $I30 is an index attribute used to store directory entries within a directory. It contains information about files and subdirectories, facilitating efficient directory traversal and file access.
Functionality: Each directory on an NTFS volume typically has an associated $I30 index, which stores references to the files and subdirectories contained within that directory. This index allows for quick lookup and retrieval of directory entries.
Location: The $I30 index is part of the metadata associated with directories and is stored within (or referenced from) the MFT entry corresponding to the directory.
$T (Transaction): Stores metadata related to transactions for ensuring ACID properties in file system operations. $I30: Index allocation attribute used to store directory entries within directories, facilitating efficient file access and directory traversal. Akash Patel
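To make the $MFT discussion concrete, here is a minimal sketch of parsing the fixed header of a raw 1,024-byte FILE record, assuming the standard NTFS record layout. It is illustrative only: fixup arrays, attribute parsing, and extension records are not handled.

```python
import struct

def parse_mft_record_header(record: bytes) -> dict:
    """Parse the fixed header of a raw NTFS MFT (FILE) record.

    Offsets follow the standard NTFS FILE record layout. This is a
    teaching sketch, not a full MFT parser.
    """
    if record[0:4] != b"FILE":
        raise ValueError("not a valid FILE record signature")
    # 0x08: $LogFile sequence number (ties the record to $LogFile transactions)
    lsn, = struct.unpack_from("<Q", record, 0x08)
    # 0x10: sequence number, hard-link count, first-attribute offset, flags
    seq, links, first_attr, flags = struct.unpack_from("<HHHH", record, 0x10)
    return {
        "lsn": lsn,
        "sequence": seq,                  # incremented each time the entry is reused
        "hard_links": links,
        "first_attr_offset": first_attr,
        "allocated": bool(flags & 0x01),  # 0x01 = entry in use
        "is_directory": bool(flags & 0x02),
    }
```

Feeding this function one 1,024-byte slice of an exported $MFT at a time is enough to tell allocated from unallocated entries and files from directories.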
- NTFS: Metadata with The Sleuth Kit (istat)
In the realm of digital forensics, dissecting the intricacies of file systems is essential for uncovering valuable evidence and insights. One powerful tool for this purpose is The Sleuth Kit, which offers a range of utilities designed to analyze file system metadata.

Understanding istat:
istat is a versatile tool within The Sleuth Kit that specializes in parsing metadata from various file systems, including NTFS, FAT, and exFAT. It can be used with forensic image files such as raw, E01, and even virtual hard drive formats like VMDK and VHD. Additionally, istat is capable of analyzing live file systems, giving forensic analysts flexibility in their investigations.
Download: https://www.sleuthkit.org/sleuthkit/download.php

Usage Example:
To analyze the root directory of the C: drive on a live Windows system, execute the following in an Administrator command prompt:
Command :- istat \\.\C: 5
Here, "5" is the MFT record number reserved for the root directory of the volume.

Command-Line Options:
istat offers several optional switches to customize its behavior:
-z : specify the time zone of the image being analyzed. By default, the local time zone of the analysis system is used, but this can be overridden with the -z flag.
-s : correct clock skew (in seconds) on the imaged system. This option is particularly helpful when dealing with systems that had inaccurate time settings.

MFT Entry Header:
Allocation Status: indicates whether the MFT entry is currently allocated or unallocated. In this instance, the directory is allocated, signifying that it is actively in use.
MFT Entry Number: each MFT entry is assigned a unique number for identification purposes.
$LogFile Sequence Number: the sequence number associated with the transactional logging information stored in the $LogFile.
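To keep istat invocations consistent across cases, the options discussed above can be assembled programmatically. The helper below is a hypothetical convenience wrapper written for illustration; only the documented istat flags (-z for time zone, -s for clock skew) are used.

```python
import subprocess  # used in the commented example below
from typing import List, Optional

def build_istat_cmd(target: str, inode: int,
                    timezone: Optional[str] = None,
                    skew_seconds: Optional[int] = None) -> List[str]:
    r"""Assemble an istat command line (hypothetical helper).

    target may be an image file (raw/E01/VMDK/VHD) or a live volume
    such as \\.\C: on Windows; inode is the MFT record number.
    """
    cmd = ["istat"]
    if timezone:
        cmd += ["-z", timezone]            # time zone of the imaged system
    if skew_seconds is not None:
        cmd += ["-s", str(skew_seconds)]   # clock-skew correction in seconds
    cmd += [target, str(inode)]
    return cmd

# Example: query the root directory (MFT record 5) of a live volume.
# subprocess.run(build_istat_cmd(r"\\.\C:", 5), capture_output=True, text=True)
```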
$STANDARD_INFORMATION Attribute:
Purpose: stores essential metadata about a file, providing crucial details for file management and access control.
Contents:
Timestamps: four timestamps are included:
Created: when the file was originally created.
Modified: the last time the file's contents were modified.
MFT Entry Modified: the last modification time of the MFT entry itself.
Last Accessed: the last time the file was accessed.
File Attributes: flags indicating properties of the file, such as read-only, hidden, or system file.
Security Information: owner and security identifiers used for access control.
USN: the update sequence number used to track changes to the file for journaling and auditing purposes.

$FILE_NAME Attribute:
Purpose: contains information about the file's name, its parent directory, and related details.
File Name: the primary name of the file.
File Namespace: the namespace of the name (e.g., Win32, DOS, POSIX).
Parent Directory: a file reference pointing to the MFT entry of the directory that contains the file.
File Attributes: similar to those in $STANDARD_INFORMATION, indicating properties like read-only, hidden, or system file.
Timestamps: a second, independent set of Created, Modified, MFT Entry Modified, and Last Accessed timestamps.
File Sizes: the allocated and actual (real) sizes of the file.

Relationship: The $STANDARD_INFORMATION timestamps are the ones Windows Explorer displays and that can be changed through the Windows API; the $FILE_NAME timestamps are maintained by the kernel and are much harder to tamper with. Comparing the two sets is a classic technique for spotting timestamp manipulation (timestomping).
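All of these NTFS timestamps are stored on disk as 64-bit FILETIME values: counts of 100-nanosecond intervals since 1601-01-01 UTC. Decoding one is a one-liner, sketched here in Python:

```python
from datetime import datetime, timedelta, timezone

# NTFS/Windows FILETIME epoch
EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(filetime: int) -> datetime:
    """Convert a FILETIME (100-ns ticks since 1601-01-01 UTC) to a datetime."""
    # Integer-divide by 10 to get microseconds, the finest unit timedelta supports.
    return EPOCH_1601 + timedelta(microseconds=filetime // 10)

# 116444736000000000 ticks corresponds to the Unix epoch, 1970-01-01 00:00:00 UTC.
```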
Conclusion: Understanding the motives behind timestamp modification, both legitimate and malicious, is crucial for effective forensic analysis and system security. By employing diverse detection methods and leveraging forensic tools, analysts can identify potential timestamp anomalies and uncover malicious activity, enhancing system defense and threat mitigation efforts.
- A Deep Dive into Plaso/Log2Timeline Forensic Tools
Plaso is the Python-based backend engine powering log2timeline, while log2timeline is the tool we use to extract timestamps and forensic artifacts. Together, they create what we call a super timeline: a comprehensive chronological record of system activity. Super timelines, unlike file system timelines, include a broad range of data beyond just file metadata. They can incorporate Windows event logs, prefetch data, shellbags, link files, and numerous other forensic artifacts. This comprehensive approach provides a more holistic view of system activity, making it invaluable for forensic investigations.

Example: Imagine you've been given a disk image, perhaps a full disk image or an image created with KAPE. Your task: find evil, armed with little more than a date and time when the supposed activity occurred. So you begin the investigation with the usual suspects: Windows event logs, prefetch data, various registry-based artifacts, and more. But after a while, you realize that combing through all these artifacts manually will take forever. Wouldn't it be great if there was a tool that could parse all these artifacts, consolidate them into a single data source, and arrange them in chronological order? That's precisely what we can achieve with Plaso and log2timeline.

I am going to use Ubuntu 22.04 LTS (VirtualBox) and Plaso version 20220724.
Installation: https://plaso.readthedocs.io/en/latest/sources/user/Ubuntu-Packaged-Release.html

Let's start:
1. We need an image or collected artifacts: The data we're dealing with could take various forms. It might be a raw disk image, an E01 image, a specific partition or offset within an image, or even a physical device like /dev/sdd. Moreover, it could be a live mount point; for instance, we could mount a VHDX image created with KAPE and point the tool at that mount point. With such versatility, we're equipped with a plethora of choices, each tailored to the specific nature of the data at hand.
In the current case I captured the image using the KAPE tool, mounted it as a drive on my Windows host, and then shared the mounted drive with the Ubuntu VM (VirtualBox). If you are not able to access the mounted drive in Ubuntu, run the following in a terminal:
Command :- sudo adduser $USER vboxsf
then restart the VM.

2. Command and output (syntax)
Syntax :- log2timeline.py --storage-file OUTPUT INPUT
In our case the command is:
Command :- log2timeline.py --storage-file akash.dump /media/sf_E_DRIVE
akash.dump -- the output file that will be created (a Plaso SQLite-based storage file); you can give a full path such as /path-to/akash.dump
/media/sf_E_DRIVE -- the mounted drive path

(1) Raw image :- log2timeline.py --storage-file /path-to/plaso.dump /path-to/image.dd
(2) EWF image :- log2timeline.py --storage-file /path-to/plaso.dump /path-to/image.E01
(3) Physical device :- log2timeline.py --storage-file /path-to/plaso.dump /dev/sdd
(4) Volume via sector offset :- log2timeline.py -o 63 --storage-file /path-to/plaso.dump /path-to/image.dd

3. If you have an image of an entire drive, log2timeline may ask which partition you want to parse, and if it finds Volume Shadow Copies it will also ask which VSS stores to process. You can answer with a single identifier, a range, or all. Example :- 1 or 1..4 or all
Or as a single command:
Command :- log2timeline.py --partitions 2 --vss-stores all --storage-file /path-to/plaso.dump /path-to/image.dd
In the current case I don't have VSS stores or multiple partitions because I collected only the needed artifacts (not the entire drive), so I was not prompted with the above options. The screenshot below shows what it looks like once you hit Enter.
You can also use parsers and filters against the image with Plaso/log2timeline and store the results in akash.dump (or any output file).

1. Parsers: tell log2timeline to concentrate only on certain specific forensic artifacts.
To check all available parsers:
Command :- log2timeline.py --parsers list | more
To use a particular parser, in the current case:
Command :- log2timeline.py --parsers windows_services --storage-file akash2.dump /media/sf_E_DRIVE
You can write your own parsers: https://plaso.readthedocs.io/en/latest/sources/developer/How-to-write-a-parser.html

2. Filters: tell log2timeline to go after specific files that contain forensically valuable data, such as /Users and /Windows/System32. There is a text file containing all the important filters you can apply against an image:
https://github.com/mark-hallman/plaso_filters/blob/master/filter_windows.txt
Open the link, click Raw, and copy that URL; then in Ubuntu run:
Command :- wget https://raw.githubusercontent.com/mark-hallman/plaso_filters/master/filter_windows.txt
After saving the text file, run:
Command :- log2timeline.py -f filter_windows.txt --storage-file akash2.dump /media/sf_E_DRIVE
This command walks the specific files and paths mentioned in the text file and captures those artifacts into akash2.dump.

You can combine a parser and a filter in the same command as well:
Command :- log2timeline.py --parsers webhist -f filter_windows.txt --storage-file akash2.dump /media/sf_E_DRIVE
Here I am telling log2timeline to target the paths and locations within the filter file and then, against those particular locations, run the webhist parser, which parses our browser forensic artifacts.

After all these commands you will get output in output.dump, or in my case the akash.dump file.
The output is a Plaso storage file (SQLite-based) and is very difficult to read directly, so now you have to convert the dump file into CSV or any other format you prefer (I prefer CSV because I will use Timeline Explorer to analyze it further).

1. Using pinfo.py
As the name suggests, it furnishes details about a specific Plaso storage file (our output file). In our case, for akash.dump:
Command :- pinfo.py akash.dump

2. Using psort.py
psort.py converts the storage file into the output format you want. To list the available formats:
Command :- psort.py --output-time-zone utc -o list
To analyze the output with Timeline Explorer from Eric Zimmerman, we will use the l2tcsv format. The complete command:
Command :- psort.py --output-time-zone utc -o l2tcsv -w timeline.csv akash.dump
(-w = write the output to the given file)

Within an investigation, it's common to have a sense of the time range in which the suspected incident occurred. For instance, say we want to focus on a specific day and even a particular time within that day: February 29th at 15:00. We can achieve this using a technique called slicing. By default, it takes a five-minute window before and after the given time, although the window size can be adjusted.
Command :- psort.py --output-time-zone utc -o l2tcsv -w timeline.csv akash.dump --slice '2024-02-29 15:00'

Alternatively, use a start and an end date to delineate the investigation timeframe by specifying a range bounded by two dates:
Command :- psort.py --output-time-zone utc -o l2tcsv -w timeline.csv akash.dump "date > '2024-02-01 00:00:00' AND date < '2024-04-01 00:00:00'"

Once the super timeline is created in CSV format, we can use Timeline Explorer to analyze it. The best part of Timeline Explorer is that data loaded into it is automatically color-coded based on the type of artifact. For example, USB device utilization is highlighted in blue, file openings in green, and program executions in red.
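As a post-processing alternative to psort's slicing, the finished l2tcsv file can also be sliced with a few lines of Python. This is an illustrative sketch: it assumes the l2tcsv 'date' and 'time' columns use MM/DD/YYYY and HH:MM:SS; adjust fmt if your psort output differs.

```python
import csv
from datetime import datetime

def slice_timeline(csv_path, start, end,
                   date_field="date", time_field="time",
                   fmt="%m/%d/%Y %H:%M:%S"):
    """Yield l2tcsv rows whose timestamp falls within [start, end].

    start/end are datetime objects; rows with missing or malformed
    date/time values are skipped rather than raising.
    """
    with open(csv_path, newline="", encoding="utf-8", errors="replace") as fh:
        for row in csv.DictReader(fh):
            try:
                ts = datetime.strptime(f"{row[date_field]} {row[time_field]}", fmt)
            except (KeyError, ValueError):
                continue  # skip rows we cannot parse
            if start <= ts <= end:
                yield row
```

This keeps the full super timeline intact while letting you carve out ad-hoc windows without re-running psort.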
This color-coding helps users quickly identify and interpret different types of activities within the timeline. Recommended columns to examine while analyzing: Date, Time, MACB, Source type, desc, filename, inode, notes, extra.
Conclusion: Plaso/log2timeline stands as a cornerstone in the field of digital forensics, offering investigators a powerful tool for extracting, organizing, and analyzing digital evidence. Its origins, rooted in the need for efficiency and accuracy, coupled with its continuous evolution and updates, make it an essential asset for forensic practitioners worldwide. As digital investigations continue to evolve, Plaso/log2timeline remains at the forefront, empowering investigators to unravel complex digital mysteries with ease and precision.
- Understanding NTFS Timestamps (Timeline Analysis): With Example
Let's understand with an example. We have created a table to understand NTFS operations:

1. Create: When a file is created, according to the table, all timestamps (Modified, Accessed, Created) are updated.
2. Modify: When a file is modified, only the Modified timestamp is expected to change, while the Accessed and Created timestamps remain unchanged. However, if last-access updates are enabled (NtfsDisableLastAccessUpdate set to 0), the Accessed timestamp is updated along with the Modified timestamp. In this case it is enabled.
3. Copy: When a file is copied using Windows Explorer, the Modified timestamp of the new file is inherited from the original file, while the Created and Accessed timestamps are set to the current time. Copying from the command line (cmd) behaves the same way: both methods update the Created and Accessed timestamps of the copied file. However, when we analyze the $MFT we may actually see a difference, because the MFT shows us both sets of timestamps: the $STANDARD_INFORMATION ($SI) timestamps, which are accessible through the Windows API, and the $FILE_NAME ($FN) timestamps, which are maintained by the Windows kernel.
4. File Access: The behavior of the Accessed timestamp depends on the NtfsDisableLastAccessUpdate registry setting. If last-access updates are enabled (value 0), the Accessed timestamp is updated upon file access.
-------------------------------------------------------------------------------------------------------------
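The rules above can be condensed into a small lookup table. The sketch below models only the simplified $STANDARD_INFORMATION behavior described in this section; the $SI/$FN divergence is not modeled.

```python
# Which timestamps change for each operation (standard-information view).
# M = Modified, A = Accessed, C = Created.
RULES = {
    "create": {"M": True,  "A": True,  "C": True},
    "modify": {"M": True,  "A": False, "C": False},
    "copy":   {"M": False, "A": True,  "C": True},   # M inherited from source
    "access": {"M": False, "A": False, "C": False},
}

def updated_timestamps(op: str, last_access_updates_enabled: bool = False) -> set:
    """Return the set of timestamps expected to change for an operation.

    last_access_updates_enabled models NtfsDisableLastAccessUpdate = 0,
    which additionally refreshes the Accessed timestamp on modify/access.
    """
    changed = {ts for ts, hit in RULES[op].items() if hit}
    if last_access_updates_enabled and op in ("modify", "access"):
        changed.add("A")
    return changed
```

Comparing a suspect file's observed timestamp changes against this expected set is a quick sanity check during timeline analysis.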
- Unveiling Suspicious Files with DensityScout
Introduction
DensityScout, a robust tool crafted by Christian Wojner at CERT Austria, stands at the forefront of digital forensics and cybersecurity. Specializing in the detection of common obfuscation techniques such as runtime packing and encryption, DensityScout has become an invaluable asset for security professionals seeking to identify and neutralize potential threats.

Decoding Density: A Measure of Randomness
At the heart of DensityScout lies the concept of "density," which serves as a measure of randomness, or entropy, within a file. In straightforward terms, files that are encrypted, compressed, or packed possess a higher degree of inherent randomness, setting them apart from their normal counterparts. Legitimate Windows executables are rarely packed or encrypted, so they rarely display random character sequences; their entropy is low and their density values are correspondingly high.

Understanding the DensityScout Command
The command-line operation of DensityScout provides users with a powerful and customizable approach to file analysis. A typical command exemplifies the tool's capabilities:
Command :- densityscout.exe -pe -r -p 0.1 -o results.txt c:\Windows\System32
-pe : select files using the well-known signature of portable executables ("MZ") rather than by extension. This is instrumental in identifying executable files that may have been strategically renamed to evade detection.
-r : perform a recursive scan of all files and sub-folders from the specified starting point, ensuring a comprehensive examination.
-p 0.1 : set a density threshold for real-time display during the scan. Files with a density below the provided threshold (0.1 in this example) are promptly revealed on screen, catering to users who prefer immediate insights rather than waiting for the entire scan to conclude.
-o results.txt : specify the output file where DensityScout records the density value for each evaluated file. This file becomes a valuable resource for analyzing and further investigating findings.

Interpreting Density Values
Understanding the significance of density values is crucial in leveraging DensityScout effectively. A density value less than 0.1 often indicates a packed file, signifying a higher degree of randomness. Conversely, normal files, especially typical Windows executables, tend to have a density greater than 0.9.

Real-world Application and Use Cases
DensityScout has proven its mettle in real-world scenarios, providing security professionals with actionable insights into potentially malicious files. The tool's ability to promptly reveal files with suspicious densities ensures a proactive approach to threat detection.

Next Steps
As you delve into the world of digital forensics and cybersecurity, consider incorporating DensityScout into your toolkit. Explore the tool's capabilities, experiment with different parameters, and enhance your ability to identify and neutralize suspicious files.

Final Thoughts
In the pursuit of securing digital environments, tools that decode the intricacies of file structures become indispensable. DensityScout's focus on "density" adds a pragmatic layer to file analysis, contributing significantly to the collective efforts of cybersecurity professionals worldwide.
Tool Link :- https://cert.at/en/downloads/software/software-densityscout
Akash Patel
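DensityScout computes its own proprietary "density" metric, but the randomness idea behind it can be approximated with Shannon entropy. A minimal sketch follows; note the scale is inverted relative to DensityScout, where high entropy roughly corresponds to a low, suspicious density value.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0..8).

    Packed or encrypted data tends toward 8; plain, unpacked
    executables and text score much lower. This approximates the
    randomness concept; it is NOT DensityScout's density formula.
    """
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Highly repetitive data scores near 0 bits/byte; data where every byte
# value is equally frequent scores exactly 8 bits/byte.
```

Running such a function over the contents of files under C:\Windows\System32 and sorting by score gives a rough, tool-independent cross-check of DensityScout's findings.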