Monthly Archives: April 2020

Excelerating Analysis, Part 2 — X[LOOKUP] Gon’ Pivot To Ya

In December 2019, we published a blog post on using Microsoft Excel to augment analysis of various data sets encountered during incident response investigations. As we described, investigations often include custom or proprietary log formats and miscellaneous, non-traditional forensic artifacts. There are, of course, a variety of ways to tackle this task, but Excel stands out as a reliable way to analyze and transform a majority of the data sets we encounter.

In our first post, we discussed summarizing verbose artifacts using the CONCAT function, converting timestamps using the TIME function, and using the COUNTIF function for log baselining. In this post, we will cover two additional versatile features of Excel: LOOKUP functions and PivotTables.

For this scenario, we will use a dataset of logon events for an example Microsoft Office 365 (O365) instance to demonstrate how an analyst can enrich information in the dataset. Then we will demonstrate some examples of how to use PivotTables to summarize information and highlight anomalies in the data quickly.

Our data contains the following columns:

  • Description – Event description
  • User – User’s name
  • User Principal Name – email address
  • App – such as Office 365, SharePoint, etc.
  • Location – Country
  • Date
  • IP address
  • User agent (simplified)
  • Organization – associated with IP address (as identified by O365)


Figure 1: O365 data set

LOOKUP for Data Enrichment

It may be useful to enrich the data with information that is not provided by the original log source but could help during analysis. A step FireEye Mandiant often performs during investigations is to take all unique IP addresses and query threat intelligence sources for each IP address's reputation, WHOIS information, connections to known threat actor activity, etc. This gives us more information about each IP address that we can take into consideration in our analysis.

While FireEye Mandiant is privy to historical engagement data and Mandiant Threat Intelligence, if security teams or organizations do not have access to commercial threat intelligence feeds, there are numerous open source intelligence services that can be leveraged.

We can also use IP address geolocation services to obtain latitude and longitude related to each source IP address. This information may be useful in identifying anomalous logons based on geographical location.

After taking all source IP addresses, running them against threat intelligence feeds and geolocating them, we have the following data added to a second sheet called “IP Address Intel” in our Excel document:


Figure 2: IP address enrichment

We can already see before we even dive into the logs themselves that we have suspicious activity: The five IP addresses in the 203.0.113.0/24 range in our data are known to be associated with activity connected to a fictional threat actor tracked as TMP.OGRE.

To enrich our original dataset, we will add three columns to our data to integrate the supplementary information: “Latitude,” “Longitude,” and “Threat Intel” (Figure 3). We can use the VLOOKUP or XLOOKUP functions to quickly retrieve the supplementary data and integrate it into our main O365 log sheet.


Figure 3: Enrichment columns

VLOOKUP

The traditional way to look up particular data in another array is by using the VLOOKUP function. We will use the following formula to reference the “Latitude” values for a given IP address:


Figure 4: VLOOKUP formula for Latitude
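(For reference, based on the Longitude and Threat Intel formulas shown later in this post, the Latitude formula in Figure 4 would be: =VLOOKUP(G2,'IP Address Intel'!$A$2:$D$15,2,0).)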

There are four parts to this formula:

  1. Value to look up:
    • This dictates what cell value we are going to look up more information for. In this case, it is cell G2, which is the IP address.
  2. Table array:
    • This defines the entire array in which we will look up our value and from which data will be returned. The first column in the array must contain the value being looked up. In the aforementioned example, we are searching in 'IP Address Intel'!$A$2:$D$15. In other words, we are looking in the other sheet in this workbook we created earlier titled “IP Address Intel”, and within that sheet, searching the cell range of A2 to D15.

      Figure 5: VLOOKUP table array

      Note the use of the “$” to ensure these are absolute references and will not be updated by Excel if we copy this formula to other cells.
  3. Column index number:
    • This identifies the column number from which to return data. The first column is considered column 1. We want to return the “Latitude” value for the given IP address, so in the aforementioned example, we tell Excel to return data from column 2.
  4. Range lookup (match type):
    • This part of the formula tells Excel what type of matching to perform on the value being looked up. Excel defaults to “Approximate” matching, which assumes the data is sorted and will match the closest value. We want to perform “Exact” matching, so we put “0” here (“FALSE” is also accepted).

With the VLOOKUP function complete for the “Latitude” data, we can use the fill handle to update this field for the rest of the data set.

To get the values for the “Longitude” and “Threat Intel” columns, we repeat the process with a similar formula, adjusting the column index number to reference the appropriate column, and then use the fill handle to fill in the rest of each column in our O365 data sheet:

  • For Longitude:
    • =VLOOKUP(G2,'IP Address Intel'!$A$2:$D$15,3,0)
  • For Threat Intel:
    • =VLOOKUP(G2,'IP Address Intel'!$A$2:$D$15,4,0)

Bonus Option: XLOOKUP

The XLOOKUP function in Excel is a more efficient way to reference the threat intelligence data sheet. XLOOKUP is a newer function introduced to Excel to replace the legacy VLOOKUP function and, at the time of writing this post, is only available to “O365 subscribers in the Monthly channel”, according to Microsoft. In this instance, we will also leverage Excel’s dynamic arrays and “spilling” to fill in this data more efficiently, instead of making an XLOOKUP function for each column.

NOTE: To utilize dynamic arrays and spilling, the data we are seeking to enrich cannot be in the form of a “Table” object. Instead, we will apply filters to the top row of our O365 data set by selecting the “Filter” option under “Sort & Filter” in the “Home” ribbon:


Figure 6: Filter option

To reference the threat intelligence data sheet using XLOOKUP, we will use the following formula:


Figure 7: XLOOKUP function for enrichment
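(Based on the lookup and return arrays described below, the XLOOKUP formula in Figure 7 would take the form: =XLOOKUP(G2,'IP Address Intel'!$A$2:$A$15,'IP Address Intel'!$B$2:$D$15).)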

There are three parts to this XLOOKUP formula:

  1. Value to lookup:
    • This dictates what cell value we are going to look up more information for. In this case, it is cell G2, which is the IP address.
  2. Array to look in:
    • This will be the array of data in which Excel will search for the value to look up. Excel does exact matching by default for XLOOKUP. In the aforementioned example, we are searching in 'IP Address Intel'!$A$2:$A$15. In other words, we are looking in the other sheet in this workbook titled “IP Address Intel”, and within that sheet, searching the cell range of A2 to A15:

      Figure 8: XLOOKUP array to look in

      Note the use of the “$” to ensure these are absolute references and will not be updated by Excel if we copy this formula to other cells.
  3. Array of data to return:
    • This part will be the array of data from which Excel will return data. In this case, Excel will return the data contained within the absolute range of B2 to D15 from the “IP Address Intel” sheet for the value that was looked up. In the aforementioned example formula, it will return the values in the row for the IP address 198.51.100.126:

      Figure 9: Data to be returned from ‘IP Address Intel’ sheet

      Because this is leveraging dynamic arrays and spilling, all three cells of the returned data will populate, as seen in Figure 4.

Now that our dataset is completely enriched by either using VLOOKUP or XLOOKUP, we can start hunting for anomalous activity. As a quick first step, since we know at least a handful of IP addresses are potentially malicious, we can filter on the “Threat Intel” column for all rows that match “TMP.OGRE” and reveal logons with source IP addresses related to known threat actors. Now we have timeframes and suspected compromised accounts to pivot off of for additional hunting through other data.

PIVOT! PIVOT! PIVOT!

One of the most useful tools for highlighting anomalies is Excel’s PivotTable feature, which summarizes data, performs frequency analysis, and quickly produces other statistics about a given dataset.

Location Anomalies

Let’s utilize a PivotTable to perform frequency analysis on the location from which users logged in. This type of technique may highlight activity where a user account logged in from a location which is unusual for them.

To create a PivotTable for our data, we can select any cell in our O365 data and select the entire range with Ctrl+A. Then, under the “Insert” tab in the ribbon, select “PivotTable”:


Figure 10: PivotTable selection

This will bring up a window, as seen in Figure 11, to confirm the data for which we want to make a PivotTable (Step 1 in Figure 11). Since we selected our O365 log data set with Ctrl+A, this should be automatically populated. It will also ask where we want to put the PivotTable (Step 2 in Figure 11). In this instance, we created another sheet called “PivotTable 1” to place the PivotTable:


Figure 11: PivotTable creation

Now that the PivotTable is created, we must select how we want to populate it using our data. Remember, we are trying to determine the locations from which all users logged in. We will want a row for each user and a sub-row for each location the user has logged in from. Let’s also add a count of how many times they logged in from each location; for this example, we will use the “Date” field to provide that count:


Figure 12: PivotTable field definitions

Examining this table, we can immediately see there are two users with source location anomalies: Ginger Breadman and William Brody have a small number of logons from “FarFarAway”, which is abnormal for these users based on this data set.

We can add more data to this PivotTable to get a timeframe of this suspicious activity by adding two more “Date” fields to the “Values” area. Excel defaults to “Count” of whatever field we drop in this area, but we will change this to the “Minimum” and “Maximum” values by using the “Value Field Settings”, as seen in Figure 13.


Figure 13: Adding min and max dates

Now we have a PivotTable that shows us anomalous locations for logons, as well as the timeframe in which the logons occurred, so we can hone our investigation. For this example, we also formatted all cells with timestamp values to reflect the format FireEye Mandiant typically uses during analysis by selecting all the appropriate cells, right-clicking and choosing “Format Cells”, and using a “Custom” format of “YYYY-MM-DD HH:MM:SS”.


Figure 14: PivotTable with suspicious locations and timeframe
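For analysts who prefer scripting this kind of frequency analysis, a rough pandas equivalent is sketched below. It assumes the enriched O365 data has been exported to a CSV named o365_logons.csv with the column names shown earlier (the file name and export step are our own, not part of the original walkthrough):

import pandas as pd

# Load the enriched logon data (file name is hypothetical).
logons = pd.read_csv("o365_logons.csv", parse_dates=["Date"])

# One row per user/location pair: logon count plus first and last logon time,
# mirroring the PivotTable built above.
summary = logons.pivot_table(
    index=["User", "Location"],
    values="Date",
    aggfunc=["count", "min", "max"],
)
print(summary)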

IP Address Anomalies

Geolocation anomalies may not always be valuable. However, using a similar configuration as the previous example, we can identify suspicious source IP addresses. We will add the “User Principal Name” and “IP Address” fields as Rows, and “IP Address” as Values. Let’s also add the “App” field to Columns. Our field settings and resulting table are displayed in Figure 15:


Figure 15: PivotTable with IP addresses and apps

With just a few clicks, we have a summarized table indicating which IP addresses each user logged in from, and which app they logged into. We can quickly identify two users who logged in from IP addresses in the 203.0.113.0/24 range six times, and which applications they logged into from each of these IP addresses.
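Continuing the pandas sketch from earlier (same hypothetical o365_logons.csv export and column names), the “App” field simply becomes the columns parameter:

import pandas as pd

logons = pd.read_csv("o365_logons.csv", parse_dates=["Date"])  # hypothetical export

# Rows: user and source IP; Columns: app; Values: count of logons.
ip_summary = logons.pivot_table(
    index=["User Principal Name", "IP Address"],
    columns="App",
    values="Date",
    aggfunc="count",
    fill_value=0,
)
print(ip_summary)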

While these are just a couple use cases, there are many ways to format and view evidence using PivotTables. We recommend trying PivotTables on any data set being reviewed with Excel and experimenting with the Rows, Columns, and Values parameters.

We also recommend adjusting the PivotTable options, which can help reformat the table itself into a format that might fit requirements.

Conclusion

These Excel functions are used frequently during investigations at FireEye Mandiant and are considered important forensic analysis techniques. The examples we give here are just a glimpse into the utility of LOOKUP functions and PivotTables. LOOKUP functions can be used to reference a multitude of data sources and can be applied in other situations during investigations such as tracking remediation and analysis efforts.

PivotTables may be used in a variety of ways as well, depending on what data is available, and what sort of information is being analyzed to identify suspicious activity. Employing these techniques, alongside the ones we highlighted previously, on a consistent basis will go a long way in "excelerating" forensic analysis skills and efficiency.

Coming Soon – “Active Directory Security for Attackers and Defenders”

Folks,

From the U.S. Department of Defense to the Trillion $ Microsoft Corporation, and from the White House to the Fortune 100, today over 85% of organizations worldwide operate on Microsoft Active Directory.


The cyber security of these foundational Active Directory deployments worldwide is thus paramount to cyber security worldwide, and yet, unfortunately, the Active Directory deployments of most organizations remain alarmingly vulnerable to compromise.

To help thousands of organizations adequately bolster their existing Active Directory security defenses, and to help millions of cyber security and IT personnel worldwide enhance their proficiency in this paramount subject, starting May 05, 2020, I will personally be sharing Active Directory security insights for everyone's benefit, at the Paramount Defenses Blog.

Save the date - May 05, 2020.

Best wishes,
Sanjay.

Putting the Model to Work: Enabling Defenders With Vulnerability Intelligence — Intelligence for Vulnerability Management, Part Four

One of the critical strategic and tactical roles that cyber threat intelligence (CTI) plays is in the tracking, analysis, and prioritization of software vulnerabilities that could potentially put an organization’s data, employees and customers at risk. In this four-part blog series, FireEye Mandiant Threat Intelligence highlights the value of CTI in enabling vulnerability management, and unveils new research into the latest threats, trends and recommendations.

Organizations often have to make difficult choices when it comes to patch prioritization. Many are faced with securing complex network infrastructure with thousands of systems, different operating systems, and disparate geographical locations. Even when armed with a simplified vulnerability rating system, it can be hard to know where to start. This problem is compounded by the ever-changing threat landscape and increased access to zero-days.

At FireEye, we apply the rich body of knowledge accumulated over years of global intelligence collection, incident response investigations, and device detections, to help our customers defend their networks. This understanding helps us to discern between hundreds of newly disclosed vulnerabilities to provide ratings and assessments that empower network defenders to focus on the most significant threats and effectively mitigate risk to their organizations. 

In this blog post, we’ll demonstrate how we apply intelligence to help organizations assess risk and make informed decisions about vulnerability management and patching in their environments.

Functions of Vulnerability Intelligence

Vulnerability intelligence helps clients to protect their organizations, assets, and users in three main ways:


Figure 1: Vulnerability intelligence can help with risk assessment and informed decision making

Tailoring Vulnerability Prioritization

We believe it is important for organizations to build a defensive strategy that prioritizes the types of threats that are most likely to impact their environment, and the threats that could cause the most damage. When organizations have a clear picture of the spectrum of threat actors, malware families, campaigns, and tactics that are most relevant to their organization, they can make more nuanced prioritization decisions when those threats are linked to exploitation of vulnerabilities. A lower risk vulnerability that is actively being exploited in the wild against your organization or similar organizations likely has a greater potential impact to you than a vulnerability with a higher rating that is not actively being exploited.


Figure 2: Patch Prioritization Philosophy

Integration of Vulnerability Intelligence in Internal Workflows

Based on our experience assisting organizations globally with enacting intelligence-led security, we outline three use cases for integrating vulnerability intelligence into internal workflows.


Figure 3: Integration of vulnerability intelligence into internal workflows

Tools and Use Cases for Operationalizing Vulnerability Intelligence

1. Automate Processes by Fusing Intelligence with Internal Data

Automation is valuable to security teams with limited resources. Similar to the automated detection and blocking of indicator data, vulnerability threat intelligence can be operationalized by merging data from internal vulnerability scans with threat intelligence (via systems like the Mandiant Intelligence API) and aggregating it into a SIEM, threat intelligence platform, and/or ticketing system. This enhances visibility into various sources of both internal and external data, with vulnerability intelligence providing risk ratings and indicating which vulnerabilities are being actively exploited. FireEye also offers a custom tool called FireEye Intelligence Vulnerability Explorer (“FIVE”), described in more detail below, for quickly correlating vulnerabilities found in logs and scans with Mandiant ratings.

Security teams can similarly automate communication and workflow tracking processes using threat intelligence by defining rules for auto-generating tickets based on certain combinations of Mandiant risk and exploitation ratings; for example, internal service-level-agreements (SLAs) could state that ‘high’ risk vulnerabilities that have an exploitation rating of ‘available,’ ‘confirmed,’ or ‘wide’ must be patched within a set number of days. Of course, the SLA will depend on the company’s operational needs, the capability of the team that is advising the patch process, and executive buy-in to the SLA process. Similarly, there may be an SLA defined for patching vulnerabilities that are of a certain age. Threat intelligence tells us that adversaries continue to use older vulnerabilities as long as they remain effective. For example, as recently as January 2020, we observed a Chinese cyber espionage group use an exploit for CVE-2012-0158, a Microsoft Office stack-based buffer overflow vulnerability originally released in 2012, in malicious email attachments to target organizations in Southeast Asia. Automating the vulnerability-scan-to-vulnerability-intelligence correlation process can help bring this type of issue to light. 
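As a purely illustrative sketch of such a rule, the snippet below auto-flags vulnerabilities for out-of-band ticketing based on a combination of risk and exploitation ratings. The field names, rating values, and SLA windows are assumptions for the example, not the actual Mandiant Intelligence API schema:

from datetime import timedelta

# Hypothetical SLA policy: (risk rating, exploitation rating) -> patch window.
PATCH_SLAS = {
    ("high", "wide"): timedelta(days=7),
    ("high", "confirmed"): timedelta(days=7),
    ("high", "available"): timedelta(days=14),
}

def ticket_for(vuln):
    """Return a ticket dict if the vulnerability matches an SLA rule, else None."""
    sla = PATCH_SLAS.get((vuln["risk_rating"], vuln["exploitation_rating"]))
    if sla is None:
        return None
    return {
        "cve": vuln["cve_id"],
        "due_in_days": sla.days,
        "summary": "Patch %s (risk=%s, exploitation=%s)"
                   % (vuln["cve_id"], vuln["risk_rating"], vuln["exploitation_rating"]),
    }

print(ticket_for({"cve_id": "CVE-2012-0158", "risk_rating": "high", "exploitation_rating": "wide"}))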

Another potential use case employing automation would be incorporating vulnerability intelligence as security teams are testing updates or new hardware and software prior to introduction into the production environment. This could dramatically reduce the number of vulnerabilities that need to be patched in production and help prioritize those vulnerabilities that need to be patched first based on your organization’s unique threat profile and business operations.

2. Communicating with Internal Stakeholders

Teams can leverage vulnerability reporting to send internal messaging, such as flash-style notifications, to alert other teams when Mandiant rates a vulnerability known to impact your systems as high or critical risk. These are the vulnerabilities that should take priority in patching and should be patched outside of the regular cycle.

Data-informed intelligence analysis may help convince stakeholders outside of the security organization of the importance of patching quickly, even when doing so is inconvenient to business operations. Threat intelligence can inform an organization’s appropriate use of security resources given the potential business impact of security incidents.

3. Threat Modeling

Organizations can leverage vulnerability threat intelligence to inform their threat modeling to gain insight into the most likely threats to their organization, and better prepare to address threats in the mid to long term. Knowledge of which adversaries pose the greatest threat to your organization, and then knowledge of which vulnerabilities those threat groups are exploiting in their operations, can enable your organization to build out security controls and monitoring based on those specific CVEs.

Examples

The following examples illustrate workflows supported by vulnerability threat intelligence to demonstrate how organizations can operationalize threat intelligence in their existing security teams to automate processes and increase efficiency given limited resources.

Example 1: Using FIVE for Ad-hoc Vulnerability Prioritization

The FireEye Intelligence Vulnerability Explorer (“FIVE”) tool is available for customers here. It is available for MacOS and Windows, requires a valid subscription for Mandiant Vulnerability Intelligence, and is driven from an API integration.


Figure 4: FIVE Tool for Windows and MacOS

In this scenario, an organization’s intelligence team was asked to quickly identify any vulnerability that required patching from a server vulnerability scan after that server was rebuilt from a backup image. The intelligence team was presented with a text file containing a list of CVE numbers. Users can drag and drop a text-readable file (CSV, TXT, JSON, etc.) into the FIVE tool, and the CVE numbers will be extracted from the file using regular expressions. In this example, the vulnerabilities found in the file were presented to the user, as shown in Figure 6 (below).
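FIVE’s internals are not documented here, but the CVE extraction step it performs can be approximated in a few lines of Python, shown purely as an illustration (the input file name is hypothetical):

import re
from pathlib import Path

# CVE IDs follow the pattern CVE-YYYY-NNNN, with four or more trailing digits.
CVE_PATTERN = re.compile(r"CVE-\d{4}-\d{4,}", re.IGNORECASE)

def extract_cves(path):
    """Return the unique CVE IDs found in any text-readable file (CSV, TXT, JSON, ...)."""
    text = Path(path).read_text(errors="ignore")
    return sorted({match.upper() for match in CVE_PATTERN.findall(text)})

print(extract_cves("server_scan_results.txt"))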


Figure 5: FIVE tool startup screen waiting for file input


Figure 6: FIVE tool after successfully regexing the CVE-IDs from the file

After selecting all CVE-IDs, the user clicked the “Fetch Vulnerabilities” button, causing the application to make the necessary two-stage API call to the Intelligence API.

The output depicted in Figure 7 shows the user which vulnerabilities should be prioritized based on FireEye’s risk and exploitation ratings. The red and maroon boxes indicate vulnerabilities that require attention, while the yellow boxes indicate vulnerabilities that should be reviewed for possible action. Details of the vulnerabilities are displayed below, with associated intelligence report links providing further context.


Figure 7: FIVE tool with meta-data, CVE-IDs, and links to related Intelligence Reports

FIVE can also facilitate other use cases for vulnerability intelligence. For example, this chart can be attached in messaging to other internal stakeholders or executives for review, as part of a status update to provide visibility on the organization’s vulnerability management program.

Example 2: Vulnerability Prioritization, Internal Communications, Threat Modeling

CVE-2019-19781 is a vulnerability affecting Citrix that Mandiant Threat Intelligence rated as critical. Mandiant discussed early exploitation of this vulnerability in a January 2020 blog post. We continued to monitor for additional exploitation, and informed our clients when we observed exploitation by ransomware operators and the Chinese espionage group APT41.

In cases like these, threat intelligence can help impacted organizations find the “signal” in the “noise” and prioritize patching using knowledge of exploitation activity and the motives and targeting patterns of the threat actors behind it. Enterprises can use intelligence to inform internal stakeholders of the potential risk and provide context as to the potential business and financial impact of a ransomware infection or an intrusion by a highly resourced state-sponsored group. This supports the immediate patch prioritization decision while simultaneously emphasizing the value of a holistically informed security organization.

Example 3: Intelligence Reduces Unnecessary Resource Expenditure — Automating Vulnerability Prioritization and Communications

Another common application for vulnerability intelligence is informing security teams and stakeholders when to stand down. When a vulnerability is reported in the media, organizations often spin up resources to patch as quickly as possible. Leveraging threat intelligence in security processes helps an organization discern when it is necessary to respond in an all-hands-on-deck manner.

Take the case of CVE-2019-12650, originally disclosed on Sept. 25, 2019, with an NVD rating of “High.” Without further information, an organization relying on this score to determine prioritization may include this vulnerability in the same patch cycle as numerous other vulnerabilities rated High or Critical. As previously discussed, our experts reviewed the vulnerability and determined that it required the highest level of privileges available to successfully exploit, and that there was no evidence of exploitation in the wild.

This is a case where threat intelligence reporting as well as automation can effectively minimize the need to unnecessarily spin up resources. Although the public NVD score rated this vulnerability high, Mandiant Intelligence rated it as “low” risk due to the high level of privileges needed to use it and lack of exploitation in the wild. Based on this assessment, organizations may decide that this vulnerability could be patched in the regular cycle and does not necessitate use of additional resources to patch out-of-band. When Mandiant ratings are automatically integrated into the patching ticket generation process, this can support efficient prioritization. Furthermore, an organization could use the analysis to issue an internal communication informing stakeholders of the reasoning behind lowering the prioritization.

Vulnerabilities: Managed

Because we have been closely monitoring vulnerability exploitation trends for years, we were able to distinguish when attacker use of zero-days evolved from use by a select class of highly skilled attackers, to becoming accessible to less skilled groups with enough money to burn. Our observations consistently underscore the speed with which attackers exploit useful vulnerabilities, and the lack of exploitation for vulnerabilities that are hard to use or do not help attackers fulfill their objectives. Our understanding of the threat landscape helps us to discern between hundreds of newly disclosed vulnerabilities to provide ratings and assessments that empower network defenders to focus on the most significant threats and effectively mitigate risk to their organizations.

Mandiant Threat Intelligence enables organizations to implement a defense-in-depth approach to holistically mitigate risk by taking all feasible steps—not just patching—to prevent, detect, and stymie attackers at every stage of the attack lifecycle with both technology and human solutions.

Register today to hear FireEye Mandiant Threat Intelligence experts discuss the latest in vulnerability threats, trends and recommendations in our upcoming April 30 webinar.

Additional Resources

Zero-Day Exploitation Increasingly Demonstrates Access to Money, Rather than Skill — Intelligence for Vulnerability Management, Part One

Think Fast: Time Between Disclosure, Patch Release and Vulnerability Exploitation — Intelligence for Vulnerability Management, Part Two

Separating the Signal from the Noise: How Mandiant Intelligence Rates Vulnerabilities — Intelligence for Vulnerability Management, Part Three

Mandiant offers Intelligence Capability Development (ICD) services to help organizations optimize their ability to consume, analyze and apply threat intelligence.

The FIVE tool is available on the FireEye Market. It requires a valid subscription for Mandiant Vulnerability Intelligence, and is driven from an API integration. Please contact your Intelligence Enablement Manager or FireEye Support to obtain API keys. 

Mandiant's OT Asset Vulnerability Assessment Service informs customers of relevant vulnerabilities by matching a customer's asset list against vulnerabilities and advisories. Relevant vulnerabilities and advisories are delivered in a report from as little as once a year, to as often as once a week. Additional add-on services such as asset inventory development and deep dive analysis of critical assets are available. Please contact your Intelligence Enablement Manager for more information.

Security Threats Facing Modern Mobile Apps

We use mobile apps every day from a number of different developers, but do we ever stop to think about how much thought and effort went into the security of these apps?

It is believed that 1 out of every 36 mobile devices has been compromised by a mobile app security breach. And with more than 5 billion mobile devices globally, you do the math.

The news that a consumer-facing application or business has experienced a security breach is a story that breaks far too often. As of late, video conferencing apps like Zoom and Houseparty have been the centre of attention in the news cycle.

As apps continue to integrate into the everyday life of our users, we cannot wait for a breach to start considering the efficacy of our security measures. When users shop online, update their fitness training log, review a financial statement, or connect with a colleague over video, we are wielding their personal data and must do so responsibly.

Let’s cover some of the ways hackers access sensitive information and tips to prevent these hacks from happening to you.

The Authentication Problem

Authentication is the ability to reliably determine that the person trying to access a given account is the actual person who owns that account. One-factor authentication means accepting a username and password to authenticate a user, but as we know, people choose weak passwords and reuse them across all their accounts.

If a hacker accesses a user’s username and password, even if through no fault of yours, they are able to access that user’s account information.

Although two-factor authentication (2FA) can feel superfluous at times, it is a simple way to protect user accounts from hackers.


2FA uses a secondary means of authenticating the user, such as sending a confirmation code to a mobile device or email address. This adds another layer of protection by making it more difficult for hackers to fake authentication. 
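One common way to implement that second factor is a time-based one-time password (TOTP). A minimal sketch using the open source pyotp library (illustrative only, not tied to any specific vendor’s implementation):

import pyotp

# Generate and store a per-user secret at enrollment time.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user's authenticator app produces a 6-digit code from the same secret;
# the server verifies it alongside the password.
print(totp.now())               # current code, for demonstration
print(totp.verify(totp.now()))  # True when the submitted code is valid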

Consider using services that handle authentication securely and having users sign in with them. Google and Facebook, for example, are used by billions of people and they have had to solve authentication problems on a large scale.

Reverse Engineering

Reverse engineering is when hackers develop a clone of an app to get innocent people to download malware. How is this accomplished? All the hacker has to do is gain access to the source code. And if your team is not cautious with permissions and version control systems, a hacker can walk right in unannounced and gain access to the source code along with private environment variables.

One way to safeguard against this is to obfuscate code. Obfuscation and minification make the code far less readable to hackers, which makes reverse engineering an app much more difficult. You should also make sure your code is in a private repository, secret keys and variables are encrypted, and your team is aware of best practices.

If you’re interested in learning more ways hackers can breach mobile app security, check out the infographic below from CleverTap.



Authored by Drew Page. Drew is a content marketing lead from San Diego, where he helps create epic content for companies like CleverTap. He loves learning, writing and playing music. When not surfing the web, you can find him actually surfing, in the kitchen or in a book.

Email bungle at company seeking jobkeeper payments exposes staff’s personal details

Names, addresses and birthdates of more than 100 people shared in privacy breach

The company responsible for delivering traffic reports on radio and TV stations across Australia accidentally sent out the dates of birth, names and home addresses of more than 100 current and former staff to potentially thousands of people as the company seeks to apply for the jobkeeper payments.

Australian Traffic Network provides short traffic report updates during news bulletins to 80 radio and television stations, including the ABC, Seven, Nine, 10, 2GB and Triple M.


Vietnamese Threat Actors APT32 Targeting Wuhan Government and Chinese Ministry of Emergency Management in Latest Example of COVID-19 Related Espionage

From at least January to April 2020, suspected Vietnamese actors APT32 carried out intrusion campaigns against Chinese targets that Mandiant Threat Intelligence believes were designed to collect intelligence on the COVID-19 crisis. Spear phishing messages were sent by the actor to China's Ministry of Emergency Management as well as the government of Wuhan, where COVID-19 was first identified. While targeting of East Asia is consistent with the activity we’ve previously reported on APT32, this incident, and other publicly reported intrusions, are part of a global increase in cyber espionage related to the crisis, carried out by states desperately seeking solutions and nonpublic information.

Phishing Emails with Tracking Links Target Chinese Government

The first known instance of this campaign was on Jan. 6, 2020, when APT32 sent an email with an embedded tracking link (Figure 1) to China's Ministry of Emergency Management using the sender address lijianxiang1870@163[.]com and the subject 第一期办公设备招标结果报告 (translation: Report on the first quarter results of office equipment bids). The embedded link contained the victim's email address and code to report back to the actors if the email was opened.


Figure 1: Phishing email to China's Ministry of Emergency Management

Mandiant Threat Intelligence uncovered additional tracking URLs that revealed targets in China's Wuhan government and an email account also associated with the Ministry of Emergency Management.

  • libjs.inquirerjs[.]com/script/<VICTIM>@wuhan.gov.cn.png
  • libjs.inquirerjs[.]com/script/<VICTIM>@chinasafety.gov.cn.png
  • m.topiccore[.]com/script/<VICTIM>@chinasafety.gov.cn.png
  • m.topiccore[.]com/script/<VICTIM>@wuhan.gov.cn.png
  • libjs.inquirerjs[.]com/script/<VICTIM>@126.com.png

The libjs.inquirerjs[.]com domain was used in December as a command and control domain for a METALJACK phishing campaign likely targeting Southeast Asian countries.

Additional METALJACK Activity Suggests Campaigns Targeting Mandarin Speakers Interested in COVID-19

APT32 likely used COVID-19-themed malicious attachments against Chinese-speaking targets. While we have not uncovered the full execution chain, we uncovered a METALJACK loader that displays a Chinese-language, COVID-19-themed decoy document while launching its payload.

When the METALJACK loader, krpt.dll (MD5: d739f10933c11bd6bd9677f91893986c) is loaded, the export "_force_link_krpt" is likely called. The loader executes one of its embedded resources, a COVID-themed RTF file, displaying the content to the victim and saving the document to %TEMP%.

The decoy document (Figure 2) titled 冠状病毒实时更新:中国正在追踪来自湖北的旅行者, MD5: c5b98b77810c5619d20b71791b820529 (Translation: COVID-19 live updates: China is currently tracking all travelers coming from Hubei Province) displays a copy of a New York Times article to the victim.


Figure 2: COVID-themed decoy document

The malware also loads shellcode in an additional resource, MD5: a4808a329b071a1a37b8d03b1305b0cb, which contains the METALJACK payload. The shellcode performs a system survey to collect the victim's computer name and username and then appends those values to a URL string using libjs.inquirerjs[.]com. It then attempts to call out to the URL. If the callout is successful, the malware loads the METALJACK payload into memory.

It then uses vitlescaux[.]com for command and control.

Outlook

The COVID-19 crisis poses an intense, existential concern to governments, and the current air of distrust is amplifying uncertainties, encouraging intelligence collection on a scale that rivals armed conflict. National, state or provincial, and local governments, as well as non-government organizations and international organizations, are being targeted, as seen in reports. Medical research has been targeted as well, according to public statements by a Deputy Assistant Director of the FBI. Until this crisis ends, we anticipate related cyber espionage will continue to intensify globally.

Indicators

Domains:

  • m.topiccore[.]com
  • jcdn.jsoid[.]com
  • libjs.inquirerjs[.]com
  • vitlescaux[.]com

Email Address:

  • lijianxiang1870@163[.]com

Files:

  • MD5: d739f10933c11bd6bd9677f91893986c – METALJACK loader
  • MD5: a4808a329b071a1a37b8d03b1305b0cb – METALJACK payload
  • MD5: c5b98b77810c5619d20b71791b820529 – Decoy Document (Not Malicious)

Detecting the Techniques

Endpoint Security: Generic.mg.d739f10933c11bd6

Network Security: Trojan.Apost.FEC2, Trojan.Apost.FEC3, fe_ml_heuristic

Email Security: Trojan.Apost.FEC2, Trojan.Apost.FEC3, fe_ml_heuristic

Helix:

Mandiant Security Validation Actions

  • A150-096 - Malicious File Transfer - APT32, METALJACK, Download
  • A150-119 - Protected Theater - APT32, METALJACK Execution
  • A150-104 - Phishing Email - Malicious Attachment, APT32, Contact Information Lure

MITRE ATT&CK Technique Mapping

Initial Access: Spearphishing Attachment (T1193), Spearphishing Link (T1192)

Execution: Regsvr32 (T1117), User Execution (T1204)

Defense Evasion: Regsvr32 (T1117)

Command and Control: Standard Cryptographic Protocol (T1032), Custom Command and Control Protocol (T1094)

Using Big Tech to tackle coronavirus risks swapping one lockdown for another | Adam Smith

An app that logs movements and contacts might seem like a fair trade now but we risk giving away our privacy for good

Even when the lockdown is lifted, there is no guarantee that life will ever return to normal. To prevent a future outbreak of coronavirus, the UK will need to roll out mass testing, maintain some social distancing measures and closely monitor communities to curb future flare-ups.

In pursuing that last aim, governments across the world are developing technology to track our movements. When lockdown ends, technology could be a valuable means of controlling future outbreaks, alerting people to cases of Covid-19 in their area and hopefully preventing future shutdowns.


Research Grants to support Google VRP Bug Hunters during COVID-19



In 2015, we launched our Vulnerability Research Grant program, which allows us to recognize the time and efforts of security researchers, including the situations where they don't find any vulnerabilities. To support our community of security researchers and to help protect our users around the world during COVID-19, we are announcing a temporary expansion of our Vulnerability Research Grant efforts.

In light of new challenges caused by the coronavirus outbreak, we are expanding this initiative by  creating a COVID-19 grant fund. As of today, every Google VRP Bug Hunter who submitted at least two remunerated reports from 2018 through April 2020 will be eligible for a $1,337 research grant. We are dedicating these grants to support our researchers during this time. We are committed to protecting our users and we want to encourage the research community to help us identify threats and to prevent potential vulnerabilities in our products.

We understand the individual challenges COVID-19 has placed on the research community are different for everyone and we hope that these grants will allow us to support our Bug Hunters during these uncertain times. Even though our grants are intended to recognize the efforts of our frequent researchers regardless of their results, as always, bugs found during the grant are eligible for regular rewards per the Vulnerability Reward Program (VRP) rules. We are aware that some of our partners might not be interested in monetary grants. In such cases, we will offer the option to donate the grant to an established COVID-19 related charity and within our discretion, will monetarily match these charitable donations.

For those of you who recently joined us or are planning to start, it’s never too late. We are committed to continue the Vulnerability Research Grant program throughout 2020, so stay tuned for future announcements and follow us on @GoogleVRP!

You’ve Got (0-click) Mail!


Impact & Key Details (TL;DR):

  • The vulnerability allows remote code execution and enables an attacker to remotely infect a device by sending emails that consume a significant amount of memory
  • The vulnerability does not necessarily require a large email – a regular email which is able to consume enough RAM would be sufficient. There are many ways to achieve such resource exhaustion including RTF, multi-part, and other methods
  • Both vulnerabilities were triggered in-the-wild
  • The vulnerability can be triggered before the entire email is downloaded, hence the email content won’t necessarily remain on the device
  • We are not dismissing the possibility that attackers may have deleted remaining emails following a successful attack
  • Vulnerability trigger on iOS 13: Unassisted (zero-click) attacks on iOS 13 when the Mail application is opened in the background
  • Vulnerability trigger on iOS 12: The attack requires a click on the email. The attack will be triggered before the content is rendered, and the user won’t notice anything anomalous in the email itself
  • Unassisted (zero-click) attacks on iOS 12 can be triggered if the attacker controls the mail server
  • The vulnerabilities have existed at least since iOS 6 (released September 2012), when the iPhone 5 came out
  • The earliest triggers we have observed in the wild were on iOS 11.2.2 in January 2018

This post is the first part of a series – See post number 2


Multiple Vulnerabilities in MobileMail/Maild

Introduction

Following a routine iOS Digital Forensics and Incident Response (DFIR) investigation, ZecOps found a number of suspicious events affecting the default Mail application on iOS, dating as far back as January 2018. ZecOps analyzed these events and discovered an exploitable vulnerability affecting Apple’s iPhones and iPads. ZecOps detected multiple triggers of this vulnerability in the wild against enterprise users, VIPs, and MSSPs, over a prolonged period of time.

The attack’s scope consists of sending a specially crafted email to a victim’s mailbox, enabling it to trigger the vulnerability in the context of the iOS MobileMail application on iOS 12 or maild on iOS 13. Based on ZecOps Research and Threat Intelligence, we surmise with high confidence that these vulnerabilities – in particular, the remote heap overflow – are widely exploited in the wild in targeted attacks by an advanced threat operator(s).

The suspected targets included:

  • Individuals from a Fortune 500 organization in North America
  • An executive from a carrier in Japan 
  • A VIP from Germany
  • MSSPs from Saudi Arabia and Israel
  • A Journalist in Europe
  • Suspected: An executive from a Swiss enterprise

A few of the suspicious events even included strings commonly used by hackers (e.g. 414141…4141) – see FAQ. After verifying that it wasn’t a red-team exercise, we validated that these strings were provided by the email sender. Notably, although the data confirms that the exploit emails were received and processed by victims’ iOS devices, the corresponding emails that should have been received and stored on the mail server were missing. Therefore, we infer that these emails may have been deleted intentionally as part of the attack’s operational security cleanup measures.

We believe that these attacks are correlated with at least one nation-state threat operator, or a nation-state that purchased the exploit from a third-party researcher in Proof of Concept (POC) grade and used it ‘as-is’ or with minor modifications (hence the 4141..41 strings).

While ZecOps refrains from attributing these attacks to a specific threat actor, we are aware that at least one ‘hackers-for-hire’ organization is selling exploits using vulnerabilities that leverage email addresses as a main identifier.

We advise updating as soon as an iOS update is available.

Exploitation timeline

We are aware of multiple triggers in the wild that happened starting from Jan 2018, on iOS 11.2.2. It is likely that the same threat operators are actively abusing these vulnerabilities presently. It is possible that the attacker(s) were using this vulnerability even earlier. We have seen similarities between some of the suspected victims during triggers to these vulnerabilities.

Affected versions:

  • All tested iOS versions are vulnerable including iOS 13.4.1. 
  • Based on our data, these bugs were actively triggered on iOS 11.2.2 and potentially earlier.
  • iOS 6 and above are vulnerable. iOS 6 was released in 2012. Versions prior to iOS 6 might be vulnerable too but we haven’t checked earlier versions. At the time of iOS 6 release, iPhone 5 was in the market.

ZecOps Customers & Partners

ZecOps Gluon iOS DFIR customers can detect attacks leveraging MobileMail/maild vulnerabilities.

Thank you

Before we dive deeper, we would like to thank Apple’s product security and engineering teams for delivering a beta patch to block these vulnerabilities from further abuse once it is deployed to GA.

Vulnerability Details

ZecOps found that the implementation of MFMutableData in the MIME library lacks error checking for the ftruncate() system call, which leads to an out-of-bounds (OOB) write. We also found a way to trigger the OOB write without waiting for the ftruncate system call to fail. In addition, we found a heap overflow that can be triggered remotely.

We are aware of remote triggers of both vulnerabilities in the wild. 

Both the OOB Write bug, and the Heap-Overflow bug, occurred due to the same problem: not handling the return value of the system calls correctly.

The remote bug can be triggered while processing the downloaded email; in such a scenario, the email won’t be fully downloaded to the device as a result.

Affected Library: /System/Library/PrivateFrameworks/MIME.framework/MIME

Vulnerable function: -[MFMutableData appendBytes:length:]

Abnormal behavior once the vulnerabilities are exploited

Besides a temporary slowdown of the Mail application, users should not observe any other anomalous behavior. Following an exploit attempt (whether successful or unsuccessful) on iOS 12, users may notice a sudden crash of the Mail application.

On iOS 13, besides a temporary slowdown, it would not be noticeable. Failed attacks would not be noticeable on iOS 13 if another attack is carried out afterwards and deletes the email.

In failed attacks, the emails sent by the attacker would show the message “This message has no content.”, as seen in the picture below:

Crash Forensics Analysis

Part of a crash (one of multiple crashes) experienced by the user is as follows.

The crashed instruction was stnp x8, x9, [x3], meaning that the values of x8 and x9 were being written to the memory address held in x3; the crash occurred because that address, 0x000000013aa1c000, was invalid.

Thread 3 Crashed:
0   libsystem_platform.dylib      	0x000000019e671d88 _platform_memmove +88
       0x19e671d84         b.ls 0x00008da8                
       0x19e671d88         stnp x8, x9, [x3]              
       0x19e671d8c         stnp x10, x11, [x3, #16]       
       0x19e671d90         add x3, x3, 0x20               
       0x19e671d94         ldnp x8, x9, [x1]              
1   MIME                          	0x00000001b034c4d8 -[MFMutableData appendBytes:length:] + 356 
2   Message                       	0x00000001b0b379c8 -[MFDAMessageContentConsumer consumeData:length:format:mailMessage:] + 808
Registers:
x0: 0x000000013aa1b05a  x5: 0x0000000000000006
x1: 0x0000000102f0cfc6  x6: 0x0000000000000000
x2: 0x0000000000004a01  x7: 0x0000000000000000
x3: 0x000000013aa1c000  x8: 0x3661614331732f0a
x4: 0x000000000000004b  x9: 0x48575239734c314a

In order to find out why the process crashed, we need to take a look at the implementation of MFMutableData.

The call tree below is taken from the crash log, experienced by only a select number of devices.

-[MFDAMessageContentConsumer consumeData:length:format:mailMessage:] 
|
+--  -[MFMutableData appendData:]
   |
   +--  -[MFMutableData appendBytes:length:]
       |
       +-- memmove()

By analyzing the MIME library, the pseudo code of -[MFMutableData appendBytes:length:] is as follows:

-(void)appendBytes:(const void*)bytes length:(uint64_t)len{
    //...
    uint64_t new_length = old_length + len;
    [self setLength:new_length];
    if (!self->flush){
        self->flush = true;
    }
    //...
    memmove(dest, bytes, len); // if the backing file was never extended, dest + len exceeds the mapped region and this write goes out of bounds
}

The following call stack was executed before the crash happened:

-[MFDAMessageContentConsumer consumeData:length:format:mailMessage:] 
|
+--  -[MFMutableData appendData:]
   |
   +--  -[MFMutableData appendBytes:length:]
       |
       +-- -[MFMutableData setLength:]
          |
          +-- -[MFMutableData _flushToDisk:capacity:]
             |
             +-- ftruncate()

A file is used to store the actual data once the data size reaches a threshold; when the data changes, the content and the size of the mapped file should be changed accordingly. The ftruncate() system call is invoked inside -[MFMutableData _flushToDisk:capacity:] to adjust the size of the mapped file.

Following is the pseudo code of -[MFMutableData _flushToDisk:capacity:]

- (void)_flushToDisk:(uint64_t)length capacity:(uint64_t)capacity;{
    boolean_t flush;
    //...
    if (self->path){
        boolean_t flush = self->flush; //<-line [a]
    }else{
        //...
        flush = true;
    }
    if(flush){ //<-line [b]
        //...
        ftruncate(self->path, capacity);
        //...
        self->flush = false;
    }
}

The man page of ftruncate specifies:

ftruncate() and truncate() cause the file named by path, or referenced by fildes, to be truncated
 (or extended) to length bytes in size. If the file size exceeds length, any extra data is discarded. 
If the file size is smaller than length, the file is extended and filled with zeros to the indicated 
length.  The ftruncate() form requires the file to be open for writing.
RETURN VALUES
    A value of 0 is returned if the call succeeds.  If the call fails a -1 is returned, and the global 
    variable errno specifies the error.

According to the man page: “If the call fails a -1 is returned, and the global variable errno specifies the error.” which means under certain conditions, this system call would fail to truncate the file and return an error code.

However, upon failure of the ftruncate system call, _flushToDisk continues anyway, which means the mapped file is never extended; execution eventually reaches the memmove() in the appendBytes() function, causing an out-of-bounds (OOB) write to the mmap’ed file.

Finding another trigger

We know that the crash was due to a failed ftruncate() system call. Does that mean we can’t do anything but wait for the system call to fail?

Let’s take another look at the -[MFMutableData _flushToDisk:capacity:] function.

- (void)_flushToDisk:(uint64_t)length capacity:(uint64_t)capacity;{
    boolean_t flush;
    //...
    if (self->path){
        boolean_t flush = self->flush; //<-line [a]
    }else{
        //...
        flush = true;
    }
    if(flush){ //<-line [b]
        //...
        ftruncate(self->path, capacity);
        //...
        self->flush = false;
    }
}

As you can see in line [b], it checks whether the flush flag is true before calling ftruncate(). This means if we can set the flush flag to false at line [a], the ftruncate() won’t be executed at all.

If someone calls -[MFMutableData setLength:] (which results in flush being set to 0) before calling -[MFMutableData appendData:], ftruncate() won’t be executed (due to flush == 0) and there will be a similar result.

The following backtrace is a demonstration of a local POC. By combining this OOB write with an additional vulnerability and/or methods to control the memory layout, this vulnerability could be triggered remotely – for example, by controlling the selectors (as we have observed in other DFIR events as well as in a Google Project Zero blog post).

Thread 0 Crashed:
0   libsystem_platform.dylib      	0x00000001cc442d98 _platform_memmove  + 88
       0x1cc442d8c         stnp x14, x15, [x0, #16]       
       0x1cc442d90         subs x2, x2, 0x40              
       0x1cc442d94         b.ls 0x00008db8    // 0x00000001cc44bf30 
       0x1cc442d98         stnp x8, x9, [x3]              
       0x1cc442d9c         stnp x10, x11, [x3, #16]       
       0x1cc442da0         add x3, x3, 0x20               
       0x1cc442da4         ldnp x8, x9, [x1]              

1   MIME                          	0x00000001ddbf0518 -[MFMutableData appendBytes:length:]  + 352
       0x1ddbf050c         mov x1, x20                    
       0x1ddbf0510         mov x2, x19                    
       0x1ddbf0514         bl 0x000498f4    // 0x00000001ddc39e08 
       0x1ddbf0518         ldp x29, x30, [sp, #80]        
       0x1ddbf051c         ldp x20, x19, [sp, #64]        
       0x1ddbf0520         ldp x22, x21, [sp, #48]        
       0x1ddbf0524         ldp x24, x23, [sp, #32]        

Although this is indeed a vulnerability that should be patched, we suspect that it was triggered by accident while the attackers were trying to exploit the following vulnerability.

2nd Vulnerability: Remote Heap Overflow in MFMutableData

We continued our investigation into the suspicious remotely triggered events and determined that there is another vulnerability in the same area.

The backtrace is as follows:

-[MFDAMessageContentConsumer consumeData:length:format:mailMessage:] 
|
+--  -[MFMutableData appendData:]
    |
   +--  -[MFMutableData appendBytes:length:]
       |
       +-- -[MFMutableData _mapMutableData]

While analyzing the code flow, we determined the following:

  • The function [MFDAMessageContentConsumer consumeData:length:format:mailMessage:] gets called when downloading an email in raw MIME form, and will be called multiple times until the email is fully downloaded in Exchange mode. It will create a new NSMutableData object and call appendData: for any new streaming data that belongs to the same email/MIME message. For other protocols like IMAP, it uses -[MFConnection readLineIntoData:] instead, but the logic and vulnerability are the same.
  • NSMutableData sets a threshold of 0x200000 bytes: if the data is bigger than 0x200000 bytes, it will write the data into a file and then use the mmap system call to map the file into the device’s memory. The threshold size of 0x200000 can easily be exceeded, so every time new data needs to be appended, the file is re-mmap’ed, and the file size as well as the mmap size grow larger and larger.
  • Remapping is done inside -[MFMutableData _mapMutableData:]; the vulnerability is inside this function.

Pseudocode of the vulnerable function is shown below.

-[MFMutableData _mapMutableData:] calls the function MFMutableData__mapMutableData___block_invoke when the mmap system call fails:

-[MFMutableData _mapMutableData:]
{
  //...
  result = mmap(0LL, v8, v9, 2, v6, 0LL);
  if (result == -1){
      //...
      result = (void *)MFMutableData__mapMutableData___block_invoke(&v21);
  }
  //...
}

The pseudo code of MFMutableData__mapMutableData___block_invoke is as follows; it allocates an 8-byte heap buffer and then replaces the data->bytes pointer with the allocated memory.

void *MFMutableData__mapMutableData___block_invoke(__int64 data)   // decompiled pseudocode; field names reconstructed
{
  void *result;

  data->vm = 0;
  data->length = 0;
  data->capacity = 8;                  // Reset MFMutableData capacity to 8
  result = calloc(data->capacity, 1);  // Allocate a new 8-byte heap buffer
  data->bytes = result;                // Replace the mapping pointer, overwriting the -1 left by the failed mmap
  return result;
}

After the execution of -[MFMutableData _mapMutableData:], the process continues execution of -[MFMutableData appendBytes:length:], causing a heap overflow when copying data to the allocated memory.

-[MFMutableData appendBytes:length:]
{
  int length = [self length];
  //...
  bytes = self->bytes;
  if(!bytes){
     bytes = [self _mapMutableData]; // May now point to the 8-byte heap buffer allocated in the mmap failure path
  }
  copy_dst = bytes + length;
  //...
  platform_memmove(copy_dst, append_bytes, append_length); // append_length bytes are copied, causing an OOB write into the small heap allocation
}

append_length is the length of a chunk of streamed data. Since MALLOC_NANO is a very predictable memory region, it is possible to exploit this vulnerability.

An attacker doesn't need to drain every last bit of memory to cause mmap to fail, since mmap requires a contiguous memory region.

Reproduction

According to the mmap man page, mmap fails when MAP_ANON is specified and insufficient memory is available.

The goal is to cause mmap to fail; a large enough email will eventually make that happen. However, we believe the vulnerabilities can also be triggered using other tricks that exhaust resources. Such tricks can be achieved through multi-part, RTF, and other formats – more on that later.
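As a minimal illustration (not the exploit itself), the snippet below shows the failure mode the attack relies on: when no sufficiently large contiguous region of free virtual address space is available, mmap returns MAP_FAILED, i.e. (void *)-1, which is exactly the -1 return value the vulnerable code checks before falling into the 8-byte allocation path. The 1 << 47 size is simply an arbitrary value chosen to exceed the address space on most 64-bit systems.

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Request an anonymous mapping far larger than any contiguous free
       region of virtual address space; on most systems this fails with
       ENOMEM and mmap returns MAP_FAILED. */
    size_t huge = (size_t)1 << 47;
    void *p = mmap(NULL, huge, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANON, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");         /* the condition an attacker wants to reach */
        return 1;
    }
    munmap(p, huge);
    return 0;
}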

Another important factor that can affect exploitability is the hardware specs:

  • iPhone 6 has 1GB
  • iPhone 7 has 2GB
  • iPhone X has 3GB

Older devices have less physical RAM and a smaller virtual memory space, so it is not necessary to drain every last bit of RAM to trigger this bug; mmap will fail when it cannot find a contiguous region of the requested size in the available virtual memory space.

We have determined that macOS is not vulnerable to either of these vulnerabilities.

In iOS 12, the vulnerability is easier to trigger because the data streaming is done within the same process as the default mail application (MobileMail), which handles many more resources that consume virtual memory space, especially UI rendering. In iOS 13, MobileMail hands data streaming to a background process named maild, which concentrates its resources on parsing emails, reducing the risk of accidentally running out of virtual memory space.

Remote Reproduction / POC

Since MobileMail/maild didn’t explicitly set the max limit for email size, it is possible to set up a custom email server and send an email that has several GB of plain text. iOS MIME/Message library chunks data into an average of roughly 0x100000 bytes while streaming data, so failing to download the entire email is totally fine.

Please note that this is just one example of how to trigger this vulnerability. Attackers do not need to send such an email in order to trigger it; other tricks with multi-part, RTF, or other formats may accomplish the same objective with a standard-size email.

Indicators of Compromise

All of the indicators below are strings in the raw email, part of the malicious emails sent:

  1. AAAAAAAA AND AAAAATEy AND EA\r\nAABI AND "$\x0e\xce\xa0\xd4\xc7\xcb\x08" AND T8hlGOo9 AND OKl2N\r\nC (updated)
  3. 3r0TRZfh AND AAAAAAAAAAAAAAAA AND \x0041\x0041\x0041\x0041 (unicode AAAA) (updated)
  4. \n/s1Caa6 AND J1Ls9RWH
  5. ://44449
  6. ://84371
  7. ://87756
  8. ://94654

Patch

Apple patched both vulnerabilities in the iOS 13.4.5 beta, as can be seen in the screenshot below:

To mitigate these issues, you can use the latest available beta. If using a beta version is not possible, consider disabling the Mail application and using Outlook, Edison Mail, or Gmail, which are not vulnerable.

Disclosure timeline

  • February 19th, 2020 – Suspected events reported to the vendor under ZecOps' responsible disclosure policy, which allows immediate release for in-the-wild triggers
  • Ongoing communication between the affected vendor & ZecOps
  • March 23rd – ZecOps sent the affected vendor a POC reproduction of the OOB Write vulnerability
  • March 25th – ZecOps shared a local POC for the OOB Write
  • March 31st – ZecOps confirmed a second vulnerability in the same area and the ability to trigger it remotely (Remote Heap Overflow) – both vulnerabilities were triggered in the wild
  • March 31st – ZecOps shared a POC with the affected vendor for the remote heap-overflow vulnerability
  • April 7th – ZecOps shared a custom mail server that triggers the 0-click heap overflow vulnerability on iOS 13.4 / 13.4.1 easily, by simply adding the username & password to Mail and downloading the emails
  • April 15/16th – Vendor patches both vulnerabilities in the publicly available beta
  • April 20th – We re-analyzed historical data and found additional evidence of triggers in the wild on VIPs and targeted personas. We sent an email notifying the vendor that we will have to release this threat advisory imminently in order to enable organizations to safeguard themselves, as attacker(s) will likely increase their activity significantly now that it's patched in the beta
  • April 22nd – Public disclosure

FAQ

Q: Was the first and/or the second vulnerability exploited in the wild?

A: The suspected emails triggered code paths of both vulnerabilities in the wild; however, we think the first vulnerability (OOB Write) was triggered accidentally, and the main goal was to trigger the second vulnerability (Remote Heap Overflow).

  1. We have seen multiple triggers on the same users across multiple continents.
  2. We examined the suspicious strings & root cause (such as the 414141…41 events, among others):
    1. We confirmed that this code path does not get triggered randomly.
    2. We confirmed that the register values did not originate from the targeted software or the operating system.
    3. We confirmed it was not a red team exercise / POC test.
    4. We confirmed that the controlled pointers containing 414141…41, as well as other controlled memory, were part of the data sent via email to the victim's device.
  3. We verified that the bugs were remotely exploitable & reproduced the trigger.
  4. We saw similarities between the patterns used against at least a couple of the victims, sent by the same attacker.
  5. Where possible, we confirmed that the allocation size was intentional.
  6. Lastly, according to the stack trace, we verified that the suspicious emails were received and processed by the device and should therefore have still been present on the device / mail server. Where possible, together with the victims, we verified that the emails had been deleted.

Based on the above, together with other information, we surmise that these vulnerabilities were actively exploited in the wild.

Q: Does an attacker have to trigger the first vulnerability first in order to trigger the second one?

A: No. An attacker would likely target the second vulnerability.

Q: Why are you disclosing these bugs before a full patch is available?

A: It’s important to understand the following:

  • These bugs alone cannot cause harm to iOS users – since the attackers would require an additional infoleak bug & a kernel bug afterwards for full control over the targeted device.
  • Both bugs were already disclosed during the publicly available beta update. The attackers are already aware that the golden opportunity with MobileMail/maild is almost over and they will likely use the time until a patch is available to attack as many devices as possible. 
  • With very limited data we were able to see that at least six organizations were impacted by this vulnerability – and the potential abuse of this vulnerability is enormous. We are confident that a patch must be provided for such issues with public triggers ASAP. 

It is our obligation to the public, our customers, partners, and iOS users globally to disclose these issues so that people who are interested can protect themselves by applying the beta patch, or by stopping their use of Mail and temporarily switching to alternatives that are not vulnerable to these bugs.

We hope that making this information public will help promote a faster patch.

Q: Can both vulnerabilities be triggered remotely?

A: The remote heap overflow has been proven to be triggerable remotely without any user interaction (aka '0-click') on iOS 13. The OOB Write can be triggered remotely with an additional vulnerability that allows calling an arbitrary selector, just like the one published by Google Project Zero here:

https://i.blackhat.com/USA-19/Wednesday/us-19-Silvanovich-Look-No-Hands-The-Remote-Interactionless-Attack-Surface-Of-The-iPhone.pdf – since we saw an in-the-wild trigger for the first vulnerability, it is possible, although we think it was done by mistake (see above).

Q: Do end users need to perform any action for the exploitation to succeed?

A: On iOS 13 – no. On iOS 12 – it requires the victim to click on an email.

If an attacker controls the mail server, the attack can be performed without any clicks on iOS 12 too.

Q: Since when has iOS been vulnerable to these bugs?

A: iOS has been vulnerable to these bugs at least since iOS 6 (September 2012). We haven't checked earlier versions.

Q: What can you do to mitigate the vulnerability?

A: The newly released 13.4.5 beta update contains a patch for these vulnerabilities. If you cannot update to this version, consider using other mail applications instead of Mail until a GA patch is available.

Q: Does an attacker need to send a very large email (e.g. 1–3 GB) to trigger the vulnerability?

A: No. Attackers may use tricks with multi-part, RTF, etc. in order to consume the memory in a similar way without sending a large email.

Q: Does the vulnerability require additional information to succeed? 

A: Yes, an attacker would need to leak an address from the memory in order to bypass ASLR. We did not focus on this vulnerability in our research.

Q: What does the vulnerability allow?

A: The vulnerability allows running remote code in the context of MobileMail (iOS 12) or maild (iOS 13). Successful exploitation of this vulnerability would allow the attacker to leak, modify, and delete emails. An additional kernel vulnerability would provide full device access – we suspect that these attackers had another vulnerability. It is currently under investigation.

Q: Would end users notice any abnormal behavior once either vulnerability is triggered / exploited?

A: Besides a temporary slowdown of the mobile mail application, users should not observe any other anomalous behavior.

When the exploit fails on iOS 12, users may notice a sudden crash of the Mail application.

On iOS 13, besides a temporary slowdown, it would not be noticeable. Failed attacks would also not be noticeable on iOS 13 if another attack is carried out afterwards and deletes the email. In failed attacks, the emails sent by the attacker would show the message "This message has no content.", as seen in the picture below:

Q: If the attackers fail, can they retry the attack using the same vulnerability right after?

A: On iOS 13, attackers may try multiple times to infect the device silently and without user interaction. On iOS 12, an additional attempt would require the user to click on a newly received email from the attackers. The victim does not need to open an attachment; just viewing the email is sufficient to trigger the attack.

Q: Can the attackers delete the received email after it was processed by the device and triggered the vulnerability?

A: Yes

Q: Is macOS vulnerable to these vulnerabilities too?

A: No.


Separating the Signal from the Noise: How Mandiant Intelligence Rates Vulnerabilities — Intelligence for Vulnerability Management, Part Three

One of the critical strategic and tactical roles that cyber threat intelligence (CTI) plays is in the tracking, analysis, and prioritization of software vulnerabilities that could potentially put an organization’s data, employees and customers at risk. In this four-part blog series, FireEye Mandiant Threat Intelligence highlights the value of CTI in enabling vulnerability management, and unveils new research into the latest threats, trends and recommendations.

Every information security practitioner knows that patching vulnerabilities is one of the first steps towards a healthy and well-maintained organization. But with thousands of vulnerabilities disclosed each year and media hype about the newest “branded” vulnerability on the news, it’s hard to know where to start.

The National Vulnerability Database (NVD) considers a range of factors that are fed into an automated process to arrive at a score for CVSSv3. Mandiant Threat Intelligence takes a different approach, drawing on the insight and experience of our analysts (Figure 1). This human input allows for qualitative factors to be taken into consideration, which gives additional focus to what matters to security operations.


Figure 1: How Mandiant Rates Vulnerabilities

Assisting Patch Prioritization

We believe our approach results in a score that is more useful for determining patching priorities, as it allows for the adjustment of ratings based on factors that are difficult to quantify using automated means. It also significantly reduces the number of vulnerabilities rated ‘high’ and ‘critical’ compared to CVSSv3 (Figure 2). We consider critical vulnerabilities to pose significant security risks and strongly suggest that remediation steps are taken to address them as soon as possible. We also believe that limiting ‘critical’ and ‘high’ designations helps security teams to effectively focus attention on the most dangerous vulnerabilities. For instance, from 2016-2019 Mandiant only rated two vulnerabilities as critical, while NVD assigned 3,651 vulnerabilities a ‘critical’ rating (Figure 3).


Figure 2: Criticality of US National Vulnerability Database (NVD) CVSSv3 ratings 2016-2019 compared to Mandiant vulnerability ratings for the same vulnerabilities


Figure 3: Numbers of ratings at various criticality tiers from NVD CVSSv3 scores compared to Mandiant ratings for the same vulnerabilities

Mandiant Vulnerability Ratings Defined

Our rating system includes both an exploitation rating and a risk rating:

The Exploitation Rating is an indication of what is occurring in the wild.


Figure 4: Mandiant Exploitation Rating definitions

The Risk Rating is our expert assessment of what impact an attacker could have on a targeted organization, if they were to exploit a vulnerability.


Figure 5: Mandiant Risk Rating definitions

We intentionally use the critical rating sparingly, typically in cases where exploitation has serious impact, exploitation is trivial with often no real mitigating factors, and the attack surface is large and remotely accessible. When Mandiant uses the critical rating, it is an indication that remediation should be a top priority for an organization due to the potential impacts and ease of exploitation.

For example, Mandiant Threat Intelligence rated CVE-2019-19781 as critical due to the confluence of widespread exploitation—including by APT41—the public release of proof-of-concept (PoC) code that facilitated automated exploitation, the potentially acute outcomes of exploitation, and the ubiquity of the software in enterprise environments.

CVE-2019-19781 is a path traversal vulnerability of the Citrix Application Delivery Controller (ADC) 13.0 that when exploited, allows an attacker to remotely execute arbitrary code. Due to the nature of these systems, successful exploitation could lead to further compromises of a victim's network through lateral movement or the discovery of Active Directory (AD) and/or LDAP credentials. Though these credentials are often stored in hashes, they have been proven to be vulnerable to password cracking. Depending on the environment, the potential second order effects of exploitation of this vulnerability could be severe.

We described widespread exploitation of CVE-2019-19781 in our blog post earlier this year, including a timeline from disclosure on Dec. 17, 2019, to the patch releases, which began a little over a month later on Jan. 20, 2020. Significantly, within hours of the release of PoC code on Jan. 10, 2020, we detected reconnaissance for this vulnerability in FireEye telemetry data. Within days, we observed weaponized exploits used to gain footholds in victim environments. On the same day the first patches were released, Jan. 20, 2020, we observed APT41, one of the most prolific Chinese groups we track, kick off an expansive campaign exploiting CVE-2019-19781 and other vulnerabilities against numerous targets.

Factors Considered in Ratings

Our vulnerability analysts consider a wide variety of impact-intensifying and mitigating factors when rating a vulnerability. Factors such as actor interest, availability of exploit or PoC code, or exploitation in the wild can inform our analysis, but are not primary elements in rating.

Impact considerations help determine what impact exploitation of the vulnerability can have on a targeted system.

  • Exploitation Consequence – The result of successful exploitation, such as privilege escalation or remote code execution
  • Confidentiality Impact – The extent to which exploitation can compromise the confidentiality of data on the impacted system
  • Integrity Impact – The extent to which exploitation allows attackers to alter information in impacted systems
  • Availability Impact – The extent to which exploitation disrupts or restricts access to data or systems

Mitigating factors affect an attacker’s likelihood of successful exploitation.

  • Exploitation Vector – What methods can be used to exploit the vulnerability?
  • Attacking Ease – How difficult is the exploit to use in practice?
  • Exploit Reliability – How consistently can the exploit execute and perform the intended malicious activity?
  • Access Vector – What type of access (i.e. local, adjacent network, or network) is required to successfully exploit the vulnerability?
  • Access Complexity – How difficult is it to gain the access needed for the vulnerability?
  • Authentication Requirements – Does exploitation require authentication and, if so, what type of authentication?
  • Vulnerable Product Ubiquity – How commonly is the vulnerable product used in enterprise environments?
  • Product's Targeting Value – How attractive is the vulnerable software product or device for threat actors to target?
  • Vulnerable Configurations – Does exploitation require specific configurations, either default or non-standard?
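As a purely hypothetical illustration (not Mandiant's actual implementation), the factors above could be recorded alongside an automated score in a simple structure like the C sketch below, which makes explicit where human analysts adjust the machine-generated input:

#include <stdbool.h>

/* Hypothetical record combining an automated base score with the
   qualitative impact and mitigating factors listed above. */
enum risk_rating { RATING_LOW, RATING_MEDIUM, RATING_HIGH, RATING_CRITICAL };

struct vuln_assessment {
    char             cve_id[20];
    double           cvss_base;       /* automated input, e.g. from NVD */
    enum risk_rating analyst_rating;  /* human-adjusted output */

    /* impact considerations */
    bool remote_code_execution;
    bool confidentiality_impact;
    bool integrity_impact;
    bool availability_impact;

    /* mitigating factors */
    bool requires_authentication;
    bool requires_local_access;
    bool non_default_configuration;

    /* context, not a primary rating element */
    bool exploitation_observed_in_wild;
    bool poc_publicly_available;
};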

Mandiant Vulnerability Rating System Applied

The following are examples of cases in which Mandiant Threat Intelligence rated vulnerabilities differently than NVD by considering additional factors and incorporating information that either was not reported to NVD or is not easily quantified in an algorithm.

CVE-2019-12650
  • Description: A command injection vulnerability in the Web UI component of Cisco IOS XE versions 16.11.1 and earlier that, when exploited, allows a privileged attacker to remotely execute arbitrary commands with root privileges
  • NVD Rating: High
  • Mandiant Rating: Low
  • Explanation: This vulnerability was rated high by NVD, but Mandiant Threat Intelligence rated it as low risk because it requires the highest level of privileges – level 15 admin privileges – to exploit. Because this level of access should be quite limited in enterprise environments, we believe that it is unlikely attackers would be able to leverage this vulnerability as easily as others. There is no known exploitation of this vulnerability.

CVE-2019-5786
  • Description: A use after free vulnerability within the FileReader component in Google Chrome 72.0.3626.119 and prior that, when exploited, allows an attacker to remotely execute arbitrary code.
  • NVD Rating: Medium
  • Mandiant Rating: High
  • Explanation: NVD rated CVE-2019-5786 as medium, while Mandiant Threat Intelligence rated it as high risk. The difference in ratings is likely due to NVD describing the consequences of exploitation as denial of service, while we know of exploitation in the wild which results in remote code execution in the context of the renderer, a more serious outcome.
As demonstrated, factors such as the assessed ease of exploitation and the observation of exploitation in the wild may result in a different priority rating than the one issued by NVD. In the case of CVE-2019-12650, we ultimately rated this vulnerability lower than NVD due to the privileges required to exploit it as well as the lack of observed exploitation. On the other hand, we rated CVE-2019-5786 as high risk due to the assessed severity, the ubiquity of the software, and confirmed exploitation.

In early 2019, Google reported that two zero-day vulnerabilities were being used together in the wild: CVE-2019-5786 (a Chrome zero-day vulnerability) and CVE-2019-0808 (a Microsoft privilege escalation vulnerability). Google quickly released a patch for the Chrome vulnerability and pushed it to users through Chrome's auto-update feature on March 1. CVE-2019-5786 is significant because it can impact all major operating systems (Windows, macOS, and Linux) and requires only minimal user interaction, such as navigating or following a link to a website hosting exploit code, to achieve remote code execution. The severity is further compounded by a public blog post and proof of concept exploit code that were released a few weeks later and subsequently incorporated into a Metasploit module.

The Future of Vulnerability Analysis Requires Algorithms and Human Intelligence

We expect the volume of vulnerabilities to continue to increase in the coming years, emphasizing the need for a rating system that accurately identifies the most significant vulnerabilities and provides enough nuance to allow security teams to tackle patching in a focused manner. As the quantity of vulnerabilities grows, incorporating assessments of malicious actor use – that is, observed exploitation as well as the feasibility and relative ease of using a particular vulnerability – will become an even more important factor in making meaningful prioritization decisions.

Mandiant Threat Intelligence believes that the future of vulnerability analysis will involve a combination of machine (structured or algorithmic) and human analysis to assess the potential impact of a vulnerability and the true threat that it poses to organizations. Use of structured algorithmic techniques, which are common in many models, allows for consistent and transparent rating levels, while the addition of human analysis allows experts to integrate factors that are difficult to quantify, and adjust ratings based on real-world experience regarding the actual risk posed by various types of vulnerabilities.

Human curation and enhancement layered on top of automated rating will provide the best of both worlds: speed and accuracy. We strongly believe that paring down alerts and patch information to a manageable number, as well as clearly communicating risk levels with Mandiant vulnerability ratings, makes our system a powerful tool that equips network defenders to quickly and confidently take action against the highest priority issues first.

Register today to hear FireEye Mandiant Threat Intelligence experts discuss the latest in vulnerability threats, trends and recommendations in our upcoming April 30 webinar.

Applicants Webinar: NICE Notice of Funding Opportunity (NOFO) – K12 Cybersecurity Outreach Program

The PowerPoint slides used during this webinar can be downloaded here. Speakers: Rodney Petersen, Director, National Initiative for Cybersecurity Education (NICE); Davina Pruitt-Mentle, Lead for Academic Engagement, National Initiative for Cybersecurity Education (NICE); Gilbert Castillo, Grants Officer, National Institute of Standards and Technology (NIST). Synopsis: NIST recently announced funding on behalf of the National Initiative for Cybersecurity Education (NICE) for the NICE K12 Cybersecurity Education Outreach Program. NIST is soliciting applications from U.S.-located, non-Federal entities

ZOOM SAFER WHEN WORKING FROM HOME



Due to the scale of the pandemic, ever more businesses and government institutions are enforcing 'work from home' policies in order to keep their employees safe and healthy, and to keep the business going. Social interactions today come down to video calls, social media posts and instant messaging platforms, and very many of us make use of Zoom.

In this context, a cyberattack that deprives organisations and families of access to the internet, their devices or data could be devastating. Zoom has had a number of security scares recently, as huge numbers of new users flock to it and as cybercriminals try to take advantage of that. Fortunately, a lot of the problems and risks people are having can be reduced enormously just by getting the basics right.

So here are “things to get right first” – they shouldn’t take you long, and they are easy to do to keep your Zoom safer.

1. Pick the right password.
When setting up a Zoom account, it is highly recommended to use a good, strong password. Do not share it, change it as often as you can, and make sure you do not reuse it.


2. Patch early, patch often
Zoom’s own CEO just wrote a blog post announcing a “feature freeze” in the product so that the company can focus on security issues instead. It’s much easier to do that if you aren’t adding new code at the same time.

Why not get into the habit of checking you’re up-to-date every day, before your first meeting? Even if Zoom itself told you about an update the very last time you used it, get in the habit of checking by hand anyway, just to be sure. It doesn’t take long.


3. Use the Waiting Room option
Set up meetings so that the participants can’t join in until you open it up.
And if you suddenly find yourself “on hold until the organiser starts the meeting” when in the past you would have spent the time chatting to your colleagues and getting the smalltalk over with, don’t complain – those pre-meeting meetings are great for socialising but they do make it harder to control the meeting.

4. Take control over screen sharing
Until recently, most Zoom meetings took a liberal approach to screen sharing. Unfortunately this can allow others to share inappropriate things and cause trouble.
Actually, it’s not just screen sharing that can cause trouble. There are numerous controls you can apply to participants in meetings, including blocking file sharing and private chat, kicking out disruptive users, and stopping troublemakers coming back.

5. Use random meeting IDs and set meeting passwords
We know lots of Zoom users who memorised their own meeting ID long ago and had fallen into the habit of using it for every meeting they held – even back-to-back meetings with different groups – because they knew they’d never need to look it up.

But that convenience is handy for crooks, too, because they already have a list of known IDs that they can try automatically in the hope of wandering in where they aren’t supposed to be.

It is recommended to use a randomly generated meeting ID and to set a password on any meeting that is not explicitly open to all. You can send the web link by one means, e.g. in an email or invitation request, and the password by another means, e.g. in an instant message just before the meeting starts. (You can also lock meetings once they start, to avoid unwanted visitors arriving after you've started concentrating on the meeting itself.)

6. Make some rules of etiquette and stick to them.
Etiquette may sound like a strange bedfellow for cybersecurity, and perhaps it is.
But respect for privacy, a sense of trust, and a feeling of social and business comfort are also important parts of a working life that’s now dominated by online meetings.
If you’re expected or you need to use video, pay attention to your appearance and the lighting. (In very blunt terms: try to avoid being a pain to watch.) Remember to use the mute button when you can.

And most importantly – especially if there are company outsiders in the meeting – be very clear up front if you will be recording the meeting, even if you are in a jurisdiction that does not require you to declare it. And make it clear if there are any restrictions, albeit informal ones, on what the participants are allowed to do with the information they learn in the meeting.

Etiquette isn’t about keeping the bad guys out. But respectful rules of engagement for remote meetings help to make it easy for everyone in the meeting to keep the good stuff in.

NIST and OSTP Launch Effort to Improve Search Engines for COVID-19 Research

GAITHERSBURG, Md. — Today, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) and the White House Office of Science and Technology Policy (OSTP) launched a joint effort to support the development of search engines for research that will help in the fight against COVID-19. The project was developed in response to the March 16 White House Call to Action to the Tech Community on New Machine Readable COVID-19 Dataset. “Our nation’s scientific enterprise is mobilized to defeat the invisible enemy that is COVID-19,” said Secretary of Commerce Wilbur Ross. “Our

How to Keep Your Video Conferencing Meetings Secure

Guest Post by Tom Kellermann (Head Cybersecurity Strategist, VMware Carbon Black)

The sudden and dramatic shift to a mobile workforce has thrust video conferencing into the global spotlight and evolved video conferencing vendors from enterprise communication tools to critical infrastructure.

During any major (and rapid) technology adoption, cyberattackers habitually follow the masses in hopes of launching an attack that could lead to a pay day or give them a competitive advantage. This has not been lost on global organisations’ security and IT teams, who are quickly working to make sure their employees’ privacy and data remains secure.

Here are some high-level tips to help keep video conferencing secure.

Update the Application
Video conferencing providers are regularly deploying software updates to ensure that security holes are mitigated.  Take advantage of their diligence and update the app prior to using it every time.

Lock meetings down and set a strong password
Make sure that only invited attendees can join a meeting. Using full sentences with special characters included, rather than just words or numbers, can be helpful. Make sure you are not sharing the password widely, especially in public places and never on social media. Waiting room features are critical for privacy as the meeting host can serve as a final triage to make sure only invited participants are attending. Within the meeting, the host can restrict sharing privileges, leading to smoother meetings and ensuring that uninvited guests are not nefariously sharing materials. 

Discussing sensitive information
If sensitive material must be discussed, ensure that the meeting name does not suggest it is a top-secret meeting, which would make it a more attractive target for potential eavesdroppers.  Using code words to depict business topics is recommended during the cyber crime wave we are experiencing.

Restrict the sharing of sensitive files to approved file-share technologies, not as part of the meeting itself
Using an employee sharing site that only employees have access to (and has multi-factor authentication in place) is a great way to make sure sensitive files touch the right eyes only.  This should be mandated as this is a huge Achilles heel.

Use a VPN to protect network traffic while using the platform 
With so many employees working remotely, using a virtual private network (VPN) can help better secure internet connections and keep private information private via encryption. Public WiFi can be a gamble as it only takes one malicious actor to cause damage.  Do not use public WiFi, especially in airports or train stations.  Cyber criminals lurk in those locations.

If you can, utilise two networks on your home WiFi router, one for business and the other for personal use.
Make sure that your work computer is only connected to a unique network in your home. All other personal devices – including your family’s – should not be using the same network. The networks and routers in your home should be updated regularly and, again, should use a complex password. Additionally, you should be the only system administrator on your network and all devices that connect to it.

All of us have a role to play in mitigating the cyber crime wave. Please remember these best practices the next time you connect. Stay safe online.

Also related - How Safe are Video Messaging Apps such as Zoom?

Think Fast: Time Between Disclosure, Patch Release and Vulnerability Exploitation — Intelligence for Vulnerability Management, Part Two

One of the critical strategic and tactical roles that cyber threat intelligence (CTI) plays is in the tracking, analysis, and prioritization of software vulnerabilities that could potentially put an organization’s data, employees and customers at risk. In this four-part blog series, FireEye Mandiant Threat Intelligence highlights the value of CTI in enabling vulnerability management, and unveils new research into the latest threats, trends and recommendations. Check out our first post on zero-day vulnerabilities.

Attackers are in a constant race to exploit newly discovered vulnerabilities before defenders have a chance to respond. FireEye Mandiant Threat Intelligence research into vulnerabilities exploited in 2018 and 2019 suggests that the majority of exploitation in the wild occurs before patch issuance or within a few days of a patch becoming available.


Figure 1: Percentage of vulnerabilities exploited at various times in relation to patch release

FireEye Mandiant Threat Intelligence analyzed 60 vulnerabilities that were either exploited or assigned a CVE number between Q1 2018 and Q3 2019. The majority of vulnerabilities were exploited as zero-days – before a patch was available. More than a quarter were exploited within one month after the patch date. Figure 2 illustrates the number of days between when a patch was made available and the first observed exploitation date for each vulnerability.

We believe these numbers to be conservative estimates, as we relied on the first reported exploitation of a vulnerability linked to a specific date. Frequently, first exploitation dates are not publicly disclosed. It is also likely that in some cases exploitation occurred, but was not discovered, before researchers recorded exploitation attached to a certain date.
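For readers who want to reproduce this kind of measurement against their own data, the calculation behind each data point is a simple date difference. The example below is an illustrative sketch only, using made-up dates rather than figures from this report (timegm is a common BSD/glibc extension):

#include <stdio.h>
#include <time.h>

/* Days from date a to date b, both interpreted as UTC midnights. */
static long days_between(struct tm a, struct tm b)
{
    time_t ta = timegm(&a);
    time_t tb = timegm(&b);
    return (long)(difftime(tb, ta) / 86400.0);
}

int main(void)
{
    /* Hypothetical example: patch released April 17, first exploitation seen April 19. */
    struct tm patch = { .tm_year = 2018 - 1900, .tm_mon = 3, .tm_mday = 17 };
    struct tm seen  = { .tm_year = 2018 - 1900, .tm_mon = 3, .tm_mday = 19 };
    printf("exploited %ld day(s) after patch\n", days_between(patch, seen));
    return 0;
}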


Figure 2: Time between vulnerability exploitation and patch issuance

Time Between Disclosure and Patch Release

The average time between disclosure and patch availability was approximately 9 days. This average is slightly inflated by vulnerabilities such as CVE-2019-0863, a Microsoft Windows server vulnerability, which was disclosed in December 2018 and not patched until 5 months later in May 2019. The majority of these vulnerabilities, however, were patched quickly after disclosure. In 59% of cases, a patch was released on the same day the vulnerability was disclosed. These metrics, in combination with the observed swiftness of adversary exploitation activity, highlight the importance of responsible disclosure, as it may provide defenders with the slim window needed to successfully patch vulnerable systems.

Exploitation After Patch Release

While the majority of the observed vulnerabilities were zero-days, 42 percent of vulnerabilities were exploited after a patch had been released. For these non-zero-day vulnerabilities, there was a very small window (often only hours or a few days) between when the patch was released and the first observed instance of attacker exploitation. Table 1 provides some insight into the race between attackers attempting to exploit vulnerable software and organizations attempting to deploy the patch.

Time to Exploit for Vulnerabilities First Exploited after a Patch:

  • Hours – Two vulnerabilities were successfully exploited within hours of a patch release: CVE-2018-2628 and CVE-2018-7602.
  • Days – 12 percent of vulnerabilities were exploited within the first week following the patch release.
  • One Month – 15 percent of vulnerabilities were exploited after one week but within one month of patch release.
  • Years – In multiple cases, such as the first observed exploitation of CVE-2010-1871 and CVE-2012-0874 in 2019, attackers exploited vulnerabilities for which a patch had been made available many years prior.

Table 1: Exploitation timing for patched vulnerabilities ranges from within hours of patch issuance to years after initial disclosure

Case Studies

We continue to observe espionage and financially motivated groups quickly leveraging publicly disclosed vulnerabilities in their operations. The following examples demonstrate the speed with which sophisticated groups are able to incorporate vulnerabilities into their toolsets following public disclosure and the fact that multiple disparate groups have repeatedly leveraged the same vulnerabilities in independent campaigns. Successful operations by these types of groups are likely to have a high potential impact.


Figure 3: Timeline of activity for CVE-2018-15982

CVE-2018-15982: A use after free vulnerability in a file package in Adobe Flash Player 31.0.0.153 and earlier that, when exploited, allows an attacker to remotely execute arbitrary code. This vulnerability was exploited by espionage groups—Russia's APT28 and North Korea's APT37—as well as TEMP.MetaStrike and other financially motivated attackers.


Figure 4: Timeline of activity for CVE-2018-20250

CVE-2018-20250: A path traversal vulnerability exists within the ACE format in the archiver tool WinRAR versions 5.61 and earlier that, when exploited, allows an attacker to locally execute arbitrary code. This vulnerability was exploited by multiple espionage groups, including Chinese, North Korean, and Russian groups, as well as Iranian groups APT33 and TEMP.Zagros.


Figure 5: Timeline of Activity for CVE-2018-4878

CVE-2018-4878: A use after free vulnerability exists within the DRMManager's "initialize" call in Adobe Flash Player 28.0.0.137 and earlier that, when exploited, allows an attacker to remotely execute arbitrary code. Mandiant Intelligence confirmed that North Korea's APT37 exploited this vulnerability as a zero-day as early as September 3, 2017. Within 8 days of disclosure, we observed Russia's APT28 also leverage this vulnerability, with financially motivated attackers and North Korea's TEMP.Hermit also using it within approximately a month of disclosure.

Availability of PoC or Exploit Code

The availability of PoC or exploit code on its own does not always increase the probability or speed of exploitation. However, we believe that PoC code likely hastens exploitation attempts for vulnerabilities that do not require user interaction. For vulnerabilities that have already been exploited, the subsequent introduction of publicly available exploit or PoC code indicates malicious actor interest and makes exploitation accessible to a wider range of attackers. There were a number of cases in which certain vulnerabilities were exploited on a large scale within 48 hours of PoC or exploit code availability (Table 2).

Time Between PoC or Exploit Code Publication and First Observed Potential Exploitation Events – Product – CVE – FireEye Risk Rating

  • 1 day – WinRAR – CVE-2018-20250 – Medium
  • 1 day – Drupal – CVE-2018-7600 – High
  • 1 day – Cisco Adaptive Security Appliance – CVE-2018-0296 – Medium
  • 2 days – Apache Struts – CVE-2018-11776 – High
  • 2 days – Cisco Adaptive Security Appliance – CVE-2018-0101 – High
  • 2 days – Oracle WebLogic Server – CVE-2018-2893 – High
  • 2 days – Microsoft Windows Server – CVE-2018-8440 – Medium
  • 2 days – Drupal – CVE-2019-6340 – Medium
  • 2 days – Atlassian Confluence – CVE-2019-3396 – High

Table 2: Vulnerabilities exploited within two days of either PoC or exploit code being made publicly available, Q1 2018–Q3 2019

Trends by Targeted Products

FireEye judges that malicious actors are likely to most frequently leverage vulnerabilities based on a variety of factors that influence the utility of different vulnerabilities to their specific operations. For instance, we believe that attackers are most likely to target the most widely used products (see Figure 6). Attackers almost certainly also consider the cost and availability of an exploit for a specific vulnerability, the perceived success rate based on the delivery method, security measures introduced by vendors, and user awareness around certain products.

The majority of observed vulnerabilities were for Microsoft products, likely due to the ubiquity of Microsoft offerings. In particular, vulnerabilities in software such as Microsoft Office Suite may be appealing to malicious actors based on the utility of email attached documents as initial infection vectors in phishing campaigns.


Figure 6: Exploited vulnerabilities by vendor, Q1 2018–Q3 2019

Outlook and Implications

The speed with which attackers exploit patched vulnerabilities emphasizes the importance of patching as quickly as possible. With the sheer quantity of vulnerabilities disclosed each year, however, it can be difficult for organizations with limited resources and business constraints to implement an effective strategy for prioritizing the most dangerous vulnerabilities. In upcoming blog posts, FireEye Mandiant Threat Intelligence describes our approach to vulnerability risk rating as well as strategies for making informed and realistic patch management decisions in more detail.

We recommend using this exploitation trend information to better prioritize patching schedules, in combination with other factors such as known active threats to an organization's industry and geopolitical context, the availability of exploit and PoC code, commonly impacted vendors, and how widely the software is deployed in an organization's environment. Taken together, these factors may help to mitigate the risk of a large portion of malicious activity.

Register today to hear FireEye Mandiant Threat Intelligence experts discuss the latest in vulnerability threats, trends and recommendations in our upcoming April 30 webinar.

Cybersecurity Trends

Trends are interesting since they could tell you where things are going.

I believe in studying history and behaviors in order to figure out where things are going, so every year my colleagues from Yoroi and I spend several weeks studying and writing up what we observed during the past months in the Yoroi Cybersecurity Annual Report (freely downloadable from here: Yoroi Cybersecurity Report 2019).

The Rise of Targeted Ransomware

2019 was a breakthrough year in the cyber security of the European productive sector. What makes this year peculiar is not strictly the number of hacking attempts or the amount of malware spread across the Internet to compromise company assets and data, but the evolution and consolidation of a new, highly dangerous kind of cyber attack. In 2019, we noticed a deep change in a significant part of the global threat landscape, typically populated by state-sponsored actors, cyber-criminals and hacktivists, each with distinct motivations, objectives, methods and levels of sophistication.

During 2019 we observed a rapid evolution of cyber crime ecosystems hosting a wide range of financially motivated actors, and an increased volume of money-driven attacks compared to previous years. These actors are also involved in cyber-espionage, CEO fraud, credential-stealing operations, and PII (Personally Identifiable Information) and IP (Intellectual Property) theft, but they are traditionally far more active in so-called "opportunistic" cyber attacks: attacks directed opportunistically at the whole internet population, such as botnet and crypto-miner infection waves, as well as regional operations designed to target European countries like Italy or Germany as branches of major global-scale operations, as we have tracked since 2018 with the sLoad case and even earlier with the Ursnif malware propagation waves.
In 2019, as in 2018, ransomware attacks played a significant role in the cyber arena. In previous years the whole InfoSec community observed the fast growth of the ransomware phenomenon, both in terms of newborn ransomware families and in ransom payment options, driven by the consolidation of the digital cryptocurrency market, which made the traditional tracking techniques operated by law enforcement agencies less effective due to new untraceable cryptocurrencies. But these increasing volumes weren't the most worrying aspect we noticed.

Before 2019, most ransomware attacks were conducted in an automated, mostly opportunistic fashion: for instance through drive-by download attacks and exploit kits, but also very frequently using the email vector. In fact, the "canonical" ransomware attack before 2019 was characterized by an incoming email luring the victim to open an attachment, most of the time an Office document, carefully obfuscated to avoid detection and weaponized to launch ransomware able to autonomously encrypt local user files and shared documents.

During 2019, we monitored a deep change in this trend. Ransomware attacks became more and more sophisticated. Gradually, even major cyber-criminal botnet operators moved into this emerging sector, leveraging their infection capabilities, their long-term hacking experience and their bots to monetize their actions using new malicious business models. Indeed, almost every major malware family populating the cyber criminal landscape was involved in the delivery of follow-up ransomware within infected hosts. A typical example is the Gandcrab ransomware installation operated by Ursnif implants during most of 2019. But some criminal groups have gone further. They set the threat level to a new baseline.

Many major cyber criminal groups developed a sort of malicious "RedTeam" unit; let's call them "DarkTeams". These units are able to manually engage high-value targets such as private companies or any kind of structured organization, gaining access to their core and owning the whole infrastructure at once, typically installing ransomware tools all across the network just after ensuring the deletion of the backup copies. Many times they also use industry-specific knowledge to tamper with management networks and hypervisors to reach an impressive level of potential damage.
Actually, this kind of behaviour is not new to us. Such methods of operation have been used for a long time, but not by such a large number of actors and not with these kinds of objectives. Network penetration was in fact a peculiarity of state-sponsored groups and specialized cyber criminal gangs, often threatening the banking and retail sectors, typically referred to as Advanced Persistent Threats and traditionally targeting very large enterprises and organizations.
During 2019, we observed a strong game change in the ransomware attack panorama.

The special "DarkTeams" replicated advanced intrusion techniques from APT playbooks, carrying them into private business sectors that were not traditionally prepared to deal with such threats. Then they started to hit organizations with high-impact business attacks modeled to be very effective in the victim's context. We are facing the evolution of ransomware into Targeted Ransomware Attacks.

We observed and tracked many gangs consolidating the new Targeted Ransomware Attack model. Many of them have also been cited by mainstream media and press due to the heavy impact on the business operations of prestigious companies, such as the LockerGoga and Ryuk ransomware attacks, but they were only the tip of the iceberg. Many other criminal groups, such as DoppelPaymer, Nemty, REvil/Sodinokibi and Maze, have consolidated this kind of operation; these are some of the top targeted ransomware players populating the threat landscape in the last half of 2019.
In the past few months we also observed the emergence of a really worrisome practice by some of these players: the public shaming of their victims. Maze was one of the first actors pioneering this practice in 2019: the group started to disclose the names of the private companies it hacked, along with pieces of internal data stolen during the network intrusions.

The problem arises when the stolen data includes Intellectual Property and Personally Identifiable Information. In such a case the attacker leaves the victim organization in an additional, unenviable position during the cyber crisis: handling the data breach and the fines imposed by the Data Protection Authorities. During 2020 we expect these kinds of practices to become more and more common in criminal ecosystems. Thus, adopting a proactive approach to the cyber security strategy, leveraging services like Yoroi's Cyber Security Defence Center, could be crucial to equip the company with the proper technology to gain visibility into targeted ransomware attacks, and the knowledge, skills and processes to spot and handle this new class of threats.

Zero-Day Malware

Well-known threats are always easier to recognize and manage since their components and intents are usually clear. For example, ransomware as we know it today performs some standard operations such as (but not limited to) reading a file, encrypting it and writing it back. Early discovery of known threat families helps analysts perform quick and precise analyses, while unknown threats are always harder to manage since analysts first need to discover the intentions and then map the behaviour back to standard operations. This is why we track zero-day malware. Yoroi's technology captures and collects samples before processing them on Yoroi's shared threat intelligence platform, trying to attribute them to known threats.

As part of the automatic analysis pipeline, Yoroi's technology reports whether the malicious files would be detected by anti-virus technologies at detection time. This check is mainly done to figure out whether the incoming threat would be able to bypass perimeter and endpoint defences. As a positive side effect, we collect data on detected threats related to their notoriety; in other words, we are able to see whether a malware sample belonging to a threat actor, or related to a specific operation (or incident), is detected by AV, firewalls, next-generation products and the endpoints in use.

In this context, we shall define what we mean by zero-day malware. We call zero-day malware every sample that turns out to be an unknown variant of an arbitrary malware family. The following image (Fig. 1) shows that most of the analyzed malware is unknown to the InfoSec community and to common antivirus vendors. This finding reflects the ever-evolving malware panorama, in which attackers start from a shared code base but modify it depending on their need to stay stealthy.


The reported data are collected during the first propagation of the malicious files across organizations. This means companies are highly exposed to the risk of zero-day malware. Detection and response time plays a central role in such cases, where the attack can remain stealthy for hours or even days.
Along with the zero-day malware observations, most of the known malware has relatively low chances of being blocked by security controls at the time of delivery. 8% of the malware is detected by only a few AV engines, and only 33% is actually well identified at the time of attack. Even so-called "known malware" is still a relevant issue due to its ability to maintain a low detection rate during the first infection steps. Indeed, less than 20% of the analyzed samples that are not zero-day are detected by more than 15 AV engines.

Drilling down into the behavioural classification of the intercepted samples known by fewer than 5 antivirus engines at detection time, we can see that the "Dropper" behaviour (i.e. downloading or unpacking other malicious stages or components) leads the way with 54% of cases, slightly down from 2018. Another interesting trend in the analyzed data is the surprising decrease of ransomware behaviour, dropping from 17% in 2018 to the current 2%, and the sharp rise of "Trojan" behaviours, up to 35% of cases, more than double the 15% of 2018.
This trend supports the evidence that ransomware attacks in 2019 began to follow a targeted approach, as described in "The Rise of Targeted Ransomware" section.


A reasonable interpretation of these changes in the data is that they are consistent with the increased sophistication of the malware infection chains discussed in the previous section. As a matter of fact, much of the delivered malware is actually a single part of a more complex infection chain: a chain able to install multiple families of malware threats, starting from simple pieces of code behaving like droppers and trojan horses to grant access to a wider range of threats.

This trend is further validated in the zero-day malware data set: the samples likely unknown to the InfoSec community at the time of delivery substantially shifted their distribution from previous years. In particular, ransomware behaviour detections dropped from 29% to 7% in 2019, and Trojan detections rose from 28% to 52% of cases, showing similar macro variations.


If you want to read more details on “DarkTeams” and on what we observed during the past months, please feel free to download the full report HERE.

Working From Home: Building Your Own Setup

This is the fifth week my company (Yoroi) and I have been working from home (COVID-19). While every company process is running smoothly and quickly, personal quarantine is getting quite long and heavy, especially if you are accustomed to travelling a lot for work. Under these circumstances the home office setup becomes very important, as you should be comfortable delivering as much as you did while sitting in your perfectly fitting office. Moreover, during the past few weeks I received many emails and private messages from people like me asking for personal suggestions on home setup. So I decided to write up a little blog post with my personal suggestions on home setups for remote workers.

First: What you do.

My personal home desk changed a lot over the years. On one hand new technology became available, but on the other hand (and most importantly) my role and interests changed a lot over time. I started with a super-nerd home setup while I was in college, including soldering irons, a desoldering air heater, Arduino boards all over the shelves, Raspberry Pis in many cover flavors, three monitors (one of them vertically oriented, for reading documentation), black screens and mechanical keyboards. This environment fit my needs at that specific time, but it would not fit my current needs. The first thing you should do in refactoring your own home desk is to understand what you do. Not what you would like to do, but what you actually do. Before you start surfing gadget websites, just focus on what you do on a daily basis. A developer and a malware analyst share few needs, and their environments won't be close to each other. If you are a CxO, your environment will definitely look different from your IT manager's!

Second: Less is more.

I know many of you won't agree with this paragraph, but from my personal point of view, "less is more" (cit. Mies). The more objects populate your desk, the higher the probability of getting distracted by them. I used to have books on my desk, and every time I looked at them my mind went back to that story or to what the book gave me in terms of knowledge and... this really distracted me. Six things are my minimal and best setup: a laptop, a mouse, a mechanical keyboard, headphones, a big monitor and my phone.

Home Setup

Monitor

Talking about monitors, I would suggest a single big one. I used to have multiple monitors on my desk, and it is amazing how many parallel tasks you can keep on them, but many parallel tasks do not necessarily mean higher productivity. In my experience it is best to focus on three or four parallel tasks, not more. So a big screen managed by a great window manager (see the software section) helps you avoid exaggerating with multiple tasks. However, if you are a developer, an additional vertical screen definitely helps when consulting StackOverflow, GitHub and documentation. In most other cases I personally would not suggest more than two displays. My favorite size is 27″, and I prefer a borderless monitor with an adjustable neck so I can move it depending on chair position. One of my favorites is the SAMSUNG SR75 4K UHD Space Monitor: it is Ultra HD, great looking and very minimal in footprint, so you have much more space for your arms.

Keyboard

A mechanical keyboard is one of the little pleasures of life. If you are a writer it is definitely a “must have”, while if you are a developer or a malware analyst it is mostly a matter of taste. Conversely, if you are a penetration tester or an adversarial simulator you will probably appreciate foldable keyboards, and if you are an IT guy you will probably love small, light keyboards that are easy to carry between racks in “work in progress” data centers. As with monitors, the keyboard world is humongous and there is no “best in class”: there is only what you like most. In my case I love Varmilo keyboards, since they allow quite a few interesting customizations. Ergonomics plays a fundamental role in keyboard choice, but even the most ergonomic keyboard can harm you if you do not have good body posture, so before investing in a very fancy ergonomic keyboard (like the most famous one HERE), try to correct your posture first.

Mouse

The mouse is one of the objects you will touch the most once you sit in your comfortable chair, so you need to pay the right attention to what you choose. While the Kensington trackball mouse (here) is definitely my personal suggestion, I do not use it: since I used to travel a lot during normal working weeks, I cannot carry it back and forth, and a trackball is not comfortable to move around at all. So I decided to take a small but nice mouse. If you travel a lot like me, you will probably appreciate a Bluetooth mouse, with no cables in your bag (remember: less is more). The mouse should be small and light. I would suggest a hard (metal), mechanical wheel with strong inertia, which gives a nice scrolling feel. One of my favorites is definitely the Logitech MX Anywhere 2.

Computer

This is the most important choice: it is quite easy to change a monitor or a mouse, but changing your PC is much more challenging (and expensive). Depending on what you do on a daily basis you have many, many choices, so let's start with mobility. In my case I move a lot between offices, and wherever I go I usually have external monitors, so I prefer small laptops. My principal tasks are split between malware analysis (mostly for fun) and management (mostly for work), so I need many virtual machines (mostly for fun) and many Chrome tabs (mostly for work). High performance in terms of SSD, CPU and RAM is required (virtualization and malware analysis tool sets). If you are a podcaster or a YouTuber you will need a high-performance graphics processor (especially if you post-process video); if you are a writer you will probably love to write “around the globe” (not in a small cold office), so you will want a light laptop; and if you are a developer or content designer you will probably love a Mac 😀 (just kidding). My favorite so far is the Razer Blade Stealth 13″, which has incredible performance: a touchscreen retina-class display, an i7-class CPU, 16GB of RAM and a 500GB SSD. Generally speaking, if you are looking for a PC and not a Mac, I would definitely suggest taking a look at one of the following tiny but powerful laptops: Dell XPS 13, HP Spectre and ASUS ZenBook.

Headphones

If you are a music lover, you'd better skip this section. I don't use headphones for high-quality music listening, but rather for conferences and calls. However, from time to time I like to focus by listening to my favorite playlist, so I had to figure out what, in my personal view, could be a good arrangement. My best compromise was the Jabra Move. If you don't need music (or you have separate headphones for listening to music), having both ears covered (in stereo) can be quite annoying, since it is not natural to talk without the right feel of your own voice (with stereo headphones your voice sounds quite muffled). On the other hand, if you want to listen to music, you definitely cannot do it with a mono headset. The Jabra Move seems to have nice sound quality and a nice integrated microphone, so you can easily switch between conferences and music without changing hardware.

Software

First of all, let me explain why I am crazy about window managers. When you get into the productivity world, a well-configured system with personal shortcuts is not only a way to speed up boring tasks (opening windows, resizing windows, creating multi-desktop environments, opening up the usual web pages for reading, downloading files to the right folder, saving bookmarks, etc.); it is actually a way to organize your entire day. Just as many patterns are available for email management (I prefer the inbox-zero pattern, even if I don't truly succeed at it), many are available for virtual desktop management. While I used to organize virtual desktops by functionality (and this works pretty well on macOS systems), on my Linux box I prefer keeping virtual desktops per project. So yes, I have many duplicated applications running, each dedicated to a specific topic. Questionable, I know... but this way I feel much more confident, since I prefer to classify my work into projects rather than into functionalities spread over multiple virtual environments. Anyway, a great window manager will definitely help you out. I have always been fascinated by the i3 tiling window manager, but I was always skeptical about the startup phase: on one hand the time to become fluent in i3, and on the other hand the installation and configuration time, which was kind of killing me. But recently I met Regolith, which changed the way I think about window managers. Today I would definitely suggest you try it for at least one week.

While a lot of to-do list software is available out there, I prefer the simple Todo.txt. It is damn simple, you can access it from multiple devices, it manages priorities and... it has a command line!! (Did I already mention that?) If you are a more “web oriented” person, I would suggest Trello-CLI, but really nothing more than that.

One of my favorite editors is VIM. I am not an “old school guy”; I just love the many, many plugins available for it and how you can transform it!

VIM Configuration

Once you’ve learned to master VIM you don’t need any other editor: VIM is everywhere, and you can customize it very quickly. If you like how my VIM looks, my configuration file is HERE; feel free to grab it and use it if you wish.

Conclusion

I don’t think there is a definitive setup. It will change over time depending on your needs. You might need electronic boards and soldering irons, or just a simple laptop. It really depends on what you are doing and on the deliverables you are working on. In this “unusual” (at least for my corner) post I wanted to answer the many questions about the “perfect home setup” that came to me in the past three weeks. I do have my “perfect” setup, which I’ve shared with you, but I am sure it will change over and over again, just as it has changed a lot in the past few years. The only real suggestion I’d like to leave you with is: “less is more”. The fewer things you keep on your desk, the fewer distraction points you will have, and the faster your deliverables will be.

Have fun and #StayAtHome

Limited Shifts in the Cyber Threat Landscape Driven by COVID-19

Though COVID-19 has had enormous effects on our society and economy, its effects on the cyber threat landscape remain limited. For the most part, the same actors we have always tracked are behaving in the same manner they did prior to the crisis. There are some new challenges, but they are perceptible, and we—and our customers—are prepared to continue this fight through this period of unprecedented change.

The significant shifts in the threat landscape we are currently tracking include:

  • The sudden major increase in a remote workforce has changed the nature and vulnerability of enterprise networks.
  • Threat actors are now leveraging COVID-19 and related topics in social engineering ploys.
  • We anticipate increased collection by cyber espionage actors seeking to gather intelligence on the crisis.
  • Healthcare operations, related manufacturing, logistics, and administration organizations, as well as government offices involved in responding to the crisis are increasingly critical and vulnerable to disruptive attacks such as ransomware.
  • Information operations actors have seized on the crisis to promote narratives primarily to domestic or near-abroad audiences.

Same Actors, New Content

The same threat actors and malware families that we observed prior to the crisis are largely pursuing the same objectives as before the crisis, using many of the same tools. They are simply now leveraging the crisis as a means of social engineering. This pattern of behavior is familiar. Threat actors have always capitalized on major events and crises to entice users. Many of the actors who are now using this approach have been tracked for years.

Ultimately, COVID-19 is being adopted broadly in social engineering approaches because it has widespread, generic appeal, and there is a genuine thirst for information on the subject that encourages users to take actions when they might otherwise have been circumspect. We have seen it used by several cyber criminal and cyber espionage actors, and in underground communities some actors have created tools to enable effective social engineering exploiting the coronavirus pandemic. Nonetheless, COVID-19 content is still only used in two percent of malicious emails.

 

For the time being, we do not believe this social engineering will be abating. In fact, it is likely to take many forms as changes in policy, economics, and other unforeseen consequences manifest. Recently we predicted a spike in stimulus-related social engineering, for example. Additionally, the FBI has recently released a press release anticipating a rise in COVID-19 related Business Email Compromise (BEC) scams.

State Actors Likely Very Busy

Given that COVID-19 is undoubtedly the overwhelming concern of governments worldwide for the time being, we anticipate targeting of government, healthcare, biotech, and other sectors by cyber espionage actors. We have not yet observed an incident of cyber espionage targeting COVID-19 related information; however, it is often difficult to determine what information these actors are targeting. There has been at least one case reported publicly which we have not independently confirmed.

We have seen state actors, such as those from Russia, China and North Korea, leverage COVID-19 related social engineering, but given wide interest in that subject, that does not necessarily indicate targeting of COVID-19 related information.

Threat to Healthcare

Though we have no reason to believe there is a sudden, elevated threat to healthcare, the criticality of these systems has probably never been greater, and thus the risk to this sector will be elevated throughout this crisis. The threat of disruption is especially disconcerting as it could affect the ability of these organizations to provide safe and timely care. This threat extends beyond hospitals to pharmaceutical companies, as well as manufacturing, administration and logistics organizations providing vital support. Additionally, many critical public health resources lie at the state and local level.

Though there is some anecdotal evidence suggesting some ransomware actors are avoiding healthcare targets, we do not expect that all actors will practice this restraint. Additionally, an attack on state and local governments, which have been a major target of ransomware actors, could have a disruptive effect on treatment and prevention efforts.

Remote Work

The sudden and unanticipated shift of many workers to work from home status will represent an opportunity for threat actors. Organizations will be challenged to move quickly to ensure sufficient capacity, as well as that security controls and policies are in place. Disruptive situations can reduce morale and increase stress, leading to adverse behavior such as decreasing users’ reticence to open suspicious messages, and even increasing the risk of insider threats. Distractions while working at home can cause lowered vigilance in scrutinizing and avoiding suspicious content as workers struggle to balance work and home responsibilities at the same time. Furthermore, the rapid adoption of platforms will undoubtedly lead to security mistakes and attract the attention of the threat actors.

Secure remote access will likely rely on use of VPNs and user access permissions and authentication procedures intended to limit exposure of proprietary data. Hardware and infrastructure protection should include ensuring full disk encryption on enterprise devices, maintaining visibility on devices through an endpoint security tool, and maintaining regular software updates. 

For more on this issue, see our blog post on the risks associated with remote connectivity.

The Information Operations Threat

We have seen information operations actors promote narratives associated with COVID-19 to manipulate primarily domestic or near-abroad audiences. We observed accounts in Chinese-language networks operating in support of the People's Republic of China (PRC), some of which we previously identified to be promoting messaging pertaining to the Hong Kong protests, shift their focus to praising the PRC's response to the COVID-19 outbreak, criticizing the response of Hong Kong medical workers and the U.S. to the pandemic, and covertly promoting a conspiracy theory that the U.S. was responsible for the outbreak of the coronavirus in Wuhan.

We have also identified multiple information operations promoting COVID-19-related narratives that were aimed at Russian- and Ukrainian-speaking audiences, including some that we assess with high confidence are part of the broader suspected Russian influence campaign publicly referred to as "Secondary Infektion," as well as other suspected Russian activity. These operations have included leveraging a false hacktivist persona to spread the conspiracy theory that the U.S. developed the coronavirus in a weapons laboratory in Central Asia, taking advantage of physical protests in Ukraine to push the narrative that Ukrainians repatriated from Wuhan will infect the broader Ukrainian population, and claiming that the Ukrainian healthcare system is ill-equipped to deal with the pandemic. Other operations alleged that U.S. government or military personnel were responsible for outbreaks of the coronavirus in various countries including Lithuania and Ukraine, or insisted that U.S. personnel would contribute to the pandemic's spread if scheduled multilateral military exercises in the region were to continue as planned.

Outlook

It is clear that adversaries expect us to be distracted by these overwhelming events. The greatest cyber security challenge posed by COVID-19 may be our ability to stay focused on the threats that matter most. An honest assessment of the cyber security implications of the pandemic will be necessary to make efficient use of resources limited by the crisis itself.

For more information and resources that can help strengthen defenses, visit FireEye's "Managing Through Change and Crisis" site, which aggregates many resources to help organizations that are trying to navigate COVID-19 related security challenges.

Introducing our new book “Building Secure and Reliable Systems”



For years, I’ve wished that someone would write a book like this. Since their publication, I’ve often admired and recommended the Google Site Reliability Engineering (SRE) books—so I was thrilled to find that a book focused on security and reliability was already underway when I arrived at Google. Ever since I began working in the tech industry, across organizations of varying sizes, I’ve seen people struggling with the question of how security should be organized: Should it be centralized or federated? Independent or embedded? Operational or consultative? Technical or governing? The list goes on and on.

Both SRE and security have strong dependencies on classic software engineering teams. Yet both differ from classic software engineering teams in fundamental ways:
  • Site Reliability Engineers (SREs) and security engineers tend to break and fix, as well as build.
  • Their work encompasses operations, in addition to development.
  • SREs and security engineers are specialists, rather than classic software engineers.
  • They are often viewed as roadblocks, rather than enablers.
  • They are frequently siloed, rather than integrated in product teams.
For many years, my colleagues and I have argued that security should be a first-class and embedded quality of software. I believe that embracing an SRE-inspired approach is a logical step in that direction. As my understanding of the intersection between security and SRE has deepened, I’ve become even more certain that it’s important to more thoroughly integrate security practices into the full lifecycle of software and data services. The nature of the modern hybrid cloud—much of which is based on open source software frameworks that offer interconnected data and microservices—makes tightly integrated security and resilience capabilities even more important.

At the same time, enterprises are at a critical point where cloud computing, various forms of machine learning, and a complicated cybersecurity landscape are together determining where an increasingly digital world is going, how quickly it will get there, and what risks are involved.

The operational and organizational approaches to security in large enterprises have varied dramatically over the past 20 years. The most prominent instantiations include fully centralized chief information security officers and core infrastructure operations that encompass firewalls, directory services, proxies, and much more—teams that have grown to hundreds or thousands of employees. On the other end of the spectrum, federated business information security teams have either the line of business or technical expertise required to support or govern a named list of functions or business operations. Somewhere in the middle, committees, metrics, and regulatory requirements might govern security policies, and embedded Security Champions might either play a relationship management role or track issues for a named organizational unit. Recently, I’ve seen teams riffing on the SRE model by evolving the embedded role into something like a site security engineer, or into a specific Agile scrum role for specialist security teams.

For good reasons, enterprise security teams have largely focused on confidentiality. However, organizations often recognize data integrity and availability to be equally important, and address these areas with different teams and different controls. The SRE function is a best-in-class approach to reliability. However, it also plays a role in the real-time detection of and response to technical issues—including security-related attacks on privileged access or sensitive data. Ultimately, while engineering teams are often organizationally separated according to specialized skill sets, they have a common goal: ensuring the quality and safety of the system or application.

In a world that is becoming more dependent upon technology every year, a book about approaches to security and reliability drawn from experiences at Google and across the industry is an important contribution to the evolution of software development, systems management, and data protection. As the threat landscape evolves, a dynamic and integrated approach to defense is now a basic necessity. In my previous roles, I looked for a more formal exploration of these questions; I hope that a variety of teams inside and outside of security organizations find this discussion useful as approaches and tools evolve. This project has reinforced my belief that the topics it covers are worth discussing and promoting in the industry—particularly as more organizations adopt DevOps, DevSecOps, SRE, and hybrid cloud architectures along with their associated operating models. At a minimum, this book is another step in the evolution and enhancement of system and data security in an increasingly digital world.

The new book can be downloaded for free from the Google SRE website, or purchased as a physical copy from your preferred retailer.

Thinking Outside the Bochs: Code Grafting to Unpack Malware in Emulation

This blog post continues the FLARE script series with a discussion of patching IDA Pro database files (IDBs) to interactively emulate code. While the fastest way to analyze or unpack malware is often to run it, malware won’t always successfully execute in a VM. I use IDA Pro’s Bochs integration in IDB mode to sidestep tedious debugging scenarios and get quick results. Bochs emulates the opcodes directly from your IDB in a Bochs VM with no OS.

Bochs IDB mode eliminates distractions like switching VMs, debugger setup, neutralizing anti-analysis measures, and navigating the program counter to the logic of interest. Alas, where there is no OS, there can be no loader or dynamic imports. Execution is constrained to opcodes found in the IDB. This precludes emulating routines that call imported string functions or memory allocators. Tom Bennett’s flare-emu ships with emulated versions of these, but for off-the-cuff analysis (especially when I don’t know if there will be a payoff), I prefer interactively examining registers and memory to adjust my tactics ad hoc.

What if I could bring my own imported functions to Bochs like flare-emu does? I’ve devised such a technique, and I call it code grafting. In this post I’ll discuss the particulars of statically linking stand-ins for common functions into an IDB to get more mileage out of Bochs. I’ll demonstrate using this on an EVILNEST sample to unpack and dump next-stage payloads from emulated memory. I’ll also show how I copied a tricky call sequence from one IDB to another IDB so I could keep the unpacking process all in a single Bochs debug session.

EVILNEST Scenario

My sample (MD5 hash 37F7F1F691D42DCAD6AE740E6D9CAB63, which is available on VirusTotal) was an EVILNEST variant that populates the stack with configuration data before calling an intermediate payload. Figure 1 shows this unusual call site.


Figure 1: Call site for intermediate payload

The code in Figure 1 executes in a remote thread within a hollowed-out iexplore.exe process; the malware uses anti-analysis tactics as well. I had the intermediate payload stage and wanted to unpack next-stage payloads without managing a multi-process debugging scenario with anti-analysis. I knew I could stub out a few function calls in the malware to run all of the relevant logic in Bochs. Here’s how I did it.

Code Carving

I needed opcodes for a few common functions to inject into my IDBs and emulate in Bochs. I built simple C implementations of selected functions and compiled them into one binary. Figure 2 shows some of these stand-ins.


Figure 2: Simple implementations of common functions

I compiled this and then used IDAPython code similar to Figure 3 to extract the function opcode bytes.


Figure 3: Function extraction
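A minimal IDAPython sketch of this extraction step, assuming IDA 7.x API names (the loop and the stand-in function names are illustrative, not the exact script from Figure 3):

import binascii
import idaapi
import idc

def get_function_opcodes(func_name):
    # Resolve the stand-in function by name and read its raw opcode bytes.
    ea = idc.get_name_ea_simple(func_name)
    func = idaapi.get_func(ea)
    raw = idc.get_bytes(func.start_ea, func.end_ea - func.start_ea)
    return binascii.hexlify(raw)

for name in ['my_memcpy', 'my_memset', 'my_strlen']:
    print('%s: %s' % (name, get_function_opcodes(name)))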

I curated a library of function opcodes in an IDAPython script as shown in Figure 4. The nonstandard function opcodes at the bottom of the figure were hand-assembled as tersely as possible to generically return specific values and manipulate the stack (or not) in conformance with calling conventions.


Figure 4: Extracted function opcodes
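For a sense of how small these hand-assembled stand-ins can be, here is an illustrative pair (not the actual Figure 4 library; the names are mine):

handcrafted_stubs = {
    # xor eax, eax ; ret -> always returns 0 (e.g. a stand-in for a
    # no-argument API such as IsDebuggerPresent)
    'stub_return_zero': b'\x31\xc0\xc3',
    # xor eax, eax ; inc eax ; ret -> always returns 1
    'stub_return_one': b'\x31\xc0\x40\xc3',
}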

On top of simple functions like memcpy, I implemented a memory allocator. The allocator referenced global state data, meaning I couldn’t just inject it into an IDB and expect it to work. I read the disassembly to find references to global operands and templatize them for use with Python’s format method. Figure 5 shows an example for malloc.


Figure 5: HeapAlloc template code
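Conceptually, the template leaves the allocator's global addresses as placeholders that the script fills in at graft time. A Python 2-flavored sketch, with a hypothetical one-instruction template rather than the real allocator opcodes:

import struct

# 0xA1 imm32 = mov eax, [imm32]; the allocator's "next free pointer" global is
# left as a format placeholder, and the remaining allocator opcodes are elided.
MALLOC_TEMPLATE = '\xa1{next_ptr}'

def render_malloc(data_segment_va):
    # Fill in the little-endian address chosen for the allocator's state data.
    return MALLOC_TEMPLATE.format(next_ptr=struct.pack('<I', data_segment_va))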

I organized the stubs by name as shown in Figure 6 both to call out functions I would need to patch, and to conveniently add more function stubs as I encounter use cases for them. The mangled name I specified as an alias for free is operator delete.


Figure 6: Function stubs and associated names

To inject these functions into the binary, I wrote code to find the next available segment of a given size. I avoided occupying low memory because Bochs places its loader segment below 0x10000. Adjacent to the code in my code segment, I included space for the data used by my memory allocator. Figure 7 shows the result of patching these functions and data into the IDB and naming each location (stub functions are prefixed with stub_).


Figure 7: Data and code injected into IDB
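A rough sketch of the segment-creation and patching step, assuming IDA 7.x API names; the start address, size, and segment name are illustrative, and the real script searches for the next unoccupied range:

import idaapi
import ida_bytes

def add_graft_segment(start, size, name='graft'):
    # Create a new code segment above low memory (Bochs keeps its loader
    # segment below 0x10000) to hold the stub opcodes and allocator data.
    idaapi.add_segm(0, start, start + size, name, 'CODE')
    return start

def write_stub(ea, opcodes):
    # Patch the stub opcode bytes into the freshly created segment.
    ida_bytes.patch_bytes(ea, opcodes)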

The script then iterates all the relevant calls in the binary and patches them with calls to their stub implementations in the newly added segment. As shown in Figure 8, IDAPython’s Assemble function saved the effort of calculating the offset for the call operand manually. Note that the Assemble function worked well here, but for bigger tasks, Hex-Rays recommends a dedicated assembler such as Keystone Engine and its Keypatch plugin for IDA Pro.


Figure 8: Abbreviated routine for assembling a call instruction and patching a call site to an import
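A simplified version of that routine might look like the following, assuming IDA 7.x and the idautils.Assemble helper:

import idautils
import ida_bytes

def patch_call_site(call_ea, stub_ea):
    # Let IDA assemble the new call so the relative displacement to the stub
    # does not have to be computed by hand.
    ok, buf = idautils.Assemble(call_ea, 'call 0x%X' % stub_ea)
    if not ok:
        raise RuntimeError('assembly failed: %s' % buf)
    ida_bytes.patch_bytes(call_ea, buf)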

The Code Grafting script updated all the relevant call sites to resemble Figure 9, with the target functions being replaced by calls to the stub_ implementations injected earlier. This prevented Bochs in IDB mode from getting derailed when hitting these call sites, because the call operands now pointed to valid code inside the IDB.


Figure 9: Patched operator new() call site

Dealing with EVILNEST

The debug scenario for the dropper was slightly inconvenient, and at the same time it set up a very unusual call site for the payload entry point. I used Bochs to execute the dropper until it placed the configuration data on the stack, and then I used IDAPython’s idc.get_bytes function to extract the resulting stack data. I wrote IDAPython script code to iterate the stack data and assemble push instructions into the payload IDB, leading up to a call instruction pointing to the DLL’s export. This allowed me to debug the unpacking process from Bochs within a single session.
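
A condensed sketch of those two steps, assuming IDA 7.x API names (the stack size, export name, and output handling are illustrative):

import struct
import idc

# In the dropper's Bochs session: grab the configuration data the malware has
# just pushed onto the stack (the 0x40 size is an assumption).
esp = idc.get_reg_value('ESP')
blob = idc.get_bytes(esp, 0x40)

# In the payload IDB: rebuild the same stack layout by emitting a push for
# each dword in reverse order, ending with a call to the DLL's export
# ("PayloadExport" is a hypothetical name). These lines were then assembled
# at the synthesized call site (e.g. with idautils.Assemble).
dwords = struct.unpack('<%dI' % (len(blob) // 4), blob)
lines = ['push 0x%08X' % d for d in reversed(dwords)]
lines.append('call PayloadExport')
print('\n'.join(lines))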

I clicked on the beginning of my synthesized call site and hit F4 to run it in Bochs. I was greeted with the warning in Figure 10 indicating that the patched IDB would not match the depictions made by the debugger (which is untrue in the case of Bochs IDB mode). Bochs faithfully executed my injected opcodes producing exactly the desired result.


Figure 10: Patch warning

I watched carefully as the instruction pointer approached and passed the IsDebuggerPresent check. Because of the stub I injected (stub_IsDebuggerPresent), it passed the check returning zero as shown in Figure 11.


Figure 11: Passing up IsDebuggerPresent

I allowed the program counter to advance to address 0x1A1538, just beyond the unpacking routine. Figure 12 shows the register state at this point which reflects a value in EAX that was handed out by my fake heap allocator and which I was about to visit.


Figure 12: Running to the end of the unpacker and preparing to view the result

Figure 13 shows that there was indeed an IMAGE_DOS_SIGNATURE (“MZ”) at this location. I used idc.get_bytes() to dump the unpacked binary from the fake heap location and saved it for analysis.


Figure 13: Dumping the unpacked binary
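The dump itself can be as simple as the following sketch (EAX holds the fake heap pointer as described above; the size is an assumption chosen to cover the unpacked image):

import idc

base = idc.get_reg_value('EAX')
size = 0x20000

# Write the unpacked payload out of emulated memory for further analysis.
with open('unpacked_stage.bin', 'wb') as f:
    f.write(idc.get_bytes(base, size))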

Through Bochs IDB mode, I was also able to use the interactive debugger interface of IDA Pro to experiment with manipulating execution and traversing a different branch to unpack another payload for this malware as well.

Conclusion

Although dynamic analysis is sometimes the fastest road, setting it up and navigating minutia detract from my focus, so I’ve developed an eye for routines that I can likely emulate in Bochs to dodge those distractions while still getting answers. Injecting code into an IDB broadens the set of functions that I can do this with, letting me get more out of Bochs. This in turn lets me do more on-the-fly experimentation, one-off string decodes, or validation of hypotheses before attacking something at scale. It also allows me to experiment dynamically with samples that won’t load correctly anyway, such as unpacked code with damaged or incorrect PE headers.

I’ve shared the Code Grafting tools as part of the flare-ida GitHub repository. To use this for your own analyses:

  1. In IDA Pro’s IDAPython prompt, run code_grafter.py or import it as a module.
  2. Instantiate a CodeGrafter object and invoke its graftCodeToIdb() method:
    • CodeGrafter().graftCodeToIdb()
  3. Use Bochs in IDB mode to conveniently execute your modified sample and experiment away!

This post makes it clear just how far I’ll go to avoid breaking eye contact with IDA. If you’re a fan of using Bochs with IDA too, then this is my gift to you. Enjoy!

If You Can’t Patch Your Email Server, You Should Not Be Running It

CVE-2020-0688 Scan Results, per Rapid7

tl;dr -- it's the title of the post: "If You Can't Patch Your Email Server, You Should Not Be Running It."

I read a disturbing story today with the following news:

"Starting March 24, Rapid7 used its Project Sonar internet-wide survey tool to discover all publicly-facing Exchange servers on the Internet and the numbers are grim.

As they found, 'at least 357,629 (82.5%) of the 433,464 Exchange servers' are still vulnerable to attacks that would exploit the CVE-2020-0688 vulnerability.

To make matters even worse, some of the servers that were tagged by Rapid7 as being safe against attacks might still be vulnerable given that 'the related Microsoft update wasn’t always updating the build number.'

Furthermore, 'there are over 31,000 Exchange 2010 servers that have not been updated since 2012,' as the Rapid7 researchers observed. 'There are nearly 800 Exchange 2010 servers that have never been updated.'

They also found 10,731 Exchange 2007 servers and more than 166,321 Exchange 2010 ones, with the former already running End of Support (EoS) software that hasn't received any security updates since 2017 and the latter reaching EoS in October 2020."

In case you were wondering, threat actors have already been exploiting these flaws for weeks, if not months.

Email is one of the most, if not the most, sensitive and important systems upon which organizations of all shapes and sizes rely. Email servers are, by virtue of their function, inherently exposed to the Internet, meaning they are within the reach of every targeted or opportunistic intruder, worldwide.

In this particular case, unpatched servers are also vulnerable to any actor who can download and update Metasploit, which is virtually 100% of them.

It is the height of negligence to run such an important system in an unpatched state, when there are much better alternatives -- namely, outsourcing your email to a competent provider, like Google, Microsoft, or several others.

I expect some readers are saying "I would never put my email in the hands of those big companies!" That's fine, and I know several highly competent individuals who run their own email infrastructure. The problem is that they represent the small fraction of individuals and organizations who can do so. Even being extremely generous with the numbers, it appears that less than 20%, and probably less than 15% according to other estimates, can even keep their Exchange servers patched, let alone properly configured.

If you think it's still worth the risk, and your organization isn't able to patch, because you want to avoid megacorp email providers or government access to your email, you've made a critical miscalculation. You've essentially decided that it's more important for you to keep your email out of megacorp or government hands than it is to keep it from targeted or opportunistic intruders across the Internet.

Incidentally, you've made another mistake. Those same governments you fear, at least many of them, will just leverage Metasploit to break into your janky email server anyway.

The bottom line is that unless your organization is willing to commit the resources, attention, and expertise to maintaining a properly configured and patched email system, you should outsource it. Otherwise you are being negligent with not only your organization's information, but the information of anyone with whom you exchange emails.

Zero-Day Exploitation Increasingly Demonstrates Access to Money, Rather than Skill — Intelligence for Vulnerability Management, Part One

One of the critical strategic and tactical roles that cyber threat intelligence (CTI) plays is in the tracking, analysis, and prioritization of software vulnerabilities that could potentially put an organization’s data, employees and customers at risk. In this four-part blog series, FireEye Mandiant Threat Intelligence highlights the value of CTI in enabling vulnerability management, and unveils new research into the latest threats, trends and recommendations.

FireEye Mandiant Threat Intelligence documented more zero-days exploited in 2019 than any of the previous three years. While not every instance of zero-day exploitation can be attributed to a tracked group, we noted that a wider range of tracked actors appear to have gained access to these capabilities. Furthermore, we noted a significant increase over time in the number of zero-days leveraged by groups suspected to be customers of companies that supply offensive cyber capabilities, as well as an increase in zero-days used against targets in the Middle East, and/or by groups with suspected ties to this region. Going forward, we are likely to see a greater variety of actors using zero-days, especially as private vendors continue feeding the demand for offensive cyber weapons.

Zero-Day Usage by Country and Group

Since late 2017, FireEye Mandiant Threat Intelligence noted a significant increase in the number of zero-days leveraged by groups that are known or suspected to be customers of private companies that supply offensive cyber tools and services. Additionally, we observed an increase in zero-days leveraged against targets in the Middle East, and/or by groups with suspected ties to this region.

Examples include:

  • A group described by researchers as Stealth Falcon and FruityArmor is an espionage group that has reportedly targeted journalists and activists in the Middle East. In 2016, this group used malware sold by NSO group, which leveraged three iOS zero-days. From 2016 to 2019, this group used more zero-days than any other group.
  • The activity dubbed SandCat in open sources, suspected to be linked to Uzbekistan state intelligence, has been observed using zero-days in operations against targets in the Middle East. This group may have acquired their zero-days by purchasing malware from private companies such as NSO group, as the zero-days used in SandCat operations were also used in Stealth Falcon operations, and it is unlikely that these distinct activity sets independently discovered the same three zero-days.
  • Throughout 2016 and 2017, activity referred to in open sources as BlackOasis, which also primarily targets entities in the Middle East and likely acquired at least one zero-day in the past from private company Gamma Group, demonstrated similarly frequent access to zero-day vulnerabilities.

We also noted examples of zero-day exploitation that have not been attributed to tracked groups but that appear to have been leveraged in tools provided by private offensive security companies, for instance:

  • In 2019, a zero-day exploit in WhatsApp (CVE-2019-3568) was reportedly used to distribute spyware developed by NSO group, an Israeli software company.
  • FireEye analyzed activity targeting a Russian healthcare organization that leveraged a 2018 Adobe Flash zero-day (CVE-2018-15982) that may be linked to leaked source code of Hacking Team.
  • Android zero-day vulnerability CVE-2019-2215 was reportedly being exploited in the wild in October 2019 by NSO Group tools.

Zero-Day Exploitation by Major Cyber Powers

We have continued to see exploitation of zero days by espionage groups of major cyber powers.

  • According to researchers, the Chinese espionage group APT3 exploited CVE-2019-0703 in targeted attacks in 2016.
  • FireEye observed North Korean group APT37 conduct a 2017 campaign that leveraged Adobe Flash vulnerability CVE-2018-4878. This group has also demonstrated an increased capacity to quickly exploit vulnerabilities shortly after they have been disclosed.
  • From December 2017 to January 2018, we observed multiple Chinese groups leveraging CVE-2018-0802 in a campaign targeting multiple industries throughout Europe, Russia, Southeast Asia, and Taiwan. At least three out of six samples were used before the patch for this vulnerability was issued.
  • In 2017, Russian groups APT28 and Turla leveraged multiple zero-days in Microsoft Office products. 

In addition, we believe that some of the most dangerous state sponsored intrusion sets are increasingly demonstrating the ability to quickly exploit vulnerabilities that have been made public. In multiple cases, groups linked to these countries have been able to weaponize vulnerabilities and incorporate them into their operations, aiming to take advantage of the window between disclosure and patch application. 

Zero-Day Use by Financially Motivated Actors

Financially motivated groups have and continue to leverage zero-days in their operations, though with less frequency than espionage groups.

In May 2019, we reported that FIN6 used a Windows server 2019 use-after-free zero-day (CVE-2019-0859) in a targeted intrusion in February 2019. Some evidence suggests that the group may have used the exploit since August 2018. While open sources have suggested that the group potentially acquired the zero-day from criminal underground actor "BuggiCorp," we have not identified direct evidence linking this actor to this exploit's development or sale.

Conclusion

We surmise that access to zero-day capabilities is becoming increasingly commodified based on the proportion of zero-days exploited in the wild by suspected customers of private companies. Possible reasons for this include:

  • Private companies are likely creating and supplying a larger proportion of zero-days than they have in the past, resulting in a concentration of zero-day capabilities among highly resourced groups.
  • Private companies may be increasingly providing offensive capabilities to groups with lower overall capability and/or groups with less concern for operational security, which makes it more likely that usage of zero-days will be observed.

It is likely that state groups will continue to support internal exploit discovery and development; however, the availability of zero-days through private companies may offer a more attractive option than relying on domestic solutions or underground markets. As a result, we expect that the number of adversaries demonstrating access to these kinds of vulnerabilities will almost certainly increase and will do so at a faster rate than the growth of their overall offensive cyber capabilities—provided they have the ability and will to spend the necessary funds.

Register today to hear FireEye Mandiant Threat Intelligence experts discuss the latest in vulnerability threats, trends and recommendations in our upcoming April 30 webinar. 

Sourcing Note: Some vulnerabilities and zero-days were identified based on FireEye research, Mandiant breach investigation findings, and other technical collections. This paper also references vulnerabilities and zero-days discussed in open sources, including Google Project Zero's zero-day "In the Wild" Spreadsheet. While we believe these sources are reliable as used in this paper, we do not vouch for the complete findings of those sources. Due to the ongoing discovery of past incidents, we expect that this research will remain dynamic.

YesWeHack Cybersecurity Training Temporarily Free for Schools and Universities

YesWeHack, a European bug bounty platform, is providing universities and schools with free access to its educational platform YesWeHackEDU. The offer aims to allow educational institutions to deliver practice-oriented cybersecurity training. As of 1st April 2020, all universities and schools can benefit from free licenses of YesWeHackEDU, which are valid until 31st May 2020.

Preparation for IT Security Professions
YesWeHackEDU is aimed at educational institutions that, in the current situation, want to integrate IT and cybersecurity topics into their curricula via distance learning. The educational platform is a simulation of YesWeHack's real bug bounty platform. The attack scenarios, available as practice projects, are simulations of real-world situations. Universities and schools can also kickstart a real bug bounty program on YesWeHackEDU to have their IT infrastructure security-tested by their students.

YesWeHackEDU teaches the identification and elimination of vulnerabilities and allows both students and instructors to develop the technical and managerial skills required to run successful bug bounty programmes. At the same time, it opens up prospects for sought-after professional specialisations such as DevSecOps, Data Science or Security Analysis. Furthermore, YesWeHackEDU facilitates cooperation and cross-functional projects between academic institutions and the business community.

Young Cybersecurity Specialists Needed More than Ever
"The current COVID19 pandemic has driven students and teachers out of the classroom. For cybercriminals, however, the pandemic wave is by no means a reason to pause. They are even more active, taking advantage of the insecurity of many consumers" explains Guillaume Vassault-Houliere, CEO and co-founder of YesWeHack. "The training of future cybersecurity talents cannot, therefore, be delayed. We need to support educational institutions in their mission right now. YesWeHackEDU provides a world-class educational resource for educators and students to develop cybersecurity skills in times of pandemic.'

Free licenses for YesWeHackEDU are distributed worldwide with the support of YesWeHack education partner IT-GNOSIS and can be applied for here.

Seeing Book Shelves on Virtual Calls


I have a confession... for me, the best part of virtual calls, or of seeing any reporter or commentator working from home, is being able to check out their book shelves. I never use computer video, because I want to preserve the world's bandwidth. That means I don't share what my book shelves look like when I'm on a company call. Therefore, I thought I'd share my book shelves with the world.

My big categories of books are martial arts, mixed/miscellaneous, cybersecurity and intelligence, and military and Civil War history. I've cataloged about 400 print books and almost 500 digital titles. Over the years I've leaned towards buying Kindle editions of any book that is mostly print, in order to reduce my footprint.

For the last many years, my book shelving has consisted of three units, each with five shelves. Looking at the topic distribution, as of 2020 I have roughly 6 shelves for martial arts, 4 for mixed/miscellaneous, 3 for cybersecurity and intelligence, and 2 for military and Civil War history.

This is interesting to me because I can compare my mix from five years ago, when I did an interview for the now defunct Warcouncil Warbooks project.


In that image from 2015, I can see 2 shelves for martial arts, 4 for mixed/miscellaneous, 7 for cybersecurity and intelligence, and 2 for military and Civil War history.

What happened to all of the cybersecurity and intelligence books? I donated a bunch of them, and the rest I'm selling on Amazon, along with books (in new or like new condition) that my kids decided they didn't want anymore.

I've probably donated hundreds, possibly approaching a thousand, cybersecurity and IT books over the years. These were mostly books sent by publishers, although some were books I bought and no longer needed. Some readers from northern Virginia might remember me showing up at ISSA or NoVASec meetings with boxes of books that I would leave on tables. I would say "I don't want to come home with any of these. Please be responsible." And guess what -- everyone was!

If anyone would like to share their book shelves, the best place would be as a reply to my Tweet on this post. I look forward to seeing your book shelves, fellow bibliophiles.

FakeNet Genie: Improving Dynamic Malware Analysis with Cheat Codes for FakeNet-NG

As developers of the network simulation tool FakeNet-NG, reverse engineers on the FireEye FLARE team, and malware analysis instructors, we get to see how different analysts use FakeNet-NG and the challenges they face. We have learned that FakeNet-NG provides many useful features and solutions of which our users are often unaware. In this blog post, we will showcase some cheat codes to level up your network analysis with FakeNet-NG. We will introduce custom responses and demonstrate powerful features such as executing commands on connection events and decrypting SSL traffic.

Since its first release in 2016, we have improved FakeNet-NG by adding new features such as Linux support and content-based protocol detection. We recently updated FakeNet-NG with one of our most requested features: custom responses for HTTP and binary protocols.

This blog post offers seven "stages" to help you master different FakeNet-NG strategies. We present them in terms of common scenarios we encounter when analyzing malware. Feel free to skip to the section relevant to your current analysis and/or adapt them to your individual needs. The stages are presented as follows:

  1. Custom File Responses
  2. Custom Binary Protocols
  3. Custom HTTP Responses
  4. Manual Custom Responses
  5. Blacklisting Processes
  6. Executing Commands on Connection Events
  7. Decrypting SSL Traffic

Read on to upgrade your skill tree and become a FakeNet-NG pro!

Before You Start: Configuring FakeNet-NG

Here is a quick reference for FakeNet-NG configurations and log data locations.

  1. Configuration files are in fakenet\configs. You can modify default.ini or copy it to a new file and point FakeNet-NG to the alternate configuration with -c. Ex: fakenet.py -c custom.ini.
  2. Default files are at fakenet\defaultFiles and Listener implementations are at fakenet\listeners.
  3. The fakenet\configs\default.ini default configuration includes global configuration settings and individual Listener configurations.
  4. Custom response configuration samples are included in the directory fakenet\configs in the files CustomProviderExample.py, sample_custom_response.ini, and sample_raw_response.txt.
  5. The install location for FakeNet-NG in FLARE VM is C:\Python27\lib\site-packages\fakenet. You will find the subdirectories containing the defaultFiles, configs, and listeners in this directory.
  6. In FLARE VM, FakeNet-NG packet capture files and HTTP requests can be found on the Desktop in the fakenet_logs directory.

Stage 1: Custom File Responses

As you may have noticed, FakeNet-NG is not limited to serving HTML pages. Depending on the file type requested, FakeNet-NG can serve PE files, ELF files, JPG, GIF, etc. FakeNet-NG is configured with several default files for common types and can also be configured to serve up custom files. The defaultFiles directory contains several types of files for standard responses. For example, if malware sends an FTP GET request for evil.exe, FakeNet-NG will respond with the file defaultFiles\FakeNetMini.exe (the default response for .exe requests). This file is a valid Portable Executable file that displays a message box. By providing an actual PE file, we can observe the malware as it attempts to download and execute a malicious payload. An example FTP session and subsequent execution of the downloaded default file is shown in Figure 1.


Figure 1: Using FTP to download FakeNet-NG's default executable response

Most requests are adequately handled by this system. However, malware sometimes expects a file with a specific format, such as an image with an embedded PowerShell script, or an executable with a hash appended to the file for an integrity check. In cases like these, you can replace one of the default files with a file that meets the malware’s expectation. There is also an option in each of the relevant Listeners (modules that implement network protocols) configurations to modify the defaultFiles path. This allows FakeNet-NG to serve different files without overwriting or modifying default data; a minimal example is sketched after Figure 2. A customized FakeNet.html file is shown in Figure 2.


Figure 2: Modify the default FakeNet.html file to customize the response
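For example, a copy of the default configuration could point the HTTP Listener at an alternate directory of prepared responses, leaving the stock defaultFiles untouched. A sketch showing only the changed line of the [HTTPListener80] section (the customFiles directory name is hypothetical):

[HTTPListener80]
# ... other settings unchanged ...
Webroot:     customFiles/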

Stage 2: Custom Binary Protocols

Many malware samples implement custom binary protocols which require specific byte sequences. For example, malware in the GH0ST family may require each message to begin with a signature such as "GH0ST". The default FakeNet-NG RawListener responds to unknown requests with an echo, i.e. it sends the same data that it has received. This behavior is typically sufficient. However, in cases where a custom response is required, you can still send the data the malware expects.

Custom TCP and UDP responses are now possible with FakeNet-NG. Consider a hypothetical malware sample that beacons the string “Hello” to its command and control (C2) server and waits for a response packet that begins with “FLARE” followed by a numeric command (0-9). We will now demonstrate several interesting ways FakeNet-NG can handle this scenario.

Static Custom Response

You can configure how the TCP and/or UDP Raw Listeners respond to traffic. In this example we tell FakeNet-NG how to respond to any TCP raw request (no protocol detected). First uncomment the Custom configuration option in the RawTCPListener section of fakenet/configs/default.ini as illustrated in Figure 3.

[RawTCPListener]
Enabled:     True
Port:        1337
Protocol:    TCP
Listener:    RawListener
UseSSL:      No
Timeout:     10
Hidden:      False
# To read about customizing responses, see docs/CustomResponse.md
Custom:    sample_custom_response.ini

Figure 3: Activate custom TCP response

Next configure the TcpRawFile custom response in fakenet\configs\sample_custom_response.ini as demonstrated in Figure 4. Make sure to comment-out or replace the default RawTCPListener instance.

[ExampleTCP]
InstanceName:     RawTCPListener
ListenerType:     TCP
TcpRawFile:       flare_command.txt

Figure 4: TCP static custom response specifications

Create the file fakenet\configs\flare_command.txt with the content FLARE0. TCP responses will now be generated from the contents of the file.

Dynamic Custom Response

Perhaps you want to issue commands dynamically rather than committing to a specific command in flare_command.txt. This can be achieved programmatically. Configure the TcpDynamic custom response in fakenet\configs\sample_custom_response.ini as demonstrated in Figure 5. Make sure to comment-out or replace the existing RawTCPListener instance.

[ExampleTCP]
InstanceName:     RawTCPListener
TcpDynamic:       flare_command.py

Figure 5: TCP dynamic custom response specifications

The file fakenet\configs\CustomProviderExample.py can be used as a template for our dynamic response file flare_command.py. We modify the HandleTcp() function and produce the new file fakenet\configs\flare_command.py as illustrated in Figure 6. Now you can choose each command as the malware executes. Figure 7 demonstrates issuing commands dynamically using this configuration.

import socket

def HandleTcp(sock):

    # Keep serving the connection until the client stops sending data.
    while True:
        try:
            data = None

            # Read the malware's beacon (e.g. "Hello").
            data = sock.recv(1024)
        except socket.timeout:
            pass

        if not data:
            break

        # Ask the analyst which command to issue, then reply in the format
        # the sample expects: "FLARE" followed by a numeric command.
        resp = raw_input('\nEnter a numeric command: ')
        command = bytes('FLARE' + resp + '\n')
        sock.sendall(command)

Figure 6: TCP dynamic response script


Figure 7: Issue TCP dynamic commands

Stage 3: Custom HTTP Responses

Malware frequently implements its own encryption scheme on top of the popular HTTP protocol. For example, your sample may send an HTTP GET request to /comm.php?nonce=<random> and expect the C2 server response to be RC4 encrypted with the nonce value. This process is illustrated in Figure 8. How can we easily force the malware to execute its critical code path to observe or debug its behaviors?


Figure 8: Malware example that expects a specific key based on beacon data

For cases like these we recently introduced support for HTTP custom responses. Like TCP custom responses, the HTTPListener also has a new setting named Custom that enables dynamic HTTP responses. This setting also allows FakeNet-NG to select the appropriate responses matching specific hosts or URIs. With this feature, we can now quickly write a small Python script to handle the HTTP traffic dynamically based upon our malware sample.

Start by uncommenting the Custom configuration option in the HTTPListener80 section as illustrated in Figure 9.

[HTTPListener80]
Enabled:     True
Port:        80
Protocol:    TCP
Listener:    HTTPListener
UseSSL:      No
Webroot:     defaultFiles/
Timeout:     10
#ProcessBlackList: dmclient.exe, OneDrive.exe, svchost.exe, backgroundTaskHost.exe, GoogleUpdate.exe, chrome.exe
DumpHTTPPosts: Yes
DumpHTTPPostsFilePrefix: http
Hidden:      False
# To read about customizing responses, see docs/CustomResponse.md
Custom:    sample_custom_response.ini

Figure 9: HTTP Listener configuration

Next configure the HttpDynamic custom response in fakenet\configs\sample_custom_response.ini as demonstrated in Figure 10. Make sure to comment-out or replace the default HttpDynamic instance.

[Example2]
ListenerType:     HTTP
HttpURIs:         comm.php
HttpDynamic:      http_example.py

Figure 10: HttpDynamic configuration

The file fakenet\configs\CustomProviderExample.py can be used as a template for our dynamic response file http_example.py. We modify the HandleRequest() function as illustrated in Figure 11. FakeNet-NG will now encrypt responses dynamically with the nonce.

import socket
from arc4 import ARC4

# To read about customizing HTTP responses, see docs/CustomResponse.md

def HandleRequest(req, method, post_data=None):
    """Sample dynamic HTTP response handler.

    Parameters
    ----------
    req : BaseHTTPServer.BaseHTTPRequestHandler
        The BaseHTTPRequestHandler that received the request
    method: str
        The HTTP method, either 'HEAD', 'GET', 'POST' as of this writing
    post_data: str
        The HTTP post data received by calling `rfile.read()` against the
        BaseHTTPRequestHandler that received the request.
    """

    response = 'Ahoy\r\n'

    # Recover the nonce from the request path (/comm.php?nonce=<random>) and
    # RC4-encrypt the response with it, as the malware expects.
    nonce = req.path.split('=')[1]
    arc4 = ARC4(nonce)
    response = arc4.encrypt(response)

    req.send_response(200)
    req.send_header('Content-Length', len(response))
    req.end_headers()
    req.wfile.write(response)

Figure 11: Dynamic HTTP request handler

Stage 4: Manual Custom Responses

For even more flexibility, the all-powerful networking utility netcat can be used to stand-in for FakeNet-NG listeners. For example, you may want to use netcat to act as a C2 server and issue commands dynamically during execution on port 80. Launch a netcat listener before starting FakeNet-NG, and traffic destined for the corresponding port will be diverted to the netcat listener. You can then issue commands dynamically using the netcat interface as seen in Figure 12.


Figure 12: Use ncat.exe to manually handle traffic

FakeNet-NG's custom response capabilities are diverse. Read the documentation to learn how to boost your custom response high score.

Stage 5: Blacklisting Processes

Some analysts prefer to debug malware from a separate system. There are many reasons to do this; most commonly to preserve the IDA database and other saved data when malware inevitably corrupts the environment. The process usually involves configuring two virtual machines on a host-only network. In this setup, FakeNet-NG intercepts network traffic between the two machines, which renders remote debugging impossible. To overcome this obstacle, we can blacklist the debug server by instructing FakeNet-NG to ignore traffic from the debug server process.

When debugging remotely with IDA Pro, the standard debug server process for a 32-bit Portable Executable is win32_remote.exe (or dbgsrv.exe for WinDbg). All you need to do is add the process names to the ProcessBlackList configuration as demonstrated in Figure 13. Then, the debug servers can still communicate freely with IDA Pro while all other network traffic is captured and redirected by FakeNet-NG.

# Specify processes to ignore when diverting traffic. Windows example used here.
ProcessBlackList: win32_remote.exe, dbgsrv.exe

Figure 13: Modified configs/default.ini to allow remote debugging with IDA Pro

Blacklisting is also useful for keeping noisy processes from polluting the network traffic captured by FakeNet-NG. Examples include processes that attempt to update the Windows system, as well as other malware analysis tools.

Additional settings are available for blacklisting ports and hosts. Please see the README for more details about blacklisting and whitelisting.
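As an illustration only (the option names below are recalled from the default configuration and should be verified against the README), host blacklisting is set per listener much like ProcessBlackList, while port blacklisting is handled in the [Diverter] section:

# Illustrative sketch only; confirm exact option names in the FakeNet-NG README
[Diverter]
# TCP ports that should never be diverted to a listener
BlackListPortsTCP: 139

[HTTPListener80]
# Hosts whose traffic this listener should ignore
HostBlackList: 6.6.6.6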

Stage 6: Executing Commands on Connection Events

Fakenet-NG can be configured to execute commands when a connection is made to a Listener. For example, this option can be used to attach a debugger to a running sample upon a connection attempt. Imagine a scenario where we analyze the packed sample named Lab18-01.exe from the Practical Malware Analysis labs. Using dynamic analysis, we can see that the malware beacons to its C2 server over TCP port 80 using the HTTP protocol as seen in Figure 14.


Figure 14: Malware beacons to its C2 server over TCP port 80

Wouldn’t it be nice if we could magically attach a debugger to Lab18-01.exe when a connection is made? We could speedrun the sample and bypass the entire unpacking stub and any potential anti-debugging tricks the sample may employ.

To configure Fakenet-NG to launch and attach a debugger to any process, modify the [HTTPListener80] section in the fakenet\configs\default.ini to include the ExecuteCmd option. Figure 15 shows an example of a complete [HTTPListener80] section.

[HTTPListener80]
Enabled:     True
Port:        80
Protocol:    TCP
Listener:    HTTPListener
UseSSL:      No
Webroot:     defaultFiles/
Timeout:     10
DumpHTTPPosts: Yes
DumpHTTPPostsFilePrefix: http
Hidden:      False
# Execute x32dbg -p <pid> to attach a debugger to the connecting process. {pid} is filled in automatically by FakeNet-NG
ExecuteCmd: x32dbg.exe -p {pid}

Figure 15: Execute command option to run and attach x32dbg

In this example, we configure the HTTPListener on port 80 to execute the debugger x32dbg.exe, which will attach to a running process whose process ID is determined at runtime. When a connection is made to HTTPListener, FakeNet-NG will automatically replace the string {pid} with the process ID of the process that makes the connection. For a complete list of supported variables, please refer to the Documentation.

Upon restarting Fakenet-NG and running the sample again, we see x32dbg launch and automatically attach to Lab18-01.exe. We can now use memory dumping tools such as Scylla or the OllyDumpEx plugin to dump the executable and proceed to static analysis. This is demonstrated in Figure 16 and Figure 17.


Figure 16: Using FakeNet-NG to attach x32dbg to the sample (animated)


Figure 17: Fakenet-NG executes x32dbg upon connection to practicalmalwareanalysis.com

Stage 7: Decrypting SSL Traffic

Often malware uses SSL for network communication, which hinders traffic analysis considerably as the packet data is encrypted. Using Fakenet-NG's ProxyListener, you can create a packet capture with decrypted traffic. This can be done using the protocol detection feature.

The proxy can detect SSL and "man-in-the-middle" the socket using Python's OpenSSL library. It then maintains full-duplex connections with the malware and with the HTTP Listener, with neither side aware of the other. Consequently, there is a stream of cleartext HTTP traffic between the Proxy and the HTTP Listener, as seen in Figure 18.


Figure 18: Cleartext streams between Fakenet-NG components

To keep FakeNet-NG as simple as possible, the default settings do not have the proxy intercept HTTPS traffic on port 443 and create the decrypted stream. To proxy the data, set the Hidden attribute of HTTPListener443 to True as demonstrated in Figure 19. This tells the proxy to intercept packets and detect the protocol based on packet contents. Please read our blog post on the proxy and protocol detection to learn more about this advanced feature.

[HTTPListener443]
Enabled:     True
Port:        443
Protocol:    TCP
Listener:    HTTPListener
UseSSL:      Yes
Webroot:     defaultFiles/
DumpHTTPPosts: Yes
DumpHTTPPostsFilePrefix: http
Hidden:      True

Figure 19: Hide the listener so the traffic will be proxied

We can now examine the packet capture produced by Fakenet-NG. The cleartext can be found in a TCP stream between an ephemeral port on localhost (ProxyListener) and port 80 on localhost (HTTPListener). This is demonstrated in Figure 20.


Figure 20: Cleartext traffic between HTTPListener and Proxy Listener

Conclusion (New Game+)

Fakenet-NG is the de facto standard network simulation tool for malware analysis. It runs without installation and is included in FLARE VM. In addition to its proven and tested default settings, Fakenet offers countless capabilities and configuration options. In this blog post we have presented several tricks to handle common analysis scenarios. To download the latest version, to see a complete list of all configuration options, or to contribute to Fakenet-NG, please see our Github repository.

‘Zoom is malware’: why experts worry about the video conferencing platform

The company has seen a 535% rise in daily traffic in the past month, but security researchers say the app is a ‘privacy disaster’

As coronavirus lockdowns have moved many in-person activities online, the use of the video-conferencing platform Zoom has quickly escalated. So, too, have concerns about its security.

In the last month, there was a 535% rise in daily traffic to the Zoom.us download page, according to an analysis from the analytics firm SimilarWeb. Its app for iPhone has been the most downloaded app in the country for weeks, according to the mobile app market research firm Sensor Tower. Even politicians and other high-profile figures, including the British prime minister, Boris Johnson, and the former US federal reserve chair Alan Greenspan, use it for conferencing as they work from home.

Related: Coronavirus and app downloads: what you need to know about protecting your privacy

Continue reading...

Why isn’t the government publishing more data about coronavirus deaths? | Jeni Tennison

Studying the past is futile in an unprecedented crisis. Science is the answer – and open data is paramount

Coronavirus – latest updates
See all our coronavirus coverage

Wherever we look, there is a demand for data about Covid-19. We devour dashboards, graphs and visualisations. We want to know about the numbers of tests, cases and deaths; how many beds and ventilators are available, how many NHS workers are off sick. When information is missing, we speculate about what the government might be hiding, or fill in the gaps with anecdotes.

Data is a necessary ingredient in day-to-day decision-making – but in this rapidly evolving situation, it’s especially vital. Everything has changed, almost overnight. Demands for food, transport, and energy have been overhauled as more people stop travelling and work from home. Jobs have been lost in some sectors, and workers are desperately needed in others. Historic experience can no longer tell us how our society or economy is working. Past models hold little predictive power in an unprecedented situation. To know what is happening right now, we need up-to-date information.

Related: A public inquiry into the UK's coronavirus response would find a litany of failures | Anthony Costello

Continue reading...

Cyber Security Roundup for April 2020

A roundup of UK focused Cyber and Information Security News, Blog Posts, Reports and general Threat Intelligence from the previous calendar month, March 2020.

The UK went into lockdown in March due to the coronavirus pandemic; these are unprecedented and uncertain times. Unfortunately, cybercriminals are taking full advantage of the situation: both UK citizens and businesses have been hit with a wave of COVID-19 themed phishing emails and scam social media and text messages (smishing), which prompted warnings by the UK National Cyber Security Centre and UK banks, and a crackdown by the UK Government.
Convincing COVID-19 Scam Text Message (Smishing)

I have not had the opportunity to analyse a copy of the scam text message (smishing) above, but the weblink displayed is not what it appears to be. My guess is that the link is not part of the gov.uk domain, and that the attacker has used an internationalised domain name (IDN) homograph attack, using foreign font characters to disguise the true address of the malicious website being linked.

I was privileged to be on The Telegraph Coronavirus Podcast on 31st March, where I was asked about the security of video messaging apps; a transcript of what I advised is here. Further coronavirus cybersecurity advice was posted on my blog, covering working from home securely and raising awareness of coronavirus-themed message scams. It was also great to see the UK payment card contactless limit increased from £30 to £45 to help prevent the spread of coronavirus.

March threat intelligence reports shone a light on the scale of the cybercriminal shift towards exploiting the COVID-19 crisis for financial gain. Check Point's Global Threat Index reported a spike in the registration of coronavirus-themed domain names, stating that more than 50% of these new domains are likely to be malicious in nature. Proofpoint reported that more than 80% of the threat landscape is using coronavirus themes in some way. There has also been a series of hacking attempts directly against the World Health Organization (WHO), from DNS hijacking to spread a malicious COVID-19 app, to a rather weird plot to spread malware through a dodgy anti-virus solution.

Away from the deluge of coronavirus cybersecurity news and threats, Virgin Media was found to have left a database open, exposing thousands of customer records, and T-Mobile's email vendor was hacked, resulting in the breach of customers' and employees' personal data.

International hotel chain Marriott reported that 5.2 million guest details were stolen after an unnamed app used by guests was hacked. According to Marriott's online breach notification, stolen data included guest names, addresses, email addresses, phone numbers, loyalty account numbers and point balances, employers, gender, birthdays (day and month only), airline loyalty programme information, and hotel preferences. It was only on 30th November 2018 that Marriott disclosed a breach affecting 383 million guests. Tony Pepper, CEO at Egress, said: “Marriott International admitted that it has suffered another data breach, affecting up to 5.2 million people. This follows the well-documented data breach highlighted in November 2018 where the records of approximately 339 million guests were exposed in a catastrophic cybersecurity incident. Having already received an intention to fine from the ICO to the tune of £99m for that, Marriott will be more than aware of its responsibility to ensure that the information it shares and stores is appropriately protected. Not only does this news raise further concerns for Marriott, but it also serves as a reminder to all organisations that they must constantly be working to enhance their data security systems and protocols to avoid similar breaches. It will be interesting to see if further action is taken by the ICO.”

Five billion records were found exposed in an Elasticsearch database left unsecured by a UK security company. Researchers also found an open MongoDB database hosted on Amazon Web Services, containing eight million European Union citizens' retail sales records, including personal and financial information. And Let's Encrypt revoked over three million TLS certificates due to a bug in its certificate rechecking process.

March was another busy month for security updates. Patch Tuesday saw Microsoft release fixes for 116 vulnerabilities, and there was an out-of-band Microsoft fix for the 'EternalDarkness' bug on 10th March, but a zero-day exploited vulnerability in Windows remained unpatched by the Seattle-based software giant. Adobe released a raft of security patches, as did Apple (over 30 patches), Google, Cisco, DrayTek, VMware, and Drupal.

Stay safe, stay home and watch out for the scams.


      Kerberos Tickets on Linux Red Teams

      At FireEye Mandiant, we conduct numerous red team engagements within Windows Active Directory environments. Consequently, we frequently encounter Linux systems integrated within Active Directory environments. Compromising an individual domain-joined Linux system can provide useful data on its own, but the best value is obtaining data, such as Kerberos tickets, that will facilitate lateral movement techniques. By passing these Kerberos Tickets from a Linux system, it is possible to move laterally from a compromised Linux system to the rest of the Active Directory domain.

      There are several ways to configure a Linux system to store Kerberos tickets. In this blog post, we will introduce Kerberos and cover some of the various storage solutions. We will also introduce a new tool that extracts Kerberos tickets from domain-joined systems that utilize the System Security Services Daemon Kerberos Cache Manager (SSSD KCM).

      What is Kerberos

      Kerberos is a standardized authentication protocol that was originally created by MIT in the 1980s. The protocol has evolved over time. Today, Kerberos Version 5 is implemented by numerous products, including Microsoft Active Directory. Kerberos was originally designed to mutually authenticate identities over an unsecured communication line.

      The Microsoft implementation of Kerberos is used in Active Directory environments to securely authenticate users to various services, such as the domain (LDAP), database servers (MSSQL) and file shares (SMB/CIFS). While other authentication protocols exist within Active Directory, Kerberos is one of the most popular methods. Technical documentation on how Microsoft implemented Kerberos Protocol Extensions within Active Directory can be found in the MS-KILE standards published on MSDN. 

      Short Example of Kerberos Authentication in Active Directory

      To illustrate how Kerberos works, we have selected a common scenario where a user John Smith with the account ACMENET.CORP\sa_jsmith wishes to authenticate to a Windows SMB (CIFS) file share in the Acme Corporation domain, hosted on the server SQLSERVER.ACMENET.CORP.

      There are two main types of Kerberos tickets used in Active Directory: Ticket Granting Ticket (TGT) and service tickets. Service tickets are obtained from the Ticket Granting Service (TGS). The TGT is used to authenticate the identity of a particular entity in Active Directory, such as a user account. Service tickets are used to authenticate a user to a specific service hosted on a system. A valid TGT can be used to request service tickets from the Key Distribution Center (KDC). In Active Directory environments, the KDC is hosted on a Domain Controller.

      The diagram in Figure 1 shows the authentication flow.

      Figure 1: Example Kerberos authentication flow

      In summary:

      1. The user requests a Ticket Granting Ticket (TGT) from the Domain Controller.
      2. Once granted, the user passes the TGT back to the Domain Controller and requests a service ticket for cifs/SQLSERVER.ACMENET.CORP.
      3. After the Domain Controller validates the request, a service ticket is issued that will authenticate the user to the CIFS (SMB) service on SQLSERVER.ACMENET.CORP.
      4. The user receives the service ticket from the Domain Controller and initiates an SMB negotiation with SQLSERVER.ACMENET.CORP. During the authentication process, the user provides a Kerberos blob inside an “AP-REQ” structure that includes the service ticket previously obtained.
      5. The server validates the service ticket and authenticates the user.
      6. If the server determines that the user has permissions to access the share, the user can begin making SMB queries.

      For an in-depth example of how Kerberos authentication works, scroll down to view the appendix at the bottom of this article.
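      As a concrete illustration of steps 1 through 3, the Impacket library (mentioned again later in this post) exposes helpers for requesting a TGT and a service ticket. The following is a minimal sketch only, using the example account and SPN from the scenario above with a hypothetical password and KDC hostname; it is not part of the tooling described here:

from impacket.krb5 import constants
from impacket.krb5.kerberosv5 import getKerberosTGT, getKerberosTGS
from impacket.krb5.types import Principal

# Hypothetical values based on the example scenario
domain, username, password = 'ACMENET.CORP', 'sa_jsmith', 'Password123!'
kdc = 'dc01.acmenet.corp'  # assumed Domain Controller / KDC hostname

# Step 1: request a TGT for the user (AS-REQ / AS-REP)
user = Principal(username, type=constants.PrincipalNameType.NT_PRINCIPAL.value)
tgt, cipher, _, session_key = getKerberosTGT(user, password, domain,
                                             lmhash='', nthash='', kdcHost=kdc)

# Steps 2 and 3: use the TGT to request a service ticket for the CIFS SPN (TGS-REQ / TGS-REP)
spn = Principal('cifs/SQLSERVER.ACMENET.CORP',
                type=constants.PrincipalNameType.NT_SRV_INST.value)
tgs, cipher, _, service_session_key = getKerberosTGS(spn, domain, kdc, tgt,
                                                      cipher, session_key)
print('Obtained service ticket for cifs/SQLSERVER.ACMENET.CORP')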

      Kerberos On Linux Domain-Joined Systems

      When a Linux system is joined to an Active Directory domain, it also needs to use Kerberos tickets to access services on the Windows Active Directory domain. Linux uses a different Kerberos implementation. Instead of Windows formatted tickets (commonly referred to as the KIRBI format), Linux uses MIT format Kerberos Credential Caches (CCACHE files). 

      When a user on a Linux system wants to access a remote service with Kerberos, such as a file share, the same procedure is used to request the TGT and corresponding service ticket. In older, more traditional implementations, Linux systems often stored credential cache files in the /tmp directory. Although the files are locked down and not world-readable, a malicious user with root access to the Linux system could trivially obtain a copy of the Kerberos tickets and reuse them.

      On modern versions of Red Hat Enterprise Linux and derivative distributions, the System Security Services Daemon (SSSD) is used to manage Kerberos tickets on domain-joined systems. SSSD implements its own form of Kerberos Cache Manager (KCM) and encrypts tickets within a database on the system. When a user needs access to a TGT or service ticket, the ticket is retrieved from the database, decrypted, and then passed to the remote service (for more on SSSD, check out this great research from Portcullis Labs).

      By default, SSSD maintains a copy of the database at the path /var/lib/sss/secrets/secrets.ldb. The corresponding key is stored as a hidden file at the path /var/lib/sss/secrets/.secrets.mkey. By default, the key is only readable if you have root permissions.

      If a user is able to extract both of these files, it is possible to decrypt the files offline and obtain valid Kerberos tickets. We have published a new tool called SSSDKCMExtractor that will decrypt relevant secrets in the SSSD database and pull out  the credential cache Kerberos blob. This blob can be converted into a usable Kerberos CCache file that can be passed to other tools, such as Mimikatz, Impacket, and smbclient. CCache files can be converted into Windows format using tools such as Kekeo.

      We leave it as an exercise to the reader to convert the decrypted Kerberos blob into a usable credential cache file for pass-the-cache and pass-the-ticket operations.
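      Once a usable credential cache file has been produced, it can be inspected programmatically before being handed to other tooling. The snippet below is a rough sketch using Impacket's CCache class and a hypothetical file name; converting the decrypted blob itself remains an exercise for the reader:

from impacket.krb5.ccache import CCache

# Load a converted credential cache (hypothetical file name) and dump its contents
ccache = CCache.loadFile('user.ccache')
ccache.prettyPrint()

      Command-line tools such as smbclient typically pick the cache up via the KRB5CCNAME environment variable, as in the example shown later in Figure 4.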

      Using SSSDKCMExtractor is simple. An example SSSD KCM database and key are shown in Figure 2.


      Figure 2: SSSD KCM files

      Invoking SSSDKCMExtractor with the --database and --key parameters will parse the database and decrypt the secrets as shown in Figure 3.


      Figure 3: Extracting Kerberos data

      After manipulating the data retrieved, it is possible to use the CCACHE in smbclient as shown in Figure 4. In this example, a domain administrator ticket was obtained and used to access the domain controller’s C$ share.


      Figure 4: Compromising domain controller with extracted tickets

      The Python script and instructions can be found on the FireEye Github.

      Conclusion

      By obtaining privileged access to a domain-joined Linux system, it is often possible to scrape Kerberos tickets useful for lateral movement. Although it is still common to find these tickets in the /tmp directory, it is now possible to also scrape these tickets from modern Linux systems that utilize the SSSD KCM.

      With the right Kerberos tickets, it is possible to move laterally to the rest of the Active Directory domain. If a privileged user authenticates to a compromised Linux system (such as a Domain Admin) and leaves a ticket behind, it would be possible to steal that user's ticket and obtain privileged rights in the Active Directory domain.

      Appendix: Detailed Example of Kerberos Authentication in Active Directory

      To illustrate how Kerberos works, we have selected a common scenario where a user John Smith with the account ACMENET.CORP\sa_jsmith wishes to authenticate to a Windows SMB (CIFS) file share in the Acme Corporation domain, hosted on the server SQLSERVER.ACMENET.CORP.

      There are two main types of Kerberos tickets used in Active Directory: the Ticket Granting Ticket (TGT) and service tickets. Service tickets are obtained from the Ticket Granting Service (TGS). The TGT is used to authenticate the identity of a particular entity in Active Directory, such as a user account. Service tickets are used to authenticate a user to a specific service hosted on a domain-joined system. A valid TGT can be used to request service tickets from the Key Distribution Center (KDC). In Active Directory environments, the KDC is hosted on a Domain Controller.

      When the user wants to authenticate to the remote file share, Windows first checks if a valid TGT is present in memory on the user's workstation. If a TGT isn't present, a new TGT is requested from the Domain Controller in the form of an AS-REQ request. To prevent password cracking attacks (AS-REP Roasting), by default, Kerberos Preauthentication is performed first. Windows creates a timestamp and encrypts the timestamp with the user's Kerberos key (Note: User Kerberos keys vary based on encryption type. In the case of RC4 encryption, the user's RC4 Kerberos key is directly derived from the user's account password. In the case of AES encryption, the user's Kerberos key is derived from the user's password and a salt based on the username and domain name). The domain controller receives the request and decrypts the timestamp by looking up the user's Kerberos key. An example AS-REQ packet is shown in Figure 5.

      Figure 5: AS-REQ
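      As an aside on the key material used during preauthentication: for RC4, the user's Kerberos key is simply the NT hash of the account password. The sketch below shows that derivation with a hypothetical password (note that MD4 availability in hashlib depends on the underlying OpenSSL build); AES keys additionally run the password through PBKDF2 with the username/domain salt described above:

import hashlib

def rc4_kerberos_key(password):
    # The RC4-HMAC Kerberos key is the NT hash: MD4 over the UTF-16LE encoded password.
    # 'md4' may be unavailable where the OpenSSL build disables legacy digests.
    return hashlib.new('md4', password.encode('utf-16-le')).digest()

print(rc4_kerberos_key('Password123!').hex())  # hypothetical password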

      Once preauthentication is successful, the Domain Controller issues an AS-REP response packet that contains various metadata fields, the TGT itself, and an "Authenticator". The data within the TGT itself is considered sensitive. If a user could freely modify the content within the TGT, they could impersonate any user in the domain as performed in the Golden Ticket attack. To prevent this from easily occurring, the TGT is encrypted with the long term Kerberos key stored on the Domain Controller. This key is derived from the password of the krbtgt account in Active Directory.

      To prevent users from impersonating another user with a stolen TGT blob, Active Directory’s Kerberos implementation uses session keys that are used for mutual authentication between the user, domain, and service. When the TGT is requested, the Domain Controller generates a session key and places it in two places: the TGT itself (which is encrypted with the krbtgt key and unreadable by the end user), and in a separate structure called the Authenticator. The Domain Controller encrypts the Authenticator with the user's personal Kerberos key.

      When Windows receives the AS-REP packet back from the domain controller, it caches the TGT ticket data itself into memory. It also decrypts the Authenticator with the user's Kerberos key and obtains a copy of the session key generated by the Domain Controller. Windows stores this session key in memory for future use. At this point, the user's system has a valid TGT that it can use to request service tickets from the domain controller. An example AS-REP packet is shown in Figure 6.

      Figure 6: AS-REP

      After obtaining a valid TGT for the user, Windows requests a service ticket for the file share service hosted on the remote system SQLSERVER.ACMENET.CORP. The request is made using the service's Service Principal Name (“SPN”). In this case, the SPN would be cifs/SQLSERVER.ACMENET.CORP. Windows builds the service ticket request in a TGS-REQ packet. Within the TGS-REQ packet, Windows places a copy of the TGT previously obtained from the Domain Controller. This time, the Authenticator is encrypted with the TGT session key previously obtained from the domain controller. An example TGS-REQ packet is shown in Figure 7.

      Figure 7: TGS-REQ

      Once the Domain Controller receives the TGS-REQ packet, it extracts the TGT from the request and decrypts it with the krbtgt Kerberos key. The Domain Controller verifies that the TGT is valid and extracts the session key field from the TGT. The Domain Controller then attempts to decrypt the Authenticator in the TGS-REQ packet with the session key. Once decrypted, the Domain Controller examines the Authenticator and verifies the contents. If this operation succeeds, the user is considered authenticated by the Domain Controller and the requested service ticket is created.

      The Domain Controller generates the service ticket requested for cifs/SQLSERVER.ACMENET.CORP. The data within the service ticket is also considered sensitive. If a user could manipulate the service ticket data, they could impersonate any user on the domain to the service as performed in the Silver Ticket attack. To prevent this from easily happening, the Domain Controller encrypts the service ticket with the Kerberos key of the computer the user is authenticating to. All domain-joined computers in Active Directory possess a randomly generated computer account credential that both the computer and Domain Controller are aware of. The Domain Controller also generates a second session key specific to the service ticket and places a copy in both the encrypted service ticket and a new Authenticator structure. This Authenticator is encrypted with the first session key (the TGT session key). The service ticket, Authenticator, and metadata are bundled in a TGS-REP packet and forwarded back to the user. An example TGS-REP packet is shown in Figure 8.

      Figure 8: TGS-REP

      Once Windows receives the TGS-REP for cifs/SQLSERVER.ACMENET.CORP, Windows extracts the service ticket from the packet and caches it into memory. It also decrypts the Authenticator with the TGT specific session key to obtain the new service specific session key. Using both pieces of information, it is now possible for the user to authenticate to the remote file share. Windows negotiates a SMB connection with SQLSERVER.ACMENET.CORP. It places a Kerberos blob in an "ap-req" structure. This Kerberos blob includes the service ticket received from the domain controller, a new Authenticator structure, and metadata. The new Authenticator is encrypted with the service specific session key that was previously obtained from the Domain Controller. The authentication process is shown in Figure 9.

      Figure 9: Authenticating to SMB (AP-REQ)

      Once the file share server receives the authentication request, it first extracts and decrypts the service ticket from the Kerberos authentication blob and verifies the data within. It also extracts the service specific session key from the service ticket and attempts to decrypt the Authenticator with it. If this operation succeeds, the user is considered to be authenticated to the service. The server will acknowledge the successful authentication by sending one final Authenticator back to the user, encrypted with the service specific session key. This action completes the mutual authentication process. The response (contained within an “ap-rep” structure) is shown in Figure 10.

      Figure 10: Final Authenticator (Mutual Authentication, AP-REP)

      A diagram of the authentication flow is shown in Figure 11.

      Figure 11: Example Kerberos authentication flow

      Morrisons not liable for massive staff data leak, court rules

      UK supreme court says retailer not to blame for actions of employee with grudge

      The UK’s highest court has ruled that Morrisons should not be held liable for the criminal act of an employee with a grudge who leaked the payroll data of about 100,000 members of staff.

      The supermarket group brought a supreme court challenge in an attempt to overturn previous judgments which gave the go-ahead for compensation claims by thousands of employees whose personal details were posted on the internet.

      Continue reading...

      Now Open: NICE K12 Cybersecurity Education Conference 2020 Call for Presentations

      6th Annual NICE K12 Cybersecurity Education Conference Call for Presentations: Open April 1 – June 30, 2020 The 2020 NICE K12 Cybersecurity Education Conference Planning Committee is seeking timely and thought-provoking K12 cybersecurity education topics that will challenge and inform educational leaders from across the stakeholder community. The theme for the 2020 K12 Conference is “Expanding the Gateway to the Cybersecurity Workforce of the Future”. The Planning Committee encourages proposals from a diverse array of organizations and individuals with different perspectives, including K12