Control Flow Integrity in the Android kernel


Posted by Sami Tolvanen, Staff Software Engineer, Android Security & Privacy

[Cross-posted from the Android Developers Blog]

Android's security model is enforced by the Linux kernel, which makes it a tempting target for attackers. We have put a lot of effort into hardening the kernel in previous Android releases and in Android 9, we continued this work by focusing on compiler-based security mitigations against code reuse attacks.
Google's Pixel 3 will be the first Android device to ship with LLVM's forward-edge Control Flow Integrity (CFI) enforcement in the kernel, and we have made CFI support available in Android kernel versions 4.9 and 4.14. This post describes how kernel CFI works and provides solutions to the most common issues developers might run into when enabling the feature.

Protecting against code reuse attacks

A common method of exploiting the kernel is using a bug to overwrite a function pointer stored in memory, such as a stored callback pointer or a return address that had been pushed to the stack. This allows an attacker to execute arbitrary parts of the kernel code to complete their exploit, even if they cannot inject executable code of their own. This method of gaining code execution is particularly popular with the kernel because of the huge number of function pointers it uses, and the existing memory protections that make code injection more challenging.
CFI attempts to mitigate these attacks by adding additional checks to confirm that the kernel's control flow stays within a precomputed graph. This doesn't prevent an attacker from changing a function pointer if a bug provides write access to one, but it significantly restricts the valid call targets, which makes exploiting such a bug more difficult in practice.
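Conceptually, the check works like a whitelist keyed by the function type expected at each call site. A minimal Python sketch of the idea (the table, signatures, and function names are invented for illustration; real CFI operates on compiled machine code, not Python objects):

```python
# Conceptual model of forward-edge CFI. At compile time, LLVM computes, for
# each function type, the set of functions that may legally be the target of
# an indirect call with that type; a dict models that precomputed graph.
VALID_TARGETS = {
    "int (struct file *)": {"ext4_open", "proc_open"},
    "void (unsigned long)": {"timer_tick"},
}

def indirect_call(expected_type, target, *args):
    # The CFI check: refuse to branch anywhere outside the precomputed set,
    # even if an attacker has overwritten the function pointer.
    if target.__name__ not in VALID_TARGETS.get(expected_type, set()):
        raise RuntimeError("CFI violation: %s is not a valid target"
                           % target.__name__)
    return target(*args)
```

Even if a bug lets an attacker redirect the pointer to an arbitrary function, the dispatch aborts unless the target is in the small precomputed set for that call site's type.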

Figure 1. In an Android device kernel, LLVM's CFI limits 55% of indirect calls to at most 5 possible targets and 80% to at most 20 targets.

Gaining full program visibility with Link Time Optimization (LTO)

In order to determine all valid call targets for each indirect branch, the compiler needs to see all of the kernel code at once. Traditionally, compilers work on a single compilation unit (source file) at a time and leave merging the object files to the linker. LLVM's solution to CFI is to require the use of LTO, where the compiler produces LLVM-specific bitcode for all C compilation units, and an LTO-aware linker uses the LLVM back-end to combine the bitcode and compile it into native code.

Figure 2. A simplified overview of how LTO works in the kernel. All LLVM bitcode is combined, optimized, and generated into native code at link time.
Linux has used the GNU toolchain for assembling, compiling, and linking the kernel for decades. While we continue to use the GNU assembler for stand-alone assembly code, LTO requires us to switch to LLVM's integrated assembler for inline assembly, and either GNU gold or LLVM's own lld as the linker. Switching to a relatively untested toolchain on a huge software project will lead to compatibility issues, which we have addressed in our arm64 LTO patch sets for kernel versions 4.9 and 4.14.
In addition to making CFI possible, LTO also produces faster code due to global optimizations. However, additional optimizations often result in a larger binary size, which may be undesirable on devices with very limited resources. Disabling LTO-specific optimizations, such as global inlining and loop unrolling, can reduce binary size by sacrificing some of the performance gains. When using GNU gold, the aforementioned optimizations can be disabled with the following additions to LDFLAGS:
LDFLAGS += -plugin-opt=-inline-threshold=0 \
-plugin-opt=-unroll-threshold=0
Note that flags to disable individual optimizations are not part of the stable LLVM interface and may change in future compiler versions.

Implementing CFI in the Linux kernel

LLVM's CFI implementation adds a check before each indirect branch to confirm that the target address points to a valid function with a correct signature. This prevents an indirect branch from jumping to an arbitrary code location and even limits the functions that can be called. As C compilers do not enforce similar restrictions on indirect branches, there were several CFI violations due to function type declaration mismatches even in the core kernel that we have addressed in our CFI patch sets for kernels 4.9 and 4.14.
Kernel modules add another complication to CFI, as they are loaded at runtime and can be compiled independently from the rest of the kernel. In order to support loadable modules, we have implemented LLVM's cross-DSO CFI support in the kernel, including a CFI shadow that speeds up cross-module look-ups. When compiled with cross-DSO support, each kernel module contains information about valid local branch targets, and the kernel looks up information from the correct module based on the target address and the modules' memory layout.
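The cross-module lookup can be pictured as a search over the modules' load ranges. The real CFI shadow is a fast lookup table, but a sorted list plus binary search captures the idea (the addresses and module names below are made up for illustration):

```python
import bisect

# Each module occupies an address range; cross-DSO CFI must map an indirect
# call target back to the module whose tables can validate it.
MODULES = [
    # (base_address, size, name) -- values invented for illustration
    (0xFFFF000008000000, 0x1000000, "vmlinux"),
    (0xFFFF000010000000, 0x20000, "nf_conntrack"),
]
BASES = [m[0] for m in MODULES]

def module_for_target(addr):
    """Return the module that should validate this call target, or None."""
    i = bisect.bisect_right(BASES, addr) - 1
    if i >= 0:
        base, size, name = MODULES[i]
        if base <= addr < base + size:
            return name
    return None  # no module owns the address: the check fails
```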

Figure 3. An example of a cross-DSO CFI check injected into an arm64 kernel. Type information is passed in X0 and the target address to validate in X1.
CFI checks naturally add some overhead to indirect branches, but due to more aggressive optimizations, our tests show that the impact is minimal, and overall system performance even improved 1-2% in many cases.

Enabling kernel CFI for an Android device

CFI for arm64 requires clang version >= 5.0 and binutils >= 2.27. The kernel build system also assumes that the LLVMgold.so plug-in is available in LD_LIBRARY_PATH. Pre-built toolchain binaries for clang and binutils are available in AOSP, but upstream binaries can also be used.
The following kernel configuration options are needed to enable kernel CFI:
CONFIG_LTO_CLANG=y
CONFIG_CFI_CLANG=y
Using CONFIG_CFI_PERMISSIVE=y may also prove helpful when debugging a CFI violation or during device bring-up. This option turns a violation into a warning instead of a kernel panic.
As mentioned in the previous section, the most common issue we ran into when enabling CFI on Pixel 3 was benign violations caused by function pointer type mismatches. When the kernel runs into such a violation, it prints out a runtime warning that contains the call stack at the time of the failure, and the call target that failed the CFI check. Changing the code to use a correct function pointer type fixes the issue. While we have fixed all known indirect branch type mismatches in the Android kernel, similar problems may still be found in device-specific drivers, for example.
CFI failure (target: [<fffffff3e83d4d80>] my_target_function+0x0/0xd80):
------------[ cut here ]------------
kernel BUG at kernel/cfi.c:32!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP

Call trace:

[<ffffff8752d00084>] handle_cfi_failure+0x20/0x28
[<ffffff8752d00268>] my_buggy_function+0x0/0x10
Figure 4. An example of a kernel panic caused by a CFI failure.
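To see why a harmless prototype mismatch trips the check: LLVM attaches an identifier derived from each function's type to every address-taken function, and the call site compares identifiers before branching. A toy model in Python (the signature strings and use of SHA-1 are stand-ins for illustration, not LLVM's actual encoding):

```python
import hashlib

def type_id(signature):
    # LLVM derives a compact identifier from the function type; hashing a
    # signature string is only a stand-in for that encoding.
    return hashlib.sha1(signature.encode()).hexdigest()[:8]

# A callback field declared as taking a struct pointer...
declared = type_id("int (struct seq_file *)")
# ...but the registered handler was written to take void *, a classic
# benign mismatch that C silently tolerates.
registered = type_id("int (void *)")

# The ids differ, so the indirect call fails the CFI check even though
# both functions would behave identically at the ABI level.
cfi_ok = declared == registered
```

Fixing the handler's prototype to match the declared pointer type makes the identifiers equal and the check pass.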
Another potential pitfall is address space conflicts, though these should be less common in driver code. LLVM's CFI checks only understand kernel virtual addresses, so any code that runs at another exception level or makes an indirect call to a physical address will result in a CFI violation. These types of failures can be addressed by disabling CFI for a single function using the __nocfi attribute, or even disabling CFI for entire code files using the $(DISABLE_CFI) compiler flag in the Makefile.
static int __nocfi address_space_conflict(void)
{
	void (*fn)(void);

	/* branching to a physical address trips CFI w/o __nocfi */
	fn = (void *)__pa_symbol(function_name);
	cpu_install_idmap();
	fn();
	cpu_uninstall_idmap();

	return 0;
}
Figure 5. An example of fixing a CFI failure caused by an address space conflict.
Finally, like many hardening features, CFI can also be tripped by memory corruption errors that might otherwise result in random kernel crashes at a later time. These may be more difficult to debug, but memory debugging tools such as KASAN can help here.

Conclusion

We have implemented support for LLVM's CFI in Android kernels 4.9 and 4.14. Google's Pixel 3 will be the first Android device to ship with these protections, and we have made the feature available to all device vendors through the Android common kernel. If you are shipping a new arm64 device running Android 9, we strongly recommend enabling kernel CFI to help protect against kernel vulnerabilities.
LLVM's CFI protects indirect branches against attackers who manage to gain access to a function pointer stored in kernel memory. This makes a common method of exploiting the kernel more difficult. Our future work involves also protecting function return addresses from similar attacks using LLVM's Shadow Call Stack, which will be available in an upcoming compiler release.

Finding a Weak Link in the Chain

While supply-chain attacks pose serious risks, many can be avoided or mitigated by focusing on a defense-in-depth approach and basic security best practices.



Rapidly Evolving Ransomware GandCrab Version 5 Partners With Crypter Service for Obfuscation

The GandCrab ransomware, which first appeared in January, has been updated rapidly during its short life, with Version 5.0.2 appearing this month. In this post we will examine the latest version and how the authors have improved the code (and in some cases have made mistakes). McAfee gateway and endpoint products are able to protect customers from known variants of this threat.

The GandCrab authors have moved quickly to improve the code and have added comments to provoke the security community, law enforcement agencies, and the NoMoreRansom organization. Despite the agile approach of the developers, the coding is not professional and bugs usually remain in the malware (even in Version 5.0.2), but the speed of change is impressive and increases the difficulty of combating it.

The group behind GandCrab has achieved cult status in underground forums; the authors are undoubtedly confident and have strong marketing skills, but flawless programming is not one of their strengths.

Underground alliances

On September 27, the GandCrab crew announced Version 5 with the same showmanship as its earlier versions. GandCrab ransomware has gained a lot of attention from security researchers as well as the underground. The developers market the affiliate program like a “members-only club” and new affiliates are lining up to join, in the hope of making easy money through the large-scale ransomware extortion scheme.

The prospect of making money not only attracts new affiliates, but also leads to the formation of new alliances between GandCrab and other criminal services that strengthen the malware’s supply and distribution networks. One of these alliances became obvious during Version 4, in which the ransomware started being distributed through the new Fallout exploit kit. This alliance was again emphasized in the GandCrab Version 5 announcement, as the GandCrab crew openly endorsed FalloutEK.

The GandCrab Version 5 announcement.

With Version 5, yet another alliance with a criminal service has been formed. The malware crypter service NTCrypt announced that it is partnering with the GandCrab crew. A crypter service provides malware obfuscation to evade antimalware security products.

The NTCrypt-GandCrab partnership announcement offering a special price for GandCrab users.

The partnership between GandCrab and NTCrypt was established in a novel way. At the end of September, the GandCrab crew started a “crypt competition” on a popular underground forum to find a new crypter service they could partner with. NTCrypt applied and eventually won the competition.

The “crypt competition” announcement.

This novel approach emphasizes once more the cult status GandCrab has in the underground community. For a criminal business such as GandCrab, building these alliances makes perfect sense: They increase the ease of operation and a trusted affiliate network minimizes their risk exposure by allowing them to avoid less-trusted suppliers and distributors.

For the security community it is worrisome to see that GandCrab’s aggressive marketing strategy seems to be paying off. It is generating a strong influx of criminal interest and allows the GandCrab crew to form alliances with other essential services in the cybercriminal supply chain.

GandCrab overview

GandCrab Version 5 uses several mechanisms to infect systems. The following diagram shows an overview of GandCrab’s behavior.

GandCrab Version 5 Infection

Entry vector

GandCrab uses several entry vectors:

  • Remote desktop connections with weak security or bought in underground forums
  • Phishing emails with links or attachments
  • Trojanized legitimate programs containing the malware, or downloading and launching it
  • Exploit kits such as RigEK and FalloutEK
  • PowerShell scripts or within the memory of the PowerShell process (the latter mainly in Version 5.0.2)
  • Botnets such as Phorpiex (an old botnet that spread not only this malware but many others)

The goal of GandCrab, as with other ransomware, is to encrypt all or many files on an infected system and insist on payment to unlock them. The developer requires payment in cryptocurrency, primarily Dash (or Bitcoin in some older versions), because it is difficult to trace and payments are received quickly.

The malware is usually, but not always, packed. We have seen variants in .exe format (the primary form) along with DLLs. GandCrab is effectively ransomware as a service; its operators can choose which version they want.

Version 5.0

This version has two releases. The first works only on Windows 7 or later due to a compile-time mistake. Version 5.0 carries two exploits that try to elevate privileges. It checks the version of the operating system and the TokenIntegrityLevel class of the process. If the SID Subauthority is SECURITY_MANDATORY_LOW_RID (0x1000), it tries to execute the exploits, provided it has also passed an earlier check of a mutex value.

The first exploit is the one released in August on Twitter and GitHub by the hacker "SandboxEscaper." The original can be found at this link. The Twitter handle for this hacker is https://twitter.com/sandboxescaper.

This exploit abuses a flaw in the Windows Task Scheduler, in which the operating system improperly handles calls to an Advanced Local Procedure Call (ALPC) interface.

The GandCrab authors claim there is no CVE for this exploit, but that is incorrect: it falls under CVE-2018-8440. This exploit can affect versions from Windows 7 through Windows 10 Server. More information about this exploit can be found at this link.

In the first release of Version 5.0, the malware authors wrote the exploit code using direct calls to the functions. Thus at compile time the binary has its import address table (IAT) filled with the DLL needed for some of those calls. This DLL does not exist in Windows Vista and XP, so the malware fails to run on these systems, showing an error.

Import of xpsprint.dll that will not run on Windows XP or Vista.

The exploit using direct calls.

This release dropped an HTML file after encrypting the user's files, but the file was faulty because it did not always contain the information needed to decrypt the user's files.

The second release uses dynamic calls and obfuscates the strings of the exploit, as shown in the previous image. (Earlier they were in plain text.)

The exploit with dynamic calls and obfuscated strings.

The second exploit is covered under CVE-2018-8120, which in Windows 7, Windows Server 2008 R2, and Windows Server 2008 allows an elevation of privilege via the kernel. Thanks to a faulty object in the token of the System process, the malware can change its own token and thereby execute with System privileges.

Executing the exploit CVE-2018-8120.

You can read more about this exploit on mcafee.com.

The malware checks the version of the operating system, the type of user, and whether it can get the token elevation information of its own process before employing the exploits. In some cases, it fails to infect. For example, in Windows XP the second release of Version 5 runs but does not encrypt the files. (We thank fellow researcher Yassine Lemmou, who shared this information with us.)

We and Lemmou know where the problem is in Version 5.0.2. A few changes to the registry could make the malware run correctly, but we do not want to help the malware authors fix their product. Even though GandCrab’s authors quickly repair mistakes as they are pointed out, they still fail to find some of the basic errors by themselves. (McAfee has had no contact with GandCrab’s developers.)

The second release writes a random extension of five letters instead of using the normal .CRAB or .KRAB extension seen in previous versions. The malware keeps this information as binary data in a new registry entry in the subkey “ext_data\data” and in the value entry of “ext.”

A new registry entry to hold the random extension.

The malware tries creating this new entry in the root key of HKEY_LOCAL_MACHINE. If it cannot—for example, because the user does not have admin rights—it places the entry in the root key HKEY_CURRENT_USER. This entry is deleted in some samples after the files have been encrypted.
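The behavior above can be sketched as pseudo-logic in Python, with nested dicts standing in for the two registry roots (the key names follow the analysis; everything else is invented):

```python
import random
import string

HKLM = {}  # stand-in for HKEY_LOCAL_MACHINE
HKCU = {}  # stand-in for HKEY_CURRENT_USER

def store_extension(has_admin_rights, length=5):
    # Generate the random extension (5 letters in this release; 5.0.2
    # lengthened it to 10) and persist it under ext_data\data -> ext.
    ext = "".join(random.choice(string.ascii_lowercase) for _ in range(length))
    root = HKLM if has_admin_rights else HKCU  # fall back without admin rights
    root.setdefault("ext_data", {}).setdefault("data", {})["ext"] = ext
    return ext
```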

Version 5.0.1

This version fixed some internal bugs in the malware but made no other notable changes.

Version 5.0.2

This version changes the random extension length from 5 to 10 characters and fixes some internal bugs. Other bugs remain, however, meaning files cannot always be encrypted.

The latest

This section is based on the latest version of the malware (Version 5.0.2 on October 4), though some elements appear in earlier releases of Version 5. Starting with this version, the malware uses two exploits to try to elevate privileges in the system.

The first exploit uses a dynamic call to the function IsWoW64Process to detect whether the operating system is running in 32 or 64 bits.

The dynamic call to IsWoW64Process with obfuscated strings.

Depending on the result, the malware has two embedded DLLs, each encrypted with a simple XOR 0x18 operation.

Decrypting the DLL to load with the exploit and fix the header.

The malware authors use a clever trick to avoid detection: the first two bytes of the DLL are trash, something that is fixed later, as we see in the preceding image.
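The unpacking step is easy to model. Note the repair value: a PE DLL begins with the "MZ" magic, so restoring those bytes is the natural fix, but the exact value written back is our inference rather than something stated above. A sketch in Python:

```python
def decrypt_embedded_dll(blob):
    # Each embedded DLL is XOR-encrypted with the single byte 0x18
    decrypted = bytes(b ^ 0x18 for b in blob)
    # The first two bytes are deliberate trash to hinder detection;
    # restore the PE magic before loading (assumed to be "MZ")
    return b"MZ" + decrypted[2:]
```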

After decryption and loading, this DLL creates a mutex in the system and some pipes to communicate with the main malware. The main malware creates a pipe that the DLL later reads, and prepares strings such as the mutex string for the DLL.

Preparing the string for the DLL.

The DLL has dummy strings for these strings.

Creating the new mutex and relaunching the process.

This mutex is checked when the malware starts. The function returns 1 or 0, depending on whether it can open the mutex. This result is checked later; if the mutex can be opened, the malware skips the version check and does not use the two new exploits to elevate privileges.

Opening the new mutex to check if there is a need to run the exploits.

As with GandCrab Version 4.x and later, the malware then checks the version. If it is Vista or later, it tries to get the "TokenIntegrityLevel" class and relaunches the binary to elevate its privilege with a call to "ShellExecuteExW" using the "runas" verb. If the system is Windows XP, the code avoids that and continues in its normal flow.

This mutex is never created for the main malware; it is created for the DLL loaded using the exploit. To better understand this explanation, this IDA snippet may help:

Explaining the check of mutex and exploits.

This version changes the desktop wallpaper, which is created at runtime and is filled with the extension generated to encrypt the files and the user name of the machine. (The ransom note text or HTML file is named <extension_in_uppercase>_DECRYPT.<txt|html>.)

Creating the new wallpaper at runtime.

The username is checked with “SYSTEM.” If the user is “SYSTEM,” the malware puts the name “USER” in the wallpaper.

Checking the name of the user for the wallpaper.

The wallpaper is created in the %TEMP% folder with the name pidor.bmp.

Creating the wallpaper in the temp folder.

Here is an example of the strings used: the wallpaper name, the check of the user name, the format string used when it is another user, and the final string in the case of the SYSTEM user (with USER in uppercase).

The name of the wallpaper and special strings.

Finally, the wallpaper is set for any user other than SYSTEM:

Changing the wallpaper.

The malware detects the language of the system, decrypts the appropriate strings, and writes the ransom note in the language of the system.

Coverage

Customers of McAfee gateway and endpoint products are protected against the latest GandCrab versions. Detection names include Ran-Gandcrabv4! and many others.

An independent researcher, Twitter user Valthek, has also created several vaccines. (McAfee has verified that these vaccines are effective.) The version for GandCrab 4.x through 5.0.2 can prevent the files from being encrypted.

For Version 4.x, the deletion of shadow volumes cannot be avoided but at least the files themselves are kept safe.

For Version 5.x, encrypting the files can be avoided but not the creation and changing of the wallpaper, which the malware will still corrupt. The malware cannot create random extensions to encrypt the files but will prepare the string. Running the vaccine a second time removes the wallpaper if it is in the %TEMP% folder.

The vaccine has versions with and without persistence. The version with persistence creates a random filename in a special folder and writes a special random entry in the registry to run each time with the system. In this case, the machine will always be protected against this malware (at least in its current state of October 10, and perhaps in the future).

 

Indicators of compromise

These samples use the following MITRE ATT&CK™ techniques:

  • File deletion
  • System information discovery
  • Execution through API
  • Execution through WMIC
  • Application process discovery: to detect antimalware and security products as well as normal programs
  • Query registry: to get information about keys that the malware needs to create or read
  • Modify registry
  • File and directory discovery: to search for files to encrypt
  • Discovery of network shares to encrypt them
  • Encrypt files
  • Process discovery: enumerating all processes on the endpoint to kill some special ones
  • Create files
  • Elevation of privileges
  • Change wallpaper
  • Flood the network with connections
  • Create mutants

Hashes 

  • e168e9e0f4f631bafc47ddf23c9848d7: Version 5.0
  • 6884e3541834cc5310a3733f44b38910: Version 5.0 DLL
  • 2d351d67eab01124b7189c02cff7595f: Version 5.0.2
  • 41c673415dabbfa63905ff273bdc34e9: Version 5.0.2
  • 1e8226f7b587d6cd7017f789a96c4a65: DLL for 32-bit exploit
  • fb25dfd638b1b3ca042a9902902a5ff9: DLL for 64-bit exploit
  • df1a09dd1cc2f303a8b3d5097e53400b: botnet related to the malware (IP 92.63.197.48)

 


Breaking Azure Functions with Too Many Connections



For the most part, Have I Been Pwned (HIBP) runs very smoothly, especially given how cheaply I run many parts of the service. Occasionally though, I screw up and get something wrong that interrupts the otherwise slick operation and results in some outage. Last weekend was one such occasion and I want to explain what I got wrong, how you might get it wrong too and then, of course, how to fix it.

But first, some background: A few months ago I announced that Mozilla and AgileBits were baking HIBP into Firefox and 1Password (I'll call them "subscribers" in this post). They're leveraging the anonymity model described there to ensure their users can search for their identities without me ever knowing what they are. In addition, they can also sign those identities up to receive notifications if they're pwned in the future, again, via an anonymity model. The way the notifications work is that if an identity is signed up, it's just the first 6 characters of a SHA-1 hash of their email address that's sent to HIBP. If test@example.com is signed up, those first chars are 567159, and that's what gets stored in HIBP. When I then load a data breach, every email address in the source data is hashed and if any begin with 567159 then a callback needs to be sent to an API on the subscriber's end. Because there may be multiple email addresses beginning with that hash prefix, a collection of suffixes is sent in the callback and the subscriber immediately discards any that don't match the ones they're monitoring. For example, a callback would look like this:

{
  "hashPrefix": "567159",
  "breachName": "Apollo",
  "hashSuffixes": [
    "D622FFBB50B11B0EFD307BE358624A26EE",
    "..."
  ]
}

You can see the breach name here is "Apollo" and that's an incident of some 126M email addresses which you can read about in this WIRED piece. That's a very large breach and I loaded it not long after Mozilla went live with their service which attracted a very large number of people who signed up for notifications via their service. This led to two very large numbers amplifying each other to create the total number of callbacks I needed to send after the data load.
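The prefix/suffix split described above is easy to reproduce in a few lines of Python (uppercase hex is an assumption here; the exact casing is an implementation detail):

```python
import hashlib

def split_hash(email, prefix_len=6):
    # Subscribers store only the first 6 hex characters of SHA-1(email);
    # the remaining 34 travel as a suffix in the callback payload.
    digest = hashlib.sha1(email.encode("utf-8")).hexdigest().upper()
    return digest[:prefix_len], digest[prefix_len:]
```

The subscriber keeps only the prefix and, when a callback arrives, compares the received suffixes against the ones it monitors, discarding the rest.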

Now, the way the callbacks work is they all go into an Azure storage queue with each new message triggering a function execution. Azure Functions are awesome because they're "serverless" which, in theory at least, means no logical infrastructure boundaries and you simply pay per execution. Except there are boundaries, one of which caused the aforementioned outage.

So that's the background, let's get onto what happened when the notification queuing process began lining up those hash range callbacks:

Figure: Function invocations (green) and average execution time (orange).

What you're seeing here in green is the number of invocations of the function, and the orange line shows the average execution time. There's absolutely no action in the leadup to about 07:30 because there are no callbacks to send. However, that's when I started dumping them into a queue which in turn began triggering the function. You can see the steady ramp-up of executions as the Azure Function service spins up more and more instances to process the queue messages. You can also see the execution time flat-lining at 10 seconds all the way until just after 08:00. That's the first problem so let's talk about that:

The pattern above absolutely hammered Mozilla. I was looking at the logs as messages were queuing and noticed a heap of the callbacks being aborted at the 10 second timeout mark. This is the period that I set on the HttpClient object in the function so that a request isn't left hanging for too long if the receiver can't respond. I noticed it pretty quickly (this was the first load of a really big data set since they came on board so I was watching carefully) and reached out to them. They acknowledged the issue and logged a bug accordingly:

We are receiving a spike of HIBP callback requests which look up our subscribers by sha1 hash. The RDS node is maxing on CPU. :(

When I originally implemented the callback logic, I built a retry model: if the call fails, the message is put back into the queue with an initial visibility delay set to a random period between 1 and 10 minutes. The rationale was that if there were a bunch of failures like these ones here, it would be better to wait a while and ensure there's not a sudden influx of new requests at the same time. More than 5 consecutive failures under that model would cause the message to go to a "poison" queue where it would sit until it expired.
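That retry model boils down to a few lines (a sketch with invented names; the multiplier parameter anticipates the 10x change described below):

```python
import random

MAX_FAILURES = 5  # more than 5 consecutive failures -> poison queue

def requeue_delay_minutes(failure_count, multiplier=1):
    # Returns the visibility delay for a failed callback, or None when the
    # message should go to the poison queue instead. multiplier=10 models
    # stretching the 1-10 minute window to 10-100 minutes.
    if failure_count > MAX_FAILURES:
        return None
    return random.randint(1, 10) * multiplier
```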

In theory, this is all fine; Mozilla was getting overwhelmed but the messages weren't lost and they'd be resent later on. Just to take a bit of pressure off them, I pushed a change which multiplied that visibility period by 10x so the message would appear between 10 and 100 minutes later. (The additional period doesn't really matter that much; it would simply mean that subscribers to Mozilla's service would receive their notification up to about an hour and a half later.) In pushing that change, I moved the definitions of the minimum and maximum visibility periods to config settings so I could tweak them in the Azure portal later on without redeploying. And then stuff broke.

I didn't see it initially because it was Saturday morning and I was trying to have brekkie with the family (so yeah, it's really the kids' fault 😋). Then I saw this:

Figure: The resulting outage.

I restarted the app service, but that didn't help. I then synced the latest code base from GitHub to my "stage" deployment slot then swapped that with production and the service came back up.

Jeff Holan from the Azure Functions team noticed this and we then got down to troubleshooting. It didn't take long to work out the underlying issue:

In the consumption plan *today* there is a limit to 300 active network connections open at one time.  If you cross that limit it locks down the instance for a bit. I see that even though we scaled out your app to a few different instances, many of them crossed the threshold for connections (on one for example I see 174 connections at one time to IP Address [Mozilla's IP address]).

Ah. And because the function which is triggered by the queue and does the callbacks runs in the same service as the function which is triggered by HTTP requests (i.e. the HIBP API), everything got locked down together at the same time. In other words, I broke Mozilla which in turn broke Azure. Cool?

Let's move on to fixes because there are many simultaneous lessons to be learned here and the first is that timeout: 10 seconds is way too long. Here's a 4-hour graph of callbacks I snapped just after the original spike passed that window and as you can see, the average execution time is now less than 1 second:

Figure: Callback executions over a 4-hour window, with average execution time now under 1 second.

As such, I dropped the timeout down to 3 seconds. This whole thing was made much worse than it needed to be due to multiple connections sitting there open for 10 seconds at a time. Fail early, as they say, and the impact of that comes way down. And remember that this doesn't mean the callback is lost, it just means that HIBP will give it another go a bit later. It's still a window that's 3x longer than what's required on average and not only does it protect HIBP, it also protects my wallet because functions are charged based on the amount of time they spend running.

Next up is that 300 active network connections limit and Azure has now doubled it. Not just for me, but Azure has just begun rolling out updates that take the limit to 600 and it's just coincidental that my problems occurred a few days before the change. I feel much better about this in terms of the whole philosophy of "serverless"; remember, the value proposition of this approach is meant to be that you escape from the bounds of logical infrastructure and you pay per execution instead. However, there are still points where you hit those logical limits and anything that can raise that bar even further is a good thing.

Speaking of which, my own code could have been better. Jeff pointed out that it's important to reuse connections across executions by using a static client. I wasn't doing that and was missing a dead simple performance optimisation that's clearly documented in How to manage connections in Azure Functions:

DO NOT create a new client with every function invocation.

So now the code looks precisely like the example in the link above:

// Create a single, static HttpClient
private static HttpClient httpClient = new HttpClient();

public static async Task Run(string input)
{
    var response = await httpClient.GetAsync("http://example.com");
    // Rest of function
}

Then there's the way I caused traffic to ramp in the first place and as of Saturday, that was basically "dump truck loads of messages in the queue in the space of a few minutes". This causes a massive sudden influx of traffic which can easily overwhelm the recipient of the callbacks and I totally get how that caused Mozilla headaches (certainly I've been on the receiving end of similar traffic patterns myself). The change I've made here is to randomise the initial visibility delay when the message first goes into the queue such that it's between 0 and 60 minutes. When a subscriber is monitoring a large number of hash prefixes that's still going to hit them pretty hard, but it won't be quite the same sudden hit as before and again, a little delay really doesn't matter that much anyway. Plus, as more subscribers use this particular service, the randomised visibility period will ensure a more even distribution of who receives callbacks when, regardless of the order in which messages are inserted into the queue.
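The smearing trick itself is simple to sketch. Azure Storage queues accept an initial visibility delay at enqueue time, so the producer just picks a random per-message delay; here's a minimal Python illustration (the function name and the 1,000-message sample are hypothetical, for demonstration only):

```python
import random

def initial_visibility_delay_seconds(max_minutes=60):
    """Pick a per-message delay between 0 and 60 minutes so that a bulk
    enqueue is smeared out over the hour instead of firing all at once."""
    return random.randint(0, max_minutes * 60)

# Each queued callback message gets its own independent delay; the enqueue
# call then passes this value as the message's initial visibility delay.
delays = [initial_visibility_delay_seconds() for _ in range(1000)]
assert all(0 <= d <= 3600 for d in delays)
```

Because each delay is drawn independently, a burst of messages enqueued within minutes becomes roughly uniform delivery over the following hour.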

So that's what happened, what I've learned and what I've changed. It's a reminder of the boundaries of serverless (even when that 300 connection limit doubles) and that there are usually easy optimisations around these things. I could go further still: I could put callbacks in a separate app service and isolate them from the API other people are actually hitting. I'm certainly not ruling that out for the future, but right now everything detailed above should massively increase the scope of the service and isolating things further is almost certainly premature optimisation at this point.

Lastly, I've been marching forward on a mission to move more and more code into Azure Functions. As of just yesterday, every API that's now hit at scale sits in a function and I've used Cloudflare workers to ensure requests to the original URLs simply get diverted to the new infrastructure. As such, I'm going to end this blog post with a momentous occasion: I've just scaled down the HIBP App Service that runs the website to half the size and, most pleasingly, half the cost 😎


Top 5 ThreatConnect Resources for Malware Analysis

Malware Analysis. Some may say it's the most exciting part of the job, right? You have something you know is bad. What's it do? How's it run? Where'd it come from? These are questions we all want to know the answers to. And because technology is a beautiful thing, we have the ability to find out these answers in a safe way.

Over the past year or so, we've released some very useful information on how to use ThreatConnect to enable Malware Analysis. We've compiled it for you in the hope that it will help you leverage the Platform as a centralized repository for powering all of your Malware Analysis needs.

First, though, some vocabulary clarification to make sure we are all on the same page:

Automated Malware Analysis (AMA): As you may guess from the name, AMA tools take the manual systems often associated with malware analysis and package them into a solution that uses both static and dynamic analysis methods to detect existing and potential new malware.

Sandbox: This is what you can consider your safe space. It's isolated from other networks, and is a spot where you're able to run known malicious or potentially malicious code or commands without affecting anything else.

Malware Vault: This is what we call the restricted spot within the ThreatConnect Platform where malware can be uploaded and stored. In order for the malware to be uploaded successfully, there are security controls in place. These security controls include things like user access control and encryption requirements.

Playbooks: A critical piece of ThreatConnect, Playbooks allow for the automation and orchestration of various actions and decisions between ThreatConnect and other security technology your organization may be using. As you'll find out later, they can also be used to power integrations.

With that out of the way, let's move on. 

Below are the top 5 resources we have developed to help analysts better utilize ThreatConnect for daily activities. 

  1. How to Upload Malware to ThreatConnect for Isolated Malware Analysis
    Found on the ThreatConnect Knowledge Base, this is a quick way to get your malware into ThreatConnect as the first step for isolated analysis. Due to (obvious) security concerns, we provide a safe way to complete this task and get the malware into the ThreatConnect Malware Vault. Once it's in the vault, you can start the process of digging deeper into the malicious executable, and make associations with your available intelligence. A step-by-step walkthrough (with screenshots!) can be found here: Uploading Malware to ThreatConnect 
  2. Conducting VMRay Malware Analysis with Playbooks
    This Playbook takes a suspicious or malicious file sample from ThreatConnect's Malware Vault and transmits it to an AMA solution, in this case, VMRay, submitting it for dynamic and static analysis. It turns a manual, multiple-click process into a simple, one-click process, saving you critical amounts of time. Learn how to set up a Playbook here: VMRay and ThreatConnect for Malware Analysis
  3. Integrating ThreatConnect with Maltego
    Maltego is an open source tool for intelligence gathering. So, what's that mean? What it means is that you're able to run this interactive data mining tool for link analysis to draw correlations between malware you're examining and information from various sources. Given the massive amounts of intelligence available in ThreatConnect, it's a no-brainer to tie these tools together should you be using both in-house for better malware analysis. An integration guide for getting the two to play nicely and make your life much easier can be found here: Integrating ThreatConnect and Maltego
  4. Building an Enterprise Malware Analysis Service in ThreatConnect
    APIs are our friend. They provide a standardized (in most cases) way for a set of disparate technologies to work together. This blog is part of our "Playbook Fridays" blog series, and has a wider theme of how you can use ThreatConnect Playbooks to manage security APIs. Good news is that the practical example is how, by using ThreatConnect Playbooks, you can use APIs to map out an enterprise-grade malware analysis service. Adam Vincent, the author and ThreatConnect's CEO, takes it one step further and shows how you can create a custom dashboard to monitor the malware analysis service you set up. See it for yourself here!
    Above is a screen capture of a dashboard that allows us to quickly see how the Malware Analysis Service is being leveraged (top left) and how the external enrichment services are being utilized as part of the service (top right). On the bottom left, you can see who is using the Malware Analysis Service and who the heavy users are. On the bottom right is a list of the most frequently observed file indicators.
  5. The Diamond Model: An Analyst's Best Friend
    The Diamond Model, created by Sergio Caltagirone, Chris Betz, and ThreatConnect's own Andy Pendergast, is an analytic methodology for intrusion analysis. It's meant to guide analysts as they investigate an intrusion, from pre- to post-compromise, to better understand the activity targeting any given victim or group of victims. Understanding how the information uncovered during malware analysis is used by other members of the team will provide great context for analysts. Check out the webcast on our YouTube channel.

Join us for a quick 30-minute bi-weekly walkthrough of the ThreatConnect Platform. This is a live demo with one of our security engineers, who will be happy to answer any questions about how ThreatConnect will fit into your current security program.
The post Top 5 ThreatConnect Resources for Malware Analysis appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Cisco Webex Network Recording Player and Cisco Webex Player Remote Code Execution Vulnerabilities

Multiple vulnerabilities in the Cisco Webex Network Recording Player for Microsoft Windows and the Cisco Webex Player for Microsoft Windows could allow an attacker to execute arbitrary code on an affected system.

The vulnerabilities exist because the affected software improperly validates Advanced Recording Format (ARF) and Webex Recording Format (WRF) files. An attacker could exploit these vulnerabilities by sending a user a malicious ARF or WRF file via a link or an email attachment and persuading the user to open the file by using the affected software. A successful exploit could allow the attacker to execute arbitrary code on the affected system.

Cisco has released software updates that address these vulnerabilities. There are no workarounds that address these vulnerabilities.

This advisory is available at the following link:
https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20181003-webex-rce


Security Impact Rating: High
CVE: CVE-2018-15408, CVE-2018-15409, CVE-2018-15410, CVE-2018-15411, CVE-2018-15412, CVE-2018-15413, CVE-2018-15415, CVE-2018-15416, CVE-2018-15417, CVE-2018-15418, CVE-2018-15419, CVE-2018-15420, CVE-2018-15431

How the McAfee Rotation Program is Providing Opportunities

By: Darius, Sales & Marketing Rotation Engineer

“The sky is the limit.” It’s a phrase I heard frequently growing up, in school, college and university. To me, the phrase means there are endless opportunities. So, my nine-year-old self desired to be a racecar driver, my freshman year ambition was to be a developer and then my “final” plan was to work in power transmission/distribution or renewable energy.

After graduation, with all these possibilities, I still didn’t really have a clear picture of what I wanted to do. I focused on volunteering and earning great grades in school, but like many of my peers, I hadn’t really prepared for life after college. I hadn’t thought seriously enough about my career path.

What I did know was this: I wanted to use my technical skills while at the same time learn the business side of an industry. It was my broad interest that led me to the McAfee Rotation Program (MRP) in the summer of 2017.

Wide-ranging Experience

The MRP is perfect for anyone who is attracted to the cybersecurity industry but wants experience in a range of business units. I’m gaining extensive knowledge of cybersecurity products and services, developing my technical and business acumen, building communication skills and expanding my network.

The MRP consists of five four-month placements known as “rotations,” spanning our Professional Services, Pre-Sales Engineering, Security Operations Center, Support and Sales Operations units.

During the first three of my five rotations, I’ve had the opportunity to build my brand, learn McAfee values and business strategies, make a real impact on the departments I’ve worked with and learn from outstanding mentors.

No Fear!

We innovate without fear—this is an important McAfee value. The MRP gave me several opportunities to become a trailblazer and make a real impact with fresh ideas.

During my first rotation in Professional Services, I helped to accelerate the sales cycle with a new opportunity tracking system that included weekly reports. My line management and I could see an instant difference with the new system, which was most satisfying. I am grateful to work for a company that encourages innovation and creative thinking.

Rapid Learning

In my second rotation with Pre-Sales Engineering, I took on detailed product knowledge in multiple solutions through the projects I worked on in my day-to-day job experience.

With mentorship from senior sales engineers, I put my new knowledge and understanding to the test when I demonstrated McAfee’s Endpoint Security and Threat Intelligence Exchange solutions to a customer. This also gave me the chance to work with technical and commercial departments simultaneously.

My rapid acquisition of knowledge, understanding and experience continued in my third rotation with our Security Operations Center (SOC). In the SOC, I have learned first-hand how vital security is and gained a broad spread of new skills and knowledge, applicable across all business units.

Opportunities Abound

The MRP has created opportunities that I could never have imagined. “The sky is the limit” has taken on a whole new meaning for me.

During my first year at McAfee, I have grown professionally, become a more well-rounded individual and experienced more about business operations than many people do throughout their entire career. I can’t recommend the McAfee Rotation Program enough. It’s given me confidence that I can achieve great things—but I’m also homing in on what I want to do with my career and where my skills are best suited. I can’t wait to see what the next two rotations bring!

For more stories like this, follow @LifeAtMcAfee on Instagram and on Twitter @McAfee to see what working at McAfee is all about.

Interested in the McAfee Rotation Program? Find out more here.

The post How the McAfee Rotation Program is Providing Opportunities appeared first on McAfee Blogs.

As Search Engines Blacklist Fewer Sites, Users More Vulnerable to Attack

Turns out, it’s a lot harder for a website to get blacklisted than one might think. A new study found that while the number of malware-infected websites remained steady in Q2 of 2018, search engines like Google and Bing are blacklisting only 17 percent of the infected websites they identify. The study analyzed more than six million websites with malware scanners to arrive at this figure, noting that there was also a six percent decrease in websites being blacklisted compared to the previous year.

Many internet users rely on these search engines to flag malicious websites and protect them as they surf the web, but this decline in blacklisting leaves many users just one click away from a potential attack. An infected site that continues to appear in search results can cause serious disruption, including a sharp decline in customer trust. Internet users need to be more vigilant than ever now that search engines are dropping the ball on blacklisting infected sites, especially considering that total malware reached an all-time high in Q2, representing the second highest attack vector from 2017-2018, according to the recent McAfee Labs Threats Report.

Another unsettling finding from the report was that incidents of cryptojacking have doubled in Q2 as well, with cybercriminals continuing to carry out both new and traditional malware attacks. Cryptojacking, the method of hijacking a browser to mine cryptocurrency, saw quite a sizable resurgence in late 2017 and has continued to be a looming threat ever since. McAfee’s Blockchain Threat Report discovered that almost 30,000 websites host the Coinhive code for mining cryptocurrency with or without a user’s consent—and that’s just from non-obfuscated sites.

And then, of course, there are just certain search terms that are more dangerous and leave you more vulnerable to malware than others. For all of you pop culture aficionados, be careful which celebrities you digitally dig up gossip around. For the twelfth year in a row, McAfee researched famous individuals to assess their online risk and which search results could expose people to malicious sites, with this year’s Most Dangerous Celebrity to search for being “Orange is the New Black’s” Ruby Rose.

So, how can internet users protect themselves when searching for the knowledge they crave online, especially considering many of the most popular search engines simply aren’t blacklisting as many bot malware infected sites as they should be? Keep these tips in mind:

  • Turn on safe search settings. Most browsers and search engines have a safe search setting that filters out any inappropriate or malicious content from showing up in search results. Other popular websites like iTunes and YouTube have a safety mode to further protect users from potential harm.
  • Update your browsers consistently. A crucial security rule of thumb is always updating your browsers whenever an update is available, as security patches are usually included with each new version. If you tend to forget to update your browser, an easy hack is to just turn on the automatic update feature.
  • Be vigilant of suspicious-looking sites. It can be challenging to successfully identify malicious sites when you’re using search engines but trusting your gut when something doesn’t look right to you is a great way of playing it safe.
  • Check a website’s safety rating. There are online search tools available that will analyze a given URL in order to ascertain whether it’s a genuinely safe site to browse or a potentially malicious one infected with bot malware and other threats.
  • Browse with security protection. Utilizing solutions like McAfee WebAdvisor, which keeps you safe from threats while you search and browse the web, or McAfee Total Protection, a comprehensive security solution that protects devices against malware and other threats, will safeguard you without impacting your browsing performance or experience.

To keep abreast of the latest consumer and mobile security threats, be sure to follow me and @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.

The post As Search Engines Blacklist Fewer Sites, Users More Vulnerable to Attack appeared first on McAfee Blogs.

How To Spot Tech Support Scams

When something goes wrong with your computer or devices, it can cause a panic. After all, most of us depend on technology not only to work and connect with others, but also to stay on top of our daily lives. That’s why tech support scams are often successful. They appear to offer help when we need it the most. But falling for these scams can put your devices, data, and money at even greater risk.

Although support scams have been around almost as long as the internet, these threats have increased dramatically over the last couple of years, proving to be a reliable way for scammers to make a quick buck.

In fact, the Internet Crime Complaint Center (IC3) said that it received nearly 11,000 tech support related complaints in 2017, leading to losses of $15 million, 90% higher than the losses reported in 2016. Microsoft alone saw a 24% increase in tech scams reported by customers in 2017 over the previous year, with 15% of victims saying they lost money.

Often, scammers convince users that there is a problem with their computer or device by delivering pop-up error messages. These messages encourage the user to “click” to troubleshoot the problem, which can download a piece of malware onto their machine, or prompt them to buy fake security software to fix the issue. In some cases, users wind up downloading ransomware, or paying $200 to $400 for fake software to fix problems they didn’t actually have.

And, in a growing number of instances, scammers pose as legitimate technology companies, offering phony support for real tech issues. Some even promote software installation and activation for a fee, when the service is actually provided for free from the software provider. They do this by posting webpages or paid search results using the names of well-known tech companies. When a user searches for tech help, these phony services can appear at the top of the search results, tricking people into thinking they are the real deal.

Some cybercriminals have even gone so far as to advertise fake services on legitimate online forums, pretending to be real tech companies such as Apple, McAfee, and Amazon. Since forum pages are treated as quality content by search engines, these phony listings rank high in search results, confusing users who are looking for help.

The deception isn’t just online. More and more computer users report phone calls from cybercrooks pretending to be technology providers, warning them about problems with their accounts, and offering to help resolve the issue for a fee. Or worse, the scammer requests access to the victim’s computer to “fix the problem”, with the hopes of grabbing valuable data, such as passwords and identity information. All of these scams leave users vulnerable.

Here’s how to avoid support scams to keep your devices and data safe:

  • If you need help, go straight to the source—Type the address of the company you want to reach directly into the address bar of your browser—not the search bar, which can pull up phony results. If you have recently purchased software and need help, check the packaging the software came in for the correct web address or customer support line. If you are a McAfee customer, you can always reach us at https://service.mcafee.com.
  • Be suspicious—Before you pay for tech support, do your homework. Research the company by looking for other customer’s reviews. Also, check to see if your technology provider already offers the support you need for free.
  • Be wary of callers asking for personal information, especially if they reach out to you first—Situations like this happen all the time, even to institutions like the IRS. McAfee’s own policy is to answer support questions via our website only, and if users need assistance, they should reach out here directly. Never respond to unsolicited phone calls or pop-up messages, warning you about a technical issue, and never let anyone take over your computer or device remotely.
  • Surf Safe—Sometimes it can be hard to determine if search results are safe to click on, or not. Consider using a browser extension that can warn you about suspicious sites right in your search results, and help protect you even if you click on a dangerous link.
  • Keep informed—Stay up-to-date on the latest tech support scams so you know what to watch out for.

Looking for more mobile security tips and trends? Be sure to follow @McAfee_Home on Twitter, and like us on Facebook.

The post How To Spot Tech Support Scams appeared first on McAfee Blogs.

Microsoft WindowsCodecs.dll SniffAndConvertToWideString Information Leak Vulnerability

This vulnerability was discovered by Marcin Noga of Cisco Talos.

Today, Cisco Talos is disclosing a vulnerability in the WindowsCodecs.dll component of the Windows operating system.

WindowsCodecs.dll is a component library that exists in the implementation of Windows Imaging Component (WIC), which provides a framework for working with images and their data. WIC makes it possible for independent software vendors (ISVs) and independent hardware vendors (IHVs) to develop their own image codecs and get the same platform support as standard image formats (ex. TIFF, JPEG, PNG, GIF, BMP and HDPhoto).

Vulnerability Details

TALOS-2018-0644 - Microsoft WindowsCodecs.dll SniffAndConvertToWideString Information Leak Vulnerability

TALOS-2018-0644 (CVE-2018-8506) is an exploitable information leak vulnerability that exists in the SniffAndConvertToWideString function of WindowsCodecs.dll, version 10.0.17134.1. A specially crafted JPEG file can cause the library to return uninitialized memory, resulting in an information leak. An attacker can send or share a malformed JPEG file to trigger this vulnerability.

This vulnerability is present in the WindowsCodecs DLL library — an implementation of Windows Imaging Component (WIC) — which provides an extensible framework for working with images and image metadata.

An attacker can leak heap memory due to improper string null termination after calling `IWICImagingFactory::CreateDecoderFromFilename` on a JPEG file with malformed metadata.

Additional details can be found here.

Affected versions

The vulnerability is confirmed in the WindowsCodecs.dll, version 10.0.17134.1, but it may also be present in the earlier versions of the product. Users are advised to apply the latest Windows update.

Discussion

WIC enables developers to perform image processing operations on any image format through a single, consistent set of common interfaces, without requiring prior knowledge of specific image formats. It also provides an extensible architecture for image codecs, pixel formats, and metadata, with automatic run-time discovery of new formats.

It's recommended that developers use operating system components, such as Windows Imaging Component, that are updated frequently so they do not have to apply any specific updates to their own products.

Information leak vulnerabilities such as this one are dangerous because the uninitialized memory returned to the attacker can contain sensitive data, such as heap addresses or fragments of other data structures. Leaked heap addresses, in particular, can help an attacker defeat mitigations like address space layout randomization (ASLR) as part of a larger exploit chain. Developers should be aware of these vulnerabilities' potentially damaging consequences.

Coverage

The following SNORTⓇ rules detect attempts to exploit this vulnerability. Please note that additional rules may be released at a future date and current rules are subject to change pending additional vulnerability information. For all current rule information, please refer to your Firepower Management Center or Snort.org.

Snort Rules: 47430 - 47433

Why Apple must be looking into using blockchain

Everyone who can is looking into using blockchain, and Apple is no exception, though it will be a long time before we see any consumer-facing implementations.

Apple looks at lots of technologies

If it’s on the Gartner Hype Cycle, you can bet a few bucks Apple is looking at it.

That’s why I think it will eventually introduce a 3D printer that works in conjunction with ARKit (unverified prediction), and also why it must be thinking about how to use blockchain.

To read this article in full, please click here

Securing the Social Security Number to Protect U.S. Citizens

With cyber criminals having more flexibility in funding and operations than ever before, U.S. citizens are vulnerable not only to breaches of security but also of privacy. In the United States, no article of personal information is meant to be more private or secure than the Social Security Number (SSN). This is for good reason. The SSN has become a common identifier in the U.S. and is now integrated into many identification processes across different institutions.

The SSN is also the gateway to all sorts of other personal information – health records, financial positions, employment records, and a host of other purposes for which the SSN was never designed but has come to fulfill. What do all these pieces of information have in common? They are meant to be private.

Unfortunately, the unforeseen overreliance on the SSN as an identifier has left citizens’ identities vulnerable. The reality is that the SSN can easily be stolen and misused. It is a low-risk, high-reward target for cybercriminals that is used for fraudulent activities and also sold in bulk on the cybercrime black market. This has resulted in major privacy and security vulnerabilities for Americans, with some estimates saying that between 60 percent and 80 percent of all SSNs have been stolen. The Equifax and OPM breaches alone, for example, exposed millions of SSNs.

This is not a new problem.

Twenty-five years ago, computer scientists voiced concerns about sharing a single piece of permanent information as a means of proving a person’s identity. The issue has only recently gained national attention due to major breaches where cyber criminals were able to access millions of consumers’ personal online information. So, why hasn’t there been any significant measure put in place to safeguard digital identities?

A major reason for a lack of action on this issue has been a lack of incentives or forcing functions to change the way identity transactions work. But it’s time for policymakers to modernize the systems and methods that identify citizens and enable citizens to prove their identity with minimal risk of impersonation and without overtly compromising privacy.

The good news is that the U.S. has the technology pieces to put in place a high-quality and high security identity solution for U.S. citizens.

There are reasonable and near-term steps we can take to modernize and protect the Social Security Number to create better privacy and security in identification practices. McAfee and The Center for Strategic and International Studies (CSIS) recently released a study on Modernizing the Social Security Number with the aim of turning the Social Security Number into a secure and private foundation for digital credentials. The report’s ultimate recommendation is to replace the traditional paper Social Security card with a smart card — a plastic card with an embedded chip, like the credit cards that most people now carry. Having a smart card rather than a paper card would make the SSN less vulnerable to misuse.

A smart card is a viable solution that already has the infrastructure in place to support it. However, there are other potential solutions that must not be overlooked, such as biometrics. Biometrics measure personal features such as voice, fingerprint, iris and hand motions. Integrating biometrics into a system that relies on two-factor authentication would provide a security and privacy threshold that would make it very difficult for cybercriminals to replicate.

What is most critical, however, is that action is taken. This is an issue that deserves immediate attention and action. Every day this matter remains unresolved is another day cyber criminals continue their efforts to compromise consumer data in order to impersonate those whose data has been breached.

With the Social Security Number serving as the ultimate identifier, isn’t it time that we modernize it to address today’s evolving privacy vulnerabilities? Modernizing the SSN will help with authentication, will provide more security, and will help safeguard individual privacy. Modernizing the SSN must be a high priority for our policymakers.

The post Securing the Social Security Number to Protect U.S. Citizens appeared first on McAfee Blogs.

What the heck is it with Windows updates?

To help make life better for you, my loyal readers, I suffer by running Windows 7 and 10 on two harmless — never hurt anyone in their lives — PCs. Well, I did. But, in the last week I ran into not one, but two, showstopper update bugs.

First, on Windows 10, I was one of those “lucky” people who had files vaporize when I “updated” to Windows 10 October 2018 Update (version 1809). Because I only use Windows for trivial tasks, I didn’t lose anything valuable when the patch decided to erase everything in the My Documents folder.

Somehow, I think most Windows users use Windows for more important work than I do. I hope you have current backups. At least Computerworld’s Woody Leonhard has some good news: You can get those deleted files back.

To read this article in full, please click here