Monthly Archives: July 2018

GandCrab Ransomware Puts the Pinch on Victims

Update: On August 9 we added our analysis of Versions 4.2.1 and 4.3. 

The GandCrab ransomware first appeared in January and has been updated rapidly during its short life. It is the leading ransomware threat. The McAfee Advanced Threat Research team has reverse engineered Versions 4.0 through 4.3 of the malware.

The first versions (1.0 and 1.1) of this malware had a bug that left the keys in memory because the author did not correctly use the flags in a crypto function. One antimalware company, working with the Romanian police and Europol, released a free decryption tool.

The hack was confirmed by the malware author in a Russian forum:

Figure 1. Confirmation by the author of the hack of GandCrab servers.

The text apologizes to partners for the hack and temporarily shuts down the program. It promises to release an improved version within a few days.

The second version of GandCrab quickly appeared and improved the security of the malware’s servers against future counterattacks. The first versions of the ransomware had a list of file extensions to encrypt, but the second and later versions replaced this list with an exclusion list: all files except those on the list are encrypted.

Old versions of the malware used RSA and AES to encrypt the files, and communicated with a control server to send the RSA keys locked with an RC4 algorithm.

The GandCrab author has moved quickly to improve the code and has added comments to mock the security community, law agencies, and the NoMoreRansom organization. The malware is not professionally developed and usually has bugs (even in Version 4.3), but the speed of changes is impressive and increases the difficulty of combating it.

Entry vector

GandCrab uses several entry vectors:

  • Remote desktop connections with weak security or bought in underground forums
  • Phishing emails with links or attachments
  • Trojanized legitimate programs containing the malware, or downloading and launching it
  • Exploit kits such as RigEK and others

The goal of GandCrab, as with other ransomware, is to encrypt all or many files on an infected system and demand payment to unlock them. The developer requires payment in cryptocurrency, primarily DASH (because it is more difficult to track) or Bitcoin.

The malware is usually but not always packed. We have seen variants in .exe format (the primary form) along with DLLs. GandCrab is effectively ransomware as a service; its operators can choose which version they want.

Version 4.0

The most important change in Version 4.0 is in the algorithm used to encrypt files. Earlier versions used RSA and AES; the latest versions use Salsa20. The main reason is speed: RSA is a powerful but slow algorithm, while Salsa20 is quick and its implementation is small.

The ransomware checks the language of the system and will not drop the malicious payload if the infected machine operates in Russian or certain other former Soviet languages:

Figure 2. Checking the language of the infected system.
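A sketch of this kind of check, in Python for illustration. The real malware queries the Windows language APIs; 0x419 is the Windows language ID for Russian, and the other IDs below are assumed stand-ins for the other excluded languages:

```python
# Illustrative locale check like the one GandCrab performs before infecting.
# 0x419 is the Windows language ID for Russian; the other entries are
# assumptions standing in for "certain other former Soviet languages."
EXCLUDED_LANGUAGE_IDS = {
    0x419,  # Russian
    0x422,  # Ukrainian (assumed)
    0x423,  # Belarusian (assumed)
    0x43F,  # Kazakh (assumed)
}

def should_skip_infection(language_id: int) -> bool:
    """Return True if the system language is on the do-not-infect list."""
    return language_id in EXCLUDED_LANGUAGE_IDS
```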

GandCrab encrypts any file that does not appear on the following file-extension exclusion list:

The ransomware does not encrypt files in these folders:

GandCrab leaves these files unencrypted:
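Taken together, these exclusion checks amount to logic like the following sketch. The extensions, folders, and filenames below are made-up examples, since the actual lists are not reproduced here:

```python
from pathlib import PureWindowsPath

# Hypothetical stand-ins: the actual exclusion lists were not reproduced above.
EXCLUDED_EXTENSIONS = {".exe", ".dll", ".lnk", ".sys"}
EXCLUDED_FOLDERS = {"windows", "program files"}
EXCLUDED_FILENAMES = {"desktop.ini", "ntldr"}

def should_encrypt(path: str) -> bool:
    """Return True unless the file matches any exclusion rule."""
    p = PureWindowsPath(path.lower())
    if p.suffix in EXCLUDED_EXTENSIONS:
        return False                      # excluded extension
    if p.name in EXCLUDED_FILENAMES:
        return False                      # excluded filename
    if any(part in EXCLUDED_FOLDERS for part in p.parts):
        return False                      # file lives in an excluded folder
    return True
```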

The ransomware generates a pair of RSA keys before encrypting any file. The public key encrypts the Salsa20 key and random initialization vector (IV, or nonce) generated later for each file.

The encryption procedure generates a random Salsa20 key and a random IV for each file, encrypts the file with them, and then encrypts this key and IV with the RSA public key created at the beginning. The RSA private key remains in the registry, itself encrypted with another Salsa20 key and IV that are in turn encrypted with an RSA public key embedded in the malware.
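The per-file scheme can be sketched as follows. This is an illustration only: a hash-based toy keystream stands in for Salsa20, the `wrap_key`/`unwrap_key` callables stand in for RSA encryption with the generated key pair, and the 40-byte appended field is an assumed layout, not the malware’s actual on-disk format:

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy hash-based stream cipher standing in for Salsa20 (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "little")).digest()
        counter += 1
    return out[:length]

def encrypt_file_bytes(plaintext: bytes, wrap_key) -> bytes:
    key, nonce = os.urandom(32), os.urandom(8)  # fresh key and IV per file
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    # Wrap the per-file key and IV (wrap_key stands in for RSA public-key
    # encryption) and append the wrapped blob, growing the original file.
    return ct + wrap_key(key + nonce)

def decrypt_file_bytes(blob: bytes, unwrap_key) -> bytes:
    ct, wrapped = blob[:-40], blob[-40:]        # assumed 40-byte appended field
    material = unwrap_key(wrapped)              # needs the RSA private key
    key, nonce = material[:32], material[32:]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))
```

Without the private key to unwrap the appended blob, the per-file key and IV cannot be recovered, which is what makes the scheme strong.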

After encryption, the file key and IV are appended to the contents of the file in a new field of 8 bytes, increasing the original file size.

This method makes GandCrab very strong ransomware: without the private key matching the embedded public key, it is not possible to decrypt the files, because without the new RSA private key we cannot decrypt the Salsa20 key and IV that are appended to each file.

Finally, the ransomware deletes all shadow volumes on the infected machine and deletes itself.

Version 4.1

This version retains the Salsa20 algorithm, fixes some bugs, and adds a new function. This function picks entries at random from a big list of domains, creates a final path, and sends the encrypted information gathered from the infected machine. We do not know why the malware does this; the random procedure usually creates paths to remote sites that do not exist.

For example, one sample of this version has the following hardcoded list of encrypted domains. (This is only a small part of this list.)

The ransomware selects one domain from the list and creates a random path with one of these words:

Later it randomly chooses another word to add to the URL it creates:

Afterward it makes a file name, randomly choosing three or four combinations from the following list:

Finally the malware concatenates the filename with a randomly chosen extension:

At this point, the malware sends the encrypted information using POST to the newly generated URL for all domains in the embedded list, repeating the process of generating a path and name for each domain.
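The URL-generation routine amounts to something like this sketch; the domains and word lists below are invented stand-ins for the hardcoded lists described above:

```python
import random

# Invented stand-ins for the malware's hardcoded, encrypted lists.
DOMAINS = ["example-shop.de", "example-blog.com"]
PATH_WORDS = ["news", "content", "includes"]
SUBPATH_WORDS = ["tmp", "images", "static"]
NAME_PARTS = ["data", "info", "file", "log"]
EXTENSIONS = [".php", ".jpg", ".png", ".bmp"]

def build_beacon_url(domain: str, rng: random.Random) -> str:
    """Build one random URL: domain, path word, subpath word, name, extension."""
    path = rng.choice(PATH_WORDS)
    subpath = rng.choice(SUBPATH_WORDS)
    # Three or four randomly chosen name fragments, concatenated.
    name = "".join(rng.choice(NAME_PARTS) for _ in range(rng.randint(3, 4)))
    ext = rng.choice(EXTENSIONS)
    return f"http://{domain}/{path}/{subpath}/{name}{ext}"

# The malware would then POST the encrypted system profile to
# build_beacon_url(d, rng) for every domain d in its embedded list.
```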

Another important change in this version is the attempt to obfuscate the calls to functions such as VirtualAlloc and VirtualFree.

Figure 3. New functions to obfuscate the code.

Version 4.1.2

This version appeared with some variants. Two security companies had released a vaccine to prevent infections by previous versions. The vaccine involved creating a special file with a special name in a particular folder before the ransomware infects the system. If this file exists, the ransomware terminates without dropping its payload.

The file takes its name from the volume serial number of the Windows system drive. The malware performs a simple calculation on this value to derive the name and creates the file in the %appdata% or %program files% folder (depending on the OS) with the extension .lock.

Figure 4. Creating the special file.
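In outline, the vaccine’s naming logic looks like the following; the arithmetic shown is a hypothetical stand-in for the malware’s actual calculation:

```python
def lock_file_name(volume_serial: int) -> str:
    """Derive the vaccine filename from the drive's volume serial number.

    The mangling below is an assumed stand-in; the real malware performs
    its own simple calculation on the serial number.
    """
    mangled = (volume_serial * 0x10000 + volume_serial) & 0xFFFFFFFF  # assumed math
    return f"{mangled:08X}.lock"
```

A vaccine tool would create this file in %appdata% or %program files% (depending on the OS) before infection, so the ransomware exits early.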

The GandCrab author reacted quickly, changing the operation to make this value unique and using the Salsa20 algorithm with an embedded key and IV containing text referring to these companies. The text and the calculated value were combined to form the filename; the extension remained .lock.

One of the security companies responded with a free tool that creates this file for all users, but within hours the author released a second Version 4.1.2 with the text changed. The malware no longer creates any file; instead it creates a mutex object with this special name. The mutex name keeps the .lock extension.

Figure 5. Creating a special mutex instead of a special lock file.

The vaccine does not work with the second Version 4.1.2 or with Version 4.2, but it does work with previous versions.

Version 4.2

This version has code to detect virtual machines and stop running the ransomware within them.

It checks the number of remote drives, compares the size of the running ransomware’s filename against certain values, installs a VectoredExceptionHandler, and checks for VMware virtual machines using the old virtual-port trick in a small piece of encrypted shellcode:

Figure 6. Detecting VMware.

The malware calculates the free space of the main Windows installation drive and finally computes a value.

If this value is correct for the ransomware, it runs normally. If the value is less than 0x1E, it waits one hour before starting the normal process (blocking automated analysis systems that are not prepared for long sleeps). If the value is greater than 0x1E, the ransomware ends its execution.
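That decision logic can be summarized in a short sketch (how the score itself is computed from the checks above is not reproduced here):

```python
import time

def act_on_score(score: int, sleep=time.sleep) -> str:
    """Decide what to do based on the environment score described above."""
    if score < 0x1E:
        sleep(3600)     # stall one hour to outlast automated sandboxes
        return "run"
    if score > 0x1E:
        return "exit"   # environment looks virtual: quit without encrypting
    return "run"        # the expected value: proceed normally
```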

Figure 7. Checking for virtual machines and choosing a path.

Version 4.2.1

This version appeared August 1. The change from the previous version is a text message addressed to the company that made the vaccine, along with a link to the source code of a zero-day exploit that attacks one of this company’s products. The code is a Visual Studio project and can easily be recompiled; its folder names appear in Russian after the project is loaded in Visual Studio.

Version 4.3

This version also appeared August 1 and has several changes from previous versions.

  • It removes the code that detects virtual machines, along with a few other oddities in Version 4.2. That code had some failure points; some virtual machines could not be detected.
  • It implemented an exploit against one product of the antivirus company that made the vaccine against Version 4.0 through the first release of Version 4.1.2. This code appears after the malware encrypts files and before it deletes itself.

Figure 8. Running an exploit against a product of the company that made a vaccine.

  • New code in some functions makes static analysis with Interactive Disassembler more complex. The trick is easy but effective: the ransomware makes a delta call (which puts the address of the delta offset on top of the stack) and adds 0x11 (the size of the special code block, suggesting the malware author is using a macro) to the value at the top of the stack (pointed to by the ESP register). The code then jumps into the middle of the opcodes of this block. Read from that offset, the bytes decode as a different instruction, in this case “pop eax,” which pops the adjusted address off the top of the stack. The code then makes an unconditional jump to the address in EAX, and the ransomware continues its normal flow.

Figure 9. New code to make static analysis more difficult.


GandCrab is the leading ransomware threat for any person or enterprise. The author uses many ways to install it, including exploit kits, phishing emails, Trojans, and fake programs. The developer actively updates and improves the code to make analysis more difficult and to detect virtual machines. The code is not professionally written and continues to suffer from bugs, yet the product is well promoted in underground forums and has increased in value.

McAfee detects this threat as Ran-GandCrab4 in Versions 4.0 and later. Previous versions are also detected.

Indicators of compromise


This sample uses the following MITRE ATT&CK techniques:

  • File deletion
  • System information discovery
  • Execution through API
  • Execution through WMIC
  • Application process discovery: to detect antimalware and security products as well as normal programs
  • Query registry: to get information about keys that the malware needs to make or read
  • Modify registry
  • File and directory discovery: to search for files to encrypt
  • Encrypt files
  • Process discovery: enumerating all processes on the endpoint to kill some special ones
  • Create files
  • Elevation of privileges


  • 9a80f1866450f2f10fa69b1eb8747c344d6ef038468014c59cc50497f9e4675d – version 4.0
  • d9466be5c387eb2fbf619a8cd0922b167ea7fa06b63f13cd330ca974cae1d513 – version 4.0
  • 43b57d2b16c44041916f3b0562712d5dca4f8a42bc00f00a023b4a0788d18276 – version 4.0
  • 786e3c693fcdf55466fd6e5446de7cfeb58a4311442e0bc99ce0b0985c77b45d – version 4.0
  • f5e74d939a5b329dddc94b75bd770d11c8f9cc3a640dccd8dff765b6997809f2 – version 4.1
  • 8ecbfe6f52ae98b5c9e406459804c4ba7f110e71716ebf05015a3a99c995baa1 – version 4.1
  • e454123d852e6a40eed1f2552e1a1ad3c00991541d812fbf24b70611bd1ec40a – version 4.1
  • 0aef79fac6331f9eca49e711291ac116e7f6fbaeb5a1f3eb7fea9e2e4ec6a608 – version 4.1
  • 3277c1649972ab5b43ae9e87087b70ea4825956bfdddd1034f7b0680e6d46efa – version 4.1
  • a92af825bd95b6514f22dea08a4eb6d3491cbad45e69a5b9653b0148ee9f9832 – version 4.1
  • ce093ffa19f020a2b73719f653b5e0423df28ef1d59035d55e99154a85c5c668 – version 4.1.2 (first)
  • a1aae5ae7a3722b83dc1c9b0831c973641b246808de4f3670f2fd916cf498d38 – version 4.1.2 (second)
  • 3b0096d6798b1887cffa1288583e93f70e656270119087ceb2f832b69b89260a – version 4.2
  • e8e948e36fed93061062406693d1b2c402dd8e5788506bfbb50dbd86a5540829 – version 4.2



The post GandCrab Ransomware Puts the Pinch on Victims appeared first on McAfee Blogs.

Police are threatening free expression by abusing the law to punish disrespect of law enforcement

Spencer Gallien

In May 2016, a pair of police officers with the New York City Police Department ticketed Shyam Patel for his car’s tinted windows in Times Square. After parking his car, Patel raised his middle finger at them in response.

The NYPD officers then approached Patel and asked for his identification. When Patel asked what crime he was suspected of committing, he alleges that one officer told him, “You cannot gesture as such…”

When Patel insisted that freedom of speech did grant him the right, Patel alleges that the officer said that he could not curse at a police officer, grabbed his phone, and again demanded identification. Patel was arrested and charged with disorderly conduct and resisting arrest.

While the charges were later dropped, Patel is suing the officers for violation of his First Amendment right to free expression. No law prohibits swearing at or flipping off a police officer, and it seems clear that law enforcement were in the wrong. But Patel’s case is only the latest incident of police officers abusing the law and their positions of power to punish people critical or disrespectful of law enforcement.

In 2009, a black man returned to his home in Cambridge, Massachusetts from travels abroad to find his door tightly shut. He, along with his taxi driver, forced the door open. Soon after, police arrived at his residence to respond to a reported burglary.

It’s unclear what words exactly were exchanged, but the man was arrested for “loud and tumultuous behavior”. A report by the officer in question indicated that the man merely used harsh language and called the officer a racist.

If the circumstances were different, this incident may not have made the headlines it did—countless people of color are accused of criminal activity for walking upon their own sidewalks or entering into their own homes. But the man was Henry Louis Gates, Jr., a professor at Harvard University and friend of newly elected President Obama. The details of his arrest quickly made waves across the country.

Coverage of the incident focused on concerns of racial profiling, but it was about free speech, too. Gates was arrested not for breaking and entering, but for disorderly conduct after he used harsh language at the officer—just like Patel in New York. Civil liberties attorney Harvey Silverglate has called disorderly conduct law enforcement’s “charge of choice” for when a citizen gives lip to a cop.

These types of cases are still a regular occurrence, despite the landmark 1974 court case Lewis v. New Orleans, in which the Supreme Court struck down a city ordinance that outlawed “obscene or opprobrious language toward or with reference to” a police officer. At that time, the court noted that a “properly trained police officer may reasonably be expected to exercise a higher degree of restraint” than private citizens.

Despite the Supreme Court’s clear ruling on this issue, police in Pennsylvania are using the state’s version of a “hate crime” law to prosecute multiple people who say offensive things to them when they are arrested. These laws are intended to protect the vulnerable, but are instead being wielded as a tool by powerful government entities.

Robbie Sanderson, a 52-year-old black man, was arrested for retail theft near Pittsburgh in September 2016. During his arrest, he called the police “Nazis” and “skinheads”, and said that “all you cops just shoot people for no reason.” He was charged with felony ethnic intimidation.

Later that year, Senatta Amoroso became agitated at a police station, and was arrested for disorderly conduct and knocked to the ground. According to the ACLU, she yelled while handcuffed in a jail cell: “Death to all you white bitches. I’m going to kill all you white bitches. I hope ISIS kills all you white bitches.” Her six charges included a felony assault charge for hitting an officer in the arm and felony ethnic intimidation.

Sanderson’s and Amoroso’s cases are just two of many in which Pennsylvania law enforcement agents have slapped disrespectful arrestees with “hate crime” charges. These people yelled speech that officers found offensive, but they were handcuffed and posed no physical threat to anyone.

Pennsylvania’s “ethnic intimidation” charge works similarly to “hate crime” laws in other states, which generally enhance penalties for perpetrators when victims were targeted for discriminatory reasons. (“Hate speech” laws technically do not exist in the United States.) Although hate crime statutes were enacted to protect minorities, they can be, and are being, enforced to protect powerful groups like police.

Nadine Strossen, a professor at New York Law School who was previously president of the ACLU, is not surprised that police are abusing “hate crime” laws to punish disrespect. She thinks these cases, in New York, Massachusetts, and Pennsylvania, all show the same pattern of such laws being wielded against the people they were intended to protect—minorities, and people who lack political power.

She noted that during the civil rights movement, police would charge people protesting injustice with whatever they could—with “resisting arrest”, “disorderly conduct”, or “fighting words”, all of which Strossen calls “catch-all” crimes.

Strossen thinks that the way police abuse “hate crime” laws reveals the inherent problematic nature of legislation that attempts to single out specific identities. “There’s this hydrologic pressure once you have any hate crime or hate speech law. Additional pressures to expand this definition emerge, until the question becomes: ‘Who is not included?'"

In Strossen’s new book, HATE: Why We Should Resist It with Free Speech, not Censorship, she argues that hate speech laws in many European countries have ended up stifling the speech of the vulnerable populations they are intended to protect. She cautions that these recent examples show how hate crime laws can potentially be used for similar purposes in the United States, and that pushing for hate speech laws can backfire.

While the first hate crime laws in the United States were targeted to race and religion, they have expanded to include other categories like gender and sexual orientation. There is concern that powerful groups like police officers are co-opting these laws to shield themselves from scrutiny or criticism. It’s a pattern not unique to the United States: Strossen referenced a recent proposal in South Africa that considered adding “occupation” to a list of protected classes. “Could this include police and politicians, and government officials?”

Some U.S. policymakers are already aiming to officially establish police as a “protected” class of people. This May, the House of Representatives passed the Protect and Serve Act, which would make assaulting a police officer a federal crime. The Senate’s version of this bill even frames attacks on police as federal hate crimes.

These legislative efforts at the federal level follow on the heels of so-called “Blue Lives Matter” bills already passed in states including Kentucky and Louisiana. And while the federal bill applies to physical attacks on police, the state level laws have been enforced upon mere language hostile to police.

During an arrest on unrelated charges in 2016, a man in New Orleans yelled insults at officers and was slapped with additional charges. In a post about this incident, the ACLU of Louisiana wrote that “While racist, sexist, and other similar language may show a lack of respect for law enforcement, it is the job of the police to protect even the rights of those whose opinions they don’t share.”

These bills are not only unnecessary (attacking police officers is already a crime) but also actively harmful.

“The point is clear, especially with regards to the adoption of hate crime statute frameworks: to reinforce the myth of the police as vulnerable and embattled,” Natasha Lennard wrote about “the Protect and Serve Act” for The Intercept.

Recent incidents in Pennsylvania, New York, and Louisiana are part of a long and disturbing history of police abusing the law to punish speech they find unfavorable. It’s deeply concerning for free expression that police feel empowered to add additional charges to arrestees because of the words that they yell while being handcuffed, and legislation that makes police a protected class only amplifies the police’s ability to silence dissent and intimidate critics.

Ransomware Hits Health Care Once Again, 45,000 Patient Records Compromised in Blue Springs Breach

More and more, ransomware attacks are targeting one specific industry – health care. As detailed in our McAfee Labs Threats Report: March 2018, health care experienced a dramatic 210% overall increase in cyber incidents in 2017. Unfortunately, 2018 is showing no signs of slowing. In fact, just this week it was revealed that patient records from the Missouri-based Blue Springs Family Care have been compromised after cybercriminals attacked the provider with a variety of malware, including ransomware.

Though it’s not yet entirely clear how these attackers gained access, their methods were effective. With this attack, the cybercriminals were able to breach the organization’s entire system, making patient data vulnerable. The attack resulted in 44,979 records being compromised, including Social Security numbers, account numbers, driver’s licenses, disability codes, medical diagnoses, addresses, and dates of birth.

The company’s official statement notes, “at this time, we have not received any indication that the information has been used by an unauthorized individual.”  However, if this type of data does become leveraged, it could be used by hackers for both identity and medical fraud.

So, with a plethora of personal information out in the open – what should these patients do next to ensure their personal data is secure and their health information is private? Start by following these tips:

  • Talk with your health provider. With many cyberattacks taking advantage of the old computer systems still used by many health care providers, it’s important to ask yours what they do to protect your information. What’s more, ask if they use systems that have a comprehensive view of who accesses patient data. If they can’t provide you with answers, consider moving on to another practice that has cybersecurity more top of mind. 
  • Set up an alert. Though this data breach does not compromise financial data, this personal data can still be used to obtain access to financial accounts. Therefore, it’s best to proactively place a fraud alert on your credit so that any new or recent requests undergo scrutiny. This also entitles you to extra copies of your credit report so you can check for anything suspicious. If you find an account you did not open, report it to the police or Federal Trade Commission, as well as the creditor involved so you can close the fraudulent account.
  • Keep your eyes on your health bills and records. Just like you pay close attention to your credit card records, you need to also keep a close eye on health insurance bills and prescription records, which are two ways that your health records can be abused. Be vigilant about procedure descriptions that don’t seem right or bills from facilities you don’t remember visiting.
  • Invest in an identity theft monitoring and recovery solution. With the increase in data breaches, people everywhere are facing the possibility of identity theft. That’s precisely why they should leverage a solution tool such as McAfee Identity Theft Protection, which allows users to take a proactive approach to protecting their identities with personal and financial monitoring and recovery tools to help keep their identities personal and secured.

 And, of course, to stay on top of the latest consumer and mobile security threats, be sure to follow me and @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.

The post Ransomware Hits Health Care Once Again, 45,000 Patient Records Compromised in Blue Springs Breach appeared first on McAfee Blogs.

The 2018 Cloud Security Guide: Platforms, Threats, and Solutions

Cloud security is a pivotal concern for any modern business. Learn how the cloud works and the biggest threats to your cloud software and network.


Cloud Security

Protect your company’s data with cloud incident response and advanced security services. Minimize cyber threats with the help of Secureworks’ expert guidance.

How To Locate Domains Spoofing Campaigns (Using Google Dorks) #Midterms2018

The government accounts of US Senator Claire McCaskill (and her staff) were targeted in 2017 by APT28 A.K.A. “Fancy Bear” according to an article published by The Daily Beast on July 26th. Senator McCaskill has since confirmed the details.

And many of the subsequent (non-technical) articles that have been published have focused almost exclusively on the fact that McCaskill is running for re-election in 2018. But is it really conclusive that this hacking attempt was about the 2018 midterms? After all, Senator McCaskill is the top-ranking Democrat on the Homeland Security & Governmental Affairs Committee and also sits on the Armed Services Committee. Perhaps she and her staffers were instead targeted for insights into ongoing Senate investigations?

Senator Claire McCaskill's Committee Assignments

Because if you want to target an election campaign, you should target the candidate’s campaign server, not their government accounts. (Elected officials cannot use government accounts/resources for their personal campaigns.) In the case of Senator McCaskill, the campaign server is:

Which appears to be a WordPress site.

Running on an Apache server.

And it has various e-mail addresses associated with it.

That looks interesting, right? So… let’s do some Google dorking!

Searching for “” in URLs while discarding the actual site yielded a few pages of results.

Google dork:

And on page two of those results, this…

Definitely suspicious.

What is it? It’s a domain on the .de TLD (not a TLD itself).

Okay, so… what other interesting domains associated with are there to discover?

How about additional US Senators up for re-election such as Florida Senator Bill Nelson? Yep.

Senator Bob Casey? Yep.

And Senator Sheldon Whitehouse? Yep.

But that’s not all. Democrats aren’t the only ones being spoofed.

Iowa Senate Republicans.

And “Senate Conservatives“.

Hmm. Well, while we are no closer to knowing whether or not Senator McCaskill’s government accounts were actually targeted because of the midterm elections, the domains shown above are definitely shady AF. And they are enough to give cause for concern that the 2018 midterms are indeed being targeted, by somebody.

(Our research continues.)

Meanwhile, the FBI might want to get in touch with the owners of

Google intends to make GCP the most secure cloud platform

I attended my first Google Next conference last week in San Francisco and came away quite impressed. Clearly, Google is throwing more and more of its engineering prowess and financial resources at its Google Cloud Platform (GCP) to grab a share of enterprise cloud computing dough, and it plans to differentiate itself with comprehensive enterprise-class cybersecurity features and functionality.

Google Cloud CEO Diane Greene started her keynote by saying Google intends to lead the cloud computing market in two areas – AI and security. Greene declared that AI and security represent the “#1 worry for customers and the #1 opportunity for GCP.” 

AppSec Mistake No. 1: Using Only One Testing Type

We’ve been in the application security business for more than 10 years, and we’ve learned a lot in that time about what works, and what doesn’t. This is the first in a blog series that takes a look at some of the most common mistakes we see that lead to failed AppSec initiatives. Use our experience to make sure you avoid these mistakes and set yourself up for application security success.

The myth of the AppSec silver bullet

There is no application security silver bullet. Trying to pick the “best” testing type would be like trying to pick the best eating utensil: fork, knife, or spoon? It depends on the meal, and ultimately each plays a different role; you need all of them. Morning cereal? A spoon is best. Steak dinner? A knife would be good, but not so useful without a fork. Think of application security testing types the same way: each has different strengths and weaknesses and is better in different scenarios, but you won’t be effective without taking advantage of them all.

Why you need both static and dynamic analysis

Effective application security programs analyze code both statically in development and dynamically in production. Why are both these testing types required? Because each finds different types of security-related defects. For example, dynamic testing is better at picking up deployment configuration flaws, while static testing finds SQL injection flaws more easily. We examined this issue in one of our recent State of Software Security reports. These were the top five vulnerability categories we found during dynamic testing:

1. Information leakage
2. Cryptographic issues
3. Deployment configuration
4. Encapsulation
5. Cross-Site Scripting

Two of these were not in the top five found by static testing:

  • Encapsulation (dynamic found in 39% of apps; static only in 22%) 
  • Cross-Site Scripting (sixth on the static list)

And one category — deployment configuration — was not found by static at all.

In addition, effective application security secures software throughout its entire lifecycle — from inception to production. With the speed of today’s development cycles — and the speed with which software changes and the threat landscape evolves — it would be foolish to assume that code will always be 100 percent vulnerability-free after the development phase, or that code in production doesn’t need to be tested or, in some cases, patched. 

Why you need software composition analysis

Applications are increasingly “assembled” from open source components rather than developed from scratch. With the speed of today’s development cycles, developers don’t have time to create every line of code from scratch, and why would they, when so much open source functionality is available? However, neglecting to assess and keep track of the open source components you are using leaves a large portion of your code exposed and leaves you open to attack. Effective application security entails assessing your first-party code as well as assessing and maintaining a dynamic inventory of your third-party code.
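At its simplest, software composition analysis boils down to checking a dependency inventory against a database of known-vulnerable components. A toy sketch, with made-up package names and versions:

```python
# Hypothetical vulnerability database: these package names and versions
# are invented for the example, not real advisories.
KNOWN_VULNERABLE = {
    ("examplelib", "1.0.2"),
    ("otherlib", "2.3.0"),
}

def flag_vulnerable(requirements):
    """requirements: lines like 'name==version' (pip-style pins).

    Returns the lines that match a known-vulnerable component.
    """
    flagged = []
    for line in requirements:
        name, _, version = line.partition("==")
        if (name.strip().lower(), version.strip()) in KNOWN_VULNERABLE:
            flagged.append(line)
    return flagged
```

A real SCA tool also tracks transitive dependencies and matches version ranges against advisory feeds, but the core idea is this inventory-and-lookup step.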

Why you need manual penetration testing

Automation alone is not enough to ensure an application is thoroughly tested from a security perspective. Some flaws, such as CSRF (Cross-Site Request Forgery) and business logic vulnerabilities, require a human to be in the loop to exploit and verify the vulnerability. Only manual penetration testing can provide positive identification and manual validation of these vulnerabilities.

Learn from others’ mistakes

Don’t repeat the mistakes of the past; learn from other organizations and avoid the most common AppSec pitfalls. First tip: Don’t rely on one testing type; that’s like trying to eat all your meals with only a spoon. Effective application security combines a variety of testing types and assesses security throughout an application’s lifecycle, from inception to production. Get details on all six of the most popular mistakes in our eBook, AppSec: What Not to Do.

Veracode Dynamic Analysis Helps You Check Your Security Headers

Veracode Dynamic Analysis helps you follow Google I/O 2018 security recommendations

I've been binging on the Google I/O 2018 videos. I guess every web geek does! One video caught my attention: the Google Chrome security team's improvements to fight off the Spectre and Meltdown "celebrity" vulnerabilities. They're using software at the browser level to mitigate a hardware vulnerability. How cool is that?

Just like Google, Veracode has been beating the drum on the importance of security headers, here in 2012, 2013, and 2014. Google calls out its Site Isolation feature, cross-origin read blocking, cookie restrictions, high-resolution timers, and the V8 JavaScript engine. Read more here.

However, Chrome security cannot make the web safer on its own. It needs web developers to help defend against the Spectre vulnerability and future software vulnerabilities. To that end, Chrome security recommends a set of website configuration best practices. This is where Veracode Dynamic Analysis comes in!

Best part: no new workflows! Just run your Dynamic Analysis scans as usual to verify that your web developers are following the website configuration best practices. Checking these security headers is just one of the many vulnerability checks we offer to help you safeguard modern web applications.

Veracode Dynamic Analysis checks that the following security headers are set correctly. Some of these were called out by Google Chrome in its Google I/O 2018 talk. The associated CWE IDs and categories:

- X-Content-Type-Options: CWE-16 (Configuration)
- X-Frame-Options: CWE-16 (Configuration) and CWE-693 (Protection Mechanism Failure)
- Strict-Transport-Security: CWE-16 (Configuration)
- Access-Control-Allow: CWE-668 (Exposure of Resource to Wrong Sphere)
- Content Security Policy directives (including SameSite Cookie): CWE-352 (Cross-Site Request Forgery, CSRF)

For more information on setting them up correctly and common misconfigurations, check out our blog post here.

How often do you hear the phrases “Zero Trust” or “Trust but Verify” bandied about? It’s so true in application security. We should enable our developers to do the right thing. But we have to verify, either before production releases or on a regular cadence in production. At Veracode, we happen to favor using our Dynamic Analysis for such purposes! 

P.S. If you want to watch the Google I/O talk in full, see this YouTube link:

Family Matters: How to Help Kids Avoid Cyberbullies this Summer

The summer months can be tough on kids. There’s more time during the day, and much of that extra time gets spent online scrolling, surfing, liking, and Snapchatting with peers. Unfortunately, with more time comes more opportunity for interactions between peers to become strained, even to the point of bullying.

Can parents stop their kids from being cyberbullied completely? Not likely. However, if our sensors are up, we may be able to help our kids minimize both conflicts online and instances of cyberbullying should they arise.

Be Aware

Summer can be a time when a child’s more prone to feelings of exclusion and depression, relative to the amount of time he or she spends online. Watching friends take trips together, go to parties, and hang out at the pool can weigh heavily on a child’s emotions. As much as you can, try to stay aware of your child’s demeanor and attitude over the summer months. If you need help balancing their online time, you’ve come to the right place.

Steer Clear of Summer Cyberbullies 

  1. Avoid risky apps. Apps that allow outsiders to ask a user any question anonymously should be off limits to kids. Kik Messenger and Yik Yak are also risky apps. Users have a degree of anonymity with these kinds of apps because they have usernames instead of real names, and they can easily connect with profiles that could be (and often are) fake. Officials have linked all of these apps to multiple cyberbullying and even suicide cases.
  2. Monitor gaming communities. Gaming time can skyrocket during the summer and in a competitive environment, so can cyberbullying. Listen in on the tone of the conversations, the language, and keep tabs on your child’s demeanor. For your child’s physical and emotional health, make every effort to help him or her balance summer gaming time.
  3. Make profiles and photos private. If a child refuses to use privacy settings (and some kids do resist), his or her profile is open to anyone and everyone, which increases the chances of being bullied or of personal photos being downloaded and manipulated. Require kids under 18 to make all social profiles private. By doing this, you limit online circles to known friends and reduce the possibility of cyberbullying.
  4. Don’t ask peers for a “rank” or a “like.” The online culture for teens is very different than that of adults. Kids will be straightforward in asking people to “like” or “rank” a photo of them and attach the hashtag #TBH (to be honest) in hopes of affirmation. Talk to your kids about the risk in doing this and the negative comments that may follow. Remind them often of how much they mean to you and the people who truly know them and love them.
  5. Balance = health. Summer means getting intentional about balance with devices. Stepping away from devices for a set time can help that goal. Establish ground rules for the summer months, which might include additional monitoring and a device curfew.

Know the signs of cyberbullying. And, if your child is being bullied, remember these things:

  1. Never tell a child to ignore the bullying.
  2. Never blame a child for being bullied. Even if he or she made poor decisions or aggravated the bullying, no one ever deserves to be bullied.
  3. As angry as you may be that someone is bullying your child, do not encourage your child to physically fight back.
  4. If you can identify the bully, consider talking with the child’s parents.

Technology has catapulted parents into arenas — like cyberbullying — few of us could have anticipated. So, the challenge remains: Stay informed and keep talking to your kids, parents, because they need you more than ever as their digital landscape evolves.

Toni Birdsong is a Family Safety Evangelist to McAfee. You can find her on Twitter @McAfee_Family. (Disclosures).

The post Family Matters: How to Help Kids Avoid Cyberbullies this Summer appeared first on McAfee Blogs.

Did ICE detain this Mexican journalist for criticizing U.S. immigration policy?


Emilio Gutiérrez-Soto and his son Oscar speak to the press after being released from an ICE detention facility in El Paso, Texas, on July 26, 2018.

Texas Tribune/Julian Aguilar

Late last night, Mexican journalist Emilio Gutiérrez-Soto and his son Oscar were released from an Immigration and Customs Enforcement (ICE) detention facility in El Paso, Texas. The two had been held in ICE detention for more than seven months, ever since being arrested and nearly deported by ICE agents on December 7, 2017.

The United States government has never offered a convincing reason for arresting Gutiérrez and Oscar in December, or for continuing to detain the two. Gutiérrez and his attorneys have argued that ICE targeted him for arrest in retaliation for his criticism of U.S. immigration policy, in violation of his First Amendment rights — and they have internal ICE documents to back up their case. Freedom of the Press Foundation has obtained the documents and is publishing them for the first time.

This is Gutiérrez’s harrowing story.

"Good morning,” the email began. “Attached is a list of 2,718 non-detained cases that may be candidates for arrest.”


It was early morning on February 1, 2017, just a few days after President Donald Trump’s inauguration, when an ICE supervisory detention and deportation officer sent that email to other agents in ICE’s El Paso field office. The email carried the subject line, “Non-Detained Target List.” A spreadsheet named “ND Target List.xls” was attached to the email.

An assistant field director in ICE’s El Paso office replied on February 13.

“When u get back, forward this list to the National Criminal Analysis and Targeting Center (NCATC) this is the only FOSC [Fugitive Operations Support Center]. They will run this list and provide info on address location etc …”

One of the many names on the targeting list was “GUTIERREZ SOTO, EMILIO.”

The reason that Gutiérrez was included on that list is a mystery.

Screenshot of ICE emails

Eduardo Beckett, one of Gutiérrez’s attorneys, told Freedom of the Press Foundation that there was no “legitimate law enforcement reason” for Gutiérrez to be on an ICE target list.

“It’s fugitive operations,” he said of the targeting list. “It’s people with felonies. Emilio doesn’t fit that mold.”

Gutiérrez is not a fugitive, and he has no criminal record. He is a Mexican journalist who legally applied for asylum in the United States in 2008, after being threatened by elements of the Mexican military.

So why did ICE target him for possible arrest? Beckett believes it was because Gutiérrez had criticized U.S. immigration policy.

“The only reason he was on that list was because he was a journalist who criticized ICE and the Mexican government,” he said.

Gutiérrez and his son Oscar entered the United States on June 16, 2008.

ICE’s official “Record of Deportable / Inadmissible Alien” for Gutiérrez states that he and Oscar appeared at the Antelope Wells Border Crossing station in New Mexico and formally requested asylum. Gutiérrez was taken to an official “port of entry” — one of the sites where immigrants may legally apply for asylum — and interviewed by a Customs and Border Protection officer. 

Gutiérrez told the CBP officer that Mexican military police officers had threatened his life after he reported on corruption in the Mexican military.

“The subject continued to state that on May 5, 2008 at approximately midnight several armed military police wearing masks and armed with high caliber weapons entered his house without his permission claiming to look for drugs and weapons,” the ICE record of Gutiérrez’s CBP interview states. “Subject Gutierrez further states that on Saturday June 14, 2008 he was warned by a female friend who claims she overheard military police officers making plans to harm the subject.”

Gutiérrez said that he feared that his life would be in danger if he had to return to Mexico, so ICE gave him a form to fill out and sent him to the El Paso processing center in Texas. His son Oscar, who was still a minor at the time, was detained in a separate facility. In El Paso, Gutiérrez was interviewed by an asylum officer, who assessed that he had a “credible fear” of returning to Mexico, and he was placed into asylum proceedings. He was detained in the El Paso detention center for seven months before being released on parole. He and Oscar, who had been released to family friends in the U.S., reunited and moved to Las Cruces, New Mexico.


Years passed without any ruling on their asylum claim, and Gutiérrez and Oscar settled into their new life in New Mexico. Gutiérrez bought a food truck. Though he had not worked as a journalist since fleeing Mexico, he was happy to speak to the press, and he did not hesitate to criticize the United States’ broken asylum system.

“We are talking about an immigration judge and an immigration attorney whose job it is … to keep from expanding the abundance of people looking for protection because of the violence in Mexico,” he told the AP in January 2011, after attending a hearing in his asylum case. “We don’t have a country that accepts us with its laws and regulations even after being aware that we fled Mexico because the Mexican state was persecuting us.”

“We are here because we want to save our lives and it just seems so unfair because a country of freedom and human rights … is ignoring us,” he told the AP a month later, after a ruling on his asylum case was delayed. “We were looking for refuge and they put us in prison.”

In July 2017, immigration judge Robert Hough finally ruled on his nine-year-old asylum claim. Hough ruled that Gutiérrez did not present sufficient evidence to prove that he was targeted for his journalistic work or that his life would be in danger if he returned to Mexico. (According to the Committee to Protect Journalists, more than 60 journalists have been killed in Mexico since June 2008, when Gutiérrez fled to the United States and applied for asylum.)


Hough seemed unconvinced that Gutiérrez was really a journalist, in part because Gutiérrez had trouble finding copies of his published newspaper clips to show the judge. Hough denied the asylum claim and ruled that Gutiérrez could be removed from the United States.

“He simply dismissed all the arguments, put them in the trash can and denied the asylum,” Gutiérrez said in an interview with the Knight Center for Journalism in the Americas. “I feel very sad and I am very disappointed in the immigration authorities, especially the policies that the United States exercises.”

On October 4, 2017, Gutiérrez accepted the National Press Club’s prestigious John Aubuchon award on behalf of all Mexican journalists. During his acceptance speech at the club’s black-tie awards gala in Washington, D.C., Gutiérrez accused the U.S. government of hypocrisy for advocating for human rights abroad while denying them at home. Gutiérrez was particularly critical of the United States’ asylum policies.

“Those who seek political asylum in countries like the U.S. encounter the decisions of immigration authorities that barter away international laws,” he said.

As Gutiérrez was publicizing the plight of Mexican journalists and asylum seekers, his legal team tried to get the immigration judge’s decision denying him asylum reversed. They appealed to the Board of Immigration Appeals (BIA), which has the power to review immigration court decisions. But on November 2, 2017, the BIA rejected the appeal because it had been filed late. On November 20, Gutiérrez’s attorney Eduardo Beckett asked the court to reopen the appeal.


If the BIA reopened the appeal, then Gutiérrez would be safe. He could not be removed from the country while the appeal was pending. But until the court granted his petition to reopen the appeal, Gutiérrez was at the mercy of ICE. He had to ask the agency to grant him a stay of deportation.

Under the Kafkaesque U.S. immigration law system, ICE officials have the power to issue stays of removal, which prevent the agency from deporting someone. If ICE refuses to issue a stay, then the BIA has an opportunity to step in and issue an emergency stay, which prevents ICE from deporting the person. Crucially, though, the BIA does not have the power to issue an emergency stay until after ICE has already refused to issue a stay and taken someone into custody.

Beckett expected that ICE would officially deny the stay on December 7, when Gutiérrez and his son were scheduled to appear at ICE’s El Paso field office for a routine check-in. He knew that once ICE denied the stay, he could call the BIA and request an emergency stay. Then the BIA would either deny the stay and allow ICE to deport Gutiérrez, or it would grant the stay and order ICE not to deport him.

For assistance in dealing with ICE, Gutiérrez’s legal team reached out to members of Congress. Senator Patrick Leahy of Vermont took a particular interest in the case, and his senate office got in touch with ICE’s congressional liaison to ask about the case.

On November 20, a Leahy aide emailed Gutiérrez’s legal team and said ICE’s congressional liaison had assured her that ICE would “likely make their decision after consulting with BIA.”

Beckett said that ICE told him something similar.

“I had assurances from ICE that they would not try to deport him,” he said. “They told me to bring Emilio and Oscar in and if the stay by ICE was not granted, then ICE would get a ruling from the BIA before taking any action.”

“That was a lie,” he added. “That to me shows the bad faith.”

When Beckett, Gutiérrez, and Oscar arrived at ICE’s El Paso field office on December 7, ICE agents arrested Gutiérrez and Oscar immediately after informing them that they had decided not to grant a stay.


Beckett called the BIA to petition for an emergency stay of removal, and the court told Beckett that it would call him back as soon as it had ruled on his petition. But ICE had no intention of waiting for the court’s ruling. Agents handcuffed Gutiérrez and Oscar, put the two of them in a car, and started driving toward the border.

As ICE raced to deliver Gutiérrez and Oscar to the border, Gutiérrez’s legal team sent an urgent email to Leahy’s office: “ICE did not wait for the BIA decision. He is being escorted to the bridge. Could you all make a call to please try and stop this? The court has not ruled.”

A Leahy aide wrote back that the senator’s office could not stop ICE: “I am so very sorry to hear this!! There is really nothing else that our office can do to intervene or prevent this.”

According to Beckett, Gutiérrez and Oscar were driven to a parking lot outside of a Border Patrol station, where Gutiérrez was told that Mexican immigration agents were on their way to pick them up and take them back to Mexico.


Before Gutiérrez could be handed over to the Mexican government, the BIA called Beckett back with good news — Gutiérrez and Oscar had been granted an emergency stay of deportation. Beckett immediately called ICE and told them to bring Gutiérrez and Oscar back. The agency refused. The BIA’s emergency order might have prevented ICE from deporting Gutiérrez and his son, but it did not prevent the agency from detaining them.

ICE agents took Gutiérrez and Oscar to an immigration detention facility. They would remain in ICE detention for nearly eight months, and Gutiérrez’s food truck would be stolen while he was still detained.

Gutiérrez’s asylum appeal slowly worked its way through the courts. On December 22, 2017, the BIA decided to reopen Gutiérrez’s appeal. On May 15, 2018, it granted his appeal and remanded his asylum case back to immigration judge Robert Hough, with instructions to consider new evidence and then issue a new decision.

By that time, Gutiérrez’s attorneys were pursuing a new legal strategy.

On March 5, 2018, Gutiérrez filed a petition for habeas corpus in the Western District of Texas federal district court. Habeas corpus — one of the oldest and most fundamental rights in the United States — is the right not to be detained arbitrarily.

Gutiérrez’s habeas corpus petition, which was prepared by Rutgers University’s Institute of International Human Rights law clinic, argued that his ongoing detention by ICE was unconstitutional. The habeas petition advanced a number of arguments for why ICE’s detention of Gutiérrez was unlawful, but the most interesting was the claim that it violated his First Amendment rights to free speech and freedom of the press. Gutiérrez argued that ICE had targeted him for detention because he had publicly criticized the agency in his capacity as a journalist.

As evidence, Gutiérrez’s attorneys noted that Gutiérrez had been arrested by ICE just weeks after publicly criticizing U.S. immigration authorities at the National Press Club awards dinner. They also cited the fact that an ICE official reportedly told National Press Club president Bill McCarren to “tone it down” when it came to advocating for Gutiérrez’s case. (ICE has denied saying this.)


Later, Gutiérrez's legal team found their key piece of evidence — the internal ICE emails from February 2017.

On April 30, 2018, National Press Club press freedom fellow Kathy Kiely received copies of the ICE emails in response to a Freedom of Information Act request. She passed them on to Gutiérrez’s legal team, who immediately recognized their significance.

“When Kathy did her FOIA, I told her, this is gold,” Beckett said. “This shows that there was secret emails, a target list, and this was done months before he lost his asylum claim.”

Federal district judge David Guaderrama agreed, citing the ICE emails in his order denying the government’s motion to dismiss the habeas corpus case.

“Respondents [ICE] contend that they detained Petitioners [Gutierrez-Soto and his son] based on a warrant issued after the removal order issued by the immigration judge became final in August 2017,” Guaderrama wrote in a July 10 decision. “However, the emails between ICE officials undermine Respondents’ argument. The emails show that ICE officials were already targeting Mr. Gutierrez-Soto in February 2017. … This is significant because it is before the immigration judge issued the removal order in July 2017, which became final in August 2017.”

Guaderrama concluded that there was sufficient evidence to suggest that “Respondents retaliated against [Petitioners] for asserting their free press rights … [and] Respondents’ reason for detaining Petitioners is a pretext.”


Guaderrama ordered the government to bring Gutiérrez and Oscar to an evidentiary hearing on August 1, 2018, so that he could hear Gutiérrez’s testimony and the government’s defense of his continued detention, and then rule on Gutiérrez’s habeas corpus petition. Guaderrama also denied the government’s motion to delay the hearing and ordered the government to provide Gutiérrez’s legal team with more information about the ICE email thread and the targeting list.

Rather than try to defend ICE’s detention of Gutiérrez and Oscar at a federal court hearing, the government opted to release the two of them.

Beckett credited the federal court with forcing the government’s hand.

“The release of Emilio and his son Oscar is a testament that our Federal Courts protect our Constitutional rights,” he said in a statement. “The Constitution is not just an abstract written document but the cornerstone of our liberty and democracy.”

Now that Gutiérrez is free, he plans to move to Michigan. On May 2, 2018, the University of Michigan awarded him a Knight-Wallace fellowship. The one-year fellowship covers full tuition and health benefits, and includes a $75,000 stipend. Perhaps most importantly, the fellowship will allow Gutiérrez to work alongside other journalists for the first time since he fled Mexico in 2008.

Gutiérrez’s asylum case — which is entirely separate from his habeas corpus case — remains unresolved. In May 2018, the BIA remanded the case back to Hough, the immigration judge who previously denied Gutiérrez’s asylum claim, with instructions to rule on it again after considering new evidence.

Once Gutiérrez moves to Michigan for the Knight-Wallace fellowship, it’s possible that his asylum case will be transferred from Hough, who is based in Texas, to an immigration judge in Michigan. Either way, Gutiérrez’s fate will once again be in the hands of an immigration judge.

If he is denied asylum for a second time, he can try to appeal (again) to the BIA. If the BIA refuses the appeal, ICE will finally be free to deport him and his son.

But if Gutiérrez is granted asylum, then his long ordeal will finally be over, and he will be able to live in the U.S. without fear of being detained or deported by ICE.

Some changes in how libpcap works you should know

I thought I'd document the solution to this problem I had.

The libpcap API is the standard cross-platform way of sniffing packets off the network. It works on Windows (via WinPcap), macOS, and all the Unixes. It's better than simply opening a "raw socket" on Unix platforms because it takes advantage of the system's higher-performance capture capabilities, including specialized sniffing hardware.

Traditionally, you'd open an adapter with pcap_open_live(), whose function parameters set options like snap length, promiscuous mode, and timeouts.

However, in newer versions of the API, what you should do instead is call pcap_create(), then set the options individually with calls to functions like pcap_set_timeout(), then once you are ready to start capturing, call pcap_activate().

I mention this in relation to "TPACKET" and pcap_set_immediate_mode().

Over the years, Linux has been adding a "ring buffer" mode to packet capture. This is a trick where a packet buffer is memory mapped between user-space and kernel-space. It allows a packet-sniffer to pull packets out of the driver without the overhead of extra copies or system calls that cause a user-kernel space transition. This has gone through several generations.

One of the latest generations causes the pcap_next() function to wait forever for a packet. This happens a lot on virtual machines where there is no background traffic on the network.

This looks like a bug, but maybe it isn't. It's unclear what the "timeout" parameter actually means. I've been hunting down the documentation, and curiously, it's not really described anywhere. For an ancient, popular API, libpcap is almost entirely undocumented as to what it precisely does. I've tried reading some of the code, but I'm not sure I've come to any understanding.

In any case, the way to resolve this is to call the function pcap_set_immediate_mode(). This causes libpcap to back off and use an older version of TPACKET, so that things work as expected: even on silent networks, the pcap_next() function will time out and return.

I mention this because I fixed this bug in my code. When running inside a VM, my program would never exit. I changed from pcap_open_live() to the pcap_create()/pcap_activate() method, added the call to enable immediate mode, and now things work. Performance seems roughly the same as far as I can tell.

I'm still not certain what's going on here, and there are even newer proposed zero-copy/ring-buffer modes being added to the Linux kernel, so this can change in the future. But in any case, I thought I'd document this in a blogpost in order to help out others who might be encountering the same problem.

Retired Malware Samples: Everything Old is New Again

I’m always on the quest for real-world malware samples that help teach professionals how to analyze malicious software. As techniques and technologies change, I introduce new specimens and retire old ones from the reverse-engineering course I teach at SANS Institute. Here are some of the legacy samples that were once present in FOR610 materials. Though these malicious programs might not appear relevant anymore, aspects of their functionality are present even in modern malware.

A Backdoor with a Backdoor

To learn fundamental aspects of code-based and behavioral malware analysis, the FOR610 course examined Slackbot at one point. It was an IRC-based backdoor, which its author “slim” distributed as a compiled Windows executable without source code.

Dated April 18, 2000, Slackbot came with a builder that allowed its user to customize the name of the IRC server and channel it would use for Command and Control (C2). Slackbot documentation explained how the remote attacker could interact with the infected system over their designated channel and included this taunting note:

“don’t bother me about this, if you can’t figure out how to use it, you probably shouldn’t be using a computer. have fun. –slim”

Those who reverse-engineered this sample discovered that it had undocumented functionality. In addition to connecting to the user-specified C2 server, the specimen also reached out to a hardcoded server that “slim” controlled. The #penix channel gave “slim” the ability to take over all the botnets that his or her “customers” were building for themselves.

It turned out this backdoor had a backdoor! Not surprisingly, backdoors continue to be present in today’s “hacking” tools. For example, I came across a DarkComet RAT builder that was surreptitiously bundled with a DarkComet backdoor of its own.

You Are an Idiot

The FOR610 course used an example of a simple malevolent web page to introduce the techniques for examining potentially-malicious websites. The page, captured below, was a nuisance that insulted its visitors with the following message:

When the visitor attempted to navigate away from the offending site, its JavaScript popped up new instances of the page, making it very difficult to leave. Moreover, each instance of the page played the following jingle on the victim’s speakers. “You are an idiot,” the song exclaimed. “Ahahahahaha-hahahaha!” The cacophony of multiple windows blasting this jingle was overwhelming.


A while later I came across a network worm that played this sound file on victims’ computers, though I cannot find that sample anymore. While writing this post, I was surprised to discover a version of this page, sans the multi-window JavaScript trap, still residing on the web. Maybe it’s true what they say: a good joke never gets old.

Clipboard Manipulation

When Flash reigned supreme among banner ad technologies, the FOR610 course covered several examples of such forms of malware. One of the Flash programs we analyzed was a malicious version of the ad pictured below:

At one point, visitors to legitimate websites, such as MSNBC, were reporting that their clipboards appeared “hijacked” when the browser displayed this ad. The advertisement, implemented as a Flash program, was using the ActionScript setClipboard function to replace victims’ clipboard contents with a malicious URL.

The attacker must have expected the victims to blindly paste the URL into messages without looking at what they were sharing. I remembered this sample when reading about a more recent example of malware that replaced Bitcoin addresses stored in the clipboard with the attacker’s own Bitcoin address for payments.

As malware evolves, so do our analysis approaches, and so do the exercises we use in the FOR610 malware analysis course.  It’s fun to reflect upon the samples that at some point were present in the materials. After all, I’ve been covering this topic at SANS Institute since 2001. It’s also interesting to notice that, despite the evolution of the threat landscape, many of the same objectives and tricks persist in today’s malware world.

How Dropbox dropped the ball with anonymized data

Dropbox found itself in hot water this week over an academic study that used anonymized data to analyze the behavior and activity of thousands of customers.

The situation seemed innocent enough at first. In an article in Harvard Business Review, researchers at Northwestern University’s Institute on Complex Systems (NICO) detailed an extensive two-year study of best practices for collaboration and communication on the cloud file hosting platform. Specifically, the study examined how thousands of academic scientists used Dropbox, which gave the NICO researchers project-folder data from more than 1,000 university departments.

But it wasn’t long before serious issues were revealed. The article, titled “A Study of Thousands of Dropbox Projects Reveals How Successful Teams Collaborate,” initially claimed that Dropbox gave the research team raw user data, which the researchers then anonymized. After Dropbox was hit with a wave of criticism, the article was revised to say the original version was incorrect – Dropbox anonymized the user data first and then gave it to the researchers.

That's an extremely big error for the authors to make (if indeed it was an error) about who anonymized the data and when it was anonymized, especially considering the article was co-authored by a Dropbox manager (Rebecca Hinds, head of Enterprise Insights at Dropbox). I have to believe the article went through some kind of review process from Dropbox before it was published.

But let’s assume one of the leading cloud collaboration companies in the world simply screwed up the article rather than the process of handling and sharing customer data. There are still issues and questions for Dropbox, starting with the anonymized data itself. A Dropbox spokesperson told WIRED the company “randomized or hashed the dataset” before sharing the user data with NICO.

Why did Dropbox randomize *or* hash the datasets? Why did the company use two different approaches to anonymizing the user data? And how did it decide which types of data to hash and which types to randomize?

Furthermore, how was the data hashed? Dropbox didn’t say, but that’s an important question. I’d like to believe that a company like Dropbox wouldn’t use an insecure, deprecated hashing algorithm like MD5 or SHA-1, but there’s plenty of evidence those algorithms are still used by many organizations today.
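The reason the hashing question matters: hashing identifiers, even with a strong algorithm, is weak anonymization when the inputs come from a small, guessable space. A minimal Python sketch (with hypothetical email addresses) shows a simple dictionary attack recovering an "anonymized" record:

```python
import hashlib

def anonymize(value: str) -> str:
    # Even a modern hash doesn't anonymize identifiers drawn from a small space.
    return hashlib.sha256(value.encode()).hexdigest()

record = anonymize("alice@example.edu")       # an "anonymized" dataset entry

# An attacker hashes candidate identifiers (e.g., a university directory)
# and simply looks the record up.
candidates = ["bob@example.edu", "alice@example.edu", "carol@example.edu"]
lookup = {anonymize(c): c for c in candidates}
print(lookup.get(record))                     # recovers the identity
```

A keyed hash (HMAC with a secret key) or random tokenization resists this lookup, which is exactly why "how was the data hashed" is not a pedantic question.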

The Dropbox spokesperson also told WIRED it grouped the dataset into “wide ranges” so no identifying information could be derived. But Dropbox’s explanation of the process is short on details. As a number of people in the infosec community have pointed out this week, anonymized data may not always be truly anonymous. And while some techniques work better than others, the task of de-anonymization appears to be getting easier.

And these are just the issues relating to the anonymized data; there are also serious questions about Dropbox’s privacy policy. The company claims its privacy policy covers the academic research, which has since sparked a debate about the requirements of informed consent. The policy states Dropbox may share customer data with “certain trusted third parties (for example, providers of customer support and IT services) to help us provide, improve, protect, and promote our services,” and includes a list of those trusted third parties like Amazon, Google and Salesforce. NICO, however, is not on the list. It’s also not entirely clear whether the anonymized data was given to NICO to improve the Dropbox service or to advance scientific research.

And while this isn’t close to the gross abuse of personal data we’ve seen with the Cambridge Analytica scandal, it’s nevertheless concerning. These types of questionable decisions regarding data usage and sharing can lead to accidental breaches, which can be just as devastating as any malicious attack that breaches and exposes user data. If companies in the business of storing and protecting data — like Dropbox — don’t have clear policies and procedures for sharing and anonymizing data, then we’re in for plenty more unforced errors.

The post How Dropbox dropped the ball with anonymized data appeared first on Security Bytes.

Emoji are weird

So I put a “man shrugging” emoji in my last post; it renders strangely in RSS as displayed by NetNewsWire: a “woman shrugging”, the “mars zodiac” sign and a bar code. No idea. Chaos, emergent.

REVIEW: Best VPN routers for small business

When selecting VPN routers, small businesses want ones that support the VPN protocols they desire as well as ones that fit their budgets, are easy to use and have good documentation.

We looked at five different models from five different vendors: Cisco, D-Link, DrayTek, MikroTik and ZyXEL. Our evaluation called for setting up each unit and weighing the relative merits of their price, features and user-friendliness.

Below is a quick summary of the results:


Threat Modeling Thursday: 2018

Since I wrote my book on the topic, people have been asking me “what’s new in threat modeling?” My Blackhat talk is my answer to that question, and it’s been taking up the time that I’d otherwise be devoting to the series.

As I’ve been practicing my talk*, I discovered that there’s more new than I thought, and I may not be able to fit in everything I want to talk about in 50 minutes. But it’s coming together nicely.

The current core outline is:

  • What are we working on
    • The fast moving world of cyber
    • The agile world
    • Models are scary
  • What can go wrong? Threats evolve!
    • STRIDE
    • Machine Learning
    • Conflict

And of course, because it’s 2018, there’s cat videos and emoji to augment logic. Yeah, that’s the word. Augment. 🤷‍♂

Wednesday, August 8 at 2:40 PM.

* Oh, and note to anyone speaking anywhere, and especially large events like Blackhat — as the speaker resources say: practice, practice, practice.

Focus on Real Friends This Friendship Day

I walked into my niece’s room and found her busy making colourful bands.

“What are these for?” I asked.

“Friendship Day is coming up and this year I have decided to make my own bands to give to my friends. Got to finish making them all today.”

“That’s lovely,” and then as a thought struck me, I added, “Are you making them for your friends online?”

“No!!! What a question! How do you think I would give these to them? Virtually? These bands are only for real friends.”

Happy as I was to hear that, I couldn’t help adding a parting shot, “Really? Then why do you share so much about yourself with these virtual friends?”

We spent the next few minutes thinking about friends and friendship.

The charm of school and college life lies in friends: the better your group of friends, the more enjoyable your student life is. Such friendships stand the test of time and can be revived even after years of separation.

If adults can be duped, then aren’t the highly impressionable teens also at risk? Even tech-savvy kids tend to be duped by fake profiles so the smart parenting thing to do is to create awareness beforehand.

Friendship Day is the perfect time to initiate a discussion with your kids on how to establish if online friends are actual people. Start by administering this quiz on real vs. online friends:

Who are your real friends? (Check the boxes that apply):

  • You know them well in person
  • Your parents know them too, and approve of them
  • You are most probably studying in the same school or college
  • You live in the same apartment block or neighborhood
  • You have shared interests and know each other’s strengths and weaknesses
  • You have been to each other’s houses
  • You know they will accept you the way you are and never embarrass you in public
  • You trust them

Then, ask them to tick the boxes that apply for their virtual friends and follow it up with a discussion.

Takeaway: The online world holds infinite promises and possibilities but they can be realized only when the user is judicious and careful. In the early years of adolescence, it’s better to keep virtual friends limited to known people.

 Next in line is to find ways to identify fake profiles and learn to block and report:

Teach kids to identify fake profiles online:

  • Profile – The profile picture is very attractive, but there are rarely any family or group pictures
  • Name – The name sounds weird or is misspelled
  • Bio – The personal details are sketchy
  • Friend list – You have no friends in common
  • Posts – The posts and choice of videos make you feel uncomfortable or are clearly spam
  • Verification – A Google search of the profile picture throws up random names

Show kids how to block and report fake profiles:

  • Save: If you have erroneously befriended a suspicious person, no worries. Keep records of all conversations by taking screenshots, copying and pasting, or using the print screen command
  • Unfriend: Remove the user from your friend list
  • Block: Prevent the person from harassing you with friend requests in future by using the blocking function
  • Flag: Report suspicious profiles to the social media site to help them check and remove such profiles and maintain the hygiene of the platform

Share digital safety tips:

  1. Practice STOP. THINK. CONNECT. – Do not be in a hurry to grow your friend count; choose your friends wisely
  2. Share with care: Be a miser when it comes to sharing personal details like your name, pictures, travel plans and contact details online. The less shared, the better it is for the child
  3. Review privacy and security: Check all your posts periodically and delete those you don’t like. Maximize account security and set privacy to the maximum

Finally, share this message with your kids.

On Friendship Day, pledge to be a good friend to your real friends and limit your online friends to those you know well in real life. Secure your online world by using security tools on your devices and acting judiciously online. If you act responsibly online, you not only make your digital world safer but also help to secure the digital worlds of your friends. That’s the sign of an ideal digital citizen.


The post Focus on Real Friends This Friendship Day appeared first on McAfee Blogs.

Offensive Security Online Exam Proctoring

When we started out with our online training courses over 12 years ago, we made hard choices about the nature of our courses and certifications. We went against the grain, against the common certification standards, and came up with a unique certification model in the field: hands-on, practical certification. Twelve years later, these choices have paid off. The industry as a whole has realized that most multiple-choice technical certifications do not necessarily guarantee a candidate's technical level, and for many in the offensive security field, the OSCP has turned into a golden industry standard. This has been wonderful for certification holders, who find themselves actively recruited by employers because they have proven themselves able to stand up to the stress of a hard, 24-hour exam and still deliver a quality report.

Insurance Occurrence Assurance?

You may have seen my friend Brian Krebs’ post regarding the lawsuit filed last month in the Western District of Virginia after $2.4 million was stolen from The National Bank of Blacksburg in two separate breaches over an eight-month period. Though the breaches are concerning, the real story is that the financial institution is suing its insurance provider for refusing to fully cover the losses.

From the article:

In its lawsuit (PDF), National Bank says it had an insurance policy with Everest National Insurance Company for two types of coverage or “riders” to protect it against cybercrime losses. The first was a “computer and electronic crime” (C&E) rider that had a single loss limit liability of $8 million, with a $125,000 deductible.

The second was a “debit card rider” which provided coverage for losses which result directly from the use of lost, stolen or altered debit cards or counterfeit cards. That policy has a single loss limit of liability of $50,000, with a $25,000 deductible and an aggregate limit of $250,000.

According to the lawsuit, in June 2018 Everest determined both the 2016 and 2017 breaches were covered exclusively by the debit card rider, and not the $8 million C&E rider. The insurance company said the bank could not recover lost funds under the C&E rider because of two “exclusions” in that rider which spell out circumstances under which the insurer will not provide reimbursement.

Cyber security insurance is still in its infancy and issues with claims that could potentially span multiple policies and riders will continue to happen – think of the stories of health insurance claims being denied for pre-existing conditions and other loopholes. This, unfortunately, is the nature of insurance. Legal precedent, litigation, and insurance claim issues aside, your organization needs to understand that cyber security insurance is but one tool to reduce the financial impact on your organization when faced with a breach.

Cyber security insurance cannot and should not, however, be viewed as your primary means of defending against an attack.

The best way to maintain a defensible security posture is to have an information security program that is current, robust, and measurable. An effective information security program will provide far more protection for the operational state of your organization than cyber security insurance alone. To put it another way, insurance is a reactive measure whereas an effective security program is a proactive measure.

If you were in a fight, would you want to wait and see what happens after a punch is thrown to the bridge of your nose? Perhaps you would like to train to dodge or block that punch instead? Something to think about.

Microsoft Office Vulnerabilities Used to Distribute FELIXROOT Backdoor in Recent Campaign

Campaign Details

In September 2017, FireEye identified the FELIXROOT backdoor as a payload in a campaign targeting Ukrainians and reported it to our intelligence customers. The campaign involved malicious Ukrainian bank documents, which contained a macro that downloaded a FELIXROOT payload, being distributed to targets.

FireEye recently observed the same FELIXROOT backdoor being distributed as part of a newer campaign. This time, weaponized lure documents claiming to contain seminar information on environmental protection were observed exploiting known Microsoft Office vulnerabilities CVE-2017-0199 and CVE-2017-11882 to drop and execute the backdoor binary on the victim’s machine. Figure 1 shows the attack overview.

Figure 1: Attack overview

The malware is distributed via Russian-language documents (Figure 2) that are weaponized with known Microsoft Office vulnerabilities. In this campaign, we observed threat actors exploiting CVE-2017-0199 and CVE-2017-11882 to distribute malware. The malicious document used is named “Seminar.rtf”. It exploits CVE-2017-0199 to download the second stage payload from (Figure 3). The downloaded file is weaponized with CVE-2017-11882.

Figure 2: Lure documents

Figure 3: Hex dump of embedded URL in Seminar.rtf

Figure 4 shows the first payload trying to download the second stage Seminar.rtf.

Figure 4: Downloading second stage Seminar.rtf

The downloaded Seminar.rtf contains an embedded binary file that is dropped in %temp% via Equation Editor executable. This file drops the executable at %temp% (MD5: 78734CD268E5C9AB4184E1BBE21A6EB9), which is used to drop and execute the FELIXROOT dropper component (MD5: 92F63B1227A6B37335495F9BCB939EA2).

The dropped executable (MD5: 78734CD268E5C9AB4184E1BBE21A6EB9) contains the compressed FELIXROOT dropper component in the Portable Executable (PE) binary overlay section. When it is executed, it creates two files: an LNK file that points to %system32%\rundll32.exe, and the FELIXROOT loader component. The LNK file is moved to the startup directory. Figure 5 shows the command in the LNK file to execute the loader component of FELIXROOT.

Figure 5: Command in LNK file

The embedded backdoor component is encrypted using custom encryption. The file is decrypted and loaded directly in memory without touching the disk.

Technical Details

After successful exploitation, the dropper component executes and drops the loader component. The loader component is executed via RUNDLL32.EXE. The backdoor component is loaded in memory and has a single exported function.

Strings in the backdoor are encrypted using a custom algorithm that uses XOR with a 4-byte key. Decryption logic used for ASCII strings is shown in Figure 6.

Figure 6: ASCII decryption routine

Decryption logic used for Unicode strings is shown in Figure 7.

Figure 7: Unicode decryption routine
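The routines above appear only as disassembly, but a repeating 4-byte XOR is symmetric, so a single routine serves for both encryption and decryption. A minimal Python sketch (the key value is illustrative, not taken from the sample):

```python
from itertools import cycle

def xor4(data: bytes, key: bytes) -> bytes:
    """Repeating 4-byte XOR; the same routine both encrypts and decrypts."""
    assert len(key) == 4
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = bytes.fromhex("deadbeef")          # illustrative key, not the malware's
cipher = xor4(b"ROOT\\CIMV2", key)       # obfuscate a string
print(xor4(cipher, key))                 # applying it again restores the plaintext
```

This symmetry is why analysts can recover every string in the binary once the 4-byte key is identified.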

Upon execution, a new thread is created where the backdoor sleeps for 10 minutes. Then it checks to see if it was launched by RUNDLL32.exe along with parameter #1. If the malware was launched by RUNDLL32.exe with parameter #1, then it proceeds with initial system triage before doing command and control (C2) network communications. Initial triage begins with connecting to Windows Management Instrumentation (WMI) via the “ROOT\CIMV2” namespace.

Figure 8 shows the full operation.

Figure 8: Initial execution process of backdoor component

Table 1 shows the classes referred from the “ROOT\CIMV2” and “Root\SecurityCenter2” namespace.

WMI Namespaces









Table 1: Referred classes

WMI Queries and Registry Keys Used

  1. SELECT Caption FROM Win32_TimeZone
  2. SELECT CSNAME, Caption, CSDVersion, Locale, RegisteredUser FROM Win32_OperatingSystem
  3. SELECT Manufacturer, Model, SystemType, DomainRole, Domain, UserName FROM Win32_ComputerSystem

Registry entries are read for potential administration escalation and proxy information.

  1. Registry key “SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System” is queried to check the values ConsentPromptBehaviorAdmin and PromptOnSecureDesktop.
  2. Registry key “Software\Microsoft\Windows\CurrentVersion\Internet Settings\” is queried to gather proxy information with values ProxyEnable, Proxy: (NO), Proxy, ProxyServer.

Table 2 shows FELIXROOT backdoor capabilities. Each command is performed in an individual thread.




  • Fingerprint system via WMI and registry
  • Drop file and execute
  • Remote shell
  • Terminate connection with C2
  • Download and run batch script
  • Download file on machine
  • Upload file

Table 2: FELIXROOT backdoor commands

Figure 9 shows the log message decrypted from memory using the same mechanism shown in Figure 6 and Figure 7 for every command executed.

Figure 9: Command logs after execution

Network Communications

FELIXROOT communicates with its C2 via HTTP and HTTPS POST protocols. Data sent over the network is encrypted and arranged in a custom structure. All data is encrypted with AES, converted into Base64, and sent to the C2 server (Figure 10).

Figure 10: POST request to C2 server
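Stripped of the custom structure, the wire format in Figure 10 reduces to encrypt-then-Base64. The Python sketch below shows that flow; a real implementation would use AES through a crypto library, and the SHA-256 counter keystream here is only a dependency-free stand-in, with the host-info string and key size being illustrative:

```python
import base64
import hashlib
import os

def keystream_cipher(key: bytes, data: bytes) -> bytes:
    """Stand-in stream cipher (NOT the malware's AES): SHA-256 counter keystream."""
    out = bytearray()
    for i in range(0, len(data), 32):
        ks = hashlib.sha256(key + (i // 32).to_bytes(4, "little")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], ks))
    return bytes(out)

def build_post_body(payload: str, session_key: bytes) -> str:
    """Encrypt, then Base64-encode for the HTTP POST body."""
    return base64.b64encode(keystream_cipher(session_key, payload.encode())).decode()

key = os.urandom(32)  # a fresh key per communication, as the backdoor does
body = build_post_body("HOSTNAME, user, 6.1, x64, 1.3, KdfrJKN, 1A2B3C4D", key)
# The stand-in cipher is symmetric, so the server-side decode is the reverse:
plain = keystream_cipher(key, base64.b64decode(body)).decode()
```

The resulting body is plain Base64 text, which is what makes such beacons blend into ordinary form-encoded POST traffic.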

All other fields, such as User-Agent, Content-Type, and Accept-Encoding, that are part of the request/response header are XOR encrypted and present in the malware. The malware queries the Windows API to get the computer name, user name, volume serial number, Windows version, processor architecture, and two additional values, which are “1.3” and “KdfrJKN”. The value “KdfrJKN” may be used as identification for the campaign and is found in the JSON object in the file (Figure 11).

Figure 11: Host information used in every communication

The FELIXROOT backdoor has three parameters for C2 communication. Each parameter provides information about the task performed on the target machine (Table 3).




This parameter contains target machine information in the following format:

<Computer Name>, <User Name>, <Windows Versions>, <Processor Architecture>, <1.3>, < KdfrJKN >, <Volume Serial Number>


This parameter includes the information about the command executed and its results.


This parameter contains the information about data associated with the C2 server.

Table 3: FELIXROOT backdoor parameters


All data is transferred to C2 servers using AES encryption and the IBindCtx COM interface over HTTP or HTTPS. The AES key is unique for each communication and is encrypted with one of two RSA public keys. Figure 12 and Figure 13 show the RSA keys used in FELIXROOT, and Figure 14 shows the AES encryption parameters.

Figure 12: RSA public key 1

Figure 13: RSA public key 2

Figure 14: AES encryption parameters
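This is the classic hybrid-encryption pattern: a fresh symmetric key per session, wrapped with a public key hardcoded in the binary so only the operator can unwrap it. A toy Python sketch with textbook RSA and deliberately tiny parameters (illustrative only, not the malware's actual keys):

```python
# Toy RSA parameters for illustration only; real keys are 2048+ bits.
p, q = 61, 53
n, e = p * q, 17                        # (n, e) is the public key baked into the binary
d = pow(e, -1, (p - 1) * (q - 1))       # private exponent, held only by the operator

def wrap(session_key: int) -> int:
    """Client side: encrypt a per-session key with the embedded RSA public key."""
    return pow(session_key, e, n)

def unwrap(wrapped: int) -> int:
    """Server side: recover the session key with the private exponent."""
    return pow(wrapped, d, n)

session_key = 42                        # stand-in for a fresh AES key
assert unwrap(wrap(session_key)) == session_key
```

The consequence for analysts: captured traffic cannot be decrypted without the operator's private key, even with the full binary in hand.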

After encryption, the cipher text to be sent over C2 is Base64 encoded. Figure 15 shows the structure used to send data to the server, and Figure 16 shows the structural representation of data used in C2 communications.

Figure 15: Structure used to send data to server

Figure 16: Structure used to send data to C2 server

The structure is converted to Base64 using the CryptBinaryToStringA function.

FELIXROOT backdoor contains several commands for specific tasks. After execution of every task, the malware sleeps for one minute before executing the next task. Once all the tasks have been executed completely, the malware breaks the loop, sends the termination buffer back, and clears all the footprints from the targeted machine:

  1. Deletes the LNK file from the startup directory.
  2. Deletes the registry key HKCU\Software\Classes\Applications\rundll32.exe\shell\open.
  3. Deletes the dropper components from the system.


CVE-2017-0199 and CVE-2017-11882 are two of the more commonly exploited vulnerabilities that we are currently seeing. Threat actors will increasingly leverage these vulnerabilities in their attacks until they are no longer finding success, so organizations must ensure they are protected. At the time of writing, the FireEye Multi Vector Execution (MVX) engine is able to recognize and block this threat. We also advise that all industries remain on alert, as the threat actors involved in this campaign may eventually broaden the scope of their current targeting.


Indicators of Compromise












Network Indicators of Compromise

Accept-Encoding: gzip, deflate

content-Type: application/x-www-form-urlencoded

Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; InfoPath.2)

Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; InfoPath.2)

Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; InfoPath.2)

Configuration Files

Version 1:

{"1" : "","2" : "30","4" : "GufseGHbc","6" : "3", "7" :


Version 2:

{"1" : "","2" : "30","4" : "KdfrJKN","6" : "3", "7" :


FireEye Detections

Table 5: FireEye Detections


Special thanks to Jonell Baltazar, Alex Berry and Benjamin Read for their contributions to this blog.

Top 10 Signs of a Malware Infection on Your PC

Not all viruses that find their way onto your computer dramatically crash your machine. Instead, there are viruses that can run in the background without you even realizing it. As they creep around, they make messes, steal, and much worse.

Malware today spies on your every move. It sees the websites you visit and the usernames and passwords you type in. If you log in to online banking, a criminal can watch what you do, and after you log off and go to bed, he can log right back in and start transferring money out of your account.

Here are some signs that your device might already be infected with malware:

  1. Programs shut down or start up automatically
  2. Windows suddenly shuts down without prompting
  3. Programs won’t start when you want them to
  4. The hard drive is constantly working
  5. Your machine is working slower than usual
  6. Messages appear spontaneously
  7. Instead of flickering, your external modem light is constantly lit
  8. Your mouse pointer moves by itself
  9. Applications are running that are unfamiliar
  10. Your identity gets stolen

If you notice any of these, first, don’t panic. It doesn’t necessarily mean you have a virus. However, you should check things out. Make sure your antivirus program scans your computer regularly and is set to automatically download software updates. This is one of the best lines of defense you have against malware.

Though we won’t ever eliminate malware, as it is always being created and evolving, by using antivirus software and other layers of protection, you can be one step ahead. Here are some tips:

  • Run an automatic antivirus scan of your computer every day. You can choose the quick scan option for this. However, each week, run a deep scan of your system. You can run them manually, or you can schedule them.
  • Even if you have purchased the best antivirus software on the market, if you aren’t updating it, you are not protected.
  • Don’t click on any attachment in an email, even if you think you know who it is from. Instead, before you open it, confirm that the application was sent by who you think sent it, and scan it with your antivirus program.
  • Do not click on any link seen in an email, unless it is from someone who often sends them. Even then, be on alert as hackers are quite skilled at making fake emails look remarkably real. If you question it, make sure to open a new email and ask the person. Don’t just reply to the one you are questioning. Also, never click on any link that is supposedly from your bank, the IRS, a retailer, etc. These are often fake.
  • If your bank sends e-statements, ignore the links and log in directly to the bank’s website using either a password manager or your bookmarks.
  • Set your email software to “display text only.” This way, you are alerted before graphics or links load.

When a device ends up being infected, it’s either because of hardware or software vulnerabilities. And while there are virus removal tools to clean up any infections, there may still be breadcrumbs of infection that can creep back in. It’s generally a good idea to reinstall the device’s operating system to completely clear out the infection and remove any residual malware.

As an added bonus, a reinstall will remove bloatware and speed up your device too.

Robert Siciliano is a Security and Identity Theft Expert. He is the founder of a cybersecurity speaking and consulting firm based in Massachusetts. See him discussing internet and wireless security on Good Morning America.

CactusTorch Fileless Threat Abuses .NET to Infect Victims

McAfee Labs has noticed a significant shift by some actors toward using trusted Windows executables, rather than external malware, to attack systems. One of the most popular techniques is a “fileless” attack. Because these attacks are launched through reputable executables, they are hard to detect. Both consumers and corporate users can fall victim to this threat. In corporate environments, attackers use this vector to move laterally through the network.

One fileless threat, CactusTorch, uses the DotNetToJScript technique, which loads and executes malicious .NET assemblies straight from memory. These assemblies are the smallest unit of deployment of an application, such as a .dll or .exe. As with other fileless attack techniques, DotNetToJScript does not write any part of the malicious .NET assembly on a computer’s hard drive; hence traditional file scanners fail to detect these attacks.

In 2018 we have seen rapid growth in the use of CactusTorch, which can execute custom shellcode on Windows systems. The following chart shows the rise of CactusTorch variants in the wild.

Source: McAfee Labs.

The DotNetToJScript tool kit

Compiling the DotNetToJScript tool gives us the .NET executable DotNetToJScript.exe, which accepts the path of a .NET assembly and outputs a JavaScript file.


Figure 1: Using DotNetToJScript.exe to create a malicious JavaScript file.

The DotNetToJScript tool kit is never shipped with malware. The only component created is the output JavaScript file, which is executed on the target system by the script host (wscript.exe). For our analysis, we ran some basic deobfuscation and found CactusTorch, which had been hidden by some online tools:

Figure 2: CactusTorch code.

Before we dive into this code, we need to understand .NET and its COM exposure. When we install the .NET framework on any system, several .NET libraries are exposed via Microsoft’s Component Object Model (COM).

Figure 3: COM exposing the .NET library System.Security.Cryptography.FromBase64Transform.

If we look at the exposed interfaces, we can see IDispatch, which allows the COM object to be accessed from the script host or a browser.

Figure 4: Exposed interfaces in a .NET library.

To execute malicious code using the DotNetToJScript vector, an attack uses the following COM objects:

  • Text.ASCIIEncoding
  • Security.Cryptography.FromBase64Transform
  • IO.MemoryStream
  • Runtime.Serialization.Formatters.Binary.BinaryFormatter
  • Collections.ArrayList

Now, let’s return to the JavaScript code we saw in Figure 2. The function base64ToStream() converts the Base64-encoded serialized object to a stream. Before we can fully understand the logic behind the JavaScript code, we need to examine the functionality of the Base64-encoded serialized object. Thus our next step is to reverse engineer the embedded serialized object and recreate the class definition. Once that is done, the class definition looks like the following code, which is responsible for executing the malicious shellcode. (Special thanks to Casey Smith, @subTee, for important pointers regarding this step.)

Figure 5: The class definition of the embedded serialized object.

Now we have the open-source component of CactusTorch, and the JavaScript code in Figure 2 makes sense. We can see how the malicious shellcode is executed on the targeted system. In Figure 2, line 29, the code invokes the flame(x,x) function with two arguments: the executable to launch and the shellcode.

The .NET assembly embedded in the CactusTorch script runs the following steps to execute the malicious shellcode:

  • Launches a new suspended process using CreateProcessA (to host the shellcode)
  • Allocates some memory with VirtualAllocEx() with an EXECUTE_READWRITE privilege
  • Writes the shellcode in the target’s process memory with WriteProcessMemory()
  • Creates a new thread to execute the shellcode using CreateRemoteThread()


Fileless malware takes advantage of the trust factor between security software and genuine, signed Windows applications. Because this type of attack is launched through reputable, trusted executables, these attacks are hard to detect. McAfee Endpoint Security (ENS) and Host Intrusion Prevention System (HIPS) customers are protected from this class of fileless attack through Signature ID 6118.



The author thanks the following colleagues for their help with this analysis:

  • Abhishek Karnik
  • Deepak Setty
  • Oliver Devane
  • Shruti Suman


MITRE ATT&CK techniques

  • Drive-by compromise
  • Scripting using Windows Script Host
  • Decode information
  • Command-line interface
  • Process injection

Hashes (MD5)
  • 4CF9863C8D60F7A977E9DBE4DB270819
  • 5EEFBB10D0169D586640DA8C42DD54BE
  • 69A2B582ED453A90CC06345886F03833
  • 74172E8B1F9B7F9DB600C57E07368B8F
  • 86C47B9E0F43150FEFF5968CF4882EBB
  • 89F87F60137E9081F40E7D9AD5FA8DEF
  • 8A33BF71E8740BDDE23425BBC6259D8F
  • 8DCCC9539A499D375A069131F3E06610
  • 924B7FB00E930082CE5B96835FDE69A1
  • B60E085150D53FCE271CD481435C6E1E
  • BC7923B43D4C83D077153202D84EA603
  • C1A7315FB68043277EE57BDBD2950503
  • D2095F2C1D8C25AF2C2C7AF7F4DD4908
  • D5A07C27A8BBCCD0234C81D7B1843FD4
  • E0573E624953A403A2335EEC7FFB1D83
  • E1677A25A047097E679676A459C63A42
  • F0BC5DFD755B7765537B6A934CA6DBDC
  • F6526E6B943A6C17A2CC96DD122B211E
  • CDB73CC7D00A2ABB42A76F7DFABA94E1
  • D4EB24F9EB1244A5BEAA19CF69434127


The post CactusTorch Fileless Threat Abuses .NET to Infect Victims appeared first on McAfee Blogs.

OVPN review: An ideal VPN except for one big drawback

OVPN in brief:

P2P allowed: Yes
Business location: Stockholm, Sweden
Number of servers: 56
Number of country locations: 7
Cost: $84 per year
VPN protocol: OpenVPN
Data encryption: AES-256-GCM
Data authentication: SHA1 HMAC
Handshake encryption: TLSv1.2

One of the big questions many people have about a VPN service is just how well they can trust a company’s no-logging claim. OVPN tries to allay that concern as much as possible by running its own small network of servers in seven countries.


“Here Be Dragons”, Keeping Kids Safe Online

Sitting here this morning sipping my coffee, I watched fascinated as my 5-year-old daughter set up a VPN connection on her iPad while munching on her breakfast out of absent-minded necessity.

It dawned on me that, while my daughter has managed to puzzle out how to route around geofencing issues that many adults can't grasp, her safety online is never something to take for granted. I have encountered parents who allow their kids to access the Internet without controls beyond "don't do X" — which we all know is as effective as holding up gauze in front of a semi and hoping for the best (hat tip to Robin Williams).

More parents need to be made aware that on the tubes of the Internet, “here be dragons.”

First and foremost for keeping your kids safe online is that you need to wrap your head around a poignant fact: iThingers and their ilk are NOT babysitters. Please get this clear in your mind. Yes, I have been known to use these as child suppression devices for long car rides but, we need to be honest with ourselves. Far too often they become surrogates, and this needs to stop. When I was a kid my folks would plonk me down in front of the massive black and white television with faux wood finish so I could watch one of the three channels. To a large extent this became the forerunner of the modern digital iBabysitter.

These days I can’t walk into a restaurant without seeing some family engrossed in their respective devices oblivious of the world around them, let alone each other. Set boundaries for usage. Do not let these devices be a substitute parent or a distraction and be sure to regulate what is being done online for both you and your child.

I have had conversations about what is the best software to install on a system to monitor a child’s activity with many parents. Often that is a conversation borne out of fear of the unknown. Non-technical parents outnumber the technically savvy ones by an order of magnitude and we can’t forget this fact. There are numerous choices out there that you can install on your computer but, the software package that is frequently overlooked is common sense.

All kidding aside, there seems to be a predisposition in modern society to offload and outsource responsibility. Kids are curious and they will click links and talk to folks online without understanding that there are bad actors out there. It is incumbent upon us, the adults, to address that situation through education. Talk with your kids so that they understand what the issues are that they need to be aware of when they're online. More importantly, if you as a parent aren't aware of the dangers that are online, you need to avail yourself of the information.

This is where programs such as (ISC)²'s "Safe and Secure Online" come in.

Protecting your children is your top priority and helping children protect themselves online is ours. The (ISC)² Safe and Secure Online (SSO) program brings cyber security experts into classrooms and to community groups like scouts or sports clubs at no charge to teach children ages 7-10 and 11-14 how to stay safe online. We also offer a parent presentation so that you may learn these vital tools as well.

This is by no means the only choice out there but, it is a good starting point. The Internet is a marvelous collection of information but, as with anything that is the product of a hive mind, there is a dark side. Parents and kids need to take the time to arm themselves with the education to help guard against the perils of the online world.

If you don’t know, ask. If you don’t ask, you’ll never know.

Originally posted on CSO Online by me.

The post “Here Be Dragons”, Keeping Kids Safe Online appeared first on Liquidmatrix Security Digest.

Millions of iOS and Android Users Could Be Compromised by Bluetooth Bug

Similar to smartphones and computers, Bluetooth is one of the modern-day pieces of tech that has spread wide and far. Billions of devices of all types around the world have the technology woven into their build. So when news about the BlueBorne vulnerabilities broke back in late 2017, everyone’s ears perked up. Fast forward to present day and a new Bluetooth flaw has emerged, which affects devices containing Bluetooth from a range of vendors—including Apple, Intel, Google, Broadcom, and Qualcomm.

Whether it's connecting your phone to a speaker so you can blast your favorite tunes, or pairing it with your car's audio system so you can make phone calls hands-free, the pairing capabilities of Bluetooth ensure the technology remains wireless. And this bug affects precisely that — Bluetooth's Secure Simple Pairing and Low Energy Secure Connections, which are capabilities within the tech designed to assist users with pairing devices in a safe and secure way.

Essentially, this vulnerability means that when data is sent from device to device over Bluetooth connections, it is not encrypted, and therefore vulnerable. And with this flaw affecting Apple, Google and Intel-based smartphones and PCs, that means millions of people may have their private data leaked. Specifically, the bug allows an attacker that’s within about 30 meters of a user to capture and decrypt data shared between Bluetooth-paired devices.
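As background not spelled out in this article, the underlying weakness reported by these researchers (tracked as CVE-2018-5383) was that some implementations did not validate the elliptic-curve public key received during pairing, letting a nearby attacker substitute a malicious point and recover the session key. A minimal sketch of the missing check, assuming the NIST P-256 curve used by Bluetooth Secure Connections:

```python
# Sketch: validate that a received ECDH public key lies on NIST P-256
# (y^2 = x^3 - 3x + b mod p). Skipping this validation is the class of
# weakness behind the 2018 Bluetooth pairing flaw.

# NIST P-256 domain parameters
P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
B = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

def is_on_curve(x: int, y: int) -> bool:
    """Return True only if (x, y) satisfies the P-256 curve equation."""
    if not (0 <= x < P and 0 <= y < P):
        return False  # coordinates must be reduced field elements
    return (y * y - (x * x * x - 3 * x + B)) % P == 0

# The standard base point G must pass; any tampered point must be rejected
# before it is ever used in the key agreement.
GX = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
GY = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5
```

A paired device that performs this check can abort pairing when handed an invalid point instead of silently deriving a weak key.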

Lior Neumann, one of the researchers who found the bug, stated, “As far as we know, every Android—prior to the patch published in June—and every device with a wireless chip from Intel, Qualcomm or Broadcom is vulnerable.” That includes iPhone devices with a Broadcom or Qualcomm chip as well.

Fortunately, fixes for this bug within Apple devices have already been available since May with the release of iOS 11.4. Additionally, two Android vendors, Huawei and LG, say they have patched the vulnerability as well. However, if you don’t see your vendor on this list, or if you have yet to apply the patches – what next steps should you take to secure your devices? Start by following these tips:

  • Turn Bluetooth off unless you have to use it. Affected software providers have been notified of these vulnerabilities and are working on fixing them as we speak. But in the meantime, it’s crucial you turn off your Bluetooth unless you absolutely must use it. To do this on iOS devices, simply go to your “Settings”, select “Bluetooth” and toggle it from on to off. On Android devices, open the “Settings” app and the app will display a “Bluetooth” toggle button under the “Wireless and networks” subheading that you can use to enable and disable the feature.
  • Update your software immediately. It’s an important security rule of thumb: always update your software whenever an update is available, as security patches are usually included with each new version. Patches for iOS and some Android manufacturers are already available, but if your device isn’t on the list, fear not – security patches for additional providers are likely on their way.

And, of course, to stay on top of the latest consumer and mobile security threats, be sure to follow me and @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.

The post Millions of iOS and Android Users Could Be Compromised by Bluetooth Bug appeared first on McAfee Blogs.

Scammers Use Breached Personal Details to Persuade Victims

Scammers use a variety of social engineering tactics when persuading victims to follow the desired course of action. One example of this approach involves including in the fraudulent message personal details about the recipient to “prove” that the victim is in the miscreant’s grip. In reality, the sender probably obtained the data from one of the many breaches that provide swindlers with an almost unlimited supply of personal information.

Personalized Porn Extortion Scam

Consider the case of an extortion scam in which the sender claims to have evidence of the victim’s pornography-viewing habits. The scammer demands payment in exchange for suppressing the “compromising evidence.” A variation of this technique was documented by Stu Sjouwerman at KnowBe4 in 2017. In a modern twist, the scammer includes personal details about the recipient—beyond merely the person’s name—such as the password the victim used:

“****** is one of your password and now I will directly come to the point. You do not know anything about me but I know alot about you and you must be thinking why are you getting this e mail, correct?

I actually setup malware on porn video clips (adult porn) & guess what, you visited same adult website to experience fun (you get my drift). And when you got busy enjoying those videos, your web browser started out operating as a RDP (Remote Desktop Protocol) that has a backdoor which provided me with accessibility to your screen and your web camera controls.”

The email includes a demand for payment via cryptocurrency such as Bitcoin to ensure that "Your naughty secret remains your secret." The sender calls this "privacy fees." Variations on this scheme are documented in the Blackmail Email Scam thread on Reddit.

The inclusion of the password that the victim used at some point in the past lends credibility to the sender’s claim that the scammer knows a lot about the recipient. In reality, the miscreant likely obtained the password from one of many data dumps that include email addresses, passwords, and other personal information stolen from breached websites.

Data Breach Lawsuit Scam

In another scenario, the scammer uses the knowledge of the victim’s phone number to “prove” possession of sensitive data. The sender poses as an entity that’s preparing to sue the company that allegedly leaked the data:

“Your data is compromised. We are preparing a lawsuit against the company that allowed a big data leak. If you want to join and find out what data was lost, please contact us via this email. If all our clients win a case, we plan to get a large amount of compensation and all the data and photos that were stolen from the company. We have all information to win. For example, we write to your email and include part your number ****** from a large leak.”

The miscreant’s likely objective is to solicit additional personal information from the victim under the guise of preparing the lawsuit, possibly requesting the social security number, banking account details, etc. The sender might have obtained the victim’s name, email address and phone number from a breached data dump, and is phishing for other, more lucrative data.

What to Do?

If you receive a message that solicits payment or confidential data under the guise of knowing some of your personal information, be skeptical. This is probably a mass-mailed scam and your best approach is usually to ignore the message. In addition, keep an eye on the breaches that might have compromised your data using the free and trusted service Have I Been Pwned by Troy Hunt, change your passwords when this site tells you they’ve been breached, and don’t reuse passwords across websites or apps.
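Have I Been Pwned's companion Pwned Passwords service exposes a k-anonymity range API, so a password can be checked without ever leaving your machine in full. The sketch below (a hedged illustration; the parsing helper is my own, not part of any official client) computes the five-character SHA-1 prefix that would be sent to `https://api.pwnedpasswords.com/range/<prefix>` and parses the `SUFFIX:COUNT` lines the endpoint returns, using a canned response so the example stays offline:

```python
# Sketch of the k-anonymity password check: only the first five hex
# characters of the SHA-1 hash are ever transmitted; all suffixes sharing
# that prefix come back, and the comparison happens locally.
import hashlib

def hibp_prefix_suffix(password: str) -> tuple:
    """Split the uppercase SHA-1 hex digest into (5-char prefix, 35-char suffix)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_range_response(suffix: str, response_text: str) -> int:
    """Parse the 'SUFFIX:COUNT' lines returned by the range endpoint."""
    for line in response_text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0  # suffix absent: password not found in the corpus
```

A real client would issue an HTTPS GET for the prefix and feed the body to `count_in_range_response`; a nonzero count means the password has appeared in a breach and should be retired.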

Sometimes an extortion note is real and warrants a closer look and potentially law enforcement involvement. Only you know your situation and can decide on the best course of action. Fortunately, every example that I've had a chance to examine turned out to be a social engineering trick that recipients were best off ignoring.

To better understand the persuasion tactics employed by online scammers, take a look at my earlier articles on this topic:


Top Cyber Threats Organizations Are Facing Right Now

What are the top cyber threats the public and private sectors should be concerned about in the latter part of 2018? Cyber security is a continuous game of Spy vs. Spy. Every time a new technology is introduced, the potential attack surface expands. The moment one vulnerability is patched, hackers find another way in. Keeping… Read More

The post Top Cyber Threats Organizations Are Facing Right Now appeared first on .

Software Quality Is a Competitive Differentiator

software quality

One of the ironies of DevOps is that while the methodology supports faster and more automated software production, it doesn't boost code quality unless quality is a focus for the software team. As more than a few business leaders have discovered, gaining a competitive edge in the digital economy requires a more concentrated and comprehensive approach.

It's no secret that software code powers our world — it’s in jet engines, automobiles, the electric grid, medical systems, commerce, appliances…just about everything. Yet, producing reliable and secure software has become increasingly difficult. Applications are not only growing in size, they’re also becoming more complex and intertwined across platforms, systems and devices. APIs and the Internet of Things (IoT) are inserting code — and distributing processing — across millions of applications and devices, as well as the cloud.

This complicated environment is forcing business executives, IT leaders and software developers to think and work differently. For example, a growing array of systems and devices rely on artificial intelligence (AI) to drive activity. Automated systems increasingly decide on the course of action based on constantly changing inputs. A system — and the software that runs it — must adapt dynamically.

The upshot? Software quality can no longer be a checkbox item — it must be a framework that spans an organization. Ultimately, an enterprise must own the success of its code — and develop habits that produce high-quality software. This includes understanding how and why code quality is important not only for performance but also for security and final business results. A DevOps initiative can succeed only when an enterprise recognizes the scope of today's software frameworks.

Software Quality Redefined

The digital world is creating intriguing challenges related to software quality. These extend beyond the sheer volume of code that's required to run systems. For instance, UI/UX has taken center stage — particularly as apps have proliferated. Maturing technologies, such as augmented reality and virtual reality, have introduced new challenges. The takeaway? It's no longer acceptable to view UI/UX testing as a traditional, commoditized function — a quality experience is paramount.

There are other challenges, too. As the IoT matures and grows, there's a need for innovation in testing. The variety and number of edge devices is exploding, and all of this introduces enormous QA challenges. Ensuring that software performs adequately and meets user requirements is critical. The need for service level agreements between service providers and consumers has never been more important.

Artificial intelligence changes the testing landscape as well. It can take over some human roles. However, those hoping to replace traditional software testing teams with AI readily admit that largely autonomous applications would still require continuous training to ensure that technological and business goals are met. Simply put, AI will augment rather than replace QA professionals and will create new fields of specialization.

Raising the Quality of Code

All of this is changing the stakes. Yet many organizations aren't prepared. For example, the rollout of one high-profile public-sector system was delayed by months as a result of breakdowns in processes. In the end, the cost of building out the IT framework exceeded the original estimate by three times due to ongoing performance, load and management issues. In the private sector, breach after breach has occurred in recent years.

How can organizations step out of the development morass and transform software development into success stories? These factors make or break an initiative:

  • The need for automation. This encompasses everything from quality controls to scanning code for vulnerabilities. Quality tests and metrics are part of a continuous delivery pipeline — and these benchmarks must be clearly defined across the organization. Investments in quality automation — unit tests, functional tests, and performance, load and system tests — generate long term savings.
  • The need for a modular approach. Organizations that produce smaller and more focused batches of code simplify scanning and testing, and increase overall delivery velocity. It's also easier to identify problems when software is composed of modules and sub-modules. Finally, with these modules in place, an Agile approach becomes far more viable. The enterprise can produce and reconfigure software while maintaining quality.
  • The need to address scope. What needs to be tested and scanned has also changed. As we enter a world where infrastructure is comprised of code, we also need to plan and test the quality of infrastructure creation and configuration scripts. This requires the right internal governance framework and processes as well as the right tools and technologies.
  • The need for continuous feedback. It's critical to fail, then fail fast and move on. A rapidly evolving product can be shaped according to customer feedback, and fast turnaround allows your teams to stamp out defects and hone the software for your audience or customer base. This involves tracking how users interact with a site through blue-green or A-B testing that analyzes features and new code based on a subset of the user population.
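The A-B testing mentioned above depends on assigning each user a stable variant so that metrics for the two groups can be compared fairly. A minimal sketch (the helper and experiment name are hypothetical, not from this article) of deterministic bucketing by hashing a stable user id:

```python
# Sketch: deterministic A/B bucketing. Hashing a stable user id means the
# same user always lands in the same variant, with roughly a 50/50 split
# across the population.
import hashlib

def ab_bucket(user_id: str, experiment: str = "new-checkout-flow") -> str:
    """Return 'A' or 'B' deterministically for this user and experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode("utf-8")).digest()
    return "A" if digest[0] % 2 == 0 else "B"
```

Salting the hash with the experiment name keeps assignments independent across experiments, so a user in variant A of one test isn't systematically in variant A of every other test.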

Security Can’t Be an Afterthought

Finally, there's a need to connect security to code quality. Although organizations are embracing DevOps, many aren't addressing the need for secure, high-quality code. Incredibly, 69% of apps fail the OWASP Top 10 in the first scan. A more holistic DevSecOps approach — one that incorporates automation, modular software, scope and continuous feedback — helps organizations achieve a superior position in the marketplace. Simply put, their code becomes a competitive differentiator.  

Best-practice organizations understand that delays due to code defects, a failed product launch, or savage user reviews can severely impact business goals. Application crashes and security breaches directly impact the bottom line. The takeaway is that the need for strategic risk assessment has never been greater. Rather than adopting a defensive and reactive posture, it's wise to focus on quality throughout the software lifecycle. The move from DevOps to DevSecOps can prove transformative.

McAfee Interns Share Their Experience for #NationalInternDay

By Christie, HR Communications Intern

As someone who always wanted to make an impact in the world, I thought the nonprofit sector was the only fit for my passions in marketing and philanthropy. Because of this, I've worked primarily in the nonprofit sector for the last three years. But to keep my options open, I wanted to experience at least one corporate internship before I graduated college.

I wasn’t sure if any company would take me under its wing due to my lack of corporate experience though. That was until McAfee offered me the opportunity to work with them this summer.

As a senior in college, McAfee provided me the real-life experience I hoped for and more. Below are the top three reasons why my internship experience with McAfee has truly been nothing less than invaluable:

Playing to Win Even as an Intern

Since day one, I knew this internship was unique and not like any other. Everyone at McAfee works with agility. Although the nonprofit industry is notorious for moving fast, it is still fascinating to see employees so eager to work on tasks of all sizes with such drive and efficiency.

Instead of being delegated tasks to fulfill, I get to share what I want to work on and what I want to take away from my time working with McAfee.

As a huge social media enthusiast, I helped manage @LifeatMcAfee's Instagram strategy, from implementing new social trends and generating online advertisements to publishing my own designs.

But the best part? I am not seen or approached as an intern, but as a team member. I am held to the same expectations and given the same opportunities – being able to add value to the team and carry out real, impactful work every day.

People First and Foremost

If I’ve learned anything from my first 10 weeks here, it is that McAfee genuinely values its employees and community. McAfee does not shy away from diversity or from supporting its employees in every way possible.

I experienced this firsthand by assisting with social media during Pride Month by covering the Global LGBT Pride Photo Competition, Gender Revolution Documentary Watch Party and Keyeon’s “How I Wear My #McAfee Pride” Life at McAfee blog. Although this doesn’t fully portray how McAfee practices inclusive candor and transparency, it really showed me how McAfee embraces diversity and its employees’ authentic selves.

Giving back is also very important in McAfee's company culture. This is visible through its various events and programs such as Global Community Service Day, McAfee Explorers, Bring Your Kid to Work Day, McAfee Blood Drive, and the list goes on and on. This undeniably showed me that McAfee also shares my value of making a positive impact on the world. And knowing colleagues share this significant value with me reinforced McAfee as truly one tight-knit, loving family.

Together is Power

On the first day of my internship, I signed my name on the McAfee Pledge Wall among all the other employees’ signatures – signifying our single pledge to defend the world from cyber threats.

This symbolic gesture is evident every day when I step into the office. I work with people from different positions, departments and even countries. Everyone is always willing to help, even on projects they're not involved in.

This sense of togetherness is something I really value and believe is one of the best things about working at McAfee. We all have one mission that we want to fulfill and strive towards every day, together.

An Unforgettable Experience

McAfee makes an impact in the world every day by providing the best cybersecurity possible, but also gives back to the community and its employees through its various educational and community outreach programs. But notably, McAfee has made a lasting impact on me. These short 10 weeks have shown me my career options are unlimited and I can truly make a difference in any field of work, especially with a great team that strives to fulfill the same mission as I do every day.

Read from other McAfee interns from around the globe about their internship experiences below!

Internship Experiences at McAfee

Juan – Customer Experience (Argentina)

“These past few months, I got to meet some of the most talented people and all of them were eager to share their knowledge and expertise with me. McAfee is truly a great place to work while making our world and our communities a safer place.”



Emily – Digital Marketing & Content Operations (US)

“I get to help my team work on redesigning our Marketing Intranet, so that new Marketing hires, as well as existing employees, can have a resource to answer questions they may have. I really love working here at McAfee!”



Adam – Human Resources & Talent Acquisition (Ireland)

“This opportunity has provided me with priceless experience and insight into one of the leading cybersecurity companies in the world. I have been extremely privileged to have been given the responsibilities I have had during my time here and I have gleaned a vast amount of experience as a result.”



Mark – Advanced Threat Research (US)

“I got to meet all the wonderful people I’d be working most closely with, whose locations ranged from Dallas to the UK. McAfee places importance on interpersonal relationships in their teams and even as an intern, I was treated as one of the gang since day one.”



Csaradhi – Platform Engineering (India)

“The transition from college to corporate life has been so beautiful. I’ve learned so many things apart from the technical aspects. I thank McAfee for choosing to believe in me and I’m here to make the most of it.”



For more stories like this, follow @LifeAtMcAfee on Instagram and on Twitter @McAfee to see what working at McAfee is all about.

Interested in joining our teams? We’re hiring! Apply now.

The post McAfee Interns Share Their Experience for #NationalInternDay appeared first on McAfee Blogs.

Half the US population will live in 8 states

“In about 20 years, half the population will live in eight states,” and 70 percent of Americans will live in 15 states. “Meaning 30 percent will choose 70 senators. And the 30 percent will be older, whiter, more rural, more male than the 70 percent.” Of course, as the census shows the population shifting, the makeup of the House will also change dramatically.

Maybe you think that’s good, maybe you think that’s bad. It certainly leads to interesting political times.

Free SANS Webinar: I Before R Except After IOC

Join Andrew Hay on Wednesday, July 25th, 2018 at 10:30 AM EDT (14:30:00 UTC) for an exciting free SANS Institute Webinar entitled “I” Before “R” Except After IOC. Using actual investigations and research, this session will help attendees better understand the true value of an individual IOC, how to quantify and utilize your collected indicators, and what constitutes an actual incident.

Although the security industry touts indicators of compromise (IOCs) as much-needed intelligence in the war on attackers, the fact is that not every IOC is valuable enough to trigger an incident response (IR) activity. All too often the indicators provided contain information of varying quality, including expired attribution, dubious origin, and incomplete details. So how many IOCs are needed before you can confidently declare an incident? After this session, the attendee will:

  • Know how to quickly determine the value of an IOC,
  • Understand when more information is needed (and from what source), and
  • Make intelligent decisions on whether or not an incident should be declared.
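One hypothetical way to make "the value of an IOC" concrete (this scoring model is illustrative only, not taken from the webinar) is to weight each indicator by freshness, source reliability, and specificity, and only declare an incident once the combined evidence crosses a threshold:

```python
# Sketch: scoring IOCs so that stale, dubious, or generic indicators do not
# trigger incident response on their own. All weights and the threshold are
# hypothetical illustration, not a published methodology.
from dataclasses import dataclass

@dataclass
class IOC:
    age_days: int              # staleness of the indicator
    source_reliability: float  # 0.0 (dubious origin) .. 1.0 (trusted feed)
    specificity: float         # 0.0 (generic, e.g. shared IP) .. 1.0 (unique hash)

def ioc_score(ioc: IOC) -> float:
    """Indicators older than a year contribute nothing."""
    freshness = max(0.0, 1.0 - ioc.age_days / 365)
    return freshness * ioc.source_reliability * ioc.specificity

def should_declare_incident(iocs: list, threshold: float = 1.0) -> bool:
    """Declare only when the aggregate weighted evidence crosses the bar."""
    return sum(ioc_score(i) for i in iocs) >= threshold
```

Under this toy model, a single fresh, high-confidence, highly specific indicator can justify declaring an incident, while a pile of year-old, low-reputation indicators cannot.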

Register to attend the webinar via the SANS Institute website.

Cyber Security Roundup for July 2018

The importance of assuring the security and testing quality of third-party applications was more than evident this month, when the NHS reported a data breach of 150,000 patient records. The NHS said the breach was caused by a coding error in a GP application called SystmOne, developed by UK-based 'The Phoenix Partnership' (TPP). The same assurance also applies to internally developed applications; a case in point was a publicly announced flaw in Thomas Cook's booking system, discovered by a Norwegian security researcher. The researcher used the app flaw to access the names and flight details of Thomas Cook passengers and released the details on his blog. Thomas Cook said the issue has since been fixed.

Third-party services also need to be security assured, as seen with the Typeform compromise. Typeform is a data collection company; on 27th June, hackers gained unauthorised access to one of its servers and accessed customer data. According to its official notification, Typeform said the hackers may have accessed data held on a partial backup, and that it had fixed a security vulnerability to prevent a recurrence. Typeform has not provided any details of the number of records compromised, but one of its customers, Monzo, said on its official blog that it was in the region of 20,000. Interestingly, Monzo also declared it would end its relationship with Typeform unless the company wins back its trust. Travelodge is one UK company known to be impacted by the Typeform breach and has warned its affected customers. Typeform is used to manage Travelodge's customer surveys and competitions.

Other companies known to be impacted by the Typeform breach include:

The Information Commissioner's Office (ICO) fined Facebook £500,000, the maximum possible, over the Cambridge Analytica data breach scandal, which impacted some 87 million Facebook users. Fortunately for Facebook, the breach occurred before the General Data Protection Regulation came into force in May; the new GDPR empowers the ICO with much tougher financial penalties designed to bring tech giants to book. Let's be honest: £500k is petty cash for the social media giant.
  • Facebook-Cambridge Analytica data scandal
  • Facebook reveals its data-sharing VIPs
  • Cambridge Analytica boss spars with MPs

A UK government report criticised the security of Huawei products, concluding the government had "only limited assurance" that Huawei kit posed no threat to UK national security. I remember being concerned many years ago when I heard BT had ditched US Cisco routers for Huawei routers to save money; not much was said about the national security aspect at the time. The UK government report was written by the Huawei Cyber Security Evaluation Centre (HCSEC), which was set up in 2010 in response to concerns about BT's and other UK companies' reliance on the Chinese manufacturer's devices; by the way, that body is overseen by GCHQ.

Banking hacking group "MoneyTaker" has struck again, this time stealing a reported £700,000 from a Russian bank, according to Group-IB. The group is thought to be behind several other hacking raids against UK, US, and Russian companies. The gang compromised a router which gave them access to the bank's internal network; from that entry point, they were able to find the specific system used to authorise cash transfers and then set up bogus transfers to cash out the £700K.


Porn Extortion Email tied to Password Breach

(An update to this post has been made at the end)

This weekend I received an email forwarded from a stranger. They had received a threatening email and had shared it with a former student of mine to ask for advice. Fortunately, the correct advice in this case was "Ignore it." But they still shared it with me in case we could use it to help others.

The email claims that the sender has planted malware on the recipient's computer and has observed them watching pornography online.   As evidence that they really have control of the computer, the email begins by sharing one of the recipient's former passwords.

They then threaten that they are going to release a video of the recipient, recorded from their webcam while they watched the pornography, unless they receive $1000 in Bitcoin. The good news, as my former student knew, was that this was almost certainly an empty threat. There have been dozens of variations on this scheme, but it is based on the concept that if someone knows your password, they COULD know much more about you. In this case, the password came from a data breach involving a gaming site where the recipient used to hang out online. So, if you think to yourself "This must be real, they know my password!" just remember that there have been HUNDREDS of data breaches where email addresses and their corresponding passwords have been leaked. (The website "Have I Been Pwned?" has collected over 500 million such email/password pairs. In full disclosure, my personal email is in their database TEN times and my work email is in their database SIX times, which doesn't concern me because I follow the proper password practice of using different passwords on every site I visit. Sites including Adobe, which asks you to register before downloading software, and LinkedIn are among some of the giants who have had breaches that revealed passwords. One list circulating on the dark web has 1.4 BILLION userids and passwords gathered from at least 250 distinct data breaches.)

Knowing that context, even if you happen to be one of the millions of Americans who have watched porn online: DON'T PANIC! This email is definitely a fake, using knowledge of a breached password to try to convince you they have blackmail information about you.

We'll go ahead and share the exact text of the email, replacing only the password with the word YOURPASSWORDHERE.

YOURPASSWORDHERE is one of your passphrase. Lets get directly to the point. There is no one who has paid me to investigate you. You don't know me and you are most likely wondering why you are getting this mail?
In fact, I actually installed a malware on the X video clips (porn) web site and do you know what, you visited this site to experience fun (you know what I mean). When you were watching video clips, your browser initiated functioning as a RDP that has a key logger which provided me accessibility to your display screen and also cam. after that, my software obtained your entire contacts from your Messenger, Facebook, and email . After that I made a double-screen video. 1st part shows the video you were viewing (you've got a nice taste omg), and next part shows the view of your web cam, & its you. 
You have got not one but two alternatives. We will go through these choices in details:
First alternative is to neglect this email message. In such a case, I will send out your very own videotape to all of your contacts and also visualize about the embarrassment you will definitely get. And definitely if you happen to be in a romantic relationship, exactly how this will affect?
Latter solution is to compensate me $1000. Let us describe it as a donation. In such a case, I will asap delete your video. You can go forward your daily life like this never occurred and you surely will never hear back again from me.
You'll make the payment through Bitcoin (if you do not know this, search for "how to buy bitcoin" in Google). 
BTC Address: 192hBrF64LcTQUkQRmRAVgLRC5SQRCWshi[CASE sensitive so copy and paste it]
If you are thinking about going to the law, well, this email can not be traced back to me. I have taken care of my moves. I am not attempting to charge a fee a huge amount, I simply want to be rewarded. You have one day in order to pay. I have a specific pixel in this e-mail, and now I know that you have read through this mail. If I do not receive the BitCoins, I will definately send your video to all of your contacts including family members, co-workers, and so forth. Having said that, if I receive the payment, I'll destroy the video right away. If you really want proof, reply with Yes & I definitely will send out your video recording to your 5 friends. This is the non-negotiable offer and thus don't waste mine time & yours by responding to this message.
This particular scam was first seen in the wild back in December of 2017, though some similar versions predate it. However, beginning in late May the scam kicked up in prevalence, and in the second week of July someone's botnet apparently started sending this spam in SERIOUS volumes: there have been more than a dozen news stories about the scam just in the past ten days.

Here's one such warning article from the Better Business Bureau's Scam Tracker.

One thing to mention is that the Bitcoin address means that we can track whether payments have been made to the criminal.  It seems that this particular botnet is using a very large number of unique bitcoin addresses.  It would be extremely helpful to this investigation if you could share in the comments section what Bitcoin address (the "BTC Address") was seen in your copy of the spam email.
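If you do share the address from your copy of the email, it is worth checking first that it was copied exactly (as the scammer notes, the addresses are case sensitive). Here is a minimal Base58Check validation sketch in Python, covering only legacy addresses (the kind that start with "1" or "3"):

```python
import hashlib

# Base58 alphabet used by legacy Bitcoin addresses (no 0, O, I, or l).
BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def is_valid_btc_address(addr):
    """Validate a legacy (Base58Check) Bitcoin address: decode it to the
    25-byte payload and verify the 4-byte double-SHA-256 checksum."""
    try:
        num = 0
        for ch in addr:
            num = num * 58 + BASE58_ALPHABET.index(ch)
        payload = num.to_bytes(25, "big")  # version byte + 20-byte hash + checksum
    except (ValueError, OverflowError):
        return False  # illegal character, or too long to be an address
    checksum = hashlib.sha256(hashlib.sha256(payload[:-4]).digest()).digest()[:4]
    return checksum == payload[-4:]
```

A failed checksum almost always means a copy/paste error; Base58Check is designed so that any single-character change breaks the checksum.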

As always, we encourage any victim of a cyber crime to report it to the FBI's Internet Crime Complaint Center (IC3).

Please feel free to share this note with your friends!
Thank you!


The excellent analysts at the SANS Internet Storm Center have also been gathering bitcoin addresses from victims. In their sample so far, 17% of the Bitcoin addresses have received payments, totaling $235,000, so people truly are falling victim to this scam!

Please continue to share this post and encourage people to add their Bitcoin addresses as a comment below!

Defining Counterintelligence

I've written about counterintelligence (CI) before, but I realized today that some of my writing, and the writing of others, may be confused as to exactly what CI means.

The authoritative place to find an American definition for CI is the United States National Counterintelligence and Security Center. I am more familiar with the old name of this organization, the Office of the National Counterintelligence Executive (ONCIX).

The 2016 National Counterintelligence Strategy cites Executive Order 12333 (as amended) for its definition of CI:

Counterintelligence – Information gathered and activities conducted to identify, deceive, exploit, disrupt, or protect against espionage, other intelligence activities, sabotage, or assassinations conducted for or on behalf of foreign powers, organizations, or persons, or their agents, or international terrorist organizations or activities. (emphasis added)

The strict interpretation of this definition is countering foreign nation state intelligence activities, such as those conducted by China's Ministry of State Security (MSS), the Foreign Intelligence Service of the Russian Federation (SVR RF), Iran's Ministry of Intelligence, or the military intelligence services of those countries and others.

In other words, counterintelligence is countering foreign intelligence. The focus is on the party doing the bad things, and less on what the bad thing is.

The definition, however, is loose enough to encompass others; "organizations," "persons," and "international terrorist organizations" are in scope, according to the definition. This is just about everyone, although criminals are explicitly not mentioned.

The definition is also slightly unbounded by moving beyond "espionage, or other intelligence activities," to include "sabotage, or assassinations." In those cases, the assumption is that foreign intelligence agencies and their proxies are the parties likely to be conducting sabotage or assassinations. In the course of their CI work, paying attention to foreign intelligence agents, the CI team may encounter plans for activities beyond collection.

The bottom line for this post is a cautionary message. It's not appropriate to call all intelligence activities "counterintelligence." It's more appropriate to call countering adversary intelligence activities counterintelligence.

You may use similar or the same approaches as counterintelligence agents when performing your cyber threat intelligence function. For example, you may recruit a source inside a carding forum, or you may plant your own source in a carding forum. This is similar to turning a foreign intelligence agent, or inserting your own agent in a foreign intelligence service. However, activities directed against a carding forum are not counterintelligence. Activities directed against a foreign intelligence service are counterintelligence.

The nature and target of your intelligence activities are what determine if it is counterintelligence, not necessarily the methods you use. Again, this is in keeping with the stricter definition, and not becoming a victim of scope creep.

TA18-201A: Emotet Malware

Original release date: July 20, 2018

Systems Affected

Network Systems


Emotet is an advanced, modular banking Trojan that primarily functions as a downloader or dropper of other banking Trojans. Emotet continues to be among the most costly and destructive malware affecting state, local, tribal, and territorial (SLTT) governments, and the private and public sectors.

This joint Technical Alert (TA) is the result of Multi-State Information Sharing & Analysis Center (MS-ISAC) analytic efforts, in coordination with the Department of Homeland Security (DHS) National Cybersecurity and Communications Integration Center (NCCIC).


Emotet continues to be among the most costly and destructive malware affecting SLTT governments. Its worm-like features result in rapidly spreading network-wide infections, which are difficult to combat. Emotet infections have cost SLTT governments up to $1 million per incident to remediate.

Emotet is an advanced, modular banking Trojan that primarily functions as a downloader or dropper of other banking Trojans. Additionally, Emotet is a polymorphic banking Trojan that can evade typical signature-based detection. It has several methods for maintaining persistence, including auto-start registry keys and services. It uses modular Dynamic Link Libraries (DLLs) to continuously evolve and update its capabilities. Furthermore, Emotet is Virtual Machine-aware and can generate false indicators if run in a virtual environment.

Emotet is disseminated through malspam (emails containing malicious attachments or links) that uses branding familiar to the recipient; it has even been spread using the MS-ISAC name. As of July 2018, the most recent campaigns imitate PayPal receipts, shipping notifications, or “past-due” invoices purportedly from MS-ISAC. Initial infection occurs when a user opens or clicks the malicious download link, PDF, or macro-enabled Microsoft Word document included in the malspam. Once downloaded, Emotet establishes persistence and attempts to propagate through the local networks via incorporated spreader modules.

Figure 1: Malicious email distributing Emotet

Currently, Emotet uses five known spreader modules: NetPass.exe, WebBrowserPassView, Mail PassView, Outlook scraper, and a credential enumerator.

  1. NetPass.exe is a legitimate utility developed by NirSoft that recovers all network passwords stored on a system for the current logged-on user. This tool can also recover passwords stored in the credentials file of external drives.
  2. Outlook scraper is a tool that scrapes names and email addresses from the victim’s Outlook accounts and uses that information to send out additional phishing emails from the compromised accounts.
  3. WebBrowserPassView is a password recovery tool that captures passwords stored by Internet Explorer, Mozilla Firefox, Google Chrome, Safari, and Opera and passes them to the credential enumerator module.
  4. Mail PassView is a password recovery tool that reveals passwords and account details for various email clients such as Microsoft Outlook, Windows Mail, Mozilla Thunderbird, Hotmail, Yahoo! Mail, and Gmail and passes them to the credential enumerator module.
  5. Credential enumerator is a self-extracting RAR file containing two components: a bypass component and a service component. The bypass component is used for the enumeration of network resources and either finds writable share drives using Server Message Block (SMB) or tries to brute force user accounts, including the administrator account. Once an available system is found, Emotet writes the service component on the system, which writes Emotet onto the disk. Emotet’s access to SMB can result in the infection of entire domains (servers and clients).
Figure 2: Emotet infection process

To maintain persistence, Emotet injects code into explorer.exe and other running processes. It can also collect sensitive information, including system name, location, and operating system version, and connects to a remote command and control server (C2), usually through a generated 16-letter domain name that ends in “.eu.” Once Emotet establishes a connection with the C2, it reports a new infection, receives configuration data, downloads and runs files, receives instructions, and uploads data to the C2 server.
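The actual domain-generation logic is not reproduced here; the following hypothetical Python sketch only illustrates the shape of such C2 names (16 pseudo-random lowercase letters under .eu) and a pattern that could flag candidates of that shape in DNS logs:

```python
import random
import re
import string

# Pattern for the reported C2 shape: exactly 16 lowercase letters under .eu.
C2_PATTERN = re.compile(r"^[a-z]{16}\.eu$")

def pseudo_c2_domain(seed):
    # Hypothetical stand-in for Emotet's (unpublished) generator,
    # used only to produce domains of the reported shape.
    rng = random.Random(seed)
    label = "".join(rng.choice(string.ascii_lowercase) for _ in range(16))
    return label + ".eu"
```

Matching on shape alone will also flag legitimate long .eu labels, so a pattern like this is a hunting aid, not a blocklist.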

Emotet artifacts are typically found in arbitrary paths located off of the AppData\Local and AppData\Roaming directories. The artifacts usually mimic the names of known executables. Persistence is typically maintained through Scheduled Tasks or via registry keys. Additionally, Emotet creates randomly-named files in the system root directories that are run as Windows services. When executed, these services attempt to propagate the malware to adjacent systems via accessible administrative shares.

Note: it is essential that privileged accounts are not used to log in to compromised systems during remediation as this may accelerate the spread of the malware.

Example Filenames and Paths:

C:\Users\<username>\AppData\Local\Microsoft\Windows\shedaudio.exe

C:\Users\<username>\AppData\Roaming\Macromedia\Flash Player\macromedia\bin\flashplayer.exe
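Given artifact names like the examples above, a quick triage sweep of a user profile could be sketched in Python. The two filenames are only the examples from this alert; real infections use many other legitimate-looking names, so any match still needs hash and signature review:

```python
import os

# Example artifact names taken from this alert (not an exhaustive list).
SUSPECT_NAMES = {"shedaudio.exe", "flashplayer.exe"}

def find_suspect_files(root):
    """Walk `root` recursively and return paths whose filename matches
    one of the suspect artifact names (case-insensitive)."""
    hits = []
    for dirpath, _subdirs, files in os.walk(root):
        for name in files:
            if name.lower() in SUSPECT_NAMES:
                hits.append(os.path.join(dirpath, name))
    return hits
```

In practice the sweep would be pointed at each user's AppData\Local and AppData\Roaming directories.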

Typical Registry Keys:




System Root Directories:






Negative consequences of Emotet infection include

  • temporary or permanent loss of sensitive or proprietary information,
  • disruption to regular operations,
  • financial losses incurred to restore systems and files, and
  • potential harm to an organization’s reputation.


NCCIC and MS-ISAC recommend that organizations adhere to the following general best practices to limit the effect of Emotet and similar malspam:

  • Use Group Policy Object to set a Windows Firewall rule to restrict inbound SMB communication between client systems. If using an alternative host-based intrusion prevention system (HIPS), consider implementing custom modifications for the control of client-to-client SMB communication. At a minimum, create a Group Policy Object that restricts inbound SMB connections to clients originating from clients.
  • Use antivirus programs, with automatic updates of signatures and software, on clients and servers.
  • Apply appropriate patches and updates immediately (after appropriate testing).
  • Implement filters at the email gateway to filter out emails with known malspam indicators, such as known malicious subject lines, and block suspicious IP addresses at the firewall.
  • If your organization does not have a policy regarding suspicious emails, consider creating one and specifying that all suspicious emails should be reported to the security or IT department.
  • Mark external emails with a banner denoting that they come from an external source. This will assist users in detecting spoofed emails.
  • Provide employees training on social engineering and phishing. Urge employees not to open suspicious emails, click links contained in such emails, or post sensitive information online, and to never provide usernames, passwords, or personal information in answer to any unsolicited request. Educate users to hover over a link with their mouse to verify the destination prior to clicking on the link.
  • Consider blocking file attachments that are commonly associated with malware, such as .dll and .exe, and attachments that cannot be scanned by antivirus software, such as .zip files.
  • Adhere to the principle of least privilege, ensuring that users have the minimum level of access required to accomplish their duties. Limit administrative credentials to designated administrators.
  • Implement Domain-Based Message Authentication, Reporting & Conformance (DMARC), a validation system that minimizes spam emails by detecting email spoofing using Domain Name System (DNS) records and digital signatures.
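DMARC is deployed by publishing a DNS TXT record at the _dmarc subdomain of the sending domain. A minimal example record (the domain and report mailbox are placeholders):

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

The p= tag tells receiving servers what to do with messages that fail SPF/DKIM alignment (none, quarantine, or reject), and rua= names the mailbox for aggregate failure reports.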

If a user or organization believes they may be infected, NCCIC and MS-ISAC recommend running an antivirus scan on the system and taking action to isolate the infected workstation based on the results. If multiple workstations are infected, the following actions are recommended:

  • Identify and shut down the infected machines, and take them off the network;
  • Consider temporarily taking the network offline to perform identification, prevent reinfections, and stop the spread of the malware;
  • Do not log in to infected systems using domain or shared local administrator accounts;
  • Reimage the infected machine(s);
  • After reviewing systems for Emotet indicators, move clean systems to a containment virtual local area network that is segregated from the infected network;
  • Issue password resets for both domain and local credentials;
  • Because Emotet scrapes additional credentials, consider password resets for other applications that may have had stored credentials on the compromised machine(s);
  • Identify the infection source (patient zero); and
  • Review the log files and the Outlook mailbox rules associated with the infected user account to ensure further compromises have not occurred. It is possible that the Outlook account may now have rules to auto-forward all emails to an external email address, which could result in a data breach.


MS-ISAC is the focal point for cyber threat prevention, protection, response, and recovery for the nation’s SLTT governments. More information about this topic, as well as 24/7 cybersecurity assistance for SLTT governments, is available by phone at 866-787-4722, by email, or on MS-ISAC’s website.

To report an intrusion and request resources for incident response or technical assistance, contact NCCIC by email or by phone at 888-282-0870.


Revision History

  • July 20, 2018: Initial version

This product is provided subject to this Notification and this Privacy & Use policy.

How the Rise of Cryptocurrencies Is Shaping the Cyber Crime Landscape: The Growth of Miners


Cyber criminals tend to favor cryptocurrencies because they provide a certain level of anonymity and can be easily monetized. This interest has increased in recent years, stemming far beyond the desire to simply use cryptocurrencies as a method of payment for illicit tools and services. Many actors have also attempted to capitalize on the growing popularity of cryptocurrencies, and subsequent rising price, by conducting various operations aimed at them. These operations include malicious cryptocurrency mining (also referred to as cryptojacking), the collection of cryptocurrency wallet credentials, extortion activity, and the targeting of cryptocurrency exchanges.

This blog post discusses the various trends that we have been observing related to cryptojacking activity, including cryptojacking modules being added to popular malware families, an increase in drive-by cryptomining attacks, the use of mobile apps containing cryptojacking code, cryptojacking as a threat to critical infrastructure, and observed distribution mechanisms.

What Is Mining?

As transactions occur on a blockchain, those transactions must be validated and propagated across the network. As computers connected to the blockchain network (aka nodes) validate and propagate the transactions across the network, the miners include those transactions into "blocks" so that they can be added onto the chain. Each block is cryptographically hashed, and must include the hash of the previous block, thus forming the "chain" in blockchain. In order for miners to compute the complex hashing of each valid block, they must use a machine's computational resources. The more blocks that are mined, the more resource-intensive solving the hash becomes. To overcome this, and accelerate the mining process, many miners will join collections of computers called "pools" that work together to calculate the block hashes. The more computational resources a pool harnesses, the greater the pool's chance of mining a new block. When a new block is mined, the pool's participants are rewarded with coins. Figure 1 illustrates the roles miners play in the blockchain network.
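The mechanics above can be illustrated with a toy proof-of-work loop in Python. This is a deliberately simplified sketch (real networks use binary difficulty targets and structured block headers), but it shows how each block chains to the previous hash and why difficulty drives resource consumption:

```python
import hashlib

def mine_block(prev_hash, transactions, difficulty=3):
    """Search for a nonce whose double-SHA-256 block hash starts with
    `difficulty` hex zeros. The previous block's hash is part of the
    hashed input, which is what links the blocks into a chain."""
    nonce = 0
    while True:
        header = f"{prev_hash}|{transactions}|{nonce}".encode()
        digest = hashlib.sha256(hashlib.sha256(header).digest()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1
```

Each additional leading hex zero multiplies the expected number of hash attempts by 16, which is why pooling computational resources matters so much to miners.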

Figure 1: The role of miners

Underground Interest

FireEye iSIGHT Intelligence has identified eCrime actor interest in cryptocurrency mining-related topics dating back to at least 2009 within underground communities. Keywords that yielded significant volumes include miner, cryptonight, stratum, xmrig, and cpuminer. While searches for certain keywords fail to provide context, the frequency of these cryptocurrency mining-related keywords shows a sharp increase in conversations beginning in 2017 (Figure 2). It is probable that at least a subset of actors prefer cryptojacking over other types of financially motivated operations due to the perception that it does not attract as much attention from law enforcement.

Figure 2: Underground keyword mentions

Monero Is King

The majority of recent cryptojacking operations have overwhelmingly focused on mining Monero, an open-source cryptocurrency based on the CryptoNote protocol, as a fork of Bytecoin. Unlike many cryptocurrencies, Monero uses a technology called "ring signatures," which shuffles users' public keys to eliminate the possibility of identifying a particular user, helping ensure transactions are untraceable. Monero also employs a protocol that generates multiple unique single-use addresses that can be associated only with the payment recipient and are infeasible to reveal through blockchain analysis, ensuring that Monero transactions cannot be linked while also being cryptographically secure.

The Monero blockchain also uses a "memory-hard" hashing algorithm called CryptoNight that, unlike Bitcoin's SHA-256 algorithm, deters application-specific integrated circuit (ASIC) chip mining. This feature is critical to the Monero developers and allows CPU mining to remain feasible and profitable. Due to these inherent privacy-focused features and CPU-mining profitability, Monero has become an attractive option for cyber criminals.

Underground Advertisements for Miners

Because most miner utilities are small, open-source tools that antivirus engines readily flag, many criminals rely on crypters. Crypters are tools that employ encryption, obfuscation, and code-manipulation techniques to keep their tools and malware fully undetectable (FUD). Table 1 highlights some of the most commonly repurposed Monero miner utilities.

XMR Mining Utilities











Table 1: Commonly used Monero miner utilities

The following are sample advertisements for miner utilities commonly observed in underground forums and markets. Advertisements typically range from stand-alone miner utilities to those bundled with other functions, such as credential harvesters, remote administration tool (RAT) behavior, USB spreaders, and distributed denial-of-service (DDoS) capabilities.

Sample Advertisement #1 (Smart Miner + Builder)

In early April 2018, actor "Mon£y" was observed by FireEye iSIGHT Intelligence selling a Monero miner for $80 USD – payable via Bitcoin, Bitcoin Cash, Ether, Litecoin, or Monero – that included unlimited builds, free automatic updates, and 24/7 support. The tool, dubbed Monero Madness (Figure 3), featured a setting called Madness Mode that configures the miner to only run when the infected machine is idle for at least 60 seconds. This allows the miner to work at its full potential without running the risk of being identified by the user. According to the actor, Monero Madness also provides the following features:

  • Unlimited builds
  • Builder GUI (Figure 4)
  • Written in AutoIT (no dependencies)
  • FUD
  • Safer error handling
  • Uses most recent XMRig code
  • Customizable pool/port
  • Packed with UPX
  • Works on all Windows OS (32- and 64-bit)
  • Madness Mode option

Figure 3: Monero Madness

Figure 4: Monero Madness builder

Sample Advertisement #2 (Miner + Telegram Bot Builder)

In March 2018, FireEye iSIGHT Intelligence observed actor "kent9876" advertising a Monero cryptocurrency miner called Goldig Miner (Figure 5). The actor requested payment of $23 USD for either CPU or GPU build or $50 USD for both. Payments could be made with Bitcoin, Ether, Litecoin, Dash, or PayPal. The miner ostensibly offers the following features:

  • Written in C/C++
  • Build size is small (about 100–150 kB)
  • Hides miner process from popular task managers
  • Can run without Administrator privileges (user-mode)
  • Auto-update ability
  • All data encoded with 256-bit key
  • Access to Telegram bot-builder
  • Lifetime support (24/7) via Telegram

Figure 5: Goldig Miner advertisement

Sample Advertisement #3 (Miner + Credential Stealer)

In March 2018, FireEye iSIGHT Intelligence observed actor "TH3FR3D" offering a tool dubbed Felix (Figure 6) that combines a cryptocurrency miner and credential stealer. The actor requested payment of $50 USD payable via Bitcoin or Ether. According to the advertisement, the Felix tool boasted the following features:

  • Written in C# (Version
  • Browser stealer for all major browsers (cookies, saved passwords, auto-fill)
  • Monero miner (uses pool by default, but can be configured)
  • Filezilla stealer
  • Desktop file grabber (.txt and more)
  • Can download and execute files
  • Update ability
  • USB spreader functionality
  • PHP web panel

Figure 6: Felix HTTP

Sample Advertisement #4 (Miner + RAT)

In January 2018, FireEye iSIGHT Intelligence observed actor "ups" selling a miner for any Cryptonight-based cryptocurrency (e.g., Monero and Dashcoin) for either Linux or Windows operating systems. In addition to being a miner, the tool allegedly provides local privilege escalation through the CVE-2016-0099 exploit, can download and execute remote files, and can receive commands. Buyers could purchase the Windows or Linux tool for €200 EUR, or €325 EUR for both the Linux and Windows builds, payable via Monero, Bitcoin, Ether, or Dash. According to the actor, the tool offered the following:

Windows Build Specifics

  • Written in C++ (no dependencies)
  • Miner component based on XMRig
  • Easy cryptor and VPS hosting options
  • Web panel (Figure 7)
  • Uses TLS for secured communication
  • Download and execute
  • Auto-update ability
  • Cleanup routine
  • Receive remote commands
  • Perform privilege escalation
  • Features "game mode" (mining stops if user plays game)
  • Proxy feature (based on XMRig)
  • Support (for €20/month)
  • Kills other miners from list
  • Hidden from TaskManager
  • Configurable pool, coin, and wallet (via panel)
  • Can mine the following Cryptonight-based coins:
    • Monero
    • Bytecoin
    • Electroneum
    • DigitalNote
    • Karbowanec
    • Sumokoin
    • Fantomcoin
    • Dinastycoin
    • Dashcoin
    • LeviarCoin
    • BipCoin
    • QuazarCoin
    • Bitcedi

Linux Build Specifics

  • Issues running on Linux servers (higher performance on desktop OS)
  • Compatible with AMD64 processors on Ubuntu, Debian, Mint (support for CentOS later)

Figure 7: Miner bot web panel

Sample Advertisement #5 (Miner + USB Spreader + DDoS Tool)

In August 2017, actor "MeatyBanana" was observed by FireEye iSIGHT Intelligence selling a Monero miner utility that included the ability to download and execute files and perform DDoS attacks. The actor offered the software for $30 USD, payable via Bitcoin. Ostensibly, the tool works with CPUs only and offers the following features:

  • Configurable miner pool and port (default to minergate)
  • Compatible with both 64-bit and 32-bit (x86) Windows OS
  • Hides from the following popular task managers:
    • Windows Task Manager
    • Process Killer
    • KillProcess
    • System Explorer
    • Process Explorer
    • AnVir
    • Process Hacker
  • Masked as a system driver
  • Does not require administrator privileges
  • No dependencies
  • Registry persistence mechanism
  • Ability to perform "tasks" (download and execute files, navigate to a site, and perform DDoS)
  • USB spreader
  • Support after purchase

The Cost of Cryptojacking

The presence of mining software on a network can generate costs on three fronts as the miner surreptitiously allocates resources:

  1. Degradation in system performance
  2. Increased cost in electricity
  3. Potential exposure of security holes

Cryptojacking targets computer processing power, which can lead to high CPU load and degraded performance. In extreme cases, CPU overload may even cause the operating system to crash. Infected machines may also attempt to infect neighboring machines and therefore generate large amounts of traffic that can overload victims' computer networks.

In the case of operational technology (OT) networks, the consequences could be severe. Supervisory control and data acquisition/industrial control systems (SCADA/ICS) environments predominantly rely on decades-old hardware and low-bandwidth networks, so even a slight increase in CPU load or network traffic could leave industrial infrastructure unresponsive, impeding operators from interacting with the controlled process in real time.

The electricity cost, measured in kilowatt-hours (kWh), depends on several factors: how often the malicious miner software is configured to run, how many threads it is configured to use while running, and the number of machines mining on the victim's network. The cost per kWh is also highly variable and depends on geolocation. For example, security researchers who ran Coinhive on a machine for 24 hours found that the electrical consumption was 1.212 kWh. They estimated that this equated to monthly electrical costs of $10.50 USD in the United States, $5.45 USD in Singapore, and $12.30 USD in Germany.
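Those monthly estimates follow directly from the 1.212 kWh measurement. In the sketch below, the per-kWh rates are back-calculated assumptions chosen to approximately reproduce the cited figures, not quoted utility prices:

```python
DAILY_KWH = 1.212  # measured consumption over the 24-hour Coinhive test

def monthly_cost(rate_usd_per_kwh, days=30):
    """Monthly electricity cost for one continuously mining machine."""
    return DAILY_KWH * days * rate_usd_per_kwh

# Assumed $/kWh rates, back-derived from the researchers' estimates.
rates = {"United States": 0.289, "Singapore": 0.150, "Germany": 0.338}
for country, rate in rates.items():
    print(f"{country}: ${monthly_cost(rate):.2f}/month")
```

Multiply by the number of infected machines on a network and the cost of an otherwise "quiet" infection becomes substantial.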

Cryptojacking can also highlight often overlooked security holes in a company's network. Organizations infected with cryptomining malware are also likely vulnerable to more severe exploits and attacks, ranging from ransomware to ICS-specific malware such as TRITON.

Cryptocurrency Miner Distribution Techniques

In order to maximize profits, cyber criminals widely disseminate their miners using various techniques such as incorporating cryptojacking modules into existing botnets, drive-by cryptomining attacks, the use of mobile apps containing cryptojacking code, and distributing cryptojacking utilities via spam and self-propagating utilities. Threat actors can use cryptojacking to affect numerous devices and secretly siphon their computing power. Some of the most commonly observed devices targeted by these cryptojacking schemes are:

  • User endpoint machines
  • Enterprise servers
  • Websites
  • Mobile devices
  • Industrial control systems

Cryptojacking in the Cloud

Private sector companies and governments alike are increasingly moving their data and applications to the cloud, and cyber threat groups have been moving with them. Recently, there have been various reports of actors conducting cryptocurrency mining operations specifically targeting cloud infrastructure. Cloud infrastructure is increasingly a target for cryptojacking operations because it offers actors an attack surface with large amounts of processing power in an environment where CPU usage and electricity costs are already expected to be high, thus allowing their operations to potentially go unnoticed. We assess with high confidence that threat actors will continue to target enterprise cloud networks in efforts to harness their collective computational resources for the foreseeable future.

The following are some real-world examples of cryptojacking in the cloud:

  • In February 2018, FireEye researchers published a blog detailing various techniques actors used in order to deliver malicious miner payloads (specifically to vulnerable Oracle servers) by abusing CVE-2017-10271. Refer to our blog post for more detailed information regarding the post-exploitation and pre-mining dissemination techniques used in those campaigns.
  • In March 2018, Bleeping Computer reported on the trend of cryptocurrency mining campaigns moving to the cloud via vulnerable Docker and Kubernetes applications, which are two software tools used by developers to help scale a company's cloud infrastructure. In most cases, successful attacks occur due to misconfigured applications and/or weak security controls and passwords.
  • In February 2018, Bleeping Computer also reported on hackers who breached Tesla's cloud servers to mine Monero. Attackers identified a Kubernetes console that was not password protected, allowing them to discover login credentials for the broader Tesla Amazon Web services (AWS) S3 cloud environment. Once the attackers gained access to the AWS environment via the harvested credentials, they effectively launched their cryptojacking operations.
  • Reports of cryptojacking activity due to misconfigured AWS S3 cloud storage buckets have also been observed, as was the case in the LA Times online compromise in February 2018. The presence of vulnerable AWS S3 buckets allows anyone on the internet to access and change hosted content, including the ability to inject mining scripts or other malicious software.

Incorporation of Cryptojacking into Existing Botnets

FireEye iSIGHT Intelligence has observed multiple prominent botnets such as Dridex and Trickbot incorporate cryptocurrency mining into their existing operations. Many of these families are modular in nature and have the ability to download and execute remote files, thus allowing the operators to easily turn their infections into cryptojacking bots. While these operations have traditionally been aimed at credential theft (particularly of banking credentials), adding mining modules or downloading secondary mining payloads provides the operators another avenue to generate additional revenue with little effort. This is especially true in cases where the victims were deemed unprofitable or have already been exploited in the original scheme.

The following are some real-world examples of cryptojacking being incorporated into existing botnets:

  • In early February 2018, FireEye iSIGHT Intelligence observed Dridex botnet ID 2040 download a Monero cryptocurrency miner based on the open-source XMRig miner.
  • On Feb. 12, 2018, FireEye iSIGHT Intelligence observed the banking malware IcedID injecting Monero-mining JavaScript into webpages for specific, targeted URLs. The IcedID injects launched an anonymous miner using the mining code from Coinhive's AuthedMine.
  • In late 2017, Bleeping Computer reported that security researchers with Radware observed the hacking group CodeFork leveraging the popular downloader Andromeda (aka Gamarue) to distribute a miner module to their existing botnets.
  • In late 2017, FireEye researchers observed Trickbot operators deploy a new module named "testWormDLL" that is a statically compiled copy of the popular XMRig Monero miner.
  • On Aug. 29, 2017, Security Week reported on a variant of the popular Neutrino banking Trojan that includes a Monero miner module. According to their reporting, the new variant no longer aims at stealing bank card data, but instead is limited to downloading and executing modules from a remote server.

Drive-By Cryptojacking

FireEye iSIGHT Intelligence has examined various customer reports of browser-based cryptocurrency mining. Browser-based mining scripts have been observed on compromised websites, third-party advertising platforms, and have been legitimately placed on websites by publishers. While coin mining scripts can be embedded directly into a webpage's source code, they are frequently loaded from third-party websites. Identifying and detecting websites that have embedded coin mining code can be difficult since not all coin mining scripts are authorized by website publishers, such as in the case of a compromised website. Further, in cases where coin mining scripts were authorized by a website owner, they are not always clearly communicated to site visitors. At the time of reporting, the most popular script being deployed in the wild is Coinhive. Coinhive is an open-source JavaScript library that, when loaded on a vulnerable website, can mine Monero using the site visitor's CPU resources, unbeknownst to the user, as they browse the site.

The following are some real-world examples of Coinhive being deployed in the wild:

  • In September 2017, Bleeping Computer reported that the authors of SafeBrowse, a Chrome extension with more than 140,000 users, had embedded the Coinhive script in the extension's code that allowed for the mining of Monero using users' computers and without getting their consent.
  • During mid-September 2017, users on Reddit began complaining about increased CPU usage when they navigated to a popular torrent site, The Pirate Bay (TPB). The spike in CPU usage was a result of Coinhive's script being embedded within the site's footer. According to TPB operators, it was implemented as a test to generate passive revenue for the site (Figure 8).
  • In December 2017, researchers with Sucuri reported on the presence of the Coinhive script being hosted on a service that allows users to publish web pages directly from GitHub repositories.
  • Other reporting disclosed the Coinhive script being embedded on the Showtime domain as well as on the LA Times website, both surreptitiously mining Monero.
  • A majority of in-browser cryptojacking activity is transitory in nature and will last only as long as the user’s web browser is open. However, researchers with Malwarebytes Labs uncovered a technique that allows for continued mining activity even after the browser window is closed. The technique leverages a pop-under window surreptitiously hidden under the taskbar. As the researchers pointed out, closing the browser window may not be enough to interrupt the activity, and more advanced actions, like running the Task Manager, may be required.

Figure 8: Statement from TPB operators on Coinhive script

Malvertising and Exploit Kits

Malvertisements – malicious ads on legitimate websites – commonly redirect visitors of a site to an exploit kit landing page. These landing pages are designed to scan a system for vulnerabilities, exploit those vulnerabilities, and download and execute malicious code onto the system. Notably, the malicious advertisements can be placed on legitimate sites and visitors can become infected with little to no user interaction. This distribution tactic is commonly used by threat actors to widely distribute malware and has been employed in various cryptocurrency mining operations.

The following are some real-world examples of this activity:

  • In early 2018, researchers with Trend Micro reported that a modified miner script was being disseminated across YouTube via Google's DoubleClick ad delivery platform. The script was configured to generate a random number variable between 1 and 100; when the variable was above 10, it would launch the Coinhive script coinhive.min.js, which harnessed 80 percent of the CPU power to mine Monero. When the variable was below 10, it launched a modified Coinhive script that was likewise configured to harness 80 percent of CPU power to mine Monero. This custom miner connected to the mining pool wss[:]//ws[.]l33tsite[.]info:8443, likely to avoid Coinhive's fees.
  • In April 2018, researchers with Trend Micro also discovered a JavaScript code based on Coinhive injected into an AOL ad platform. The miner used the following private mining pools: wss[:]//wsX[.]www.datasecu[.]download/proxy and wss[:]//www[.]jqcdn[.]download:8893/proxy. Examination of other sites compromised by this campaign showed that in at least some cases the operators were hosting malicious content on unsecured AWS S3 buckets.
  • Since July 16, 2017, FireEye has observed the Neptune Exploit Kit redirect to ads for hiking clubs and MP3 converter domains. Payloads associated with the latter include Monero CPU miners that are surreptitiously installed on victims' computers.
  • In January 2018, Check Point researchers discovered a malvertising campaign leading to the Rig Exploit Kit, which served the XMRig Monero miner utility to unsuspecting victims.

Mobile Cryptojacking

In addition to targeting enterprise servers and user machines, threat actors have also targeted mobile devices for cryptojacking operations. While this technique is less common, likely due to the limited processing power afforded by mobile devices, cryptojacking on mobile devices remains a threat as sustained power consumption can damage the device and dramatically shorten the battery life. Threat actors have been observed targeting mobile devices by hosting malicious cryptojacking apps on popular app stores and through drive-by malvertising campaigns that identify users of mobile browsers.

The following are some real-world examples of mobile devices being used for cryptojacking:

  • During 2014, FireEye iSIGHT Intelligence reported on multiple Android malware apps capable of mining cryptocurrency:
    • In March 2014, Android malware named "CoinKrypt" was discovered, which mined Litecoin, Dogecoin, and CasinoCoin currencies.
    • In March 2014, another form of Android malware – "Android.Trojan.MuchSad.A" or "ANDROIDOS_KAGECOIN.HBT" – was observed mining Bitcoin, Litecoin, and Dogecoin currencies. The malware was disguised as copies of popular applications, including "Football Manager Handheld" and "TuneIn Radio." Variants of this malware have reportedly been downloaded by millions of Google Play users.
    • In April 2014, Android malware named "BadLepricon," which mined Bitcoin, was identified. The malware was reportedly being bundled into wallpaper applications hosted on the Google Play store, at least several of which received 100 to 500 installations before being removed.
    • In October 2014, a type of mobile malware called "Android Slave" was observed in China; the malware was reportedly capable of mining multiple virtual currencies.
  • In December 2017, researchers with Kaspersky Labs reported on a new multi-faceted Android malware capable of a variety of actions including mining cryptocurrencies and launching DDoS attacks. The resource load created by the malware has reportedly been high enough that it can cause the battery to bulge and physically destroy the device. The malware, dubbed Loapi, is unique in the breadth of its potential actions. It has a modular framework that includes modules for malicious advertising, texting, web crawling, Monero mining, and other activities. Loapi is thought to be the work of the same developers behind the 2015 Android malware Podec, and is usually disguised as an anti-virus app.
  • In January 2018, SophosLabs released a report detailing their discovery of 19 mobile apps hosted on Google Play that contained embedded Coinhive-based cryptojacking code, some of which were downloaded anywhere from 100,000 to 500,000 times.
  • Between November 2017 and January 2018, researchers with Malwarebytes Labs reported on a drive-by cryptojacking campaign that affected millions of Android mobile browsers to mine Monero.

Cryptojacking Spam Campaigns

FireEye iSIGHT Intelligence has observed several cryptocurrency miners distributed via spam campaigns, a commonly used tactic to indiscriminately distribute malware. We expect malicious actors will continue to use this method to disseminate cryptojacking code for as long as cryptocurrency mining remains profitable.

In late November 2017, FireEye researchers identified a spam campaign delivering a malicious PDF attachment designed to appear as a legitimate invoice from the largest port and container service in New Zealand, Lyttelton Port of Christchurch (Figure 9). Once opened, the PDF would launch a PowerShell script that downloaded a Monero miner from a remote host. The malicious miner then connected to remote mining pools.

Figure 9: Sample lure attachment (PDF) that downloads malicious cryptocurrency miner

Additionally, a massive cryptojacking spam campaign was discovered by FireEye researchers during January 2018 that was designed to look like legitimate financial services-related emails. The spam email directed victims to an infection link that ultimately dropped a malicious ZIP file onto the victim's machine. Contained within the ZIP file was a cryptocurrency miner utility (MD5: 80b8a2d705d5b21718a6e6efe531d493) configured to mine Monero and connect to a mining pool. While each of the spam email lures and associated ZIP filenames were different, the same cryptocurrency miner sample was dropped across all observed instances (Table 2).

ZIP Filenames

Table 2: Sampling of observed ZIP filenames delivering cryptocurrency miner

Cryptojacking Worms

Following the WannaCry attacks, actors began to increasingly incorporate self-propagating functionality within their malware. Some of the observed self-spreading techniques have included copying to removable drives, brute forcing SSH logins, and leveraging the leaked NSA exploit EternalBlue. Cryptocurrency mining operations significantly benefit from this functionality since wider distribution of the malware multiplies the amount of CPU resources available to them for mining. Consequently, we expect that additional actors will continue to develop this capability.

The following are some real-world examples of cryptojacking worms:

  • In May 2017, Proofpoint reported a large campaign distributing mining malware "Adylkuzz." This cryptocurrency miner was observed leveraging the EternalBlue exploit to rapidly spread itself over corporate LANs and wireless networks. This activity included the use of the DoublePulsar backdoor to download Adylkuzz. Adylkuzz infections create botnets of Windows computers that focus on mining Monero.
  • Security researchers with Sensors identified a Monero miner worm, dubbed "Rarogminer," in April 2018 that would copy itself to removable drives each time a user inserted a flash drive or external HDD.
  • In January 2018, researchers at F5 discovered a new Monero cryptomining botnet that targets Linux machines. PyCryptoMiner is based on Python script and spreads via the SSH protocol. The bot can also use Pastebin for its command and control (C2) infrastructure. The malware spreads by trying to guess the SSH login credentials of target Linux systems. Once that is achieved, the bot deploys a simple base64-encoded Python script that connects to the C2 server to download and execute more malicious Python code.

Detection Avoidance Methods

Another trend worth noting is the use of proxies to avoid detection. The implementation of mining proxies presents an attractive option for cyber criminals because it allows them to avoid developer and commission fees of 30 percent or more. Avoiding the use of common cryptojacking services such as Coinhive, Crypto-Loot, and Deepminer, and instead hosting cryptojacking scripts on actor-controlled infrastructure, can circumvent many of the common strategies taken to block this activity via domain or file name blacklisting.

In March 2018, Bleeping Computer reported on the use of cryptojacking proxy servers and determined that as the use of cryptojacking proxy services increases, the effectiveness of ad blockers and browser extensions that rely on blacklists decreases significantly.

Several mining proxy tools can be found on GitHub, such as the XMRig Proxy tool, which greatly reduces the number of active pool connections, and the CoinHive Stratum Mining Proxy, which uses Coinhive’s JavaScript mining library to provide an alternative to using official Coinhive scripts and infrastructure.

In addition to using proxies, actors may also establish their own self-hosted miner apps, either on private servers or on cloud-based servers that support Node.js. Although private servers may provide some benefit over using a commercial mining service, they are still subject to easy blacklisting and require more operational effort to maintain. According to Sucuri researchers, cloud-based servers provide many benefits to actors looking to host their own mining applications, including:

  • Available free or at low-cost
  • No maintenance, just upload the crypto-miner app
  • Harder to block as blacklisting the host address could potentially impact access to legitimate services
  • Resilient to permanent takedown as new hosting accounts can more easily be created using disposable accounts

The combination of proxies and crypto-miners hosted on actor-controlled cloud infrastructure presents a significant hurdle to security professionals, as both make cryptojacking operations more difficult to detect and take down.

Mining Victim Demographics

Based on data from FireEye detection technologies, the detection of cryptocurrency miner malware has increased significantly since the beginning of 2018 (Figure 10), with the most popular mining pools being minergate and nanopool (Figure 11), and the most heavily affected country being the U.S. (Figure 12). Consistent with other reporting, the education sector remains most affected, likely due to more relaxed security controls across university networks and students taking advantage of free electricity to mine cryptocurrencies (Figure 13).

Figure 10: Cryptocurrency miner detection activity per month

Figure 11: Commonly observed pools and associated ports

Figure 12: Top 10 affected countries

Figure 13: Top five affected industries

Figure 14: Top affected industries by country

Mitigation Techniques

Unencrypted Stratum Sessions

According to security researchers at Cato Networks, in order for a miner to participate in pool mining, the infected machine must run native or JavaScript-based code that uses the Stratum protocol over TCP or HTTP/S. The Stratum protocol uses a publish/subscribe architecture in which clients send subscription requests to join a pool and servers send (publish) messages to their subscribed clients. These messages are simple, readable JSON-RPC messages. Subscription requests include the following entities: id, method, and params (Figure 15). A deep packet inspection (DPI) engine can be configured to look for these parameters in order to block Stratum over unencrypted TCP.

Figure 15: Stratum subscription request parameters
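
As an illustration of the DPI approach described above, the sketch below flags an unencrypted TCP payload as a likely Stratum message by checking for the id/method/params entities. This is a minimal illustration, not a production DPI rule; the method names beyond mining.subscribe (mining.authorize, mining.submit, and the login method used by some Monero pools) are assumptions about common Stratum dialects.

```python
import json

# Method names commonly seen in Stratum JSON-RPC traffic (assumed list,
# not exhaustive; some Monero pools use "login" instead).
STRATUM_METHODS = {"mining.subscribe", "mining.authorize", "mining.submit", "login"}

def looks_like_stratum(payload: bytes) -> bool:
    """Heuristically flag a TCP payload as a Stratum JSON-RPC message
    by checking for the id, method, and params entities."""
    try:
        msg = json.loads(payload.decode("utf-8").strip())
    except (ValueError, UnicodeDecodeError):
        return False
    if not isinstance(msg, dict):
        return False
    # Subscription requests carry all three entities: id, method, params.
    if not {"id", "method", "params"} <= msg.keys():
        return False
    return str(msg.get("method")) in STRATUM_METHODS

# A typical pool-subscription request as it might appear on the wire:
sample = b'{"id": 1, "method": "mining.subscribe", "params": ["xmrig/2.6"]}'
```

A real DPI engine would apply such a check to reassembled TCP streams rather than individual payloads, but the matching logic is the same.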

Encrypted Stratum Sessions

In the case of JavaScript-based miners running Stratum over HTTPS, detection is more difficult for DPI engines that do not decrypt TLS traffic. To mitigate encrypted mining traffic on a network, organizations may blacklist the IP addresses and domains of popular mining pools. However, the downside to this is identifying and updating the blacklist, as locating a reliable and continually updated list of popular mining pools can prove difficult and time consuming.
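
As a sketch of the blacklisting approach, the check below matches a hostname (taken, for example, from a DNS query or a TLS SNI field) against a locally maintained pool list. The domains shown are illustrative placeholders drawn from commonly reported pools; as noted above, sourcing and continually updating such a list is the hard part.

```python
# Illustrative pool blacklist; in practice this would be fed from a
# maintained intelligence source, not hard-coded.
POOL_BLACKLIST = {
    "pool.minergate.com",
    "xmr.nanopool.org",
    "pool.supportxmr.com",
}

def is_blacklisted(hostname: str) -> bool:
    """Match the exact host or any parent domain on the blacklist,
    so regional prefixes (eu., us., ...) are caught as well."""
    host = hostname.lower().rstrip(".")
    parts = host.split(".")
    # Check the host itself and each parent suffix: a.b.c -> b.c -> c
    return any(".".join(parts[i:]) in POOL_BLACKLIST for i in range(len(parts)))
```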

Browser-Based Sessions

Identifying and detecting websites that have embedded coin mining code can be difficult since not all coin mining scripts are authorized by website publishers (as in the case of a compromised website). Further, in cases where coin mining scripts were authorized by a website owner, they are not always clearly communicated to site visitors.

As defenses evolve to prevent unauthorized coin mining activities, so will the techniques used by actors; however, blocking some of the most common indicators that we have observed to date may be effective in combatting a significant amount of the CPU-draining mining activities that customers have reported. Generic detection strategies for browser-based cryptocurrency mining include:

  • Blocking domains known to have hosted coin mining scripts
  • Blocking the websites of known mining projects, such as Coinhive
  • Blocking scripts altogether
  • Using an ad-blocker or coin mining-specific browser add-ons
  • Detecting commonly used naming conventions
  • Alerting and blocking traffic destined for known popular mining pools
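
Several of these strategies reduce to matching page content or traffic against known indicators. The sketch below scans HTML for a handful of commonly reported mining-script markers; the patterns are illustrative examples only, not an authoritative or complete signature set.

```python
import re

# Example indicators drawn from publicly reported campaigns (Coinhive,
# AuthedMine, deepMiner, WebSocket mining proxies); assumed, not exhaustive.
MINING_PATTERNS = [
    re.compile(r"coinhive(\.min)?\.js", re.I),
    re.compile(r"authedmine", re.I),
    re.compile(r"CoinHive\.Anonymous", re.I),
    re.compile(r"deepMiner", re.I),
    re.compile(r"wss?://[^\"'\s]*proxy", re.I),  # self-hosted mining proxies
]

def find_mining_indicators(html: str) -> list:
    """Return the patterns that match the page source, for alerting."""
    return [p.pattern for p in MINING_PATTERNS if p.search(html)]
```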

Some of these detection strategies may also be of use in blocking some mining functionality included in existing financial malware as well as mining-specific malware families.

It is important to note that JavaScript used in browser-based cryptojacking activity cannot access files on disk. However, if a host has inadvertently navigated to a website hosting mining scripts, we recommend purging cache and other browser data.


Outlook and Implications

In underground communities and marketplaces there has been significant interest in cryptojacking operations, and numerous campaigns have been observed and reported by security researchers. These developments demonstrate the continued upward trend of threat actors conducting cryptocurrency mining operations, which we expect to see a continued focus on throughout 2018. Notably, malicious cryptocurrency mining may be seen as preferable due to the perception that it does not attract as much attention from law enforcement as other forms of fraud or theft. Further, victims may not realize their computer is infected beyond noticing a slowdown in system performance.

Due to its inherent privacy-focused features and CPU-mining profitability, Monero has become one of the most attractive cryptocurrency options for cyber criminals. We believe that it will continue to be threat actors' primary cryptocurrency of choice, so long as the Monero blockchain maintains privacy-focused standards and is ASIC-resistant. If in the future the Monero protocol ever downgrades its security and privacy-focused features, then we assess with high confidence that threat actors will move to use another privacy-focused coin as an alternative.

Because of the anonymity associated with the Monero cryptocurrency and electronic wallets, as well as the availability of numerous cryptocurrency exchanges and tumblers, attribution of malicious cryptocurrency mining is very challenging for authorities, and malicious actors behind such operations typically remain unidentified. Threat actors will undoubtedly continue to demonstrate high interest in malicious cryptomining so long as it remains profitable and relatively low risk.

Veracode Dynamic Analysis: Reduce the Risk of a Breach

Veracode Dynamic Analysis is a dynamic scanning solution that features automation, depth of coverage, and unmatched scalability. Built on microservices and cloud technologies, the Veracode Dynamic Analysis solution is available on the Veracode SaaS platform. Veracode Dynamic Analysis helps both vulnerability managers tasked with safeguarding the entire web application portfolio and AppSec managers tasked with safeguarding critical applications in pre-production. With the frameworks developers use to build web applications changing often, and the push toward single-page applications, Veracode Dynamic Analysis gives you the automated dynamic scanning you need to find vulnerabilities quickly and accurately.

Benefits of Scheduling Automation

Consistent dynamic scanning is key to keeping your web applications safe, and consistent scanning is achievable with an automated dynamic scanning solution. Imagine your CISO tells you to scan your web apps as often as feasible. Depending on remediation frequency, you come up with a quarterly, monthly, or weekly scanning schedule. To add additional complexity, IT gives you a maintenance window when dynamic scanning cannot occur. If you’re part of a global company, you also have time zones to contend with, making it virtually impossible to depend on a manual pause and resume, not to mention the inconvenience of waking up at 3:00 AM to pause a running scan. With all these variables to handle, you need a dynamic scanning solution that provides true automation to handle scheduling and IT maintenance windows, so you can “set it and forget it.” 

Recurring Scan Scheduling provides the ability to set up a schedule such that the application can be automatically scanned on a weekly, monthly, or quarterly cadence (or anything in between). Once the schedule has been set up, the dynamic scan will kick off automatically at the defined cadence. If the scan has been set up to start on a Tuesday, it will maintain that start day for the weekly scans to avoid running into weekends and holidays.

Automated Pause & Resume provides the ability to designate a maintenance window when the applications won’t be scanned. Dynamic scanning will be automatically paused when the IT maintenance window begins and automatically resume when the applications can be scanned. The pause and resume functionality has been built to ensure scanning resumes where it left off, with the goal of full coverage.

The screenshot below shows how to set up a weekly recurring scan that runs year round, pauses at midnight, and resumes at 4:00 AM each day.

  • Each week the application is dynamically scanned with the automated schedule and scan kick-off.
  • The system automatically pauses at the start of the maintenance window at 12:00 AM and resumes scanning at 4:00 AM.
  • You can adjust the duration based on the size of the application and the number of applications scanned in the batch to get the best coverage.
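
The pause-and-resume behavior above boils down to checking the clock against a configured window. Below is a minimal, product-agnostic sketch of that logic (not Veracode's actual implementation), using the example's midnight-to-4:00 AM maintenance window:

```python
from datetime import datetime, time

# Example maintenance window mirroring the scenario above: 12:00-4:00 AM daily.
WINDOW_START = time(0, 0)
WINDOW_END = time(4, 0)

def in_maintenance_window(t: time, start: time = WINDOW_START,
                          end: time = WINDOW_END) -> bool:
    """True if t falls inside the window; also handles windows that
    cross midnight (e.g. 22:00-02:00)."""
    if start <= end:
        return start <= t < end
    return t >= start or t < end

def scanning_allowed(now: datetime) -> bool:
    """A scheduled scan runs (or resumes) only outside the window."""
    return not in_maintenance_window(now.time())
```

A scheduler would poll a check like this (or compute the next window boundary) to pause a running scan at midnight and resume it at 4:00 AM, picking up where it left off.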

Authenticated Batch Scanning provides the ability to increase coverage by scanning behind the login screen, using a multitude of login mechanisms such as auto login, basic authentication, or uploading a login script. You can depend on the pre-scan feature to provide accurate feedback on the connection and authentication for the application under test, so you can fix any access issues ahead of the scheduled start time. In addition, a batch of scans can be kicked off at the same time to allow concurrent scanning with authentication. You save a lot of time when all applications can be concurrently scanned, with coverage for single page application frameworks and the ability to cover large web applications quickly.

Dynamic Analysis makes it easy to onboard applications and provides multiple input mechanisms. Uploading a CSV file is a quick way for large and small companies to take advantage of scanning applications concurrently.

Show Me the Results: Consolidated View

Veracode Dynamic Analysis provides visibility into the scanning process to give you peace of mind and comprehensive results once the scanning is complete. The Veracode Platform’s Triage Flaw Viewer provides CWE details, vulnerability severity, along with request/response. In addition, the Platform provides reports to show scan coverage, summary reports for executives, and detailed reports for AppSec teams.

The goal of dynamic scanning is to find exploitable vulnerabilities at runtime and remediate the issues found. The Dynamic Flaw Inventory provides a dashboard of historical vulnerability information, allowing AppSec managers to track team progress toward fixing vulnerabilities.

Veracode Dynamic Analysis gives you a solution to scan your entire portfolio of web applications with ease, provides accurate results, and puts you on the path to remediate the findings. Even if you are running static scans early in the SDLC, dynamically scanning your web application at runtime uncovers exploitable vulnerabilities that static scans won’t find. Use our dynamic scanning solution to find and remediate flaws before a hacker exploits the vulnerability, resulting in a breach.

I’d love to hear your feedback

Would Veracode Dynamic Analysis benefit your AppSec program and reduce the risk of a breach? I’d like to hear your thoughts. To learn more, please visit us online, or click here to schedule a demo.

When Disaster Comes Calling

There are times like this when I can’t help but wonder about disaster recovery plans. A large number of companies that I have worked at or spoken with over the years seemed to pay little more than lip service to this rather significant elephant in the room. This came to mind today while I was reading about the storm that ran roughshod over Toronto. In the midst of all the flooding I read about the servers at Toronto’s Pearson airport (YYZ). They had become, well, rather wet. There was “flooding in server rooms,” according to their tweet on July 8th at 9:16 pm.

This really got me thinking as to how this could have happened in the first place.

At one organization that I worked for, the role of disaster recovery planning fell to an individual who had neither the interest nor the wherewithal to accomplish the task. This is a real problem for many companies and organizations. The fate of their operations can, at times, reside in the hands of someone who is disinclined to properly perform the task.

Of course this is not a truism of every company. But there are many instances where it is the sheer force of will of the staff that restores service in the event of an outage. One company that I worked for suffered an SAP outage that left it unable to pay invoices. The impact was a massive financial burden, and it took the better part of a month to sort out. There was no disaster recovery plan. There was no system backup. There was no failover. Through the Herculean efforts of the staff, invoices were paid manually.

A second example that I can’t help but pull from the archives was when I was working for a certain power company. It was the end of the day and I was heading for the door with my coworker. We came upon our head of IT operations and one of the building security guards working feverishly to contain a water leak in the janitorial closet. The faucet would not close. We dropped our bags and pitched in to help.

In relatively short order the tap sheared off from the wall and the real flooding began. The difficulty that presented itself was that the main water shut-off valve was nowhere to be found. There was no disaster recovery plan that covered this contingency. To make matters worse, the computer control room was located on the floor directly below the janitorial closet.

Um, yeah.

Ultimately the situation was resolved and the control room was saved. But, it should have never gotten to that point.

So what is the actionable takeaway from this post? Take some time to review your organization's disaster recovery plans. Are backups taken? Are they tested? Are they stored offsite? Does the disaster recovery plan even exist anywhere on paper? Has that plan been tested with the staff? No plan survives first contact with the “enemy,” but it is far better to be well trained and prepared than to be caught unawares.

Even if you’re not directly involved with the plans in your shop, be sure to ask the question: are we prepared?

Originally posted on CSO Online by me.

The post When Disaster Comes Calling appeared first on Liquidmatrix Security Digest.

By targeting encrypted content, Australia threatens press freedom

Jole Aron

The Australian government is considering legislation that would endanger source protection, confidential reporting processes, and the privacy of everyone in an ill-conceived effort to grant law enforcement easier access to electronic communications.

Freedom of the Press Foundation has joined a group of digital rights organizations in calling for the Australian government to refrain from any effort to weaken access to encrypted communication services. “We strongly urge the government to commit to not only supporting, but investing in the development and use of encryption and other security tools and technologies that protect users and systems,” the open letter to Australian officials states.

While it has not yet introduced such legislation, the government has consistently reiterated its intention to do so over the past year. In July 2017, Australian Prime Minister Turnbull and Attorney General George Brandis held a press conference at which they first stated their intention to force communications companies to comply with law enforcement decryption efforts. Months later, the foreign minister said legislation intended to enlist communication providers in stopping terrorism was imminent.

It’s unclear what this legislation will look like, but communication companies or device makers could face significant government fines if they refuse to assist law enforcement with accessing users’ data. This could apply not only to Australian telecommunications companies like Telstra and Optus, but also to huge, internationally-based tech companies like Facebook and Apple.

If companies have the ability to decrypt their users’ data and hold their private encryption keys, those companies could be forced to provide confidential communications anytime the government deems access necessary. Angus Taylor, the minister responsible for law enforcement and cyber security, has claimed there will be no requirements for companies to build “backdoors” into their products for law enforcement, but the alternative to undermining encryption itself is to target physical devices.

This is one of the fears of Nathan White, Senior Legislative Manager at Access Now. He is concerned that rather than compelling WhatsApp or Gmail to provide access to encrypted content, the legislation will force device manufacturers to push targeted malware to the devices of people who are the subject of investigations.

Regular software updates are critical to security and privacy because they often fix vulnerabilities and introduce new protections. Laws that could force a company like Apple to target a user’s device with malware would eradicate users' trust in software updates from device makers. The government could hypothetically demand that malware be sent to the devices of journalists, sources, or activists, and use confidential communications acquired through targeted malware to prosecute or investigate them.

Australian Attorney General George Brandis called encryption “potentially the greatest degradation of intelligence and law enforcement capability” in a lifetime. He has indicated that the new laws would be akin to the United Kingdom’s Investigatory Powers Act, and would grant the government the ability to force companies to comply with investigations.

It’s a chilling comparison to make. The Investigatory Powers Act is one of the world’s most Orwellian and sweeping surveillance laws, authorizing the blanket collection, monitoring, and retention of citizens’ communications and online activity.

Australia is also part of the powerful “Five Eyes” intelligence alliance, which includes the United Kingdom, United States, New Zealand, and Canada. The adoption of laws that use broad “terrorism” claims to justify weakening encryption or targeting devices could not only open the door to similar legislation in other countries but also normalize international sharing of decrypted sensitive data. (Australia is also hosting a Five Eyes meeting in August, where these legislative efforts could be discussed.)

It’s unclear what this legislation will look like or when it will be introduced, but when it is, the government’s efforts will be met with widespread opposition. Any law that threatens software updates or encryption would threaten the privacy of everyone in Australia and set a disturbing precedent for governments and intelligence agencies around the world.

IDG Contributor Network: DAM if you do and DAM if you don’t

Digital Asset Management, or DAM, is traditionally associated with rich media and the companies that employ that type of content, such as media and entertainment. It is big business, too: the market for DAM is expected to be worth $9.1 billion by 2024. Much of this is driven by the increasing importance of content marketing, with digital content offering a very good ROI, according to Smart Insights.

I’ve always felt that marketing is a discipline that can inform the world of digital identity. It is engaged with customers, it has a good grasp on user behavior, and it utilizes statistics and reporting to optimize systems. So, where does DAM fit into a consumer identity platform and how can digital content add benefit?


Is the new California privacy law a domestic GDPR?

The difference between data privacy protections afforded to European Union residents and people in the U.S. is more sharply highlighted now that the EU’s General Data Protection Regulation has taken effect. Will passage of a new California privacy law make a difference?

At first glance, it may seem California state legislators took a bold first step when they quickly passed a comprehensive data privacy protection law last month known as the California Consumer Privacy Act of 2018.

Like the GDPR, this new legislation spells out rights for the protection of the privacy of California consumers. From the text of the new law, these rights include:

(1) The right of Californians to know what personal information is being collected about them.

(2) The right of Californians to know whether their personal information is sold or disclosed and to whom.

(3) The right of Californians to say no to the sale of personal information.

(4) The right of Californians to access their personal information.

(5) The right of Californians to equal service and price, even if they exercise their privacy rights.

While the intent of the new California privacy law and the GDPR are the same — protecting consumer privacy — the most important differences between the two laws lie in the process. Whereas the GDPR was the product of years of careful preparation and collaboration between bureaucrats, privacy experts, politicians and technology practitioners, the California privacy law was mashed together in less than a week, according to the Washington Post, in order to forestall more stringent privacy protections from being passed via a ballot initiative that had broad support in California.

The bipartisan rush to enact the new California privacy law (passed unanimously) has everything to do with control, and little to do with the will of the people. Legislation passed by the electorate through a ballot initiative is much more difficult for legislators to tinker with: any changes require a two-thirds majority, while laws passed the usual way by the legislature can be more easily modified with a simple majority.

Another superficial similarity between the GDPR and the California Consumer Privacy Act is that enforcement of the new law is set to begin (almost) two years from the date of passage. For the GDPR, enforcement began May 25, 2018; the California privacy law goes into effect on Jan. 1, 2020. Companies facing the requirement to comply with the GDPR were given a two-year window by the EU lawmakers to get ready, but the conventional wisdom around the California privacy law is that the next year and a half will be used by big tech companies and legislators to negotiate the precise terms of the law.

There are many other differences, but companies aiming to comply with the California privacy law should note that the terms of the law as currently written could be softened considerably before enforcement begins.

And some of the differences are worth noting. First, most businesses are likely not to be affected at all, as only businesses meeting one or more of the following conditions are subject to the law:

  • annual gross revenues in excess of $25 million;
  • processing the personal information of 50,000 or more consumers, households, or devices; or
  • deriving at least 50% of annual revenue from the sale of personal information.
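As a quick sketch, the applicability test above reduces to a simple boolean check. This is a rough paraphrase of the statutory thresholds with hypothetical parameter names, not a legal test:

```python
def ccpa_applies(annual_revenue_usd: float,
                 records_processed: int,
                 data_sale_revenue_share: float) -> bool:
    """Rough paraphrase of the CCPA thresholds: a business is
    covered if it meets one or more of the three conditions."""
    return (annual_revenue_usd > 25_000_000
            or records_processed >= 50_000
            or data_sale_revenue_share >= 0.50)

# A small shop below every threshold is not covered:
print(ccpa_applies(2_000_000, 10_000, 0.0))    # False
# A tiny data broker earning most revenue from selling data is:
print(ccpa_applies(1_000_000, 1_000, 0.90))    # True
```

Note that the conditions are disjunctive: crossing any single threshold is enough to bring a business under the law.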

As for penalties, companies subject to the regulation face fines as high as $7,500 for each violation, to be levied through a civil action “brought in the name of the people of the State of California by the Attorney General,” the law reads — but that requires the finding that the offending entity violated the law “intentionally.”

Is the California privacy law comparable to the GDPR? We don’t know, and we probably won’t know for at least a year — and perhaps not until after Jan. 1, 2020, when the new law goes into effect. If the law, as written, is applied to a company like Equifax, which exposed roughly half the adult U.S. population in the breach uncovered last year, then the results could be devastating. The share of Californians exposed in that breach can be estimated at about 12 million; if the Equifax breach was found to have been caused intentionally, the maximum fine would be close to $100 billion.

That’s far higher than the GDPR maximum penalty of 4% of annual global turnover; Equifax’s 2017 global revenue was only $3.36 billion, meaning the maximum GDPR fine would be about $135 million.
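The back-of-the-envelope comparison in the two paragraphs above can be checked directly, using the article’s own estimates (12 million affected Californians, $3.36 billion in 2017 revenue):

```python
# CCPA worst case: $7,500 per intentional violation, one violation
# per affected Californian (the article's estimate).
ccpa_max_fine = 7_500 * 12_000_000        # $90 billion

# GDPR worst case: 4% of annual global turnover.
gdpr_max_fine = 0.04 * 3_360_000_000      # $134.4 million

print(f"CCPA worst case: ${ccpa_max_fine / 1e9:.0f} billion")
print(f"GDPR worst case: ${gdpr_max_fine / 1e6:.1f} million")
```

The two worst cases differ by nearly three orders of magnitude, which is the crux of the comparison.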

However, GDPR penalties don’t require a finding of intent to break the law on the part of the offending entity, and many smaller companies subject to GDPR — those with annual gross revenues lower than $25 million, processing personal data of fewer than 50,000 consumers, households or devices, and which make less than 50% of their revenue from the sale of that data — would be immune from any penalties under the new California privacy law.

The bottom line: unlike in 2016, when the final form of the GDPR was approved and companies were granted a two-year period to prepare to comply with the new privacy regulation, the new California privacy law is coming — but it’s still an open question just how effective or useful it will be for protecting consumer privacy.

The post Is the new California privacy law a domestic GDPR? appeared first on Security Bytes.

Never patch another system again

Over the years I have been asked a curious question numerous times: “If we use product X or solution Y we wouldn’t have to patch anymore, right?” At this point in the conversation I would often sit back in my seat and try to look like I was giving the question a lot of thought. The reality was more pragmatic: I was trying very hard to stifle my screams while appearing considerate of the query.

Let’s be honest with ourselves. No one likes to apply patches. If the opposite were true, I have little doubt that we would have far fewer data breaches than we read about in the news these days. I’m sure there is a mythical unicorn out there who simply lives for this sort of activity. I will be entirely honest when I say that I have never met this person.

Applying patches is a very necessary activity. So why is it that we continually have to return to this discussion point? Time and again we read in the press about companies that were compromised because of a missed patch or a configuration error. One of the things I do a fair bit of is read the data breach notices that companies issue, and some trends are inescapable: a piece of software wasn’t patched to current; there was a configuration error; or a laptop was stolen but, have no fear, it had a password.

Two of the aforementioned were easily preventable situations and the third…well, I’ll just leave that one alone for this post.

Let’s just dispense with the nonsense. There is no product on this little blue marble we call home that will ever give you 100% security. It just isn’t going to happen. Full stop. There are so many moving parts in the modern IT ecosystem that we have to take into account. And there is a real problem that we drift farther from each and every day: we are failing to tackle the fundamentals well, and as a result the security of our digital supply chain is suffering.

I often get teased by some friends for using the phrase “defined repeatable process”. The idea is absolutely nothing new; the term has been floating around for a long while now, but we seem incapable of implementing such processes. Why is that? When we drift away from doing things well, such as patching, we inadvertently increase our technical security debt. As this chasm continues to widen, there will come a point after which most organizations will not be able to pivot to the safety of higher ground.

So as I knock this idea around in my head I continue to wonder what it is that we can do to improve things from a repeatable process standpoint.

Go ahead and put up your feet on your desk basking in the glow of knowledge that some vendor is going to solve all of your security issues. Never patch another system again and we shall gleefully dance around the smoldering crater that was once your enterprise network after the hordes of attackers are done savaging it.

An apple a day keeps the doctor away and all that sort of rot.

Originally posted on CSO Online by me.

The post Never patch another system again appeared first on Liquidmatrix Security Digest.

InfoSec Recruiting – Is the Industry Creating its own Drought?

The InfoSec industry has a crippling skills shortage, or so we’re told. There’s a constant stream of articles, keynotes, research and initiatives all telling us of the difficulty companies have in finding new talent. I’ve been in the industry for over 30 years now and through my role as one of the directors of Security BSides London, I often help companies who are struggling to grow their teams. More recently, my own circumstances have led me to once again join the infosec candidate pool and go through the job hunt and interview process.

I have been in the position of hiring resources in the past and understand that it is not easy and takes time. But having sat through a few interviews of my own now, I am beginning to wonder if we have not brought this situation upon ourselves. Are the expectations of recruiters out of proportion? Are they expecting to uncover a hidden gem that ticks every single box? Is it really true that the infosec talent pool is running empty, or is it that the hiring process in the industry is creating its own drought?

Part of this situation may be coming from the way hiring managers are questioning candidates. There is no perfect questioning methodology, but today, focusing purely on technical questions cannot be a good solution because – LMGTFY – even fairly lazy candidates can study and prepare for any technical questions beforehand. It might seem obvious that a hiring manager needs to look at a wider scope, evaluating the candidate’s ability to learn, adapt, and demonstrate their analytic or creative capabilities, but this is the part that seems to be missed.

I’ve found that candidate questioning within some organisations has become vague and far too open-ended to provide a meaningful evaluation. For example, I recently was asked – and know of a few other people who were asked – the open-ended question: “What happens when you use a browser?”. I won’t go into the pros and cons of this specific question in the hiring process as it is quite well covered in the post: The “What happens when you use a browser?” Question from Scott J Roberts.

This type of question can be answered in so many ways, from a high-level overview to the nuts and bolts. And when the candidate was building networks before the Internet was really a thing, and was probably already working before the hiring manager was born, they’re unlikely to simply guess which response they should give. Engaging and discussing the situation with the candidate resembles the normal process of working through and analysing a situation to achieve a goal.

Now that I have experienced this type of questioning first-hand, I’ve been dumbfounded as to the total lack of interactivity from the hiring manager across the table. My natural reaction to an unclear, vague or unspecific question is to question it; discuss and clarify to identify a common ground and answer in a more appropriate way. The problem lies in that the hiring manager typically won’t engage at all, simply stating that it is an “open-ended question and to answer how I feel best”. How can this be a constructive way to gauge a candidate’s abilities?

I’ve always taught and been taught that asking questions is a good thing because it demonstrates logical and analytical thinking and shows that you are trying to better understand the situation and audience and react with the most appropriate response. If a hiring manager simply pursues a vague line of questioning they’ll only ever be able to evaluate a candidate by taking a subjective decision. I’ve even heard reports that hiring managers have rejected a candidate on the basis that they felt the person would outshine them.

In people management, one of the rules that you learn is that you need to evaluate performance based on attainable and measurable indicators. I propose this needs to be the same for the hiring process so that the hiring manager can make a meaningful decision.

Ultimately, interviewing a candidate on the principles of discussion, exchange and analytic capability will help the hiring manager identify the right person. It’s important to assess whether the person has a good foundational skill set that allows them to analyse and understand the work that needs to be performed. A good candidate needs not only the technical competencies but also the softer skills that help them adapt, learn and acquire the broader capabilities needed to successfully integrate into a team. Onboarding and probationary periods are there to allow a team to conduct a final check of the candidate’s technical and soft skills.

So what needs to change? I believe hiring managers need to ask themselves whether searching for that golden needle in the haystack is the most effective way to identify and recruit talent. By reframing the interview process as a constructive discussion rather than a vague yet rigid Q&A, companies will get a better view of how a candidate might actually work on the ground. And by adapting questions to the level of experience in front of them, they are likely to see much more potential in every candidate they engage with. Sure, the infosec talent pool might not be overflowing, but maybe our skills shortage isn’t quite as terrible as we think.

The post InfoSec Recruiting – Is the Industry Creating its own Drought? appeared first on Liquidmatrix Security Digest.

Welcoming the Conversational Economy | Securing the Voice Channel

In our latest study, 500 IT and business leaders across the US, France, Germany, and the UK were surveyed; 85 percent of businesses expect to use voice technology in the next year despite significant security fears. In this emerging “Conversational Economy,” 28 percent of businesses already communicate with customers via voice technology, including Microsoft’s Cortana and Amazon’s Alexa voice assistants. Voice is coming to dominate other interfaces, and enterprises are moving quickly to adapt.

The number of businesses using voice to interact with their customers is set to triple in the next five years, with over two-thirds planning to use voice for the majority of interactions and nearly one-fourth planning to use it for all interactions. Respondents also reflected not only a growing trust in voice technology’s capabilities, but also ambitious enterprise voice plans.

Enterprises are taking a considered approach when examining and deciding which assistant would fit into their business processes – even though Amazon’s Alexa has ruled headlines.

Though businesses seem to be welcoming voice technology with open arms, there are also high levels of concern (80 percent) regarding the ability for businesses to keep the data acquired through voice-based technology safe. The concern for securing voice as an interface is one of the main factors determining how quickly the conversational economy will grow.

Vijay Balasubramaniyan, CEO and Co-founder, Pindrop, said: “People’s security, identity, and therefore the wider Conversational Economy, is at stake as its use increases. If businesses intend to use voice technology for the majority of customer interactions in the near-future they need to make sure that this method of interaction is as secure as any other.”

Learn more here.

The post Welcoming the Conversational Economy | Securing the Voice Channel appeared first on Pindrop.

How to Get Paid for Proposals

Proposals are one of the most expensive things you will spend your time on in a small business (or a large business, for that matter). You not only spend tons of time discovering and understanding what the client needs, but you also spend countless hours (often late at night) putting the proposal together, polishing it, tweaking the numbers and creating a whiz-bang presentation to accompany the proposal.

All of that for free, and often for nothing.

I’m very much against charging by the hour, but in this case calculating your effective hourly rate is a good exercise:

Let’s say that you recently landed a project and you’re going to make $10,000 from it. You’re going to spend 50 hours delivering the project, so you’re earning $200 per hour (this is your billable rate). Easy calculation. But when you figure in the time you spent putting the proposal together – let’s say another 20 hours – you’re only generating around $142 per hour, roughly a 29% drop in your effective hourly rate. Add in the other non-billable time you spent with the client and you’re easily pushing your effective hourly rate – for that project – down below 50% of your billable rate.
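A minimal sketch of that arithmetic, using the hypothetical numbers from the example above:

```python
fee = 10_000              # project fee, dollars
delivery_hours = 50       # billable delivery time
proposal_hours = 20       # unbilled proposal time

billable_rate = fee / delivery_hours                       # $200/hour
effective_rate = fee / (delivery_hours + proposal_hours)   # ~$142.86/hour
drop = 1 - effective_rate / billable_rate                  # ~28.6% drop

print(f"billable: ${billable_rate:.0f}/h, "
      f"effective: ${effective_rate:.2f}/h ({drop:.0%} drop)")
```

Every unbilled hour shows up directly in the denominator, which is why proposal time erodes the effective rate so quickly.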

Pushing your effective hourly rate down is of course not the only bad thing that happens.

It breaks my heart

You put your heart and soul into understanding what the client really needs, give them the benefit of your experience to make sure they don’t fall into traps and put a lot of thought and effort into how you can help them solve their business problem. You’re invested – both in time and in emotional energy.

So when they turn you down, there’s a double whammy. You’ve just done a lot of work for nothing and you’ve just had your emotional investment kicked in the face (or that’s how it feels, at least at first). That hurts – especially when you’re new to the game. Over time you learn that opportunities come and go and you get less emotionally invested, but each time a proposal doesn’t hit the mark you take an emotional hit.

But what if there’s a better way? What if you could actually get paid for your proposals? And have your client like it that way?

There is a way to do this, and it starts with understanding the value of the proposal.

Proposals are valuable

By the time a client asks you to put together a proposal, you’ve already been dancing for a while. You’ve had some initial meetings, a couple of discovery sessions and they like what they see.

Now they ask you to do a proposal, and you’re going to have to spend more time with them. You need to make sure you understand exactly what they need, how much you can get done within their budget, what takes priority and where the skeletons are. You’re going to apply your expertise to dig into details, find out what else needs fixing and so on…the point is you’re going to spend more time with them.

Then you head off to your cave, put together the proposal and present it to them. And they say thanks, great work, we’ll get back to you. So far so good.

How much value did your potential client get from this proposal development process? The answer is: a lot.

They’ve just had an expert analyse their problem, dig into the details and tell them what they need to do to solve the problem. They now understand their problem a lot better and know what needs to be done to fix it (even if they don’t have the expertise to do it themselves). And of course you may not be the only one submitting a proposal, so the client has received a lot of valuable advice – from multiple experts.

And you gave it to them for free.


Think about it this way: when you go to a doctor with a complaint, they will diagnose you, maybe run some tests, make some recommendations and perhaps prescribe some medicine. Then they’re going to ask you to come in for an extended treatment or checkup to see if things have improved. And you’re happy to pay for this initial consultation.

When you develop a proposal for a client, you’re effectively doing what a doctor does in an initial consultation. You’re listening to the “patient”, running some tests to find out if there’s a deeper cause for the problem, and applying your expertise to recommend a way to get rid of the problem.

You’ve provided a lot of value, but you’re willing to give it away for free because that’s the way your industry usually works. Doctors don’t work like this; they charge for the “proposal” phase of their work with you.

The first key in moving from free to paid proposals is to understand that your proposal is tremendously valuable to your client.

But you need to present it to them as something valuable; and you need to deliver that value. The way to do that is to provide a roadmap.

The differences between a proposal and a roadmap

A proposal is usually a document that defines a scope of work, the number of hours required to do it and a price. If you’ve been at this for a while you will know that you need to base the proposal on the client’s ROI (Return On Investment) – what they get in return for their investment in your services.

A roadmap is also a document, but in this case the document clearly spells out what the client will need to do (or get done) first, second and so on. A roadmap sometimes includes a timeline to help the client understand how long the whole process could take. Again, justifying the business case is critical to help the client make the right decision.

A roadmap is the output of one or more roadmap sessions. A roadmap session is like a discovery session, but includes co-development of the roadmap.

If you’re familiar with project planning, you will already have noticed that a roadmap is a high-level project plan.

But there are more differences between a proposal and a roadmap:


When you follow the proposal route of getting work, your engagement with the client looks something like this:

  • initial meeting to see if there’s a fit (make sure you can help them);
  • a series of meetings to discover what they really need;
  • crafting the proposal;
  • (if you’re experienced) working with the client on the draft proposal to make sure you’re hitting the mark;
  • presenting the final proposal to them; and
  • hoping for the best.

When you use the roadmap route, the engagement looks a little different:

  • initial meeting to see if there’s a fit (can you help them);
  • present the roadmap option (standard for each client); and
  • hope for the best.


Your client can do only one thing with a proposal: say yes or no (or haggle a bit). A roadmap is something they can use; on their own, with you or with someone else:

  • A proposal effectively says “here’s the stuff I will do for you”. A smart proposal says “here’s how I will solve your problem and here’s the ROI”.
  • A roadmap says “here’s where you need to get to, here’s the road you need to follow and here are the stops along the way. You can use this roadmap on your own, with me or with someone else.”


Saying yes to a proposal is a big step, because it usually requires the client to make a big financial investment. The risk for the client is high and their objections will reflect that.

Saying yes to a roadmap exercise is a much smaller commitment. My roadmap sessions typically run for half a day (usually with a couple of hours before and after) and therefore cost a lot less. Much easier for the client to say yes to this much smaller investment.

A difference in how they perceive your expertise

When you present a roadmap option you are clearly placing yourself in charge of the situation. You know exactly how you’re going to go about building the roadmap, you have a defined process and the confidence to present this as the right option for the client. (This is why the client is hiring you in the first place: you are the expert, you know how this should be done and you know exactly how to go about doing it.)

When you present a proposal, you are to some extent asking the client to approve not just the expenditure, but also to make a judgment on whether this is the right thing to do. You’ve given up some control of your expertise.

A difference in the amount of time involved

The proposal route is a big investment (in time) for you and for your client. It is not uncommon to spend tens or even hundreds of hours on discovery meetings, user requirements analysis and proposal polishing for a large contract. A roadmap approach, on the other hand, is a lot smaller investment for you and for your client. You’ve spent maybe two or three hours with the client and then it’s up to them to decide.

(There are more differences, for example the idea that a roadmap is a collaborative exercise versus a proposal which is something you give to the client, but I think you get the point.)


None of my roadmaps contain pricing. The whole idea is that the client can use the roadmap now, later, on their own, with me or with someone else – so I don’t want them to confuse the roadmap with a proposal. Where appropriate, I will send a proposal for some or all of the work in the roadmap; the proposal can be very short because the heavy lifting has already been done in the roadmap.

So how do you move from (free) proposals to (paid) roadmaps?

To get a client to pay for a roadmap, you have to deliver value. That value comes from three places:

  • the roadmap itself: the output of the roadmap session(s) – a tool the client can use;
  • the process you will use to create the roadmap: this is where your expertise has to shine; you must know exactly how you’re going to go about creating the roadmap; what happens before, during and after the roadmap session(s); what the output will look like, and how you’re going to get the client to co-develop the roadmap;
  • your confidence: you have to be confident that this is the right thing to do, the right way to do it and that it delivers substantial value to your client.

This is not an easy road by any means, but there is a way to build up to it:

  • Start by taking proposals you’ve done in the past and turning them into roadmaps. Can you make them look like high-level project plans? Can the work be clearly grouped into relatively small chunks where each chunk builds on the previous one? Is there value from each chunk of work?
  • Define your process for creating a roadmap. Before you head into a roadmap session, there’s likely some pre-work that you need to do, for example running an analysis on their website (if that’s part of the problem) or doing an analysis of their business using something like the Tornado Method. Then define what the output would typically look like, and what you need to do during a roadmap session to get there. Then define what happens after the session. Turn it all into a collection of checklists.
  • Trial and refine your process. Find a friend or a willing client to be your first roadmap client. Follow your process and make sure you make notes of what’s working and what needs to be improved. Refine your process and repeat the exercise. Each time you do it you will gain more confidence.

Remember that a roadmap is a short, low-cost exercise and therefore relatively easy to sell to potential clients. You have to stress that the exercise delivers a roadmap that they can then use themselves, with you or with someone else; and you will follow up with a proposal if and when they’re ready for it.

A roadmap gives your client clarity on their problem and what they need to do to solve it. They may not have the expertise to do it themselves (that’s where you will eventually earn your keep), but just the process of building the roadmap provides them with peace of mind and builds trust that you can solve the problem for them.

Finally, a roadmap educates your client. They will understand that there is a well-defined process for solving the problem, the sequence in which the work needs to be done and what they get out of each part. An educated client is collaborative, engaged and enthusiastic; your expertise just helps them solve a problem.

What you can do now

It took me about two years to move from free proposals to paid roadmaps. You can get there a lot faster, because you can tap into articles like this and a growing awareness amongst professionals that proposals themselves are highly valuable.

I will be releasing a step-by-step guide on how to move from free proposals to paid roadmaps in the near future. To get notified when this is released, sign up for my newsletter here – you will get access to more articles like this, I promise I won’t spam you and you can unsubscribe at any time.

And if you have questions or comments, please drop me a note!


Your IoT security concerns are stupid

Lots of government people are focused on IoT security, such as this bill or this recent effort. They are usually wrong. It's a typical cybersecurity policy effort which knows the answer without paying attention to the question. Government efforts focus on vulns and patching, ignoring more important issues.

Patching has little to do with IoT security. For one thing, consumers will not patch vulns, because unlike your phone/laptop computer which is all "in your face", IoT devices, once installed, are quickly forgotten. For another thing, the average lifespan of a device on your network is at least twice the duration of support from the vendor making patches available.

Naive solutions to the manual patching problem, like forcing autoupdates from vendors, increase rather than decrease the danger. Manual patches that don’t get applied cause a small but manageable, constant hacking problem. Automatic patching causes rarer but more catastrophic events, when hackers hack the vendor and push out a bad patch. People are afraid of Mirai, a comparatively minor event that led to a quick cleansing of vulnerable devices from the Internet. They should be more afraid of notPetya, the most catastrophic event yet on the Internet, which was launched by subverting an automated patch of accounting software.

Vulns aren't even the problem. Mirai didn't happen because of accidental bugs, but because of conscious design decisions. Security cameras have unique requirements of being exposed to the Internet and needing a remote factory reset, leading to the worm. While notPetya did exploit a Microsoft vuln, it's primary vector of spreading (after the subverted update) was via misconfigured Windows networking, not that vuln. In other words, while Mirai and notPetya are the most important events people cite supporting their vuln/patching policy, neither was really about vuln/patching.

Such technical analyses of events like Mirai and notPetya are ignored. Policymakers cherrypick only the superficial conclusions that support their goals, and assiduously ignore in-depth analysis because it inevitably fails to support their positions, or directly contradicts them.

IoT security is going to be solved regardless of what government does. All this policy talk is premised on things being static unless government takes action. This is wrong. Government is still waffling on its response to Mirai, but the market quickly adapted. Those off-brand, poorly engineered security cameras you buy for $19, shipped directly from Shenzhen, now look very different, having less Internet exposure, than the ones used in Mirai. Major Internet sites like Twitter now use multiple DNS providers so that a DDoS attack on one won't take down their services.

In addition, technology is fundamentally changing. Mirai attacked IPv4 addresses outside the firewall. The 100 billion IoT devices going on the network in the next decade will not work this way, cannot work this way, because there are only 4 billion IPv4 addresses. Instead, they'll be behind NATs or accessed via IPv6, both of which prevent Mirai-style worms from functioning. Your fridge and toaster won't connect via your home WiFi anyway, but via a 5G chip unrelated to your home.
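The address arithmetic behind that claim is easy to check. A quick back-of-envelope sketch in Python (the 100-billion figure is the article's projection, not a measurement):

```python
# IPv4 has a 32-bit address space: about 4.3 billion addresses total,
# and fewer are actually usable once reserved ranges are removed.
ipv4_total = 2 ** 32

# The article's projection of IoT devices over the next decade.
projected_iot_devices = 100_000_000_000

# Even handing every IPv4 address to an IoT device leaves most devices
# with no public address -- they must sit behind NAT or use IPv6.
shortfall = projected_iot_devices - ipv4_total
print(ipv4_total)                    # 4294967296
print(shortfall > 20 * ipv4_total)   # True: the gap alone is over 20x all of IPv4

# IPv6 is 128-bit, so the projected devices occupy a vanishing fraction of it.
ipv6_total = 2 ** 128
print(projected_iot_devices / ipv6_total < 1e-25)  # True
```

Either way, the flat, globally scannable IPv4 space that Mirai swept through won't be where these devices live.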

Lastly, focusing on the vendor is a tired government cliche. Chronic Internet security problems that go unsolved year after year, decade after decade, come from users failing, not vendors. Vendors quickly adapt; users don't. The most important solutions to today's IoT insecurities are to firewall and microsegment networks, something wholly within the control of users, even home users. Yet government policymakers won't consider the most important solutions, because their goal is less cybersecurity itself and more how cybersecurity can further their political interests.
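To make the microsegmentation point concrete, here is a toy policy check in Python, using the standard-library ipaddress module. The subnet values and the allow_traffic helper are hypothetical illustrations, not any particular firewall product: the idea is simply that IoT devices get their own subnet, and traffic from that subnet into the trusted LAN is dropped.

```python
import ipaddress

# Hypothetical segmented home network: a trusted LAN and an isolated IoT subnet.
TRUSTED_LAN = ipaddress.ip_network("192.168.1.0/24")
IOT_SEGMENT = ipaddress.ip_network("192.168.50.0/24")

def allow_traffic(src: str, dst: str) -> bool:
    """Toy firewall rule: drop anything the IoT segment initiates toward the trusted LAN."""
    src_ip = ipaddress.ip_address(src)
    dst_ip = ipaddress.ip_address(dst)
    if src_ip in IOT_SEGMENT and dst_ip in TRUSTED_LAN:
        return False  # a compromised camera can't reach your laptop
    return True

print(allow_traffic("192.168.50.12", "192.168.1.5"))   # False: camera -> laptop blocked
print(allow_traffic("192.168.1.5", "192.168.50.12"))   # True: laptop can still manage the camera
```

The same one-way rule is what a consumer router's "guest network" or a VLAN-capable switch enforces in practice, and it requires nothing from the device vendor.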

The best government policy for IoT is to do nothing, or at least to focus on more relevant solutions than patching vulns. The ideas proposed above will add costs to devices while providing insignificant benefits to security. Yes, we will have IoT security issues in the future, but they will be new and interesting ones, requiring different solutions than the ones proposed.

Identity eats security: How identity management is driving security

Protecting data and assets starts with the ability to identify with an acceptable level of certainty the people and devices requesting access to systems. Traditionally, identity has been established using a “secret handshake” (user ID and password) that gets the person or device through a gateway with access to permitted systems. Once through, few safeguards are in place to further confirm identity.
