A survey published in Forbes found that 56% of CEOs have seen digital initiatives in their companies contribute to revenue growth. Efficient, error-free digital services make this possible. On the customer front, the GUI offered by web or standalone applications feels smooth only because of the rounds of testing and revision that led up to the release of the final product.
Testing is an important part of any software’s life cycle. It is complicated and involves a lot of effort. Thankfully, in recent years, the testing process has seen faster turnaround times due to test automation practices. Automation testing has helped companies improve their releases and also cut down on costs.
Test automation has grown rapidly alongside the boom in internet technologies, and providers such as Oxagile offer dedicated services in this category. Done correctly, automation testing can significantly reduce costs and deliver maximum value. Following the guidelines below can help a company achieve this.
1) Plan before Executing
Businesses should establish a clear set of goals and plans before proceeding with automation testing. The main objective of test automation is to identify defects by making use of automation solutions. To do this efficiently, a well-formulated execution plan should be drafted that covers the entire scope of testing. This will help the company select the right tool and engage an efficient team. Companies can then smoothly run the different stages of testing: unit tests, API tests, and GUI tests.
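The unit-test stage mentioned above can be sketched with Python's built-in unittest module. Here, apply_discount is a hypothetical function standing in for any small piece of business logic; the point is the shape of an automated unit test, not the function itself.

```python
import unittest

# Hypothetical function under test -- stands in for any small unit of business logic.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        # Automated tests should cover failure paths, not just happy paths.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Tests like these run in milliseconds on every build, which is what makes the cost savings of automation possible at the unit level.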
2) Hire an Automation Engineer
If your business already has one, great; if not, you should hire one. Many companies believe that simply migrating their manual processes to an automated architecture will deliver the necessary results. It will not. Automation testing is a continuous process that needs a dedicated manager and specialized resources to keep operations running. Without a dedicated expert who keeps up with the latest test automation practices, the process may not yield the expected results.
3) Select the right tools
With several automation tools available on the market, it can be difficult for businesses to identify the right tool for their testing needs. Not every automation tool will meet the requirements of a particular business, which is why it is important to select one that matches those needs.
Companies should carefully evaluate features such as the testing methods available, flexibility in updating test cases, and manageability when making a decision. The tool selected should also be consistent with the resources of the enterprise; otherwise, you may spend a lot of time and effort training your team on a new scripting language. Selecting a tool that fits the testing requirements will allow businesses to derive maximum benefit from the test automation process.
4) Efficient GUI automation practices
GUI testing is the last and most important stage of testing. This is because it is at the UI level where the customer interacts with the software or application under development. It is also the most difficult stage of automation testing. A combination of automation testing and manual testing is recommended at the GUI level. This is because there is a possibility of exceptions being present to the test rules laid down by the development team at this stage.
For example, if your team is testing the installation process of an application, the system might crash partway through the installation. This may be caused by a fault in the application's script or a malfunction of the system on which it is being installed. In this scenario, manual testing can help keep the overall testing process smooth and expeditious. This is also highlighted in a survey in which only 5% of respondents reported fully automated testing, while 66% reported a roughly 75:25 split between manual and automated testing.
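The automated-plus-manual split described above can be sketched as a small test harness. This is an illustrative toy, not a real GUI framework: install_app is a stand-in for an installer step, and the harness routes unexpected exceptions (like a mid-install crash) to a manual-review queue rather than treating them as ordinary failures.

```python
# Toy harness illustrating the combination of automated and manual GUI testing.
# `install_app` is a hypothetical stand-in for a real installation step.

def install_app(simulate_crash: bool = False) -> str:
    if simulate_crash:
        raise RuntimeError("installer crashed mid-way")
    return "installed"

def run_gui_check(name, check, manual_queue):
    """Run an automated check; on an unexpected exception, defer to a human tester."""
    try:
        ok = check()
        return "PASS" if ok else "FAIL"   # FAIL: a defect the scripted rule caught
    except Exception as exc:
        # An exception outside the scripted rules: flag for manual follow-up.
        manual_queue.append((name, str(exc)))
        return "NEEDS-MANUAL-REVIEW"

manual_queue = []
result_ok = run_gui_check("install-happy-path",
                          lambda: install_app() == "installed", manual_queue)
result_crash = run_gui_check("install-crash",
                             lambda: install_app(simulate_crash=True), manual_queue)
print(result_ok, result_crash, manual_queue)
```

The design choice here mirrors the article's point: scripted rules handle the expected cases, while genuine surprises are escalated to a person instead of silently passing or failing.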
5) Periodic Evaluation
DevOps (development and operations) is an expensive, effort-intensive practice, and the complexity of interlinking the two makes it difficult to identify faults and implement the necessary changes. An appropriate plan should be laid out for periodic appraisal of ongoing test automation practices. This becomes all the more crucial if a company has recently switched to automation.
Teams should be instructed to design and evaluate the tests at different stages. This will ensure a more holistic approach is followed to test automation and the company’s resources are put to efficient use.
Conclusion
It is important to note that test automation is not a replacement for manual testing; rather, it expedites the whole testing process. According to Software Testing News, 94% of respondents view test automation as a way to support their testing efforts. Making the testing process efficient and value-driven requires a well-executed plan and the services of an experienced agency or manager.
Do you run a B2B business with an active online presence? If so, then you must be concerned about your cybersecurity and data protection practices. Unless you do that, security breaches such as supply chain attacks, ransomware, man-in-the-middle attacks, and phishing attacks could ruin your market reputation. B2B businesses thrive on customer retention, and therefore endangering customer data by not investing in the right security measures could sabotage your business.
There are two things to watch out for when it comes to cybersecurity: on-premise security measures and in-transit security measures. For a minute, let us assume that you and your clients have all the on-premise essentials in place, including updated software, a firewall, antivirus, and so on.
In that case, your only concern should be the in-transit data. This can very well be taken care of with an SSL certificate. Now, if you are thinking of buying a cheap SSL certificate, then you probably don’t know much about this technology, so let’s begin with that.
What is an SSL Certificate?
If you are wondering what an SSL certificate is and whether it differs from a TLS certificate, no worries; we will cover everything you need to know about the two. A Secure Sockets Layer (SSL) certificate, sometimes called a Transport Layer Security (TLS) certificate, enables the technology that encrypts communication between the client and the server.
Netscape developed SSL back in 1995 to uphold data integrity and prevent unauthorized access. The SSL protocol itself has not been updated since SSL 3.0 in 1996; what we use today is TLS, its successor, which is why the two terms are used interchangeably. So whenever you see 'HTTPS' or a padlock in a site's URL bar, you can be sure it is secured with an SSL/TLS certificate.
How does an SSL Certificate work?
SSL certificates use public-private key cryptography to encrypt in-transit data. To get started, you install the desired type of SSL certificate on the web server that hosts your website. Installing a valid SSL certificate enables encryption between client and server; a self-signed certificate can technically do the same but is not recommended.
For an SSL certificate to be valid, it must be digitally signed by a Certificate Authority (CA) using the CA's private key. You can buy a cheap SSL certificate and install it in less than fifteen minutes, but only if you opt for a domain validated (DV) SSL certificate.
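This trust model, where only CA-signed certificates for the right hostname are accepted, is baked into most TLS client software. As a quick, runnable illustration, Python's standard ssl module enforces both checks by default:

```python
import ssl

# Python's default TLS context mirrors what browsers do: it refuses
# certificates that are not signed by a trusted Certificate Authority.
ctx = ssl.create_default_context()

# CERT_REQUIRED: the peer must present a CA-signed certificate.
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True
# Hostname checking guards against a valid certificate for the wrong domain.
print(ctx.check_hostname)                     # True

# A self-signed certificate fails this validation unless verification is
# explicitly disabled -- one reason self-signed certificates are discouraged
# for public-facing sites.
```

In other words, a self-signed certificate does encrypt traffic, but standard clients will reject it out of the box, which is why a CA-signed certificate matters for a customer-facing B2B site.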
As a B2B business, you probably make use of multiple subdomains and extensions. So you must consider a more advanced SSL certificate like the Wildcard SSL or the Organization Validated SSL. Although all types of SSL certificates use the same encryption protocol, they offer different types of validations.
Why should I install an SSL Certificate?
If you are still wondering whether you need an SSL certificate for your B2B business’s official website, then read on. Below listed are some of the core benefits that come with installing the right SSL certificate.
Secure Data Transmission
Transmission of customer data through the internet can be intercepted by cybercriminals who may then use it against your customers’ best interests. As the internet transmits communication through multiple computers or servers, there could be a vulnerability at some transmission point that a cybercriminal might exploit. An SSL certificate prevents this through the public-private key encryption, ensuring that the data remains accessible only to the intended recipient.
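The key idea behind that public-private key protection can be shown with a toy Diffie-Hellman exchange, one of the key-agreement techniques used in TLS. The numbers below are deliberately tiny and insecure; real TLS uses 2048-bit-plus parameters or elliptic curves, plus certificate-based authentication. The sketch only illustrates how two parties can derive a shared secret even though everything they transmit is visible to an eavesdropper.

```python
# Toy Diffie-Hellman key exchange with deliberately tiny, insecure numbers.

p, g = 23, 5                 # public modulus and generator (known to everyone)

a = 6                        # client's private key (never transmitted)
b = 15                       # server's private key (never transmitted)

A = pow(g, a, p)             # client sends A over the open network
B = pow(g, b, p)             # server sends B over the open network

client_secret = pow(B, a, p) # client combines server's public value with its own key
server_secret = pow(A, b, p) # server does the same with the client's public value

# Both sides now hold the same secret, yet an eavesdropper who saw
# only p, g, A, and B cannot feasibly compute it (at real key sizes).
print(client_secret == server_secret)  # True
```

Once both sides share a secret like this, the bulk of the session is encrypted with fast symmetric ciphers keyed from it, which is what keeps the data accessible only to the intended recipient.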
Compliance
As a business owner, you have likely come across the term 'HTTPS' and may be aware of its role in complying with various data privacy and cybersecurity regulations. For example, encrypting data in transit, which HTTPS provides, is required under PCI DSS and strongly supports GDPR compliance.
HTTPS is recommended because it is the secure version of its predecessor, the HTTP protocol. Unlike HTTP, HTTPS does not transmit data as plain text but encrypts it using cryptography. This prevents unauthorized interception of personally identifiable and sensitive data such as addresses, phone numbers, email IDs, passwords, and credit card details.
SEO Benefits
Every business strives hard to rank higher in Google’s search results, and one way of doing that is by installing an SSL certificate. Back in 2014, Google emphasized the significance of SSL and its impact on search engine rankings. So, having one installed on your website would give your business higher visibility and generate more organic traffic.
Join the HTTPS Everywhere Movement
Let us assume you did everything right and have a decent number of visitors coming to your website. Now your goal should be to establish yourself as a credible business and turn your visitors into customers. In 2020, this won’t be possible without installing an SSL certificate on your website.
That’s because Google Chrome, the browser with the largest market share, has now adopted the ‘HTTPS Everywhere’ approach. So, it flags websites that do not run on the HTTPS protocol by alerting the user of potential security threats. While that is something you can overcome with a basic domain validated SSL certificate, using a more advanced validation is recommended.
Declare your Legitimacy
B2B businesses such as digital marketers, SaaS product developers, and remote consultants who have little to no physical interaction with their clients must use advanced SSL certificates. We recommend the Organization Validated (OV) SSL certificate, which is slightly more expensive but comes with many benefits for such businesses. Before issuing an OV SSL certificate, the Certificate Authority performs a comprehensive validation of the business's existence. It therefore brings added credibility to B2B businesses and professionals that operate remotely.
Conclusion
We have discussed everything you need to know about SSL certificates as a B2B business owner. As you may have realized, a B2B business should avoid buying a cheap SSL certificate just to save a few bucks. Instead, B2B business owners should invest in one based on the level of validation they seek. It does not matter how big or small your B2B business is; as long as it demonstrates its credibility, it can thrive.
In an earlier article, we took a look at smartphone alternatives for free-ranging kids. Next up is the follow-on conversation: the time when you give them their first, fully functional smartphone, and how to manage having it in your lives.
For children, learning to use a first smartphone is just like learning to ride a bike. And that's just as true for you as it is for them.
When a child learns to ride a bike, they take it in steps and stages. Maybe they start tooling around on little kick-bikes, a tricycle, a scooter, or the like, just to get their feet under them, so to speak. Next it's that first bike with training wheels, and then the big day when they come off (complete with a few scrapes and bruises too). They're on two wheels, and a whole new world has opened up for them, one that you have to monitor and parent as you give them increasing freedom to roam, from the block, to the neighborhood, to your town, as they grow older and more responsible.
Your Child’s First Smartphone
Now, apply that same progression to the day your child finally gets their first smartphone. Plenty has led up to that moment: the times when they first tapped around your phone as a toddler, when as a preschooler they watched cartoons on a tablet, and maybe when they got a little older they had some other device, like a smartphone alternative designed just for kids.
Then along comes that first smartphone. And for parents it's a game-changer, because it opens up yet another new world to your child: the entire internet.
As you can see, your child doesn’t enter the world of smartphones entirely cold. They’ve already been on the internet and had the chance to experience selective slices of it under your supervision. But a smartphone—well, that’s another story entirely. A smartphone, out of the box, is a key to the broader internet. And just as you likely wouldn’t let your brand-new cyclist ride five miles to go and buy ice cream in town, there are plenty of places you wouldn’t let your new internet user go.
What follows here are a few words of advice that can ease your child into that new world, and ease you into it as well, so that you can all get the tremendous benefits of smartphone ownership with more confidence and care.
Start with the Basics: Smartphone Protection and Parental Controls
Whether you go with an Android device or iPhone, make sure you protect it. You can get mobile security for Android phones and mobile security for iPhones that’ll give you basic protection, like system scans, along with further protection that steers your child clear of suspicious websites and links. While I recommend protection for both types of phones, I strongly recommend it for Android phones given the differences in the way Apple and Android handle the code that runs their operating systems.
Apple is a “closed platform,” meaning that it does not release its source code to the public or to partners. Meanwhile, Android is “open-source” code, which makes it easier for people, hackers included, to modify it. So while Apple phones have historically been less prone to attacks than Android phones, any device you own is inherently a potential target, simply because it's connected to the internet. Protect it. (Also, for more on the differences between the security on Android phones and iPhones, check out this article from How-To Geek. It's worth the quick read.)
Next up on your list is to establish a set of parental controls for the smartphone. You’ll absolutely want these as well. After all, you won’t be able to look over their shoulder while they’re using their phone like you could when they were little. Think of it as the next line of protection you can provide as a parent. A good set of parental controls will allow you to:
• Monitor their activity on their phone—what they’re doing and how much they’re doing it.
• Limit their screen time—allowing you to restrict access during school hours or select times at home.
• Block apps and filter websites—a must for keeping your children away from distractions or inappropriate content.
The great thing about parental controls is that they're not set in stone. They give you the flexibility to parent as you need to parent, whether that's putting the phone in a temporary time out to encourage time away from the screen or expanding access to more apps and sites as they get older and show you that they're ready for the responsibility. Again, think about that first bike and the day you eventually allowed your child to ride beyond the block. They'll grow and become more independent on their phone too.
You need more than technology to keep kids safe on their smartphones.
Unlike those rotisserie ovens sold on late-night infomercials, a smartphone isn’t a “set it and forget it” proposition. Moreover, you won’t find the best monitoring, safety, and guidance software in an app store. That’s because it’s you.
As a parent, you already have a strong sense of what does and does not work for your household. Those rules, those expectations, need to make the jump from your household to your child's smartphone and your child's behavior on that smartphone. Obviously, there's no software for that. Here's the thing, though: they've established some of those behaviors already, simply by watching you. Over the years, your child has seen your behavior with the phone. And let's face it, none of us have been perfect here. We sneak a peek at our phones while waiting for the food to arrive at a restaurant, or crack open our phones right as we've opened our eyes at the start of the day.
So, for starters, establishing the rules you want your child to follow may mean making some fresh rules for yourself and the entire household. For example, you may establish that the dinner table is a phone-free zone or set a time in the evening when phones are away before bedtime. (On a side note, research shows that even dim light from a smartphone can impact a person’s sleep patterns and their health overall, so you’ll want to consider that for your kids—and yourself!)
Whatever the rules you set in place end up being, make them as part of a conversation. Children of smartphone age will benefit from knowing not only what the rules are but why they’re important. Aside from wanting them to be safe and well, part of the goal here is to prepare them for the online world. Understanding “the why” is vital to that.
“The (Internet) Talk”
And that leads us to “The Internet Talk.” In a recent McAfee blog on “What Security Means to Families,” we referred to the internet as a city, the biggest one there is. And if we think about letting our children head into town on their bikes, the following excerpt from that blog extends that idea to the internet:
For all its libraries, playgrounds, movie theaters, and shopping centers, there are dark alleys and derelict lots as well. Not to mention places that are simply age appropriate for some and not for others. Just as we give our children freer rein to explore their world on their own as they get older, the same holds true for the internet. There are some things we don’t want them to see and do.
There are multiple facets to “The Talk,” ranging anywhere from “stranger danger” to cyberbullying, and just general internet etiquette—not to mention the basics of keeping safe from things like malware, bad links, and scams. That’s a lot! Right? It sure is.
The challenge is this: while we've grown up with or grown into the internet over the course of our lives, today's children are among the first waves to be “born into” it. As parents, that means we're learning much, if not all, of what we know about digital parenting from scratch.
The good news is that you’re far from alone. Indeed, a good portion of our blog is dedicated entirely to family safety. And with that, I’ve pulled out a few select articles below that can give you some information and inspiration for when it’s time to have “The Internet Talk.”
And those are just a few for starters. We have plenty more, and a quick search will keep them coming. Meanwhile, know that once you have The Internet Talk, you should keep talking. Making sure your child is safe and happy on the internet is an ongoing process and an ongoing conversation, which we'll cover more in a moment.
Keeping tabs on their activity
One reason parents often cite for giving their child a smartphone is its location tracking, which lets them see where their children are ranging about with a quick glance. Whether you use such tracking features is a decision you'll have to make; if you do, consider your child's privacy. That's not to say that you're not in charge or that you shouldn't track your child. Rather, it's a reminder that your child is in fact getting older, and their sense of space and privacy is growing. So if you choose to monitor their location, let them know you're doing it. Be above board, with the intent that if you don't hide anything from them, they'll be less inclined to hide anything from you.
The same applies to parental controls software. Many packages will issue a report of app usage and time spent in each app, along with browsing habits. Go ahead and monitor those early on, then adjust them as feels right to you. Let your child know that you're doing it and why.
Another thing I’ve seen many of the parents I know do is share the credentials to any social media account their child sets up. Doing this openly lets your child take those first steps into social media (when you feel they’re ready) while giving you the opportunity to monitor, correct, and even cheer on certain behaviors you see. Granted, it’s not unusual for kids to work around this by setting up alternate accounts that they hide from their parents. With parental controls in place, you can mitigate some of that behavior, yet vigilance and openness on your part will be the greatest tool you have in that instance.
While you’re at it, go ahead and have conversations with your kid about what they’re doing online. Next time you’re in the car, ask what’s the latest app their friends are using. Take a peek at what games they’re playing. Download that game yourself, give it a try, and play it online with them if you can. This kind of engagement makes it normal to talk about the internet and what’s happening on it. Should the time come to discuss more serious topics or pressing matters (like a cyberbullying event, for instance), you have a conversational foundation already built.
The common denominator is you.
So, as we’ve discussed, technology is only part of the answer when managing that first smartphone in your child’s life. The other part is you. No solution works without your engagement, care, consistent application of rules, and clear expectations for behavior.
So, as you once looked on proudly as those training wheels came off your child’s first bike, you’ll want to consider doing the digital equivalent in those first months of that first smartphone. Keep your eyes and ears open as they use it. Have conversations about where their digital travels have taken them—the games they’re playing, the friends they’re chatting with. While you do, keep a sharp eye on their moods and feelings. Any changes could be a sign that you need to step in and catch them before they fall or pick them up right after they’ve fallen.
In all, your child’s first smartphone is a wonderful moment for any family, as it represents another big step in growing up. Celebrate it, have fun with it, and play your role in making sure your child gets the very best out of it.
Stay Updated
To stay updated on all things McAfee and for more resources on staying secure from home, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.
The Latest on the Uber Data Breach and Protecting Your Info
You may have spotted the news last week that U.S. federal prosecutors brought charges against the former chief security officer of Uber. At issue was a breach that occurred in 2016, where prosecutors allege that he covered up a $100,000 payoff to the hackers responsible for the attack. The specific charges are obstructing justice and concealing a felony for the alleged cover-up.
According to research we recently published, nearly three-quarters of all breaches have required public disclosure or have affected financial results, up five points from 2015. Additionally, industry studies show that it can take more than nine months on average to identify and contain a breach, and a lot can happen to your credit in that timeframe. Thus the onus is on us to be vigilant about our own credit.
Here’s a quick list of things you can do right now to keep on top of your credit—and that you can do on an ongoing basis as well, because that’s what it takes to keep tabs on your personal info today.
Protecting yourself from data breaches
Closely monitor your online accounts: Whether it’s your credit card statements, banking statements, or your individual accounts for services like Uber, review them closely. If you see any suspicious activity, notify the institution or service and put a freeze on your account(s) as needed. Even a small charge can indicate a bigger problem, as that means your information is out there in the wild and could be used for bigger purchases down the pike. In the event you feel your Uber account has been compromised, you can contact them via their “I think my Uber account has been hacked” page.
Update your settings: That includes your privacy settings in addition to changing your password. As far as passwords go, strong, unique passwords are best; never reuse your credentials across different platforms, and update your passwords on a regular basis. That'll further protect your data. Using a password manager will help you keep on top of it all, while also storing your passwords securely.
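A password manager typically generates passwords using a cryptographically secure random source. A minimal sketch of the same idea, using Python's secrets module (the function name and character-class policy here are illustrative, not any particular product's behavior):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a strong random password using a CSPRNG (the secrets module)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # Keep drawing until the result mixes lower case, upper case, and digits,
    # which many sites require.
    while True:
        pwd = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pwd) and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd)):
            return pwd

print(generate_password())     # different every run
print(generate_password(24))   # longer is stronger
```

Note the use of secrets rather than random: the latter is fine for simulations but predictable enough that it should never be used for credentials.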
Enable two-factor authentication: While a strong and unique password is a good first line of defense, enabling app-based two-factor authentication across your accounts will help your cause by providing an added layer of security.
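For the curious, the codes produced by authenticator apps come from TOTP (RFC 6238): a shared secret plus the current time yields a short-lived one-time code. A minimal sketch, verified against the RFC's published test vector:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """Minimal TOTP (RFC 6238, SHA-1): derive a one-time code from a shared secret."""
    counter = for_time // step                      # 30-second time window
    msg = struct.pack(">Q", counter)                # window index as big-endian 64-bit
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at time = 59 seconds.
print(totp(b"12345678901234567890", 59, digits=8))  # -> 94287082
```

Because the code changes every 30 seconds and never travels over the network at enrollment time, a stolen password alone is not enough to get into the account.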
Check your credit: Depending on where you live, different credit reporting agencies keep a centralized report of all your credit activities. For example, the major agencies in the U.S. are Equifax, Experian, and TransUnion. Likewise in the U.S., the Fair Credit Reporting Act (FCRA) requires these agencies to provide you with a free credit report at least once every 12 months. It's a relatively quick process, and you might be surprised what you find, anywhere from incorrect address information to bills falsely associated with your name. Get your free credit report here from the U.S. Federal Trade Commission (FTC). Other nations provide similar services, such as the free credit reports for UK customers.
Freeze your credit: Freezing your credit will make it impossible for criminals to take out loans or open up new accounts in your name. To do this effectively, you will need to freeze your credit at each of the three major credit-reporting agencies (Equifax, TransUnion, and Experian).
Consider using identity theft protection: A solution like McAfee Identity Theft Protection will help you monitor your accounts and alert you to any suspicious activity, in addition to the activities I've listed above. Additionally, you can use a comprehensive security solution such as McAfee Total Protection to help protect your devices and data from known vulnerabilities and emerging threats.
Be your own best defense
For all the technology we have at our fingertips, our best defense is our eyes. Keeping a lookout for fishy activity and following up with family members when unfamiliar charges show up on your accounts will help you keep your good name in good standing.
The thing is, we never know when the next data breach might hit and how long it may be until that information is discovered and finally disclosed to you. Staying on top of credit has always been important, but given all our apps, accounts, and overall exposure these days, it’s a must.
Stay Updated
To stay updated on all things McAfee and for more resources on staying secure from home, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.
It was every administrator’s worst nightmare. A small district hospital in western Colorado lost access to 5 years’ worth of patient records after ransomware attackers exploited holes in an aging infrastructure to strike. But it was also an increasingly familiar story as ransomware attackers escalate their attacks and go after targets across all sectors of the economy.
But being a target doesn’t mean you’re fated to become a victim. With the deployment of complete and proactive security software, organizations can still defend their data in the face of a veritable epidemic of attacks against their endpoints. At McAfee, this is one of our core strengths.
The best defense starts with prevention. As we like to say, being informed is halfway to being prepared. With MVISION Insights, for instance, customers receive advance notice whenever ransomware attacks are happening in their sector or region. Take the example of an attack against a hospital: MVISION Insights will notice an uptick in ransomware attacks against other healthcare organizations and share that intelligence so other hospitals can get ahead of the potential threat and review the state of their own defenses.
MVISION Insights would help SOC teams know whether their defenses were in shape to protect against an attack. If not, it would offer prescriptive advice about what measures to take before the threat or campaign ever got launched. That is phase number one. Check out MVISION Insights in action.
As an organization goes about the work of hardening its environment, suppose that an APT group then uncovers a loophole. When you have thousands of endpoints, it’s always the case that some endpoint is going to be misconfigured. But before the bad guys can launch an attack, our prevention technology comes into play to prevent ransomware from infecting the endpoint in question.
McAfee leverages an integrated technology stack that includes machine learning, exploit prevention, behavioral blocking, and dynamic application containment. Together, these stop not just attacks delivered as traditional portable executable files but also fileless attacks.
Fig: Intelligent and Proactive Endpoint Security
What's more, McAfee's global intelligence capabilities tap into over 1 billion sensors around the world and deploy static machine learning to identify newer types of endpoint attacks. Instead of relying on a signature, we can examine a file's attributes and calculate a score across multiple vectors to determine whether the file in question exceeds a certain suspicion threshold and should be flagged as potentially malicious.
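To make the scoring idea concrete, here is a deliberately simplified toy in the spirit of the approach described above. Real products learn weights over thousands of features from millions of samples; the feature names, weights, and threshold below are invented purely for illustration.

```python
# Toy "attribute scoring" classifier -- illustrative only, not McAfee's model.

SUSPICION_WEIGHTS = {
    "unsigned_binary": 0.30,     # no valid code-signing certificate
    "packed_sections": 0.25,     # high-entropy / packed sections
    "rare_prevalence": 0.25,     # almost never seen across the sensor network
    "writes_to_startup": 0.20,   # persists via startup locations
}
THRESHOLD = 0.5

def score_file(attributes: set) -> float:
    """Sum the weights of every suspicious attribute the file exhibits."""
    return sum(w for name, w in SUSPICION_WEIGHTS.items() if name in attributes)

def is_suspicious(attributes: set) -> bool:
    return score_file(attributes) >= THRESHOLD

print(is_suspicious({"unsigned_binary", "packed_sections"}))   # 0.55 -> True
print(is_suspicious({"unsigned_binary"}))                      # 0.30 -> False
```

The advantage over signatures is that a brand-new file no one has ever seen can still be flagged, because the decision rests on what the file looks like rather than on an exact match.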
The Power of Big Data
McAfee's advanced AI capabilities also pay other security dividends in terms of prevention. Suppose that someone creates a new piece of ransomware with the contents of the file obscured. We can then apply dynamic machine learning, which examines the actual behavior of the process. Malware behaves, well, maliciously, and ransomware acts in very specific patterns. On our end, we run those behaviors through a machine learning engine to determine whether to remediate the activities of a questionable process.
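The "very specific patterns" point can be sketched with a toy behavioral check. Ransomware typically enumerates many files, rewrites them with encrypted contents, then deletes originals or shadow copies. Real engines score far richer telemetry with learned models; the event names and pattern below are invented for demonstration.

```python
# Toy behavioral detector: flag processes whose event stream contains a
# ransomware-like ordered sequence. Event names are illustrative only.

RANSOMWARE_PATTERN = ["enumerate_files", "overwrite_file", "delete_shadow_copies"]

def matches_pattern(events, pattern=RANSOMWARE_PATTERN):
    """Return True if the events contain the pattern as an ordered subsequence."""
    it = iter(events)
    # `step in it` advances the iterator, so each step must appear after the last.
    return all(step in it for step in pattern)

benign = ["open_file", "read_file", "close_file"]
suspect = ["enumerate_files", "overwrite_file", "overwrite_file",
           "delete_shadow_copies", "demand_ransom_note"]

print(matches_pattern(benign))   # False
print(matches_pattern(suspect))  # True
```

Because the check keys on behavior rather than file contents, it still fires when the payload itself is obfuscated or brand new.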
This is the unique power of combined intelligence.
Let's consider a case where a ransomware attack actually manages to infect an endpoint and the malware begins to move laterally within the network.
Here's where McAfee's host-based intrusion prevention technology helps to stop ransomware's lateral movement, so it doesn't spread and infect the rest of your endpoints. EDR will detect and prioritize alerts of anomalous behavior for further investigation so SOCs can respond to these threats, such as by isolating or quarantining particular endpoints.
Typically, customers have had only two courses of action after a ransomware attack. If they were fortunate enough to have made backups, they can choose to reimage their machines. But that’s a laborious process that takes time and can be quite expensive. Or they can surrender to the attacker’s demands and pay the ransom to unlock their information.
But McAfee’s endpoint solution includes a unique feature that allows customers to actually roll back the effects of a ransomware attack with enhanced remediation technology that can even restore encrypted data. This is a brilliant technical innovation that further sets our solution apart from the rest of the industry. Organizations can save on average $500 per node in labor and productivity costs by eliminating the need to reimage machines with Rollback Remediation. Watch the video below to see Rollback Remediation in action.
Dynamic application containment (DAC) is another technology that McAfee has developed to further protect endpoints. DAC reduces the ability of greyware to make malicious changes to the system while minimizing end-user impact, as it does not use or require heavyweight sandboxing or app virtualization. It works either online or offline and protects endpoints without compromising business continuity.
Human-Machine Teaming
After collecting telemetry from a vast data lake, our threat researchers apply AI to extract insights that translate into actionable intelligence for our customers. This process of “human machine teaming” is a powerful combination that generates proactive intelligence, so organizations remain ahead of the gathering threats on the horizon. SOC teams can view real-time updates as they drill down to learn about new threats in their environments based on geography and industry.
All too often, security defenders find themselves in a mad scramble trying to separate out false positives from an overwhelming number of alerts flooding their screens. All the while, the bad guys are plying their trade. But McAfee takes the guesswork out of that process so they can get a complete and realistic look at the attack landscape.
Our endpoint security platform alerts defenders about any devices in their network that may lack sufficient protection. They can then go ahead and isolate any devices at risk of getting breached or take any remediation actions to protect the organization. When all is said and done, the system is fully protected.
For organizations increasingly in the crosshairs of ransomware attacks, these tools will make all the difference. It’s the future of intelligent endpoint security.
Ransomware Evolution to Most Promising Victim (MPV) Attacks
Ransomware cost businesses over $11.5 billion, with a 500% increase in attacks in 2019, according to Forrester Research. It is a persistent threat. Ransomware is a type of malicious software that infects a computer and restricts users’ access to it and their data until a ransom is paid to unlock it. It significantly challenges the CIA triad (confidentiality, integrity and availability) all in one swoop. To understand ransomware today, it’s best to review how its attack structures have evolved.
Ransomware has been around for over 30 years, or three generations, ironically starting at a 1989 global health conference where an infected diskette was distributed to over 20,000 attendees. As the internet evolved with access to more compute devices and online payment capabilities, so did the attackers’ playing field.
Early variants of ransomware merely locked individual computers, sometimes even without encryption, thus preventing single user access. However, this has now evolved to locking entire organizations down. Criminals got clever with social engineering by masquerading the ransomware as a law enforcement agency (perhaps the FBI) and making accusations that illegal files are on the system.
With CryptoLocker in 2013, ransomware moved beyond scare tactics and became more aggressive and straightforward, threatening to damage systems by a set deadline. 2014 is when ransomware took great strides forward: CTB-Locker, partly due to its business model, created hundreds of thousands of infections through phishing, making it the most dangerous ransomware family of 2016.
But the ransomware that got the world’s attention was WannaCry. Why? It practically held the world hostage. It took only days to infect over a quarter million computers. The ransomware worm targeted older versions of Windows; once on a device in a network, it searched for more devices to exploit. Given this vast global reach, WannaCry received a massive amount of media attention. One could argue it was pivotal to bringing cybersecurity to the boardroom, as noted by research, making cybersecurity a mainstream business concern.
The most recent evolution, a business model called Ransomware-as-a-Service (RaaS), worked for CTB-Locker and was taken to another level by GandCrab, becoming the most prolific ransomware of 2018 and Q1 2019. This model lowers the bar and gives cyber criminals a platform to deliver ransomware, thus commoditizing the business.
Ransomware’s transformation over the years has been built on technology advances (e.g., the internet, mobile, cryptocurrency) and a range and combination of malware tactics. Given this historical legacy of malware tactics, cybersecurity solutions should leverage this knowledge to hunt and investigate these artifacts and indicators of compromise.
Today’s Ransomware
So where are we today? The ransomware market has advanced, with highly targeted tactics replacing the old approach of casting a wide net in the hope that some victims will engage. Threat actors now tailor attacks to organizations with money, essential IP or sensitive critical data on their IT systems, and to organizations that are heavily dependent on business continuity, treating them as the most promising victim (MPV). Adversaries infiltrate first to scout the ransomware opportunity, using the infected organizations to do reconnaissance and select the most promising victims for further exploitation and ransomware. In addition, they employ effective data-deletion attack structures to prevent recovery. A new threat of “pay or your data goes public” has emerged as well.
Could your organization be a target or MPV for sophisticated, tailored ransomware? Watch this video on a recent example of an MPV attack.
Get Ahead of the Savvy Ransomware
Ideally, one does not want to be an MPV. The best way to avoid becoming an MPV is to be proactive. What if your cybersecurity could automatically prioritize these attacks based on industry, region and your security posture? What if you could get detailed information on how an attack works before you get hit? What if you had the ability to predict the likelihood of being an MPV? More importantly, what if you were prescribed specific actions to take to counter these attacks before they hit? Enter MVISION Insights, intelligently driving your endpoint security!
Fruits of Human Interface and Artificial Intelligence
The Advanced Threat Research (ATR) team combines human intelligence with artificial intelligence to bring these proactive insights to your attention. As a researcher on the ATR team, I get a wealth of insights on ransomware and other advanced threats from our more than one billion sensors. McAfee’s footprint and community bring a hefty outlook on the threat landscape and real-life best practices on what to do. A recent and worthy ransomware find is Netwalker and its variants. We performed a deep dive on Netwalker ransomware, looking not only at the technical details of the ransomware itself but also tracking a large portion of the criminal profits. Netwalker has gained some quick success, gathering more than 1,900 Bitcoin in one quarter – literally picking out MPVs one by one.
MVISION Insights automates these findings, delivering them proactively to alert and advise you on the threats that matter to you. Is there any other offering that brings the power of human intuition and machine learning together at this grand level? To explore what MVISION Insights offers, check out the Preview of MVISION Insights. This is a web-based experience of a sampling of threats and proactive, actionable intelligence that MVISION Insights automatically offers you. Don’t miss out!
In order to enable emulation of malware samples at scale, we have
developed the Speakeasy
emulation framework. Speakeasy aims to make it as easy as
possible for users who are not malware analysts to acquire triage
reports in an automated way, as well as enabling reverse engineers to
write custom plugins to triage difficult malware families.
Originally created to emulate Windows kernel mode malware, Speakeasy
now also supports user mode samples. The project’s main goal is high
resolution emulation of the Windows operating system for dynamic
malware analysis for the x86 and amd64 platforms. Similar emulation
frameworks exist to emulate user mode binaries. Speakeasy attempts to
differentiate itself from other emulation frameworks in the following ways:
- Architected specifically around emulation of Windows malware
- Supports emulation of kernel mode binaries to analyze difficult-to-triage rootkits
- Emulation and API support driven by current malware trends to provide the community with a means to extract indicators of compromise with no extra tooling
- Completely configurable emulation environment requiring no additional code
The project currently supports kernel mode drivers, user mode
Windows DLLs and executables, as well as shellcode. Malware samples
can be automatically emulated, with reports generated for later post
processing. The ongoing project goal will be continuing to add support
for new or popular malware families.
In this blog post, we will show an example of Speakeasy’s
effectiveness at automatically extracting network indicators from a
Cobalt Strike Beacon sample acquired from an online malware aggregate.
Background
Dynamic analysis of Windows malware has always been a crucial step
during the malware analysis process. Understanding how malware
interacts with the Windows API and extracting valuable host-based and
network-based indicators of compromise (IOCs) are critical to
assessing the impact malware has on an affected network. Typically,
dynamic analysis is performed in an automated or targeted fashion.
Malware can be queued to execute within a sandbox to monitor its
functionality, or manually debugged to reveal code paths not executed
during sandbox runs.
Code emulation has been used historically for testing, validation
and even malware analysis. Being able to emulate malicious code lends
many benefits from both manual and automated analysis. Emulation of
CPU instructions allows for total instrumentation of binary code where
control flow can be influenced for maximum code coverage. While
emulating, all functionality can be monitored and logged in order to
quickly extract indicators of compromise or other useful intelligence.
Emulation provides several advantages over execution within a
hypervisor sandbox. A key advantage is noise reduction. While
emulating, the only activity that can be recorded is either written by
the malware author, or statically compiled within the binary. API
hooking within a hypervisor (especially from a kernel mode
perspective) can be difficult to attribute to the malware itself. For
example, sandbox solutions will often hook heap allocator API calls
without knowing if the malware author intended to allocate memory, or
if a lower-level API was responsible for the memory allocation.
However, emulation has disadvantages as well. Since we are removing
the operating system from the analysis phase, we, as the emulator, are
now responsible for providing the expected inputs and outputs from API
calls and memory access that occur during emulation. This requires
substantial effort in order to successfully emulate malware samples
that are expected to run on a legitimate Windows system.
Shellcode as an Attack Platform
In general, shellcode is an excellent choice for attackers to remain
stealthy on an infected system. Shellcode runs within executable
memory and does not need to be backed by any file on disk. This allows
attacker code to hide easily within memory where most forms of
traditional forensic analysis will fail to identify it. Either the
original binary file that loads the shellcode must first be
identified, or the shellcode itself must be dumped from memory. To
avoid detection, shellcode can be hidden within a benign appearing
loader, and then be injected into another user mode process.
In the first part of this blog series, we will show the
effectiveness of emulation with one of the more common samples of
shellcode malware encountered during incident response investigations.
Cobalt Strike is a commercial penetration testing framework that
typically utilizes stagers to execute additional code. An example of a
stager is one that downloads additional code via a HTTP request and
executes the HTTP response data. The data in this case is shellcode
that commonly begins with a decode loop, followed by a valid PE that
contains code to reflectively load itself. In the case of Cobalt
Strike, this means it can be executed from the start of the executable
headers and will load itself into memory. Within the Cobalt Strike
framework, the payload in this case is typically an implant known as
Beacon. Beacon is designed to be a memory resident backdoor used to
maintain command and control (C2) over an infected Windows system. It
is built using the Cobalt Strike framework without any code
modifications and can be easily built to have its core functionality
and its command and control information modified.
All of this allows attackers to rapidly build and deploy new
variants of Beacon implants on compromised networks. Therefore, a tool
to rapidly extract the variable components of Beacon is necessary
and, ideally, will not require the valuable time of malware analysts.
Speakeasy Design
Speakeasy currently employs the QEMU-based emulator engine Unicorn
to emulate CPU instructions for the x86 and amd64 architectures.
Speakeasy is designed to support arbitrary emulation engines in the
future via an abstraction layer, but it currently relies on Unicorn.
Full OS sandboxing will likely always be required to analyze all
samples as generically emulating all of Windows is somewhat
infeasible. Sandboxing can be difficult to scale on demand and can be
time consuming to run samples. However, by making sure we emulate
specific malware families, such as Beacon in this example, we can
quickly reduce the need to reverse engineer variants. Being able to
generate high level triage reports in an automated fashion is often
all the analysis that is needed on a malware variant. This allows
malware analysts more time to focus on samples that may require deeper analysis.
Shellcode or Windows PEs are loaded into the emulated address space.
Windows data structures required to facilitate basic emulation of
Windows kernel mode and user mode are created before attempting to
emulate the malware. Processes, drivers, devices and user mode
libraries are “faked” in order to present the malware with a realistic
looking execution environment. Malware will be able to interact with
an emulated file system, network and registry. All these emulated
subsystems can be configured with a configuration file supplied to
each emulation run.
Windows APIs are handled by Python API handlers. These handlers will
try to emulate expected outputs from these APIs so that malware
samples will continue their expected execution path. When defining an
API handler, all that is needed is the name of the API, the number of
arguments the API expects, and an optional calling convention
specification. If no calling convention is supplied, stdcall is
assumed. Currently, if an API call is attempted that is not supported,
Speakeasy will log the unsupported API and move on to the next entry
point. An example handler for the Windows HeapAlloc function exported
by kernel32.dll is shown in Figure 1.
Figure 1: Example handler for Windows
HeapAlloc function
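Since Figure 1 itself is not reproduced here, the following is a minimal sketch of the handler-registration idea it illustrates. The decorator, names, and FakeEmu class are illustrative stand-ins, not Speakeasy's real API:

```python
# Minimal sketch of an API-handler registry in the spirit of Speakeasy's design.
# Names, signatures, and FakeEmu are illustrative, not the framework's real API.
API_HANDLERS = {}

def apihook(name, argc, conv='stdcall'):
    """Register a Python handler for a named Windows API."""
    def decorator(func):
        API_HANDLERS[name.lower()] = (func, argc, conv)
        return func
    return decorator

@apihook('HeapAlloc', argc=3)
def heap_alloc(emu, argv):
    # hHeap, dwFlags, dwBytes: return a fake heap address so the sample
    # continues down its expected execution path.
    h_heap, dw_flags, dw_bytes = argv
    return emu.alloc(dw_bytes)

class FakeEmu:
    """Stand-in for the emulator object passed to every handler."""
    def __init__(self):
        self.next_addr = 0x10000

    def alloc(self, size):
        addr = self.next_addr
        self.next_addr += (size + 0xFFF) & ~0xFFF  # page-align the chunk
        return addr

emu = FakeEmu()
func, argc, conv = API_HANDLERS['heapalloc']
ptr = func(emu, [0x1234, 0, 0x100])
print(hex(ptr))  # 0x10000
```

Because handlers fall back to stdcall when no convention is given, most Win32 APIs need only a name and argument count to be hooked.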
All entry points are emulated by default. For example, for DLLs, all
exports are emulated, and for drivers, the IRP major functions are
each emulated. In addition, dynamic entry points that are discovered
during runtime are followed. Some examples of dynamic entry points
include threads that are created or callbacks that are registered.
Attributing activity to specific entry points can be crucial to seeing
the whole picture when trying to identify the impact of a malware infection.
Reporting
Currently, all events captured by the emulator are logged and
represented by a JSON report for easy post processing. This report
contains events of interest that are logged during emulation. Like
most emulators, all Windows API calls are logged along with arguments.
All entry points are emulated and tagged with their corresponding API
listings. In addition to API tracing, other specific events are called
out including file, registry and network access. All decoded or
“memory resident” strings are dumped and displayed in the report to
reveal useful information not found within static string analysis.
Figure 2 shows an example of a file read event logged in a Speakeasy
JSON report.
Figure 2: File read event in a Speakeasy report
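As a rough illustration of post processing such a report, the snippet below parses a hypothetical report fragment. The field names are assumptions modeled on the figures and may differ between framework versions:

```python
import json

# Parse a hypothetical Speakeasy-style JSON report fragment. Field names are
# modeled on the figures in this post and may differ between versions.
report_json = '''
{
  "entry_points": [
    {
      "ep_type": "shellcode",
      "apis": [
        {"api_name": "kernel32.VirtualAlloc", "args": ["0x0", "0x1000"], "ret_val": "0x450000"}
      ],
      "file_access": [
        {"event": "read", "path": "C:\\\\Windows\\\\system32\\\\kernel32.dll"}
      ]
    }
  ]
}
'''

report = json.loads(report_json)
for ep in report['entry_points']:
    for evt in ep.get('file_access', []):
        print(evt['event'], evt['path'])
```

Because the report is plain JSON, the same loop extends naturally to registry and network events for bulk IOC extraction.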
Speed
Because the framework is written in Python, speed is an obvious
concern. Unicorn and QEMU are written in C, which provides very fast
emulation speeds; however, the API and event handlers we write are in
Python. Transitioning between native code and Python is extremely
expensive and should be done as little as possible. Therefore, the
goal is to only execute Python code when it is absolutely necessary.
By default, the only events we handle in Python are memory access
exceptions or Windows API calls. In order to catch Windows API calls
and emulate them in Python, import tables are doped with invalid
memory addresses so that we only switch into Python when import tables
are accessed. Similar techniques are used for when shellcode accesses
the export tables of DLLs loaded within the emulated address space of
the malware. By executing as little Python code as possible, we can
maintain reasonable speeds while still allowing users to rapidly
develop capabilities for the framework.
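A toy illustration of this trap-address dispatch, with addresses and names invented for the sketch:

```python
# Toy illustration of the "doped import table" trick: each imported API is
# bound to an invalid address so a fault occurs only when the sample actually
# calls it, keeping the hot path in native emulation. Addresses and names are
# invented for this sketch.
TRAP_BASE = 0xFEED0000

imports = ['kernel32.VirtualAlloc', 'wininet.InternetOpenA']
trap_to_api = {TRAP_BASE + i * 4: name for i, name in enumerate(imports)}

def on_invalid_fetch(address):
    """Called when emulated code jumps to an unmapped trap address."""
    api = trap_to_api.get(address)
    if api is None:
        raise RuntimeError('unhandled fault at %#x' % address)
    # A real emulator would now call the Python handler and resume execution;
    # here we simply return the resolved API name.
    return api

print(on_invalid_fetch(TRAP_BASE + 4))  # wininet.InternetOpenA
```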
Memory Management
Speakeasy implements a lightweight memory manager on top of the
emulator engine’s memory management. Each chunk of memory allocated by
malware is tracked and tagged so that meaningful memory dumps can be
acquired. Being able to attribute activity to specific chunks of
memory can prove to be extremely useful for analysts. Logging memory
reads and writes to sensitive data structures can reveal the true
intent of malware not revealed by API call logging, which is
particularly useful for samples such as rootkits.
Speakeasy offers an optional “memory tracing” feature that will log
all memory accesses that samples exhibit. This will log all reads,
writes and executes to memory. Since the emulator tags all allocated
memory chunks, it is possible to glean much more context from this
data. If malware hooks a critical data structure or pivots execution
to dynamically mapped memory this will be revealed and can be useful
for debugging or attribution. This feature comes at a great speed
cost, however, and is not enabled by default.
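The tagging idea can be sketched as follows; the class and tag format are illustrative, not Speakeasy's actual implementation:

```python
# Sketch of a tagging allocator: each allocation records a tag so later memory
# accesses and dumps can be attributed to a specific chunk. Illustrative only.
class TaggedMemoryManager:
    def __init__(self, base=0x400000):
        self.base = base
        self.allocs = []  # (address, size, tag)

    def alloc(self, size, tag):
        addr = self.base
        self.base += (size + 0xFFF) & ~0xFFF  # page-align the next base
        self.allocs.append((addr, size, tag))
        return addr

    def tag_for(self, address):
        """Attribute an address back to the allocation that contains it."""
        for addr, size, tag in self.allocs:
            if addr <= address < addr + size:
                return tag
        return None

mm = TaggedMemoryManager()
buf = mm.alloc(0x2000, tag='emu.shellcode.virtualalloc')
print(mm.tag_for(buf + 0x10))  # emu.shellcode.virtualalloc
```

With every access resolvable to a tag like this, a memory-trace log can say which allocation a read or write touched, not just its raw address.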
The emulated environment presented to malware includes common data
structures that shellcode uses to locate and execute exported Windows
system functions. It is necessary to resolve exported functions in
order to invoke the Win32 API and therefore have meaningful impact on
a targeted system. In most cases, Beacon included, these functions are
located by walking the process environment block (commonly called the
PEB). From the PEB, shellcode can access a list of all loaded modules
within a process’s virtual address space.
Figure 3 shows a memory report generated from emulating a Beacon
shellcode sample. Here we can trace the malware walking the PEB in
order to find the address of kernel32.dll. The malware then manually
resolves and calls the function pointer for the “VirtualAlloc” API,
and proceeds to decode and copy itself into the new buffer to pivot execution.
Figure 3: Memory trace report
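Conceptually, the emulator only has to present a believable module list for this walk to succeed. A simplified sketch follows; the real layout uses PEB_LDR_DATA and LDR_DATA_TABLE_ENTRY structures, and the module bases here are invented:

```python
# Sketch of the faked loader data an emulator can expose: shellcode walks the
# PEB's module list to find kernel32.dll, so the emulator fabricates one.
# Module bases are invented for this illustration.
fake_module_list = [
    {'name': 'ntdll.dll', 'base': 0x7FF90000},
    {'name': 'kernel32.dll', 'base': 0x7FF80000},
]

def find_module(name):
    """Case-insensitive lookup, mirroring how shellcode matches module names."""
    for mod in fake_module_list:
        if mod['name'].lower() == name.lower():
            return mod['base']
    return None

print(hex(find_module('KERNEL32.DLL')))  # 0x7ff80000
```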
Configuration
Speakeasy is highly configurable and allows users to create their
own “execution profiles”. Different levels of analysis can be
specified in order to optimize individual use cases. The end goal is
allowing users easy switching of configuration options with no code
changes. Configuration profiles are currently structured as JSON
files. If no profile is provided by the user, a default configuration
is provided by the framework. The individual fields are documented
within the Speakeasy project.
Figure 4 shows a snippet of the network emulator configuration
subsection. Here, users can specify what IP addresses get returned
when a DNS lookup occurs, or in the case of some Beacon samples, what
binary data gets returned during a TXT record query. HTTP responses
have custom responses configured as well.
Figure 4: Network configuration
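As a hedged illustration of what such a profile might look like, expressed here as a Python dict: the exact keys are assumptions for this sketch, and the authoritative schema is documented within the Speakeasy project.

```python
import json

# Hypothetical network-emulation profile in the spirit of Figure 4. The keys
# are assumptions for illustration; the real field names are documented in the
# Speakeasy project.
profile = {
    "network": {
        "dns": {
            # Binary data returned for TXT record queries (e.g. some Beacon samples)
            "txt": [{"query": "*", "response_file": "default.bin"}],
            # IP addresses returned for DNS lookups
            "names": {"c2.example.com": "203.0.113.10"},
        },
        "http": {
            # Data returned for HTTP requests such as stager GETs
            "responses": [{"verb": "GET", "path": "*", "response_file": "default.bin"}],
        },
    }
}

print(json.dumps(profile, indent=2))
```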
Many HTTP stagers will retrieve a web resource using a HTTP GET
request. Often, such as with Cobalt Strike or Metasploit stagers, this
buffer is then immediately executed so the next stage of execution can
begin. This response can be easily configured with Speakeasy
configurations. In the configuration in Figure 4, unless overridden,
the framework will supply the data contained in the referenced
default.bin file. This file currently contains debug interrupt
instructions (int3), so if the malware attempts to execute the data, emulation
exits and the attempt is logged in the report. Using this, we can easily
label the malware as a downloader that downloads additional code.
Configuration fields also exist for file system and registry
emulation. Files and registry paths can similarly be configured to
return data to samples that expect to be running on a live Windows system.
Limitations
As discussed, emulation comes with some challenges. Maintaining feature
parity with the system being emulated is an ongoing battle; however,
it provides unique opportunities for controlling the malware and
greater introspection options.
In cases where emulation does not complete fully, emulation reports
and memory dumps can still be generated in order to gather as much
data as possible. For example, a backdoor may successfully install its
persistence mechanism, but fail to connect to its C2 server. In this
situation, the valuable host-based indicators are still logged and can
provide value to an analyst.
Missing API handlers can quickly and easily be added to the emulator
in order to handle these situations. For many API handlers, simply
returning a success code will be sufficient to make the malware
continue execution. While full emulation of every piece of malware may
not be feasible, targeting functionality of specific malware families
can greatly reduce the need to reverse engineer variants of the same families.
Usage
Speakeasy is available right now on
our GitHub. It can be installed with the included Python
installer script or installed within a Docker container using the
provided Dockerfile. It is platform agnostic and can be used to
emulate Windows malware on Windows, Linux or macOS. More information
can be found on the project’s README.
Once installed, Speakeasy can be used as a standalone library or
invoked directly using the provided run_speakeasy.py script. In this
blog post we will demonstrate how to emulate a malware sample directly
from the command line. For information on how to use Speakeasy as a
library, see the project’s README.
The included script is meant to emulate a single sample and generate
a JSON report with the logged events. The command line arguments for
run_speakeasy.py are shown in Figure 5.
Figure 5: Command line arguments for run_speakeasy.py
Speakeasy also offers a rich development and hooking interface for
writing custom plugins. This will be covered in more detail in a later
blog post.
Emulation of a Beacon Implant
For this example, we will be emulating shellcode that decodes and
executes a Beacon implant variant that has a SHA-256 hash of
7f6ce8a8c2093eaf6fea1b6f1ef68a957c1a06166d20023ee5b637b5f7838918. We
begin by verifying the file format of the sample. This sample is
expected to be launched either by a loader or used as part of an
exploit payload.
Figure 6: Hex dump of malware sample
In Figure 6, we can clearly see that the file is not in the PE file
format. An analyst who has seen many shellcode samples may notice the
first two bytes: “0xfc 0xe8”. These bytes disassemble to the Intel
assembly instructions “cld” and “call”. The “cld” instruction is a
common prelude to position independent shellcode as it will clear the
direction flag allowing malware to easily parse string data from
system DLLs’ export tables. The following call instruction is often
used by shellcode to get its current program counter by following it
with a “pop” instruction. This allows the malware to discover where it
is executing from in memory.
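This triage heuristic is simple enough to sketch directly; the sample bytes below are an illustrative Metasploit-style prelude, not the Beacon sample itself:

```python
# Quick triage check for the common "cld; call" shellcode prelude described
# above: 0xFC is cld and 0xE8 begins a near call. The sample bytes are an
# illustrative Metasploit-style prelude, not the actual Beacon sample.
def looks_like_shellcode_prelude(data: bytes) -> bool:
    return len(data) >= 2 and data[0] == 0xFC and data[1] == 0xE8

sample = bytes.fromhex('fce8890000006089e531c0')
print(looks_like_shellcode_prelude(sample))         # True
print(looks_like_shellcode_prelude(b'MZ\x90\x00'))  # False
```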
Since we are reasonably certain this sample is shellcode, we will
invoke Speakeasy with the command line shown in Figure 7.
Figure 7: Command line used to emulate
malware sample
This will instruct Speakeasy to emulate the sample from offset zero
as x86 shellcode. Note: even though we are emulating code and not
actually executing it, these are still attacker generated binaries. It
may still be wise to emulate malicious code within a virtual machine
in the event a vulnerability is discovered in whatever native CPU
emulation engine is used.
After emulation, a report will be generated named “report.json”. In
addition, a full memory dump of the emulation environment will be
compressed and written to “memory_dump.zip”. The malware will get
loaded into emulated memory inside of a fake container process to
simulate a real execution environment that shellcode would expect to
be running in. Once emulation begins, emulated API calls will be
logged to the screen along with their arguments and return values.
Figure 8 shows the Beacon sample allocating a new memory buffer where
it will copy itself. The malware then begins to manually resolve
exports it needs to execute.
Figure 8: Beacon allocating memory and resolving exports
After additional decoding and setup, the malware attempts to connect
to its C2 server. In Figure 9, we can see the malware using the
Wininet library to connect and read data from the C2 server using HTTP.
Figure 9: Wininet API calls to connect to C2
The malware will loop endlessly until it receives the data it
expects from its C2 server. Speakeasy will timeout after a
predetermined amount of time and generate a JSON report.
Figure 10: Network C2 events
The network indicators are summarized in the “network_events” and
“traffic” sections of the generated report. In Figure 10, we can see
the IP address, port number and, in this case, HTTP headers associated
with the connections made by the malware.
In this example, when we emulated the sample, we instructed
Speakeasy to create a memory dump of the emulated address space. A ZIP
archive will get created of each memory allocation along with context
around it. This context includes base address, size and a tag that is
assigned by the emulator in order to identify what the memory
allocation corresponds to. Figure 11 shows a snippet of the memory
dump files created during emulation. The file names contain the tag
and base address associated with each memory allocation.
Figure 11: Individual memory blocks
acquired from emulation
If we just run strings on these memory dumps, we can quickly locate
interesting strings along with the Beacon configuration data, which is
shown in Figure 12.
Figure 12: Configuration string data for
the malware
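Running strings over a dump can be approximated in a few lines of Python; the dump bytes below are invented for illustration:

```python
import re

# Minimal equivalent of running `strings` over a memory dump: pull printable
# ASCII runs of four or more characters. The dump bytes are invented.
def extract_strings(data: bytes, min_len: int = 4):
    pattern = rb'[\x20-\x7e]{%d,}' % min_len
    return [m.group().decode('ascii') for m in re.finditer(pattern, data)]

dump = b'\x00\x01GET /updates HTTP/1.1\x00\xff\x02Mozilla/5.0\x00'
print(extract_strings(dump))  # ['GET /updates HTTP/1.1', 'Mozilla/5.0']
```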
In a triage level of analysis, we may only care about the indicators
of compromise for a malware variant of a known family. However, if
full reverse engineering of the sample is required, we can also
recover the decoded version of the Beacon malware in its DLL form. By
simply doing a primitive grep for the “MZ” magic bytes, we find the
only hits are the memory dumps related to the original sample’s
allocation and the virtual allocated buffer that the malware copies
itself to (Figure 13).
Figure 13: Memory dump containing the
decoded malware
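The grep step can be reproduced with a short script; the dump here is synthetic, with an MZ header placed at offset 0x48 to mirror the layout of this sample:

```python
# Primitive grep for PE headers inside memory dumps: report every offset of
# the "MZ" magic. The synthetic dump places a PE header at offset 0x48 to
# mirror the layout of the sample discussed in this post.
def find_mz_offsets(data: bytes):
    offsets, pos = [], data.find(b'MZ')
    while pos != -1:
        offsets.append(pos)
        pos = data.find(b'MZ', pos + 1)
    return offsets

dump = b'\x00' * 0x48 + b'MZ\x90\x00\x03\x00' + b'\x00' * 16
print([hex(o) for o in find_mz_offsets(dump)])  # ['0x48']
```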
If we look at the bytes in the original shellcode buffer, we can see
that it was decoded before it was copied and is sitting in memory
ready to be dumped at offset 0x48. We can now successfully load the
decoded Beacon DLL into IDA Pro for full analysis (Figure 14).
Figure 14: Decoded malware successfully
loaded into IDA Pro
Conclusion
In this blog post we demonstrated how the Speakeasy emulation
framework can be used to automatically triage a Beacon malware sample.
We used it to discover valuable network indicators, extract its config
information from memory, and acquire a decoded Beacon DLL for further analysis.
Head over to our GitHub to start using Speakeasy
today, and stay tuned for the next blog post where we will
demonstrate kernel malware analysis using emulation.
My name is Seb and I’m an application security (AppSec) engineer, part of the Application Security Consultant (ASC) team here at Veracode. My role is to help remediate flaws at scale and at pace, and to help you get the most out of the Veracode toolset. With a background as an engineering lead, I’ve run AppSec initiatives for government and global retailers.
I’ve found that successful AppSec is all about people. To help bring that “people” element to your AppSec program, a Security Champions initiative is an effective way of turning security-interested developers into security evangelists for your organization. Security Champions become a bridge and a multiplier, transferring knowledge to their own team members and working with security teams to find better, faster, more secure ways of creating secure software. Having interfaced with Security Champions many times, there are some key tips for success that I’ve picked up, many of which we’ve implemented at Veracode.
Don’t underestimate program interest
First and foremost, more people will be interested in a Security Champions program than you think. At Veracode, we see a lot of interest and typically have two security champions per team. I’ve always been surprised by the positive response I receive when starting a Security Champions initiative. Cyber is cool; it’s relevant, it has great career opportunities, and it makes a difference. Once you explain the purpose, goals, and rewards involved, you shouldn’t have trouble finding Security Champions in your own organization.
Make it fun, engaging, and rewarding
You’ll also need to work to make it “feel” special. You will have just started an elite club, but you can’t simply book a room and wash your hands. To keep it interesting, I’ve run capture the flag (CTF) games and competitions, brought in external speakers, run training sessions, and even organized for Security Champions to go to training camps and conferences. Your role as the person initiating the Security Champions program is to become a great facilitator, a marketer, and an evangelist for AppSec. If you bring the party, your Security Champions will stay engaged.
Work like engineers
I also recommend that you organize like a software team. If all your engineers are using SCRUM, an agile framework for development, then run your Security Champions program like a SCRUM team. If they’re all using Azure DevOps, run your Security Champions program using Azure DevOps as well. It also helps to have a backlog of potential work and groom the backlog together, run sprints, estimate work, and most importantly, run retrospectives.
Build a team identity to maximize impact
Remember: the same team-building rules apply, and your group of Security Champions is a group of individuals to begin with. If you want maximum impact through collaboration and open discussion, then you need to invest in building that team and a sense of identity. At Veracode, we have a #security-champions Slack channel where collaboration can occur on Veracode integration projects or to ask questions about secure coding. And it doesn’t just have to be engineers. Anyone can be a Security Champion. Anyone can bang the drum, try to help influence secure practices, and be a fan of AppSec.
Let security help with developer roadblocks
Security team members in a Security Champions group can start to absorb the challenges, tooling, and complexities of what their software teams are going through. AppSec is often a people challenge, and having security team members who understand the pressure developers face to deliver software, trading off between security debt, new features, and continuous improvement, helps to build empathetic relationships.
Don't forget to share your challenges and welcome the Security Champions into the world of security; knowledge transfer works both ways. Both sides of the aisle may have ideas on how to address challenges in other teams or parts of the business, especially when it comes to automation or better ways of working. As an example, the AppSec dashboards at Veracode were built in collaboration with Security Champions: it's a great way to inner-source code.
Work transparently and inclusively across teams
My most important tip is to work in the open. One of the principles of secure software is open design. A secure solution that relies on a hidden design secret or obfuscation is trouble waiting to happen, and the same can be said for decision making. Running a Security Champions initiative is an opportunity to demonstrate how to work transparently and inclusively across an organization.
Your agendas, meeting minutes, and prioritized backlog should be open for all to see, critique, and eventually contribute to. And you should seek the biggest audience possible for sprint reviews of the Security Champions initiative. Why? Because interacting with Security Champions is a chance for someone else in the organization to have a positive experience with security and become part of the initiative. The more open you are and the more people you reach, the more perceptions can be changed.
The goal is cultural change, not 100% compliance
In organizations where security and developers have strong relationships, influence becomes the driving force rather than compliance. With influence, we start to improve the culture and attitude towards security. Only then can we encourage everyone to work more securely because it's the right thing to do, not just because the compliance team is blocking a release. Security Champions are a critical part of improving security culture, and I hope you consider investing in an initiative like this, as we have here at Veracode.
Read the recent Forrester report, Build a Developer Security Champions Program, and browse our handy checklist below to learn more.
Cybercriminals tend to keep with the times, as they often leverage current events as a way to harvest user data or spread malicious content. The McAfee COVID-19 Threat Report July 2020 points to a rather significant surge in attacks exploiting the current pandemic with COVID-19 themed malicious apps, phishing campaigns, malware, and ransomware. However, what many users don’t realize is that there is far more to ransomware attacks than meets the eye.
COVID-19 Themed Ransomware
During the first few months of 2020, the McAfee Advanced Threat Research (ATR) team saw that cybercriminals were targeting manufacturing, law, and construction businesses. After pinpointing their targets, hackers spread COVID-19 themed ransomware campaigns to these companies in an effort to capitalize on their relevancy during this time.
An example of one of these attacks in action is Ransomware-GVZ. Ransomware-GVZ displays a ransom note demanding payment in return for decrypting the firm’s compromised systems and the personal and corporate data they contain. The ransomware encrypts the organization’s files and displays a lock screen if a user attempts to reboot their device. As a result, the company is left with a severely crippled network while the criminals behind the attack gain a treasure trove of data, including information belonging to consumers who have previously interacted with the business.
Ransomware Could Be the New Data Breach
As ransomware attacks continue to evolve, it’s not just file encryption that users need to be aware of; they also need to be aware of the impact the attack has on compromised data. Senior Principal Engineer and Lead Scientist Christiaan Beek stated, “No longer can we call these attacks just ransomware incidents. When actors have access to the network and steal the data prior to encrypting it, threatening to leak if you don’t pay, that is a data breach.” If a ransomware attack exploits an organization and their network is compromised, so is the data on that network. Hackers can steal this data before encrypting it and use the stolen information to conduct identity theft or spread other misfortune that can affect both the organization’s employees and their customers.
This surge in ransomware is only compounded by traditional data breaches, which have also spiked in conjunction with the global pandemic. According to the McAfee COVID-19 Threat Report July 2020, the number of reported incidents targeting the public sector, individuals, education, and manufacturing dramatically increased. In fact, McAfee Labs counted 458 publicly disclosed security incidents in the first few months of 2020, with a 60% increase in attacks from Q4 2019 to Q1 2020 in the United States alone. Consequently, the attacks targeting organizations also impact the consumers who buy from them, as the company’s data consists of their customers’ personal and financial information.
Don’t Let Your Data Be Taken for Ransom
Because of the high volume of data that’s compromised by ransomware attacks, it’s crucial for consumers to shift how they approach these threats and respond much as they would to a data breach. Luckily, there are actionable steps you can take as a consumer to help secure your data.
Change your credentials
If you discover that a data leak or a ransomware attack has compromised a company you’ve interacted with, err on the side of caution and change your passwords for all of your accounts. Taking extra precautions can help you avoid future attacks.
Take password protection seriously
When updating your credentials, you should always ensure that your password is strong and unique. Many users utilize the same password or variations of it across all their accounts. Therefore, be sure to diversify your passcodes to ensure hackers cannot obtain access to all your accounts at once, should one password be compromised. You can also employ a password manager to keep track of your credentials.
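The advice above is easy to automate: let software pick the passwords for you. The sketch below is a minimal illustration using only Python's standard library; the `generate_password` helper and the account names are illustrative, not part of any product mentioned here.

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets uses the OS CSPRNG (unlike the random module), so the
    # output is suitable for credentials.
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One distinct password per account: a single leak then exposes one login only.
passwords = {site: generate_password() for site in ("bank", "email", "shop")}
```

In practice, a password manager performs exactly this generation and storage for you, which is why diversifying passcodes costs almost no extra effort.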
Enable two-factor or multi-factor authentication
Two-factor or multi-factor authentication provides an extra layer of security, as it requires multiple forms of verification. This reduces the risk of successful impersonation by hackers.
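Most authenticator apps implement the time-based one-time password (TOTP) scheme from RFC 6238, which is why the codes expire every 30 seconds. As a rough illustration of the mechanism (a sketch for understanding, not a vetted security component), here is a minimal standard-library implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, period=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1, 30-second steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code is derived from both a shared secret and the current time step, a stolen password alone is not enough to log in; the attacker would also need the secret held by your device.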
If you are targeted, never pay the ransom
It’s possible that you could be targeted individually by a ransomware campaign. If this happens, don’t pay the ransom. Although you may feel that this is the only way to get your encrypted files back, there is no guarantee that the ransomware developers will send a decryption tool once they receive the payment. Paying the ransom also contributes to the development of more ransomware families, so it’s best to hold off on making any payments.
Use a comprehensive security solution
Adding an extra layer of security with a solution such as McAfee® Total Protection, which includes Ransom Guard, can help protect your devices from these cyberthreats.
Stay Updated
To stay updated on all things McAfee and for more resources on staying secure from home, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.
Global positioning system (GPS) technology is now the standard way for travelers to efficiently get from point A to point B. While GPS delivers unparalleled opportunities to businesses and individuals, there are some drawbacks to using this technology. GPS devices can be vulnerable to cyber attacks through GPS spoofing.
GPS Spoofing 101
Global navigation satellite systems (GNSS) have been around for years in many industrialized countries, and GPS is just one of those systems. GPS spoofing happens when someone uses a radio transmitter to send a counterfeit GPS signal to a receiver antenna to counter a legitimate GPS satellite signal. Most navigation systems are designed to use the strongest GPS signal, and the fake signal overrides the weaker but legitimate satellite signal.
Commercial Hazards of GPS Spoofing
GPS spoofing isn’t to be confused with GPS jamming. GPS jamming happens when a cyber criminal blocks GPS signals altogether. Selling or using GPS jamming equipment that can block communications is illegal in the United States. While GPS jamming appears to be the greater threat, GPS spoofing delivers a sucker punch to a variety of businesses.
GPS spoofing allows hackers to interfere with navigation systems without operators realizing it. The fake GPS feeds cause drivers, ship captains, and other operators to go off course without any coercion. Businesses that are particularly vulnerable to GPS spoofing are shipping companies, taxi services, and construction companies.
Shipping Companies
Shipping companies that haul freight via land, air, and sea all use GPS-based navigation systems to get cargo safely to destinations all over the world. GPS spoofing leaves these shipments vulnerable to hijacking and theft. A practical example is hijackers using GPS spoofing to misdirect a vehicle to a location where its cargo can be stolen, hiding the truck’s location while it happens. Additionally, many shippers use GPS-enabled locks to secure their cargo, allowing them to open only when the truck arrives at its set destination. GPS spoofing undoes those locks as well. In all, this puts drivers in danger, and trucking companies lose millions of dollars of cargo each year to hijacking incidents such as these.
Taxi and Ride Sharing Services
Gone are the days when taxi drivers relied solely on their knowledge of a city’s streets to transport passengers. Today’s taxi drivers can go into any city that their license allows and do their jobs efficiently with the use of GPS technology. This flexibility comes with some drawbacks, however. GPS spoofing allows drivers to fake their location and commit criminal acts while still on the clock. Drivers from ride services can also use the technique to fraudulently place themselves in surge areas to get more money for their services. Projecting a false location is a financial risk to companies and is potentially dangerous for passengers.
Construction Companies
While skilled construction workers are certainly valued, specialized tools, equipment, and machinery are the assets that many construction companies seek to track. These expensive assets commonly go missing on worksites, which eats into company profits. In recent years, GPS asset tracking systems have been installed to make sure construction equipment, tools, and machinery remain at authorized worksites. By using GPS spoofing, a thief could move an asset to a new location without anyone knowing about it until it was too late.
Dangers of GPS Spoofing for Everyone Else
GPS spoofing isn’t just a threat to businesses and government agencies; it can also be the catalyst for significant harm to individuals who rely on GPS. Cruising coastal waterways is a favorite hobby for those who enjoy boating. Modern boats are equipped with GPS-based navigation systems. A cyber criminal can use GPS spoofing to steer a skipper’s boat off course and into the path of danger from modern-day pirates.
The makers of location-based dating apps tout them as a safe way to meet a potential mate. These apps use GPS technology to help users identify dates by their location. When a bad actor uses GPS spoofing, he can fake his location or guide his date to a dangerous location.
The future of driving is now. Some electric cars are already equipped with an autopilot feature that offers unparalleled convenience to travel-weary drivers. However, independent research findings have uncovered a critical vulnerability in the cars’ navigation systems. What will happen when fully autonomous, self-driving cars are made without steering devices that would allow a person to take control of their car during a GPS spoofing incident?
Tips to Combat GPS Spoofing Attacks
If you own a business that relies on GPS-based navigation systems, you’ll want to know the best ways to defeat GPS spoofing attacks. The Department of Homeland Security points out some physical and procedural techniques to fight the problem. It recommends that companies hide GPS antennas from public view. GPS spoofing works best when an attacker can get close to an antenna and override legitimate GPS signals that come from orbiting satellites.
The agency suggests installing a decoy antenna that’s in plain view of would-be cyber criminals. Adding redundant antennas in different locations at your site allows you to notice if one antenna is being targeted for GPS spoofing. Companies such as Regulus Cyber are also developing GPS spoofing detection software that alerts users of spoofing incidents and keeps their devices from acting on spoofed GPS data.
Additionally, organizations should consider taking GPS-enabled equipment offline whenever connectivity isn’t actively required, making it less susceptible to attack. Likewise, following the basics of security hygiene provides further protection: regular updates and password changes, along with the use of two-factor authentication, network firewalls, and other cyber defenses.
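Detection software of the kind mentioned above typically builds on plausibility checks over the stream of reported fixes. As a toy illustration (my own simplification, not necessarily how any vendor's product works), the sketch below flags consecutive GPS fixes whose implied travel speed is physically implausible for the vehicle being tracked:

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def flag_implausible_fixes(fixes, max_speed_kmh=120.0):
    """Return indices of fixes whose implied speed from the previous fix
    exceeds max_speed_kmh -- a crude hint that the GPS feed may be spoofed.
    Each fix is a (unix_time_seconds, lat, lon) tuple."""
    flagged = []
    for i in range(1, len(fixes)):
        (t0, la0, lo0), (t1, la1, lo1) = fixes[i - 1], fixes[i]
        hours = (t1 - t0) / 3600.0
        if hours <= 0:
            flagged.append(i)  # non-monotonic timestamps are suspicious too
            continue
        if haversine_km(la0, lo0, la1, lo1) / hours > max_speed_kmh:
            flagged.append(i)
    return flagged
```

Real detection systems combine many more signals (signal strength, satellite geometry, cross-checks against inertial sensors), but the underlying idea is the same: a legitimate receiver cannot teleport.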
GPS Spoofing for Privacy
While GPS spoofing can cause big problems for people, businesses, and governments, there is a legitimate use for the practice. GPS tracking and location sharing present everyone with real privacy issues. GPS spoofing allows users to hide their actual location from those who could cause harm. Security companies can use GPS spoofing to guard high-profile clients or expensive merchandise. Individuals can install GPS spoofing apps for free on their Android phones to mask their locations and protect their privacy.
We are most appreciative of our customers who support our solutions and share their opinions through forums like Gartner Peer Insights. The voice and passion of our customers is instrumental in shaping our success and motivates us each day to improve and innovate.
To that end, we have made our SIEM product deployable on-premises or in the cloud via ESM Cloud. By leveraging the power of cloud computing, the new McAfee ESM Cloud helps customers accelerate time to value for security operations centers by removing operational barriers: automated deployment, 24/7 system health monitoring, and regular software updates and patches allow teams to focus their efforts on security tasks.
Here are some quotes from customers that contributed to Gartner Peer Insights’ recognition of ESM:
“Provides the features you need, in a simple easy to use, easy to understand display”
“Integration and deployment was very easy, we integrated the McAfee Enterprise Security Manager (ESM), McAfee Event Receiver (ERC), and McAfee Enterprise Log Manager (ELM) in our lab in just a little under 4 hours… In under 4 hours we were collecting from a variety of MS Windows systems and a variety of Linux systems (RHEL, Ubuntu, and CENTOS). Other SIEM systems that we were evaluating took days to get running and then we still spent time on having to tune them.”
“A complete realistic security solution equipped with all major tools to secure structures.”
“This security manager is the best choice out there… McAfee Manager is best in the ways that we can view and analyze all the major activities being performed in the company’s system and securities and how we can improve the overall security related concerns. It has all these pre-equipped features which facilitates the overall requirements for enterprises.”
To all our customers who submitted reviews, thank you! These reviews mold our products and our customer journey, and we look forward to building on the experience that earned us this distinction!
Learn more about our award-winning SIEM solution by visiting the ESM solutions page.
Read the SIEM reviews written by IT professionals that earned us this distinction by visiting Gartner Peer Insights’ SIEM page.
Gartner Peer Insights ‘Voice of the Customer’: Security Information and Event Management, 3 July 2020. Gartner Peer Insights Customers’ Choice constitute the subjective opinions of individual end-user reviews, ratings, and data applied against a documented methodology; they neither represent the views of, nor constitute an endorsement by, Gartner or its affiliates.
Operational technology (OT) asset owners have historically considered
red teaming of OT and industrial control system (ICS) networks to be
too risky due to the potential for disruptions or adverse impact to
production systems. While this mindset has remained largely unchanged
for years, Mandiant's experience in the field suggests that these
perspectives are changing; we are increasingly delivering value to
customers by safely red teaming their OT production networks.
This increasing willingness to red team OT is likely driven by a
couple of factors, including the growing number and visibility of
threats to OT systems, the increasing adoption of IT hardware and
software into OT networks, and the maturing of OT security teams. In
this context, we deemed it relevant to share some details on
Mandiant's approach to red teaming in OT based on years of experience
supporting customers learning about tangible threats in their
production environments.
In this post we introduce Mandiant's approach to OT red teaming and
walk through a case study. During that engagement, it took Mandiant
only six hours to gain administrative control on the target's OLE for
Process Control (OPC) servers and clients in the target's Distributed
Control System (DCS) environment. We then used this access to collect
information and develop an attack scenario simulating the path a
threat actor could take to prepare for and attack the physical process
(We highlight that the red team did not rely on weaknesses of the DCS,
but instead on weak password implementations in the target environment.)
NOTE: Red teaming in OT production systems requires
planning, preparation and "across the aisle"
collaboration. The red team must have deep knowledge of industrial
process control and the equipment, software, and systems used to
achieve it. The red team and the asset owner must establish
acceptable thresholds before performing any activities.
Mandiant's approach to red teaming OT production systems consists of
two phases: active testing on IT and/or OT intermediary systems, and
custom attack modeling to develop one or more realistic attack
scenarios. Our approach is designed to mirror the OT-targeted attack
lifecycle—with active testing during initial stages (Initial
Compromise, Establish Foothold, Escalate Privileges, and Internal
Reconnaissance), and a combination of active/passive data collection
and custom threat modeling to design feasible paths an attacker would
follow to complete the mission.
Figure 1: Mandiant OT red teaming approach
Mandiant's OT red teaming may begin either from the
perspective of an external attacker leveraging IT compromises to
pivot into the OT network, or from the perspective of an actor who
has already gained access into the OT network and is ready to
escalate the intrusion.
We then leverage a range of
commodity tools and utilities that are widely available in most
target environments to pivot across OT intermediary systems and gain
privileged access to target ICS.
Throughout this process,
we maintain constant communication with the customer to establish
safety thresholds. Active participation from the defenders will also
enable the organization to learn about the techniques we use to
extract information and the weaknesses we exploit to move across the
target network.
Once the active testing stops at the agreed
safety threshold, we compile this information and perform additional
research on the system and processes to develop realistic and
target-specific attack scenarios based on our expertise of threat
actor behaviors.
Mandiant's OT red teaming can be scoped in different ways depending
on the target environment, the organization's goals, and the asset
owner's cyber security program maturity. For example, some
organizations may test the full network architecture, while others
prefer to sample only an attack on a single system or process. This
type of sampling is useful for organizations that own a large number
of processes and are unlikely to test them one by one, but can
instead learn from a single use case that reflects target-specific
weaknesses and vulnerabilities. Depending on the scope, the red
teaming results can be tailored to:
- Model attack scenarios based on target-specific vulnerabilities and determine the scope and consequences if a threat actor were to exploit them in their environment.
- Model attack paths across the early stages of reconnaissance and lateral movement to identify low-hanging fruit that adversaries may exploit to enable further compromise of OT.
- Operationalize threat intelligence to model scenarios based on tactics, techniques, and procedures (TTPs) from known actors, such as advanced persistent threats (APTs).
- Test specific processes or systems deemed at high risk of causing a disruption to safety or operations. This analysis highlights gaps or weaknesses to determine methods needed to secure high-risk system(s).
Red Teaming in OT Provides Unique Value to Defenders
Red teaming in OT can be uniquely helpful for defenders, as it
generates value in a way very specific to an organization's needs,
while decreasing the gap between the "no holds barred" world
of real attackers and the "safety first" responsibility of
the red team. While it is common for traditional red teaming
engagements to end shortly after the attacker pivots into a production
OT segment, a hybrid approach, such as the one we use, makes it
possible for defenders to gain visibility into the specific strengths
and weaknesses of their OT networks and security implementations. Here
are some other benefits of red teaming in OT production networks:
- It helps defenders understand and foresee possible paths that sophisticated actors may follow to reach specific goals. While cyber threat intelligence is another great way to build this knowledge, red teaming allows for additional acquisition of site-specific data.
- It responds to the needs of defenders to account for varying technologies and architectures present in OT networks across different industries and processes. As a result, it accounts for outliers that are often not covered by general security best practices guidance.
- It results in tangible and realistic outputs based on our active testing, showing what can really happen in the target network. Mandiant's OT red teaming results often show that common security testing tools are sufficient for actors to reach critical process networks.
- It results in conceptual attack scenarios based on real attacker behaviors and specific knowledge about the target. While the scenarios may sometimes highlight weaknesses or vulnerabilities that cannot be patched, they provide defenders with the knowledge needed to define alternative mitigations that reduce risk earlier in the lifecycle.
- It can help to identify real weaknesses that could be exploited by an actor at different stages of the attack lifecycle. With this knowledge, defenders can define ways to stop threat activity before it reaches critical production systems, or at least during early phases of the intrusion.
Applying Our Approach in the Real World: Big Steam Works
During this engagement, we were tasked with gaining access to
critical control systems and designing a destructive attack in an
environment where industrial steaming boilers are operated with a
Distributed Control System (DCS). In this description, we redacted
customer information (including the name, which we refer to as
"Big Steam Works") and altered sensitive details. However,
the overall attack techniques remain unchanged. The main objective of
Big Steam Works is to deliver steam to a nearby chemical production company.
For the scope of this red team, the customer wanted to focus
entirely on its OT production network. We did not perform any tests in
IT networks and instead began the engagement with initial access
granted in the form of a static IP address in Big Steam Works' OT
network. The goal of the engagement was to deliver consequence-driven
analysis exploring a scenario that could cause a significant physical
impact to both safety and operations. Following our red teaming
approach, the engagement was divided in two phases: active testing
across IT and/or OT intermediary systems, and custom attack modeling
to foresee paths an attacker may follow to complete its mission.
We note that during the active testing phase we were very careful to
maintain high safety standards. This required not only highly skilled
personnel with knowledge about both IT and OT, but also constant
engagement with the customer. Members from Big Steam Works helped us
to set safety thresholds to stop and evaluate results before moving
forward, and actively monitored the test to observe, learn, and remain
vigilant for any unintended changes in the process.
Phase 1 – Active Testing
During this phase, we leveraged publicly accessible offensive
security tools (including Wireshark, Responder, Hashcat, and
CrackMapExec) to collect information, escalate privileges, and move
across the OT network. In close to six hours, we achieved
administrative control on several Big Steam Works' OLE for Process
Control (OPC) servers and clients in their DCS environment. We
highlight that the test did not rely on weaknesses of the DCS, but
instead weak password implementations in the target environment.
Figure 2 details our attack path:
Figure 2: Active testing in Big Steam
Works' OT network
We collected network traffic using Wireshark to map network
communications and identify protocols we could use for credential
harvesting, lateral movement, and privilege escalation. Passive
analysis of the capture showed Dynamic Host Configuration Protocol
(DHCP) broadcasts for IPv6 addresses, Link-Local Multicast Name
Resolution (LLMNR) protocol traffic, and NetBios Name Service
(NBT-NS) traffic.
We responded to broadcast LLMNR, NBT-NS,
and WPAD name resolution requests from devices using a publicly
available tool called Responder. As we supplied our IP address in
response to broadcasted name resolution requests from other clients
on the subnet, we performed man-in-the-middle (MiTM) attacks and
obtained NTLMv1/2 authentication protocol password hashes from
devices on the network.
We then used Hashcat to crack the
hashed credentials and use them for further lateral movement and
compromise. The credentials we obtained included, but were not
limited to, service accounts with local administrator rights on OPC
servers and clients. We note that Hashcat cracked the captured
credentials in only six seconds due to the lack of password strength
and complexity.
With the credentials captured in the first
three steps, we accessed other hosts on the network using
CrackMapExec. We dumped additional cached usernames, passwords, and
password hashes belonging to both local and domain accounts from
these hosts.
This resulted in privileged access and control
over the DCS's OPC clients and servers in the network. While we did
not continue to execute any further attack, the level of access
gained at this point enabled us to perform further reconnaissance
and data collection to design and conceptualize the last steps of a
targeted attack on the industrial steaming boilers.
The TTPs we used during the active testing phase resemble some of
the simplest resources that can be used by threat actors during real
OT intrusions. The case results are concerning given that they
illustrate only a few of the most common weaknesses we often observe
across Mandiant OT red team engagements. We highlight that all the
tools used for this intrusion are known and publicly available. An
attacker with access to Big Steam Works could have used these methods
as they represent low-hanging fruit and can often be prevented with
simple security mitigations.
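The six-second crack time reported above is unsurprising once you run the numbers. The back-of-the-envelope sketch below assumes a guessing rate of 10^10 hashes per second, which is in the range of a single commodity GPU against a fast, unsalted hash; the exact rate (my assumption, not a figure from the engagement) varies widely by hash algorithm and hardware.

```python
def crack_time_seconds(charset_size, length, guesses_per_second=1e10):
    """Worst-case time to exhaust a password keyspace at a given guess rate."""
    return charset_size ** length / guesses_per_second

# An 8-character lowercase password vs. a 14-character mixed-charset one.
weak = crack_time_seconds(26, 8)     # 26^8 ~ 2.1e11 guesses -> ~21 seconds
strong = crack_time_seconds(94, 14)  # 94^14 guesses -> billions of years
```

Length and character diversity dominate: each extra character multiplies the keyspace by the charset size, which is why password strength and complexity policies matter more than any single tool used by the attacker.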
Phase 2 – Custom Attack Modeling
For roughly a week, Mandiant gathered additional information from
client documentation and research on industrial steaming boilers. We
then mirrored the process an attacker would follow to design a
destructive attack on the target process given the results achieved
during phase 1. At this point of the intrusion, the attacker would
have already obtained complete control over Big Steam Works' OPC
clients and servers, gaining visibility and access to the DCS environment.
Before defining the path to follow, the attacker would likely have
to perform further reconnaissance (e.g., compromising additional
systems, data, and credentials within the Big Steam Works DCS
environment). Specifically, the attacker could:
- Gain access to the DCS configuration software/engineering workstation
- Obtain configuration/control logic files
- Determine the type/function of the different DCS nodes in the environment
- Use native DCS tools for system overview, graphics display, and point drill down
- Identify alarms/alerts monitored by operators via remote HMI screens and map them to defined points
- Map the flow of the physical process based on data collection and review
Our next step was to develop the custom scenario. For this example,
we were tasked with modeling a case where the attacker was attempting
to create a condition that had a high likelihood of causing physical
damage and disruption of operations (see Figure 3). In this scenario,
the attacker attempted to achieve this by lowering the water level in
a boiler drum below the safe threshold while not tripping the burner
management system or other safety mechanisms. If successful, this
would result in rapid and extreme overheating in the boiler. Opening
the feedwater valve under such conditions could result in a
catastrophic explosion.
Figure 3: Custom attack model diagram for
Big Steam Works
Figure 3 describes how a real attacker might pursue their mission
after gaining access to the OPC servers and clients. As the actor
moves closer to their goals, it becomes more difficult to assess both
the probability of success and the actual impact of their actions due
to nuances specific to the client environment and additional safety
and security controls built into the process. However, the analysis
holds significant value as it illustrates the overall structure of the
physical process and potential attacker behaviors aimed at achieving
specific end goals. Furthermore, it proceeds directly from the results
obtained during the first phase of the red teaming.
The model presents one feasible combination of actions that an
attacker could perform to access devices governing the boiler drum and
modify the water level while remaining undetected. With the level of
access obtained from phase 1, the attacker would likely be able to
compromise engineering workstations (EWS) for the boiler drum's
controller using similar tools. This would likely enable the actor to
perform actions such as changing the drum level setpoints, modifying
the flow of steam scaling, or modifying water flow scaling. While the
model does not reflect all additional safety and security measures
that may be present deeper in the process, it does account for the
attacker's need to modify alarms and control sensor outputs to remain undetected.
By connecting the outcomes produced in the test to the potential
physical impacts and motivations involved in a real attack, this model
provided Big Steam Works with a realistic overview of cyber security
threats to a specific physical process. Further collaboration with the
customer enabled us to validate the findings and support the
organization to mitigate the risks reflected in the model.
Outlook
Mandiant's OT red teaming supports organizations by combining both
the hands-on analysis of vulnerabilities and weaknesses in IT and OT
networks with the conceptual modeling of attacker goals and possible
avenues to reach specific outcomes. It also enables security
practitioners to adopt the attacker's perspective and explore attack
vectors that might otherwise never have been considered, despite their
value as low-hanging fruit for OT intrusions.
Our approach presents realistic scenarios based upon technical
evidence of intrusion activity upon OT intermediary systems in the
tested network. In this way, it is tailored to support
consequence-driven analysis of threats to specific critical systems
and processes. This enables organizations to identify attack scenarios
involving digital assets and determine safeguards that can best help
to protect the process and ensure the safety of their facilities.
Guest post by Adrian Taylor, Regional VP of Sales for A10 Networks
The Emotet trojan recently turned from a major cybersecurity threat to a laughingstock when its payloads were replaced by harmless animated GIFs. Taking advantage of a weakness in the way Emotet malware components were stored, white-hat hackers donned their vigilante masks and sabotaged the operations of the recently revived cyberthreat. While highly effective as well as somewhat humorous, the incident should not distract attention from two unavoidable truths.
First, while the prank deactivated about a quarter of all Emotet malware payload downloads, the botnet remains a very real, ongoing threat and a prime vector for attacks such as ransomware. And second, relying on one-off operations by whimsical vigilantes is hardly a sustainable security strategy. To keep the remaining active Emotet botnets—and countless other cyber threats—out of their environment, organisations need to rely on more robust and reliable measures based on SSL interception (SSL inspection) and SSL decryption.
History of Emotet and the threat it presents
First identified in 2014, version one of Emotet was designed to steal bank account details by intercepting internet traffic. A short time later, a new version of the software was detected. This version, dubbed Emotet version two, came packaged with several modules, including a money transfer system, a malspam module, and a banking module that targeted German and Austrian banks. Last year, we saw reports of a botnet-driven spam campaign targeting German, Polish, Italian, and English victims with craftily worded subject lines like “Payment Remittance Advice” and “Overdue Invoice.” Opening the infected Microsoft Word document initiates a macro, which in turn downloads Emotet from compromised WordPress sites.
After a relatively quiet start to 2020, the Emotet trojan resurfaced suddenly with a surge of activity in mid-July. This time around, the botnet’s reign of terror took an unexpected turn when the payloads its operators had stored on poorly secured WordPress sites were replaced with a series of popular GIFs. Instead of being alerted to a successful cyberattack, the respective targets received nothing more alarming than an image of Blink 182, James Franco, or Hackerman.
Whilst this is all in good fun, the question remains: what if the white hats had left their masks in the drawer instead of taking on the Emotet trojan? And what about the countless other malware attacks that continue unimpeded, delivering their payloads as intended?
A view into the encryption blind spot with SSL interception (SSL inspection)
Malware attacks such as Emotet often take advantage of a fundamental flaw in internet security. To protect data, most companies routinely rely on SSL encryption or TLS encryption. This practice is highly effective for preventing spoofing, man-in-the-middle attacks, and other common exploits from compromising data security and privacy. Unfortunately, it also creates an ideal hiding place for hackers. To security devices inspecting inbound communications for threats, encrypted traffic appears as gibberish—including malware. In fact, more than half of the malware attacks seen today are using some form of encryption. As a result, the SSL encryption blind spot ends up being a major hole in the organisation’s defence strategy.
The most obvious way to address this problem would be to decrypt traffic as it arrives to enable SSL inspection before passing it along to its destination within the organisation—an approach known as SSL interception. But here too, problems arise. For one thing, some types of data are not allowed to be decrypted, such as the records of medical patients governed by privacy standards like HIPAA, making across-the-board SSL decryption unsuitable. And for any kind of traffic, SSL decryption can greatly degrade the performance of security devices while increasing network latency, bottlenecks, cost, and complexity. Multiply these impacts by the number of components in the typical enterprise security stack—DLP, antivirus, firewall, IPS, and IDS—and the problem becomes clear.
How efficient SSL inspection saves the day
While many organisations rely on distributed, per-hop SSL decryption, a single dedicated SSL inspection solution provides a better course of action: decrypting traffic once across all TCP ports and advanced protocols like SSH, STARTTLS, XMPP, SMTP and POP3. Such a solution also provides network traffic visibility to all security devices, including inline, out-of-band and ICAP-enabled devices.
Whilst we should celebrate the work of the white hats who restrained Emotet, it is not every day that a lethal cyber threat becomes a matter of humour. But having had a good laugh at their expense, we should turn our attention to making sure that attacks like Emotet have no way to succeed in the future, without the need to count on vigilante justice. This is where SSL inspection can really save the day.
McAfee’s Senior Manager of Business Development, Tranel Hawkins and DB Cybertech’s Chief Data Scientist & Product Manager Ben Farber discuss the Security Innovation Alliance.
“How should we adapt our cybersecurity controls to address the new WFH reality?” This question is top-of-mind for CIOs and security executives. When it comes to cybersecurity in the post-COVID era, every CIO needs an answer to three key questions:
According to the Yoroi annual cyber security report (available HERE), to the Cyber Threat Trends report (available HERE), and to many additional resources, Microsoft Office files (Word documents and Excel spreadsheets) are among the most used malware loaders in the current era. Attackers lure victims into opening a specially crafted Office document, which loads (and sometimes even drops from external resources) malicious content and executes it on the compromised host. Today, I decided to write some personal notes on how to deal with them: a list of reverse engineering and malware analysis techniques that can help you analyze such droppers.
Many different file formats and methodologies, plus a lot of peculiar ways to hide malicious content, have been developed over the past years. I decided to group the techniques into paragraphs to smooth the reading, so you can jump directly to the section you are interested in without reading everything.
I hope you find this interesting and useful; if so, please share it so that many professionals and practitioners can use it, or improve it by sending me content to be added!
Rich Text Format (RTF) files are interesting documents since they can carry embedded objects.
Rich Text Format Data
Didier Stevens built a great tool named rtfdump.py (available HERE) which can be used to deal with RTF files. If you run it against an RTF file, you will see its composition and the objects that are included and used once the file is opened. The following picture shows an example of such a run on an RTF document (b98b7be0d7a4004a7e3f22e4061b35a56f825fdc3cba29248cf0500beca2523d). I usually suggest starting the investigation from the heaviest object, in other words the one holding the most bytes.
Check RTF Content
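The “heaviest object first” heuristic can be sketched in a few lines of Python. This is not rtfdump.py’s implementation, just a simplified stand-in that locates \objdata control words and ranks the hex payloads that follow them by size (the function name is mine):

```python
import re

def heaviest_rtf_objects(path):
    """Rank \\objdata payloads in an RTF file by the size of their hex blob,
    biggest first, so analysis can start from the heaviest object."""
    data = open(path, "rb").read().decode("latin-1")
    ranked = []
    for m in re.finditer(r"\\objdata", data):
        # embedded objects follow the control word as a run of hex digits
        blob = re.match(r"[0-9a-fA-F\s]*", data[m.end():]).group()
        ranked.append((len(re.sub(r"\s", "", blob)) // 2, m.start()))
    return sorted(ranked, reverse=True)  # (size_in_bytes, file_offset)
```

Real-world RTF droppers often mangle the control words to evade naive parsers, which is exactly why a dedicated tool like rtfdump.py is the better choice in practice.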
rtfdump.py offers a way to select a specific section (-s), and you can decide to show it or dump it to a file for additional analysis. Selecting section 2 and showing its content with the following command reveals an interesting string.
python rtfdump.py -s 2 -H mal1.doc
EquationEditor (CVE-2017-11882)?
EquationEditor is always a red flag in my personal experience. Indeed, CVE-2017-11882 is often abused by attackers in order to run specific shellcode. If you continue by checking section 2’s hex view, you will probably see encoding patterns: recurring characters and symbols. This is typical behavior of XOR/ROL/SHIFT encoding functions. Didier Stevens comes to the rescue with another interesting tool named XORSearch.
Dumping BinaryContent
Before dealing with XORSearch (available HERE) we need to dump the Equation Editor section into an external file. Once you have done such a dump, you should move to Windows (we will need it later on) and run xorsearch.exe against the dumped binary. “[..] XORSearch is a program to search for a given string in an XOR, ROL, ROT or SHIFT encoded binary file. An XOR encoded binary file is a file where some (or all) bytes have been XORed with a constant value (the key). A ROL (or ROR) encoded file has its bytes rotated by a certain number of bits (the key). A ROT encoded file has its alphabetic characters (A-Z and a-z) rotated by a certain number of positions. A SHIFT encoded file has its bytes shifted left by a certain number of bits (the key): all bits of the first byte shift left, the MSB of the second byte becomes the LSB of the first byte, all bits of the second byte shift left, … XOR and ROL/ROR encoding is used by malware programmers to obfuscate strings like URLs. [..]” (from Didier Stevens’ blog)
XORSearch finding XOR-encoded positions in the binary
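For the single-byte XOR case, the idea behind XORSearch can be illustrated with a short Python sketch (ROL/ROT/SHIFT handling omitted; the function name and sample strings are illustrative, not the tool’s code):

```python
def xor_search(data: bytes, plain: bytes):
    """Brute-force all 256 single-byte XOR keys and report (key, offset)
    pairs where decoding the buffer with that key reveals the plaintext."""
    hits = []
    for key in range(256):
        decoded = bytes(b ^ key for b in data)
        off = decoded.find(plain)
        if off != -1:
            hits.append((key, off))
    return hits
```

Searching for a marker like b"http" or b"This program" is usually enough to recover the key, after which the whole buffer can be decoded at once.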
Once run, xorsearch gives us the offsets where there is a higher probability of finding a change of control flow, in other words where you might start executing the shellcode without falling into unaligned instructions. From that point you can use another great and widely known piece of software, “The Shellcode Debugger”: scDbg (available HERE). Once you run it (the following picture shows the GUI), you need to make the emulator start from the offset found by xorsearch.exe; in my specific case it was at 0x2c74c. I suggest checking “Unlimited steps” so that the emulator follows the shellcode without stopping, and enabling Reporting Mode, so that you get a summary view at the end of the execution.
scDbg on offset 0x2c74c
Once run, here we go! We have our IoCs out of the shellcode.
Shellcode Execution
Sometimes the attacker uses a different syscall: ExpandEnvironmentStringsW, which is not a function hooked by scDbg. In that case you might need to open up the just-dumped file and patch the binary by replacing the string ExpandEnvironmentStringsW with the string ExpandEnvironmentStringsA. Once you have done that, reload the patched version of your shellcode into scDbg and re-run it; you will obtain better results.
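The patch itself takes only a couple of lines of Python, assuming the API name appears as a plain ASCII string in the dump (the helper name is mine; any hex editor works just as well):

```python
def patch_api_name(blob: bytes) -> bytes:
    """Swap the wide-string API name for its ANSI sibling so that scDbg's
    hooks catch the call. Both names have the same length, so no offsets
    inside the shellcode shift."""
    old = b"ExpandEnvironmentStringsW"
    new = b"ExpandEnvironmentStringsA"
    assert len(old) == len(new)  # patching must not change the file size
    return blob.replace(old, new)
```

Keeping the replacement the same length is the important part: shellcode is position-sensitive, and growing or shrinking the buffer would break every relative jump after the patch point.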
Sometimes you might encounter encrypted Office content. Running oleid, you would see Encrypted content set to True. Once you have an OLE file with encrypted VBA, you cannot access the macros, and you might not be able to reverse, study, or understand what they do. In such a case you need to figure out the encryption key and decrypt the content.
OLEID shows Encrypted Content
Fortunately, even if you encrypt your macros, the client that opens the document needs to know how to decrypt them in order to run the macro code.
This protection seems to be relatively solid at first sight, but a more detailed analysis reveals that it is not the entered password (or its hash) which is used to encrypt the document, but rather a fixed key stored in the MS Excel program code. This key is generated from the password ‘VelvetSweatshop’. What a nice joke by Microsoft! Try to protect an MS Excel document with this password (or to use this password to open a document): the most surprising thing is that no password is required to open such a document.
A great tool to check this issue is msoffcrypto-crack.py (available HERE).
Finding the “Secret” Microsoft Encrypting Password
Once you have found the “encryption key”, you can simply decrypt the file content (using the same msoffcrypto-crack.py), save it in “clear text”, and run oledump.py over it. At this point you should see normal object contents. In this specific case, one more Equation Editor object is used. Let’s dump it (oledump.py).
Now let’s check whether common control-flow patterns can be found with xorsearch.exe! In case of positives, continue the analysis using scDbg.exe (see the Rich Text Format (.RTF) section above).
XORSearch embedded binary
IoC:
3f3c2a4cb476c76b8bf84d6d2b0ee1a0a589709ccc69e84ffe6b2afd2dadbb39 (XLS download from HERE)
03u.ru (D&C2)
Office With VBA Macro
Maybe one of the most classic scenarios happens when you are facing a document with VBA macros in it. By running oledump.py you can check the various VBA contents (the M tag marks streams where macros live) and focus on the “fattest” one. In other words, I definitely suggest starting the investigation where the most content is (i.e., where the highest number of bytes is found; in the following picture, A11), since there is a high probability of finding interesting IoCs for blocking or detection purposes.
oledump shows VB Macros
In that case olevba (available HERE) comes to our help. It emulates the VBA engine and runs the macro script like a charm, without any big issue. The execution will end up like the following image.
VBA Emulator
The emulator engine keeps going until one of the known functions reaches its end.
Sometimes it happens that you open a malicious Microsoft Excel file but no macros are in there. This technique provides attackers a simple and reliable method to get a foothold on a target network, as it simply abuses a legitimate feature of Excel and does not rely on any vulnerability or exploit. It is just an old feature (an almost 30-year-old Microsoft Excel feature) that has been widely abused only in the past few years. One of the best write-ups regarding this type of evasion is given by Lastline (HERE).
Excel with no MACRO !
Once you run olevba, you can check whether it finds something interesting. In this run it reports that XLM (Excel 4.0) macros were used in the document. There are many ways to deobfuscate them and analyze their contents, from a simple “find” to more complex tool-sets. In this note I describe how I did it in the past months. Today there is a script that works quite well, made by DissectMalware: XLMDeobfuscator (you can find it HERE). We will cover this tool later in these notes.
To un-hide the obfuscated XLM macros, I have successfully used the following technique.
OLEVBA.py finds MACRO 4.0
Open the malicious file with macros disabled, open the macro editor, copy in the following reveal script, save it, and re-open the file with macros enabled (credit HERE).
Sub ShowAllSheets()
Dim sh As Worksheet
For Each sh In ActiveWorkbook.Sheets
sh.Visible = True
Next
End Sub
If you can’t open the malicious file because the macro gets executed and you have no control over the execution (due to evasion), you can open another sheet, open the VBA editor, and “import” the malicious document directly with VBA in the following way:
Public Sub Convert_XML_To_Excel_From_Local_Path()
Dim xml_File_Path As String
Dim wb As Workbook
'Load XML Data into a New Workbook - Code from Officetricks.com
Application.DisplayAlerts = False
xml_File_Path = "c:/FileToOpen.xlm"
Set wb = Workbooks.OpenXML(Filename:=xml_File_Path)
'Copy Content from New workbook to current active Worksheet
wb.Sheets(1).UsedRange.Copy ThisWorkbook.Sheets("Sheet2").Range("A1")
'Close New Workbook & Enable Alerts
wb.Close False
Application.DisplayAlerts = True
End Sub
Now you should see the hidden sheet or the hidden cells. One more tip here: in order to quickly find the cells with content in them, you can search for =. The following images show what I mean.
Now, by checking the top-left box (BG35344 in the following image), you can see where the starting point is. In this file Auto_Open is the first function that is called, and you find its reference there. You will then see two main functions being used: FORMULA and GOTO.
Hidden Excel 4.0 Macro Revealed
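Assuming you have extracted the sheet’s cells into a simple Python dictionary, the manual “search for =” step can be sketched like this (the data structure and function name are hypothetical, for illustration only):

```python
def formula_cells(cells: dict) -> list:
    """Given {cell_ref: value} extracted from a sheet, return the refs whose
    value is an XLM formula (i.e., starts with '='), sorted so that entry
    points like the Auto_Open reference are easy to spot."""
    return sorted(ref for ref, val in cells.items()
                  if isinstance(val, str) and val.startswith("="))
```

On a real hidden Macro 4.0 sheet the formulas are scattered among thousands of empty cells, which is why filtering on the leading = narrows things down so quickly.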
At this point you might decide to deobfuscate the XLM by executing the Macro 4.0 code in a controlled way. In other words, you can replace the last GOTO with HALT so that control flow is never handed to the deobfuscated macro; execution stops instead, and you can read the deobfuscated code on the sheet.
Sometimes you might find .csv files. They get imported into Microsoft Excel and become “true” OLE files. Indeed, running OleDump against a well-crafted CSV, you might discover interesting things, such as the fact that the CSV file holds VBA code or embedded objects. For example, if you consider sha-256 d5db2034631e56d58dffd797d25d286469f56690a1b00d4e6a0a80c31dbf119e, you will find the following content in there (even though, if you open it with a common editor, it is normal text divided by commas). Running OleDump shows a bunch of interesting sections.
OleDump Result
Now you can decide whether you prefer to dump the code and analyze it manually, or to run a code emulator. By running OleVBA against that CSV you can figure out many interesting indicators (check the following image). For example, the tool points out that AutoExec is called once you open the document with Microsoft Excel. Many suspicious calls would be performed, for example exec and run, and some hex and base64 string obfuscation techniques are used. On this run it was even able to decode such strings and recognize IoCs such as URLs and file names.
oleVBA Analysis
If the code emulator won’t work, you can dump the entire code using OleDump. Once you have dumped the code, you can analyze it through a debugger or just read it if it’s not obfuscated.
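A simplified version of the hex/base64 string decoding that olevba performs can be sketched in Python. This is not the tool’s actual logic, just an illustration of the idea (function name and regex are mine):

```python
import base64, binascii, re

def decode_candidates(strings):
    """Try hex then base64 decoding on each candidate string and collect any
    URLs found in the decoded bytes, mimicking the kind of obfuscated IoCs
    that OleVBA reports."""
    url_re = re.compile(rb"https?://[\w./-]+")
    iocs = set()
    for s in strings:
        for decoder in (binascii.unhexlify, base64.b64decode):
            try:
                decoded = decoder(s.encode())
            except (binascii.Error, ValueError):
                continue  # not valid for this encoding, try the next one
            iocs.update(m.decode() for m in url_re.findall(decoded))
    return sorted(iocs)
```

Real macros often layer the encodings (base64 inside hex, string reversal, Chr() concatenation), so a production deobfuscator applies these decoders recursively rather than in one pass.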
XLMDeobfuscator (grab it HERE) is definitely a great tool developed by @DissectMalware. It can be used to decode obfuscated XLM macros (also known as Excel 4.0 macros). It utilizes an internal XLM emulator to interpret the macros without fully executing the code. It supports xls, xlsm, and xlsb formats.
XLMDeobfuscator
Before such a great tool existed, the mythical OleDump plugin plugin_biff was able to look at every Microsoft Excel cell and find functions and formulas. By using the -x plugin option you can show the hidden XLM macros, while with the -f plugin option the plugin tries to figure out external links by interpreting encodings (such as hex and base64) and printing out the strings.
“Features are a nice to have, but at the end of the day, all we care about when it comes to our web and cloud security is architecture.” – said no customer ever.
The fact is that nobody likes to talk about architecture when shopping for the latest and greatest cyber security technology, and most organizations have been content to continue fitting new security tools and capabilities into their existing traditional architectures. However, digital transformation projects including cloud migration and ubiquitous mobile access have revealed architectural cracks, and many companies have seen the dam burst with the explosion in remote access demand in recent months. As a result, organizations are coming around to the realization that digital transformation demands a corresponding network and security architectural transformation.
The Secure Access Service Edge (SASE) framework provides organizations with a model to achieve this transformation, by bringing network and security technology together into a single, cloud-delivered service that ensures fast, secure, reliable, and cost-effective access to web and cloud resources. In this blog we are going to focus on remote offices and how the combination of SD-WAN and Next-Generation Secure Web Gateway capabilities offered by MVISION UCE can enable SASE and deliver on the promise of digital transformation.
The Cloud and the Architectural Dilemma
In the past organizations were largely concentrated in a limited number of locations. Applications and data were hosted on servers at a central data center location on the local area network – typically at or near the headquarters. Users typically worked in the office, so they would also be located at the office and access corporate resources on the same network. Surrounding this network was a perimeter of security controls that could inspect all traffic going in or out of the organization, keeping trusted resources safe while keeping the bad guys out. Remote users and branch offices were logically connected to this central network via technologies like VPN, MPLS, and leased lines, so the secure network perimeter could be maintained.
While this approach sufficed for years, digital transformation has created major challenges. Applications and data storage have migrated to the cloud, so they no longer reside on the corporate network. Logic would dictate that the optimal approach would be for remote users and offices to have direct access to cloud resources without having to route back through the corporate network. But this would result in the organization’s IT security perimeter being completely circumvented, meaning lost security visibility and control, leading to unacceptable security and compliance risks.
So network and security architects everywhere are facing the same dilemma: What is the best way to enable digital transformation without any major compromises? Organizations have generally followed one of the four following architectural approaches based on their willingness to embrace new technologies and bring them together:
We’re going to discuss these four options here, and evaluate them based on four factors: security, speed, latency, and cost. The results will show that there’s only one way to achieve fast, secure, and cost-effective access to web and cloud resources.
Approach 1: STATUS QUO
Due to the risk of losing security visibility and control, many organizations have refused to allow “direct-to-cloud” re-architecting. So even when high-speed internet links could connect users directly to cloud and web resources, this approach necessitates that all traffic still be pushed through slower MPLS links back to the corporate network, and then go back out through a single aggregated internet pipe to access web and cloud resources. While this theoretically maintains security visibility and control, it comes at great cost.
For starters, the user experience is greatly hampered by poor performance. Bandwidth suffers from the slow MPLS link back to the corporate office, as well as through the congested company internet connection. In addition, the extra network hops and increased network contention leads to high latency – this has been drastically amplified in recent months as the amount of remote traffic backhauling through the corporate network has exploded well beyond original design expectations. These factors don’t even take into account the potential impact of service disruptions brought about by introducing a single point of failure into the network architecture.
In addition to poor performance, there is a tangibly higher financial cost associated with this approach. Multiple MPLS lines connecting branch offices to the corporate data center are considerably more expensive than public internet connectivity. Additionally, in order to accommodate the routing of ALL user traffic, organizations need to dramatically increase investment in their central network and security perimeter infrastructure capacity, as well as the bandwidth of the shared internet pipe.
So we’re left needing to find a long-term answer to the challenges of speed, latency, and cost. These considerations are what have led many network architects to proceed to deploy SD-WAN.
Approach 2a: GOING DIRECT-TO-CLOUD WITH SD-WAN
The first step in delivering a cloud-ready architecture is removing the bottleneck incurred by forcing all traffic to be routed through slow MPLS lines to the central network and then back out to the cloud. SD-WAN technology can help in this regard. By deploying SD-WAN equipment at the edge of the branch network, optimized traffic policies can be created that route traffic directly to web and cloud resources using fast, affordable internet connections, while using the same internet connection to send only data center-bound traffic directly back to the corporate network over a dynamic set of VPN tunnels. WAN optimization and QoS, as well as various other edge network and security functions like firewall filtering that are better suited to being performed at the network edge, deliver the fastest and most reliable user experience, while minimizing the traffic burden on the central network.
By employing SD-WAN, network architects can achieve substantial cost savings by eliminating expensive MPLS links back to the corporate data center. Additionally, users aren’t constrained by the much slower bandwidth of those MPLS lines.
However, there are major drawbacks to this model. While SD-WAN solutions feature a number of strong flow control capabilities that can be distributed to each remote site – including firewalling, DNS protection, and data obfuscation – they don’t have the same robust data and threat protection capabilities that organizations have built into their network perimeter security. Therefore, architects still need to backhaul all traffic over the internet back to the data center, even if that traffic is ultimately destined to go right back out to the internet! So while the speed and cost-effectiveness of this connection is greatly improved in comparison to the old model, the need to continue backhauling traffic presents the same latency and congestion challenges.
Approach 2b: MCAFEE MVISION UNIFIED CLOUD EDGE
So if traffic paths need to run back to the corporate data center for organizations to maintain security visibility and control, but the majority of resources users are accessing are in the cloud, wouldn’t it make sense to situate the security controls in the cloud, along a more direct and secure traffic path? Enter McAfee MVISION Unified Cloud Edge.
MVISION UCE’s Next-Gen Secure Web Gateway provides a cloud-native, lightning-fast, 99.999% reliable, hyper-scale secure edge. By converging SWG, CASB, DLP, and Remote Browser Isolation technologies, MVISION UCE ensures that remote users and offices enjoy the most sophisticated levels of threat, data, and cloud application protection, as well as unique proactive risk management capabilities that even exceed what is possible in a traditional on-premises security framework.
Just as important as the advanced security capabilities is the fact that MVISION UCE is built on a fast, reliable, scalable foundation. Thanks to a global Point of Presence (POP) network and unique peering relationships, MVISION UCE can extend a hyper-scale secure edge wherever users need it. Despite a 240% surge in traffic during the spring of 2020, McAfee was able to maintain 99.999% availability and met all of the latency requirements stipulated in our SLAs. Organizations could count on our infrastructure in the toughest of times, and can continue to do so going forward.
By subscribing to an affordable public internet connection at the branch site and connecting to MVISION UCE, customers can achieve many of the desired benefits. MVISION UCE’s comprehensive data, threat, and cloud application protection capabilities more than satisfy security requirements. And for the majority of user traffic that is destined for the web or cloud, the direct internet connection ensures fast, low-latency access.
However, without deploying SD-WAN in conjunction with UCE, organizations still need to have those slow, expensive MPLS links to maintain connectivity to their legacy data center applications and resources. Therefore, customers won’t be able to realize cost savings, and those connections to data center resources will suffer the same speed and latency challenges. And that is where we finally arrive at the ideal cloud security architecture, bringing MVISION UCE together with SD-WAN.
Approach 3: MVISION UCE + SD-WAN = SASE
By bringing together MVISION UCE with SD-WAN in a seamlessly integrated solution, organizations can deliver SASE and build a network security architecture fit for the cloud era. McAfee makes it possible for customers to easily converge MVISION UCE with virtually any SD-WAN solution via robust native support for SD-WAN connectivity, leveraging industry standard Dynamic IPSec and GRE protocols. Through this integration, customers benefit from the complete range of essential SASE capabilities, with SD-WAN providing the integrated networking functionality and MVISION UCE delivering the security capabilities. McAfee has supported our channel partners in successfully delivering joint SD-WAN-cloud SWG projects with many of the major SD-WAN vendors in the market, and we have forged tight alliances with the industry leaders through our Security Innovation Alliance (SIA).
So how does a combined UCE-SD-WAN solution satisfy the four architectural requirements? Security is clearly addressed by UCE’s threat, data, and cloud application protection capabilities, as well as the distributed firewall capabilities delivered by SD-WAN. By using a single fast internet connection, SD-WAN is able to intelligently and efficiently route traffic directly to cloud resources or back to the corporate data center. With MVISION UCE providing security directly in the cloud, SD-WAN can forward web- and cloud-bound traffic directly, without any excessive latency. Cost savings come from removing the expensive MPLS lines, and since the majority of traffic no longer needs to backhaul through the corporate data center, additional savings can be achieved by reducing central network bandwidth and infrastructure capacity.
Build a Cloud-Ready Network Security Architecture Today
Digital Transformation represents the next great technological revolution, and organizations’ ability to move to the cloud and empower their distributed workforces with fast, secure, simple, and reliable access will likely determine how successful they are in the new era. SASE represents the best way to achieve a direct-to-cloud architecture that doesn’t compromise on security visibility & control, performance, complexity, or cost. By seamlessly integrating our MVISION UCE solution with SD-WAN, it’s never been easier for organizations to deliver SASE to remote offices. As a result, users will benefit from greater productivity, IT personnel will enjoy greater operational efficiency, and companies will enjoy exceptional cost savings as a result of consolidated infrastructure and optimized network traffic.
To learn more about how MVISION UCE and SD-WAN can work together, attend a webinar hosted by McAfee and one of our key SD-WAN technology partners, Silver Peak Systems. Click here to register.
Guest post By Tom Kellermann, Head of Cybersecurity Strategy, VMware Carbon Black
COVID-19 has reshaped the global cyberthreat landscape. While cyberattacks have been on the rise, the surge in frequency and increased threat sophistication is notable. The latest VMware Carbon Black Global Incident Threat Report, Extended Enterprise Under Threat – Global Threat Report series, found cybercriminals have seized the opportunity, taking advantage of the global disruption to conduct nefarious activity.
COVID-19 has Exacerbated pre-existing Cyber Threats
The latest VMware Carbon Black global survey of Incident Response (IR) professionals found that COVID-19 has exacerbated pre-existing cyberthreats, from counter incident response and island hopping to destructive attacks. Remote work compounds this, bringing additional cybersecurity challenges as employees access critical data and applications from their home networks or with personal devices outside of the corporate perimeter. Cybercriminals are also targeting the cloud, which organisations rely on to enable remote work. If you’re a cybercriminal, the pool of people you can trick is now exponentially larger, simply because we are in a global disaster.
As the threat landscape transforms and expands, the underlying methodologies behind the attacks have remained relatively consistent. Attackers have just nuanced their threat strategies. For example, last Christmas, the number one consumer purchase was smart devices, now they’re in homes that have fast become office spaces. Cybercriminals can use those family environments as a launchpad to compromise and conduct attacks on organizations. In other words, attackers are still island hopping – but instead of starting from an organisation’s network and moving along the supply chain, the attack may now originate in home infrastructures.
Next-Generation Cyberattacks Require Next-Generation IR
While more than half (53%) of the IR professionals reported encountering or observing an increase in cyberattacks exploiting COVID-19, this isn’t a one-sided battle and there is much security teams can do to fight back.
Next-generation cyberattacks – with adversaries increasingly working to maintain persistence on systems – call for next-generation IR, especially as corporate perimeters across the world break down. To this point, here are seven key steps that security teams can take to fight back:
Gain better visibility into your system’s endpoints: Doing so can empower security teams to be proactive in their IR – rather than merely responding to attacks once they come, they can hunt out prospective threats. This is increasingly important in today’s landscape, with more attackers seeking to linger for long periods on a network and more vulnerable endpoints online via remote access.
Establish digital distancing practices: People working from home should have two routers, segmenting traffic from work and home devices. They should have a room free of smart devices for holding potentially sensitive conversations. And they should restrict sensitive file sharing across insecure applications, like video conferencing tools.
Enable real-time updates, policies and configurations across the network: This may include updates to VPNs, audits or fixes to configurations across remote endpoints and other security updates – even when outside the corporate network. It’s important to keep in mind the security architecture when making these changes, otherwise, things get changed without having the proper controls in place to react.
Enhance collaboration between IT and security teams – and make IT teams more cybersecurity savvy: As noted, 92% of IR professionals agree that a culture of collaboration between IT and security teams will improve enterprise security and response to cyber risks. This is especially true under the added stress of the pandemic. Alignment should also help elevate IT personnel to become experts on their own systems, whether it’s training them to threat hunt on a Windows box or identify anomalous configurations on certain SaaS applications.
Expand Cyber-Threat Hunting: Threat hunting provides the ground truth and context that are essential for defence. Situational awareness depends on ground truth, which is based on the assumption of breach. One must proactively explore their environment for abnormal activity. The cadence of threat hunting must be increased, and the scope should extend to the information supply chain as well as senior executives’ laptops as they work from home.
Integrate Security Controls: Integration allows organisations to uniquely see across traditional boundaries/silos providing richer telemetry and allowing for defenders to react seamlessly.
Remember to communicate: Now more than ever, organizations must motivate IT and SECops to get on the same page and prioritize change management while maintaining clear lines of communication – about new risk factors (application attacks, OS exploitation, smart devices, file-sharing applications, etc.), protocols and security resources.
As we move into the next normal, the workforce will largely remain remote and distributed. Organisations will need to prioritise sharpening their security defences and gaining a clearer picture of the evolving threat landscape to inform today, tomorrow and the challenging months to come.
The controversial app’s users are ignoring geopolitical battle over its digital security, says Richard Waterworth
TikTok’s UK chief has strenuously denied that the video-sharing app, which Donald Trump has threatened to ban, shares data with China.
Richard Waterworth told the Observer that the UK and European arm of TikTok was growing quickly, despite the “turbulent” geopolitical battle in which the Chinese-born app has found itself.
Phishing has surged during the Covid-19 pandemic, and people, by and large, are falling victim to such fraudulent activities. Therefore, we have come up with 5 ways businesses can avoid being trapped by emails designed to damage their online identity. According to recent phishing statistics, almost 90 percent of companies have encountered spear phishing attacks. So, by now, you must have realized how crucial it is to learn how to avoid phishing scams online.
1. Make sure your devices are encrypted
It is imperative to use a secure device connection while you browse online. The best way to do this is to use a multi-device VPN service on all of your work gadgets. Using a virtual private network tunnel will shield your data from malware and other unwanted cyber threats.
The best part about using a VPN connection is that it adds robust, military-grade encryption to your internet connection, so you can rest assured that your data is safe from spammers. This serves as a barrier that keeps phishers away from your sensitive information. Whether you are emailing or sending critical data through a website, encrypt it via a VPN.
2. Protect your email & personal data
Be vigilant when you respond to emails that look like spam or that ask for personal details. Restrict your log-in credentials to trusted members only, and avoid using the same credentials for every account. Do not respond to emails from unknown senders.
Moreover, refrain from replying to people who ask you to confirm confidential information online. Act logically to avoid phishing scams; the best defense is to recognize fraudulent emails before you become a victim. Always verify the sender before responding.
Also, check whether the sender wrote a personalized email or whether it reads like a copied template, because phishers rarely customize their emails. Remember that trusted authorities never ask you to disclose any of your account details over email.
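As a purely illustrative sketch of the advice above (not a feature of any product mentioned here), a simple allowlist check on the sender’s domain can catch one common red flag: a lookalike domain. The domain list and function name below are hypothetical.

```python
# Hypothetical example: flag emails whose sender domain is not on a trusted
# allowlist -- one small piece of phishing triage, not a complete filter.
TRUSTED_DOMAINS = {"example.com", "payroll.example.com"}  # assumption: your known senders

def is_suspicious_sender(from_address: str) -> bool:
    """Return True when the sender's domain is not in the allowlist."""
    address = from_address.strip()
    # Normalize a 'Name <user@domain>' header to the bare address.
    if address.endswith(">") and "<" in address:
        address = address[address.rindex("<") + 1:-1]
    domain = address.rpartition("@")[2].lower()
    return domain not in TRUSTED_DOMAINS

print(is_suspicious_sender("HR <hr@example.com>"))  # False: known domain
print(is_suspicious_sender("hr@examp1e.com"))       # True: lookalike domain
```

A real filter would also check SPF/DKIM results and display-name spoofing, but even this small check illustrates why "verify the sender" is actionable advice.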
3. Beware of online pop-ups scams
Phishing through pop-up windows is on the rise, which makes avoiding this scam a necessity. Never enter your details on such pop-up screens, and do not click the links in a pop-up window, as they can expose you to serious vulnerabilities. Such pop-ups use enticing messages to drive clicks and misuse users’ confidential information.
4. Ensure you use only secured websites
Always follow the rule of thumb of using websites secured with HTTPS:// instead of plain HTTP://. Encrypted sites are less likely to cause harm to your online activities. With HTTPS, you get increased security and a stronger trust factor, and it reduces the risk of your traffic being intercepted by hackers, protecting your information from possible threats.
5. Audit your email campaigns
No matter how seamless your email process is, you have to keep a security check and balance in place. Audit your campaigns periodically to make sure you follow standard email procedures. Give proper training to your email marketing team and teach them how to recognize phishing emails. Doing all this will help keep your business safe from unwanted cyber-attacks.
Bottom Line
Phishing activity is rising considerably, and it has become crucial to look for ways to combat such email threats proactively. By practicing the above five techniques, you can ensure your business email campaigns are secure and free from spyware. Remember, a secure business must be your priority, and you cannot afford to put it at risk. Start taking precautionary measures to ensure your business is safe from malware.
Author Bio: Amtul Rafay is a Cybersecurity enthusiast who loves to write on topics pertaining to online privacy, internet security, and web privacy. She believes in the influential power of research-backed opinions to stay updated with the futuristic technology trends.
While a well-written cover letter can improve your chances of being invited for an interview, a poorly written one can lead to the rejection of your application by the employer. According to Forbes, 53% of employers prefer candidates who submit cover letters.
There are several ways to create a successful cover letter. If you know how to write it, you can do it yourself or ask a professional writer to prepare one. However, many job-seekers use online cover letter writing services to build their applications. One of the reasons is that they can make the job search and application process easier. Another point is that a cover letter builder can create a professional application document for any industry.
In this article, we will try to answer whether using an online builder is worth it. But first, we will try to figure out what an online cover letter service is, how it works, and the advantages and disadvantages.
What Is an Online Cover Letter Builder and How Does It Work
An online cover letter builder is a tool that allows job-seekers to create professional application documents. Usually, it doesn’t require any understanding of programming and design. Cover letter builders offer different layouts, samples, templates, and content for any industry, from business to sport and science.
You can use a cover letter builder via a laptop or mobile phone. You only need to choose your skills and qualities to make a personalized document, select a template, and add text if necessary. After automatic or manual formatting, you can download a PDF file, save your copy, send it, or print it. There are many different online services that you can check out to see how they work—for instance, this fast and easy www.getcoverletter.com professional generator with unique designs and content.
Why Do We Need Cover Letter Builders
Usually, a professional recruiter needs no more than 6 seconds to decide whether to invite a candidate for an interview. Using this time to your advantage is crucial. Thus, you should send a cover letter and a resume to introduce yourself to the recruiters and catch their attention. An impressive paper can win you an interview, and sometimes it can even win you a dream job.
When writing a cover letter, you should keep in mind that the perfect one should be concise and addressed to a specific person. It needs a clear structure and no spelling or grammatical errors.
For people poorly versed in writing, putting together an application can be challenging, time-consuming, and frustrating. A professional cover letter writing service can be a great helper. Moreover, it can be a real lifesaver for people with limited time.
Let’s go deeper into the advantages to understand all the benefits of using a cover letter writing service online.
1. Makes job search easier
Job hunting can be hard due to high competition. That’s why sometimes people apply for several positions simultaneously to improve their chances of being hired. In this case, they should write several different applications.
The top cover letter writing service advantage is that it helps to customize your document quickly. Usually, it has a library where your paper can be stored and edited according to the new position requirements. So, at least something about job search can be simple.
2. Structures your document
A winning cover letter is neatly organized. Usually, builders offer a specific structure that mainly includes:
date;
contact information;
salutation;
opening;
body;
closing paragraph;
conclusion.
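As a rough illustration of that structure (not how any particular builder works internally; the field names are hypothetical), a template might simply assemble the provided sections in that canonical order:

```python
# Hypothetical sketch of how a builder could assemble the sections above.
def build_cover_letter(fields: dict) -> str:
    """Join whichever sections were supplied, in the canonical order."""
    order = ["date", "contact_information", "salutation",
             "opening", "body", "closing_paragraph", "conclusion"]
    return "\n\n".join(fields[s] for s in order if s in fields)

letter = build_cover_letter({
    "date": "1 September 2020",
    "salutation": "Dear Ms. Smith,",
    "opening": "I am excited to apply for the analyst role.",
    "conclusion": "Sincerely, A. Candidate",
})
print(letter.splitlines()[0])  # "1 September 2020"
```

Real builders layer templates, design, and checking on top, but the ordered-sections idea is the backbone of the structure they generate.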
3. Writes for you
If you often feel writer’s block, online builders can be a useful resource. All the content for cover letters is written by professional career experts who know how to present essential qualities for specific positions. These texts can link your skills to particular problem-solving actions or tangible business results that you have achieved in your career. And all this within the limits of the optimal cover letter length.
Moreover, modern builders have grammar, spelling, and punctuation checkers to eliminate mistakes that negatively affect the potential employer’s impression of you.
Additionally, such builders offer unique texts. Thus, you are not risking handing in a document the recruiter has seen before.
4. Offers awesome design and format
Expert-recommended cover letter templates are one of the best cover letter writing service assets. You can be sure that each of them is crafted by professionals to suit your style, meet your job-seeking needs, and show some personality and flair in your job application.
An application document that is poorly formatted, or is difficult to read, can eliminate you from the list of candidates. With an online cover letter builder, you can choose proper formatting in a few clicks. Some builders can share useful tips and show you the best examples of formatted cover letters.
5. Provides customization options
It’s essential to target your cover letter to the job to which you’re applying. Cover letter builders offer a customization option for each paper. It means that you can add specific details and accomplishments to the text to show how you add value to the organization. You can also customize templates for each job.
6. Doesn’t take much time
With an online builder, you can write your cover letter in minutes, because most of them only need you to do several simple actions like answering a short questionnaire and choosing skills and design.
What About Cons
To form an objective opinion about something, we should assess not only the pros but also the cons.
Someone might say that using a builder to create a cover letter is a bad idea because software can’t write as a person does. However, in today’s world, online services are so advanced that they can put your thoughts into words. And keep in mind that most online cover letter services cooperate with professional career experts to create content.
Price can be another possible downside of using a cover letter builder. It won’t be very high, but still, you will need to pay. However, if you write your cover letter yourself, you will still pay—with your time instead of money. You will need to research a bunch of different materials, read useful articles, and practice a lot, and even this cannot guarantee that you will manage to create a perfect cover letter that meets all the standards.
You can also get a professional writer’s help, but one paper will cost you much more than a subscription to an online builder where you can create and edit several letters.
Conclusions
Building a cover letter using an online service may sound strange at first, but it has more positive aspects than other approaches.
First and foremost, you get a unique and customized cover letter in just a few minutes. When you are looking for a job in such a competitive labor market, every minute can be critical. Additionally, people who don’t feel confident using modern technologies will be surprised by how easy these services are. Moreover, they will be guided along the way.
Considering all of the above, our advice is to search for the perfect online cover letter builder to meet your needs.
A cross-party group of more than 20 MPs has accused the UK’s privacy watchdog of failing to hold the government to account for its failures in the NHS coronavirus test-and-trace programme.
The MPs have urged Elizabeth Denham, the information commissioner, to demand that the government change the programme after it admitted failing to conduct a legally required impact assessment of its privacy implications.
Women at McAfee are making powerful contributions to our sales efforts. Saleswomen from varying backgrounds share their unique perspectives in hopes of encouraging more women to join them in sales.
This week, women share the inspirations and support networks which are leading them down successful sales paths.
Find a sponsor: “First, find someone working in sales or presales who can serve as your sponsor. This is someone who can get you in touch with the right people. Then, show your value, communicate clearly, and own your brand. Make yourself visible and connect with people who can promote you inside of sales. Start with a professional LinkedIn account. Work hard, and most importantly, tell your manager and peers about your goals. If you don’t communicate your interest in advancing or moving to different roles, they won’t be aware and can’t help you get there.”
– Aleks, Inside Sales, Cork, Ireland
Find inspiration through family: “The biggest inspirations for me are my mom and my grandma. Back in the day, my grandma broke glass ceilings with double university degrees and built a career in Soviet Russia when it was uncommon. My mom passed me all her knowledge and her entrepreneur spirit. She has always been very strict and demanding, but she taught me how to be brave and independent. She continues to teach me to this day how to question my own thoughts and habits as well as encourage me to think outside-of-the-box.”
– Anastasia, Inside Sales, Cork, Ireland
Strive to grow and improve. “Early in my career, I worked for an amazing manager. She told me it was up to me to decide what I wanted to be. She also said there was no one right choice if I stayed as an individual contributor or grew to be a senior leader – because whatever I decided, I should be incredibly ambitious and always strive to make myself better. I encourage you to ask a successful woman in tech, ‘How did you get there and what skills are needed to succeed in your role?’ Ask questions, find a mentor and create a plan for how you can enter that space.”
– Brenda, Consumer Sales, Vancouver, Canada
Look to the women around you: “The women in my family inspire me the most. They are not afraid to take on big challenges that move them forward – whether as a stay-at-home mom, professional in the corporate world, or an entrepreneur. They’ve overcome a lot of challenges. My mother started her own business at the age of 62. She was a stay-at-home mom when my siblings and I were still at home. It is the small things that women do that inspire me all the time!”
– Carine, Presales/Sales Engineering, Plano, TX, United States
Partnership goes a long way: “Networking can create big opportunities. I built a relationship with one of the sales engineers while working in a support role. He came to me for help when he worked with customers, and we partnered together. I stayed in touch and reached out to him when deciding to pursue a career in sales. I asked for the chance to interview for a sales engineer position and ultimately got the job.”
– Elizabeth, Presales/Sales Engineering, Plano, TX, United States
Build support systems with networking: “Networking is a great way to find out how you fit in a certain role while also making a valuable connection with an organization. Reach out and get to know members of a community, association, or social networking group like LinkedIn. Don’t be afraid to seek their advice. Learn from them and, if possible, get referrals. You can always leverage their network(s) to get introductions to various employers. I firmly believe women want to help women succeed.”
Find inspiration all around: “I’m inspired reading about what other ladies like me accomplish daily. I see them juggling so many balls and still being successful, yet they are vulnerable. Many of them advocate publicly and privately for the advancement and equality of all women. Ruth Bader Ginsburg is a pioneer and Rosa Parks just decided to say no. They are just two of many, many others who have shaped our world. They are an inspiration for many, but we inspire ourselves more than we realize by the things we do every single day. This inspiration has helped me achieve success in the technological world of cybersecurity sales at McAfee.”
– Kristol, Inside Sales, Plano, TX
Thank you to these talented women for sharing a glimpse into what it takes to be successful in sales. With our combined perspectives and experiences, we can achieve our mission to protect all that matters.
Interested in joining a company that supports inclusion and belonging? Search our jobs. Subscribe to job alerts.
Recently, we conducted a survey of 600 families and professionals in the U.S. to better understand what matters to them—in terms of security and the lives they want to lead online. The following article reflects what they shared with us, and allows us to share it with you in turn, with the aim of helping your workday go a little more smoothly.1
How many windows are open on your computer right now? Check out your browser. How many tabs do you have? If it’s a typical workday, you’ve probably run out of fingers counting them up.
Professionals put their computers through their paces. Consider the number of back-to-back meetings, video conferences, and presentations you lead and attend in a day, not to mention the time that you pour into work itself. Your computer has to keep up. It’s certainly no surprise that this is exactly the notion that came up in our research, time and time again.
What’s on the minds of professionals when it comes to their security?
In speaking with professionals about security, their answers largely revolved around getting work done.
I need trusted apps and sites to work, always.
I need to maximize battery life while in transit or on a plane.
I need live presentations and demos to be seamless.
I need to multitask with multiple apps or multiple browser tabs open without locking up.
I need my computer to respond reliably and quickly without locking up.
While on the surface this may mean performance is top of mind, a closer look reveals that performance is often a function of security. A quick and easy example of this is the classic virus infection, where getting a virus on your computer can bring work to a screeching halt.
More broadly though, we see security as far more than just antivirus. We see it as protecting the person and helping them stay productive—giving them the tools to take care of the things that matter most to them. Thus, plenty of what we offer in a security suite focuses squarely on those concerns:
Battery optimization keeps you working longer without fretting over finding an outlet in the airport or simply working without wires for longer.
Password managers let you log into the apps and sites you count on without a second thought, also knowing that they’re securely stored and managed for protection.
Vulnerability scanners make sure that your apps always have the latest updates, which ensures you have all the upgraded features and security protocols that come along with those updates.
Inbox spam filters take yet another headache off your plate by removing junk mail before it can clutter up your inbox.
Secure VPN keeps data safe from prying eyes on public Wi-Fi in places like airports, hotels, and coffee shops, which gives you more independence to work in more places knowing that your information is secure.
Those are a few examples of specific features. Yet it’s also important that any security solution you use keeps your computer running quickly as well as smoothly. It should be lightweight and not hog resources, so that your computer runs and responds promptly. (That’s a major focus of ours, where independent labs show that our performance is five times better than the average competitor.)
Where can professionals get started?
Drop by our page that’s put together just for professionals. We’ve gathered up several resources that’ll help you stay productive and safer too. Check it out, and we hope that it’ll keep you going whether you’re working on the road, in the office, or at home.
Stay Updated
To stay updated on all things McAfee and for more resources on staying secure from home, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.
Survey conducted in October 2019, consisting of 600 computer-owning adults in the U.S.
Veracode recently sponsored Enterprise Strategy Group’s (ESG) survey of 378 developers and security professionals, which explored the dynamic between the roles, their trigger points, the extent to which security teams understand modern development, and the buying intentions of application security (AppSec) teams.
The first survey question for developers and security professionals was to rate the efficacy of their organization’s AppSec program on a scale of zero to 10, zero being “we continually have security issues,” and 10 being “we feel confident in the efficacy and efficiency of our program.” Two-thirds of the organizations surveyed rated their programs as an eight or higher. And, even more surprising, of that two-thirds, one-third rated their program as a nine or 10.
Veracode’s Chris Wysopal, Chief Technology Officer and Co-Founder, and Chris Eng, Chief Research Officer, addressed this finding during an exclusive Black Hat session with ESG, New Data Reveals How AppSec Is Adapting to New Development Realities. During the session, Chris Eng pointed out that organizations are more likely to rank themselves favorably in an online survey, like the ESG survey, than in a face-to-face interaction. Chris Wysopal mentioned that respondents may have been answering based on their own experiences with AppSec and may not know what a fully mature AppSec program should look like, therefore overinflating the response to their program’s effectiveness.
To further gauge the accuracy of the result, Eng and Wysopal reviewed the responses from the follow-up questions. The first follow-up question was, “What percentage of your organization’s overall application portfolio codebase is protected by application security tools?” The results unveiled that approximately 71 percent of organizations use AppSec tools on more than half their codebase. Since around 70 percent of organizations ranked their AppSec programs as effective, it makes sense that a similar number of respondents are actively testing the majority of their codebase.
But the next question confirmed Wysopal’s suspicions that the developers and security professionals may not be gauging their responses against fully mature AppSec programs. The next question asked, “Have any of your organization’s production applications been exploited by OWASP top-10 vulnerabilities in the past 12 months?” The responses showed that 81 percent of organizations are experiencing exploits. There are several factors that could be contributing to the continuation of exploits, and all of them point back to the fact that these organizations need to further mature their AppSec programs.
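To make the OWASP top-10 finding concrete, here is a small, self-contained sketch (our own example, not from the survey) of SQL injection, one of the categories the question covered, and the parameterized-query fix:

```python
# Illustrative only: SQL injection is one of the OWASP top-10 categories.
# A parameterized query closes this particular hole.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

malicious = "x' OR '1'='1"

# Vulnerable pattern: string formatting lets the input rewrite the query,
# so it matches every row instead of none.
leaked = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'").fetchall()
print(len(leaked))  # 2 -- the whole table leaks

# Safe pattern: the driver treats the input purely as data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()
print(len(safe))  # 0 -- no rows match the literal string
```

Flaws this simple still ship to production, which is exactly why testing the majority of a codebase is not the same as having a fully mature AppSec program.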
How can organizations make the case for AppSec budget?
From the ESG survey results, we’ve established that the respondents’ AppSec programs are likely making a positive impact on their organizations, but they still need to invest in maturing their programs. Showing the return on investment can help organizations gain additional AppSec budget from stakeholders. But many organizations don’t have the tools to quantify the results from their AppSec program.
With Veracode Analytics, organizations can see how their AppSec programs are performing through pre-built dashboards and visualizations. The dashboards can be shared with stakeholders to show metrics across all our offerings, displaying the value of different scan types, and how those scans impact security findings. With that data, teams can pinpoint where further investment is required to achieve business goals. And as a bonus, since Veracode is SaaS-based, our solution can benchmark the success of a program against similar organizations within the industry.
Developer security training is more critical than ever, but data shows us that the industry isn’t taking it quite as seriously as it should. A recent ESG survey report, Modern Application Development Security, highlights the glaring gaps in effective developer security training. In the report, we learned that only 20 percent of surveyed organizations offer security training to new developers who join their company, and 35 percent say that less than half of their developers even participate in formal training to begin with.
More troublesome, less than half of organizations surveyed for the report require developers to participate in formal training more than once a year. While robust application security (AppSec) tools and solutions help developers learn as they code to get ahead of flaws before deployment, the need to continually remediate only slows teams down and bottlenecks innovation. So how can you get ahead of it? Consistent, engaging training that sticks.
Paired with the right scanning and testing tools, training solutions that go beyond checking boxes and watching tutorials are an effective way to embed the knowledge needed to write more secure code. That means less time spent fixing flaws and more time flexing creative muscles to improve your organization’s digital footprint.
Training techniques that count
Recently, Forrester Research published its Now Tech: Static Application Security Testing, Q3 2020, an overview of Static Application Security Testing (SAST) providers and the various benefits companies can realize with SAST. The report also discusses how SAST can integrate with developer solutions to improve engagement and knowledge, and calls out the important role SAST plays in tandem with hands-on learning tools to reduce remediation time, enhance predictability, and teach developers about modern secure coding practices.
The Forrester report notes that firms that integrate SAST into their software development lifecycle (SDLC) will see an array of benefits, one of which is developer education. With fast feedback in the IDE and pipeline, Veracode Static Analysis provides clear and actionable guidance on which flaws you should be fixing, and how you can fix them faster to improve efficiency.
SAST is undoubtedly a critical piece of the puzzle for closing knowledge gaps, but as Forrester’s report points out, it shouldn’t be viewed as a standalone tool. To drive engagement and adoption, managers leading this effort should integrate their SAST solution with engaging security training for developers to achieve a well-rounded AppSec program that developers want to participate in.
A Veracode Security Labs solution
At Veracode, we think outside the box when it comes to developer training. Veracode Security Labs closes a lot of gaps for developers looking to get a handle on modern threats and improve efficiency.
It uses real applications in contained, hands-on environments that users can practice exploiting and patching. There’s even a Community Edition, a forever-free version that offers some of the same enterprise-grade tools to all developers interested in improving their security knowledge on their own.
Level up without burning out on boring lessons. Veracode Security Labs brings real-world examples into the mix to build muscle memory, which means fewer flaws to fix and an easier path to compliance certifications. Engaging and customizable, there are even creative ways to gamify training with Veracode Security Labs through Capture the Flag (CTF) events and coding contests.
The “Top Secure Coder” crown
To highlight the efficacy of hands-on developer training, we recently held a “Top Secure Coder” challenge at Black Hat USA’s 2020 virtual event, where participants competed by completing Veracode Security Labs challenges. The results were exciting: over 330 people filled out participant application forms, and most then attempted to climb the leaderboard and contend for the top prize.
While participants racked up points by completing labs over the course of the Black Hat 2020 conference, two competitors (friendly rivals who happened to be coworkers at the same Veracode customer) skyrocketed up the leaderboard. After several lead changes throughout the competition, it came down to a tie at 310 points; user "th3jiv3r" completed the labs just a little faster than "turtl3fac3", which served as the tiebreaker on the leaderboard.
While this friendly challenge spurred an entertaining race for all of us, it also proves that when a fun competition is on the line, teams will push harder than they normally might on their own. Engaging developer training works, and when it uses real-world application coding examples, that knowledge sticks.
Think you have what it takes? If you missed out on our first "Top Secure Coder" challenge, we're bringing it back and hosting another virtual competition during DevOps World that you won't want to miss.
Register for the conference to see us at DevOps World 2020 and join our next "Top Secure Coder" challenge to start improving your security skills.
The cloud transformation that we are seeing today is happening across several dimensions. Companies are rapidly moving from their private data centers to hybrid cloud models with cloud service providers. The cloud is where all modern-day work is happening: employees use SaaS-based applications like Box, Office 365, Workday, and Teams for their day-to-day tasks, and developers are building applications directly in the cloud. Industry analysts predict that, by 2022, up to 60% of organizations will use an external service provider's cloud managed service offering. The question is no longer "if" but "when" companies will move to the cloud.
At McAfee, we are committed to helping our customers with this transition. McAfee's MVISION portfolio of products is built to enable customers to succeed in the cloud, helping to secure cloud workloads, applications, and data. One of the flagship suites in this portfolio, designed to secure customers' cloud applications and data, is MVISION Unified Cloud Edge (UCE). UCE integrates three technologies (CASB, DLP, and Web) into a single suite that unlocks a broader set of use cases for customers. For example, Web, DLP, and CASB together allow customers to protect data across the entire continuum of enforcement points: the device, the web, and the cloud, with the ease of implementing consistent data protection policies.
McAfee MVISION UCE is also designed with the ongoing cost pressures of current times, and the unpredictability of a V- or U-shaped post-COVID recovery phase, in mind. A top question on the mind of nearly every customer is how to stay on track with their cloud transformation goals while being prudent with their spend. McAfee teams are continuously helping customers work through this dilemma by discussing the economic value they can derive from UCE. Let's take a look at some of these scenarios.
Better together pricing benefits
Greenfield customers in the early stages of making buying decisions on technologies to start their cloud transition can enjoy significant cost savings through the better-together pricing of the UCE suite. For instance, a customer with 10K employees looking to procure CASB and Web independently, staggered over a period of time, is likely to end up spending ~25-30% more compared to buying the UCE suite (base version) that comes with CASB and Web. Additionally, buying the suite gives customers a more complete security posture. For example, if CASB identifies that your employees are using a file-sharing application that is not approved by IT, the Next Gen Secure Web Gateway will block employees from accessing it. Having one without the other creates vulnerabilities within your cloud architecture. In another situation, if a customer with a similar 10K configuration compares an on-premises Web Gateway and CASB deployment to the cloud-native UCE suite for securing their web traffic and cloud applications, they can yield even higher savings of ~35-40% annually.
Direct cloud traffic security savings
Remote workforce scenario
A recent study from McAfee shows that 47% of employees would prefer not to go back to working from the office the way it was before the pandemic. Additionally, 21% of employees stated that they would like to continue to be part of the at-home workforce. The same McAfee study also shows that threats to enterprises increased by 630% over the same period, with most attacks targeting collaboration services that enable remote work.
With this spike in remote work set to continue, VPN is no longer just a need for road warriors but a requirement for the majority of a company's employees. IT departments are facing an increased load on their existing VPN infrastructure, and VPN costs are top of mind for them.
For example, if you were to split the VPN traffic in such a way that traffic does not need to go through the company’s private data center but instead gets routed directly to the internet, it would not only improve the remote employee’s experience but also reduce the load on the company’s VPN infrastructure. However, taking traffic directly to the internet leaves employees vulnerable without the right security measures in place.
With McAfee's UCE suite, customers can now have the peace of mind of keeping their remote workforce protected. This reduces IT spend on infrastructure supporting high VPN workloads and connections, as well as traditional proxy and firewall costs within the company's private data center. This yields a net positive TCO for the customer, as shown in the illustration below going from scenario A to B, where 90% of the workforce is remote and the majority of the remote workforce traffic (~80%) is routed directly to the cloud. For more details on how next-gen secure web gateway solutions protect direct-to-cloud traffic, please refer to the recent blog on "What to Expect from the Next Generation of Secure Web Gateways."
Branch office scenario
Another common use case where it is critical to secure direct-to-cloud traffic is at branch offices. A recent industry analyst survey suggests that the market sees continuing strong momentum for SD-WAN, with almost 95% of enterprises surveyed expecting to use SD-WAN within 24 months. With increasing numbers of branch offices adopting SD-WAN architecture, each branch office is set up as a local internet breakout routing traffic directly to the cloud. This creates the need for every branch office to have a level of security comparable to a traditional WAN approach where traffic is backhauled to the company's private data center. The cost of outfitting each branch office with its own set of security appliances, and of deploying and managing them, can stack up quickly for any organization.
McAfee UCE offers a cloud-native security solution that secures the direct cloud traffic from every branch with a net positive TCO in addition to the SD-WAN savings, as shown in the illustration above going from scenario B to C. Customers deploying UCE for both remote workforce and branch offices can enjoy even higher savings by combining savings one and two shown in the figure above.
We intend to offer customers these compelling economic benefits from UCE enabling them to stay on track with their transition to the cloud even with the economic realities of the current times! Is there a specific cloud security use case you are looking to solve?
It’s a question that many of us encounter in childhood: “Why did you do that?” As artificial intelligence (AI) begins making more consequential decisions that affect our lives, we also want these machines to be capable of answering that simple yet profound question. After all, why else would we trust AI’s decisions? This desire for satisfactory explanations has spurred scientists at the National Institute of Standards and Technology (NIST) to propose a set of principles by which we can judge how explainable AI’s decisions are. Their draft publication, Four Principles of Explainable Artificial Intelligence, lays out these principles.
I had the great delight of reading Geoff White’s new book, “Crime Dot Com: From Viruses to Vote Rigging, How Hacking Went Global”, I thoroughly recommend it. The book is superbly researched and written, the author’s storytelling investigative journalist style not only lifts the lid on the murky underground world of cybercrime but shines a light on the ingenuity, persistence and ever-increasing global scale of sophisticated cybercriminal enterprises.
In Crime Dot Com, Geoff takes the reader on a global historical tour of the shadowy cybercriminal underworld, from its humble beginnings, with a rare interview with the elusive creator of the ‘Love Bug’ email worm that caused havoc and panic back in 2000, right up to the alarming modern-day phenomenon of election hacking by nation-state actors.
The book tells the tales of the most notorious hacks in recent history, explaining how they were successfully planned and orchestrated, all wonderfully written in a plain English style that my Luddite mother-in-law can understand. It reveals that cybercrime is not just about the stereotypical Hollywood lone hacker, eagerly tapping away on a keyboard in the dark, finding ingenious ways of exploiting IT systems, but is really about obscure online communities of like-minded individuals with questionable moral compasses, collaborating and ultimately exploiting innocent people out of billions of pounds.
The book covers the UK’s most notorious cyberattacks, such as the devastating 2017 WannaCry ransomware worm attack on the NHS and the infamous TalkTalk hack carried out by teenage hackers, delving beyond the media's 'cyber scare' headlines of the time to bring the reader the full story of what happened. The book also explores the rise and evolution of the Anonymous hacktivist culture and takes a deep dive into the less savoury aspects of criminal activities occurring on the dark web.
As you read about the history of cybercrime in this book, a kind of symbiosis between cybercriminal and nation-state hacking activities becomes apparent: from Russian law enforcement turning a blind eye to Russian cybercriminals exploiting the West, to both the NSA’s and North Korea’s alleged involvement in creating the heinous WannaCry ransomware worm, and the UK cybercriminal who disabled that attack. Crime Dot Com also probes the growing number of physical-world impacts caused by cyberattacks, so-called ‘kinetic warfare’: how sophisticated malware called Stuxnet, attributed by the media to the United States military, was unleashed with devastating effect to physically cripple an Iranian nuclear facility in a targeted attack, and why the latest cyber threat actors are targeting Britain’s energy network.
While this book is an easily digestible read for non-cyber security experts, the book provides cybersecurity professionals working on the frontline in defending organisations and citizens against cyber-attacks, with valuable insights and lessons to be learnt about their cyber adversaries and their techniques, particularly in understanding the motivations behind today's common cyberattacks.
5 out of 5: A must-read for anyone with an interest in cybercrime
Here’s How to Be Yourself and NOT let a Scammer Be You!
If you hadn’t truly embraced the incredible benefits of managing your life online, then I bet 2020 has changed things for you. With social distancing still a big consideration in day to day life, more Aussies than ever are managing their lives online so they can stay home and stay well. But, unfortunately, there is a downside – online scams. The expanded online playground of 2020 means cybercrims are upping the ante and investing even more energy doing what they do so well – scamming everyday Aussies out of their hard-earned cash.
This week is Scams Awareness Week in Australia – a good opportunity to be reminded of how our growing use of technology can give scammers more opportunities to trick us into giving away our valuable personal information. In 2019, a whopping $630 million was lost by Australians to scams.
Scammers are Pivoting Too!
We’ve all heard it. 2020 is about pivoting – being flexible and seeking out new opportunities. Well, clearly, we aren’t the only ones heeding the advice, with scammers changing things up to capitalise on the chaotic nature of 2020. In fact, Scamwatch has seen a 55 per cent increase in reports involving loss of personal information this year compared with the same period in 2019, totalling more than 24,000 reports and over $22 million in losses.
Many experts warn that scammers are renowned for ‘following the money’, so they are currently expending a lot of energy targeting Aussies’ superannuation and government relief payments. Using email, text or phone, a scammer will often pretend to be from a government agency, e.g. myGov or the Health Department, and will insist that they require personal information in order to help the ‘victim’ access government payments or their super fund.
They are after driver’s license details, Medicare numbers – whatever they can get their hands on that will give them 100 points of identification which is enough for them to assume the identity of the victim and effectively do anything in their name. They could apply for a credit card, access superannuation accounts, or even tap into a victim’s government payments!
What Can We Do to Protect Ourselves?
There are steps we can all take to minimise the risk of getting caught up in a scam and, to be honest, most of them are remarkably simple. Here are my top tips:
Think Critically
If there was ever a time to tap into your inner Sherlock Holmes, it’s now! If you receive a call, text, or email from someone out of the blue who claims to be from a government agency then tread VERY carefully! Do not feel pressured to share any information with anyone who has contacted you – regardless of what they say. Take a moment and ask yourself why they would be contacting you. If they are calling you – and you still aren’t sure – ask for their number so you can call them back later.
Remember – reputable organisations will rarely – if ever – call you and ask for personal information. And if you still aren’t sure, ask a family member or trusted friend for their advice. But remain sceptical at all times!
Passwords, Passwords, Passwords
Yes, I know I sound like a broken record – but having an easily guessable password that you use on all your devices and accounts is no different to playing Russian Roulette – you won’t come out on top! Unfortunately, data breaches are a reality of our digital life. If a scammer gets their hands on your email and password combo through a data breach – and you have used that same combo on all your accounts – then you are effectively giving them access to your online life.
So, you need a separate, complex password for each of your online accounts. It needs to be at least 8 characters and a combination of numbers, letters (lower and uppercase) and symbols. I love using a nonsensical sentence, but a password manager that does all the thinking – and remembering – is the absolute best way of managing your passwords. Check out McAfee Total Protection, which includes a Password Manager.
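For the technically curious, here is a minimal sketch of what generating a password meeting the criteria above looks like in code. It is purely illustrative (no particular password manager works this way exactly) and uses only Python's standard library:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password containing lowercase, uppercase,
    digits, and symbols, using a cryptographically secure RNG."""
    if length < 8:
        raise ValueError("Use at least 8 characters")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every required character class is present.
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password)):
            return password
```

The key point is the use of `secrets` rather than `random`: password material should always come from a cryptographically secure source.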
Keep Your Personal Information Tight
The best way to keep your personal information safe is by keeping it to yourself! Limit the amount of personal information you share – particularly on social media. Oversharing online makes it even easier for a scammer to steal your identity. And please avoid linking up your credit or debit cards to your online accounts unless absolutely necessary. Yes, it’s so convenient and a great way of making spontaneous purchases but it is a very risky business.
2020 has certainly been a tough year for us all. Many of us are struggling – financially and psychologically as we come to grips with our changing world. So, please – take a little time to tighten up your online life and remember if something seems too good to be true then it probably is!
Back-to-School: Could Your Remote Learner Be Cyber Cheating?
As families across the country ramp up for the new school year, most are considering one of three basic learning options. Kids can attend traditional, in-class learning, they can attend their classes online from home, or they can choose a hybrid of the two. There are also learning pods, or small community groups, springing up as we recently discussed.
Whatever learning scenario your family chooses, each will likely have its own unique challenges. One challenge that seems to be heating up online chats lately is cyber cheating. And it’s not just teachers, administrators, and parents concerned about the potential fallout; kids aren’t thrilled either.
Macy, who is going into her sophomore year of high school, will be returning to the classroom. “I’m going to be in class every day taking notes and then studying at night. On exam day, I’ll take the exam and if it’s a tough subject like Statistics, I will be lucky to get a C. My friend Lindley, whose parents let her do school online, can take the same exam, figure out a way to cheat, and probably get an A. How is that fair?”
The topic is inspiring a number of potential solutions.
Some schools have included cyber cheating as part of their Back-to-School Guidelines for teachers. Others are leaving testing and monitoring up to individual teachers while some districts with bigger budgets are hiring digital proctors or relying on robots, video feeds, and webcams to curb cyber cheating.
At the college level, the effort to reduce cyber cheating is getting sophisticated. The staff at Georgia Tech recently programmed an online bot named Jack to infiltrate popular online cheating sites and pose as a student willing to write papers and do homework for a fee. It’s working.
While exactly how to even out testing requirements for all students — in-class or at home — is a work in progress, there are some practical ways to set your kids up for success this school year wherever they choose to learn.
Ways to Curb Cyber Cheating
Discuss expectations. Does your child understand exactly what cheating is? Sometimes the lines between the real world and the digital world can blur and create grey areas that are tough for kids to navigate. Depending on the age of your child, be sure to define cheating and establish the expectation of integrity and honesty whether in a classroom or at home. Discuss the goal of comprehension and understanding versus googling answers.
Don’t do your child’s work. Parents want to help struggling kids but can often go overboard. When we do our child’s work, it’s easy to forget — we’re actually cheating!
Review the hot topics. Discuss the big topics around cheating such as plagiarism, googling answers, cheat sites, downloading past tests, crib sheets, sharing school work between friends, doing work for others, copyright violations, giving proper attribution.
Keep in touch with teachers. With school guidelines constantly changing, it’s important to keep in close contact with teachers. Ask about test monitoring and expectations for remote students.
Be present. It’s natural to hover over younger kids but we can get lax with our teens. Be present and monitor their workload. Let your remote high schooler know that his or her learning is a priority.
Monitor workload. As academic pressure mounts, so too can the temptation to cut corners or cheat. Talk through the rough spots, get your child a tutor if needed, and step in to help prepare for tests (just don’t do the work).
Rely on software for help. If you suspect your child may be cheating, or that it may be a temptation, use parental monitoring software. Monitoring software can show you a log of sites accessed on any given day and allow you to block other sites.
Equip Yourself. Follow the advice of a Pennsylvania superintendent who says his teachers will be reading Generation Z Unfiltered, a book by Tim Elmore, to help them easily identify signs of cheating.
No matter where your child settles in to learn this year, it will take a family-sized effort to navigate these new academic halls. Stick together, keep talking, give extra grace for mistakes along the way, and work together to make this the best school year ever. You’ve got this, parents!
Back-to-School: Prepping Your Tech for Learning Pods or Micro School
With a new academic year starting up, the look of “back to school” will vary greatly depending on where you live. While some schools will open again for in-person classes, numerous others will adopt a hybrid mix of in-person and online education, with others remaining online-only for the time being. In all, this leaves many families in the same situation they faced last spring: school at home.
In April, I published a series of articles with the aim of offering parents and children some support as they made this sudden shift to online learning. Whether your school year is about to start, or if it’s just getting underway, let’s revisit that advice and make a few updates to it as well.
Meeting the challenges of schooling at home
There’s a good chance that this is your second round with schooling at home. And no doubt, there’s certainly a range of feelings that come along with that as you face the challenges of school at home once again. You’re not alone. Back in April, we checked in with 1,000 parents of school-aged children in the U.S. and got their take on the challenges their families were facing when remote learning was just beginning. Here’s a quick refresher:
The Top Five Difficulties
1. Keeping children focused on schoolwork (instead of other online activities) – 50.31%
2. Establishing a daily routine – 49.26%
3. Balancing household responsibilities and teaching – 41.83%
4. Establishing a wake-up and bedtime schedule – 33.40%
5. Balancing working from home and teaching – 33.31%
Perhaps you’ve experienced much the same, and perhaps you’ve found a few solutions to address these difficulties since then. For parents who’re looking for a little more support on that front, check out my blog on bringing structure to your school day at home. I wrote it along with a long-time educator who was tackling the same challenges herself, albeit on two fronts: one as a teacher and one as a parent of school-aged and college-aged children. There’s plenty of good advice in there—from sample schedules to free educational resources.
What are learning pods and how do you get your tech ready for one?
Another approach to remote learning that’s generating some conversation right now is “learning pods,” also referred to as “bubbles” or “micro-schools” in some places. Broadly, a learning pod entails a small group of families sharing spaces and the responsibilities of schooling at home by gathering a few children together to do their online learning as a group—which appears to be very much a response to those top five difficulties mentioned above.
These learning pods take many forms, and in some instances tutors participate in the pods as well. I encourage you to read up on the topic, and to get to know the pros, cons, and issues associated with it. This is all relatively new territory and the practices are still taking shape. Needless to say, health, safety, and well-being are vital considerations as we continue to navigate these days and figure out the best way to educate and support our children.
Whether your family participates in a learning pod or not, your child will likely have a primary device that they’ll use for online schooling. That could be one your child already has, one that they share with other members of the family, or one that’s provided to them by their school. In any case, you’ll want to make sure that the device they’re using is protected. Here’s a quick look:
Using your home computer or laptop for school
In the case of a computer that your family already has, you’ll want to ensure that it has a full security service installed. This means more than just antivirus, though. A strong security service will also have firewall protection that protects from hackers, safe browsing tools that alert you of unsafe sites or links before you click them, plus a set of parental controls that will block distractions like certain apps or websites—along with inappropriate content. One other reality of online learning is a set of new passwords as your child is pointed towards new learning portals and educational sites. A good option here is to use a password manager that will keep all of that organized and encrypted so that they’re safer from attacks.
Using computers issued by schools
In some cases, this will be a laptop computer or tablet provided by the school district, which students can keep for the school year. Here, the security software and settings will already be in place thanks to the district. Be aware that a device that’s managed centrally this way will likely be limited in terms of which settings can be updated and what software can be added. Thus, if your child has a school-issued device, follow the advice of the school and its IT admin. If you have any questions, contact them.
Protect your children even more with a VPN
I recommend the use of a VPN (virtual private network) pretty much across the board, and I likewise recommend one here. A sound VPN service will protect the privacy of your internet connection with bank-level encryption and keep hackers at bay, such as when they try to steal passwords or data. This is particularly ideal for your child, who will be spending several hours a day online with school and other activities. The bonus is that anyone who uses the device with the VPN service will be similarly protected too.
Get your Wi-Fi ready for the best bandwidth
Whether you’re planning on taking part in a learning pod or will have multiple family members using your Wi-Fi at once, giving your network a performance tune-up is a must.
1) Start out with a speed test. This will give you a baseline idea of just how fast your network is. There are plenty of free services available, like the speed test from Ookla. The results take just seconds, which you can then compare to the speed you’ve subscribed to on your internet bill. If you see a gap, you can contact your provider to diagnose the issue or perhaps get an updated router from them.
2) Check your router location. Place it in a central location in your residence if possible, preferably high on a shelf with no obstructions like books or furniture. This will help broadcast a better signal and avoid “dead zones” in your home. Another option is to contact your provider about Wi-Fi extenders, which are often simple as a little plug-in device that will rebroadcast your Wi-Fi router’s signal and give you better coverage. You can also look for home routers that include built-in security.
3) Create a guest network with its own password. This option allows you to share your Wi-Fi with guests without worrying about them accessing things like shared folders, printers, or other devices on your network. This way, if your home network is called “MyFamily,” you can create “MyFamily-Guest” that offers more limited access and with its own unique password. The way you go about this varies from router to router, yet it involves using your browser to adjust your router settings. Because of this, your best bet is to get in touch with your provider for help, as they should have resources that can walk you through it.
Look after your mental well-being this school year too.
With another round of schooling at home upon so many of us, the other thing we should talk about is our continued well-being—particularly as so much of daily life still feels as if it’s in limbo. Beyond taking care of your devices, take care of yourself too. While we’re all spending more time on our screens, remember the importance of unplugging on a regular basis and mixing up your routine. Likewise, take time to strengthen your family’s wellbeing, both online and offline.
We’ve been learning so much as we’ve made what seems like a continued stream of adaptations to the days we’re living in. Some we’ll be glad to cast aside when the time is right, yet I’m sure there are others that will be welcome additions to our lives. I know of more than a few I’ll certainly keep when this has all passed. And I hope you have a good list of your own too.
Stay Updated
To stay updated on all things McAfee and for more resources on staying secure from home, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.
Students everywhere are adjusting to new forms of education outside of the classroom. In conjunction with distance learning, it’s likely that many students will now be required to take online exams. To ensure that students don’t cheat on their online exams, educators are employing online proctoring tools that help protect exam integrity. According to Tom’s Guide, ProctorU, a popular proctoring platform for online exams, recently disclosed that it was the victim of a major data breach, resulting in the data of 444,000 people being exposed online.
What You Should Know About This Breach
While it’s unclear how exactly the company was breached, ProctorU’s database is now offered for free in online hacker forums. The database contains private information of mostly college students, including their names, home addresses, emails, cell phone numbers, hashed passwords, and organization details. Specifically, it contained email addresses associated with a variety of institutions, including UCLA, Harvard, Princeton, Yale, North Virginia Community College, University of Texas, Columbia, UC Davis, Syracuse University, and more.
Don’t Get Schooled By Hackers
In their blog responding to the incident, ProctorU stated that they disabled the server, terminated access to the environment, and are currently investigating the incident. The company also implemented additional security measures to prevent similar events from recurring and notified universities and organizations affected by the breach.
However, students can take matters into their own hands as well. There are multiple proactive steps users can take to help minimize the damage caused by data breaches like this one. If you’re participating in distance education this school year, follow these security tips to help protect your data from hackers:
Check to see if you were affected by the breach
If you or someone you know has made a ProctorU account, use this tool – which shows users if they’ve been compromised by a security breach – to check if you could have been potentially affected.
Change your credentials
If you’ve been compromised by the ProctorU breach or another like it, err on the side of caution and change your passwords for all of your accounts. Taking extra precautions can help you avoid future attacks.
Take password protection seriously
When updating your credentials, you should always ensure that your password is strong and unique. Many users, including students, utilize the same password or variations of it across all their accounts. Therefore, be sure to diversify your passcodes to ensure hackers cannot obtain access to all your accounts at once, should one password be compromised. You can also employ a password manager to keep track of your credentials.
Enable two-factor or multi-factor authentication
Two-factor or multi-factor authentication provides an extra layer of security, as it requires multiple forms of verification. This reduces the risk of successful impersonation by hackers.
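One common "second factor" is the six-digit code from an authenticator app, which most apps derive using the TOTP algorithm standardized in RFC 6238. As an illustrative sketch only (real apps also handle secret provisioning and clock drift), the derivation looks like this:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, digits=6, step=30):
    """Derive a time-based one-time password per RFC 6238 (HMAC-SHA1).

    secret: shared secret bytes; for_time: Unix time (defaults to now)."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step  # which 30-second window we are in
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both the server and the authenticator app compute the same code from a shared secret, a stolen password alone is not enough to log in.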
Stay educated on security precautions
As you adapt to learning from home, you’ll likely consider downloading various online tools to help make the transition easier. Before downloading the first tools you see, do your research and check for possible security vulnerabilities or known threats.
Use a comprehensive security solution
Enlist the help of a solution like McAfee® Total Protection to help safeguard your devices and data. With an extra layer of protection across your devices, you can continue learning online safely.
Stay Updated
To stay updated on all things McAfee and for more resources on staying secure from home, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.
In a U.S. government cyber security advisory released today, the National Security Agency and Federal Bureau of Investigation warn of a previously undisclosed piece of Linux rootkit malware called Drovorub and attribute the threat to the malicious actor APT28. The report is incredibly detailed and proposes several complementary detection techniques to effectively identify Drovorub malware activity. Multiple investigative methods are suggested because, as with most rootkits, large-scale detection on a host can be a real challenge. The NSA and FBI are explicit in their report that systems with a kernel version of 3.7 or lower are most susceptible to Drovorub malware due to the absence of adequate kernel signing enforcement.
Keeping a system updated and fully protected isn’t specific to Windows-based environments. Linux based systems are widespread within many enterprise organizations, requiring the same maintenance as any modern operating system. Linux offers a robust, secure computing platform which can meet many needs. As in most cases, proper configuration is key to the security of the platform.
For specific McAfee technology protections against Drovorub please visit the dedicated Drovorub KB article here.
In addition to the guidance provided in the U.S. government report and our product specific knowledge base article, McAfee encourages organizations to take note of and apply the following best practices (where possible) for rootkit detection and kernel security.
Scanning for Rootkits
Just like a malware scanner, a rootkit scanner inspects low-level processes to determine whether any malicious code is loaded at bootup. Below are two examples of software that can be used for general rootkit detection:
chkrootkit – A rootkit scanner for Linux that discovers hard-to-find rootkits.
Rkhunter – A rootkit scanner for Linux that discovers backdoors and possible local exploits.
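As a quick reference, typical invocations look like the following; this is a sketch, and package names and flags can vary by distribution:

```shell
# Run a chkrootkit scan:
sudo chkrootkit
# Update rkhunter's data files first, then run a full check:
sudo rkhunter --update
sudo rkhunter --check
```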
In this specific Drovorub case, the advice is to forensically analyze a machine’s memory with tools like Volatility. Using the Volatility plugin “Linux_Psxview”, the presence of the Drovorub client can be detected even though it does not show up in the normal pslist output.
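A sketch of that workflow with Volatility 2 might look like the following; the memory image name and profile are assumptions and must match the captured system:

```shell
# Acquire memory first (e.g. with LiME), then cross-reference process
# listings from several kernel structures; processes hidden from the
# normal task list stand out in the comparison:
python vol.py -f memory.lime --profile=LinuxUbuntu_4_4_0-x64 linux_psxview
```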
Linux Kernel hardening
Today’s advisory suggests that organizations enable UEFI Secure Boot in “full” or “thorough” mode on x86-64 systems. UEFI Secure Boot requires cryptographically signed firmware and kernels. Because unsigned drivers cannot be loaded, this action decreases the attack surface by making it more difficult for an attacker to insert a malicious kernel module into the system and for unsigned rootkits to remain persistent after a reboot.
Organizations should take note, however, that Secure Boot is not integrated in all Linux distributions, and enabling it comes with some challenges: it often requires manual intervention whenever a kernel or module is upgraded, and it may prevent some products from loading. This knowledge base article from VMware discusses ways to address these Secure Boot issues.
Securing the kernel
There are several steps organizations can take to secure the Linux kernel and take advantage of the features it provides. We will highlight some of the best practices that can be used and applied. Please apply these within a test environment before rolling them out in production.
Kernel module signing
Since Linux 3.7, the kernel has supported digital signatures on loadable kernel modules. This facility can be enabled in the kernel with settings in CONFIG_MODULE_SIG. These options can require valid signatures; enable automatic module signing during the kernel build phase; and specify which hash algorithm to use. Additionally, local or remote keys can be used. By requiring valid digital signatures, only known valid modules can be loaded, decreasing your system’s attack surface.
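As a sketch, the relevant options in the kernel configuration look like this; the exact option set depends on the kernel version in use:

```
CONFIG_MODULE_SIG=y            # check signatures on loadable modules
CONFIG_MODULE_SIG_FORCE=y      # refuse to load modules without a valid signature
CONFIG_MODULE_SIG_ALL=y        # sign all modules during the build/install phase
CONFIG_MODULE_SIG_SHA512=y     # hash algorithm used for signing
```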
Module loading rules
Only known modules should be loadable. Module loading can be disallowed by default, with an explicit list of modules exempt from the ban. The following command disables further module loading:
sysctl kernel.modules_disabled=1
Some modules required for system operation may normally be loaded on demand rather than at boot. To ensure these modules are available, they must be loaded at startup, before loading is disabled. To load these modules, list them in a file located in /etc/modules-load.d.
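Putting the two pieces together, a minimal sketch might look like this; br_netfilter is only an illustrative module name, and the ordering of the boot-time steps matters because the lockout cannot be undone until reboot:

```shell
# Ensure a genuinely needed module is loaded at startup:
echo br_netfilter > /etc/modules-load.d/required.conf
# Then disable any further module loading for the rest of uptime:
sysctl kernel.modules_disabled=1
```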
Disabling modules completely
Depending on the system in question, disabling all non-necessary hardware in the kernel configuration and building all required driver code directly into the kernel rather than using modules could allow for completely disabling loadable kernel module support. For special use systems, this may be a viable option. By disallowing modules entirely, your system’s attack surface can be drastically reduced.
Fully disabling kernel module support might only be possible for special purpose systems with a known usage pattern. General purpose, user-facing machines will likely need module support to support user access patterns.
Using Linux kernel Lockdown
The lockdown patches have been merged into the kernel since version 5.4. Even with Secure Boot enabled, root could otherwise still modify the running kernel and, for example, apply a hot patch to create a persistent process. Lockdown was developed to provide a policy that prevents root from modifying the kernel code. Lockdown has two modes: “integrity” and “confidentiality”. The community generally advises organizations to use “integrity” mode and reserve “confidentiality” mode for special systems.
Harden sysctl.conf
The sysctl.conf file is the main kernel parameter configuration point for a Linux system. By using secure defaults, the whole system will benefit from a more secure foundation. Example options include disabling IPv6 if not in use, ignoring network broadcast packets, enabling ASLR, and activating DEP/NX. (https://www.cyberciti.biz/faq/linux-kernel-etcsysctl-conf-security-hardening/)
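A few example entries along the lines of the linked guidance; every entry should be validated against your workload before deployment:

```
net.ipv6.conf.all.disable_ipv6 = 1        # only if IPv6 is genuinely unused
net.ipv4.icmp_echo_ignore_broadcasts = 1  # ignore broadcast pings
kernel.randomize_va_space = 2             # full ASLR
kernel.kptr_restrict = 2                  # hide kernel pointers from userspace
```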
Enable SELinux or AppArmor
Modern Linux systems include the security enhancement systems AppArmor or SELinux, depending on the distribution. These allow for granular access control with security policies. SELinux is installed and enabled by default on CentOS and RedHat Enterprise Linux operating systems, while AppArmor is installed and enabled by default on Ubuntu and SuSE Linux Enterprise systems.
We often observe that people disable these security enhancement systems as soon as they run into an issue, since they are easy to disable with root privileges. But taking the time to learn how to allow services and fix issues is very important, as these systems provide an additional layer of security on a Linux system.
AppArmor is a Linux kernel security module that provides capabilities similar to SELinux. While SELinux operates on files (specifically inodes) and requires filesystem support, AppArmor works on paths, while being file system agnostic. Considered by many to be easier to use, it is mostly transparent to regular users. While SELinux can be (potentially) more secure, the complexity of the system has many users preferring AppArmor.
Among the mainline Enterprise-level Linux distributions, RedHat embraces SELinux, while SuSE Enterprise embraces (and owns the trademark to) AppArmor. Canonical is a significant contributor to AppArmor as well, and supports it by default in Ubuntu.
Additional Linux System hardening
In light of today’s advisory, we have focused mostly on securing the Linux Kernel in this article. However, there are many best practices to secure Linux (or nearly any modern operating system), including:
Removal of unused software
Disabling unused services
Enabling auditing
Controlling API access
Limiting root account usage
Incorporating a least-privilege policy as much as possible
Backing up your system
Increasing ASLR entropy via sysctl to make reliable exploitation more difficult, by increasing the number of memory locations at which libraries can be stored.
Detailed hardening and securing guides for Linux distributions can be downloaded from:
Linux kernel and system hardening may present a learning curve for organizations and administrators more familiar with the configuration and use of Windows operating systems. However, given the information provided in the NSA-FBI publication and threat actors’ growing adoption of Linux-based malware, we advise organizations to remain vigilant, harden Linux systems as much as possible, and deploy adequate security products.
Women at McAfee are succeeding in cybersecurity sales. With an inclusive culture, raw talent, and passion, McAfee saleswomen continue to grow as professionals and people and seek to inspire women.
Now, leading women at McAfee help illustrate the importance of learning and leveraging feedback to stimulate the growth necessary for success. They also share how they balance life, including motherhood, while maintaining bustling professional careers.
Passion, persistence, and time: “Bring passion, persistence, and the right skillset. Acquiring the relevant skills in analytics or sales is important and you must give yourself time to build them. You need the right skills to get started, but passion and persistence will help guide your growth. Remember, everything is possible.”
— Anna, Sales Operations, Plano, TX, United States
Responding to feedback: “My greatest achievement is reaching my current position, and it’s all because I embraced feedback. People shared they were hesitant to put me in a management role. I listened and absorbed that feedback. I then zeroed in on improving in that area. I was passionate about achieving my goal and began to build my reputation and seek buy-in from those around me. I continued to show others what I was capable of and worked collaboratively to achieve great outcomes.”
—Briana, Presales/Sales Engineering, Brooklyn, NY, United States
Online learning after a career pause: “It can be a challenge to keep up with technology in a fast-paced industry. I left the corporate workforce for five years before joining McAfee; I’m glad McAfee looked beyond that pause in my career. It is encouraging to see McAfee giving opportunities to candidates ready to rejoin the workforce and making a conscious effort in building an inclusive team. I think with the right attitude, willingness to learn, and opportunity, women who’ve paused careers can rejoin the industry. It helps to leverage the many online resources available so you can keep learning about the industry and pick up different skills to equip yourself with relevant knowledge.”
– Caris, Sales Operations, Singapore
Hands-on experience: “Learning never stops. It is important to build the foundation of your knowledge, but true learning comes from engaging in real-world projects. Always be ready to take every challenge and opportunity to start your career in presales. Say yes with intention.”
—Delaram, Presales/Sales Engineering, Sydney, Australia
Listen and learn from others: “My most amazing professional achievement in the last 10 years is my ability to balance motherhood and a career. The way McAfee encourages working mothers stimulated the learning experience for me. In sales, you are always learning. Every day is a new day with new challenges and experiences. We must listen, because listening teaches us so that we grow – and success comes with growth. Soak up the experience of your peers, leaders, and mentors, and carry the mindset that you will never stop learning. The payoff is worth it.”
—Eadaoin, Inside Sales, Cork, Ireland
Embrace challenges, then unplug: “I moved to McAfee three years ago through a referral. I was new to the security industry and completely changed territories, focusing now on civilian agencies. I looked at this as an opportunity to expand my skillset and continue building my personal brand. At the same time, I always prioritize time for myself. I love my work, but at some point, you need to turn off and call it a day. In my downtime, I love working out, listening to music, or just making time for me. I return to work with clearer focus. I’m very happy at McAfee and look forward to what the future holds.”
— Kate, Enterprise Sales – Federal, Washington, DC, United States
Me time as a mom: “Before I became a mum, time to myself was easy to hold onto and therefore not as important. Now I find myself swinging between working and caring for my child. The rest seems to have disappeared. To become the woman, mother, and professional I want to be, I focus on reclaiming that sanctuary where I make time to exercise, socialize, or simply relax and unwind. I’m not sure I’ve found the right balance just yet, but now that I understand its value, I look forward to the journey of finding it.”
—Kimberly, Channel Sales, Essex, England
Positive self-talk and community: “Positive self-talk every day is important. I do my best to plan my “me time,” but when I don’t have time for myself on some days, I remind myself that it will be okay and make a plan to get the time I need the following day. I encourage myself day to day so that I don’t get lost in the hustle. If I’m a little overwhelmed, one thing that’s particularly helpful is to talk with others about their day. That makes me feel better, and I feel a part of a community.”
—Kristol, Inside Sales, Plano, TX, United States
Take time off: “I love technology and it’s my passion, but the work isn’t easy. Staying on top of technology advancements is a never-ending learning process. Don’t be afraid to ask questions and work towards finding balance. I used to fear taking vacations or an occasional day off because I thought I might miss something. But that has changed now that I know how important finding the balance between work and personal life is. Sometimes, balance is just part of the greater learning process. Now, each week I look at my schedule and find ‘me’ time. I also try to keep my office and home life separate, especially in the physical space.”
— Melissa, Presales/Sales Engineering, Plano, TX, United States
We bring out the best in each other when all voices are heard. It’s only then that we clear barriers and reach our goals. Coming up, women offer insights into the importance of support systems and inspiration.
You work hard to produce quality applications on tight deadlines, and like every other development team out there, that often means relying on open source code to keep projects on track. Having access to plug-and-go code is invaluable when you’re racing the clock, but the accessibility of open source libraries comes with a caveat: increased risk.
In our recent report, State of Software Security: Open Source Edition, we examined the security of open source libraries by studying data from 85,000 applications, including 351,000 unique external libraries. From the data, we evaluated the prevalence of flaws in open source libraries as well as how vulnerable they are, gaining insight into the risk that you might carry when you use open source code in your software development process.
While we found that a sizeable 70.5 percent of the applications had an open source flaw on initial scan, some of the most interesting drill-down data came from examining flaws in the top 50 open source libraries broken down by language. The results, highlighted in an interactive infographic, were eye-opening about a few languages in particular.
Languages to keep an eye on
As an example, JavaScript had more libraries in use than any other language, and a handful stood out as containing risky flaws. In the charts below, taken from our interactive infographic, which you can view in full here, the lighter blue dots represent libraries that have some flawed versions in use, and their placement reflects the percentage of applications each library is used in. The largest light blue dot, hovering around 88 percent, represents Lodash, with 401 versions of that library containing a flaw – something to keep in mind when using Lodash in your code.
PHP also raised some alarms as we dug into the data. We found that including any given PHP library in your code increases the chance of introducing a security flaw along with that library by more than 50 percent.
The flaws it carries are dangerous, too. We uncovered that more than 40 percent of PHP libraries contained Cross-Site Scripting (XSS) flaws, with Authentication and Broken Access Control vulnerabilities close behind. And as you can see in the chart below, the light blue dot towards the right of the scale represents PHPUnit libraries as a flaw offender, with about 63 versions containing a flaw.
One of the more colorful charts in our data represents Ruby, for which we uncovered three libraries with versions in use that are known to have been exploited. Those three libraries include:
Rails: Used in 47 percent of applications written in Ruby, with 337 versions containing a flaw and 133 versions exploited.
Action Pack: Used in 49 percent of applications written in Ruby, with 343 versions containing a flaw and 85 versions exploited.
Active Support: Used in 66 percent of applications written in Ruby, with 235 versions containing a flaw and 117 versions exploited in the wild.
We found that over one-fifth of open source libraries have public proof-of-concept (PoC) exploits, which many organizations use to prioritize fixing flaws. Alongside PHP, Ruby has noticeable PoC exploits for some versions in use, such as Nokogiri with 24 versions and Rack with 25 versions.
The bottom line is that it’s important to stay on top of the security of your code, including snippets you didn’t write from scratch. Software Composition Analysis (SCA) will help you identify vulnerabilities in open source libraries, while also providing recommendations on version updating so that you can find and fix flaws in your open source code before they become a problem.
Check out the full infographic to see the rest of the data on the top 50 open source libraries broken down by language, and read the full report here to gain more insight.
If you are a young person looking for your ‘special someone’ then 2020 would have seriously cramped your style. Whether you’ve been in lockdown, working from home or simply staying home to stay well, your social life would have taken a hit. So, it comes as no surprise that dating sites have become the playground for young people – in fact, people of all ages – who are ‘looking for love’.
The New Normal
Our ‘in-person’ social lives look a whole lot different than they did in 2019. Over the last six months, we’ve all been living with a barrage of new restrictions which has meant a lot of time at home. Earlier in 2020, many cities effectively closed up while others – such as Melbourne – recently imposed a strict lockdown to curb the spread of the virus. In NSW, cafes, pubs, and restaurants reopened but with tight restrictions on the number of people allowed inside. My older boys spent a few Saturday nights lining up to get into the local pub but ended up returning home to the couch.
And of course, it goes without saying that protecting our people should be the biggest priority, but it has taken a toll on our ability to socialise and connect with others.
Most Aussies Now Meet Their Partners Online
One of my oldest friends found the love of her life online. 15 years later, all is going swimmingly! She was an Aussie and he was an Englishman – but it didn’t matter – love prevailed! There is no humanly possible way these two would have connected if it wasn’t for the wonders of technology.
But add in a global pandemic and you don’t have to be a rocket scientist to predict that these statistics will only increase!
Proceed with Caution
With so many people, understandably, feeling isolated and lonely, online dating is a wonderful way to make connections – both romantic and platonic. But you do need to have your wits about you to stay safe.
Scammers will often prey on our need to feel connected to each other and expend a lot of energy developing relationships for all the wrong reasons. So, if you have a teen or young adult in your house who is directing their energies into online dating sites, then please check in with them to ensure they are being safe.
Here are my top tips on staying safe while you look for love online:
Don’t Get Too Personal – While it might feel harmless sharing your name, location and occupation with your new online friends, it doesn’t take much for a scammer to piece together your details to access your personal info, bank accounts or even steal your identity. Never use your full name on dating sites and only share what is absolutely necessary!
Do Your Homework – Before you meet someone in person after meeting them online, always do your homework. A Google search is a great place to start and even using Google Images will help you get a better understanding of a person. Don’t forget to check out their LinkedIn account too. And why not investigate whether you might have mutual friends? If so, ask them for their ‘2 cents worth’ too.
Think Before You Send – Sharing sexy pics or videos with the person you are dating online might seem like a good idea in the moment but please consider how this could affect you in the future. Once those pictures and videos are online, they are online forever. Even social media apps that say pictures go away after a few seconds can be easily circumvented with a screenshot. It’s not just celebrities who have intimate pictures spread around the Internet!
Make passwords a priority – Ensure all your online dating and social media accounts, and all your devices, have separate and unique passwords. Ideally, each password should have a combination of lower- and upper-case letters, numbers, and special characters. I love using a nonsensical, crazy sentence!
One of the greatest lessons of 2020 for me is our need for human connection.
My biggest takeaway from 2020? Humans need other humans to not only survive but thrive. So, if you’re looking for your life partner – or even companionship – the online world may be the perfect place for the moment. But please – exercise caution and be safe – because as I say to my boys all the time – trust your gut, because it’s usually right!
Open source has become the foundation for modern software development. Vendors use open source software to stay competitive and improve the speed, quality, and cost of the development process. At the same time, it is critical to maintain and audit open source libraries used in products as they can expose a significant volume of risk.
The responsibility of auditing code for potential security risks lies with the organization using it. Some of the highest-impact vulnerabilities we have seen originated in open source software. The famous Equifax data breach was due to a vulnerability in the open source component Apache Struts, widely used in mainstream web frameworks. Furthermore, the 2020 Open Source Security Risk and Analysis report states that of the applications audited in 2019, 99% of codebases contained open source components and 75% contained vulnerabilities, with 49% containing high-risk vulnerabilities.
Graphics libraries have a rich history of vulnerabilities, and the volume of exploitable issues is especially magnified when the code base is relatively old and has not been recompiled recently. It turns out that graphics libraries on Linux are widely used in many applications but are not sufficiently audited and tested for security issues. This eventually became a driving force for us to test multiple vector graphics and GDI libraries on Linux, one of which was libEMF, a Linux C++ library written for a similar purpose and used in multiple graphics tools that support conversion into other vector formats. We tested this library for several days and found multiple vulnerabilities, including denial-of-service issues, an integer overflow, out-of-bounds memory access, use-after-free conditions, and uninitialized memory use.
All the vulnerabilities were locally exploitable. We reported them to the code’s maintainer, leading to two new versions of the library being released in a matter of weeks. This reflects McAfee’s commitment to protecting its customers from upcoming security threats, including defending them against those found in open source software. Through collaboration with McAfee researchers, all issues in this library were fixed in a timely manner.
In this blog we will emphasize why it is critical to audit the third-party code we often use in our products and outline general practices for security researchers to test it for security issues.
Introduction
Fuzzing is an extremely popular technique used by security researchers to discover potential zero-day vulnerabilities in open as well as closed source software. Accepted as a fundamental process in product testing, it is employed by many organizations to discover vulnerabilities early in the product development lifecycle, yet it is still substantially overlooked elsewhere. A well-designed fuzzer typically comprises a set of tools that make the fuzzing process efficient and fast enough to discover exploitable bugs in a short period, helping developers patch them early.
Several of the fuzzers available today help researchers guide the fuzzing process by measuring code coverage through static or dynamic code instrumentation. This results in more efficient and relevant inputs to the target software, exercising more code paths and leading to more vulnerabilities discovered in the target. Modern fuzzing frameworks also come with feedback-driven channels for maximizing the code coverage of the target software, learning the input format along the way and comparing the code coverage of each input via feedback mechanisms, resulting in more efficient mutated inputs. Some of the state-of-the-art fuzzing frameworks available are American Fuzzy Lop (AFL), libFuzzer and Honggfuzz.
Fuzzers like AFL on Linux come with compiler wrappers (afl-gcc, afl-clang, etc.). With the assembly parsing module afl-as, AFL parses the generated assembly code to add compile-time instrumentation, helping visualize code coverage. Additionally, modern compilers come with sanitizer modules like AddressSanitizer (ASAN), MemorySanitizer (MSAN), LeakSanitizer (LSAN), ThreadSanitizer (TSAN), etc., which can further increase the fuzzer’s bug-finding abilities. The list below highlights the variety of memory corruption bugs that each sanitizer can discover when used with a fuzzer.
ASAN: Use After Free Vulnerabilities, Heap Buffer Overflow, Stack Buffer Overflow, Initialization Order Bugs, Out of Bounds Access
MSAN: Uninitialized Memory Reads
UBSAN: Null Pointer Dereferences, Signed Integer Overflows, Typecast Overflows, Divide by Zero Errors
TSAN: Race Conditions
LSAN: Run Time Memory Leak Detection, Memory Leaks
One of the McAfee Vulnerability Research Team goals is to fuzz multiple open and closed source libraries and report vulnerabilities to the vendors before they are exploited. Over the next few sections of this blog, we aim to highlight the vulnerabilities we discovered and reported while researching one open source library, LibEMF (ECMA-234 Metafile Library).
Using American Fuzzy Lop (AFL)
Much of the technical detail and inner working of this state-of-the-art feedback-driven fuzzer is available in its documentation. While AFL has many use cases, its most common is to fuzz programs written in C/C++, since they are susceptible to widely exploited memory corruption bugs, and that is where AFL and its mutation strategies are extremely useful. AFL gave rise to several forks like AFLSmart, AFLFast and Python AFL, which differ in their mutation strategies and performance-oriented extensions. Eventually, AFL was also ported to the Windows platform as WinAFL, which uses a dynamic instrumentation approach predominantly for closed source binary fuzzing.
The fuzzing process primarily comprises the following tasks: compiling the source with instrumentation, writing a test harness, gathering an input corpus, running the fuzzer, and triaging the resulting crashes.
Fuzzing libEMF (ECMA-234 Metafile Library) with AFL
LibEMF (Enhanced Metafile Library) is an EMF parsing library written in C/C++ and provides a drawing toolkit based on ECMA-234. The purpose of this library is to create vector graphic files. Documentation of this library is available here and is maintained by the developer.
We chose to fuzz libEMF with AFL because of its compile-time instrumentation capabilities and good mutation strategies, as mentioned earlier. We compiled the source code in hardened mode, which adds code-hardening options when invoking the downstream compiler and helps with discovering memory corruption bugs.
Compiling the Source
To use the code instrumentation capabilities of AFL, we must compile the source code with the AFL compiler wrappers afl-gcc/afl-g++, with an additional address sanitizer flag enabled, using the following command:
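The exact command is not reproduced above; a sketch of the instrumented build, using AFL's documented AFL_USE_ASAN switch for an autotools project, could look like:

```shell
export AFL_USE_ASAN=1           # the AFL wrappers add -fsanitize=address for us
CC=afl-gcc CXX=afl-g++ ./configure
make
```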
Below is a snapshot of the compilation process showing how the instrumentation is added to the code:
The pwntools Python package comes with a useful utility script, checksec, that can examine a binary’s security properties. Executing checksec over the library confirms the code is now ASAN-instrumented. This will allow us to discover non-crashing memory access bugs as well:
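For example (the library path is an assumption; adjust it to wherever your build placed the shared object):

```shell
pip install pwntools            # provides the checksec utility
checksec ./libemf/.libs/libEMF.so
```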
A test harness is a program that uses APIs from the library to parse the file given as a command line argument. AFL uses this harness to pass its mutated files as an argument to the program, resulting in several executions per second. While writing the harness, it is extremely important to release resources before returning, to avoid excessive memory usage that can eventually crash the system. Our harness for parsing EMF files using APIs from the libEMF library is shown here:
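The harness itself appears only as a screenshot in the original post. As a self-contained sketch of the same pattern, the skeleton below reads the file passed on the command line and hands the bytes to a parser. Note that parse_metafile is a stand-in stub, not libEMF's actual API; a real harness would call the library's parsing entry points there instead.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Stand-in for the library call: a real harness would invoke libEMF's
// parsing entry points here instead of this trivial stub.
static int parse_metafile(const unsigned char* data, std::size_t size) {
    (void)data;
    return (size >= 4) ? 0 : -1;
}

// Read the whole file and feed it to the parser. Under AFL, a tiny main()
// would call this with argv[1] (the "@@" placeholder in the fuzzer command).
int run_harness(const char* path) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return -1;
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    std::vector<unsigned char> buf(size > 0 ? static_cast<std::size_t>(size) : 0);
    if (!buf.empty() &&
        std::fread(buf.data(), 1, buf.size(), f) != buf.size()) {
        std::fclose(f);     // release resources before returning
        return -1;
    }
    std::fclose(f);         // release resources before returning
    return parse_metafile(buf.data(), buf.size());
}
```

Releasing the file handle on every path is the point emphasized above: leaked resources accumulate across AFL's thousands of executions.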
AFL will also track the code coverage with every input it passes to the program and, if the mutations result in new program states, it adds the test case to the queue. We compiled our test harness using the following command:
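The command is not shown above; a plausible sketch, with the source file, output binary, and library names assumed, would be:

```shell
AFL_USE_ASAN=1 afl-g++ harness.cpp -o harness -lEMF
```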
While a fuzzer can learn and generate the input format even from an empty seed file, gathering the initial corpus of input files is a significant step in an effective fuzzing process and can save huge amounts of CPU cycles. Depending upon the popularity of the file format, crawling the web and downloading an initial set of input files is one of the most intuitive approaches. In this case, it is not a bad idea to manually construct the input files with a variety of EMF record structures, using vector graphic file generation libraries or Windows GDI APIs. Pyemf is one such library, with Python bindings, which can be used to generate EMF files. Below is example code generating an EMF file with an EMR_EXTTEXTOUTW record using Windows APIs. Constructing these files with different EMR records ensures functionally different input files, exercising different record handlers in the code.
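The Windows-API snippet referenced above is not reproduced here. As a portable stand-in (not the original code), the sketch below hand-rolls the smallest structurally valid EMF file, an ENHMETAHEADER followed by an EMR_EOF record, per the MS-EMF record layouts; real seeds would splice records such as EMR_EXTTEXTOUTW between the two.

```cpp
#include <cstdint>
#include <vector>

// Append little-endian 32- and 16-bit values to the byte buffer.
static void put32(std::vector<uint8_t>& b, uint32_t v) {
    for (int i = 0; i < 4; ++i) b.push_back(static_cast<uint8_t>(v >> (8 * i)));
}
static void put16(std::vector<uint8_t>& b, uint16_t v) {
    b.push_back(static_cast<uint8_t>(v));
    b.push_back(static_cast<uint8_t>(v >> 8));
}

// Minimal EMF file: 88-byte ENHMETAHEADER + 20-byte EMR_EOF record.
std::vector<uint8_t> minimal_emf() {
    std::vector<uint8_t> b;
    const uint32_t total = 88 + 20;                // whole-file size in bytes
    put32(b, 1);                                   // iType = EMR_HEADER
    put32(b, 88);                                  // nSize of the header record
    put32(b, 0); put32(b, 0); put32(b, 99); put32(b, 99);      // rclBounds
    put32(b, 0); put32(b, 0); put32(b, 2540); put32(b, 2540);  // rclFrame (.01 mm)
    put32(b, 0x464D4520u);                         // dSignature = " EMF"
    put32(b, 0x00010000u);                         // nVersion
    put32(b, total);                               // nBytes
    put32(b, 2);                                   // nRecords: header + EOF
    put16(b, 1);                                   // nHandles
    put16(b, 0);                                   // sReserved
    put32(b, 0); put32(b, 0); put32(b, 0);         // nDescription/offDescription/nPalEntries
    put32(b, 100); put32(b, 100);                  // szlDevice (pixels)
    put32(b, 26);  put32(b, 26);                   // szlMillimeters
    put32(b, 14);                                  // iType = EMR_EOF
    put32(b, 20);                                  // nSize
    put32(b, 0);                                   // nPalEntries
    put32(b, 16);                                  // offPalEntries
    put32(b, 20);                                  // nSizeLast
    return b;
}
```

Writing the returned bytes to files with varying records between header and EOF yields a small, functionally diverse seed corpus.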
Running the Fuzzer
Running the fuzzer is just a matter of running the afl-fuzz command with the parameters shown below. We need to provide the input corpus of EMF files (-i EMFs/), the output directory (-o output/) and the path to the harness binary, with @@ indicating that the fuzzer should pass the mutated file as an argument to the binary. We also need to use -m none, since the ASAN-instrumented binary needs a huge amount of memory.
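Based on those parameters, the full invocation would be (the harness binary name is assumed):

```shell
afl-fuzz -m none -i EMFs/ -o output/ -- ./harness @@
```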
However, we can make multiple tweaks to the running AFL instance to increase the number of executions per second. AFL provides a persistent mode which is in-memory fuzzing. This avoids forking a new process on every run, resulting in increased speed. We can also run multiple AFL instances, one on every core, to increase the speed. Beyond this, AFL also provides a file size minimization tool that can be used to minimize the test case size. We applied some of these optimization tricks and, as we can see below, there is a dramatic increase in the execution speed reaching ~500 executions per second.
After about 3 days of fuzzing this library, we had more than 200 crashes; when we triaged them, we found 5 unique issues. We reported these crashes to the developer of the library along with MITRE, and after they were acknowledged, CVE-2020-11863, CVE-2020-11864, CVE-2020-11865, CVE-2020-11866 and CVE-2020-13999 were assigned to these vulnerabilities. Below we discuss our findings for some of these vulnerabilities.
CVE-2020-11865 – Out of Bounds Memory Access While Enumerating the EMF Stock Objects
While triaging one of the crashes produced by the fuzzer, we saw SIGSEGV (memory access violation) for one of the EMF files given as an input. When the binary is compiled with the debugging symbols enabled, ASAN uses LLVM Symbolizer to produce the symbolized stack traces. As shown below, ASAN outputs the stack trace which helps in digging into this crash further.
Looking at the crash point in the disassembly clearly indicates the out of bounds memory access in GLOBALOBJECTS::find function.
Further analysis showed that the vulnerability was in accessing the global object vector, which holds pointers to stock objects. Stock objects are predefined logical graphics objects that can be used in graphics operations. Each stock object handle has its high-order bit set, as shown below from the MS documentation. During metafile processing, the index of the relevant stock object is determined by masking off the high-order bit, and that index is then used to access the pointer to the stock object. The metafile processing code retrieves the pointer from the global object vector by accessing the index after masking the high-order bit, as seen just above the crash point instruction, but it does not check the size of the global object vector before accessing the index, leading to out-of-bounds vector access while processing a crafted EMF file.
Shown below is the vulnerable and fixed code where the vector size check was added:
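The original patch is not reproduced here, but the essence of the added bounds check can be sketched as follows (function and variable names are illustrative, not libEMF's):

```c
#include <stddef.h>

#define STOCK_FLAG 0x80000000u  /* high-order bit marks a stock-object handle */

/* Hypothetical reconstruction of the lookup: mask off the stock-object
 * flag, then validate the index against the table size before using it.
 * The missing size check is exactly what CVE-2020-11865 exploited. */
void *find_stock_object(void **table, size_t table_len, unsigned int handle) {
    if (!(handle & STOCK_FLAG)) return NULL;  /* not a stock-object handle */
    size_t idx = handle & ~STOCK_FLAG;        /* mask the high-order bit */
    if (idx >= table_len) return NULL;        /* the added bounds check */
    return table[idx];
}
```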
CVE-2020-13999 – Signed Integer Overflow While Processing EMR_SCALEVIEWPORTEXTEX Record
Another crash in the code that we triaged turned out to be a signed integer overflow condition while processing an EMR_SCALEVIEWPORTEXTEX record in the metafile. This record specifies the viewport in the current device context, which is calculated by computing ratios. An EMR_SCALEVIEWPORTEXTEX record looks like this, as per the record specification. A new viewport is calculated as shown below:
As part of AFL’s binary mutation strategy, it applies a deterministic approach where certain hardcoded sets of integers replace the existing data. Some of these are MAX_INT, MAX_INT-1, MIN_INT, etc., which increases the likelihood of triggering edge conditions while the application processes binary data. One such mutation done by AFL in the EMF record structure is shown below:
This resulted in the following crash while performing the division operation.
Below we see how this condition, eventually leading to a denial-of-service, was fixed by adding division overflow checks in the code:
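The essence of such a guard, with illustrative names, is: a signed division overflows when the dividend is INT_MIN and the divisor is -1 (the quotient, INT_MAX + 1, is unrepresentable), and division by zero is undefined, so both cases must be rejected before dividing.

```c
#include <limits.h>

/* Returns 0 and stores the quotient in *out on success, -1 on the two
 * undefined cases: division by zero and INT_MIN / -1 signed overflow. */
int safe_divide(int num, int den, int *out) {
    if (den == 0) return -1;                     /* divide by zero */
    if (num == INT_MIN && den == -1) return -1;  /* signed overflow */
    *out = num / den;
    return 0;
}
```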
CVE-2020-11864 – Memory Leaks While Processing Multiple Metafile Records
Leak Sanitizer (LSAN) is yet another important tool, integrated with ASAN, that can be used to detect runtime memory leaks. LSAN can also be used in standalone mode without ASAN. While triaging the generated crashes, we noticed several memory leaks while processing multiple EMF record structures. One of them, shown below, occurred while processing the EXTTEXTOUTA metafile record; it was later fixed in the code by releasing the memory buffer when there are exceptions reading corrupted metafiles.
Memory leaks lead to excessive resource usage in the system when memory is not freed after it is no longer needed, eventually resulting in denial of service. We found memory leak issues in libEMF's processing of several such metafile records. The same fix, releasing the memory buffer, was applied to all the vulnerable processing code:
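The shape of this fix, with illustrative names rather than libEMF's, is roughly: every early-exit path that follows the allocation must release the buffer, otherwise each truncated or corrupted record leaks memory.

```c
#include <stdlib.h>
#include <string.h>

/* Returns 0 on success, -1 on failure. The key point is that the
 * truncated-record path frees the buffer before bailing out. */
int read_record_payload(const unsigned char *data, size_t avail, size_t want) {
    unsigned char *buf = malloc(want ? want : 1);
    if (!buf) return -1;
    if (avail < want) {   /* corrupted metafile: payload truncated */
        free(buf);        /* the fix: release before the early return */
        return -1;
    }
    memcpy(buf, data, want);
    /* ... process the payload ... */
    free(buf);
    return 0;
}
```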
Additionally, we also reported multiple use-after-free conditions and denial-of-service issues which were eventually fixed in the newer version of the library released here.
Conclusion
Fuzzing is an important process and fundamental to testing the quality of a software product. The process becomes critical, especially when using third-party libraries in a product which may come with exploitable vulnerabilities. Auditing them for security issues is crucial. We believe the vulnerabilities that we reported are just the tip of the iceberg. There are several legacy libraries which likely require a thorough audit. Our research continues with several other similar Windows and Linux libraries and we will continue to report vulnerabilities through our coordinated disclosure process. We believe this also highlights that it is critical to maintain a good level of collaboration between vulnerability researchers and the open source community to have these issues reported and fixed in a timely fashion. Additionally, modern compilers come with multiple code instrumentation tools which can help detect a variety of memory corruption bugs when used early in the development cycle. Using these tools is recommended when auditing code for security vulnerabilities.
Recently, we conducted a survey of 600 families and professionals in the U.S. to better understand what matters to them—in terms of security and the lives they want to lead online. The following article reflects what they shared with us, and allows us to share it with you in turn, with the aim of helping you and your family stay safer and more secure. 1
While many of us take shopping, surfing, and banking online for granted, they mark a dramatic shift for elders. They’ve gone from the days when banking meant banker’s hours and paper passbook to around-the-clock banking and a mobile app. And even if they use the internet sparingly, banking, finances, and commerce have gone digital. Their information is out there, and it needs to be protected.
The good news is, elders are motivated.
What’s on the minds of elders when it comes to their security?
Most broadly, this sentiment captures it well: Technology may be new to me, but I still want to be informed and involved. For example, elders told us that they absolutely want to know if something is broken—and if so, how to fix it as easily as possible. In all, they’re motivated to get smart on the topic of security, get educated on how to tackle risks, and gain confidence that they can go about their time on the internet safely. Their areas of interest were:
Identity protection: This covers a few things—one, it’s monitoring your identity to spot any initial suspicious activity on your personal and financial accounts before it becomes an even larger problem; and two, it’s support and tools for recovery in the event your identity is stolen by a crook. (For more on identity theft, check out this blog.)
Social Security monitoring: Government benefits are very much on the mind of elders, particularly as numerous agencies increasingly direct people to use online services to manage and claim those benefits. Of course, hackers and crooks have noticed. In the U.S., for example, Social Security identified nearly 63,000 likely fraudulent online benefit applications in fiscal 2018, according to the agency’s Office of the Inspector General, up from just 89 in fiscal 2015.
Tech support scams are run by people, sometimes over the phone, who pretend to be from a reputable company. They will ask for access to your computer over the internet, install malware, and then claim there’s a problem. After that, they’ll offer to “help” you by removing that malware—for an exorbitant fee.
Ransomware scams, where a crook will block access to your computer until you pay a sum of money. This is like the tech support scam, yet without the pretense of support—it’s straight-up ransom.
False debt collectors are out there too, acting in many ways like tax scammers. These will often come by way of email, where the hacker will hope that you’ll click the phony link or open a malicious attachment.
Sweepstakes and charity scams that play on your emotions, where you’re asked to pay to receive a prize or make a donation with your credit card (thereby giving crooks the keys to your account).
Where can professionals get started?
With that, we’ve put together several resources related to these topics. Drop by our site and check them out. We hope you’ll find some basic information and knowledge of behaviors that can keep you safe.
Stay Updated
To stay updated on all things McAfee and for more resources on staying secure from home, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.
Survey conducted in October 2019, consisting of 600 computer-owning adults in the U.S.
During a recent investigation at a telecommunications company led by Mandiant Managed Defense, our team was tasked with rapidly identifying systems that had been accessed by a threat actor using legitimate, but compromised, domain credentials. This sometimes-challenging task was made simple because the customer had enabled the Logon Tracker module within their FireEye Endpoint Security product.
Logon Tracker is an Endpoint Security Innovation Architecture module designed to simplify the investigation of lateral movement within Windows enterprise environments. Logon Tracker improves the efficiency of investigating lateral movement by aggregating historical logon activity and providing a mechanism to monitor for new activity. This data is presented in a user interface designed for analyzing investigative leads (e.g., a compromised account) and hunting for suspicious activity (e.g., RDP activity by privileged accounts). Logon Tracker also provides a graph interface that enables the identification of irregular and unique logons, with the option to filter on hostnames, usernames, protocol, time of day, process name, privilege level, and status (success/failure), among others.
Figure 1: Logon Tracker GUI interface
A critical component of a successful incident response is the scoping effort to identify systems that may have been accessed by the adversary. Windows Event Logs offer a commonly utilized method of identifying an adversary’s lateral movement between Windows systems. However, as with all log sources, Windows Event Logs are subject to data retention limits on endpoints, making the aggregated logon activity provided by Logon Tracker a critical source of evidence for incident response.
Logon Tracker’s graphical display, along with the raw logon events, allowed Mandiant Managed Defense to quickly identify 10 potentially compromised hosts and begin to create a timeline of adversary activity.
Managed Defense also leveraged Logon Tracker to monitor for additional suspicious logons and adversary activity throughout the incident response. Searching for logons (both failed and successful) from known compromised accounts and activity originating from compromised systems allowed our investigators to quickly determine which systems should be prioritized for analysis. Additionally, Logon Tracker provides investigators the ability to:
Filter logon data for activity originating from user-provided IP ranges
Search for logon activity by specific privileged accounts, including “Domain Administrators” and “Enterprise Administrators”
Search for any privileged logon using the “Privileged” logon type
Provide alerting and definition of custom rules (coming soon!)
Case Background
In mid-July, the Managed Defense Security Operations Center identified potential credential harvesting activity on a Windows server. The activity included the creation of a scheduled task configured to execute the built-in Windows utility NTDSUTIL to take a snapshot of the active NTDS.dit file and save it locally to a text file, as shown in Figure 2:
Figure 2: Scheduled task creation for NTDS.DIT harvesting
The NTDS.dit file is a database that contains Active Directory data such as user objects, group memberships, groups, and—more useful to an adversary—password hashes for all users in the domain.
Leveraging Logon Tracker and simple timeline analysis, Managed Defense quickly determined an adversary had accessed this system to create a scheduled task from a system with a hostname that did not match the naming convention used within the environment. An anonymized example of Logon Tracker data is shown in Figure 3:
Figure 3: Logon Tracker data
Armed with the suspicious hostname and potentially compromised username, Managed Defense then used Logon Tracker’s search functionality to determine the scope of systems potentially accessed by the adversary.
The resulting investigation revealed that an Internet-facing Customer Relationship Management (CRM) application hosted on a Linux Apache web server had been compromised. Multiple web shells had been placed within web-accessible folders, allowing an adversary to execute arbitrary commands on the server. The adversary leveraged one of these web shells to install a malicious Apache module and restart Apache for the module to take effect. Mandiant has classified this module as COOKIEJAR (see the Malware Appendix at the end of the post for more details). The COOKIEJAR module enabled the adversary to proxy through the compromised server to any arbitrary IP/port pair within the customer’s internal network, see Figure 4.
Figure 4: PCAP data
Using this proxied access to the customer’s network, the adversary leveraged previously compromised domain credentials to connect to multiple Windows servers using SMB. Because the adversary connected directly into the customer’s network through the proxy—much as if on the same LAN—the hostname of the workstation being used to conduct the attack was also passed into the logon events. The adversary’s non-standard hostname, which did not match the customer’s naming convention, helped make scoping an easy task. Additionally, Managed Defense was able to leverage network detection to alert on the authentication attempts and activities of the adversary’s host.
Malware Appendix
During the course of the response, Mandiant identified a customized malicious Apache plugin capable of intercepting HTTP requests to an Apache HTTP server. The new malware family COOKIEJAR was created to aid in clustering and tracking this activity. The COOKIEJAR module installs a pre-connection hook that only runs if the client IP address matches a specified hardcoded adversary-controlled IP address. It listens for SSL/TLS connections on the port specified by the Apache server, using a certificate and private key loaded from /tmp/cacert.pem and /tmp/privkey.pem respectively. If the client IP address matches the hardcoded IP address (Figure 4), the backdoor accepts three commands based on the start of the URL:
/phpconf_t/: Simply writes <html><h1>accepted.</h1></html> as the response. Likely used to test if the server is infected with the malware.
/phpconf_s/: Executes commands on the server. Any communications to and from the system are forwarded to a shell, and are AES-256-ECB encrypted and then Base58 encoded.
/phpconf_p/: Decodes the second encoded string provided as a hostname/port (the first is ignored), using Base58 and AES-256-ECB (same key as before). The server will connect to the remote host and act as a proxy for the command and control (C2). Data to and from the C2 is encoded using Base58 and AES-256-ECB. Data to and from the remote host is not encoded.
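For readers unfamiliar with the encoding layer COOKIEJAR applies on top of AES-256-ECB, a minimal Base58 encoder (Bitcoin alphabet) is sketched below. This is a generic illustration, not code recovered from the malware; the fixed-size digit buffer limits it to inputs of roughly 90 bytes.

```c
#include <stddef.h>

/* Base58 alphabet: omits 0, O, I, and l to avoid visual ambiguity. */
static const char B58[] =
    "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

/* Encodes len bytes of in as NUL-terminated Base58 into out; returns the
 * string length. digits[] holds base-58 digits, least significant first. */
size_t base58_encode(const unsigned char *in, size_t len, char *out) {
    unsigned char digits[128] = {0};
    size_t ndigits = 1, zeros = 0, n = 0;
    while (zeros < len && in[zeros] == 0) zeros++;   /* count leading zero bytes */
    for (size_t i = zeros; i < len; i++) {
        unsigned carry = in[i];
        for (size_t j = 0; j < ndigits; j++) {       /* digits = digits * 256 + byte */
            carry += (unsigned)digits[j] << 8;
            digits[j] = (unsigned char)(carry % 58);
            carry /= 58;
        }
        while (carry) { digits[ndigits++] = (unsigned char)(carry % 58); carry /= 58; }
    }
    for (size_t i = 0; i < zeros; i++) out[n++] = '1';  /* each zero byte maps to '1' */
    if (len > zeros)
        for (size_t i = ndigits; i-- > 0; ) out[n++] = B58[digits[i]];
    out[n] = '\0';
    return n;
}
```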
Figure 5: Hardcoded configuration data within COOKIEJAR
In 2020, months seem to feel like years. Amid rapid change, adaptation is essential. Cyber threats are no exception to this rule. Technology can solve complex problems but can also be destabilizing. We think about this paradox regularly as artificial intelligence (AI) and Machine Learning gain prevalence in our field. Will these technologies drive better outcomes, and improve efficiencies in our cyber defense workforce, or will they introduce more risk to our environment?
Businesses are having to deal with two crises simultaneously—the impact of a global recession and the acceleration of malicious cyber activity. With hundreds of thousands—often even millions—of data points streaming into enterprises on a daily basis, security operations center (SOC) directors need a way to reduce the burden on human analysts. AI abstracts the complexity of cyber defense such that less skilled individuals are capable of investigating highly sophisticated scenarios and executing at scale.
Machine Learning enables security professionals to keep pace with the scale and complexity of data in many important ways. AI is a sophisticated analytics capability that, once trained, can identify malicious situations that are similar to previously identified threats. However, tomorrow’s cyber-attack might be entirely new, flying under the radar of even the best models. We need human intuition and intellect to recognize new attack methods and differentiate them from benign activity.
Let’s examine why AI-driven human-machine teaming is a differentiator for security operations professionals.
How AI is Deployed
Machine learning is already in wide use—it has become a critical part of threat detection. But an AI model’s success is highly dependent upon the quality of available inputs. If machine learning isn’t analyzing the right data, there’s no magic algorithm that can produce a valuable output to accurately assess advanced threats.
Machine learning processes data in bulk and presents security operations teams with actionable intelligence, freeing analysts from the burden of having to comb through massive data sets.
For instance, AI can help security teams conduct triage and prioritize potential threats. Without proper triage, minimally impactful events waste time and distract teams from focusing on more substantial threats. AI can help analysts identify the threats that matter most and prioritize them properly. This automated triaging is critical because the longer high priority threats go unaddressed, the more damage they can inflict.
There are many compelling aspects of AI but it’s not a cybersecurity silver bullet. Just as AI is driving innovation in the field, a few sophisticated attackers have engaged in adversarial AI, a means of tricking machine learning models through malicious inputs or poisoning the training sets. Most of today’s AI models are fragile because the field has traditionally focused on solving problems where an adversary was not incentivized to have the model fail. Conversely, cybersecurity defense is a field where there is an adversary whose objective is to evade detection capabilities, including the latest AI-based solutions. McAfee is studying adversarial AI to make our models more resilient.
Human-Machine Teaming
With more automation and new high-fidelity data, the SOC can focus on complex issues that require human intuition and insight, increasing a security team’s strategic abilities. With McAfee MVISION Insights, we’re turning the concept into a reality.
McAfee MVISION Insights, a key and unique component of the MVISION Endpoint Security platform, enables security analysts to significantly increase the proactive security posture of the organization’s countermeasures while reducing the amount of time that the SOC must spend to accomplish this goal.
We architected MVISION Insights from the ground up to operate on a Human-Machine AI teaming model. Effective analytic models prioritize potential threats by applying algorithms that alert teams to the high-impact campaigns they need to be aware of and provide prescriptive guidance on how to defend the organization.
This is a tremendous benefit to threat hunters who operate in environments where speed and precision in identifying the things that really matter make all the difference.
MVISION Insights does this by analyzing threat telemetry from over a billion sensors, both globally and within an organization, along with the threat research developed by McAfee’s world-leading Advanced Threat Research team. Additionally, metadata describing an enterprise’s security posture enables Insights to deliver a custom recommendation on what products and configuration are needed to defend against specific, high-impact, in-the-wild threats.
In addition to these core capabilities, McAfee will be able to build new modules on top of the Insights foundation. This is possible because we’ve developed Insights as a platform that allows easy integration of new capabilities. That means that as we identify the next generation of AI and data science technologies, we can deploy those features without requiring a customer to deploy new products.
Gain hands-on experience on this distinct proactive endpoint security capability that keenly drives actionable intelligence before an attack occurs. Check out MVISION Insights Preview on McAfee.com to see the top ten threat campaigns.
As organizations continue to adopt DevSecOps, a methodology that shifts security measures to the beginning of the software development lifecycle (SDLC), roles and processes are evolving. Developers are expected to take on increased security measures, such as application security (AppSec) scans, flaw remediation, and secure coding, and security professionals are expected to take on more of a security oversight role.
Developers are taking the necessary steps to adapt to their evolving role and embrace security measures, but they’re often at odds with their other priorities, like rapid deployments. Since developers and security professionals’ priorities are frequently misaligned, it can lead to organizational challenges and security gaps.
Veracode recently sponsored Enterprise Strategy Group’s (ESG) survey of 378 developers and security professionals in North America to better understand the dynamics between these teams and to understand their application security challenges and priorities.
The report highlights five key insights:
1. Most think their application security programs are solid, though many still push vulnerable code.
Respondents were asked to rate the efficacy of their organization’s AppSec program on a scale of zero to 10, zero being “we continually have security issues,” and 10 being “we feel confident in the efficacy and efficiency of our program.” Two-thirds of the organizations surveyed rated their programs as an eight or higher. And, better yet, two-thirds are using their AppSec scans on more than half their codebase.
Despite having a solid AppSec program and leveraging scans, 81 percent of organizations are still experiencing exploits. Why? The research revealed that 48 percent of organizations regularly release vulnerable code to production when they’re under a time crunch. By pushing vulnerable code to production, organizations are putting their applications at risk for a breach.
2. Multiple security testing tools are needed to secure the potpourri of application development and deployment models in use today.
There is no single AppSec testing type that is able to identify every vulnerability. Each testing type has its strengths and cautions. For example, if you only use static analysis, you won’t be able to uncover open source flaws, business logic flaws, or configuration errors. If you only use software composition analysis, you will only identify third-party flaws.
The findings showed that most organizations do employ a mix of testing types. However, there are some gaps. For example, only 38 percent of organizations use software composition analysis. Unless those organizations are using penetration testing, they are likely not testing for third-party vulnerabilities.
3. Developer security training is spotty, and programs to improve developer security skills are lacking.
The survey uncovered that 50 percent of organizations only provide developers with security training once a year or less. Not surprisingly, the survey also uncovered that developers’ top challenge is the ability to mitigate code issues. The only way for developers to improve their knowledge of code vulnerabilities is through security training or programs, like Veracode Security Labs, or AppSec solutions that give developers real-time security feedback as they are coding, like Veracode’s IDE Scan.
4. The proliferation of AppSec testing tools is an issue for many, with more than a third focusing investments on consolidation.
Over 30 percent of organizations are overwhelmed by the amount of AppSec tools currently used across their development teams. They spend too much time managing the tools and processes, which takes away from the effectiveness of their AppSec program. As a result, these organizations are planning future investments to consolidate their tools and processes.
5. Organizations are investing, with more than half planning to significantly increase spending on application security.
When asked about their future application security investments, more than half of the respondents stated that they plan to increase their AppSec spending. The majority are planning to use their investment in the cloud, consolidating AppSec tools, or expanding their use of testing tools.
There are 3.4 billion digital payment system users worldwide. This figure is almost equal to the number of social media users globally and half of the world’s population to date. It is a strong enough reason to believe that online payments dominate the way we pay for goods and transfer money. What is more, online payments for e-commerce websites are a feature your online store can’t do without. So, here are all the answers to your “how” and “why” questions.
What Is an E-Payment System and Its Types?
An electronic payment system is special software that works as an intermediary between the payer and the recipient of funds. In most cases, online payment systems act as disinterested parties; that is, they are only responsible for the money transfer, not for the honesty of the relationship between the seller and the buyer.
Using online payments, none of the parties need physical mediums like cash or checks. All the necessary documents and reports are formed automatically and online to be printed by any of the parties anytime.
Here are the main types of electronic payments.
Automated clearing house (ACH)
Wire transfers
Item processing (IP)
Remote deposit capture (RDC)
FedLine Access Solutions
Automated teller machines
Card services (ATM, credit, debit, prepaid)
Mobile payments
What Are the Benefits of Using E-Payment Systems
Electronic commerce was invented to make shopping more comfortable and convenient. An e-commerce payment system contributes to this goal even more.
Cash flows are difficult to track. This is the opinion of governments, financial institutions, business owners, and a lot of ordinary people too. With electronic payments, however, it is always easy to find out how you spent a certain sum just by checking your financial or accounting app.
Electronic payments are almost instant, just like traditional hand-to-hand cash transfers. However, there is now a strong reason to avoid handling cash at all.
While other businesses suffered from the pandemic or were even completely banned, financial technology fared better than ever, precisely because it has become the safest way to use money without physical health risks. According to recent research by BIS, “Research in microbiology examines whether pathogenic agents, including viruses, bacteria, fungi, and parasites can survive on banknotes and coins. Some viruses, including human flu, can persist for hours or days on banknotes. The Covid-19 virus can also survive on surfaces.” Electronic payments, by contrast, protect you, your staff, and your customers from infection risks.
What Is the Role of an Online Payment System in E-Commerce?
Online payment is the main way to pay for the goods purchased from branded websites. What is more, there is almost no sense in the concept of e-commerce itself if there is no possibility to pay for goods online, since electronic commerce involves 100% electronic interaction between a company and a customer.
Yes, there is still a cash-on-delivery option, which, by the way, may have some benefits, but most online transactions are launched and completed online with the help of an electronic payment system.
What Are the Payment Options You May Choose for Your Ecommerce Store?
Here are the payment options that may potentially suit your eCommerce project. Leading e-commerce brands are using all of them at once, and it greatly contributes to the development of good relations and trust.
However, not all the alternatives may be needed for your startup, since each e-commerce idea is specific. Find out what your potential customers expect before utilizing any of them.
Credit/debit cards
In practice, this approach is realized as a system that allows entering a user’s card data, receiving a confirmation code from a banking app, and completing a deal on the website.
Bank Transfers
This approach is used in B2B e-commerce, since corporate clients often prefer to make bank transfers and be sure of the clarity of the reports.
EWallets
E-wallets are also convenient options that allow users to pay without revealing their banking details.
Mobile payments
Mobile payments are rising in popularity. ApplePay and GooglePay are the most used systems.
Cryptocurrencies
Since a lot of countries are making efforts to legalize cryptocurrencies, they are one more way to pay for goods or services purchased online.
Cash on Delivery
Cash on delivery is still required by some customers, especially if there is no trust between a newly created company, or the company addressed for the first time, and the client.
What to Look for While Choosing the E-Payment System?
As you can see, the e-payment market has a lot of offers for your e-commerce store. Here are the main factors you should take into account making the final choice.
Preferences of your customers. There are a lot of alternatives to choose from, however, your best electronic payment system is the one that suits your customer most. If you know that your customers are corporate clients, it is better to give them the opportunity to make bank transfers. If they are young shoppers, they most probably prefer e-wallets, PayPal, and mobile payments.
Security. This is one of the most important factors, since the security of payments on your website is one of the things that contribute to your reputation. That is why it is better to choose a payment system that offers strong protection and a support service, as well as embedded e-commerce fraud detection features.
UX impact. The best electronic payment system is one more way to provide users with a great experience when completing a transaction with you. That is why the payment system should be fast-processing, reliable, and convenient.
Performance metrics. Find out whether it would be profitable for you to use this or that system from the point of view of commissions, fees, and reporting.
Surely, an e-commerce website should offer more than one payment option. That is why you should compare and analyze the most popular alternatives and integrate your store with the most reliable ones that are in demand among your users.
What Is the Best Online Payment System?
Here is an infographic showing the most popular online payment systems in the USA. Since the greatest number of online shoppers is US based, these may appear to be the most popular systems in the world as well. However, keep one important note in mind: if you are going to create a multilingual e-commerce store and reach target audiences in different countries, some e-payment systems may not be supported there or may be little known among customers in a specific country.
How Do I Add a Payment System to My Website?
There are several ways to add a payment system to your website.
If you are just going to create an eCommerce store and want to do it with the help of WordPress, you may choose the themes with payment system integration in advance.
If you have a ready-made website designed by you, you may contact the support service of the payment gateway provider, and set up the system following their instructions.
If your e-commerce project was created by a development company, ask them to make some changes and add more payment systems to your platform.
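Whichever of these routes you take, most gateway integrations boil down to the same server-side steps: build a charge request, and later verify that incoming payment notifications (webhooks) really came from the gateway. The Python sketch below illustrates this with a hypothetical provider; the payload fields, the HMAC signing scheme, and `GATEWAY_SECRET` are assumptions for illustration, not any real provider's API.

```python
# Minimal sketch of the server-side steps shared by most gateway
# integrations: build a charge request, sign it, verify a webhook.
# All names here are hypothetical, not a real provider's API.
import hashlib
import hmac
import json

GATEWAY_SECRET = b"test-secret"  # would be issued by the provider

def build_charge_request(order_id: str, amount_cents: int,
                         currency: str = "USD") -> dict:
    """Payload the storefront would POST to the gateway's charge endpoint."""
    return {
        "order_id": order_id,
        "amount": amount_cents,  # smallest currency unit avoids float errors
        "currency": currency,
        "capture": True,
    }

def sign(payload: dict) -> str:
    """HMAC signature sent alongside the payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(GATEWAY_SECRET, body, hashlib.sha256).hexdigest()

def verify(payload: dict, signature: str) -> bool:
    """Check a webhook notification really came from the gateway."""
    return hmac.compare_digest(sign(payload), signature)

req = build_charge_request("order-1001", 4999)
sig = sign(req)
assert verify(req, sig)  # a tampered payload or forged signature would fail
```

Whether you use a WordPress plugin, the gateway's own instructions, or a development company, the signature-verification step is what keeps attackers from injecting fake "payment succeeded" notifications into your store.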
Conclusion
As you can see, e-commerce loses its meaning if there is no way to pay for goods or services online. That is why payment system integration is an important stage of eCommerce store development. The choice of the most suitable solutions should be based on careful analysis of the market and your target audience. What is more, the most popular payment systems are not always the most suitable ones; sometimes a system needs to be built from scratch to satisfy the business needs. Make sure to get in touch with a reliable vendor and ask for help.
We invite you to join us in observing National Cybersecurity Career Awareness Week, November 9-14, 2020. The campaign focuses local, regional, and national interest to inspire, educate, and engage citizens to pursue careers in cybersecurity. National Cybersecurity Career Awareness Week takes place during November's National Career Development Month. Each day of the campaign provides an opportunity to learn about the contributions and innovations of cybersecurity professionals and the plethora of jobs in the field.
Posted by Eugene Liderman and Xevi Miro Bruix, Android Security and Privacy Team
Trust is very important when it comes to the relationship between a user and their smartphone. While phone functionality and design can enhance the user experience, security is fundamental and foundational to our relationship with our phones. There are multiple ways to build trust around the security capabilities that a device provides, and we continue to invest in verifiable ways to do just that.
The Internet of Secure Things Alliance (ioXt) manages a security compliance assessment program for connected devices. ioXt has over 200 members across various industries, including Google, Amazon, Facebook, T-Mobile, Comcast, Zigbee Alliance, Z-Wave Alliance, Legrand, Resideo, Schneider Electric, and many others. With so many companies involved, ioXt covers a wide range of device types, including smart lighting, smart speakers, webcams, and Android smartphones.
The core focus of ioXt is “to set security standards that bring security, upgradability and transparency to the market and directly into the hands of consumers.” This is accomplished by assessing devices against a baseline set of requirements and relying on publicly available evidence. The goal of ioXt’s approach is to enable users, enterprises, regulators, and other stakeholders to understand the security in connected products to drive better awareness towards how these products are protecting the security and privacy of users.
ioXt’s baseline security requirements are tailored for product classes, and the ioXt Android Profile enables smartphone manufacturers to differentiate security capabilities, including biometric authentication strength, security update frequency, length of security support lifetime commitment, vulnerability disclosure program quality, and preloaded app risk minimization.
We believe that using a widely known industry consortium standard for Pixel certification provides increased trust in the security claims we make to our users. NCC Group has published an audit report that can be downloaded here. The report documents the evaluation of Pixel 4/4 XL and Pixel 4a against the ioXt Android Profile.
Security by Default is one of the most important criteria used in the ioXt Android profile. Security by Default rates devices by cumulatively scoring the risk for all preloads on a particular device. For this particular measurement, we worked with a team of university experts from the University of Cambridge, University of Strathclyde, and Johannes Kepler University in Linz to create a formula that considers the risk of platform signed apps, pregranted permissions on preloaded apps, and apps communicating using cleartext traffic.
In partnership with those teams, Google created Uraniborg, an open source tool that collects necessary attributes from the device and runs it through this formula to come up with a raw score. NCC Group leveraged Uraniborg to conduct the assessment for the ioXt Security by Default category.
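As a rough illustration of how a cumulative "Security by Default" score could work, here is a hedged Python sketch. The weights, the `PreloadedApp` structure, and the field names are invented for illustration; they are not the actual Uraniborg formula, which is published with the open source tool.

```python
# Illustrative sketch of a cumulative preload risk score in the spirit
# of the Security by Default measurement described above. Weights and
# fields are invented for illustration, not the real Uraniborg formula.
from dataclasses import dataclass, field

@dataclass
class PreloadedApp:
    name: str
    platform_signed: bool = False        # shares the platform signing key
    pregranted_permissions: list = field(default_factory=list)
    uses_cleartext_traffic: bool = False

# Hypothetical weights: higher means riskier.
WEIGHTS = {"platform_signed": 5.0, "pregranted_permission": 1.5, "cleartext": 2.0}

def app_risk(app: PreloadedApp) -> float:
    """Risk contribution of a single preloaded app."""
    score = 0.0
    if app.platform_signed:
        score += WEIGHTS["platform_signed"]
    score += WEIGHTS["pregranted_permission"] * len(app.pregranted_permissions)
    if app.uses_cleartext_traffic:
        score += WEIGHTS["cleartext"]
    return score

def device_risk(preloads: list) -> float:
    """Cumulative raw score for all preloads on a device (lower is better)."""
    return sum(app_risk(a) for a in preloads)

device = [
    PreloadedApp("vendor.browser", uses_cleartext_traffic=True),
    PreloadedApp("vendor.telemetry", platform_signed=True,
                 pregranted_permissions=["READ_PHONE_STATE",
                                         "ACCESS_FINE_LOCATION"]),
]
print(device_risk(device))  # → 10.0
```

The key design idea is that the score is cumulative over every preload, so a device with fewer platform-signed apps, fewer pregranted permissions, and no cleartext traffic scores lower, i.e. is more secure by default.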
As part of our ongoing certification efforts, we look forward to submitting future Pixel smartphones through the ioXt standard, and we encourage the Android device ecosystem to participate in similar transparency efforts for their devices.
Acknowledgements: This post leveraged contributions from Sudhi Herle, Billy Lau, and Sam Schumacher.
Engineers at the National Institute of Standards and Technology (NIST) have developed a flexible, portable measurement system to support design and repeatable laboratory testing of fifth-generation (5G) wireless communications devices with unprecedented accuracy across a wide range of signal frequencies and scenarios. The system is called SAMURAI, short for Synthetic Aperture Measurements of Uncertainty in Angle of Incidence. The system is the first to offer 5G wireless measurements with accuracy that can be traced to fundamental physical standards — a key feature because even tiny errors can
Rise in settlements in 2019 included those paid to departing tech security staff shortly before major breach
The Bank of England paid departing staff almost £3m in “golden goodbyes” over 15 months, at the same time as an exodus of workers from its information security team.
Settlement payments to former staff surged to £2.3m in 2019, according to data provided to the Guardian under freedom of information laws. The Bank confirmed that former information security staff received some of the payments.
Solving puzzles has been a very popular pastime for InfoSec professionals for decades. I couldn't imagine a DefCon without the badge challenge. At Black Hat 2020, Matt Wixey, Research Lead at PwC UK, didn't disappoint as he presented on parallels between puzzle-solving and addressing InfoSec problems.
Puzzle (and problem) solving can be taught
Solving a puzzle and a problem is very similar. They usually involve two primary functions, which may feed into each other in a circular fashion:
Understanding the problem
Searching for a solution
Problem-solving is often thought of as an innate ability that cannot be taught, but that's not true. You can teach comfort with ambiguity and feeling around the edges of a problem's solution.
Problem-solving does not require expertise, but it can help in some circumstances. Experts tend to know more problem schemas and can more easily chunk problems into smaller, manageable parts, so they can recognize that a problem follows the same pattern as one they've solved before. However, assumptions can also lead you astray. Puzzle makers may even purposefully take you astray, playing with your assumptions.
In a test where experts and novices were pitted against each other, experts took about as much time to solve problems, but they made fewer mistakes than the novices.
The role of bias in problem-solving
Problem-solving is subject to the same kinds of challenges as decision-making. Biases come in many forms, and any of them can hinder a person from solving a problem; be aware of the biases that may be impacting your thinking.
Problem-solving in InfoSec
Problems in InfoSec are often knowledge-rich and ill-defined. Practitioners include both experts and, because of the chronic skills shortage, many novices. There are ample schemas for these problems.
Wixey asserts that even if you change the "cover story" of the problem, the problem space remains the same. Not telling your colleague the full story may actually be useful in solving the problem in some cases. He encourages diversity in background and expertise, and of course, applying your experience in solving puzzles to real-world problems.
Designing the perfect puzzle
Designing a puzzle can be difficult and time-consuming. The perfect puzzle has an interesting premise but very little explanation. Hidden "trap door" functions, red herrings, and easter eggs are optional but can add variety to a puzzle. Interesting puzzles may ask something completely unconnected to the premise, but the puzzle should have internal logic, where the answer can be obtained just from the question. It should not require specialist knowledge beyond what you can get from a quick search.
A personal lesson learned after generating my first puzzle was to have it field-tested by a few people. I thought that there was a direct, linear path to the solution for a puzzle I created, but there were actually several paths that led to dead ends, which was frustrating to some puzzle solvers.
Let's solve some puzzles!
At Veracode, we have regular puzzle challenges as part of the Veracode Hackathons. We have people from around the company provide their puzzles based on themes, and then the whole project is curated by our puzzle masters. If you'd like to dip your brain into years of Veracode internal puzzle challenges, check out Vera.codes.
It is important for businesses to be aware of what is happening in the industry, as industry trends impact companies at the micro level. You cannot reach a wider market without knowing what is happening around you.
The best way to be aware is to pay attention to the facts and figures. In this article, we will highlight some payment stats to help you understand the market landscape.
We have concentrated on global stats to explain the global landscape. Since ecommerce is ‘beyond borders’, it is important for businesses to know what the international audience wants so they can continue to serve them well.
#1 Cash Is On the Decline
Many countries around the world have gone cashless.
Cash now accounts for only 77 percent of all transactions, down from 89 percent about five years ago. The figure is expected to fall even further due to the current situation, which has forced buyers to use alternative methods, including contactless payment solutions.
According to this report, e-wallets will have a 28 percent market share by 2022. However, cash isn't going away anytime soon. In fact, the value of euro cash in circulation has increased in the last few years.
Some countries are taking steps to remove cash, while some are still heavily dependent on paper money.
Cash is the second most widely used form of payment in the US after debit cards. Considering that New York, San Francisco, and Philadelphia recently passed laws requiring merchants to accept cash payments, it's safe to say that cash will continue to prevail in the US.
Still, businesses need to be proactive as users prefer merchants who offer a variety of payment options including digital coins.
#2 Electronic Payments Are Rising
The global use of debit and credit cards (combined) grew from 5 percent to 9 percent between 2012 and 2017.
In recent times, debit cards have declined in popularity, but demand for credit cards has only increased as new entrants like Apple Pay join the market.
Apple Pay was originally marketed as an e-payment solution, but the company's decision to issue physical cards changed the game.
Consumers have a lot of faith in credit cards as they are easy to use and come with some other benefits including rewards. However, their dominance is being challenged now thanks to electronic payment options.
The global digital payments market is growing at a rate of 12.8 percent and is expected to continue to grow at this rate for the next three years.
About 50 percent of all transactions in North America are conducted electronically, making it the global leader. Europe isn't far behind either. The use of electronic payments is very common in most European countries.
About 47 percent of all European card transactions involve NFC technology. Asian countries including China, India, and Pakistan are also making use of electronic payments.
The Chinese electronic payments market is among the fastest growing: it increased tenfold between 2012 and 2017. The introduction of Alipay and WeChat Pay can be credited for this huge growth.
The scenario is similar in African countries as well, especially Nigeria, which is ahead in the technological race.
These figures show the importance of electronic payments. It can be hard for businesses to sustain themselves if they do not offer e-payments. Look for a payment partner that offers third-party integrations so that you do not have to use multiple providers.
#3 Mobile Payments For the Future
Before moving ahead, let’s be clear that there’s a difference between mobile payments and electronic payments.
Mobile payments involve the use of mobile apps, whereas electronic payments can be made via credit and debit cards without using digital wallets or apps.
Since many people carry smartphones, they find it easier to use mobile devices to make payments.
The use of mobile devices for making payments at the point of sale is expected to increase to 28 percent by 2022.
This option is most popular among the younger generations (Gen Z and millennials). About 28 percent of millennials have used a digital wallet at the point of sale, roughly 8 percentage points higher than the general population.
Younger people use digital wallets about five times a month, according to Billtrust. As the population ages, this gap is expected to widen because the newer generations are used to mobile devices.
The scenario, however, is not the same all around the world as mobile payments are still not very popular in developing countries.
Only 37 percent of global merchants support mobile payments at the point of sale. On the positive side, about 31.4 percent intend to introduce this feature soon.
Businesses must provide consumers with the payment facilities they need to keep them from going to competitors.
Conclusion
These stats highlight the diversity in the global payments landscape. Retailers must take steps to know what their customers need so they can bring changes to the payment ecosystem.
A lack of payment options is one of the major reasons why the average cart abandonment rate is as high as 69.56 percent.
Remember that today’s customers are spoiled for choice. They will not think twice before moving to another seller if you do not have the payment option that they prefer.
Look for a payment partner who understands your requirements and can offer the services that you need.
Bio:
Lou Honick is the CEO of Host Merchant Services. Prior to founding Host Merchant Services in 2010, Lou was the founder of HostMySite.com and received numerous awards including SBA Young Entrepreneur of the Year, Inc Magazine 30 under 30, and multiple listings on the Inc 500. As a serial entrepreneur, all of his companies have operated on a singular devotion to outstanding customer service and support. Lou is a respected expert on the topics of customer service, payments and fintech, Internet technology, and entrepreneurship.
The high-profile Twitter accounts compromised included Barack Obama, Elon Musk, Kanye West, Bill Gates, Jeff Bezos, Warren Buffett, Kim Kardashian, and Joe Biden. Around £80,000 of Bitcoin was sent to the scammer's Bitcoin account before Twitter swiftly took action by deleting the scam tweets and blocking every 'blue tick' verified Twitter user from tweeting, including me. While the Twitter hack and scam dominated media headlines around the world, the attack was not the 'highly sophisticated cyber-attack' reported by many media outlets, but it was certainly bold and clever. The attackers phoned Twitter administrative staff and blagged (socially engineered) their Twitter privileged account credentials out of them, which in turn gave the attackers access to Twitter's backend administrative system and to any Twitter account they desired. It is understood this Twitter account access was sold by a hacker on the dark web to a scammer in the days before the attack, and that the scammer(s) orchestrated near-simultaneous Bitcoin scam tweets posted from the high-profile accounts. On 31st July, law enforcement authorities charged three men for the attack, with one of the suspects disclosed as a 19-year-old British man from Bognor Regis.
There was a very serious critical Windows vulnerability disclosed as part of the July 2020 Microsoft 'Patch Tuesday' security update release. Dubbed "SIGRed", it is a 17-year-old Remote Code Execution (RCE) vulnerability in Windows Domain Name System (DNS), a component commonly present in Microsoft Windows Server 2008, 2012, 2012 R2, 2016 and 2019. Disclosed as CVE-2020-1350, it was given the highest possible CVSS score of 10.0, which basically means the vulnerability is "easy to attack" and "likely to be exploited", although Microsoft said they hadn't seen any evidence of its exploitation at the time of their patch release.
Given SIGRed is a wormable vulnerability, it is particularly dangerous, as wormable malware could exploit it to rapidly spread over flat networks without any user interaction, as per the WannaCry attack on the NHS and other large organisations. Secondly, it could be used to exploit privileged accounts (i.e. admin accounts found on servers). The Microsoft CVE-2020-1350 vulnerability can be mitigated on affected systems either by applying the Microsoft Windows DNS Server patch (https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2020-1350) or by applying a registry workaround (https://support.microsoft.com/en-us/help/4569509/windows-dns-server-remote-code-execution-vulnerability).
At least 10 universities in the UK had student data stolen after hackers attacked Blackbaud, an education-focused cloud service provider. UK universities impacted included York, Loughborough, Leeds, London, Reading, Exeter and Oxford. According to the BBC News website, Blackbaud said "In May of 2020, we discovered and stopped a ransomware attack. Prior to our locking the cyber-criminal out, the cyber-criminal removed a copy of a subset of data from our self-hosted environment."
As expected, the UK Government ordered UK mobile network operators to remove all Huawei 5G equipment by 2027 and banned the purchase of Huawei 5G network equipment after 31st December 2020. Digital Secretary Oliver Dowden said the move follows sanctions imposed by the United States, which claims the Chinese firm poses a national security threat, an allegation Huawei continues to resolutely deny. The ban is expected to delay the UK's 5G rollout by a year. "This has not been an easy decision, but it is the right one for the UK telecoms networks, for our national security and our economy, both now and indeed in the long run," he said.
In some media quarters, it was suggested the UK u-turn on Huawei could lead to cyberattack repercussions after Reuters said its sources confirmed China was behind cyberattacks on Australia's critical national infrastructure and government institutions following Australia's trade dispute with China. A Russian hacking group (APT 29) was jointly accused of targeting the theft of coronavirus vaccine research by the UK NCSC, the Canadian Communications Security Establishment (CSE), the United States Department of Homeland Security (DHS), the Cybersecurity and Infrastructure Security Agency (CISA), and the US National Security Agency (NSA). The UK's National Cyber Security Centre (NCSC) said the hackers "almost certainly" operated as "part of Russian intelligence services". It did not specify which research organisations had been targeted, or whether any coronavirus vaccine research data was taken, but it did say vaccine research was not hindered by the hackers. Russia's ambassador to the UK rejected the allegations: "I don't believe in this story at all, there is no sense in it," Andrei Kelin told the BBC's Andrew Marr Show. Meanwhile, Foreign Secretary Dominic Raab said it is "very clear Russia did this", adding that it is important to call out this "pariah-type behaviour".
UK sport reported that hackers tried to steal a £1 million club transfer fee and froze turnstiles at a football game. Cybercriminals hacked a Premier League club managing director's email account during a player transfer negotiation; the million-pound theft was only thwarted by a last-minute intervention by a bank. Another English football club was targeted by a ransomware attack which stopped its turnstiles and CCTV systems from working and nearly resulted in a football match being postponed. Common tactics used by hackers to attack football clubs include compromising emails, cyber-enabled fraud, and ransomware that shuts down digital systems. For further information on this subject, see my extensive blog post on football club hacking, The Billion Pound Manchester City Hack.
Yet another big data exposure caused by a misconfigured AWS S3 bucket was found by security researchers: one million files belonging to fitness brand V Shred were discovered exposed to the world, including the personal data of 99,000 V Shred customers. Interestingly, V Shred defended itself against the researchers' findings by claiming it was necessary for user files to be publicly available, and denied that any PII had been exposed.
Psychological operations, or PsyOps, is a topic I've been interested in for a while. It's a blend of social engineering and marketing, both passions of mine. That's why I found the keynote by Renée DiResta, Research Manager at the Stanford Internet Observatory, particularly interesting.
The Internet Makes Spreading Information Cheap & Easy
Disinformation and propaganda are old phenomena that can be traced back to the invention of the printing press, and arguably before then. With the advent of the Internet, the cost of publishing dropped to zero. There are no hosting costs on certain platforms, but especially in the beginning, the blogosphere was very decentralized and it was hard to get people to read your content. With the rise of social media, you can share your content and it can go viral. At the same time, content creation becomes easier. All of this eliminates cost barriers and gatekeepers.
State Actors "Hack" Our Opinions
As social media platforms matured, the algorithms that curate content became more and more sophisticated. They try to group people and deliver personally targeted content, which allows adversaries to analyze and game the algorithms.
State actors don't just influence, they start hacking public opinion, which involves fake content producers and fake accounts. They can do this effectively because they understand the ecosystem extremely well, typically applying one of four tactics, sometimes in combination:
Distract: Taking attention away from news stories that are detrimental to the state actor
Persuade: Providing convincing content to sway a target's opinion
Entrench: Getting individuals to identify with their peer groups and dig their heels in
Divide: Pitting groups against each other to spread dissent
Architecture of a Modern Information Operation
Information operations often create fake public personas, such as journalists, to produce content. They then seed it to social media and amplify it through bot accounts to get organic shares among the population. The ultimate goal is to have mass media pick up the stories and amplify them even further.
Many of these campaigns use algorithmic manipulation. The Russian disinformation campaign around the 2016 election spent only $100,000 on advertising; its real lift came from creating compelling content that people shared organically.
From a defensive perspective, you can look at these operations as a kill chain. Ask yourself: which part of the chain can I disrupt to slow or stop the campaign? The last hop to mass media is particularly important.
Telling a Positive Story About China
China is a powerful player in information operations, but we'll see in a moment that their operations have less impact than Russia's. However, their network infiltration operations, which can be related to information operations, are already very advanced.
In a nutshell, the goal of China's information operations is to "Tell China's Story Well". They are primarily concerned with persuasion, sometimes distraction. For example, during the COVID-19 crisis, China first controlled domestic perception, then put out English-language posts about the WHO praising the Chinese response. They pushed this out on Facebook to ensure they reached large global audiences. They flip back and forth between funny things that people retweet and more aggressive messages.
A Look Into Chinese Information Operations
China has decades of experience in both covert and overt domestic information management. They are now taking these inward-facing capabilities and employing them outside their borders.
We can classify their content sources into three categories:
Light: Official state news outlets
Grey: Content farms that are not easily attributable to the state and push out fake political stories
Dark: Purely online properties that spread disinformation
Even though Facebook is banned in China, its content platforms have more than 220 million followers. China has also expanded to troll accounts and covert strategies, which have been taken down from Facebook and Twitter on some occasions.
As Western media began to cover the Hong Kong protests, Chinese troll accounts surfaced, pretending to be Hong Kong citizens, and told journalists that they had got the story completely wrong. However, China lost its Hong Kong bots early in the protests because they were shut down. Research showed that most accounts were not created pre-emptively but as a reaction to a crisis.
China Is Struggling to Have Real Impact, But They'll Get Better
The surprising thing was that 92 percent of the accounts had fewer than 10 followers. Most tweets didn't even have a "like," and the maximum engagement on a tweet was 3,700. In other words, China did a very poor job of getting real people to pick up its content.
While China is good at creating content, they are sloppy at their social media game. China is well resourced and committed to improving. At the same time, we shouldn't overemphasize the impact of these efforts.
Russia's Game: Entrench and Divide
By comparison, Russia is best in class when it comes to information operations. They excel at creating agents of influence and manipulating media. They use network infiltration as one of their tactics, both to hack public influencers and to leak data to the media.
Russia has the same range of overt and covert media, from light to dark, but it spends a fraction of China's budget. One example of a covert content source is BlackMattersUS, which is officially operated by an American activist but is actually run by a Russian contractor in St. Petersburg.
Its media outlets have fewer Facebook followers, only in the range of 39 million, but they get far more engagement. Russia is much better at segmenting its audience and creating custom content that plays into its narratives, entrenching and dividing its audiences. They are also better at picking the right type of media for the audience and social network, e.g. videos for young millennials.
Russian Memes vs. Chinese Narratives
While China focuses mainly on creating a certain narrative, Russia focuses much more on memes that convey feelings or a point of view. Much of this content is generated by the Internet Research Agency (IRA), a Russian content farm that is not officially associated with the government, creating plausible deniability. They focus on social content first, which lends itself to certain types of media.
Memes play on how people feel. They are identity-focused and entrench people in their groups. Content is created to reinforce their beliefs, and by sharing it, individuals signal membership in their group. Interestingly, the IRA does this both on the political left and on the right, splitting the country in two.
Creating Agents of Influence
Russia doesn't stop at online engagement and shaping opinions. It wants to create agents of influence who go out on the street and conduct activism. When you follow the Internet Research Agency page or like a piece of content, you signal to the IRA that you're sympathetic to a particular point of view.
What DiResta has observed is attempts to recruit these people through constant outreach, more than you'd typically see from a media outlet. They offer financial resources and logistical support to turn people into agents of influence, mobilizing them and getting them out into the streets as activists. This happens behind the scenes, in direct messages, not visible if you're simply looking at the memes on social media.
Throwing Hacked Data into the Mix
Russia goes one step further, engaging GRU hacking operations in its information campaigns. APT28, also known as Fancy Bear, began creating fake Facebook pages years ago when the GRU was experimenting with these tactics.
The green circles represent fake public personas, often journalists, that put out geopolitical content on their own fake media sites. They share the content with Western and regional blogs to gain wider distribution. However, the GRU did not have much success with this tactic.
They have since modified their approach. Public officials or agencies are hacked, then the material is offered to journalists through fake personas such as Guccifer 2.0. The Internet Research Agency then creates memes based on the content to amplify it on social media. Finally, RT and Sputnik, Russia's state news outlets, discuss the substance of the hack while denying their state's involvement.
While China is focused on telling a positive story about its country, Russia is more interested in exploiting divisions in our society and vulnerabilities in our information ecosystem.
Russia Will Use the Same Methods in the 2020 Elections
We should expect Russia to employ similar tactics in the 2020 U.S. presidential elections:
Hacking & leaking operations
Hacking voting machines
Infiltrating groups
Amplifying narratives
Even if Russia doesn't hack the voting machines, just claiming they've been successful will cause mistrust in the elections. And that is their goal: undermining confidence in our political system.
The Effects Will Outlast Active Operations
You can't hack a social system if the system is resistant to the attack, but our country is divided and very vulnerable. DiResta found an activist's page on Facebook that consisted of 40 percent IRA content. However, the person behind the page was real, not a bot. They were sharing the content because the IRA had created messages that resonated extremely well.
People internalize opinions through repetition. False stories are memorized by real people and spread long after active operations have ceased. We're all more instrumented than ever before.
The challenge for scientific research is that we can easily quantify likes and retweets and see how people are reacting, but it's hard to see whether any of it changed hearts and minds.
What Does This Mean for Corporate Information Security?ツ?ツ?
If you???re a CISO in a company with international competitors,ツ?you'reツ?just as much at risk. Companiesツ?with geopolitical aspects such asツ?fracking for oilツ?andツ?agricultural firms likeツ?Monsanto haveツ?alreadyツ?been targets.ツ?Companiesツ?taking part in social issuesツ?have seenツ?contentツ?against them amplified on social media.ツ?ツ?
However, most companies don???t have a position on the org chart to deal with adversarial information operations. As aツ?CISO, you probably need to start thinking about how you would respond.ツ?But the question isn???t purelyツ?technical. It's not a social media analysis problem.ツ?Youツ?need toツ?conduct red teamingツ?exercisesツ?that involve people from both technical teams and corporate communications.ツ?ツ?
If you found this post interesting and would like an overview of additional Black Hat sessions, visit the Veracode blog.
At McAfee, women are finding the inclusiveness and tools to succeed in cybersecurity sales, a field often misconceived because of its technical background. They are doing so through perseverance, resolve, and the know-how to excel as sales professionals.
We recently kicked off our Women in Sales Series, which features inspiring women at McAfee. In Part 1, leading women discussed opportunities, and in Part 2, they outlined the necessary skills to succeed.
This week, meet more women at McAfee as they share their advice and the distinguishing qualities or characteristics that defined their success as cybersecurity sales professionals.
Self-belief: “Be strong, and do not disqualify yourself if you aren’t the expert in the room. Remember, nobody has all the answers. Don’t be afraid to ask questions — it demonstrates your willingness to learn. Believe in yourself.”
— Amy, Enterprise Sales, Charlotte, NC, United States
Resiliency and practice: “Be brave, and in the face of adversity, just keep ‘swimming.’ You won’t ever become good at anything if you do not try, and like in most things, you only become good at sales with practice. Don’t be afraid to network; it’s essential to building connections outside
the workplace. Be open to learning either through a mentor or as a volunteer in the industry; it will help you overcome any obstacles.”
— Anastasia, Inside Sales, Cork, Ireland
Curiosity: “Be confident, be prepared, and be a good listener, but most importantly, be curious and willing to improve. When your instinct tells you that you can do it, do it or learn how to do it. If you don’t understand, ask until you do — the willingness to learn and be honest with yourself will take you a long way. Looking inward will help you work towards self-development. Own your success and listen to your customers and colleagues; they offer so much knowledge. I’ve been lucky to know sponsors who have guided me and to have signed up for McAfee’s mentor program. The value of sponsors and mentors cannot be overstated. Never stop learning.”
— Guari, Inside Sales, Mumbai, India
Ambition: “Put yourself out there. Do not become complacent and settle for something below your expectations or the intended result. Make sure your personal brand is solid and leverage that to make connections and network.”
— Jardin, Inside Sales, Plano, TX, United States
Persistence: “When you get your foot in the door for an interview, be persistent. You should be strong and ask tough questions that may move you out of your comfort zone. I try to close the interviewer and learn from the conversation by asking what concerns they had about my candidacy. Use that opportunity to see where you might improve or strengthen your interview skills. Doing so shows the prospective employer the type of salesperson you can be.”
— Kate, Enterprise Sales – Federal, Washington, D.C., United States
Self-confidence: “Don’t doubt yourself — you can do it. If you are eager and motivated, you can accomplish just about anything. But keep your options open throughout your career journey. You may not want to take a lateral move or step back, but that may be just the opportunity you need to learn more and eventually advance. There are so many roles that support the sales teams to help get you started; not all are quota-carrying roles.”
— Krista, Enterprise Sales, Detroit, MI, United States
Assertiveness and patience: “Take charge of your future. When you put yourself out there, you show everybody what you are made of and can rise to the occasion. Your peers will keep you top of mind for the next project or assignment. Remember, though, give yourself grace and time to grow. It may not happen overnight, but it will happen if you demonstrate your ability to assume responsibility.”
— Kristol, Inside Sales, Plano, TX, United States
Determination: “Don’t hesitate to go after something you want that you know you are qualified for. Keep trying, and don’t give up on things you are passionate about.”
— Margot, Inside Sales, Plano, TX, United States
Authenticity: “I believe that if you know what you want, are determined to get it, and remain true to yourself, you will achieve great things in your career. Your customers and colleagues will appreciate you for it.”
— Marta, Inside Sales, Cork, Ireland
Passion: “First, you should have passion. Love what you do and spend time doing it. And get ready to learn the fundamentals for the role that you embrace.”
— Sandra, Presales/Sales Engineering, Sydney, Australia
Each of these women offers her own unique perspective and talents. It takes their combined skills and voices to achieve our mission and protect all that matters — together is power. Next week, McAfee women share key ingredients, such as growth and balance, for sales success.
Interested in joining a company that supports inclusion and belonging? Search our jobs. Subscribe to job alerts.
The FireEye Front Line Applied Research & Expertise (FLARE) Team
attempts to always stay on top of the most current and emerging
threats. As a member of the FLARE Reverse Engineer team, I recently
received a request to analyze a fairly new credential stealer
identified as MassLogger. Despite the lack of novel functionalities
and features, this sample employs a sophisticated technique that
replaces the Microsoft Intermediate Language (MSIL) at run time to
hinder static analysis. At the time of this writing, there is only one
publication discussing the MassLogger obfuscation technique in
some detail. Therefore, I decided to share my research and tools to help analyze
MassLogger and other malware using a similar technique. Let us
take a deep technical dive into the MassLogger credential stealer and
the .NET runtime.
Triage
MassLogger is a .NET credential stealer. It starts with a launcher
(6b975fd7e3eb0d30b6dbe71b8004b06de6bba4d0870e165de4bde7ab82154871)
that uses simple anti-debugging techniques which can be easily
bypassed when identified. This first stage loader eventually
XOR-decrypts the second stage assembly which then decrypts, loads and
executes the final MassLogger payload (bc07c3090befb5e94624ca4a49ee88b3265a3d1d288f79588be7bb356a0f9fae)
named Bin-123.exe. The final payload can be
easily extracted and executed independently. Therefore, we will focus
exclusively on this final payload, where the main anti-analysis
technique is used.
Basic static analysis doesn’t reveal anything too exciting. We
notice some interesting strings, but they are not enough to give us
any hints about the malware’s capabilities. Executing the payload in a
controlled environment shows that the sample drops a log file that
identifies the malware family, its version, and most importantly some
configuration options. A sample log file is described in Figure 1. We
can also extract some interesting strings from memory as the sample
runs. However, basic dynamic analysis is not sufficient to extract all
host-based indicators (HBIs), network-based indicators (NBIs) and
complete malware functionality. We must perform a deeper analysis to
better understand the sample and its capabilities.
User Name: user
IP: 127.0.0.1
Location: United States
OS: Microsoft Windows 7 Ultimate 32bit
CPU: Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz
GPU: VMware SVGA 3D
AV: NA
Screen Resolution: 1438x2460
Current Time: 6/17/2020 1:23:30 PM
MassLogger Started: 6/17/2020 1:23:21 PM
Interval: 2 hour
MassLogger Process: C:\Users\user\Desktop\Bin-123.exe
MassLogger Melt: false
MassLogger Exit after delivery: false
As Administrator: False
Processes:
Name:cmd, Title:Administrator: FakeNet-NG - fakenet
Name:iexplore, Title:FakeNet-NG - Internet Explorer
Name:dnSpy-x86, Title:dnSpy v6.0.5 (32-bit)
Name:cmd, Title:Administrator: C:\Windows\System32\cmd.exe
Name:ProcessHacker, Title:Process Hacker [WIN-R23GG4KO4SD\user]+ (Administrator)
### WD Exclusion ###
Disabled
### USB Spread ###
Disabled
### Binder ###
Disabled
### Window Searcher ###
Disabled
### Downloader ###
Disabled
### Bot Killer ###
Disabled
### Search And Upload ###
Disabled
### Telegram Desktop ###
Not Installed
### Pidgin ###
Not Installed
### FileZilla ###
Not Installed
### Discord Tokken ###
Not Installed
### NordVPN ###
Not Installed
### Outlook ###
Not Installed
### FoxMail ###
Not Installed
### Thunderbird ###
Not Installed
### QQ Browser ###
Not Installed
### FireFox ###
Not Installed
### Chromium Recovery ###
Not Installed
### Keylogger And Clipboard ###
[20/06/17] [Welcome to Chrome - Google Chrome] [ESC]
Like many other .NET malware families, MassLogger obfuscates all of its
method names and even the method control flow. We can use de4dot to automatically deobfuscate the MassLogger
payload. However, looking at the deobfuscated payload, we quickly
identify a major issue: Most of the methods contain almost no logic as
shown in Figure 2.
Figure 2: dnSpy showing empty methods
Looking at the original MassLogger payload in dnSpy’s Intermediate Language (IL) view confirms
that most methods do not contain any logic and simply return nothing.
This is obviously not the real malware since we already observed with
dynamic analysis that the sample indeed performs malicious activities
and logging to a log file. We are left with a few methods, most
notably the method with the token 0x0600049D
called first thing in the main module constructor.
Figure 3: dnSpy IL view showing the
method's details
Method 0x0600049D control flow has been
obfuscated into a series of switch statements. We can still somewhat
follow the method’s high-level logic with the help of dnSpy as a debugger. However, fully analyzing the
method would be very time consuming. Instead, when first analyzing
this payload, I chose to quickly scan over the entire module to look
for hints. Luckily, I spotted a few interesting strings I missed during
basic static analysis: clrjit.dll, VirtualAlloc, VirtualProtect and WriteProcessMemory as seen in Figure 4.
Figure 4: Interesting strings scattered
throughout the module
A quick internet search for “clrjit.dll”
and “VirtualProtect” quickly takes us to a few publications
describing a technique commonly referred to as Just-In-Time Hooking.
In essence, JIT Hooking involves installing a hook at the compileMethod() function where the JIT compiler is
about to compile the MSIL into assembly (x86, x64, etc). With the hook
in place, the malware can easily replace each method body with the
real MSIL that contains the original malware logic. To fully
understand this process, let’s explore the .NET executable, the .NET
methods, and how MSIL turns into x86 or x64 assembly.
.NET Executable Methods
A .NET executable is just another binary following the Portable
Executable (PE) format. There are plenty of resources describing the
PE file format,
the .NET
metadata and the .NET token tables in detail. I recommend that our
readers take a quick detour and refresh their memory on those
topics before continuing. This post won’t go into further details but
will focus on the .NET methods instead.
Each .NET method in a .NET assembly is identified by a token. In
fact, everything in a .NET assembly, whether it’s a module, a class, a
method prototype, or a string, is identified by a token. Let’s look at
the method identified by the token 0x0600049D,
as shown in Figure 5. The most-significant byte (0x06) tells us that this token is a method token
(type 0x06) instead of a module token (type
0x00), a TypeDef token (type 0x02), or a LocalVarSig token (type 0x11), for example. The three least significant
bytes tell us the ID of the method, in this case it’s 0x49D (1181 in decimal).
This ID is also referred to as the Method ID (MID) or the Row ID of
the method.
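The token arithmetic described above can be sketched in a few lines of Python. This is a minimal illustration; only the table types mentioned in this post are listed, using the names the post uses:

```python
# Decode a .NET metadata token into its table type and row ID (MID/RID).
# Only the table types discussed in this post are named here.
TABLE_NAMES = {0x00: "Module", 0x02: "TypeDef", 0x06: "MethodDef", 0x11: "LocalVarSig"}

def decode_token(token):
    table = token >> 24            # most-significant byte selects the table type
    rid = token & 0x00FFFFFF       # three least-significant bytes are the row ID
    return TABLE_NAMES.get(table, hex(table)), rid

print(decode_token(0x0600049D))    # → ('MethodDef', 1181)
```

Running this on the token from the sample confirms the values in the walkthrough: table type 0x06 (a method) and row ID 0x49D, i.e. 1181.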
Figure 5: Method details for method 0x0600049D
To find out more information about this method, we look within the
tables of the “#~” stream of the .NET
metadata streams in the .NET metadata directory as shown in Figure 6.
We traverse to the entry number 1181 or
0x49D of the Method table to find the
method metadata which includes the Relative Virtual Address (RVA) of
the method body, various flags, a pointer to the name of the method, a
pointer to the method signature, and finally, a pointer to the
parameters specification for this method. Please note that the MID
starts at 1 instead of 0.
Figure 6: Method details from the PE file header
For method 0x0600049D, the RVA of the
method body is 0xB690. This RVA belongs to
the .text section whose RVA is 0x2000. Therefore, this method body begins at
0x9690 (0xB690 –
0x2000) bytes into the .text section. The .text
section starts at 0x200 bytes into the file
according to the section header. As a result, we can find the method
body at 0x9890 (0x9690 + 0x200) bytes
offset into the file. We can see the method body in Figure 7.
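This RVA-to-file-offset arithmetic generalizes to any section. Here is a minimal sketch; the section record is hard-coded with the .text values from the walkthrough (the size is assumed for illustration), whereas a real parser would read them from the section headers, for example with pefile:

```python
def rva_to_offset(rva, sections):
    """Map an RVA to a raw file offset using (virtual_addr, raw_ptr, size) section records."""
    for virt, raw, size in sections:
        if virt <= rva < virt + size:
            return rva - virt + raw   # offset into section + section's file offset
    raise ValueError("RVA not in any section")

# .text section: RVA 0x2000, raw file offset 0x200 (section size assumed)
sections = [(0x2000, 0x200, 0x10000)]
print(hex(rva_to_offset(0xB690, sections)))   # → 0x9890
```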
Figure 7: Method 0x0600049D body in a hex editor
.NET Method Body
The .NET method body starts with a method body header, followed by
the MSIL bytes. There are two types of .NET methods: a tiny method and
a fat method. Looking at the first byte of the method body header, the
two least-significant bits tell us if the method is tiny (where the
last two bits are 10) or fat (where the last
two bits are 11).
.NET Tiny Method
Let’s look at method 0x06000495. Following
the same steps described earlier, we check the row number 0x495 (1173 in decimal)
of the Method table to find the method body RVA is 0x7A7C which translates to 0x5C7C as the offset into the file. At this
offset, the first byte of the method body is 0x0A (00001010 in binary).
Figure 8: Method 0x06000495 metadata and body
Since the two least-significant bits are 10, we know that 0x06000495 is a tiny method. For a tiny method,
the method body header is one byte long. The two
least-significant bits are 10 to
indicate that this is a tiny method, and the six most-significant
bits tell us the size of the MSIL to follow (i.e. how long the
MSIL is). In this case, the six most-significant bits are 000010, which tells us the MSIL is two
bytes long. The entire method body for 0x06000495 is 0A162A, followed by a
NULL byte, which has been disassembled by dnSpy as shown in Figure 9.
Figure 9: Method 0x06000495 in dnSpy IL view
.NET Fat Method
Coming back to method 0x0600049D (entry
number 1181) at offset 0x9890 into the file (RVA 0xB690), the first byte of the method body is
0x1B (or 00011011 in binary). The two least-significant
bits are 11, indicating that 0x0600049D is a fat method. The fat method body
header is 12 bytes long; its structure is beyond the scope of
this blog post. The field we really care about is a four-byte
field at offset 0x04 byte into
this fat header. This field specifies the length of the MSIL that
follows this method body header. For method 0x0600049D, the entire method body header is
“1B 30 08 00 A8 61 00 00 75 00 00
11” and the length of the MSIL to follow is “A8 61 00 00” or 0x61A8
(25000 in decimal) bytes.
Figure 10: Method 0x0600049D body in a
hex editor
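The tiny/fat header rules from the last two sections can be captured in a short parser. This is a sketch based only on the layout described above; the remaining fat-header fields are ignored, and only the code-size DWORD at offset 4 is read:

```python
import struct

def parse_method_body(data):
    """Parse a .NET method body header; return (kind, header_size, msil_size)."""
    first = data[0]
    if first & 0x3 == 0x2:                  # tiny: two least-significant bits are 10
        return "tiny", 1, first >> 2        # six most-significant bits = MSIL size
    if first & 0x3 == 0x3:                  # fat: two least-significant bits are 11
        code_size = struct.unpack_from("<I", data, 4)[0]  # MSIL length at offset 4
        return "fat", 12, code_size
    raise ValueError("not a recognized method body header")

# The two method bodies from the walkthrough:
print(parse_method_body(bytes.fromhex("0A162A")))                    # → ('tiny', 1, 2)
print(parse_method_body(bytes.fromhex("1B300800A861000075000011")))  # → ('fat', 12, 25000)
```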
JIT Compilation
Whether a method is tiny or fat, it does not execute as is. When the
.NET runtime needs to execute a method, it follows exactly the process
described earlier to find the method body which includes the method
body header and the MSIL bytes. If this is the first time the method
needs to run, the .NET runtime invokes the Just-In-Time compiler which
takes the MSIL bytes and compiles them into x86 or x64 assembly
depending on whether the current process is 32- or 64-bit. After some
preparation, the JIT compiler eventually calls the compileMethod() function. The entire .NET runtime
project is open-sourced and available on GitHub. We
can easily find out that the compileMethod()
function has the following prototype (Figure 11):
CorJitResult __stdcall compileMethod (
    ICorJitInfo         *comp,                  /* IN  */
    CORINFO_METHOD_INFO *info,                  /* IN  */
    unsigned /* code:CorJitFlag */ flags,       /* IN  */
    BYTE                **nativeEntry,          /* OUT */
    ULONG               *nativeSizeOfCode       /* OUT */
);
Figure 11: compileMethod() function prototype
Figure 12 shows the CORINFO_METHOD_INFO structure.
The ILCode is a pointer to the MSIL of the
method to compile, and the ILCodeSize tells
us how long the MSIL is. The return value of compileMethod() is an error code indicating
success or failure. In case of success, the nativeEntry pointer is populated with the address
of the executable memory region containing the x86 or the x64
instruction that is compiled from the MSIL.
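For readers who want to inspect this structure from a hook, the leading fields of CORINFO_METHOD_INFO can be mirrored with ctypes. This is a partial sketch of the layout in the open-source runtime's corinfo.h; the trailing members (options, region kind, and the argument/local signature structures) are omitted, so it must not be used anywhere the full structure size matters:

```python
import ctypes

# Partial mirror of CORINFO_METHOD_INFO (leading fields only; trailing
# members omitted, so the total size here is NOT the real structure size).
class CORINFO_METHOD_INFO(ctypes.Structure):
    _fields_ = [
        ("ftn",        ctypes.c_void_p),                 # CORINFO_METHOD_HANDLE
        ("scope",      ctypes.c_void_p),                 # CORINFO_MODULE_HANDLE
        ("ILCode",     ctypes.POINTER(ctypes.c_ubyte)),  # pointer to the MSIL bytes
        ("ILCodeSize", ctypes.c_uint),                   # length of the MSIL
        ("maxStack",   ctypes.c_uint),
        ("EHcount",    ctypes.c_uint),
    ]

# Populate it with the tiny method's MSIL from the walkthrough and read it back.
msil = (ctypes.c_ubyte * 3)(0x0A, 0x16, 0x2A)
info = CORINFO_METHOD_INFO(ILCode=ctypes.cast(msil, ctypes.POINTER(ctypes.c_ubyte)),
                           ILCodeSize=3)
print(bytes(info.ILCode[i] for i in range(info.ILCodeSize)).hex())  # → 0a162a
```

A hook like MassLogger's (or JITM's) does exactly this read/replace on ILCode and ILCodeSize before handing the structure to the original compileMethod().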
MassLogger JIT Hooking
Let’s come back to MassLogger. As soon as the main module
initialization runs, it first decrypts the MSIL of the other methods. It
then installs a hook to execute its own version of compileMethod() (method 0x06000499). This method replaces the ILCode and ILCodeSize
fields of the info argument to the original compileMethod() with the real malware’s MSIL bytes.
In addition to replacing the MSIL bytes, MassLogger also patches the
method body header at module initialization time. As seen from Figure
13, the method body header of method 0x060003DD on disk (at file offset 0x3CE0) is
different from the header in memory (at RVA 0x5AE0). The only two things remaining quite
consistent are the least significant two bits indicating whether the
method is tiny or fat. To successfully defeat this anti-analysis
technique, we must recover the real MSIL bytes as well as the correct
method body headers.
Figure 13: Same method body with
different headers when resting on disk vs. loaded in memory
Defeating JIT Method Body Replacement With JITM
To automatically recover the MSIL and the method body header, one
possible approach suggested by another FLARE team member is to install
our own hook at compileMethod() function
before loading and allowing the MassLogger module constructor to run.
There are multiple tutorials and
open-sourced projects on
hooking compileMethod() using both managed
hooks (the new compileMethod() is a managed
method written in C#) and native hooks (the new compileMethod() is native and written in C or
C++). However, due to the unique way MassLogger hooks compileMethod(), we cannot use the vtable hooking
technique implemented by many of the aforementioned projects.
Therefore, I’d like to share the following project: JITM, which is designed to use
inline hooking implemented by the PolyHook
library. JITM comes with a wrapper for compileMethod() which logs all the method body
headers and MSIL bytes to a JSON file before calling the original
compileMethod().
In addition to the hook, JITM also
includes a .NET loader. This loader first loads the native hook DLL
(jitmhook.dll) and installs the hook. The
loader then loads the MassLogger payload and executes its entry point.
This causes MassLogger’s module initialization code to execute and
install its own hook, but hooking jitmhook.dll code instead of the original compileMethod(). An alternative approach to
executing MassLogger’s entry point is to call the RuntimeHelpers.PrepareMethod() API to force the
JIT compiler to run on all methods. This approach is better because it
avoids running the malware, and it potentially can recover methods not
called in the sample’s natural code path. However, additional work is
required to force all methods to be compiled properly.
To load and recover MassLogger methods, first run the following
command (Figure 14):
jitm.exe Bin-123.exe
[optional_timeout]
Figure 14: Command to run jitm
Once the timeout expires, you should see the files jitm.log and jitm.json
created in the current directory. jitm.json
contains the method tokens, method body headers and MSIL bytes of all
methods recovered from Bin-123.exe. The only
thing left to do is to rebuild the .NET metadata so we can perform
static analysis.
Figure 15: Sample jitm.json
Rebuilding the Assembly
Since the decrypted method body headers and MSIL bytes may not fit
in the original .NET assembly properly, the easiest thing to do is to
add a new section and a section header to MassLogger. There are plenty
of resources
on how
to add
a PE section header and data, none of which is trivial or easy
to automate. Therefore, JITM also includes
the following Python 2.7 helper script to automate this process: Scripts\addsection.py.
With the method body header and MSIL of each method added to a new
PE section as shown in Figure 16, we can easily parse the .NET
metadata and fix each method’s RVA to point to the correct method body
within the new section. Unfortunately, I did not find any Python
library to easily parse the .NET metadata and the MethodDef table.
Therefore, JITM also includes a partially
implemented .NET metadata parser: Script\pydnet.py. This script uses pefile and vivisect
modules and parses the PE file up to the Method table to extract all methods and their
associated RVAs.
Figure 16: Bin-123.exe before and after
adding an additional section named FLARE
Finally, to tie everything together, JITM
provides Script\fix_assembly.py to perform
the following tasks:
Write the method body header and MSIL of each method recovered
in jitm.json into a temporary binary file
named “section.bin” while at the same time
remembering the associated method token and its offset into section.bin.
Use addsection.py to add section.bin into Bin-123.exe and save the data into a new file,
e.g. Bin-123.fixed.exe.
Use pydnet.py to parse Bin-123.fixed.exe and update the RVA field of
each method entry in the MethodDef table to point to the correct RVA
into the new section.
The final result is a partially reconstructed .NET assembly.
Although additional work is necessary to get this assembly to run
correctly, it is good enough to perform static analysis to understand
the malware’s high-level functionalities.
Let’s look at the reconstructed method 0x0600043E that implements the decryption logic
for the malware configuration. Compared to the original MSIL, the
reconstructed MSIL now shows that the malware uses AES-256 in CBC mode with
PKCS7 padding. With a combination of
dynamic analysis and static analysis, we can also easily identify the
key to be “Vewgbprxvhvjktmyxofjvpzgazqszaoo”
and the IV to be part of the Base64-encoded
buffer passed in as its argument.
Figure 17: Method 0x0600043E before and
after fixing the assembly
Armed with that knowledge, we can write a simple tool to decrypt the
malware configuration and recover all HBIs and NBIs (Figure 18).
Using a JIT compiler hook to replace the MSIL is a powerful
technique that makes static analysis almost impossible. Although this
technique is not new, I haven’t seen many .NET malware families making use of
it, let alone implementing their own adaptation instead of
using widely available protectors like ConfuserEx. Hopefully, with
this blog post and JITM, analysts
will now have the tools and knowledge to defeat MassLogger or any
future variants that use a similar technique.
If this is the type of work that excites you, and if you strive to
push the state of the art when it comes to malware analysis and
reverse engineering, the Front Line Applied Research and Expertise
(FLARE) team may be a good place for you. The FLARE team faces fun and
exciting challenges on a daily basis; and we are constantly looking
for more team members to tackle these challenges head on. Check out FireEye’s career
page to see if any of our opportunities would be a good fit for you.
Contributors (Listed Alphabetically)
Tyler Dean (@spresec): Technical review
of the post
Michael Durakovich: Technical review of the
post
Stephen Eckels (@stevemk14ebr): Help
with porting JITM to use PolyHook
Jon Erickson (@evil-e): Technical review of
the post
Moritz Raabe (@m_r_tz): Technical
review of the post
Adversarial machine learning (ML) is a hot new topic that I now understand much better thanks to this talk at Black Hat USA 2020. Ariel Herbert-Voss, Senior Research Scientist at OpenAI, walked us through the current attack landscape. Her talk clearly outlined how current attacks work and how you can mitigate against them. She skipped right over some of the more theoretical approaches that don’t really work in real life and went straight to real-life examples.
Bad inputs vs. model leakage
Herbert-Voss broke down attacks into two main categories:
Bad Inputs: In this category, the attacker feeds the ML algorithm bad data so that it makes its decisions based on that data. The form of the input can be varied; for example, using stickers on the road to confuse a Tesla’s autopilot, deploying Twitter bots to send messages that influence cryptocurrency trading systems, or using click farms to boost product ratings.
Model Leakage: This attack interacts with the algorithm to reverse-engineer it, which in turn provides a blueprint on how to attack the system. One example I loved involved a team of attackers who published fake apps on an Android store to observe user behavior so that they could train their own model to mimic user behavior for monetized applications, avoiding fraud detection.
Defending against adversarial machine learning
The defenses against these attacks turned out to be easier than I had thought:
Use blocklists: Either explicitly allow input or block bad input. In the case of the Twitter bots influencing cryptocurrency trading, the company switched to an allow list.
Verify data accuracy with multiple signals: Two data sources are better than one. For example, Herbert-Voss saw a ~75% reduction in face recognition false positives when using two cameras. The percentage increased as the cameras were placed further apart.
Resist the urge to expose raw statistics to users: The more precise the data you expose to users, the simpler it is for them to analyze the model. Rounding your outputs is an easy and effective way to obfuscate your model. In one example, this helped reduce the ability to reverse-engineer the model by 60%.
Based on her research, Herbert-Voss sees an ~85% reduction in attacks by following these three simple recommendations.
Healthcare providers heavily leverage technology. In his talk, Seth Fogie, information security director at Penn Medicine, takes apart different vendor systems at the “fictitious” Black Hat Clinic. Fogie gives a lot of examples and drives home the point that you shouldn’t just look at network security … you have to dig deep into the applications to ensure the security of your data.
Following the patient’s journey.
Fogie follows the patient’s journey of the now-geriatric Alice and Bob, our quintessential victims in the security realm. Taking on the perspective of Mallory, the malicious attacker, he goes to town taking apart one system after another.
For example, patient entertainment systems not only let you watch television but also give access to patient data. The first system he looks at provides access to patient health information without authentication and uses client-side authentication for PINs that are easily overcome when using a proxy server between the client and the server.
A different system, a clinical productivity system, has a backdoor with a daily password that is generated with a pre-determined algorithm.
Next, he looks at the drug dispensary system, which has an unauthenticated network share. Investigating the binaries, he finds the SQL decryption key. This leads to full system access of the server, which provides access not only to user data but also to a full table of encrypted passwords that they were able to decrypt using the same decryption key.
Fogie then looks at the temperature monitoring system that is used to chill blood bags, insulin, and other drugs. Using WireShark, he finds a few authentication codes and passwords. (Around this point my head and keyboard start to smoke as Fogie speeds through his results faster than I can screenshot.)
In the end, he compromises all seven systems, mostly through the use of client software. No vendors are harmed in this presentation, as Fogie blurred out all screens. He also worked with vendors to notify them of the security issues. Where software was no longer maintained, he patched the client software himself by setting a unique and complex password for a backdoor he found.
Managing 225,000 patient records, Black Hat Clinic could have been on the hook for millions of dollars in fines. Healthcare records are particularly popular on the dark web because they often contain a lot of information that helps fraudsters steal the identity of their victims and use their credit.
Don’t just try to get to the DC, pentest your apps.
Fogie’s advice is to not only conduct a pentest that tries to get to the domain controller to take over the network but also to dig deep into the applications that hold your data. At Penn Med, they do a “Lite” pentest of all new products. For fellow practitioners in the healthcare space, he recommends participating in H-ISAC.
Plea to healthcare application vendors: “Please don’t make our jobs harder.”
Fogie is asking healthcare application vendors to run security testing on their applications prior to release. Of course, being an employee of application security testing vendor Veracode, I completely agree. At Veracode, we’re also seeing the market shift. Application vendors are telling us that their customers are putting more pressure on them to develop secure software than the regulators are.
As an educated software buyer, ask your application vendor about their secure development practices. Rather than picking a vendor that has had a single point-in-time penetration test, look for vendors that follow a secure development process to ensure that they are continually trying to reduce risk and are more responsive to security issues. Some vendors may also have the Veracode Verified seal, an attestation Veracode provides to organizations that follow specific security protocols in their application development.
If you don’t have the resources in house to run the type of tests that Fogie did in his presentation, please reach out to us to have a conversation. Our automated testing can be plugged into any DevSecOps process, plus we help you with your program management to bring your stakeholders on board and advise your development team on how to fix flaws. We also do manual penetration tests if that’s what you need.
Retired Marine fighter pilot and Top Gun instructor Dave Berke said “Every single thing you do in your life, every decision you make, is an OODA Loop.” OODA Loop? Observe–Orient–Decide–Act, the “OODA Loop” was originally developed by United States Air Force Colonel John Boyd and outlines that fundamentally all actions are first based on observations. If you can’t first observe you are at a serious disadvantage. This can be applied at every scale from planning a large-scale military invasion, to changing lanes on a highway or even down to the split-second decision of picking up a glass of water off a table. But it all starts with observation.
If observation, or information about what’s around us, is so important, then it’s no surprise that tools and devices that can provide us information not only increase our success but also make us feel more comfortable. The personal robot temi is such a device. With 1,000 new units being produced a month, temi can help us see a loved one in the hospital through teleconferencing, escort us to the ballroom of the hotel we are staying at, or help a doctor virtually visit with a patient. In fact, across Israel, temi was recently selected as the de facto robotics platform for hospital telepresence. Temi allows us to observe and interact remotely, all while influencing what we decide to do next with the information we have learned.
But what are the risks associated with tools that help provide greater information into our OODA Loop? How could someone with malicious intent use tools like temi to improve their OODA Loop, furthering their ability to cause harm? Could someone control temi without the owner’s permission? What if someone could turn on the camera and microphone and see the robot’s surroundings while listening to your medical diagnosis or treatment plan? What if someone could “check in on your kids” and map out your house without your permission? These are the type of questions that McAfee Advanced Threat Research (ATR) aims to answer when we see a product like temi. We look at technologies and the wealth of information they provide through the lens of an attacker; with the goal of making your experience a safer one. So, it is no surprise that our team decided to take a deeper look into temi when it was first released.
On day one, like a kid during Christmas, we unboxed our shiny new temi with a sense of joy and excitement. Like your average user, we set it up following the included directions, had it follow us around dressed up as a ghost for Halloween, marked key locations, made video calls, and were amazed at just how cool this robot was! Maybe a little less like your average user, we also captured all the network traffic while temi showed off and did a firmware update. We also looked at what open network ports temi exposed, decompiled the Android application, activated the Android Debug Bridge (adb), and installed an SSH server, just to be safe. Basically, we treated temi like your child’s first date in high school – we did a background check and used proper intimidation techniques before we allowed temi to go on a date.
Over the next several months, we really dug into what we learned from our “first date” with temi. What we discovered, not unlike some of my first high school dates, is temi had some trust issues. This is not a good report card considering some of temi’s primary functions. For the ability to make video calls and to remotely control temi securely, it’s important to keep information used to connect to a call secret and secured. Since temi had some flaws doing so, the team turned its attention to the question of whether these “character flaws” could be exploited.
It turned out that with only a few modifications to the original temi Android application, we could intercept phone calls intended for another user. With a few more modifications, temi began to trust us (the attacker) to navigate it around and activate the camera and microphone. Additionally, we discovered that the only thing an attacker would need to know about the victim is their phone number. Yes, the same number you can’t figure out how telemarketers got ahold of is all the information a malicious actor would need to access your temi completely remotely, without your knowledge or consent.
For an attacker, temi’s security flaws allow them to feed vital information into their OODA Loop. Consider a hospital with a very tight, well-run information security program. From the outside, it is almost impossible to steal information from. The vulnerabilities discovered in temi now provide an attacker a way to gather information about the internal operations of the business without needing to crack the well-implemented business network or physical security. With the phone number of anyone who has called a temi recently, an attacker could observe what room number and condition a hospitalized member of Congress is in. Temi could watch the security guard type in the building alarm code. Temi could observe the dog pictures on the nurse’s desk, labeled with a cute name and birthday that just happen to also be part of their password. This information can be invaluable to an attacker’s OODA Loop and the potentially malicious decisions they make. Temi was a challenging target to compromise, but the nature of the flaws discovered highlights the importance of this research and the value of vendors providing fast and effective mitigations, as was the case here.
For those who enjoy punishing themselves with an excessive amount of technical detail, we have a special treat: our highly technical paper on this research, which you can find here. It dives into the vulnerability discovery and exploitation process from start to finish in intricate detail.
In keeping with our responsible disclosure program, we reached out to temi as soon as we confirmed that the vulnerabilities we discovered were exploitable. Temi was very receptive, grateful, and showed a strong desire to improve the security of its product. Over the next several weeks, we worked closely with their team to develop a more robust solution for temi. For each vulnerability, we provided temi with two mitigation strategies: a band-aid and a cure. It is common for vendors to only implement the band-aid, or quick-fix, solution. To their credit, temi took on the challenge of implementing the more effective solution – the cure – for all findings. Once these were in place, we were able to test and confirm that all the vulnerabilities reported were effectively mitigated. It is always exciting to see the positive impact security research can have when responsible disclosure is valued by vendors and researchers alike. The following demo video is a light-hearted way to show the true possible impact of the remote-accessibility component of this research, had these flaws been found by adversaries instead of responsibly disclosed by McAfee ATR and fixed by temi.
So, what is the larger lesson here? I would venture to guess that I wouldn’t be the first to conclude that Colonel John Boyd was a smart man and that his theory of the OODA Loop is not only effective but also strengthens a process – in this case, vulnerability mitigation. For our research, this applies directly to a personal robot assistant which is currently being used worldwide, across a wide range of industries, with nearly 1,000 new units being produced a month. McAfee’s ATR team observed a new, impactful technology emerging into the market. Our previous experience and technical expertise oriented us well, so we decided to take further action and investigate. Temi, in turn, observed the value of collaboration, oriented itself accordingly, and decided to take action, making a tool built for observation a more secure product.
As part of our continued goal of helping developers provide safer products for businesses and consumers, we here at McAfee Advanced Threat Research (ATR) recently investigated temi, a teleconference robot produced by Robotemi Global Ltd. Our research led us to discover four separate vulnerabilities in the temi robot, which this paper will describe in great detail. These include:
CVE-2020-16170 – Use of Hard-Coded Credentials
CVE-2020-16168 – Origin Validation Error
CVE-2020-16167 – Missing Authentication for Critical Function
CVE-2020-16169 – Authentication Bypass Using an Alternate Path or Channel
Together, these vulnerabilities could be used by a malicious actor to spy on temi’s video calls, intercept calls intended for another user, and even remotely operate temi – all with zero authentication.
Per McAfee’s vulnerability disclosure policy, we reported our findings to Robotemi Global Ltd. on March 5, 2020. Shortly thereafter, they responded and began an ongoing dialogue with ATR while they worked to adopt the mitigations we outlined in our disclosure report. As of July 15, 2020, these vulnerabilities have been successfully patched – mitigated in version 120 of temi’s Robox OS and in all versions after 1.3.7931 of the temi Android app. We commend Robotemi for their prompt response and willingness to collaborate throughout this process. We’d go so far as to say this has been one of the most responsive, proactive, and efficient vendors McAfee has had the pleasure of working with.
This paper is intended as a long-form technical analysis of the vulnerability discovery process, the exploits made possible by the vulns, and the potential impact such exploits may have. Those interested in a higher-level, less technical overview of these findings should refer to our summary blog post here.
For an Android tablet ‘brain’ sitting atop a 4-foot-tall robot, temi packs a lot of sensors into a small form factor. These include 360° LIDAR, three different cameras, five proximity sensors, and even an Inertial Measurement Unit (IMU) sensor, which is a sort of accelerometer + gyroscope + magnetometer all-in-one. All these work together to give temi something close to the ability to move autonomously through a space while avoiding any obstacles. If it weren’t for the nefarious forces of stairs and curbs, temi would be unstoppable.
Robotemi markets its robot as being used primarily for teleconferencing. Articles linked from the temi website describe the robot’s applications in various industries: Connected Living recently partnered with temi for use in elder care, the Kellogg’s café in NYC adopted temi to “enhance the retail experience”, and corporate staffing company Collabera uses temi to “improve cross-office communication.” Despite its slogan of “personal robot”, it appears that temi is designed for both consumer and enterprise applications, and it’s the latter that really got us at McAfee Advanced Threat Research interested in it as a research target. Its growing presence in the medical space, which temi’s creators have accommodated by stepping up production to 1,000 units a month, is especially interesting given the greatly increased demand for remote doctor’s visits. What would a compromised temi mean for its users, whether it be the mother out on business or the patient being diagnosed via robotic proxy? We placed our preorder and set out to find out.
Normal Operation of temi
Once it finally arrived, we got to setting it up the way any user might: we unboxed it, plugged in its charging dock, and connected it to WiFi. Normal operation of the temi robot is done through the use of its smartphone app, and at first startup temi prompted us to scan a QR code with the app. The phone used to scan the QR code becomes temi’s “admin”, allowing you to control the robot remotely by simply calling it. Temi can have many contacts outside of its singular admin, and becoming a contact is fairly straightforward. Whenever you launch the temi phone app, it scans your phone’s contacts and automatically adds any numbers that have been registered with the temi app to the app’s contact list. If any of those contacts happens to be a temi admin, you can call their temi simply by clicking that contact, as shown in Figure 1.
Figure 1: Selecting a contact from the temi phone app
In this way, users of the phone app besides temi’s admin can still call a temi robot using this method. Since initiating a call with a temi allows you to remotely operate it in addition to seeing through its camera and hearing through its microphone, giving anyone the ability to call your temi could prove… problematic. Temi’s creators address this exact issue on their FAQ page, as shown in Figure 2.
Figure 2: “Can anyone connect to my temi?”, from the temi’s FAQ page
The first line here is a bit misleading, since adding a temi’s admin as a phone contact seems to be sufficient to call that temi – the admin does not need to also have you added as a phone contact for this to work. That being said, this doesn’t mean that just anyone can start controlling your temi; “physical permission on the robot side” is required for calls made by users that aren’t temi’s admin or explicitly authorized by said admin. In practice, this corresponds to a call notification on the temi’s screen that the user can either answer or decline. In other words, it’s no more or less “secure” than receiving cold calls on your cell phone.
As for the last line, which refers to an admin’s ability to grant certain users the option to “hop in” to the robot, this is also done through the phone app by simply selecting the “Invite New Member” option from the temi’s contact entry in the app, and then selecting one or more users from the app’s contact list to “invite”, as shown in Figure 3.
Figure 3: How to grant users special permissions from the phone app
Once a call has been established between the robot and a phone user, which either party may initiate, the phone user can do things like manually drive the robot around; add, remove, and navigate to saved locations; patrol between saved locations; control its volume; and more.
Figure 4: Driving the temi using the phone app
Due to the level of control afforded to temi’s callers, the calling functionality quickly became a priority during our investigation of the temi robot.
Initial Recon
Port Scanning
While a robot was certainly a bit different from our typical targets, we began our reconnaissance with methods that have stood the test of time: port scans, packet captures, and a local shell.
Figure 5: Running Nmap on the temi
An Nmap scan revealed only one open port: TCP port 4443. Immediately we knew that Nmap’s classification of this port being used for Pharos, a “secure office printing solution for your business”, was almost certainly wrong. Instead, it was likely that this port was being used for TLS communication as an alternative to the standard 443. While good to know, this didn’t tell us much about the type of traffic temi expected on this port, or even what service(s) was handling that traffic, so we moved on.
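For readers who want to replicate this kind of check, a TLS handshake attempt is usually the quickest way to confirm a hunch like this. Here is a minimal Python sketch (not the exact tooling we used, just an illustration of the idea):

```python
import socket
import ssl

def speaks_tls(host: str, port: int, timeout: float = 3.0) -> bool:
    """Heuristic check: attempt a TLS handshake on the port.
    If the handshake completes, the service expects TLS (as temi's
    port 4443 appeared to); an SSL or socket error suggests otherwise."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False        # we only care about the protocol,
    ctx.verify_mode = ssl.CERT_NONE   # not the certificate's validity
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False
```

Tools like `openssl s_client -connect host:4443` accomplish the same thing from the command line.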
Capturing Traffic
Next on our checklist was to obtain some captures of the robot’s network traffic, focusing on traffic generated during boot, during an update, and during a video call.
Traffic captured when temi was booting up but otherwise idling showed three unique external IP addresses being accessed: 98.137.246.7, 54.85.186.18, and 34.206.180.208.
Running nslookup on these IPs revealed that the first pointed to some Yahoo! media server, likely used for its news app, while the other two appeared to be AWS instances.
Figure 6: Running nslookup on IPs accessed by temi
As for the data being sent/received, there wasn’t much to look at. The Yahoo! address was being accessed via HTTP (port 80), but the TCP packets had no payload. As for the AWS addresses, these were being accessed via TLS (port 443) and the data was encrypted.
For updates, temi accessed “temi-ota-updates.s3-accelerate.amazonaws.com”, which was almost certainly a custom AWS instance set up by the folks at Robotemi. As with the other traffic to AWS, the updates were being encrypted.
Getting a Shell
In order to prod deeper, we needed a local shell on our temi. Fortunately, wireless connections via Android Debug Bridge, or ADB, could be enabled through temi’s settings from its touchscreen. Better still, the shell provided through ADB had root privileges, although the device itself was not “rooted”. This was likely done to help facilitate “temi Developers”, since the temi website has an entire portal dedicated to helping users develop custom apps for their robot.
Unfortunately, the default shell granted via ADB was fairly stripped down, and each reboot would mean having to manually reopen temi’s ADB port from its touchscreen before being able to connect to it again. Furthermore, this method required the temi software to be running in order to access our shell, which would leave us without recourse if our prodding somehow bricked that software. This was far from ideal.
While commands like adb push made moving custom ARM binaries for bash, busybox, and ssh onto temi fairly trivial, getting anything to run on boot proved more challenging. For starters, temi did not have the standard /etc/init.d/ directory. Additionally, the primary scripts that would run on boot, like /init.rc, would be loaded directly from the boot image, meaning any edits made to them would not persist across reboots. This was likely by design as part of Android’s SELinux security model – we would have to be a little more creative if we wanted to convince temi to start an SSH server on boot.
Digging through the contents of /init.rc, however, gave us a clue:
Figure 7: Interesting entry in /init.rc
For those unfamiliar with Android’s Init language,
“The init language is used in plain text files that take the .rc file extension. There are typically multiple of these in multiple locations on the system, described below.
/init.rc is the primary .rc file and is loaded by the init executable at the beginning of its execution. It is responsible for the initial set up of the system.” – Android docs.
The entry shown in Figure 7, one of the hundreds contained in temi’s init.rc, tells the system to launch the service “flash_recovery” during boot with the argument “/system/bin/install-recovery.sh”. The “class main” is just a label used to group entries together, and “oneshot” just means “do not restart the service when it exits.” This entry stood out to us because it was invoking a script located in /system/bin/, which was not loaded from the boot image, meaning any changes should stick. While the /system partition is read-only by default, our root privileges made it trivial to temporarily remount /system as read-write in order to add the following line to the end: “/system/bin/sshd”. With that, we had a reliable means of getting a root shell on our temi to explore further.
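The modification itself is trivial once /system is writable; the sketch below shows an idempotent version of the append in Python (we performed the actual edit by hand from our root shell, so treat this function as illustrative):

```python
def add_boot_hook(script_path: str, command: str = "/system/bin/sshd") -> bool:
    """Append `command` to a script that init already runs at boot
    (here, /system/bin/install-recovery.sh), but only once.
    Assumes /system has already been remounted read-write from a
    root shell. Returns True if the file was modified."""
    with open(script_path, "r+") as f:
        contents = f.read()
        if command in contents:
            return False              # hook already present
        if contents and not contents.endswith("\n"):
            f.write("\n")
        f.write(command + "\n")
        return True
```

Guarding against duplicate entries keeps repeated experiments from stacking multiple sshd launches into the boot path.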
Finding temi’s Code
The first thing we did with our newfound freedom was run netstat:
Figure 8: Running netstat
It appeared that most networking, including the open port 4443 we found using Nmap earlier, was being handled by “com.roboteam.teamy.usa”. Based on the name alone, this was likely the main temi software running on the robot. Additionally, the name looked more like the name of an Android package than a native binary. We confirmed this by running:
Figure 9: Trying to find the binary for com.roboteam.teamy.usa in /proc
app_process32 (or app_process64, depending on architecture) is the binary used by Android for running Android apps, meaning we were looking for an APK, not an ELF. Under this assumption, we instead tried to find the code for this process using Android’s pm command:
Figure 10: Looking for the APK for “com.roboteam.teamy.usa”
Sure enough, we had our APK.
Typically, every installed app on Android has a corresponding data directory, and we were able to find the one for this package under /data/data/com.roboteam.teamy.usa:
Figure 11: data directory for “com.roboteam.teamy.usa”
The lib/ directory contained native code used by the package:
Figure 12: native code used by “com.roboteam.teamy.usa”
After looking into several of these libraries, libagora-rtc-sdk-jni.so in particular stood out to us since it was part of Agora, a platform that provides SDKs for voice, video, and Real-Time-Messaging (RTM) services. It was likely that temi was using Agora to implement its video calling functionality – the functionality we were most interested in.
By looking at this binary’s strings, we were also able to determine that temi was using version 2.3.1 of the Agora Video SDK:
Figure 13: Finding the Agora SDK version using strings
Of course, we were equally interested in the code for the temi phone app, which we obtained through the use of APK Extractor and ADB. We were also curious to see if the Android phone app used the same version of the Agora SDK, which we checked by comparing their MD5 hashes:
Figure 14: Comparing MD5 hashes for libagora between temi robot and temi phone app
Sure enough, they were identical.
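The comparison itself is routine; for completeness, here is a small Python equivalent of hashing the two pulled copies of libagora-rtc-sdk-jni.so (the paths in the comment are illustrative):

```python
import hashlib

def md5_of(path: str) -> str:
    """MD5 hex digest of a file, read in chunks so large .so files
    don't need to fit in memory at once."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Illustrative usage, with hypothetical paths for the two copies:
# md5_of("robot/libagora-rtc-sdk-jni.so") == md5_of("app/libagora-rtc-sdk-jni.so")
```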
Reversing the Code for Video Calls
The next step was to begin reversing these apps in order to better understand how they worked. We decided to start by looking at the phone app’s code, since it was easier to test the behavior of the phone app compared to the robot.
In order to decompile and analyze the APK, we used JADX. Although there are many Java decompilers out there that work on Android code, JADX stood out to us due to its various features:
It can open APK files directly (will convert .dex to .jar and merge multiple .dex files automatically)
It handles modern Java features like nested classes and inline lambda functions better than most other decompilers
For any class being viewed, you can click on the “smali” tab to see the smali code
It displays line numbers synchronized with the corresponding .line directives in the bytecode, making it easier to map decompiled Java code back to the original bytecode
You can right-click on any method, member, or class to see its usage or declaration
The temi Android phone app, like most non-trivial Android apps, was massive. Instead of groping in the dark and hoping to stumble upon some interesting code, we decided to take a more targeted approach. From the outset, we knew that our focus was on temi’s calling functionality since this would provide the greatest impact for any attacker. We also now knew that temi was using the Agora SDK to implement this calling functionality. After looking through Agora’s Java API Reference for Android (v2.3.1), we decided that the function joinChannel() would be a good place to start our investigation:
Figure 15: Agora’s documentation for joinChannel()
By opening libagora-rtc-sdk-jni.so in IDA and looking at its exports, we were able to find a function called nativeJoinChannel() at the offset 0xD4484:
Figure 16: Some of libagora’s exports
Using JADX’s search feature, we found a function with the same name in the decompiled code, located on line 960 in the class RtcEngineImpl:
Figure 17: Native Agora methods in RtcEngineImpl
From here, we began the tedious process of working backwards to trace the call chain for video calls all the way to their entry points. We did this by looking for the method that invoked nativeJoinChannel(), then looking for the method that invoked that method, and so on. The return on investment for our tireless efforts was the following:
Figure 18: Code flow diagram for the various ways of making video calls using the temi phone app
At a high level, the code for outgoing calls has four entry-points, and these map 1-to-1 with the four ways of initiating a call from the temi phone app:
Select a contact and hit “Call”. This will call that contact’s phone directly, not their temi. This corresponds to the “Contact Details → Contact Call” code path outlined in the graph (green region).
Select a contact and, if they have a temi robot, hit “Connect”. This will call their temi and corresponds to the “Contact Details → Robot Call” code path outlined in the graph (orange region).
Go to the app’s “Recents” tab and select one of the contacts/robots you have recently called or have called you. This will call that contact or robot and corresponds to the “Recent Calls → Call” code path outlined in the graph (blue region).
If you are an admin, select your temi from beneath the “My temi” heading on the “Contacts” tab and hit the “Connect” button from the following screen, also known as the Robot Details screen. This will call your temi and corresponds to the “Robot Details → Call” code path outlined in the graph (red region).
The classes and methods contained within the colored regions handle rendering the screens from which calls can be initiated and the binding of these buttons to the code that actually handles calling. No matter the entry point, all outgoing calls converge at TelepresenceService.initiateCall(), so it’s worth taking a closer look at this method:
Figure 19: TelepresenceService.initiateCall()
And here are the four ways it is invoked:
Contact Details → Contact Call
Figure 20: Invoking initiateCall() from Contact Details → Contact Call
Contact Details → Robot Call
Figure 21: Invoking initiateCall() from Contact Details → Robot Call
Recent Calls → Call
Figure 22: Invoking initiateCall() from Recent Calls → Call
Robot Details → Call
Figure 23: Invoking initiateCall() from Robot Details → Call
The first parameter is an ID used to identify the callee. For temi robots, this appears to be something called a “robot ID”, whereas for contacts it is obtained via a call to getContactId(). Interestingly, for recent calls, which can be either a contact or a robot, the ID is obtained via a call to getMd5PhoneNumber(), which is a bit strange since temi does not (as far as we can tell) have a phone number. We will expand on this later.
The second parameter is a string denoting the caller’s type. Since we were looking exclusively at phone app code here, we assumed that “typeUser” denotes a caller using the phone app.
The third parameter is the callee’s display name – straightforward enough.
The fourth and final parameter is of a custom enum type that denotes the callee’s type. If calling a temi, its type is CallContactType.TEMI_CALL; otherwise, it’s CallContactType.CONTACT_CALL. Note that for the Recent Calls entry point, this value is obtained dynamically, which makes sense since this path handles calls to both contacts and temis.
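As an aside on the Recent Calls path: since its callee ID comes from getMd5PhoneNumber(), the identifier is deterministically derivable from a phone number alone. We did not verify exactly how temi normalizes the number before hashing, but a hypothetical sketch conveys the shape of such an ID:

```python
import hashlib

def md5_phone_number(number: str) -> str:
    """Hypothetical reconstruction of getMd5PhoneNumber(): strip
    formatting, then MD5-hex the remaining digits. temi's actual
    normalization (country codes, leading '+') is an assumption
    on our part."""
    digits = "".join(ch for ch in number if ch.isdigit())
    return hashlib.md5(digits.encode("ascii")).hexdigest()
```

The key property is determinism: anyone who knows the phone number can compute the same identifier.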
It’s also worth noting that initiateCall() invokes InvitationManager.createInvitation() in order to create an Invitation object which represents the “call invitation” to be sent to the callee:
Figure 24: InvitationManager.createInvitation()
From there, as seen on line 223 of Figure 19, initiateCall() passes the result of Utils.getRandomSessionId() as the first parameter to createInvitation(), which becomes the sessionId:
Figure 25: Utils.getRandomSessionId()
All this is doing is randomly generating an integer between 100,000 and 999,999, inclusive. In other words, the sessionId is always a positive, six-digit decimal number.
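In Python terms, the logic in Figure 25 boils down to the following (a sketch for illustration; the decompiled Java above is authoritative):

```python
import random

def get_random_session_id() -> int:
    """Mirror of Utils.getRandomSessionId(): a random six-digit
    decimal number, later used verbatim as the Agora channel name."""
    return random.randint(100_000, 999_999)

# The entire channel-name space:
SESSION_ID_SPACE = 999_999 - 100_000 + 1   # 900,000 values
```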
By tracing this value all the way down the call chain to Agora’s nativeJoinChannel(), we discovered that this sessionId becomes the “channelName” parameter described in Figure 15 – the unique identifier for the chatroom for that particular call. The propagation of this channel name can be seen in Figure 18; the bolded parameter in each method call contains the channel name.
Brute-Forcing the Channel Name as an Attack Vector
So why were we so interested in the Agora channel name? Well, if we look back at Figure 15, we can see that it is one of only two fields needed to join a channel since the third and fourth fields are labeled optional. The only other required field is the “token”, but looking closer at the documentation reveals that “in most circumstances, the static App ID suffices”, and in those cases the token “is optional and can be set as null”:
Figure 26: Agora documentation for joinChannel’s “token” parameter
Our next question was, does temi use a token or does it use a static App ID? We quickly answered that question by looking at how the temi phone app was calling joinChannel():
Figure 27: Agora’s joinChannel() API function being called from AgoraEngine.joinChannel()
Sure enough, the token parameter is being set to null.
This was starting to look promising – if the static App ID and channel name are all that’s needed to join an Agora video call and we already knew that temi’s channel names are restricted to six-digit values, then it might be plausible to join existing temi calls through brute force means. All we would need is this “App ID”.
Looking back at the Agora docs, we searched for which API functions actually use this App ID. As it turned out, there was only one: RtcEngine.create().
Figure 28: Agora documentation for RtcEngine.create()
Put briefly, the App ID is a static value issued to developers that is unique to each project and is used as a sort of namespace, segregating different users of Agora’s servers. This ensures that temi users can only use Agora to call other temi users. Since any temi phone app user should be able to call any temi (or other user), there should be a single App ID shared by all temi robots. We decided to take a look at how temi’s code was invoking RtcEngine.create() to see if we could track down the App ID:
Figure 29: AgoraEngine.ensureRtcEngineReadyLock()
Well, that’s not good.
In our view, this was already a vulnerability, denoted by CVE-2020-16170 and having a CVSS score of 8.2. A dedicated attacker would have no problem iterating over all 900,000 possible channel names. Worse yet, doing so would let the attacker “spray and pray”, allowing them to connect to any ongoing temi call without needing to know anything about their victims.
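To make the scale concrete, here is the spray-and-pray loop in outline. The join_channel callable is a hypothetical stand-in for Agora’s joinChannel() with a null token; this is a sketch of the attack’s shape, not something we ran against production servers:

```python
def channel_names():
    """Every possible temi channel name: the six-digit session IDs."""
    for session_id in range(100_000, 1_000_000):
        yield str(session_id)

def spray_and_pray(join_channel):
    """Try every channel; yield those with an ongoing call.
    `join_channel(name) -> bool` stands in for the real SDK call
    (RtcEngine.joinChannel with token=None and the static App ID)."""
    for name in channel_names():
        if join_channel(name):
            yield name
```

With only 900,000 candidates and no token to guess, the limiting factor is simply how fast the attacker can attempt joins.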
While we certainly couldn’t test such an exploit against a production server, we did decide to test whether an attacker could join an existing temi call knowing only the App ID and channel name in advance. To facilitate this, we used an Android phone to call our temi, making sure to run logcat on the phone before the call started. Doing so, we were able to capture the invitation message containing the channel name (labeled “sessionId”) as it was being sent to the OkHttp client, which then logged it:
Figure 30: Finding the channel name for the call using logcat
Using the hardcoded App ID and the channel name obtained from the logs we were able to successfully join an ongoing call and get both audio and video from at least one of the other callers, proving that this is a viable attack vector.
Although exploitation of this vulnerability utilizes Agora’s video calling API, its existence is a result of temi’s specific implementation of the SDK. While the decision to hardcode temi’s Agora App ID into the phone app is the root cause of this vulnerability, its impact could also have been substantially mitigated by utilizing a token or by allowing a broader range of channel names. Either of these would have rendered the brute-force attack vector incredibly difficult, if not impossible.
Exploring MQTT Attack Vectors
Overview
Outside of a brute-force method involving joining existing temi calls by trying every possible channel ID, it would be useful to have a more targeted attack vector. Moreover, while joining a call using the brute-force method would let an attacker spy on users, it would not grant them control of the robot itself – we wanted a method that would let us do both. Fortunately, this level of control is already available during normal operation of the temi robot + phone app.
Although temi uses Agora to facilitate video calls, notifying users of incoming calls, through ringing or otherwise, is not a feature implemented by Agora. In the case of temi, this functionality is implemented using MQTT. MQTT is a publish/subscribe (pub/sub) connectivity protocol designed for “machine-to-machine (M2M)/Internet of Things” communication. In it, communications are categorized into “topics” and clients can either “subscribe” to these topics to receive all their related messages, “publish” their own messages to these topics, or both. These topics are structured into a hierarchy delineated in a manner very similar to UNIX-style filesystem naming schemes, with a forward slash (“/”) separating different topic “levels”. Communication between clients is facilitated via a system known as a “broker“, usually implemented as a remote server, which handles establishing and closing connections, message delivery, and client authentication.
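The pub/sub model is easiest to grasp with the topic-matching rules in front of you. The following Python sketch implements the standard MQTT filter-matching behavior (single-level “+” and multi-level “#” wildcards); it is generic MQTT, not temi-specific code:

```python
def topic_matches(topic_filter: str, topic: str) -> bool:
    """True if an MQTT topic filter matches a concrete topic.
    '+' matches exactly one level; '#' matches all remaining levels."""
    flevels = topic_filter.split("/")
    tlevels = topic.split("/")
    for i, level in enumerate(flevels):
        if level == "#":
            return True                # swallows everything below this point
        if i >= len(tlevels):
            return False               # filter is deeper than the topic
        if level not in ("+", tlevels[i]):
            return False
    return len(flevels) == len(tlevels)
```

A client might, for example, subscribe to a filter like client/&lt;robotId&gt;/# to receive every message addressed to that robot (a hypothetical topic layout, purely for illustration).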
Unfortunately, there were several hurdles we would need to overcome in order to leverage temi’s calling functionality as an unauthorized “attacker”.
Firstly, a targeted attack method requires, at a minimum, a reliable way of uniquely identifying the target. In other words, if you are interested in hacking Bob’s temi, you need a way to pick out Bob’s temi from every other temi. A softer but equally important requirement is that this identifier must be one that an attacker could plausibly obtain. The more difficult the identifier is to obtain, the more contrived and unrealistic any attack vectors that rely on this identifier become. As an example, a name or phone number might be a plausible identifier; a social security number, less so.
Reversing temi’s MQTT implementation in its phone app code had revealed a promising potential identifier. We discovered that each robot has its own MQTT topics that it listens on, which are identified using its MQTT client ID, otherwise known as its robot ID. The robot ID for a temi can be easily obtained through normal operation of the phone app by simply adding one of its users as a phone contact, a method we had described previously. This is because the temi phone app allows the user to call any temi registered to a phone number in the user’s contacts list, which requires that temi’s robot ID. If the phone app already has access to this information locally, then it should, at least in theory, be possible to pull it from the app.
Secondly, we needed a way to communicate with temi. If calling is facilitated through the publishing of MQTT messages to certain topics, we needed to find a way to publish our own messages to these same topics. On that note, if all MQTT messages must first pass through the broker, we needed a way to trick the broker into thinking we were a trusted MQTT client and bypass any authentication that might be in place.
One way to overcome this hurdle would be to alter the existing temi phone app. Modifying third-party Android apps is a well-known process and would let us leverage the code already present for sending MQTT messages instead of writing this code from scratch. Furthermore, since the phone app requires nothing more than the phone number of one of temi’s users in order to call it (and thus, publish MQTT messages on its topics), this means that the phone app must have a way of authenticating with the broker. If we could send custom MQTT messages to arbitrary topics from the context of a “trusted” phone app, then we likely wouldn’t need to worry about authentication.
This left us with our third and final hurdle: privilege escalation. Although cold calling a temi robot is not difficult to achieve, it does not grant us the ability to remotely operate the robot on its own. This is because calling temi in this way causes it to ring and requires the call to be accepted on temi’s end via its touchscreen. Only temi’s admin, the user who registered it via QR code scan, and privileged users manually selected by this admin may directly control temi without any user interaction on the other end. Thus, we needed to find a way to escalate our privilege so that temi would pick up our calls automatically.
If MQTT is the primary means of communication between temi and its phone app users, and admins can manage the privilege levels of users directly from the phone app, it stands to reason that privilege management is likely performed through MQTT. Thus, if we could alter the phone app to spoof a privilege escalation MQTT message, we could overcome this hurdle, as well.
Modifying the temi Phone App
Since our entire plan hinged on us being able to alter the bytecode of the phone app without either breaking the app or preventing it from authenticating with the various remote servers it communicates with, we decided to first confirm that this was even possible. To accomplish this, we used a combination of ADB, Apktool, Keytool, and Jarsigner.
We began by unpacking the APK file for the temi phone app using Apktool:
Figure 31: Unpacking the temi phone app
Next, we searched the unpacked APK for the file or code we wanted to modify. For our proof-of-concept, we decided to simply change the label for the “Call” button, since it would be immediately obvious whether or not it worked. In many Android apps, the strings used for buttons are typically not hardcoded and are instead loaded from a resource file. In this case, they were loaded from the file res/values/strings.xml:
Figure 32: Searching for the call button label in strings.xml
It looked like line 108 contained the label we wanted to change. We simply replaced “Call” with “PWN” and saved our changes.
Note: For less trivial modifications, like the ones we would need to make later, this process would naturally be more involved. Since even the most sophisticated Java decompilers are unable to produce code that will actually compile for non-trivial apps, meaningful modifications usually mean having to read and modify smali, the assembler for Android’s Dalvik bytecode. That being said, we found that the best approach to making meaningful changes to a complex app like temi’s is to read Java, write smali. By this, we mean that it’s better to do your reversing on decompiled Java code and only look at the nigh-hieroglyphic smali code once you know exactly what changes you want to make and what class to make it in.
Once our modification had been made, we used Apktool again to repack the APK:
Figure 33: Repacking our modified app
Our next step was to sign the modified APK. This is because Android refuses to install any unsigned APKs, even through ADB. First, we generated a key using Keytool:
Figure 34: Generating the key we will use to sign the modified app
and then we used our key to sign the APK using Jarsigner:
Figure 35: Signing the modified app
Finally, we installed the modified APK onto an Android phone using ADB:
Figure 36: Installing the modified app
After launching the app and selecting one of our contacts, we were greeted with a pleasant sight:
Figure 37: Testing the modified app
Furthermore, the change we made did not seem to impact the app’s functionality; we could still make and receive calls, add contacts, etc.
In our view, this was a vulnerability; it was later designated CVE-2020-16168, with a CVSS score of 6.5. An altered and potentially malicious app could still access the various cloud resources and user information made available to the temi app because no integrity checks were performed by either the app or the remote servers to ensure that the app had not been modified. As we’ll demonstrate later, the presence of this vulnerability made various other attack vectors possible.
The Relationship Between Robot IDs and MQTT Topics
In the overview section, we made the claim that “each robot has its own MQTT topics that it listens on, which are identified using temi’s MQTT client ID, otherwise known as its robot ID.” Here we will outline how we came to this conclusion and the specific format of a few of the topics that temi and its phone app subscribe/publish to.
Since we knew that temi uses MQTT to handle calling notifications, a natural place to start our investigation was the code used by the phone app to initiate calls. Moving down the call chain, we saw that the robot ID was being passed to the sendInitInvitation() method of the TelepresenceService class:
Figure 38: TelepresenceService.sendInitInvitation()
Here, the robot ID is used in two different places. On line 82, an MqttDelegateApi.Topic object is created with the name “users/<ROBOT_ID>/status”, implying that each robot has its own MQTT topic category for its status and the robot ID itself is used to uniquely identify these topics. Next, on line 87, we see one of many RxJava functions (the others have been omitted) with the robot ID passed as a parameter to it. This function’s only purpose is to call InvitationManagerImpl.sendInviteMsg(), passing along the Invitation and the robot ID as its arguments:
Figure 39: InvitationManagerImpl.sendInviteMsg()
This function is of particular interest because we see the construction of another MQTT topic name on line 332, this time taking the form “client/<ROBOT_ID>/invite”. Presumably, this is the topic format used when publishing call invitations to specific temi robots (and, likely, phone contacts).
Additionally, the anonymous function executed via doOnSuccess() uses MqttManager.publish() (line 359) to actually publish the call invitation on the callee’s call invite topic. This information would become useful when we later tried to send custom MQTT messages to our temi in order to facilitate privilege escalation.
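Putting these observations together, the topic and payload construction can be sketched in Python. The function names and Invitation fields below are our own placeholders for illustration, not temi’s actual identifiers (the real Invitation object is serialized with Gson inside the app):

```python
import json
import time

# Hypothetical reconstruction of the invite topic/payload the app appears
# to build; names like build_invite_topic are ours, not temi's.
def build_invite_topic(client_id: str) -> str:
    # Matches the "client/<ROBOT_ID>/invite" format seen on line 332
    return f"client/{client_id}/invite"

def build_invitation(caller_id: str, session_id: str) -> str:
    # Field names here are illustrative assumptions.
    invitation = {
        "callerId": caller_id,
        "sessionId": session_id,
        "timestamp": int(time.time() * 1000),  # the timestamp added by flatMap()
    }
    return json.dumps(invitation)

topic = build_invite_topic("0123456789abcdef")
payload = build_invitation("some-caller-id", "some-session")
print(topic)  # client/0123456789abcdef/invite
```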
How are MQTT Call Invite Messages Published?
If we were to publish our own MQTT messages, we would need a robust understanding of the code used by the temi phone app to publish arbitrary MQTT messages, which appears to be this MqttManagerImpl.publish() method.
To determine what was actually being passed to publish(), we needed to go through what each of the RxJava functions in InvitationManagerImpl.sendInviteMsg() was doing. Referring back to Figure 39:
On lines 331-339, Single.zip() is called, which simply creates a Pair from the MQTT topic string (“client/<ROBOT_ID>/invite”) and the Invitation object.
On lines 340-355, Single.flatMap() is called, which adds a timestamp to the Invitation object inside the Pair.
Presuming successful execution of the previous flatMap() call, Single.doOnSuccess() is called on lines 356-372. As mentioned previously, this is where the call to MqttManagerImpl.publish() occurs. Since this doOnSuccess() operates on the value returned by the previous call to flatMap(), the arguments being passed to publish() are:
Argument                                    | Description
(String) pair.first                         | The MQTT topic string
new Gson().toJson((Invitation) pair.second) | The Invitation object as JSON
0                                           | The integer “0”
false                                       | The boolean “false”
While it was obvious that the first argument is the MQTT topic the message is being published on and the second argument is the message itself, it was not immediately obvious what the third and fourth arguments were for. Digging down into the source code of the MQTT package being used (org.eclipse.paho.client.mqttv3) revealed their purpose:
Figure 40: MqttAsyncClient.publish()
After passing through a couple MqttManagerImpl methods, the four arguments listed above become the first four arguments passed to this internal publish() method. The JSON string (second argument), is converted from a string to a byte array in the interim; the rest of the arguments are unchanged.
Knowing this, it was clear that the second argument is indeed the MQTT message, the third argument is the QoS, and the fourth argument is a flag that specifies whether or not the message should be retained. A QoS value of “0” means that the message will be delivered to clients subscribed to the topic at most once. A retained flag of “false” means that new subscribers to the topic will not receive the most recent published message upon subscription.
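To make these two flags concrete, here is a toy Python model of a broker's handling of them. The real client is org.eclipse.paho and a real broker is a remote server; nothing below is temi code, just a sketch of the semantics:

```python
# Toy in-memory model of MQTT's QoS 0 and retain flags.
class ToyBroker:
    def __init__(self):
        self.retained = {}      # topic -> last retained payload
        self.subscribers = {}   # topic -> list of callbacks

    def publish(self, topic, payload, qos=0, retain=False):
        # qos=0: fire-and-forget; delivered to current subscribers at most once
        for cb in self.subscribers.get(topic, []):
            cb(topic, payload)
        if retain:
            self.retained[topic] = payload

    def subscribe(self, topic, cb):
        self.subscribers.setdefault(topic, []).append(cb)
        if topic in self.retained:  # retained messages greet new subscribers
            cb(topic, self.retained[topic])

broker = ToyBroker()
received = []
# Publish with retain=False, then subscribe afterwards:
broker.publish("client/abc/invite", "invite-1", qos=0, retain=False)
broker.subscribe("client/abc/invite", lambda t, p: received.append(p))
print(received)  # [] - with retain=False, late subscribers miss the message
```

This matches the call-invite use case: an invitation is only meaningful at the moment it is sent, so there is no reason to retain it for future subscribers or to pay the overhead of a higher QoS.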
Intercepting Calls
As we’ve already established, every temi robot has a unique MQTT client ID and many of the topics it subscribes to contain this ID to indicate that they are intended for that specific robot. If users of the temi phone app can receive calls in addition to making them, it stands to reason that they must also have a unique MQTT client ID – a robot ID equivalent. If there was an easy way to discover the client ID of another phone app user, it might be possible to subscribe to their topics and thus receive calls intended for them, making it worthy of investigation.
Referring back to Figure 22, we saw that if a call is initiated from the Recent Calls screen, a method called getMd5PhoneNumber() is used to obtain the client ID of the callee. While a temi doesn’t have a phone number to speak of, we began to suspect that the client ID for users of the temi phone app might just be an MD5 hash of their phone number.
Although we could have followed the code in order to track down exactly where this value comes from, we thought it might be easier to simply verify our suspicions. To do this, we first took the MD5 hash of the temi admin’s phone number and then performed a string search for this hash in every temi-related file we had access to.
Figure 41: The Google Voice number used to register with temi
Figure 42: Taking the MD5 hash of the phone number
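If our suspicion was right, deriving a phone user's client ID would be as simple as the following sketch. The phone number below is a placeholder (not the one from this article), and the exact normalization of the number before hashing (country code, leading “+”, etc.) is an assumption:

```python
import hashlib

# Hedged sketch: deriving a phone-app user's MQTT client ID the way the
# app appears to, i.e. an MD5 hash of the registration phone number.
phone_number = "15555550123"  # placeholder number
client_id = hashlib.md5(phone_number.encode("utf-8")).hexdigest()

# The per-user topics would then follow the formats seen in the app:
status_topic = f"users/{client_id}/status"
invite_topic = f"client/{client_id}/invite"
print(status_topic)
```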
Sure enough, this hash appeared in two places. First, it appeared in the primary SQLite 3 database file for the temi app running on the robot itself. More specifically, it appeared in the “RecentCallModel” table under the “userId” column for the table’s only entry. Based on the table’s name and sole entry, this is almost certainly used to store temi’s recent calls, as its admin was the only user it had called at this point.
Figure 43: The matching string in temi’s RecentCallModel table, part of its SQLite 3 Database
The second match was in the log output we saved after running logcat on our temi earlier, the same log output seen in Figure 30:
Figure 44: Matching strings in the logcat output recorded earlier
This log output appears to conclusively confirm our suspicions. With what we knew now, these log messages appeared to be JSON data being sent to/received from the MQTT broker. Moreover, the “topic” string in the first result exactly matched the MQTT topic format string found on line 82 of Figure 38. The only difference is that in that case, the string between “users/” and “/status” was the robot ID; here, it was an MD5 hash of the user’s phone number.
Since we now knew that
temi robots and phone app users received calls on the topic “client/<CLIENT_ID>/invite”, where “<CLIENT_ID>” is the MQTT client ID of the callee,
the MQTT client ID for phone app users was simply an MD5 hash of the phone number used to register with the app, and
we could alter the existing phone app,
it stood to reason that we could modify the app to subscribe to another user’s call invite topic in order to intercept calls intended for that user as long as we knew the user’s phone number. In theory, this would allow an attacker to effectively impersonate another user and spy on them by intercepting their calls. The question then became: What code needs to be modified in order to get this to happen? Well, we’re looking at a situation where temi initiates a call with its admin and an attacker attempts to intercept that call. In the case of calls initiated by the phone app, we discovered that call invitations are sent via InvitationManagerImpl.sendInviteMsg(), which publishes the call invite message on the topic “client/<ROBOT ID>/invite”. We suspected a similar approach was being used when a call is initiated from a temi to a phone user and decided to investigate to confirm.
Luckily for us, the exact same InvitationManagerImpl.sendInviteMsg() method could be found in the temi robot’s code, and it even seemed to function identically. Thus, it was probably safe to assume that the robot initiates calls with phone users in the same way: by publishing a call invitation to the topic “client/<CLIENT ID>/invite”, CLIENT_ID being the MQTT client ID of the callee.
If the caller publishes their call invites to a certain MQTT topic, it follows that the callee must subscribe to that same topic to receive the invite. Now that we knew the format of the call invite topic, the next step was to track down the code used by the Android app to subscribe to this topic so we could alter it to subscribe to the temi admin’s topic instead.
Performing a string search for this pattern in the decompiled phone app’s code produced three unique results:
Figure 45: Searching for “client/.+/invite”
The second result we recognized as being part of the code used to generate an outgoing call invitation. Thus, we were only interested in the first and third results.
We began by looking at the first result, found on line 170 of InvitationManagerImpl.java:
This reference is part of the method InvitationManagerImpl.sendInvitationAbortMsg(). Since we were interested in the code that subscribes to the call invite topic and not the code that publishes messages to it, we moved on.
The third result was found on line 523 of MqttManagerImpl.java:
Figure 47: MqttManagerImpl.buildInviteTopic()
This didn’t tell us anything about how the generated topic is used, so we took a step back and looked at what code invokes this method:
The call to buildInviteTopic() can be seen on line 408. There’s a lot going on in this method, but at a high level it appears that it is responsible for setting the callback functions for the MqttManager, which are being defined inline. More specifically, the invite topic string generated by buildInviteTopic() is used in the connectComplete() callback function, where it is passed as the first parameter to MqttManagerImpl.subscribe().
As expected, MqttManager’s subscribe() method is used to subscribe to a particular MQTT topic, with the first parameter being the topic string:
Figure 49: MqttManagerImpl.subscribe()
Thus, it appeared that we had found the code being used to subscribe to the call invite MQTT topic. Based on these findings, we decided the simplest approach would be to change the call to MqttManagerImpl.subscribe() on line 408 of Figure 48 – instead of passing it the topic string returned by MqttManagerImpl.buildInviteTopic(), we would instead hard-code it to pass the temi admin’s call invite MQTT topic string.
Using this approach, we were able to construct a modified temi phone app that would receive all calls intended for another user, as shown in the following video:
Here, the vulnerability was the lack of any authentication when publishing or subscribing to arbitrary topics, denoted by CVE-2020-16167 and having a CVSS score of 8.6. At a minimum, a check should have been made to ensure that clients cannot subscribe to another client’s topics.
Problem: temi Won’t Answer My Calls
Our next goal was to gain the ability to initiate a call with a temi and have it automatically answer. As mentioned previously, this capability is typically reserved for the temi’s admin and users explicitly authorized by the admin. Thus, if an attacker wishes to spy on a temi user, they would need to trick temi into thinking the call is originating from one of these authorized users.
With no other leads, we began searching through the robot’s codebase for keywords related to user permissions, ownership, admin, etc. until we found a very promising enum class:
Figure 50: Class used for delineating the various roles held by temi’s users
This enum is used in com.roboteam.teamy.users.User, the class used to store information about individual temi users, with User.role being an object of the Role enum class:
Figure 51: Class used to describe individual temi users/contacts
Going back to Figure 50, if we assume that Role.ADMIN refers to the user that originally registered temi and that Role.CONTACT refers to a normal user, then Role.OWNER likely refers to a user that has been authorized to “hop in” to the robot by the admin. To verify this, we looked for places in the code where this enum is used. There are 59 unique references to the Role class, but we’ll only be looking at the pertinent ones here.
The first reference we’ll be looking at appears on lines 351 and 357 of ConferencePresenter.handleViewForCallTypes():
Figure 52: ConferencePresenter.handleViewForCallTypes()
On line 351, a string comparison is performed between this.resourcesLoader.getString(R.string.privacy_support_support_caller_name) and this.callerDisplayName; if the two match, a new User is created with a role of CONTACT. So just what is this string that the ‘if’ statement is checking against? We decided to take a look at where this constant is defined:
Figure 53: Searching for “privacy_support_support_caller_name”
Taken together, this means that if the caller’s display name is exactly “Support”, a new User is created with CONTACT privileges. This check likely exists in the event that a member of temi’s support staff calls a user. While this was certainly interesting, it is a niche scenario.
What happens if the caller’s name is not “Support”? In that case, the else statement on line 352 is hit, which simply sets user to the result of getUserByPeerId():
Figure 54: ConferencePresenter.getUserByPeerId()
This method tries to obtain the User associated with the current caller by performing a lookup in temi’s UsersRepository using the caller’s MQTT client ID. If the lookup succeeds, the found User is returned; if it fails, a new User is created with Role.CONTACT privileges.
As mentioned previously, Figure 52 contains two references to the Role class. Let’s now look at the second reference, found on line 357. Here, the role of the user is checked. If they are either an ADMIN or an OWNER, temi:
waits 1.5 seconds (line 362)
checks if the call hasn’t already ended (line 365)
if it hasn’t, answers the incoming call (line 370)
Otherwise, the function returns after setting the app’s view for an incoming call.
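The decision logic described above can be reconstructed in Python for clarity. This is our summary of the behavior, not temi's actual code (which lives in ConferencePresenter):

```python
# Reconstruction of temi's auto-answer decision, as described above.
from enum import Enum

class Role(Enum):
    ADMIN = 1    # the user that registered temi via QR code scan
    OWNER = 2    # a user promoted by the admin
    CONTACT = 3  # everyone else, including unrecognized callers

def should_auto_answer(caller_role: Role) -> bool:
    # Only ADMINs and OWNERs are answered automatically (after a 1.5 s
    # delay and a check that the call hasn't already ended). CONTACTs
    # cause temi to ring and wait for the touchscreen.
    return caller_role in (Role.ADMIN, Role.OWNER)

print(should_auto_answer(Role.OWNER))    # True
print(should_auto_answer(Role.CONTACT))  # False
```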
Solution: Become an OWNER
To recap what we’ve learned thus far:
The role of ADMIN appears to be reserved for the user that originally registers temi via QR code scan.
If the user calling temi is not recognized, a new User object is created for them with CONTACT privileges.
If the user calling temi is recognized, their role is checked:
If they are a CONTACT, temi waits for the call to be answered via touchscreen.
If they are either an ADMIN or an OWNER, temi answers the call automatically. This is the behavior we want.
Scanning the temi’s QR code for the purposes of registration is only possible when temi is first powered on or after performing a factory reset. Thus, the attacker cannot realistically make themselves an ADMIN. And since CONTACT privileges are insufficient to get temi to pick up automatically, our best bet was to figure out how to obtain the role of OWNER.
Adding an OWNER: Phone App’s Perspective
Although we knew that it was possible to promote a user to an OWNER using the phone app and we suspected that was achieved via MQTT, we still weren’t sure of what goes on “under the hood” to make that happen.
After some searching, we found that AddOwnersPresenter.addOwners() is called whenever an admin selects one or more users to be granted OWNER privileges:
Figure 55: AddOwnersPresenter.addOwners()
Here, selectedIds refers to the MQTT client IDs of the users selected to be promoted and robotId refers to the MQTT client ID of the temi robot this method is granting permissions for.
The majority of this method’s body has been trimmed because we’re really only concerned with what’s happening on lines 104-106, which seems to handle sending the request to add an OWNER – the rest of the method is dedicated to logging and updating local databases to reflect the new OWNERs.
This request is sent by first fetching the unique private key used to identify this app’s instance (line 101), and then creating a new AddRemoveOwnersRequest using the selectedIds, robotId, and this private key:
Figure 56: AddRemoveOwnersRequest
The constructor for AddRemoveOwnersRequest creates a timestamp for the request, then creates a new AddOwnersRequestRequest, which contains the body of the request, and finally uses the passed-in privateKey in order to generate a signature for the AddOwnersRequestRequest. In other words, AddRemoveOwnersRequest is nothing more than a wrapper for the real request and is used to store its signature.
We decided to look at this AddOwnersRequestRequest object next. While the ownerIds, robotId, and timestamp members were mostly self-explanatory, source and type were less so. Looking back at line 8, we saw that source was being set to a hardcoded value of “ADMIN”, which seemed to imply that this was the origin of the request. Looking back at line 6, we saw that type is simply the passed in OwnersRequestType enum (in our case, OWNERS_ADD_TYPE) converted to a string. This enum, defined near the bottom of the class, can take on two values: OWNERS_ADD_TYPE and OWNERS_REMOVE_TYPE. This implied that this same structure was recycled for requests meant to demote OWNERs back to CONTACTs.
Thus, we determined that AddRemoveOwnersRequests had the following structure:
Figure 57: Anatomy of an AddRemoveOwnersRequest
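Based on this analysis, an AddRemoveOwnersRequest can be modeled roughly as follows. The exact JSON field names and the signature scheme are assumptions on our part; the signing function is modeled as an opaque callable:

```python
import json
import time

# Rough model of the AddRemoveOwnersRequest structure inferred above.
def build_add_owners_request(owner_ids, robot_id, sign):
    inner = {                       # the AddOwnersRequestRequest (request body)
        "ownerIds": owner_ids,      # MQTT client IDs of users to promote
        "robotId": robot_id,        # MQTT client ID of the target temi
        "source": "ADMIN",          # hardcoded origin, as seen in the app
        "timestamp": int(time.time() * 1000),
        "type": "OWNERS_ADD_TYPE",  # or "OWNERS_REMOVE_TYPE" to demote
    }
    body = json.dumps(inner, sort_keys=True)
    # The outer wrapper stores the signature over the inner request
    return {"request": inner, "signature": sign(body)}

req = build_add_owners_request(
    ["some-md5-client-id"], "robot-123",
    sign=lambda body: "<signature over body>",  # stand-in for the real signing
)
print(req["request"]["type"])  # OWNERS_ADD_TYPE
```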
Now that we knew the structure of these requests, we next wanted to know how they were being sent and where. To this end, we decided to look at OwnersApi.addOwners(), which, according to Figure 55, is actually sending the AddRemoveOwnersRequest.
Figure 58: OwnersApi.addOwners()
The body of this method didn’t tell us much, but the imports for the OwnersApi class gave us a clue: this was using the Retrofit 2 HTTP Client for Android, which is typically used to send HTTP requests to a REST API. According to this Retrofit tutorial, the @POST annotation “indicates that we want to execute a POST request when this method is called” and “the argument value for the @POST annotation is the endpoint.” The @Body annotation, on the other hand, indicates that we want the AddRemoveOwnersRequest to serve as the body of the POST request.
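Stripped of Retrofit’s annotations, the call amounts to an ordinary HTTP POST. The sketch below builds (but deliberately never sends) such a request; the base URL is a placeholder, since the real one comes from elsewhere in the app:

```python
import json
import urllib.request

# What OwnersApi.addOwners() amounts to: a POST to the REST endpoint,
# with the AddRemoveOwnersRequest serialized as the body.
BASE_URL = "https://rest-server.example.com/"  # placeholder, not the real URL

body = json.dumps({"request": {"ownerIds": ["..."]}, "signature": "..."})
req = urllib.request.Request(
    url=BASE_URL + "ownership/admin/add/owners",  # the @POST endpoint
    data=body.encode("utf-8"),                    # the @Body argument
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)
# We never call urlopen(); this only illustrates the request shape.
```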
Okay, so this method is simply sending the request to add an OWNER via POST to some REST server at the endpoint “ownership/admin/add/owners”. Our next question became: Where is this REST server located?
Part 5 of that same Retrofit tutorial told us that the getClient() method is typically used to obtain/create a Retrofit instance, and the only argument it takes is a string containing the URL for the REST server. Searching for “getClient()” in the phone app’s code led us to ApiClient.getClient():
Figure 59: ApiClient.getClient()
Working backwards from this method, we were able to track down the server’s URL:
Figure 60: URLs for the MQTT broker and REST server
This URL confirmed that the recipient of this request was not the temi robot and it was not being sent via MQTT, contrary to our initial assumptions. This raised the question: If the phone app wasn’t sending these requests to temi, how was temi being notified of any updates to the privileges of its users? We hypothesized that this REST server was simply a middleman whose job was to authenticate all requests to add/remove OWNERs by checking the request’s signature against the admin’s public key that was saved during temi’s initial registration process. This extra level of authentication made sense since privilege management was a particularly sensitive functionality. Presumably, this same REST server would then forward the request to the robot if it determined that the request’s signature was valid.
We took some time trying to send these requests from a user that wasn’t temi’s admin, but they failed, lending some credence to our theory. This was looking like a dead end.
Adding an OWNER: temi’s Perspective
Well, if we couldn’t successfully spoof the request the phone app sends to the REST server, perhaps we could instead spoof the request the server sends to temi, bypassing the authentication mechanism altogether. We started looking for the code temi used to handle these requests.
Searching specifically for “Role.OWNER” in the temi’s decompiled code led us to OwnersController$getUsersRepository$2.apply():
Starting with OwnersController$getUsersRepository$2, we moved up the call chain in an attempt to discover how temi processes requests to add OWNERs. More concretely, this was accomplished through a liberal use of the “Find Usage” feature in JADX, which can be done by simply right-clicking virtually any symbol. Although convenient, “Find Usage” would often fail when the method in question was not invoked directly, such as when an implementation of an abstract class/interface served as the middleman. In such cases, we would perform a string search for instances where the method was invoked in the smali code. To help separate invocations from declarations, we took advantage of the smali syntax for method calls, which consisted of the name of the class the method belonged to, followed by a right arrow (->), followed by the method’s signature.
As an example, “Find Usage” failed for the accept() method of the class UsersAdminTopicListener$listenToUserAdminTopic$1, so to find where it’s invoked, we ran:
Figure 62: Searching for UsersAdminTopicListener$listenToUserAdminTopic$1.accept() in the smali code
For especially indirect invocations, like dependency injection, more creative means had to be used, but a combination of “Find Usage” and string searches such as these got us 99% of the way there.
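For example, the string search for smali method invocations can be expressed as a regular expression built around that Class->method(signature) call syntax. This is a rough approximation of smali's grammar, sufficient for grepping:

```python
import re

# Approximate pattern for smali method invocations:
#   Lpackage/Class;->method(ArgTypes)ReturnType
SMALI_CALL = re.compile(r"L[\w/$]+;->\w+\([^)]*\)[\w\[/;]*")

sample = (
    "invoke-virtual {v0, v1}, "
    "Lcom/roboteam/teamy/UsersAdminTopicListener$listenToUserAdminTopic$1;"
    "->accept(Ljava/lang/Object;)V"
)
match = SMALI_CALL.search(sample)
print(match.group(0) if match else "no match")
```

Running a pattern like this across the unpacked APK's smali files surfaces call sites that JADX's "Find Usage" misses.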
Using this approach, we found that the mechanism begins with the UsersAdminTopicListener class, which, as the name suggests, handles listening on temi’s “users admin” topic. Moving down the call chain from here, we found out how messages received on this topic are processed by temi and ultimately used to alter the Role, or privilege level, of certain contacts. Based on our earlier analysis, it was likely that the REST server would forward the request sent by the phone app to this MQTT topic.
We found that the bulk of the work is performed by a method called listenToUserAdminTopic():
This function does several things. First, on line 50, it creates a Flowable object by calling UsersAdminTopicListener.getOwnerRequestFlowable(). Next, on lines 51-56, it subscribes to this Flowable and for each emitted OwnersRequest, it calls OwnersController.handle$app_usaDemoRelease() upon success or simply logs upon failure.
We decided to first look at the code for getOwnerRequestFlowable():
It begins by converting an Observable obtained via mqttPipeline.observe() into a Flowable.
Next, it throws out all emitted MqttMessages whose topic doesn’t match the regex “users/.+/admin” via RxJava’s filter() method.
RxJava’s map() method is then used to convert the body of the MqttMessage from JSON to an object of the OwnersMessage class.
Finally, map() is used a second time to extract and return the OwnersRequest from each OwnersMessage.
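Re-expressed as plain Python, the pipeline looks roughly like this. The JSON field names are assumptions about the wire format:

```python
import json
import re

# The filter()/map()/map() Rx pipeline above, as a plain Python generator.
ADMIN_TOPIC = re.compile(r"users/.+/admin")

def owner_requests(mqtt_messages):
    for topic, body in mqtt_messages:
        if not ADMIN_TOPIC.fullmatch(topic):   # filter(): drop other topics
            continue
        owners_message = json.loads(body)      # map(): JSON -> OwnersMessage
        yield owners_message["ownersRequest"]  # map(): extract the OwnersRequest

msgs = [
    ("users/robot-123/admin",
     json.dumps({"ownersRequest": {"ownerIds": ["abc"], "type": "OWNERS_ADD"}})),
    ("client/robot-123/invite", "{}"),         # filtered out by the regex
]
for req in owner_requests(msgs):
    print(req["type"])  # OWNERS_ADD
```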
At this point, we decided it would be useful to understand the structure of OwnersRequests and OwnersMessages, since these seem to be key to temi’s privilege management mechanisms:
Figure 65: Anatomy of an OwnersMessage
Put briefly, each OwnersMessage is nothing more than a wrapper for an OwnersRequest, which consists of ownerIds, a list of the MQTT client IDs of the users whose privileges are being modified, and type, which indicates whether the request is trying to promote a CONTACT to an OWNER (OWNERS_ADD) or demote an OWNER back to a CONTACT (OWNERS_REMOVE).
Comparing this to Figure 57, an OwnersMessage appears to be an AddRemoveOwnersRequest without the signature. Similarly, an OwnersRequest appears to be a stripped-down AddOwnersRequestRequest, with the robotId, source, and timestamp omitted. This meshes well with our earlier hypothesis that the REST server’s job is to authenticate and forward AddRemoveOwnersRequests to the temi robot. The signature, timestamp, and source would be omitted since they’ve already been verified by the server; the robotId, while needed by the server to know where to forward the request, becomes redundant once the request reaches temi.
Our next step was to figure out what handle$app_usaDemoRelease() does. Since it’s quite the rabbit hole, however, we will summarize its effects in lieu of venturing down it:
It queries the Users table in temi’s local database for all users with IDs matching any of the ones in the OwnersRequest’s ownerIds.
It replaces the Role for each of these users with one corresponding to the OwnersRequest’s type: OWNERS_ADD→ OWNER, OWNERS_REMOVE → CONTACT.
It updates temi’s Users table with the updated user information.
This was promising, since it meant that we could potentially trick temi into promoting an arbitrary user/contact to an OWNER simply by crafting a custom OwnersRequest and publishing it on temi’s “users/<ROBOT_ID>/admin” topic, thereby bypassing the authentication server entirely.
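If our reading of the code was correct, the spoofed message could be built like so. Field names are, again, assumptions about the wire format, and actually publishing it would reuse the modified phone app’s authenticated MQTT connection:

```python
import json

# Sketch of the privilege-escalation message an attacker would publish
# directly to temi's admin topic, bypassing the REST server entirely.
def build_privilege_escalation(robot_id: str, attacker_client_id: str):
    topic = f"users/{robot_id}/admin"
    message = {
        "ownersRequest": {
            "ownerIds": [attacker_client_id],
            "type": "OWNERS_ADD",  # promote CONTACT -> OWNER
        }
    }
    return topic, json.dumps(message)

topic, payload = build_privilege_escalation("robot-123", "attacker-md5-id")
print(topic)  # users/robot-123/admin
```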
Unfortunately, step 1 above reveals a potential obstacle for this plan: since temi will only process OwnersRequests for users already present in its local Users table, we must first add ourselves to this table for this strategy to succeed. Recalling our earlier analysis of how temi handles unrecognized callers, one way of accomplishing this was to simply cold call temi, which would cause it to automatically add the caller to its contacts list. This was far from ideal, however, since cold calling temi without already having the role of OWNER or ADMIN would cause it to ring and display the caller’s username on its screen, potentially alerting temi’s users that something weird is going on.
Detour: Sneaking Onto temi’s Contact List
Before continuing on, we decided to take a brief detour to find a better way for an attacker to add themselves to temi’s contact list.
From our prior investigation into how temi implements its various privilege levels through the use of Roles, we discovered that temi uses the User class to define its various users. Thus, it follows that any code used to add a new user to temi’s local contact list would first create a new User object, so that’s exactly what we searched for.
Figure 66: Searching temi’s code for “new User(“, trimmed
Figure 66 shows the result that we followed up on; the others have been omitted. The name of the containing class, SyncContactsController, sounded promising on its own since syncing contacts from temi’s ADMIN would likely involve adding new contacts without needing to start a call, which was exactly what we were trying to do.
Using largely the same strategy we employed for tracing the code flow for adding OWNERs (JADX’s “Find Usage” feature + grepping the smali code), we were able to trace the code flow all the way back to the app’s entry point. With a more holistic understanding of the mechanism used to sync contacts, we realized that the process is ultimately kicked off by SyncContactsTopicListener.subscribeMqttPipeline():
The first thing this method does is take this.mqttPipeline and turn it first into an Observable (using MqttPipeline.observe()) and then into a Flowable (using RxJavaInterop.toV2Flowable()), as seen on lines 29 and 30.
Essentially, this.mqttPipeline acts as a relay. Incoming MQTT messages are pushed to the relay using its push() method, which it then relays to all its observers.
The output of this relay is then filtered on lines 31-38 to only return MQTT messages received on topics matching the regex “synccontacts/.+”. Based on the other MQTT topic strings we’ve seen up to this point –
“users/<CLIENT_ID>/status”
“users/<CLIENT_ID>/admin”
“client/<CLIENT_ID>/invite”
– we were fairly certain temi’s client ID goes after the forward slash. Thus, temi appeared to be listening on the MQTT topic “synccontacts/<CLIENT_ID>” for messages regarding contact synchronization.
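The filter described above is easy to reproduce. The regex ("synccontacts/.+") is the one from the decompiled listener; everything else in this small Python sketch is illustrative:

```python
import re

# "synccontacts/.+" is the topic regex from the decompiled listener.
SYNC_TOPIC = re.compile(r"synccontacts/.+")

def is_sync_contacts_topic(topic):
    """Return True if an incoming MQTT topic would pass temi's filter."""
    return SYNC_TOPIC.match(topic) is not None
```

Any topic of the form "synccontacts/&lt;CLIENT_ID&gt;" passes, while the other topics we encountered (e.g. "users/&lt;CLIENT_ID&gt;/admin") do not.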
On lines 39-43, the now-filtered MQTT messages emitted by the relay are passed to a call to RxJava’s map() function, which converts each incoming MQTT message from JSON to an object of the SyncContactsMessage class. We quickly determined that SyncContactsMessages had the following structure:
Figure 68: Anatomy of a SyncContactsMessage
Put briefly, each SyncContactsMessage consisted of a senderClientId, a string holding the MQTT client ID of the request’s sender, and contacts, a list of ContactEntry objects. Each ContactEntry object in the list corresponded to a contact to be synced to temi’s contact list, containing both their clientId and their name.
Finally, on lines 45-49, SyncContactsController.save() would be called on each SyncContactsMessage spit out by the prior call to map():
Figure 69: SyncContactsController.save()
This method is doing a lot, but we’ll focus on only the most pertinent side-effects:
On lines 53-62, all messages where the senderClientId does not match the temi admin’s client ID are discarded. This will become important later.
On lines 63-75, the list of contacts is extracted from the SyncContactsMessage and is used to build a list of User objects – one per contact. The Users produced by this method are initialized in the following manner:
Member        Assigned Value
User.id       ContactEntry.clientId
User.name     ContactEntry.name
User.picUrl   ""
User.role     Role.CONTACT
User.userId   SyncContactsMessage.senderClientId
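The mapping above can be sketched in Python as follows. This is a hypothetical rendering of the decompiled Java logic, with messages and Users modeled as plain dicts; the key takeaway is that every synced contact is assigned Role.CONTACT and attributed to the message’s senderClientId.

```python
def users_from_sync_message(msg):
    """Build User records from a SyncContactsMessage, per the table above."""
    return [
        {
            "id": entry["clientId"],
            "name": entry["name"],
            "picUrl": "",                     # always the empty string
            "role": "CONTACT",                # every synced contact is a CONTACT
            "userId": msg["senderClientId"],  # attributed to the claimed sender
        }
        for entry in msg["contacts"]
    ]
```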
On lines 81-85, our newly-minted list of User objects is passed to insertOrUpdateContact(). This method writes the list of Users to the Users table in temi’s SQLite 3 database, completely overwriting temi’s old contacts list in the process.
So now that we knew how an ADMIN’s contacts are synced to their temi, how could we leverage that knowledge to add ourselves to temi’s contact list in a discreet fashion? Well, if we could publish a message to temi’s synccontacts MQTT topic, we could add ourselves as a contact. Although temi does perform a check to make sure that the sender of the message is its ADMIN, it makes the mistake of trusting that the contents of the message accurately reflect the actual sender. In theory, there’s nothing stopping us from publishing a SyncContactsMessage from one client ID and setting the senderClientId field in the message to a completely different ID – the ADMIN’s client ID, for example.
Based on our understanding of how the temi robot parses MQTT requests to sync contacts, we crafted a JSON message that should decode into a valid SyncContactsMessage object and would add our “attack” phone to temi’s contacts:
Figure 70: Our custom SyncContactsMessage in JSON format
This message was crafted using the following rationale:
Objects in JSON are indicated via curly braces ({}). Since the entire message contents are being converted into a single SyncContactsMessage object, it follows that the contents should be contained within a pair of curly braces, representing the SyncContactsMessage object itself.
A SyncContactsMessage contains a senderClientId, a string indicating the client ID of the “sender” of the message. Thus, we added an element to our object with the key “senderClientId” and the string “060f62296e1ab8d0272b623f2f08f915” – the client ID of the temi’s admin – as its value.
A SyncContactsMessage also contains contacts, a list of ContactEntry objects. Thus, we added an element with the key “contacts” and a new list as its value. In JSON, lists are indicated via square brackets ([]).
In our case, the contacts list contains only a single ContactEntry – one corresponding to the “attack” phone, so we added a single pair of curly braces to the list to indicate the contents of this single ContactEntry.
Each ContactEntry contains a clientId. Thus, we added an element to the object with the key “clientId” and the string “fe5d7af42433f0b6fb6875b6d640931b” – the client ID of the “attack” phone – as its value.
Each ContactEntry also contains a name. Thus, we added a second element to the object with the key “name” and the string “Test” as its value. This simply represents the display name for the contact, so we could set it to basically whatever we liked.
As for spacing and newlines, we referred to the Gson User Guide, since that’s the library temi was using to perform JSON serialization/deserialization. The guide states:
“The default JSON output that is provided by Gson is a compact JSON format. This means that there will not be any whitespace in the output JSON structure. Therefore, there will be no whitespace between field names and its value, object fields, and objects within arrays in the JSON output.”
As a result, whitespace and newlines have been omitted.
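The rationale above can be reproduced in a few lines of Python. The two client IDs are the ones quoted in the bullets; passing separators=(",", ":") to json.dumps mimics Gson’s compact, whitespace-free default output.

```python
import json

# Recreate the SyncContactsMessage from Figure 70 using the client IDs
# quoted in the rationale above.
message = {
    "senderClientId": "060f62296e1ab8d0272b623f2f08f915",  # temi admin's ID
    "contacts": [
        {
            "clientId": "fe5d7af42433f0b6fb6875b6d640931b",  # "attack" phone
            "name": "Test",  # display name; could be anything
        }
    ],
}
# separators=(",", ":") strips all whitespace, matching Gson's compact form.
payload = json.dumps(message, separators=(",", ":"))
```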
Now that we understood where to publish the request and what the request should look like, the next step was figuring out how to alter the temi phone app to send this request. Since we were aiming for a simple proof-of-concept, we prioritized ease and speed of implementation over robustness or elegance when considering which code to change. To this end, it made sense to leverage the code already present in the app that was dedicated to publishing MQTT messages since it would minimize the amount of code we needed to change and thus reduce the risk of introducing unintended bugs into our altered app – an ever-present risk when directly modifying an app’s bytecode. While the phone app publishes many MQTT messages to various topics during its runtime, we decided to try using the phone app’s video calling functionality. This had the obvious advantage of giving us clear control over when and how often the MQTT message is published, since calls are always initiated by hitting the “Connect” or “Call” buttons. Ultimately, we decided to leverage InvitationManagerImpl.sendInviteMsg() for this purpose. If we refer back to Figure 39 and our section “How are MQTT Call Invite Messages Published?”, the reason for this becomes apparent: sendInviteMsg() has a direct interface to MqttManagerImpl.publish(), the underlying method temi uses to publish arbitrary MQTT messages, while still being specific to the call chain for outgoing video calls. This meant that it would only get executed when we manually initiated a call from the app.
Running our altered app resulted in our attack phone being added to temi’s contact list, as shown in the following video:
Gaining OWNER Privileges
All that was left was for us to craft and publish a custom OwnersMessage in order to gain OWNER privileges on our temi. As before, we began by crafting the JSON for the message itself:
Figure 71: Our custom OwnersMessage in JSON format
This message was crafted using the following rationale:
Since the entire message contents are being converted into a single OwnersMessage object, it follows that the contents should be contained within a pair of curly braces, representing the OwnersMessage object itself.
An OwnersMessage contains a request, an object of the OwnersRequest class, and nothing else. Thus, we added an element with the key “request” and a new object as its value. The contents of this inner object will become our OwnersRequest.
An OwnersRequest contains a type, an enum specifying the type of request. In our case we want to add an OWNER, so we added an element with the key “type” and the string “OWNERS_ADD” as its value. As for why we’re expressing this enum value as a string, it’s because this article shows that Gson’s default behavior is to express enum values as strings corresponding to the name of the enum value.
An OwnersRequest also contains ownerIds, a list of strings enumerating the temi contacts the request should apply to. In our case, the list contains only a single ID – the ID of the “attack” phone, which is just the MD5 hash of its phone number.
As before, spaces and newlines have been omitted, per the Gson User Guide.
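Putting the pieces together, the OwnersMessage can be built the same way as the earlier message. The phone number below is a placeholder, not the one we used; per the bullet above, a contact’s ID is simply the MD5 hex digest of their phone number.

```python
import hashlib
import json

# Hypothetical phone number for the "attack" phone; per the article, a
# contact's ID is just the MD5 hash of their phone number.
phone_number = "+15555550123"
owner_id = hashlib.md5(phone_number.encode("utf-8")).hexdigest()

# The OwnersMessage wraps a single OwnersRequest, as described above.
owners_message = {
    "request": {
        "type": "OWNERS_ADD",   # Gson serializes enum values by name
        "ownerIds": [owner_id],
    }
}
payload = json.dumps(owners_message, separators=(",", ":"))  # Gson-style compact
```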
Fortunately, we were able to leverage the code modifications we used to publish our SyncContactsMessage, since the only things changing were the MQTT topic we’re publishing to and the contents of the message.
Our full test consisted of running our first modified app in order to add ourselves to temi’s contact list, followed by our second modified app in order to perform privilege escalation, as shown in the following video:
The call appeared to be fully functional, and we were able to drive temi around, navigate to its saved locations, and receive its audio/video feeds. All an attacker needed to make this possible was the phone number of one of temi’s contacts.
This authentication bypass was the final and most impactful vulnerability we discovered in the temi robot, denoted by CVE-2020-16169 and carrying a CVSS score of 9.4. To better understand how this constitutes an auth bypass, let’s compare temi’s privilege management during normal operation:
Figure 72: temi’s privilege management during normal operation
to how it looks using our modified app:
Figure 73: temi’s privilege management with auth bypass
As you can see, although authentication is in place for adding OWNERs, it can be circumvented entirely by simply spoofing the MQTT message temi expects to receive from the REST server post-authentication.
Refining the Exploits
Once our exploits had the capabilities we initially set out to obtain, we got to work refining them. We created a single modified app that combined all of the MQTT attack vectors we previously outlined: it could intercept calls and send both MQTT messages necessary to obtain OWNER privileges, all with a single button press. Furthermore, we added some additional logic to automatically extract the client IDs required by our custom MQTT messages from the app’s contact list, meaning our app would work on any temi robot. To see all these features in action, please refer to the video below:
Impact
At this point it would be prudent to take a step back, review the combined capabilities of the exploits we’ve outlined, and consider what impact these might have in a real-world setting.
At the time of discovery, the vulnerabilities in the temi robot meant that an attacker could join any ongoing temi call simply by using a custom Agora app initialized with temi’s hardcoded App ID and iterating over all 900,000 possible channel names – certainly feasible with modern computing power. This becomes more plausible if one considers that our testing revealed that joining a temi call this way does not notify either of the existing call participants that another user is present, since the apps were only designed with 1-on-1 calling in mind. The fact that the attacker needs no information on the victims makes this attack vector worrisome, but it also means that the attacker has no control over who he or she spies on using this method. Realistically, a malicious actor might use this exploit as a means of reconnaissance – by collecting data on arbitrary temi users, they could follow up on the more “promising” ones later with one of the targeted attack vectors. Given temi’s limited adoption by consumers but ramping adoption in industries such as healthcare, the odds are in the attacker’s favor that they stumble upon sensitive information.
Furthermore, if an attacker has a specific victim in mind, they could leverage the lack of per-topic authentication in temi’s MQTT implementation in order to intercept calls between a user and their temi. The plausibility of this particular attack vector is difficult to evaluate. On the one hand, the only information the attacker needs is the victim’s phone number, and telemarketers consistently remind us that this is a very low bar. On the other hand, this exploit’s behavior during our testing proved somewhat inconsistent. At times, the user running the exploit could listen in on the call as a third party with the call participants being none the wiser. At other times, the user running the exploit would disconnect the original caller and take their place. It is easy to imagine scenarios where both of these outcomes are desirable to a malicious actor, and neither bodes well for the privacy of the user being targeted. It is also important to note that although we focused on using this vulnerability to intercept calls, it can also be used to intercept all sorts of notifications intended for another user: contacts being added, when temi goes online/offline, its current location, etc. While less flashy than intercepting a call, this approach is more discreet and might allow a dedicated attacker to learn a user’s routine, only showing their hand to listen in when it would be most impactful.
Of the possible attack vectors, we believe that the ability to call and control a temi remotely, leveraging the authentication bypass present in temi’s privilege management mechanism, is the most impactful. The bar for this attack vector is arguably even lower than the previous, as the attacker would only need the phone number of any of temi’s contacts – it need not be its admin. In our testing, none of the steps involved in leveraging this exploit notify temi’s admin in any way that something is amiss; they are not notified that the attacker has added themselves to the robot’s contact list nor that they have gained raised privileges. Since this method does not cause temi to ring, an observer would have to see the attacker move temi or have a good look at its touchscreen during the attack to know something nefarious was going on. This level of control and subtlety becomes especially problematic when we consider some of temi’s applications in industry. In April of this year, Robotemi Global Ltd. stepped up production to 1,000 units per month in response to Israel’s Ministries of Defense and Health choosing temi to assist with patients in their COVID-19 wards. In South Korea, temi sees use in both public places and nursing homes, helping to facilitate social distancing. Besides the obvious impact of compromising patients’ sensitive medical information during remote doctor’s visits, the ability to have eyes and ears into a hospital is worrying in its own right. It isn’t difficult to imagine what a malicious agent might do with an overheard network password, access code to a sensitive area, or the location and condition of a person of interest.
Conclusion
The findings outlined in this paper highlight the importance of secure coding practices and security auditing for cutting edge technologies, particularly in the IoT space. When an IoT camera evolves into one that can also drive around your home or business and even assist medical patients, the need to properly secure who can access it only becomes more important. While history has demonstrated that any sufficiently complex software is bound to have vulnerabilities, there are many steps we can take to substantially raise the bar for bad actors. In our view, these responsibilities are shared between vendors, consumers, and the security industry as a whole.
First and foremost, vendors can ensure they are employing proper security hygiene when designing products. Often, best practices consist not of novel, out-of-the-box thinking, but rather adopting time-tested guidelines like the principle of least privilege. In the case of temi, application of this principle likely would have addressed CVE-2020-16167, which allows clients to subscribe/publish to topics they have no business accessing. Considerations for security should also extend beyond the development phase, with third-party security auditing being a powerful tool that allows vendors to discover and patch vulnerabilities before the bad guys get ahold of them. Taking vulnerability disclosure seriously and cooperating with the bodies that report them can often net similar benefits, and this is an area Robotemi excelled in.
Consumers of these technologies, at either the individual or enterprise level, also share some of the responsibility. Companies should ideally vet all technologies before widespread adoption, particularly if these technologies have access to customer information. It goes without saying that greater risk warrants greater scrutiny. Individuals, while typically having fewer resources, can also take certain precautions. Placing IoT devices on a VLAN, a virtually segregated subnetwork, reduces the risk that a vulnerability in one device will compromise your entire network. Staying up to date on important vulnerabilities can also help inform consumers’ purchasing decisions, although the presence of vulnerabilities is often less illuminating than if/how a company chooses to address them.
Finally, we in the security industry also have an obligation towards moving the needle in the realm of security, one that we hope research such as this embodies. One of our goals here at McAfee ATR is to identify and illuminate a broad spectrum of threats in today’s complex and constantly evolving landscape. While we take seriously our obligation to inform vendors of our findings in a timely and responsible fashion, it is only through cooperation that the best results are possible. Our partnership with Robotemi on addressing these vulnerabilities was a perfect example of this. They responded quickly to our private disclosure report, outlined to us their plans for mitigations and an associated timeline, and maintained a dialogue with us throughout the whole process. We even received feedback that they have further emphasized security in their products by approaching all development discussions with a security-first mindset as a result of this disclosure. The ultimate result is a product that is more secure for all who use it.
FireEye’s Data Science and Information Operations Analysis teams released this blog post to coincide with our Black Hat USA 2020 Briefing, which details how open source, pre-trained neural networks can be leveraged to generate synthetic media for malicious purposes. To summarize our presentation, we first demonstrate three successive proofs of concept for how machine learning models can be fine-tuned in order to generate customizable synthetic media in the text, image, and audio domains. Next, we illustrate examples in which synthetically generated media have been weaponized for information operations (IO), as detected on the front lines by Mandiant Threat Intelligence. Finally, we outline challenges in detecting synthetically generated content, and lay out potential paths forward in a future where synthetically generated media will increasingly look, speak, and write like us.
Highlights
Open source, pre-trained natural language processing, computer vision, and speech recognition neural networks can be weaponized for offensive social media-driven IO campaigns.
Detection, attribution, and response are challenging in scenarios where actors can anonymously generate and distribute credible fake content using proprietary training datasets.
The security community can and should help AI researchers, policy makers, and other stakeholders mitigate the harmful use of open source models.
Background: Synthetic Media, Generative Models, and Transfer Learning
Synthetic media is by no means a new development; methods for manipulating media for specific agendas are as old as the media themselves. In the 1930s, the chief of the Soviet secret police was photographed walking alongside Joseph Stalin before being retouched out of an official press photo, after he himself was arrested and executed during the Great Purge. Digital graphic manipulation like this became prominent with the advent of Photoshop. Then, later in the 2010s, the term “deepfake” was coined. While deepfake videos, including techniques like face swapping and lip syncing, are concerning in the long term, this blog post focuses on more basic, but we argue more believable, synthetic media generation advancements in the text, static image, and audio domains. Machine learning approaches for creating synthetic media are underpinned by generative models, which have been effectively misused to fabricate high-volume submissions to federal public comment websites and clone a voice to trick an executive into handing over $240,000.
The pre-training required to produce models capable of synthetic media generation can cost thousands of dollars, take weeks or months of time, and require access to expensive GPU clusters. However, the application of transfer learning can drastically reduce the amount of time and effort involved. In transfer learning, we start from a large generic model that has been pre-trained for an initial task where copious data is available. We then leverage the model’s acquired knowledge to train it further on a different, smaller dataset so that it excels at a subsequent, related task. This process of training the model further is referred to as fine-tuning, which typically requires fewer resources compared to pre-training from scratch. You can think of this in more relatable terms: if you’re a professional tennis player, you don’t need to completely relearn how to swing a racket in order to excel at badminton.
Technology and elections are heavily interrelated, but it wasn’t always that way. We started to adopt technology once we weren’t able to fit everyone into a town hall. The first piece of technology was simply a piece of paper and a ballot box. We may not think of it as technology, but the ballot box can be tampered with.
That technology gave us ballot secrecy, a trait that a hand-raise in the town hall didn’t. This raised the bar to a level that is expected from other voting technologies since then, which can be tougher with voting machines and electronic evaluation of ballot boxes. Our confidence in the outcome of an election depends on the integrity of the methodology we use to do this.
Matt Blaze, this year’s Black Hat keynote speaker, is a researcher in the areas of secure systems, cryptography, and trust management. He is currently the McDevitt Chair of Computer Science and Law at Georgetown University.
Blaze has been working on election security for years. He’s never encountered a problem bigger and more complex than democratic elections. The reason for this is that the requirements are contradictory: We don’t want to be able to figure out how someone voted, but we want transparency into whether or not our vote was counted as cast and that the system is not corrupted. The paper ballot box seems to do this pretty well, and other technology solutions require you to be a lot more clever. Another snag is that you cannot recover from a bad election very easily. You can’t redo it easily before the term is up.
U.S. voting is highly decentralized due to size
The federal government has remarkably little to do with the election process; each state has its own rules and requirements. The elections are carried out by over 3,000 counties, and voting takes place in precincts in these counties. It’s a very decentralized process. Even within a precinct, there may be different ballots for various local elections. The county’s budget pays for elections, so improvements in election technology compete with improvements to roads and the fire department. In the 2016 election, about 24% of votes were cast by mail and 17% cast in person before election day. Most states allow some form of absentee voting.
Election campaigns vastly outspend the money that’s spent on carrying out the elections. In addition, foreign state adversaries have recently entered the game, sometimes simply with the goal of disrupting elections and undermining the legitimacy of an election. That’s actually easier than influencing a particular outcome.
The question is: Does new voting tech enable or prevent mischief? The answer is: both.
Paper ballots are more effective in re-assessing a particular vote and agreeing on an outcome. If we remember the voting machines in Florida that led to the recount in 2000, they didn’t even involve a computer. It was simply a punch card with a manual punch to vote. However, the mechanical design was flawed, and by the end of the day it became more difficult to vote for a popular candidate because punched-out paper from previous votes was blocking the punch.
A Florida election official trying to interpret a paper ballot during the 2000 U.S. presidential elections.
As a result, Congress passed the Help America Vote Act (HAVA). It provided funding to modernize voting and to make it more “accessible” to a wide range of voters. Most of the current equipment did not comply. However, the technology wasn’t broadly available.
The DRE voting machine was a common new form of computerized voting that works similarly to an ATM. It counts the votes in an internal computer. Looking at the entire journey, software touches each part of the vote – such as voter registration databases and software to check who’s already voted or to count and report the votes. The security of this software is critical to the legitimacy of the election. At the same time, software is designed to be replaceable and easily changed. It’s a really hard problem to solve.
The voting system attack surface is huge
Software security and reliability is hard, even under the best of circumstances. In practice, the attack surface is huge: county election management software, voting machine firmware, communications, procedures, physical security, and people. Attacks include anything from denial of service to forging the vote. Every piece of computerized voting technology so far has been terrible.
The DMCA Security Research Exemption makes it legal to buy surplus voting machines, hack them, and report on your findings. The DEFCON Voting Village does this, and everything is worse than we thought.
Hand-counted paper ballots vs. blockchain
We have two options: We could just hand-count all votes on paper or amp up the technology (blockchain FTW!). The size of the US election is so large that hand-counting would be extremely hard. It would be very difficult to eliminate all reliance on software for the entire election.
On the other side, the blockchain makes us more dependent on software. Also, the blockchain is decentralized while elections have central oversight, which is a contradiction. Just detecting election fraud is not helpful either; we need to prevent it to start with.
Two breakthroughs since 2000
There were two breakthroughs since 2000 that help us:
Ron Rivest invented software independence. A voting system is software-independent if an undetected change or error in its software cannot cause an undetectable change or error in an election outcome.
Stark et al. developed a new statistical method to sample a subset of voting machines (e.g. paper ballot optical scanners) for post-election hand audits to ensure they reported correct results. If not, the other ones can be hand-counted.
These two ideas have become the gold standard for securing elections since 2000. Progress is positive but slow, and it addresses the key concerns computer scientists were worried about in past elections. If you’d like to read up on election security, Blaze recommends the National Academies of Sciences “Securing the Vote” (2018) study.
Matt’s talk would have ended here if it wasn’t for the pandemic...
Postponing elections is absolutely the worst-case option. There are often no rules for this. It may be preferable to hold an election that people regard as illegitimate.
A huge logistical challenge
Emergencies (such as a pandemic) likely require scaling up mail-in voting. Absentee voting exists in every U.S. jurisdiction, but it often requires a reason, such as being out of town – unlikely during the pandemic. Some places allow absentee ballots without an excuse.
The question is how we scale up absentee voting during an emergency, and this is a resource and logistics problem.
The voter-side of an absentee ballot is reasonably easy, but the workflow on the system side is relatively complex. It’s a fairly labor-intensive process that involves checks by multiple people and can involve some technology. Exception handling, like signature mismatches, is even more labor-intensive because it requires reaching out to the voter. Simple logistics of the number of envelopes and ballots and the throughput of your counting machines may impose restrictions. Ballots themselves have security features, so they can’t simply be printed at a local copy shop either.
Vote batch scanning machines are big, bulky, and hard to mass-produce.
Your local election officials need your skills – ask how you can help!
There are reasons to be optimistic and pessimistic. We don’t know how many people need paper ballots, so we’ll have to over-produce just to be sure. Most jurisdictions don’t have the funding to do this. Time is really short – less than 100 days away. This problem is similar to some computing problems. This community is going to be needed by the local election officials. Phone them and find out how you can help.
Co-authored with Jesse Chick, OSU Senior and Former McAfee Intern, Primary Researcher.
Special thanks to Dr. Catherine Huang, McAfee Advanced Analytics Team
Special thanks to Kyle Baldes, Former McAfee Intern
“Face” the Facts
There are 7.6 billion people in the world. That’s a huge number! In fact, if we all stood shoulder to shoulder on the equator, the number of people in the world would wrap around the Earth over 86 times! That’s just the number of living people today; even adding in the history of all people for all time, you will never find two identical human faces. In fact, in even some of the most similar faces recorded (not including twins) it is quite easy to discern multiple differences. This seems almost impossible; it’s just a face, right? Two eyes, a nose, a mouth, ears, eyebrows and potentially other facial hair. Surely, we would have run into identical unrelated humans by this point. Turns out, there’s SO much more to the human face than this which can be more subtle than we often consider: forehead size, shape of the jaw, position of the ears, structure of the nose, and thousands more extremely minute details.
You may be questioning the significance of this detail as it relates to McAfee, or vulnerability research. Today, we’ll explore some work undertaken by McAfee Advanced Threat Research (ATR) in the context of data science and security; specifically, we looked at facial recognition systems and whether they were more, or less susceptible to error than we as human beings.
Look carefully at the four images below; can you spot which of these is fake and which are real?
StyleGAN images
The answer may surprise you; all four images are completely fake – they are 100% computer-generated, and not just parts of different people creatively superimposed. An expert system known as StyleGAN generated each of these, and millions more, with varying degrees of photorealism, from scratch.
This impressive technology rests on two parallel revolutions: advances in data science, and emerging hardware that can compute faster and cheaper at a scale we’ve never seen before. Together they enable impressive innovations in image generation and recognition, often in real time or near real time. Some of the most practical applications for this are in the field of facial recognition; simply put, the ability for a computer system to determine whether two images or other media represent the same person or not. The earliest computer facial recognition technology dates back to the 1960s, but until recently, it has either been too costly, too prone to false positives or false negatives, or too slow and inefficient for the intended purpose.
The advancements in technology and breakthroughs in Artificial Intelligence and Machine Learning have enabled several novel applications for facial recognition. First and foremost, it can be used as a highly reliable authentication mechanism; an outstanding example of this is the iPhone. Beginning with the iPhone X in 2017, facial recognition became the new de facto standard for authenticating a user to their mobile device. While Apple uses advanced features such as depth to map the target face, many other mobile devices have implemented more standard methods based on the features of the target face itself; things we as humans see as well, including placement of eyes, width of the nose, and other features that in combination can accurately identify a single user. More simplistic and standard methods such as these may inherently suffer from security limitations relative to more advanced capabilities, such as the 3D camera capture. In a way, this is the whole point; the added complexity of depth information is what makes simple pixel-manipulation attacks impractical.
Another emerging use case for facial recognition systems is for law enforcement. In 2019, the Metropolitan Police in London announced the rollout of a network of cameras designed to aid police in automating the identification of criminals or missing persons. While widely controversial, the UK is not alone in this initiative; other major cities have piloted or implemented variants of facial recognition with or without the general population’s consent. In China, many of the trains and bus systems leverage facial recognition to identify and authenticate passengers as they board or disembark. Shopping centers and schools across the country are increasingly deploying similar technology.
More recently, in light of racial profiling and racial bias demonstrated repeatedly in facial recognition AI, IBM announced that it would eliminate its facial recognition programs given the way they could be used in law enforcement. Since then, many other major players in the facial recognition business have suspended or eliminated their facial recognition programs. This may be at least partially based on a high-profile “false positive” case in which authorities wrongfully arrested a Black man named Robert Williams based on an incorrect facial recognition match. The case is known as the country’s first wrongful arrest directly resulting from facial recognition technology.
Facial recognition has some obvious benefits of course, and this recent article details the use of facial recognition technology in China to track down and reunite a family many years after an abduction. Despite this, it remains a highly polarizing issue with significant privacy concerns, and may require significant further development to reduce some of the inherent flaws.
Live Facial Recognition for Passport Validation
Our next use case for facial recognition may hit closer to home than you realize. Multiple airports, including many in the United States, have deployed facial recognition systems to aid or replace human interaction for passport and identity verification. In fact, I was able to experience one of these myself in the Atlanta airport in 2019. It was far from ready, but travellers can expect to see continued rollouts of this across the country. In fact, based on the global impact COVID-19 has had on travel and sanitization, we are observing an unprecedented rush to implement touchless solutions such as biometrics. This is of course driven by public-health responsibility, but also by airline and airport profitability. If these two entities can’t convince travelers that their travel experience is low-risk, many voluntary travelers will opt to wait until this assurance is more solid. This article expands on the impact Coronavirus is having on the fledgling market use of passport facial recognition, providing specific insight into Delta and United Airlines’ rapid expansion of the tech into new airports immediately, and further testing and integration in many countries around the world. While this push may result in less physical contact and fewer infections, it may also have the side-effect of exponentially increasing the attack surface of a new target.
The concept of passport control via facial recognition is quite simple. A camera takes a live video and/or photos of your face, and a verification service compares it to an already-existing photo of you, collected earlier. This could be from a passport or a number of other sources such as the Department of Homeland Security database. The “live” photo is most likely processed into a similar format (image size, type of image) as the target photo, and compared. If it matches, the passport holder is authenticated. If not, an alternate source will be checked by a human operator, including boarding passes and forms of ID.
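The flow above can be sketched in a few lines of code. This is a minimal, hypothetical model of the verification step: the `embed` function is a crude stand-in for a real face-embedding network (which would return a learned 512-dimensional vector), and the threshold value is illustrative, not taken from any deployed system.

```python
import numpy as np

THRESHOLD = 1.1  # illustrative cutoff; real systems tune this on validation data

def embed(image):
    """Stand-in for a face-embedding network such as FaceNet. A real system
    would run a deep network; here we just center and L2-normalize the raw
    pixels so that distances behave sensibly for the sketch."""
    v = image.astype(float).ravel()
    v = v - v.mean()
    return v / (np.linalg.norm(v) + 1e-9)

def verify(live_photo, stored_photo):
    """Passport-control decision: same person if embeddings are close."""
    return float(np.linalg.norm(embed(live_photo) - embed(stored_photo))) < THRESHOLD

rng = np.random.default_rng(0)
passport = rng.random((160, 160))
assert verify(passport, passport)                    # identical photo verifies
assert not verify(passport, rng.random((160, 160)))  # unrelated image does not
```

The attack described below targets exactly this decision: crafting a stored photo whose embedding sits close to the attacker's live embedding.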
As vulnerability researchers, we need to be able to look at how things work; both the intended method of operation as well as any oversights. As we reflected on this growing technology and the extremely critical decisions it enabled, we considered whether flaws in the underlying system could be leveraged to bypass the target facial recognition systems. More specifically, we wanted to know if we could create “adversarial images” in a passport-style format, that would be incorrectly classified as a targeted individual. (As an aside, we performed related attacks in both digital and physical mediums against image recognition systems, including research we released on the Mobileye camera deployed in certain Tesla vehicles.)
The conceptual attack scenario here is simple. We’ll refer to our attacker as Subject A, and he is on the “no-fly” list – if a live photo or video of him matches a stored passport image, he’ll immediately be refused boarding and flagged, likely for arrest. We’ll assume he’s never submitted a passport photo. Subject A (AKA Jesse), is working together with Subject B (AKA Steve), the accomplice, who is helping him to bypass this system. Jesse is an expert in model hacking and generates a fake image of Steve through a system he builds (much more on this to come). The image has to look like Steve when it’s submitted to the government, but needs to verify Jesse as the same person as the adversarial fake “Steve” in the passport photo. As long as a passport photo system classifies a live photo of Jesse as the target fake image, he’ll be able to bypass the facial recognition.
If this sounds far-fetched to you, it doesn’t to the German government. Recent policy in Germany included verbiage to explicitly disallow morphed or computer-generated combined photos. While the techniques discussed in this link are closely related to this, the approach, techniques and artifacts created in our work vary widely. For example, the concepts of face morphing in general are not novel ideas anymore; yet in our research, we use a more advanced, deep learning-based morphing approach, which is categorically different from the more primitive “weighted averaging” face morphing approach.
Over the course of 6 months, McAfee ATR researcher and intern Jesse Chick studied state-of-the-art machine learning algorithms, read and adopted industry papers, and worked closely with McAfee’s Advanced Analytics team to develop a novel approach to defeating facial recognition systems. To date, the research has progressed through white box and gray box attacks with high levels of success – we hope to inspire or collaborate with other researchers on black box attacks and demonstrate these findings against real world targets such as passport verification systems with the hopes of improving them.
The Method to the Madness
The term GAN is an increasingly recognized acronym in the data science field. It stands for Generative Adversarial Network and represents a novel concept using one or more “generators” working in tandem with one or more “discriminators.” While this isn’t a data science paper and I won’t go into great detail on GAN, it will be beneficial to understand the concept at a high level. You can think of GAN as a combination of an art critic and an art forger. An art critic must be capable of determining whether a piece of art is real or forged, and of what quality the art is. The forger of course, is simply trying to create fake art that looks as much like the original as possible, to fool the critic. Over time, the forger may outwit the critic, and at other times the opposite may hold true, yet ultimately, over the long run, they will force each other to improve and adapt their methods. In this scenario, the forger is the “generator” and the art critic is the “discriminator.” This concept is analogous to GAN in that the generator and discriminator are both working together and also opposing each other – as the generator creates an image of a face, for example, the discriminator determines whether the image generated actually looks like a face, or if it looks like something else. It rejects the output if it is not satisfied, and the process starts over. This is repeated in the training phase for as long as it takes for the discriminator to be convinced that the generator’s product is high enough quality to “meet the bar.”
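The forger-and-critic dynamic can be demonstrated on a toy problem. The sketch below is not a real image GAN; it is a deliberately tiny adversarial loop where the "real data" is a 1-D Gaussian, the "discriminator" is a logistic classifier, and the "generator" has a single learnable parameter. All names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
REAL_MEAN = 4.0   # "real" data: samples from N(4, 1)
mu = 0.0          # generator parameter: its fakes are samples from N(mu, 1)
w, b = 0.0, 0.0   # discriminator: logistic classifier on a scalar

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.05
for _ in range(2000):
    real = rng.normal(REAL_MEAN, 1.0, 32)
    fake = mu + rng.normal(0.0, 1.0, 32)

    # Critic step: push D(real) toward 1 and D(fake) toward 0
    # (gradient ascent on the logistic log-likelihood).
    pr, pf = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * ((1 - pr) * real - pf * fake).mean()
    b += lr * ((1 - pr) - pf).mean()

    # Forger step: nudge mu so the critic scores fakes as real.
    pf = sigmoid(w * fake + b)
    mu += lr * ((1 - pf) * w).mean()

# After the two players compete, the generator's mean drifts toward the
# real data's mean and the critic can no longer tell them apart.
print(f"generator mean after training: {mu:.2f}")
```

The same adversarial pressure, scaled up to deep networks over images, is what lets StyleGAN and CycleGAN produce photorealistic faces.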
One such implementation we saw earlier, StyleGAN, uses these exact properties to generate the photorealistic faces shown above. In fact, the research team tested StyleGAN, but determined it was not aligned with the task we set out to achieve: generating photorealistic faces, but also being able to easily implement an additional step in face verification. More specifically, its sophisticated and niche architecture would have been highly difficult to harness successfully for our purpose of clever face-morphing. For this reason, we opted to go with a relatively new but powerful GAN framework known as CycleGAN.
CycleGAN
CycleGAN is a GAN framework that was released in a paper in 2017. It represents a GAN methodology that uses two generators and two discriminators, and in its most basic sense, is responsible for translating one image to another through the use of GAN.
Image of zebras translated to horses via CycleGAN
There are some subtle but powerful details related to the CycleGAN infrastructure. We won’t go into depth on these, but one important concept is that CycleGAN uses higher level features to translate between images. Instead of taking random “noise” or “pixels” in the way StyleGAN translates into images, this model uses more significant features of the image for translation (shape of head, eye placement, body size, etc…). This works very well for human faces, despite the paper not specifically calling out human facial translation as a strength.
Face Net and InceptionResnetV1
While CycleGAN is a novel use of the GAN model, it has in and of itself been used for image-to-image translation numerous times. Our facial recognition application required extending this single model with an image verification system. This is where FaceNet came into play. The team realized that not only would our model need to accurately create adversarial images that were photorealistic, they would also need to verify as the original subject. More on this shortly. FaceNet is a face recognition architecture that was developed by Google in 2015, and was, and perhaps still is, considered state of the art in its ability to accurately classify faces. It uses a concept called facial embeddings to determine mathematical distances between two faces in a high-dimensional space. For the programmers or math experts, 512-dimensional space is used, to be precise, and each embedding is a 512-dimensional list, or vector. To the lay person, the less similar the high-level facial features are, the further apart the facial embeddings are. Conversely, the more similar the facial features, the closer together these faces are plotted. This concept is ideal for our use of facial recognition, given FaceNet operates against high-level features of the face versus individual pixels, for example. This is a central concept and a key differentiator between our research and “shallow” adversarial image creation a la the more traditionally used FGSM, JSMA, etc. Creating an attack that operates at the level of human-understandable features is where this research breaks new ground.
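The geometry of facial embeddings can be made concrete with synthetic data. In the sketch below, each identity is a hypothetical point on the 512-dimensional unit sphere, and different "photos" of a person are small perturbations of that point; the numbers are illustrative, not FaceNet's actual outputs.

```python
import numpy as np

rng = np.random.default_rng(7)

def identity_vector():
    """Synthetic stand-in for a person's 'true' location in a
    512-dimensional embedding space (a point on the unit sphere)."""
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)

def photo_embedding(ident, noise=0.01):
    """Different photos of the same person land near that identity."""
    v = ident + rng.normal(scale=noise, size=512)
    return v / np.linalg.norm(v)

jesse, steve = identity_vector(), identity_vector()
d_same = np.linalg.norm(photo_embedding(jesse) - photo_embedding(jesse))
d_diff = np.linalg.norm(photo_embedding(jesse) - photo_embedding(steve))
# Same-identity photos cluster tightly; unrelated identities sit far apart,
# so a single distance threshold separates "same person" from "different".
assert d_same < 0.7 < d_diff
```

The attack works by violating exactly this assumption: it manufactures an image whose embedding lands in the wrong cluster.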
One of the top reasons for FaceNet’s popularity is that it uses a pre-trained model with a data set trained on hundreds of millions of facial images. This training was performed using a well-known academic/industry-standard dataset, and these results are readily available for comparison. Furthermore, it achieved very high published accuracy (99.63%) when used on a set of 13,000 random face images from a benchmark set of data known as LFW (Labeled Faces in the Wild). In our own in-house evaluation testing, our accuracy results were closer to 95%.
Ultimately, given our need to start with a white box to understand the architecture, the solution we chose was a combination of CycleGAN and an open source FaceNet variant architecture known as InceptionResnet version 1. The ResNet family of deep neural networks uses learned filters, known as convolutions, to extract high-level information from visual data. In other words, the role of deep learning in face recognition is to transform an abstract feature from the image domain, i.e. a subject’s identity, into a domain of vectors (AKA embeddings) such that they can be reasoned about mathematically. The “distance” between the outputs of two images depicting the same subject should be mapped to a similar region in the output space, and two very different regions for input depicting different subjects. It should be noted that the success or failure of our attack is contingent on its ability to manipulate the distance between these face embeddings. To be clear, FaceNet is the pipeline consisting of data pre-processing, Inception ResNet V1, and data separation via a learned distance threshold.
Training
Whoever has the most data wins. This truism is especially relevant in the context of machine learning. We knew we would need a large enough data set to accurately train the attack generation model, but we guessed that it would be smaller than many other use cases. This is because our goal was simply to take two people, subject A (Jesse) and subject B (Steve) below, and minimize the “distance” between the two face embeddings produced when input into FaceNet, while preserving a misclassification in either direction. In other words, Jesse needed to look like Jesse in his passport photo, and yet be classified as Steve, and vice versa. We’ll describe facial embeddings and visualizations in detail shortly.
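The "minimize the distance" objective can be expressed as a loss term. The function below is a hypothetical sketch of the kind of extra term one would layer onto the standard CycleGAN losses; the name and weighting are our illustration, not the actual research code.

```python
import numpy as np

def adversarial_embedding_loss(emb_fake, emb_attacker):
    """Hypothetical extra loss term on top of the usual CycleGAN objectives.
    Minimizing it pulls the generated 'accomplice-looking' image toward the
    attacker in the embedding space, so a live photo of the attacker will
    verify against the fake passport photo. The GAN discriminator (not shown)
    simultaneously keeps the image *looking* like the accomplice."""
    return float(np.sum((emb_fake - emb_attacker) ** 2))

emb_attacker = np.ones(512) / np.sqrt(512)
# A fake whose embedding exactly matches the attacker incurs zero loss.
assert adversarial_embedding_loss(emb_attacker, emb_attacker) == 0.0
```

In training, this term is evaluated every iteration by running the generator's output through the face recognition network, which is what couples the GAN to FaceNet.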
The training was done on a set of 1500 images of each of us, captured from live video as stills. We provided multiple expressions and facial gestures that would enrich the training data and accurately represent someone attempting to take a valid passport photo.
The research team then integrated the CycleGAN + FaceNet architecture and began to train the model.
As you can see from the images below, the initial output from the generator is very rough – these certainly look like human beings (sort of), but they’re not easily identifiable and of course have extremely obvious perturbations, otherwise known as “artifacts.”
However, as we progress through training over dozens of cycles, or epochs, a few things are becoming more visually apparent. The faces begin to clean up some of the abnormalities while simultaneously blending features of both subject A and subject B. The (somewhat frightening) results look something like this:
Progressing even further in the training epochs, and the discriminator is starting to become more satisfied with the generator’s output. Yes, we’ve got some detail to clean up, but the image is starting to look much more like subject B.
A couple hundred training epochs in, and we are producing candidates that would meet the bar for this application; they would pass as valid passport photos.
Fake image of Subject B
Remember, that with each iteration through this training process, the results are systematically fed into the facial recognition neural network and classified as Subject A or Subject B. This is essential as any photo that doesn’t “properly misclassify” as the other, doesn’t meet one of the primary objectives and must be rejected. It is also a novel approach as there are very few research projects which combine a GAN and an additional neural network in a cohesive and iterative approach like this.
We can see visually above that the faces being generated at this point are becoming real enough to convince human beings that they are not computer-generated. At the same time, let’s look behind the curtain and see some facial embedding visualizations which may help clarify how this is actually working.
To further understand facial embeddings, we can use the following images to visualize the concept. First, we have the images used for both training and generation of images. In other words, it contains real images from our data set and fake (adversarial) generated images as shown below:
This set of images is just one epoch of the model in action – given the highly realistic fake images generated here, it is not surprisingly a later epoch in the model evaluation.
To view these images as mathematical embeddings, we can use a visualization representing them on a multidimensional plane, which can be rotated to show the distance between them. It’s much easier to see that this model represents a cluster of “Real A” and “Fake B” on one side, and a separate cluster of “Real B” and “Fake A” on the other. This is the ideal attack scenario as it clearly shows how the model will confuse the fake image of the accomplice with the real image of the attacker, our ultimate test.
White Box and Gray Box Application
With much of machine learning, the model must be both effectively trained as well as able to reproduce and replicate results in future applications. For example, consider a food image classifier; its job being to correctly identify and label the type of food it sees in an image. It must have a massive training set so that it recognizes that a French Fry is different than a crab leg, but it also must be able to reproduce that classification on images of food it’s never seen before with a very high accuracy. Our model is somewhat different in that it is trained specifically on two people only (the adversary and the accomplice), and its job is done ahead of time during training. In other words, once we’ve generated a photorealistic image of the attacker that is classified as the accomplice, the model’s job is done. One important caveat is that it must work reliably to both correctly identify people and differentiate people, much like facial recognition would operate in the real world.
The theory behind this is based on the concept of transferability; if the models and features chosen in the development phase (called white box, with full access to the code and knowledge of the internal state of the model and pre-trained parameters) are similar enough to the real-world model and features (black box, no access to code or classifier) an attack will reliably transfer – even if the underlying model architecture is vastly different. This is truly an incredible concept for many people, as it seems like an attacker would need to understand every feature, every line of code, every input and output, to predict how a model will classify “adversarial input.” After all, that’s how classical software security works for the most part. By either directly reading or reverse engineering a piece of code, an attacker can figure out the precise input to trigger a bug. With model hacking (often called adversarial machine learning), we can develop attacks in a lab and transfer them to black box systems. This work, however, will take us through white box and gray box attacks, with possible future work focusing on black box attacks against facial recognition.
As mentioned earlier, a white box attack is one that is developed with full access to the underlying model – either because the researcher developed the model, or they are using an open source architecture. In our case, we did both to identify the ideal combination discussed above, integrating CycleGAN with various open source facial recognition models. The real Google FaceNet is proprietary, but it has been effectively reproduced by researchers as open source frameworks that achieve very similar results, hence our use of Inception Resnet v1. We call these versions of the model “gray box” because they are somewhere in the middle of white box and black box.
To take the concepts above from theory to the real world, we need to implement a physical system that emulates a passport scanner. Without access to the actual target system, we’ll simply use an RGB camera, such as the external one you might see on desktops in a home or office. The underlying camera is likely quite similar to the technology used by a passport photo camera. There’s some guesswork needed to determine what the passport camera is doing, so we take some educated liberties. The first thing to do is programmatically capture every individual frame from the live video and save them in memory for the duration of their use. After that, we apply some image transformations, scaling them to a smaller size and appropriate resolution of a passport-style photo. Finally, we pass each frame to the underlying pretrained model we built and ask it to determine whether the face it is analyzing is Subject A (the attacker), or Subject B (the accomplice). The model has been trained on enough images and variations of both that even changes in posture, position, hair style and more will still cause a misclassification. It’s worth noting that in this attack method, the attacker and accomplice are working together and would likely attempt to look as similar as possible to the original images in the data set the model was trained on, as it would increase the overall misclassification confidence.
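The capture-transform-classify loop might look roughly like this. This is a sketch under stated assumptions: `trained_model` is a hypothetical stand-in for the CycleGAN+FaceNet pipeline, the resize is a crude nearest-neighbour implementation to keep the sketch dependency-free, and the commented-out live loop assumes OpenCV is available.

```python
import numpy as np

def preprocess(frame, size=(160, 160)):
    """Scale a captured frame down to a passport-photo-like input.
    Crude nearest-neighbour resize via index selection."""
    h, w = frame.shape[:2]
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return frame[rows][:, cols]

def classify(frame, model):
    """Ask the (hypothetical) trained pipeline: Subject A or Subject B?"""
    return model(preprocess(frame))

if __name__ == "__main__":
    # Hypothetical live loop; assumes OpenCV and a loaded model.
    # import cv2
    # cap = cv2.VideoCapture(0)
    # while True:
    #     ok, frame = cap.read()
    #     if ok:
    #         print(classify(frame, trained_model))
    pass
```

In the real setup each frame's verdict is displayed live, which is what the demo videos below show.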
The Demos
The following demo videos demonstrate this attack using our gray box model. Let’s introduce the 3 players in these videos. In all three, Steve is the attacker now, Sam is our random test person, and Jesse is our accomplice. The first will show the positive test.
Positive Test:
This uses a real, non-generated image on the right side of the screen of Steve (now acting as our attacker). Our random test person (Sam), first stands in front of the live “passport verification camera” and is compared against the real image of Steve. They should of course be classified as different. Now Steve stands in front of the camera and the model correctly identifies him against his picture, taken from the original and unaltered data set. This proves the system can correctly identify Steve as himself.
Negative Test:
Next is the negative test, where the system tests Sam against a real photo of Jesse. He is correctly classified as different, as expected. Then Steve stands in front of the system and confirms the negative test as well, showing that the model correctly differentiates people in non-adversarial conditions.
Adversarial Test:
Finally, in the third video, Sam is evaluated against an adversarial, or fake image of Jesse, generated by our model. Since Sam was not part of the CycleGAN training set designed to cause misclassification, he is correctly shown as different again. Lastly, our attacker Steve stands in front of the live camera and is correctly misclassified as Jesse (now the accomplice). Because the model was trained for either Jesse or Steve to be the adversarial image, in this case we chose Jesse as the fake/adversarial image.
If a passport-scanner were to replace a human being completely in this scenario, it would believe it had just correctly validated that the attacker was the same person stored in the passport database as the accomplice. Given the accomplice is not on a no-fly list and does not have any other restrictions, the attacker can bypass this essential verification step and board the plane. It’s worth noting that a human being would likely spot the difference between the accomplice and attacker, but this research is based off of the inherent risks associated with reliance on AI and ML alone, without providing defense-in-depth or external validation, such as a human being to validate.
Positive Test Video – Confirming Ability to Recognize a Person as Himself
Negative Test Video – Confirming Ability to Tell People Apart
Adversarial Test Video – Confirming Ability to Misclassify with Adversarial Image
What Have we Learned?
Biometrics are an increasingly relied-upon technology to authenticate or verify individuals and are effectively replacing password and other potentially unreliable authentication methods in many cases. However, the reliance on automated systems and machine learning without considering the inherent security flaws present in the mysterious internal mechanics of face-recognition models could provide cyber criminals unique capabilities to bypass critical systems such as automated passport enforcement. To our knowledge, our approach to this research represents the first-of-its-kind application of model hacking and facial recognition. By leveraging the power of data science and security research, we look to work closely with vendors and implementors of these critical systems to design security from the ground up, closing the gaps that weaken these systems. As a call to action, we look to the community for a standard by which we can reason formally about the reliability of machine learning systems in the presence of adversarial samples. Such standards exist in many verticals of computer security, including cryptography, protocols, wireless radio frequency and many more. If we are going to continue to hand off critical tasks like authentication to a black box, we had better have a framework for determining acceptable bounds for its resiliency and performance under adverse conditions.
For more information on research efforts by McAfee Advanced Threat Research, please follow our blog or visit our website.
This document has been prepared by McAfee Advanced Threat Research in collaboration with JSOF who discovered and responsibly disclosed the vulnerabilities. It is intended to serve as a joint research effort to produce valuable insights for network administrators and security personnel, looking to further understand these vulnerabilities to defend against exploitation. The signatures produced here should be thoroughly considered and vetted in staging environments prior to being used in production and may benefit from specific tuning to the target deployment. There are technical limitations to this work, including the fact that more complex methods of detection might be required to detect these vulnerabilities. For example, multiple layers of encapsulation may obfuscate the exploitation of the flaws and increase the difficulty of detection.
We have also provided packet captures taken from the vulnerability Proof-of-Concepts as artifacts for testing and deployment of either the signatures below or customized signatures based on the detection logic. Signatures and Lua Scripts are located on ATR’s Github page, as well as inline and in the appendix of this document respectively.
As of this morning (August 5th), JSOF has presented additional technical detail and exploitation analysis at BlackHat 2020, on the two most critical vulnerabilities in DNS.
The information provided herein is subject to change without notice, and is provided “AS IS”, with all faults, without guarantee or warranty as to the accuracy or applicability of the information to any specific situation or circumstance and for use at your own risk. Additionally, we cannot guarantee any performance or efficacy benchmarks for any of the signatures.
Integer Overflow in tfDnsExpLabelLength Leading to Heap Overflow and RCE
CVE: CVE-2020-11901 (Variant 1)
CVSS: 9
Protocol(s): DNS over UDP (and likely DNS over TCP)
Port(s): 53
Vulnerability description:
In the Treck stack, DNS names are calculated via the function tfDnsExpLabelLength. A bug exists in this function where the computation is performed using an unsigned short, making it possible to overflow the computed value with a specially constructed DNS response packet. Since tfDnsExpLabelLength computes the full length of a DNS name after it is decompressed, it is possible to induce an overflow using a DNS packet far smaller than 2^16 (65,536) bytes. In some code paths, tfGetRawBuffer is called shortly after tfDnsExpLabelLength, allocating a buffer on the heap where the DNS name will be stored using the size computed by tfDnsExpLabelLength, thus leading to a heap overflow and potential RCE.
While newer versions of the Treck stack will stop copying the DNS name into the buffer as soon as a character that isn’t alphanumeric or a hyphen is reached, older versions do not have this restriction and further use predictable transaction IDs for DNS queries, making this vulnerability easier to exploit.
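To illustrate the arithmetic behind the bug, the snippet below mimics a C `unsigned short` accumulating a decompressed DNS name length. This is an illustration of the overflow class, not Treck's actual code: compression pointers let a small packet expand to a name whose true length exceeds 65,535 bytes, wrapping the 16-bit counter.

```python
def label_length_u16(label_lengths):
    """Accumulate a decompressed DNS name length in a 16-bit counter,
    the way a C 'unsigned short' would (illustration only)."""
    total = 0
    for n in label_lengths:
        total = (total + n + 1) & 0xFFFF  # +1 for the length byte, mod 2^16
    return total

# 1100 labels of 63 bytes decompress to 70,400 bytes, but the 16-bit
# counter wraps and reports far less. A heap buffer allocated from the
# wrapped value is then overflowed when the full name is copied in.
true_len = sum(63 + 1 for _ in range(1100))
wrapped = label_length_u16([63] * 1100)
assert true_len > 0xFFFF
assert wrapped == true_len & 0xFFFF
```

The mismatch between the true decompressed length and the wrapped allocation size is exactly the heap-overflow condition described above.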
Limitations or special considerations for detection:
Ideally, detection logic for this vulnerability would involve independently computing the uncompressed length of all DNS names contained within incoming DNS responses. Unfortunately, this may be computationally expensive for a device to perform for every incoming DNS response, especially since each one may contain many DNS names. Instead, we must rely on a combination of heuristics.
Furthermore, it is currently unclear whether exercising this vulnerability is possible when using EDNS(0) or DNS over TCP. We recommend assuming it is possible for the purposes of implementing detection logic. During our testing, an inconsistency in how Suricata handled DNS over TCP was discovered – in some cases it was correctly identified as DNS traffic and in other cases, it was not. Consequently, two rules have been created to determine the size of DNS over TCP traffic. The second rule uses the TCP primitive instead of the DNS primitive; however, the second rule will only be evaluated if not flagged by the first rule.
Because the Suricata rule in dns_invalid_size.rules uses the DNS responses’ EDNS UDP length, which may be controlled by the attacker, a second upper limit of 4096 bytes is enforced.
Recommended detection criteria:
The device must be capable of processing DNS traffic and matching responses to their corresponding requests.
The device must be capable of identifying individual DNS names within individual DNS packets.
The device should flag any DNS responses whose size exceeds what is “expected”. The expected size depends on the type of DNS packet sent:
For DNS over TCP, the size should not exceed the value specified in the first two bytes of the TCP payload.
For DNS over UDP with EDNS(0), the size should not exceed the value negotiated in the request, which is specified in the CLASS field of the OPT RR, if present.
For DNS over UDP without EDNS(0), the size should not exceed 512 bytes.
These are all checked in dns_invalid_size.rules, which invokes either dns_size.lua or dns_tcp_size.lua for the logic.
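The shipped checks live in dns_size.lua and dns_tcp_size.lua (Appendix A). As a rough, language-agnostic illustration of the size logic above, the following Python sketch applies the three limits; the function name and interface are our own simplification, and a real rule would extract the EDNS(0) UDP length from the OPT RR of the matching request:

```python
import struct

DNS_UDP_MAX = 512     # RFC 1035 limit for DNS over UDP without EDNS(0)
EDNS_HARD_CAP = 4096  # enforced cap, since the EDNS UDP length is attacker-controlled

def dns_response_too_large(payload, is_tcp=False, edns_udp_len=None):
    """Return True if a DNS response exceeds its expected size."""
    if is_tcp:
        # DNS over TCP: the first two bytes of the payload declare the
        # message length; the remainder is the DNS message itself.
        if len(payload) < 2:
            return False
        (declared,) = struct.unpack("!H", payload[:2])
        return len(payload) - 2 > declared
    if edns_udp_len is not None:
        # DNS over UDP with EDNS(0): size negotiated in the OPT RR CLASS
        # field, capped because the attacker may control it
        return len(payload) > min(edns_udp_len, EDNS_HARD_CAP)
    # Plain DNS over UDP
    return len(payload) > DNS_UDP_MAX
```
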
The device should flag DNS responses containing DNS names exceeding 255 bytes (prior to decompression).
This is checked in dns_invalid_name.rules, which invokes dns_invalid_name.lua for the logic.
The device should flag DNS responses containing DNS names composed of characters other than a-z, A-Z, 0-9, “-”, “_”, and “*”.
This is also checked in dns_invalid_name.rules, which invokes dns_invalid_name.lua for the logic.
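As an illustration of the two name checks (the shipped logic is in dns_invalid_name.lua, Appendix A), a Python sketch might look like the following. The function name and its label-list input are our own simplification, and the allowed-character set should be tuned per network:

```python
import string

# Characters the rules treat as legitimate in DNS labels; extend as needed
# for the target network to reduce false positives.
ALLOWED = set(string.ascii_letters + string.digits + "-_*")

def name_is_invalid(labels):
    """labels: list of raw label byte strings from one DNS name, taken
    prior to any decompression. Flags names whose wire-format length
    exceeds 255 bytes or that contain disallowed characters."""
    # +1 per label for its length byte, +1 for the terminating null byte
    wire_len = sum(len(l) + 1 for l in labels) + 1
    if wire_len > 255:
        return True
    return any(chr(b) not in ALLOWED for l in labels for b in l)
```
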
The device should flag DNS responses containing a large number of DNS compression pointers, particularly pointers one after the other. The specific tolerance will depend on the network.
The device should count all labels starting with the bits 0b10, 0b01, or 0b11 against this pointer total, as vulnerable versions of the Treck stack (incorrectly) classify all labels where the first two bits aren’t 0b00 as compression pointers. In the Lua script, we treat any value above 63 (0x3F) as a pointer for this reason, as any value in that range will have at least one of these bits set.
The specific thresholds were set to 40 total pointers in a single DNS packet or 4 consecutive pointers for our implementation of this rule. These values were chosen since they did not seem to trigger any false positives in a very large test PCAP but should be altered as needed to suit typical traffic for the network the rule will be deployed on. The test for consecutive pointers is especially useful since each domain name should only ever have one pointer (at the very end), meaning we should never be seeing many pointers in a row in normal traffic.
This is implemented in dns_heap_overflow_variant_1.lua, which is invoked by dns_heap_overflow.rules.
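The following Python sketch illustrates the pointer-counting heuristic with the same 40/4 thresholds. It is a simplification: it walks only the labels of known name fields, whereas the shipped Lua script scans every byte of the DNS layer (since a malicious pointer can target an arbitrary offset), so the two will not flag identical traffic. Function names are our own:

```python
TOTAL_THRESHOLD = 40        # total "pointers" tolerated per DNS packet
CONSECUTIVE_THRESHOLD = 4   # pointers in a row tolerated per packet

def count_pointers(payload, offset=0):
    """Walk the labels of one encoded DNS name starting at `offset`,
    counting apparent compression pointers using the Treck stack's
    liberal rule: any length byte above 0x3F (top two bits not 0b00)
    is a pointer. Returns (total, longest consecutive run)."""
    total = run = best_run = 0
    i = offset
    while i < len(payload):
        b = payload[i]
        if b == 0:               # null byte terminates the name
            break
        if b > 0x3F:             # treated as a 2-byte compression pointer
            total += 1
            run += 1
            best_run = max(best_run, run)
            i += 2
        else:                    # ordinary label: length byte + label data
            run = 0
            i += 1 + b
    return total, best_run

def flag_packet(name_offsets, payload):
    """Flag if the pointer counts across all names exceed the thresholds."""
    grand_total = 0
    for off in name_offsets:
        total, best_run = count_pointers(payload, off)
        grand_total += total
        if best_run >= CONSECUTIVE_THRESHOLD:
            return True
    return grand_total >= TOTAL_THRESHOLD
```
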
Implementation of the detection logic above has been split up amongst several Suricata rule files, since only the pointer counting logic is specific to this vulnerability. Detection of exploits leveraging this vulnerability is enhanced by the DNS layer size check, domain name compressed length check, and domain name character check implemented in the other rules, but these are considered to be “helper” signatures, and flagging one of them does not necessarily indicate an exploitation attempt for this specific vulnerability.
False positive conditions (signatures detecting non-malicious traffic):
Networks expecting non-malicious traffic containing DNS names using non-alphanumeric characters or an abnormally large number of DNS compression pointers may generate false positives. Unfortunately, checking for pointers in only the domain name fields is insufficient, as a malicious packet could use a compression pointer that points to an arbitrary offset within said packet, so our rule instead checks every byte of the DNS layer. Consequently, Treck’s overly liberal classification of DNS compression pointers means that our rule will often misclassify unrelated bytes in the DNS payload as pointers.
In our testing, we ran into false positives with domain names containing spaces or things like “https://”. Per the RFCs, characters such as “:” and “/” should not be present in domain names but may show up from time to time in real, non-malicious traffic. The list of acceptable characters should be expanded as needed for the targeted network to avoid excessive false positives. That being said, keeping the list of acceptable characters as small as possible will make it more difficult to sneak in shellcode to leverage one of the Ripple20 DNS vulnerabilities.
False positives on the DNS size rules may occur when DNS over TCP is used if Suricata does not properly classify the packet as a DNS packet – something that has occurred multiple times during our testing. This would cause the second size check to occur, which assumes that all traffic over port 53 is DNS traffic and processes the payload accordingly. As a result, any non-DNS traffic on TCP port 53 may cause false positives in this specific case. It is recommended the port number in the rule be adjusted for any network where a different protocol is expected over port 53.
Fragmentation of DNS traffic over TCP may also introduce false positives. If the streams are not properly reconstructed at the time the rules execute on the DNS payload, byte offsets utilized in the attached Lua scripts could analyze incorrect data. Fragmentation in DNS response packets is not common on a standard network unless MTU values have been set particularly low. Each rule should be evaluated independently prior to use in production based on specific network requirements and conditions.
False negative conditions (signatures failing to detect vulnerability/exploitation):
False negatives are more likely as this detection logic relies on heuristics due to computation of the uncompressed DNS name length being too computationally expensive. Carefully constructed malicious packets may be able to circumvent the suggested pointer limitations and still trigger the vulnerability.
Signature(s):
dns_invalid_size.rules:
alert dns any any -> any any (msg:"DNS packet too large"; flow:to_client; flowbits:set,flagged; lua:dns_size.lua; sid:2020119014; rev:1;)
Lua script (dns_size.lua) can be found in Appendix A
alert tcp any 53 -> any any (msg:"DNS over TCP packet too large"; flow:to_client,no_frag; flowbits:isnotset,flagged; lua:dns_tcp_size.lua; sid:2020119015; rev:1;)
Lua script (dns_tcp_size.lua) can be found in Appendix A
dns_invalid_name.rules:
alert dns any any -> any any (flow:to_client; msg:"DNS response contains invalid domain name"; lua:dns_invalid_name.lua; sid:2020119013; rev:1;)
Lua script (dns_invalid_name.lua) can be found in Appendix A
dns_heap_overflow.rules:
# Variant 1
alert dns any any -> any any (flow:to_client; msg:"Potential DNS heap overflow exploit (CVE-2020-11901)"; lua:dns_heap_overflow_variant_1.lua; sid:2020119011; rev:1;)
Lua script (dns_heap_overflow_variant_1.lua) can be found in Appendix A
RDATA Length Mismatch in DNS CNAME Records Causes Heap Overflow
Vulnerability description:
In some versions of the Treck stack, a vulnerability exists in the way the stack processes DNS responses containing CNAME records. In such records, the length of the buffer allocated to store the DNS name is taken from the RDLENGTH field, while the data written is the full, decompressed domain name, terminating only at a null byte. As a result, if the size of the decompressed domain name specified in RDATA exceeds the provided RDLENGTH in a CNAME record, the excess is written past the end of the allocated buffer, resulting in a heap overflow and potential RCE.
Limitations or special considerations for detection:
Although exploitation of this vulnerability has been confirmed using malicious DNS over UDP packets, it has not been tested using DNS over TCP and it is unclear if such packets would exercise the same vulnerable code path. Until this can be confirmed, detection logic should assume both vectors are vulnerable.
Recommended detection criteria:
The device must be capable of processing incoming DNS responses.
The device must be capable of identifying CNAME records within DNS responses.
The device should flag all DNS responses where the actual size of the RDATA field for a CNAME record exceeds the value specified in the same record’s RDLENGTH field.
In this case, the “actual size” corresponds to how vulnerable versions of the Treck stack compute the RDATA length, which involves adding up the size of every label until either a null byte, a DNS compression pointer, or the end of the payload is encountered. The Treck stack will follow and decompress a pointer that terminates the domain name, if present, but the script does not, as this computation is simply too expensive, as mentioned previously.
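As a sketch of this check (the shipped logic is in dns_heap_overflow_variant_2.lua, Appendix A), the following Python mirrors how the “actual size” is tallied, stopping at a null byte, a compression pointer, or the end of the payload without following the pointer, and compares it against RDLENGTH. Names and interfaces are our own:

```python
def treck_rdata_size(payload, rdata_offset):
    """Compute the size effectively written for the domain name in a
    CNAME record's RDATA: sum every label until a null byte, a
    compression pointer, or the end of the payload is reached. Unlike
    the vulnerable stack, we do not follow the pointer itself."""
    size = 0
    i = rdata_offset
    while i < len(payload):
        b = payload[i]
        if b == 0:                 # terminating null byte
            size += 1
            break
        if b > 0x3F:               # compression pointer ends the name
            size += 2
            break
        size += 1 + b              # length byte plus label data
        i += 1 + b
    return size

def cname_rdlength_mismatch(payload, rdata_offset, rdlength):
    """Flag CNAME records whose actual RDATA size exceeds RDLENGTH."""
    return treck_rdata_size(payload, rdata_offset) > rdlength
```
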
False positive conditions (signatures detecting non-malicious traffic):
False positives should be unlikely, but possible in scenarios where network devices send non-malicious traffic where RDLENGTH is not equal to the size of RDATA, thereby breaking RFC 1035.
False negative conditions (signatures failing to detect vulnerability/exploitation):
Since the detection logic does not perform decompression when computing the “actual size” of RDATA, it will fail to detect malicious packets that contain domain names whose length only exceeds RDLENGTH after decompression. Unfortunately, coverage for this case is non-trivial as such packets are actually RFC-compliant. According to RFC 1035, section 4.1.4:
If a domain name is contained in a part of the message subject to a length field (such as the RDATA section of an RR), and compression is used, the length of the compressed name is used in the length calculation, rather than the length of the expanded name.
Besides the computational overhead, enforcing such a check would likely result in very high false positive rates.
Signature(s):
dns_heap_overflow.rules:
# Variant 2
alert dns any any -> any any (flow:to_client; msg:"Potential DNS heap overflow exploit (CVE-2020-11901)"; lua:dns_heap_overflow_variant_2.lua; sid:2020119012; rev:1;)
Lua script (dns_heap_overflow_variant_2.lua) can be found in Appendix A
Vulnerability description:
When processing incoming IPv6 packets, an inconsistency can be triggered when parsing the IPv6 routing header: the header length is checked against the total packet length rather than against the fragment length. This means that if fragmented packets are sent whose overall reassembled size is greater than or equal to the specified routing header length, the stack processes the routing header under the assumption that the current fragment contains enough bytes, when in fact only the overall reassembled packet does. Thus, using routing header type 0 (RH0), an attacker can force reads and writes to out-of-bounds memory locations.
There is also a secondary side effect: an information leak of a source IPv6 address in an ICMP parameter returned by the device.
Limitations or special considerations for detection:
The RFC for RH0 defines the length field as equal to “two times the number of addresses in the header.” For example, if the routing header length is six, then there are three IPv6 addresses expected in the header. Upon reconstruction of the fragmented packets, the reported number of addresses is filled with data from the fragments that follow. This creates “invalid” IPv6 addresses in the header and potentially malforms the next layer of the packet. During exploitation, it would also be likely for the next layer of the packet to be malformed. Although ICMP can be used to perform an information leak, it is possible for the next layer to be any type and therefore vary in length. Verification of the length of this layer could therefore be very expensive and non-deterministic.
Recommended detection criteria:
The device must be capable of processing fragmented IPv6 traffic.
The device should inspect fragmented packets containing Routing Header type 0 (RH0). If an RH0 IPv6 packet is fragmented, the vulnerability is likely being exploited.
If the length of the IPv6 layer of a packet fragment containing the RH0 header is less than the length reported in the routing header, the vulnerability is likely being exploited.
Upon reconstruction of the fragmented packets, if the header of the layer following IPv6 is malformed, exploitation of the vulnerability may be in progress.
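The per-fragment criteria above can be sketched in Python as follows, assuming the caller has already located the Routing extension header within a fragment. The function and its inputs are our own simplification and ignore fragment reassembly entirely:

```python
ROUTING_TYPE_0 = 0

def rh0_fragment_suspicious(routing_header_bytes, is_fragmented):
    """routing_header_bytes: bytes of the Routing extension header as
    present in a single fragment, starting at its Next Header byte."""
    if len(routing_header_bytes) < 4:
        return is_fragmented     # truncated routing header in a fragment
    hdr_ext_len = routing_header_bytes[1]
    routing_type = routing_header_bytes[2]
    if routing_type != ROUTING_TYPE_0:
        return False
    if is_fragmented:
        # RH0 is deprecated (RFC 5095) and, with the mandatory 1280-byte
        # MTU, should never be fragmented across the header's bounds
        return True
    # Length declared by the header, in bytes: (Hdr Ext Len + 1) * 8
    declared = (hdr_ext_len + 1) * 8
    # Fewer bytes present than declared means the stack would read past
    # the fragment boundary
    return len(routing_header_bytes) < declared
```
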
Notes:
Routing header type 0 was deprecated for IPv6 traffic in RFC 5095 as of December 2007. As a result, it may be feasible simply to detect packets using this criterion, though false positives are possible for legacy devices or platforms. Suricata already provides a default rule for this scenario, which has been added below. According to the RFC, routers are not supposed to fragment IPv6 packets and must support an MTU of 1280, which would always contain all of the RH0 header unless an unusual number of header extensions or an unusually large header is used. If this is followed, a packet using the RH0 header should never be fragmented across the RH0 extension header bounds, and any RH0 packet fragmented in this manner should be treated as potentially malicious. Treating any fragmented RH0 packet as potentially malicious may be sufficient. Furthermore, treating any fragmented RH0 packet with fragment sizes below a threshold, as well as IPv6 packets with multiple extension headers or an unusually large header above a threshold, may provide high-accuracy detection.
False positive conditions (signatures detecting non-malicious traffic):
If all detection criteria outlined above are used, false positives should be minimal since the reported length of a packet should match its actual length and the next header should never contain malformed data. If only routing header type 0 is checked, false positives are more likely to occur. In the additional provided rule, false positives should be minimal since RH0 is deprecated and the ICMP header should never have invalid checksums or unknown codes.
False negative conditions (signatures failing to detect vulnerability/exploitation):
False negatives may occur if the signature is developed overly specific to the layer following IPv6, for example, ICMP. An attacker could potentially leverage another layer and still exploit the vulnerability without the information leak; however, this would still trigger the default RH0 rule. In the second rule below, false negatives are likely to occur if:
An attacker uses a non-ICMP layer following the IPv6 layer
A valid ICMP code is used
The checksum is valid, and the payload is less than or equal to 5 bytes (this value can be tuned in the signature)
Signature(s):
Ipv6_rh0.rules:
alert ipv6 any any -> any any (msg:"SURICATA RH Type 0"; decode-event:ipv6.rh_type_0; classtype:protocol-command-decode; sid:2200093; rev:2;)
alert ipv6 any any -> any any (msg:"IPv6 RH0 Treck CVE-2020-11897"; decode-event:ipv6.rh_type_0; decode-event:icmpv6.unknown_code; icmpv6-csum:invalid; dsize:>5; sid:2020118971; rev:1;)
IPv4/UDP Tunneling Remote Code Execution
CVE: CVE-2020-11896
CVSS: 10.0
Protocol(s): IPv4/UDP
Port(s): Any
Vulnerability description:
The Treck TCP/IP stack does not properly handle incoming IPv4-in-IPv4 packets with fragmented payload data. This could lead to remote code execution when sending multiple specially crafted tunneled UDP packets to a vulnerable host.
The vulnerability is a result of an incorrect trimming operation when the advertised total IP length (in the packet header) is strictly less than the data available. When sending tunneled IPv4 packets using multiple fragments with a small total IP length value, the TCP/IP stack would execute the trimming operation. This leads to a heap overflow situation when the packet is copied to a destination packet allocated based on the smaller length. When the tunneled IPv4 packets are UDP packets sent to a listening port, there’s a possibility to trigger this exploit if the UDP receive queue is non-empty. This can result in an exploitable heap overflow situation, leading to remote code execution in the context in which the Treck TCP/IP stack runs.
Recommended detection criteria:
In order to detect an ongoing attack, the following conditions should be met if encapsulation can be unpacked:
The UDP receive queue must be non-empty
Incoming UDP packets must be fragmented
Flag MF = 1 with any offset, or
Flag MF = 0 with non-zero offset
Fragmented packets must have encapsulated IPv4 packet (upon assembly)
protocol = 0x4 (IPIP)
Encapsulated IPv4 packet must be split across 2 packet fragments.
Reassembled (inner-most) IPv4 packet has incorrect data length stored in IP header.
The fourth condition above is required to activate the vulnerable code path, as it spreads the data to be copied across multiple in-memory buffers. The final detection step identifies the source of the buffer overflow; as such, triggering on it alone may be sufficient.
Depending on the limitations of the network inspection device in question, a looser condition could be used, though it may be more prone to false positives.
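To make the length-mismatch condition concrete, here is a Python sketch that parses the reassembled outer IPv4 packet and compares the encapsulated packet's advertised total length with the bytes actually present. It is a minimal illustration (a single level of IPv4-in-IPv4 tunneling, no option handling beyond the IHL field); names are our own:

```python
import struct

IPPROTO_IPIP = 4

def parse_ipv4_header(packet):
    """Minimal IPv4 header parse: returns (ihl_bytes, total_length,
    protocol, mf_flag, frag_offset_bytes)."""
    ver_ihl, _, total_length = struct.unpack("!BBH", packet[:4])
    flags_frag = struct.unpack("!H", packet[6:8])[0]
    protocol = packet[9]
    ihl = (ver_ihl & 0x0F) * 4
    mf = bool(flags_frag & 0x2000)
    offset = (flags_frag & 0x1FFF) * 8
    return ihl, total_length, protocol, mf, offset

def tunneled_length_mismatch(reassembled):
    """On a reassembled outer IPv4 packet, check for an encapsulated
    IPv4 packet whose declared total length is strictly less than the
    data actually present -- the trigger for the trimming bug."""
    ihl, _, protocol, _, _ = parse_ipv4_header(reassembled)
    if protocol != IPPROTO_IPIP:
        return False
    inner = reassembled[ihl:]
    _, inner_total, _, _, _ = parse_ipv4_header(inner)
    return inner_total < len(inner)
```
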
In order to detect an ongoing attack if encapsulation cannot be unpacked:
The UDP receive queue must be non-empty
Incoming UDP packets must be fragmented
Flag MF = 1 with any value in offset field, or
Flag MF = 0 with any non-zero value in offset field
Final fragment has total fragment length longer than offset field.
The final condition shown above is not something that should be seen in a normal network.
Fragmentation, when it occurs, is the result of data exceeding the MTU for a given packet type. This means the final fragment should be no larger than any other fragment, and in practice would likely be smaller. The inverse, where the final fragment is somehow larger than the previous fragments, indicates that the fragmentation is not the result of exceeding the MTU but of something else: in this case, malicious intent.
As network monitors in common usage are likely to have the ability to unpack encapsulation, only that ruleset is provided.
Limitations or special considerations for detection:
The Treck stack supports (at least) two levels of tunneling. Each tunnel level can be IPv4-in-IPv4, IPv6-in-IPv4, or IPv4-in-IPv6. The above logic is specific to the IPv4-in-IPv4, single level of tunneling case. In cases of deeper nesting, either a recursive check or a full unwrapping of all tunneling layers will be necessary.
False positive conditions (signatures detecting non-malicious traffic):
False positives should be minimal if all detection criteria outlined above are used in the case where the tunneling can be unpacked. In the case where tunneling cannot be unpacked, the rule is unlikely to trigger many false positives in the presence of standards-compliant applications. Fragmentation as seen here is simply not common.
False negative conditions (signatures failing to detect vulnerability/exploitation):
False negatives could occur with deeper levels of nesting, or nesting of IPv6.
Signature(s):
ipv4_tunneling.rules:
alert ip any any -> any any (msg:"IPv4 TUNNELING EXPLOIT (CVE-2020-11896)"; ip_proto:4; lua:tunnel_length_check.lua; sid:2020118961; rev:1;)
Lua script (tunnel_length_check.lua) can be found in Appendix A
Appendix A
Appendix A contains Lua scripts that are required for usage with corresponding Suricata signatures. These scripts are also located on McAfee ATR’s GitHub.
The COVID-19 pandemic has put into motion a scale of remote working never before seen. Our teams are no longer just grouped in different office locations – but working individually from kitchen tables, spare rooms and, for the lucky ones, home offices! It’s therefore inevitable that this level of remote working will reveal security pitfalls for remediation, with improvements that can be carried forward when this period is over.
Attackers are taking advantage of heightened anxiety and homeworking
Tony Pepper, CEO at Egress, provides his insight below, as well as his six tips to improve data security while working from home.
Phishing
It’s sad, but it’s no surprise that phishing attacks have increased due to COVID-19, and businesses need to be prepared. Attackers are taking advantage of an environment of heightened anxiety and disrupted work settings to trick people into making mistakes, and they’re unlikely to stop until at least the main wave of the pandemic has passed.
Research shows that phishing is a major security issue under normal circumstances. Egress’ recent Insider Data Breach survey found that 41% of employees who had accidentally leaked data had done so because of a phishing email. More worryingly, due to their level of access to data and systems, senior personnel are typically the most likely group to fall victim to phishing attacks, with 61% of directors saying that they’d caused a breach in this way.
Education and training can only go so far. Of course, we must continue to encourage employees to be vigilant about suspicious emails and to do things like hovering over links before clicking on them. We also need to reduce blame culture and free up employees to report genuine mistakes without fear.
But this can only go so far. People will always make mistakes. The good news is that advanced technology like contextual machine learning can remediate the targeted attacks, like conversation hijacking, that usually do the most damage to businesses.
Productivity and Security
Even in our tech-savvy world, there are still organisations that don’t have VPN access set up or enough laptops, mobile devices or processes to enable home working. But while IT teams try to quickly sort this situation out, we’re seeing employees finding workarounds, for example by sharing files using FTP sites or sending data to personal devices to work on.
We talk a lot about ‘human layer security’ technologies, which find the right balance between productivity and security. Right now, as well as looking at technologies to help securely move meetings, events and other activities online, businesses should also check that usually easy routine tasks can still be carried out safely – such as sharing large files or sending sensitive data via email. In particular, technologies like contextual machine learning and AI can identify what typically ‘good’ security behaviour looks like for individual users and then prevent abnormal behaviours that put data at risk.
For example, with people working on smaller screens and via mobile devices, it’s more likely they might attach the wrong document to an email or include a wrong recipient. Contextual machine learning can spot when incidents like this are about to happen and correct the user’s behaviour to prevent a breach before it happens.
Human Error
People are the new perimeter when it comes to data security – their decisions and behaviours can put data at risk every day, especially at a time of global heightened anxiety.
We know from our 2020 Insider Data Breach Survey that over half of employees don’t think their organisation has sole ownership over company data – instead believing that it is in-part or entirely owned by the individuals and teams who created it. And we also know that people are more likely to take risks with data they feel belongs to them than data they believe belongs to someone else. When they don’t have access to the right tools and technology to work securely – or they think the tools they do have will slow them down, especially at a time when the need for productivity is at its highest – they’re more likely to cut corners.
Maintaining good security practices is essential – and the good news is there are technologies on the market that can help ensure the right level of security is applied to sensitive data without blocking productivity.
Six Tips to Improve Data Security While Working from Home
We can all agree that times are incredibly tough right now. For security professionals looking to mitigate some of the risks, here are six practical tips taken from the conversations we’re having with other organisations right now:
Look for security software that doesn’t hamper productivity. It’s generally the aim of the game anyway – but right now, employees are feeling increased pressure to prove their productivity. If you’re finding yourself selecting new solutions, it’s never been more crucial to select technologies that don’t add difficult extra steps for them or anyone they’re working with outside the organisation.
Choose collaboration/productivity solutions that have security baked into them. The other side to the coin of the point above, really: when choosing any new solution to implement at this time, make sure that security measures are part of a product’s standard design, and not an after-thought.
Automate security wherever possible. If it’s possible, take decisions out of end users’ hands to ensure the security of sensitive information in line with policy, reducing the risk of someone accidentally or intentionally not using security software.
Engage employees over security best practices. Phishing is a good example of this. Some inbound risks will evade the filters on your network boundary and end up in users’ mailboxes. Efforts to proactively engage employees through e-learning and other educational measures can help them know what to do with emails they think are suspicious (for example, hovering over links before clicking on them).
Look to AI and machine learning to help solve advanced risks. Use cases like conversation hijacking, misdirected emails or people attaching the wrong files to documents can now be mitigated by intelligent technology like contextual machine learning, which determines what “good security behaviour” looks like for each individual, and alerts them and administrators to abnormal incidents – effectively stopping breaches before they happen.
Implement no-fault reporting. People often don’t report security incidents because they’re concerned about the repercussions. Where it’s appropriate to do so, implement no-fault reporting to encourage individuals to report incidents in a timely manner, so you can focus on remediating the problem as quickly as possible.
One truth of parenting is this: we do a lot of learning on the job. And that often goes double when it comes to parenting and the internet.
That’s understandable. Whereas we can often look to our own families and how we were raised for parenting guidance, today’s always-on mobile internet, with tablets and smartphones almost always within arm’s reach, wasn’t part of our experience growing up. This is plenty new for nearly all of us. We’re learning on the job as it were, which is one of the many reasons why we reached out to parents around the globe to find out what their concerns and challenges are—particularly around family safety and security in this new mobile world of ours.
Just as we want to know our children are safe as they walk to school or play with friends, we want them to be just as safe when they’re online, particularly when we’re not there to look over their shoulder. Yet while we likely have good answers for keeping our kids safe around the house and the neighborhood, answers about internet safety are sometimes harder to come by.
Recently, we conducted a survey of 600 families and professionals in the U.S. to better understand what matters to them in terms of security and the lives they want to lead online. The following article reflects what they shared with us, and allows us to share it with you in turn, with the aim of helping you and your family stay safer and more secure.1
What concerns and questions do parents have about the internet?
The short answer is that parents are looking for guidance and support. They’re focused on the safety of their children, and they want advice on how to parent when it comes to online privacy, safety, and screen time. Within that, they brought up several specific concerns:
Help my kids not feel anxious about growing up in an online world.
There’s plenty wrapped up in this statement. For one, it refers to the potential anxiety that revolves around social networks and the pressures that can come with using social media—how to act, what’s okay to post and what’s not, friending, following, unfriending, unfollowing, and so on—not to mention the notion of FOMO, or “fear of missing out,” and anxiety that arises from feelings of not being included in someone else’s fun.
Keep my kids safe from bullying, or bullying others.
Feel like I can leave my child alone with a device without encountering inappropriate content.
If we think of the internet as a city, it’s the biggest one there is. For all its libraries, playgrounds, movie theatres, and shopping centers, there are dark alleys and derelict lots as well. Not to mention places that are simply age appropriate for some and not for others. Just as we give our children freer rein to explore their world on their own as they get older, the same holds true for the internet. There are some things we don’t want them to see and do.
Balance the amount of screen time my children get each day.
Screen time is a mix of many things—from schoolwork and videos to games and social media. It has its benefits and its drawbacks, depending on what children are doing and how often they’re doing it. The issue often comes down to what is “too much” screen time, particularly as it relates to the bigger picture of physical activity, face-to-face time with the family, hanging out with friends, and getting a proper bedtime without the dim light of a screen throwing off their sleep rhythms.
Where can parents get started?
Beyond our job of providing online security for devices, our focus at McAfee is on protecting people. Ultimately, that’s the job we aim to do: to help you and your family be safer. Beyond creating software for staying safe, we also put together blogs and resources that help people get sharp on the security topics that matter to them. For parents, check out this page, which offers good guidance and advice, and we hope you’ll find even more ways to keep you and your family safe.
Stay Updated
To stay updated on all things McAfee and for more resources on staying secure from home, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.
Survey conducted in October 2019, consisting of 600 computer-owning adults in the U.S.
By: Nilisha, Finance Intern, Plano, TX, United States
Amidst this global pandemic, I was fortunate enough to have the opportunity to be a Finance Intern at McAfee this summer. Working remotely was something I never thought I would have to do so soon; however, my experience was nothing short of amazing. From the onboarding process to all of the training sessions and workshops, McAfee helped make sure all of the summer interns had the most enriching experience.
As a leading-edge cybersecurity company, McAfee offers advanced security solutions to consumers, small and large businesses, enterprises, and governments. Security technologies from McAfee use a unique and predictive capability, which enables home users and businesses to stay one step ahead of the next wave of fileless attacks, viruses, malware, and other online threats.
As a Central Finance Intern, I was exposed to many different programs and software tools such as Hyperion (financial reporting), SAP (managing financials), Qlik (data discovery and analytics), and Power BI (data modeling). I worked with supportive project leads who let me ask as many questions as I wanted and guided me to successfully complete my projects. My work was used in the latest company cash forecasting, and being recognized for that felt really great. Additionally, during the course of the summer, my fellow interns and I worked together to help automate some of the pre-formatting done on the “big-guy” massive Excel workbook that contained the company financials, known as the CRIB. Getting my hands on that, and working with macros and VBA code, made me realize that I was actually able to solve things on my own and reach out for help whenever I got stuck.
Some of the fun activities we did to make our experience as normal as possible included a Finance Intern picnic at a park near the office, a virtual coffee with the CEO, Peter Leav, a Ruins Forbidden Treasure virtual escape room, and numerous Microsoft Teams calls to wind down and grasp the fact that we were lucky enough to make a long-lasting impact. Moreover, People Success, the human resources organization at McAfee, organized many workshops and virtual intern meet-ups with interns from across North America. This was a great way to see what other interns were working on and how their experiences were similar to and different from mine.
Overall, I found my time at McAfee to be one of the most profoundly educational and productive experiences of my career. I am extremely thankful to McAfee for their investment in me, and for providing me the opportunity to learn, develop, and grow as a member of the McAfee team this summer.
Follow @LifeAtMcAfee on Instagram and @McAfee on Twitter to see what working at McAfee is all about. Interested in a new career opportunity at McAfee? Explore Our Careers.
The Front Line Applied Research & Expertise (FLARE) team is honored to announce that the popular Flare-On challenge will return for a triumphant seventh year. Ongoing global events proved no match against our passion for creating challenging and fun puzzles to test and hone the skills of aspiring and experienced reverse engineers.
The contest will begin at 8:00 p.m. ET on Sept. 11, 2020. This is a CTF-style challenge for all active and aspiring reverse engineers, malware analysts and security professionals. The contest runs for six full weeks and ends at 8:00 p.m. ET on Oct. 23, 2020.
This year’s contest features a total of 11 challenges in a variety of formats, including Windows, Linux, Python, VBA and .NET. This is one of the only Windows-centric CTF contests out there and we have crafted it to closely represent the challenges faced by our FLARE team on a daily basis.
If you are skilled and dedicated enough to complete the seventh Flare-On challenge, you will receive a prize and recognition on the Flare-On website for your accomplishment. Prize details will be revealed later, but as always, it will be worthwhile swag to earn the envy of your peers. In previous years we sent out belt buckles, replica police badges, challenge coins, medals and huge pins.
Check the Flare-On website for a live countdown timer, to view the previous year’s winners, and to download past challenges and solutions for practice. For official news and information, we will be using the Twitter hashtag: #flareon7.
In efforts to automatically capture important data from scientific papers, computer scientists at the National Institute of Standards and Technology (NIST) have developed a method to accurately detect small, geometric objects such as triangles within dense, low-quality plots contained in image data. Employing a neural network approach designed to detect patterns, the NIST model has many possible applications in modern life. NIST’s neural network model captured 97% of objects in a defined set of test images, locating the objects’ centers to within a few pixels of manually selected locations.
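The core idea of scanning every location in an image and scoring how well a small pattern matches there can be illustrated, in heavily simplified form, with plain template matching. The NIST model is a trained neural network; the binary grid, the tiny "triangle" template, and the exact-match rule below are invented purely for illustration:

```python
# Toy illustration of sliding-window pattern detection in a binary image.
# The real NIST approach uses a trained neural network; this only shows
# the "scan every location, score the match" idea.

TEMPLATE = [
    (0, 1),                   # a tiny triangle shape: apex...
    (1, 0), (1, 1), (1, 2),   # ...over a base row
]

def find_pattern(grid, template):
    """Return (row, col) offsets where every template pixel is set."""
    hits = []
    h = max(r for r, _ in template) + 1
    w = max(c for _, c in template) + 1
    for r in range(len(grid) - h + 1):
        for c in range(len(grid[0]) - w + 1):
            if all(grid[r + dr][c + dc] for dr, dc in template):
                hits.append((r, c))
    return hits

if __name__ == "__main__":
    image = [
        [0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0],
    ]
    print(find_pattern(image, TEMPLATE))  # one triangle found at (1, 1)
```

A learned detector replaces the exact-match test with a scoring function that tolerates the noise and clutter of dense, low-quality plots.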
One of the joys of online shopping is instant gratification – your purchases arrive on your doorstep in just a few days! Unfortunately, consumers aren’t the only ones taking advantage of this convenience – hackers are also using it to trick users into handing over money or data. Recently, AARP recounted several scams where cybercriminals posed as Amazon’s customer service or security team as a ploy to steal your personal information.
How These Scams Work
These scams all begin with an unsuspecting user seeking help from Amazon’s customer support or their security team, only to find the contact information of a fraudster posing as the company. For example, in one of these scams, a user called a fraudulent customer support number to help his wife get back into her account. However, the scammer behind the phone number tried to sell the victim a fake $999 computer program to prevent hacking on his own device. Thankfully, according to AARP, the man refused to send the money.
Another victim reported receiving an email from the “Amazon Security Team,” stating that a fraudulent charge was made on her account and that it was locked as a result. The email asked for her address and credit or debit card information to unlock her account and get a refund on the fake charge. But upon closer review, the woman noticed that the email address ended in .ng, indicating that it was coming from Nigeria. Luckily, the woman refused to send her information and reported the incident instead.
Not all victims are as lucky. One woman received an email that looked like it was from Amazon and gave the scammers her social security number, credit card number, and access to her devices. Another victim lost $13,300 to scammers who contacted her through a messaging platform stating that someone hacked her Amazon account and that she needed to buy gift cards to restore it.
Steer Clear of These Tricks
Many of these fraudsters are taking advantage of Amazon’s credibility to trick unsuspecting users out of money and personal data. However, there are ways users can avoid falling prey to these scams – and it all starts with staying educated on the latest schemes so consumers know what to look out for. Armed with that knowledge, consumers can feel more confident browsing the internet and making online purchases. Protect your digital life by following these security tips:
Go directly to the source
Be skeptical of emails or text messages claiming to be from organizations with peculiar asks or information that seems too good to be true. Instead of clicking on a link within the email or text, it’s best to go straight to the organization’s website or contact customer service.
Be wary of emails asking you to act
If you receive an email or text asking you to take a specific action or provide personal details, don’t click on anything within the message. Instead, go straight to the organization’s website. This will prevent you from accidentally downloading malicious content. Additionally, note that Amazon does not ask for personal information like bank account numbers or Social Security numbers in unsolicited emails.
Only use one credit card for online purchases
By only using one payment method for online purchases, you can keep a better eye out for fraud instead of monitoring multiple accounts for suspicious activity.
Look out for common signs of scams
Be on the lookout for fake websites and phone numbers with Amazon’s logo. Look for misspelled words and grammatical errors in emails or other correspondence. If someone sends you a message with a link, hover over the link without actually clicking on it. This will allow you to see a link preview. If the URL looks suspicious, don’t click on it, as it’s probably a phishing link that could download malicious content onto your device. It’s best to avoid interacting with the link and delete the message altogether.
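That manual domain check can also be expressed programmatically: compare the host in the link preview against the domain the message claims to come from. The example URLs below are made up for illustration:

```python
# Rough check: does a URL's host actually belong to the brand it claims?
# The example URLs are invented for illustration.
from urllib.parse import urlparse

def host_matches(url: str, expected_domain: str) -> bool:
    """True if the URL's host is expected_domain or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return host == expected_domain or host.endswith("." + expected_domain)

print(host_matches("https://www.amazon.com/help", "amazon.com"))                 # True
print(host_matches("http://amazon.com.account-verify.ng/login", "amazon.com"))   # False
```

Note how the second URL begins with "amazon.com" but actually belongs to an entirely different domain – exactly the trick the .ng email in the story above relied on.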
Stay updated
To stay updated on all things McAfee and on top of the latest consumer and mobile security threats, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.
In 2011 when IBM’s Watson supercomputer went up against “Jeopardy” icon Ken Jennings, the world watched as a battle of man vs. machine concluded in an impressive win for Watson. It wasn’t simply remarkable that Watson could complete calculations and source documents quickly; the real feat was the brainpower it took to create fine-tuned software with the ability to comprehend questions contextually and think like a human.
But Watson wasn’t without fault, struggling to understand some “Jeopardy” categories that were a little too specific and reminding us that human beings still play a critical role in the successes (or failures) of modern technology. In application security (AppSec), there is no single set-it-and-forget-it solution that will ensure the health and fortitude of your code. Like Watson, the software can’t operate to its fullest potential without the right brainpower behind it, requiring thoughtful minds to understand where solutions plug in and to check code in ways that software cannot.
The human element of ingenuity
Automation in AppSec testing tools is a prime example. It plays a critical role in scaling security operations and scanning for vulnerabilities to find them before they become expensive headaches. While that undoubtedly boosts efficiency and speed in the background, there’s a human element of ingenuity and adaptability that you can’t ignore: cyberattackers. They pivot quickly to crack your code whether you automate or not, which means your developers and security professionals need to be just as agile and close knowledge gaps to stay one step ahead as they leverage the right testing tools in the background.
And while having a full range of scanning solutions integrated into your software development process will help you find and fix common flaws, Manual Penetration Testing (MPT) is crucial for uncovering categories of vulnerabilities, like business logic flaws, that you can’t automate with software. The bottom line: man and machine need to work together in AppSec, because like Watson, it takes a village of brainpower to come out on top.
There’s a lot to explore in the realm of man vs. machine, which is why we’re excited to partner with HackerOne for upcoming virtual events that uncover the ways you can work with technology, not against it. In this three-part series, we’re delving into topics like crowdsourced testing and automation to examine how you can strike the balance between capable software solutions and human-powered security. Here’s the lineup:
Part One | Man with Machine: Adapting SDLC for DevSecOps
To keep pace with modern software development, DevOps must work continuously to deliver applications to various infrastructure environments, automatically pushing code changes as they arise. Traditional security practices bog down development, frustrating development teams and causing unnecessary friction. This talk will cover the ways development and security teams can work together with automation and human-powered security at the speed of innovation. Join Veracode’s Chris Kirsch and Chris Wysopal as they chat with HackerOne’s CTO and Co-Founder Alex Rice to learn:
How security and development teams can partner to create a continuous feedback loop without hampering innovation.
How security becomes a competitive advantage through balancing speed with risk.
How to engage a diverse and creative pool of talent not available in traditional firms to test business-critical applications.
Part Two | Hacking Remote: Leveraging Automation and Crowdsourced Testing to Secure Your Enterprise
As the world reacts to a global pandemic and the work-from-home model becomes the norm, people are more broadly distributed, and applications, systems, and infrastructures are more vulnerable than ever as a result. In this talk, we’ll discuss the undue strain put on security teams and delve into how leveraging automation and crowdsourced security testing allows your enterprise to scale security to accommodate its newly dispersed workforce. Join HackerOne’s Director of Product Marketing April Rassa and Director of Product Miju Han, along with Veracode’s Brittany O’Shea, to learn:
How to implement a security program with the scale necessary to cover a growing attack surface.
How to operate security at scale while reducing costs and removing the need for expensive headcount.
Trends and insights into the vulnerabilities impacting companies during a time of increased digital connectivity.
Part Three | Who Will Win the Fight of Automation?
In this talk, security leaders from Veracode and HackerOne will debate the unique values man and machine bring and discuss why companies need a complete security strategy that takes into account both the strengths of scale and speed technology can provide and the need for creative skills and adaptability only humans can bring. Join this talk with Tanner Emek and Johnny Nipper, two hackers from HackerOne, along with Veracode’s Ryan O’Boyle, to learn:
The differences in vulnerabilities found by hackers vs. automated tools.
Suggestions for augmenting existing security best practices with a human touch.
When to choose between automation and human-powered security for your organization.
Armed with the right knowledge and tools, creating a well-rounded AppSec program that relies both on technology and human brainpower isn’t as daunting as it may seem. Join these virtual sessions by registering to gain more insight into the ways man and machine can work with, not against, each other on the journey to enhanced security. We hope to see you there!
Building Adaptable Security Architecture Against NetWalker
NetWalker Overview
The NetWalker ransomware, initially known as Mailto, was first detected in August 2019. Since then, new variants were discovered throughout 2019 and the beginning of 2020, with a strong uptick noticed in March of this year. NetWalker has noticeably evolved to a more stable and robust ransomware-as-a-service (RaaS) model, and McAfee research suggests that the malware operators are targeting and attracting a broader range of technically advanced and enterprising criminal affiliates. McAfee Advanced Threat Research (ATR) discovered a large sum of bitcoins linked to NetWalker, which suggests its extortion efforts are effective and that many victims have had no option other than to succumb to its criminal demands. For more details on NetWalker, see the McAfee ATR blog here.
We do not want you to be one of those victims, so this blog is focused on how to build an adaptable security architecture to defeat this threat and, specifically, how McAfee’s portfolio delivers the capability to prevent, detect and respond to NetWalker ransomware.
Gathering Intelligence on NetWalker
As always, building adaptable defensive architecture starts with intelligence. In most organizations, the Security Operations team is responsible for threat intelligence analysis, as well as threat and incident response. The Preview of McAfee MVISION Insights is a sneak peek of some of MVISION Insights capabilities for the threat intel analyst and threat responder. The preview identifies the prevalence and severity of select top emerging threats across the globe which enables the Security Operations Center (SOC) to prioritize threat response actions and gather relevant cyber threat intelligence (CTI) associated with the threat, in this case NetWalker ransomware. The CTI is provided in the form of technical Indicators of Compromise (IOCs) as well as MITRE ATT&CK framework tactics and techniques.
As a threat intel analyst or responder, you can drill down to gather more specific information on NetWalker, such as prevalence and links to other sources of information.
As a threat intel analyst or responder, you can further drill down to gather more specific actionable intelligence on NetWalker, such as indicators of compromise and tactics/techniques aligned to the MITRE ATT&CK framework.
From the MVISION Insights preview, you can see that NetWalker leverages tactics and techniques common to other ransomware attacks, such as spear phishing attachments for Initial Access, use of PowerShell for deployment, modification of Registry Keys/Startup folder for persistence and, of course, encryption of files for impact.
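The triage this intelligence enables can be sketched as a simple IOC match: given a feed of indicators tagged with ATT&CK techniques, flag any observed artifacts that appear in it. The hash values and the technique mapping below are invented placeholders, not real NetWalker indicators:

```python
# Minimal sketch of matching observed file hashes against a CTI feed.
# Hash values and the technique mapping are invented placeholders.
netwalker_iocs = {
    "aaaa1111bbbb2222": "T1566.001 Spearphishing Attachment",
    "cccc3333dddd4444": "T1059.001 PowerShell",
}

def match_iocs(observed_hashes, ioc_feed):
    """Return {hash: ATT&CK technique} for every observed hash in the feed."""
    return {h: ioc_feed[h] for h in observed_hashes if h in ioc_feed}

hits = match_iocs(["cccc3333dddd4444", "eeee5555ffff6666"], netwalker_iocs)
print(hits)  # only the hash present in the feed is reported
```

Platforms like MVISION Insights do this correlation at scale and across telemetry sources, but the underlying lookup is the same.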
Defensive Architecture Overview
Today’s digital enterprise is a hybrid environment of on-premise systems and cloud services with multiple entry points for attacks like NetWalker. The work from home operating model forced by COVID-19 has only expanded the attack surface and increased the risk of a successful ransomware attack if organizations did not adapt their security posture. Mitigating the risk of attacks like NetWalker requires a security architecture with the right controls at the device, on the network and in security operations (sec ops). The Center for Internet Security (CIS) Top 20 Cyber Security Controls provides a good guide to build that architecture. For ransomware, and NetWalker in particular, the controls must be layered throughout the enterprise. The following outlines the key security controls needed at each layer of the architecture to protect your enterprise against ransomware.
To assess your capability against NetWalker, you must match your existing controls against the attack stages we learned from the Preview of MVISION Insights. For detailed analysis on the NetWalker ransomware attack, see McAfee ATR’s blog but, for simplicity, we matched the attack stages to the MITRE ATT&CK Framework below.
Initial Access Stage Defensive Overview
According to Threat Intelligence and Research, the initial access is performed either through vulnerability exploitation or spear phishing attachments. The following chart summarizes the controls expected to have the most effect against initial stage techniques and the McAfee solutions to implement those controls where possible.
MITRE Tactic: Initial Access
MITRE Techniques: Exploit Public-Facing Application (T1190) (e.g., Tomcat, WebLogic)
CSC Controls: CSC 2 Inventory of Software Assets; CSC 3 Continuous Vulnerability Assessment; CSC 5 Secure Configuration of Hardware and Software; CSC 9 Limitation of Network Ports and Protocols; CSC 12 Boundary Defense; CSC 18 Application Software Security
McAfee Capability: Endpoint Security Platform 10.7, Threat Prevention, Application Control (MAC)
As attackers can quickly change spear phishing attachments, it is important to have adaptable defenses that include user awareness training and response procedures, behavior-based malware defenses on email systems, web access and endpoint systems, and finally sec ops playbooks for early detection and response against suspicious email attachments or other phishing techniques. For more information on how McAfee can protect against suspicious email attachments, review this additional blog post.
Using valid accounts and protocols, such as for Remote Desktop Protocol, is an attack technique we have seen rise during the initial COVID-19 period. To further understand how McAfee defends against RDP as an initial access vector, as well as how the attackers are using it to deploy ransomware, please see our previous posts.
The exploitation stage is where the attacker gains access to the target system. Protection at this stage is heavily dependent on system vulnerability management, adaptable anti-malware on both end user devices and servers and security operations tools like endpoint detection and response sensors.
McAfee Endpoint Security 10.7 provides a defense in depth capability including signatures and threat intelligence to cover known bad indicators or programs.
Additionally, machine-learning and behavior-based protection reduces the attack surface against NetWalker and detects new exploitation attack techniques.
For more information on how McAfee Endpoint Security 10.7 can prevent or identify the techniques used in NetWalker, review these additional blog posts.
The following chart summarizes the critical security controls expected to have the most effect against exploitation stage techniques and the McAfee solutions to implement those controls where possible.
The impact stage is where the attacker encrypts the target system, data and perhaps moves laterally to other systems on the network. Protection at this stage is heavily dependent on adaptable anti-malware on both end user devices and servers, network controls and security operation’s capability to monitor logs for anomalies in privileged access or network traffic. The following chart summarizes the controls expected to have the most effect against impact stage techniques and the McAfee solutions to implement those controls where possible.
As a threat intel analyst or hunter, you might want to quickly scan your systems for any NetWalker indicators. Of course, you can do that manually by downloading a list of indicators and searching with available tools. However, if you have MVISION EDR, you will be able to run that search right from Insights, saving precious time. Hunting the attacker can be a game of inches, so every second counts. Of course, if you find infected systems or systems with indicators, you can take action to contain them and start an investigation for incident response immediately from the MVISION EDR console.
Proactively Detecting NetWalker Techniques
Many of the exploit stage techniques in this attack use legitimate Windows tools or valid accounts to either exploit, avoid detection or move laterally. These techniques are not easily prevented but can be detected using MVISION EDR. As security analysts, we want to focus on suspicious uses of tools such as PowerShell to download files, execute scripts, or evade defenses.
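Outside of an EDR console, the same hunt can be approximated by scanning process command lines for PowerShell patterns commonly abused at these stages. The indicator list and the sample command line below are illustrative assumptions, not a production detection rule:

```python
# Flag command lines containing PowerShell patterns commonly abused by
# ransomware operators (download cradles, encoded commands, AV evasion).
# Indicator list and the sample are illustrative, not a production rule.
SUSPICIOUS = [
    "downloadstring", "downloadfile",         # download files
    "-encodedcommand", "-enc ", "iex ",       # execute obfuscated scripts
    "-nop", "-windowstyle hidden", "bypass",  # evade defenses
]

def flag_cmdline(cmdline: str):
    """Return the suspicious substrings found in a process command line."""
    low = cmdline.lower()
    return [s for s in SUSPICIOUS if s in low]

sample = "powershell.exe -nop -WindowStyle Hidden -enc SQBFAFgA"
print(flag_cmdline(sample))  # several indicators fire on this sample
```

Real EDR detections correlate these patterns with parent process, user context and network activity to cut false positives; substring matching alone would flag plenty of legitimate admin scripts.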
Monitoring or Reporting on NetWalker Events
Events from McAfee Endpoint Protection and Web Gateway play a key role in NetWalker incident and threat response. McAfee ePO centralizes event collection from all managed endpoint systems. As a threat responder, you may want to create a dashboard for NetWalker-related threat events to understand current exposure. Here is a list (not exhaustive) of NetWalker-related threat events as reported by Endpoint Protection Platform Threat Prevention Module and McAfee Web Gateway.
McAfee Endpoint Threat Prevention Events
Ransom-NetW!AB8D59ABA3DC
GenericRXKU-HO!E33E060DA1A5
PS/Netwalker.a
Ransom-NetW!1B6A2BFA39BC
Artemis!2F96F8098A29
GenericRXKD-DA!645C720FF0EB
GenericRXKD-DA!4E59FBA21C5E
Ransom-NetW!A9E395E478D0
Ransom-NetW!A0BC1AFED896
PS/Netwalker.c
Artemis!F5C877335920
GenericRXKD-DA!B862EBC24355
GenericRXKD-DA!63EB7712D7C9
RDN/Ransom
GenericRXKD-DA!F0CC568491CD
Artemis!0FF0D5085F7E
GenericRXKD-DA!9172586C2F87
RDN/Generic.dx
Ransom-NetW!BFF6F7B3A7DB
Ransom-NetW!7B77B436360A
GenericRXKD-DA!BC75859695F6
GenericRXKD-DA!FCEDEA8111AB
GenericRXKD-DA!5ABF6ED342FD
PS/Netwalker.d
GenericRXKD-DA!C0DDA75C6EAE
GenericRXKD-DA!ADDC865F6169
GenericRXKD-DA!DBDD7A1F53AA
Artemis!1527DAF8626C
GenericRXKD-DA!608AC26EA80C
Ransom-NetW!3A601EE68000
GenericRXKD-DA!8102821249E1
Ransom-NetW!2E2F5FE8ABA4
GenericRXKD-DA!F957F19CD9D7
GenericRXKD-DA!3F3CC36F4298
GenericRXKD-DA!9001DFA8D69D
PS/Agent.bu
GenericRXKD-DA!5F55AC3DD189
GenericRXKD-DA!18C32583A6FE
GenericRXKD-DA!01F703234047
Ransom-NetW!62C71449FBAA
GenericRXKD-DA!6A64553DA499
GenericRXKD-DA!0CBA10DF0C89
Artemis!50C6B1B805EC
PS/Netwalker.b
GenericRXKD-DA!59B00F607A75
Artemis!BC96C744BD66
GenericRXKD-DA!DE0B8566636D
Ransom-NetW!8E310318B1B5
GenericRXKD-DA!0537D845BA09
GenericRXKU-HO!DE61B852CADA
GenericRXKD-DA!B4F8572D4500
GenericRXKD-DA!D09CFDA29F17
PS/Agent.bx
GenericRXKD-DA!0FF5949ED496
GenericRXKD-DA!2B0384BE06D2
GenericRXKD-DA!5CE75526A25C
GenericRXKD-DA!BDC345B7BCEC
Ransom-CWall!993B73D6490B
GenericRXKD-DA!0E611C6FA27A
GenericRXKU-HO!961942A472C2
Ransom-NetW!291E1CE9CD3E
Ransom-Mailto!D60D91C24570
GenericRXKU-HO!997F0EC7FCFA
Ransom-CWall!3D6203DF53FC
Ransom-Netwalker
Ransom-NetW!BDE3EC20E9F8
Generic .kk
GenericRXKU-HO!1DB8C7DEA2F7
GenericRXKD-DA!DD4F9213BA67
GenericRXKD-DA!729928E6FD6A
GenericRXKU-HO!9FB87AC9C00E
GenericRXKU-HO!187417F65AFB
McAfee Web Gateway Events
RDN/Ransom
BehavesLike.Win32.RansomCWall.mh
BehavesLike.Win32.Generic.kh
Ransom-NetW!1B6A2BFA39BC
BehavesLike.Win32.MultiPlug.kh
Ransom:Win32/NetWalker.H!rsm
BehavesLike.Win32.Generic.qh
BehavesLike.Win32.Trojan.kh
GenericRXKD-DA!DD4F9213BA67
BehavesLike.Win32.Ipamor.kh
BehavesLike.Win64.Trojan.nh
BehavesLike.Win32.Generic.cz
RDN/Generic.dx
BehavesLike.Win32.RansomCWall.mm
BehavesLike.Win64.BadFile.nh
BehavesLike.Win32.Generic.dm
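When building such a dashboard outside ePO, long detection lists like the ones above become readable once grouped by family prefix. This grouping logic is a generic sketch, not an ePO feature:

```python
# Group detection names like "Ransom-NetW!AB8D59ABA3DC" or "PS/Netwalker.a"
# by their family prefix for a quick per-family event count.
from collections import Counter

def family(event_name: str) -> str:
    """Take the part before '!' (the per-sample hash) or the bare name."""
    return event_name.split("!")[0]

events = [
    "Ransom-NetW!AB8D59ABA3DC",
    "Ransom-NetW!1B6A2BFA39BC",
    "PS/Netwalker.a",
    "GenericRXKD-DA!645C720FF0EB",
]
print(Counter(family(e) for e in events))  # counts per detection family
```

The part after `!` in these names distinguishes individual samples, so collapsing on the prefix gives a per-family view of how widespread each detection is.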
Summary
Ransomware has evolved into a lucrative business for threat actors, from underground forums selling ransomware to services such as support portals that guide victims through acquiring cryptocurrency for payment and the negotiation of the ransom. However, just as attackers work together, defenders must collaborate internally and externally to build an adaptive security architecture that will make it harder for threat actors to succeed and build resilience in the business. This blog highlights how to use McAfee’s security solutions to prevent, detect and respond to NetWalker and attackers using similar techniques.
McAfee ATR is actively monitoring ransomware threats and will continue to update McAfee MVISION Insights and its social networking channels with new and current information. Want to stay ahead of the adversaries? Check out McAfee MVISION Insights for more information.
The NetWalker ransomware, initially known as Mailto, was first detected in August 2019. Since then, new variants were discovered throughout 2019 and the beginning of 2020, with a strong uptick noticed in March of this year.
NetWalker has noticeably evolved to a more stable and robust ransomware-as-a-service (RaaS) model, and our research suggests that the malware operators are targeting and attracting a broader range of technically advanced and enterprising criminal affiliates.
McAfee Advanced Threat Research (ATR) discovered a large sum of bitcoins linked to NetWalker, which suggests its extortion efforts are effective and that many victims have had no option other than to succumb to its criminal demands.
We approached our investigation of NetWalker with some possible ideas about the threat actor behind it, only to later disprove our own hypothesis. We believe the inclusion of our thinking, and the means by which we debunked our own theory, highlights the importance of thorough research: it starts valuable discussions and helps others avoid duplicating research efforts. We welcome further discussion on this topic and encourage our peers in the industry to share information with us in case you have more evidence.
McAfee protects its customers against the malware covered in this blog in all its products, including personal antivirus, endpoint and gateway. To learn more about how McAfee products can defend against these types of attacks, visit our blog on Building Adaptable Security Architecture Against NetWalker.
Check out McAfee Insights to stay on top of NetWalker’s latest developments and intelligence on other cyber threats, all curated by the McAfee ATR team. Not only that, Insights will also help you prioritize threats, predict if your countermeasures will work and prescribe corrective actions.
Introduction
Since 2019, NetWalker ransomware has reached a vast number of different targets, mostly based in western European countries and the US. Since the end of 2019, the NetWalker gang has indicated a preference for larger organisations rather than individuals. During the COVID-19 pandemic, the adversaries behind NetWalker clearly stated that hospitals will not be targeted; whether they keep to their word remains to be seen.
The ransomware appends a random extension to infected files and uses Salsa20 encryption. To avoid detection, it uses tricks such as reflective DLL loading, a defence evasion technique that injects a DLL directly from memory.
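To make the "random extension" behavior concrete, here is a benign sketch of how a short random suffix could be generated and appended to a file name. The 6-character alphanumeric scheme is an assumption for illustration only; NetWalker's actual generation logic differs:

```python
# Benign illustration: derive a short random extension and append it to a
# file name, as NetWalker-style ransomware does to files it encrypts.
# The 6-char alphanumeric scheme is an assumption, not NetWalker's own.
import secrets
import string

def random_extension(length: int = 6) -> str:
    alphabet = string.ascii_lowercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def renamed(path: str) -> str:
    return path + "." + random_extension()

print(renamed("report.docx"))  # e.g. report.docx followed by a random suffix
```

Because every victim sees a different extension, defenders cannot rely on a fixed suffix to spot encrypted files; behavioral indicators like mass renames are more reliable.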
The NetWalker collective, much like those behind Maze, REvil and other ransomware, threatens to publish victims’ data if ransoms are not paid.
As mentioned earlier, NetWalker RaaS prioritizes quality over quantity and is looking for people who are Russian-speaking and have experience with large networks. People who already have a foothold in a potential victim’s network and can exfiltrate data with ease are especially sought after. This is not surprising, considering that publishing a victim’s data is part of NetWalker’s model.
The following sections are dedicated to introducing the NetWalker malware and displaying the telemetry status before moving on to the technical malware analysis of the ransomware’s behaviour. We will explain how the decrypt