Daily Archives: October 13, 2020

Clues You Have Been Hacked

Some of the most common indicators that you may have been hacked include the following:

  • Your friends tell you that they have received odd emails or messages from you, messages you know you did not send.
  • Your password no longer works for one of your accounts, even though you know you never changed it.
  • Your anti-virus software informs you that one of your files or your computer is infected.
  • You receive a pop-up message informing you that the files on your computer have been encrypted and you must pay a ransom to recover them.

Election 2020 – How to Spot Phony Deepfake Videos this Election

Maybe you’ve seen videos where Robert Downey Jr. and other cast members of The Avengers follow the yellow brick road after they swap faces with the cast of 1939’s The Wizard of Oz. Or how about any of the umpteen videos where the face of actor Nicolas Cage is swapped with, well, everybody, from the cast of Friends to Forrest Gump. They’re funny, uncanny, and sometimes a little too real. Welcome to deepfakes, a technology that can be entertaining, yet one that has election year implications—now and for years to come.

What are deepfakes?

Deepfakes are phoney video or audio recordings that look and sound real, so much so that the best of them can dupe people into thinking they’re the real thing. They’re not unlike those face-swapping apps your children or nieces and nephews may have on their phones, albeit more sophisticated. Less powerful versions of deepfaking software are used by the YouTube channels that create the videos I mentioned above. However, more sophisticated deepfake technologies have chilling repercussions when it comes to public figures, such as politicians.

Imagine creating a video of a public figure where you literally put words into their mouth. That’s what deepfakes effectively do. This can lead to threat tactics, intimidation, and personal image sabotage—and in an election year, the spread of disinformation.

Deepfakes sow the seeds of doubt

Deepfakes can make you question if what you’re seeing, and hearing, is actually real. In terms of an election year, they can introduce yet another layer of doubt into our discourse—leading people to believe that a political figure has said something that they’ve never said. And, conversely, giving political figures an “out” where they might decry a genuine audio or video clip as a deepfake, when in fact it is not.

The technology and security industries have responded by rolling out their own efforts to detect and uncover deepfakes. Here at McAfee, we’ve launched McAfee Deepfakes Lab, which provides traditional news and social media organizations advanced Artificial Intelligence (AI) analysis of suspected deepfake videos intended to spread reputation-damaging lies about individuals and organizations during the 2020 U.S. election season and beyond.

However, what can you do when you encounter, or think you encounter, a deepfake on the internet? Just like in my recent blog on election misinformation, a few tips on media savvy point the way.

How to spot deepfakes

While the technology continually improves, there are still typical telltale signs that a video you’re watching is a deepfake. Creators of deepfakes count on you to overlook some fine details, as the technology today largely has difficulty capturing the subtle touches of their subjects. Take a look at:

  • Their face. Head movement can cause a slight glitch in the rendering of the image, particularly because the technology works best when the subject is facing toward the camera.
  • Their skin. Blotchy patches, irregular skin tones, or flickering at the edges of the face are all signs of deepfake videos.
  • Their eyes. Other glitches may come by way of eyeglasses, eyes that look expressionless, and eyes that appear to be looking in the wrong direction. Likewise, the light reflected in their irises may look odd or fail to match the setting.
  • Their hair. Flyaway strands and the other small irregularities you’d expect in a real head of hair continue to be problematic for deepfakes. Instead, that head of hair could look a little too perfect.
  • Their smile. Teeth don’t always render well in deepfakes, sometimes looking more like white bars instead of showing the usual irregularities we see in people’s smiles. Also, look out for inconsistencies in the lip-syncing.

 Listen closely to what they’re saying, and how they’re saying it

This is important. As I pointed out in my recent article on how to spot fake news and misinformation in your social media feed, deepfake content is meant to stir your emotions—whether that’s a sense of ridicule, derision, outrage, or flat-out anger. While an emotional response to a video isn’t a hard and fast indicator of a deepfake in itself, it should give you a moment of pause. Listen to what’s being said. Consider its credibility. Question the motives of the producer or poster of the video. Look to additional credible sources to verify that the video is indeed real.

How the person speaks is important to consider as well. Another component of deepfake technology is audio deepfaking. As recently as 2019, fraudsters used audio deepfake technology to swindle nearly $250,000 from a UK-based energy firm by mimicking the voice of its CEO over the phone. Like its video counterpart, an audio deepfake can sound uncannily real, or at least real enough to sow a seed of doubt. Still, the technology has its shortcomings. Audio deepfakes can sound “off”: cold, as though the normal human emotional cues have been stripped away, or flat in cadence, the way a robocall sounds.

As with all things this election season and beyond, watch carefully, listen critically. And always look for independent confirmation. For more information on our .GOV-HTTPS county website research, potential disinformation campaigns, other threats to our elections, and voter safety tips, please visit our Elections 2020 page: https://www.mcafee.com/enterprise/en-us/2020-elections.html

Stay Updated 

To stay updated on all things McAfee and for more resources on staying secure from home, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

The post Election 2020 – How to Spot Phony Deepfake Videos this Election appeared first on McAfee Blogs.

Lessons From Teaching Cybersecurity: Week 3

As I had mentioned previously, this year, I’m going back to school. Not to take classes but to teach a course at my alma mater, Fanshawe College. I did this about a decade ago and thought it was interesting, so I was excited to give it another go. Additionally, after a friend mentioned that their […]… Read More

The post Lessons From Teaching Cybersecurity: Week 3 appeared first on The State of Security.

VERT Threat Alert: October 2020 Patch Tuesday Analysis

Today’s VERT Alert addresses Microsoft’s October 2020 Security Updates. VERT is actively working on coverage for these vulnerabilities and expects to ship ASPL-909 on Wednesday, October 14th. In-The-Wild & Disclosed CVEs (October 2020 Patch Tuesday Analysis) CVE-2020-16938 This CVE describes an information disclosure in the Windows kernel that could allow a local attacker to disclose […]… Read More

The post VERT Threat Alert: October 2020 Patch Tuesday Analysis appeared first on The State of Security.

Microsoft Patch Tuesday, October 2020 Edition

It’s Cybersecurity Awareness Month! In keeping with that theme, if you (ab)use Microsoft Windows computers you should be aware the company shipped a bevy of software updates today to fix at least 87 security problems in Windows and programs that run on top of the operating system. That means it’s once again time to backup and patch up.

Eleven of the vulnerabilities earned Microsoft’s most-dire “critical” rating, which means bad guys or malware could use them to gain complete control over an unpatched system with little or no help from users.

Worst in terms of outright scariness is probably CVE-2020-16898, which is a nasty bug in Windows 10 and Windows Server 2019 that could be abused to install malware just by sending a malformed packet of data at a vulnerable system. CVE-2020-16898 earned a CVSS Score of 9.8 (10 is the most awful).

Security vendor McAfee has dubbed the flaw “Bad Neighbor,” and in a blog post about it said a proof-of-concept exploit shared by Microsoft with its partners appears to be “both extremely simple and perfectly reliable,” noting that this sucker is eminently “wormable” — i.e. capable of being weaponized into a threat that spreads very quickly within networks.

“It results in an immediate BSOD (Blue Screen of Death), but more so, indicates the likelihood of exploitation for those who can manage to bypass Windows 10 and Windows Server 2019 mitigations,” McAfee’s Steve Povolny wrote. “The effects of an exploit that would grant remote code execution would be widespread and highly impactful, as this type of bug could be made wormable.”

Trend Micro’s Zero Day Initiative (ZDI) calls special attention to another critical bug quashed in this month’s patch batch: CVE-2020-16947, which is a problem with Microsoft Outlook that could result in malware being loaded onto a system just by previewing a malicious email in Outlook.

“The Preview Pane is an attack vector here, so you don’t even need to open the mail to be impacted,” said ZDI’s Dustin Childs.

While there don’t appear to be any zero-day flaws in October’s release from Microsoft, Todd Schell from Ivanti points out that a half-dozen of these flaws were publicly disclosed prior to today, meaning bad guys have had a jump start on being able to research and engineer working exploits.

Other patches released today tackle problems in Exchange Server, Visual Studio, .NET Framework, and a whole mess of other core Windows components.

For any of you who’ve been pining for a Flash Player patch from Adobe, your days of waiting are over. After several months of depriving us of Flash fixes, Adobe’s shipped an update that fixes a single — albeit critical — flaw in the program that crooks could use to install bad stuff on your computer just by getting you to visit a hacked or malicious website.

Chrome and Firefox both now disable Flash by default, and Chrome and IE/Edge auto-update the program when new security updates are available. Mercifully, Adobe is slated to retire Flash Player later this year, and Microsoft has said it plans to ship updates at the end of the year that will remove Flash from Windows machines.

It’s a good idea for Windows users to get in the habit of updating at least once a month, but for regular users (read: not enterprises) it’s usually safe to wait a few days until after the patches are released, so that Microsoft has time to iron out any chinks in the new armor.

But before you update, please make sure you have backed up your system and/or important files. It’s not uncommon for a Windows update package to hose one’s system or prevent it from booting properly, and some updates have even been known to erase or corrupt files.

So do yourself a favor and backup before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Becoming resilient by understanding cybersecurity risks: Part 1

All risks have to be viewed through the lens of the business or organization. While information on cybersecurity risks is plentiful, you can’t prioritize or manage any risk until the impact (and likelihood) to your organization is understood and quantified.

This rule of thumb on who should be accountable for risk helps illustrate this relationship:

The person who owns (and accepts) the risk is the one who will stand in front of the news cameras and explain to the world why the worst case scenario happened.

This is the first in a series of blogs exploring how to manage challenges associated with keeping an organization resilient against cyberattacks and data breaches. This series will examine both the business and security perspectives and then look at the powerful trends shaping the future.

This blog series is unabashedly trying to help you build a stronger bridge between cybersecurity and your organizational leadership.

A visualization of how to manage organizational risk through leadership

Organizations face two major trends driving both opportunity and risk:

  • Digital disruption: We are living through the fourth industrial revolution, characterized by the fusion of the physical, biological, and digital worlds. This is having as profound an impact on all of us as the use of steam and electricity had on the lives of farmers and factory owners during early industrialization.
    Tech-disruptors like Netflix and Uber are obvious examples of using the digital revolution to disrupt existing industries, which spurred many industries to adopt digital innovation strategies of their own to stay relevant. Most organizations are rethinking their products, customer engagement, and business processes to stay current with a changing market.
  • Cybersecurity: Organizations face a constant threat to revenue and reputation from organized crime, rogue nations, and freelance attackers who all have their eyes on your organization’s technology and data, which is being compounded by an evolving set of insider risks.

Organizations that understand and manage risk without constraining their digital transformation will gain a competitive edge over their industry peers.

Cybersecurity is both old and new

As your organization pulls cybersecurity into your existing risk framework and portfolio, it is critical to keep in mind that:

  • Cybersecurity is still relatively new: Unlike responding to natural disasters or economic downturns with decades of historical data and analysis, cybersecurity is an emerging and rapidly evolving discipline. Our understanding of the risks and how to manage them must evolve with every innovation in technology and every shift in attacker techniques.
  • Cybersecurity is about human conflict: While managing cyber threats may be relatively new, human conflict has been around as long as there have been humans. Much can be learned by adapting existing knowledge on war, crime, economics, psychology, and sociology. Cybersecurity is also tied to the global economic, social, and political environments and can’t be separated from those.
  • Cybersecurity evolves fast (and has no boundaries): Once a technology infrastructure is in place, there are few limits on the velocity of scaling an idea or software into a global presence (whether helpful or malicious), mirroring the history of rail and road infrastructures. While infrastructure enables commerce and productivity, it also enables criminal or malicious elements to leverage the same scale and speed in their actions. These bad actors don’t face the many constraints of legitimate usage, including regulations, legality, or morality, in the pursuit of their illicit goals. These low barriers to entry on the internet help to increase the volume, speed, and sophistication of cyberattack techniques soon after they are conceived and proven. This puts us in the position of continuously playing catch-up to their latest ideas.
  • Cybersecurity requires asset maintenance: The most important and overlooked aspect of cybersecurity is the need to invest in ‘hygiene’ tasks to ensure consistent application of critically important practices.
    One aspect that surprises many people is that software ‘ages’ differently than other assets and equipment, silently accumulating security issues over time. Like a brittle metal, these silent issues suddenly become massive failures when attackers find them. This makes it critical for business leadership to proactively support ongoing technology maintenance (despite no previous visible signs of failure).

Stay pragmatic

In an interconnected world, a certain amount of playing catch-up is inevitable, but we should take a proactive stance to minimize both the probability and the impact of business-impacting events.

Organizations should build and adapt their risk and resilience strategy, including:

  1. Keeping threats in perspective: Ensuring stakeholders are thinking holistically in the context of business priorities, realistic threat scenarios, and reasonable evaluation of potential impact.
  2. Building trust and relationships: We’ve learned that the most important cybersecurity approach for organizations is to think and act symbiotically—working in unison with a shared vision and goal.
    Like any other critical resource, trust and relationships can be strained in a crisis. It’s critical to invest in building strong and collaborative relationships between security and business stakeholders who have to make difficult decisions in a complex environment with incomplete information that is continuously changing.
  3. Modernizing security to protect business operations wherever they are: This approach is often referred to as Zero Trust and helps security enable the business, particularly digital transformation initiatives (including remote work during COVID-19) versus the traditional role as an inflexible quality function.

One organization, one vision

As organizations become digital, they effectively become technology companies and inherit both the natural advantages (customer engagement, rapid scale) and difficulties (maintenance and patching, cyberattack). We must accept this and learn to manage this risk as a team, sharing the challenges and adapting to the continuous evolution.

In the coming blogs, we will explore these topics from the perspective of business leaders and from cybersecurity leaders, sharing lessons learned on framing, prioritizing, and managing risk to stay resilient against cyberattacks.

To learn more about Microsoft Security solutions visit our website.  Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Becoming resilient by understanding cybersecurity risks: Part 1 appeared first on Microsoft Security.

Microsoft and Other Tech Companies Take Down TrickBot Botnet

Days after the US Government took steps to disrupt the notorious TrickBot botnet, a group of cybersecurity and tech companies has detailed a separate coordinated effort to take down the malware's back-end infrastructure. The joint collaboration, which involved Microsoft's Digital Crimes Unit, Lumen's Black Lotus Labs, ESET, Financial Services Information Sharing and Analysis Center (FS-ISAC),

Trickbot botnet disrupted by Microsoft and alliance of tech companies

Microsoft says it, and several tech companies, have at least temporarily taken down the Trickbot botnet, a Russia-based network of devices that has infected more than a million computers since 2016 and is behind scores of ransomware attacks.

“We disrupted Trickbot through a [U.S.] court order we obtained as well as technical action we executed in partnership with telecommunications providers around the world,” Microsoft said in a statement Monday. “We have now cut off key infrastructure so those operating Trickbot will no longer be able to initiate new infections or activate ransomware already dropped into computer systems.”

Other tech companies involved in the effort included ESET, Lumen’s Black Lotus Labs, NTT and Symantec. Also involved was the Financial Services Information Sharing and Analysis Center (FS-ISAC).

Microsoft says these moves represent a legal approach that its Digital Crimes Unit is using for the first time to get the court order: Copyright claims against Trickbot’s malicious use of its software code. “This approach is an important development in our efforts to stop the spread of malware, allowing us to take civil action to protect customers in the large number of countries around the world that have these laws in place.”

Because criminals are well-funded and able to find other systems to host their malware, it isn’t clear how long Trickbot will be out of commission. In fact, Microsoft took care to say it has “disrupted” the botnet. “We fully anticipate Trickbot’s operators will make efforts to revive their operations,” Microsoft acknowledged, adding, “we will work with our partners to monitor their activities and take additional legal and technical steps to stop them.”

Cyber criminals are tenacious. The re-birth of the Emotet botnet in 2019 is a recent example. It was down for four months after its command and control (C&C) servers had been shut down — either by law enforcement or a security researcher. But operators may have shut it down to rebuild the infrastructure.

UPDATE: ZDNet reports that the Trickbot operators have replaced the seized domains and command and control servers with new infrastructure.

In a statement, ESET said that over the years Trickbot compromises have been reported in a steady manner, making it one of the largest and longest-lived botnets. “Trickbot is one of the most prevalent banking malware families, and this malware strain represents a threat for internet users globally,” said Jean-Ian Boutin, the company’s head of threat research.

“Throughout its existence, this malware has been distributed in a number of ways. Recently, a chain we observed frequently is Trickbot being dropped on systems already compromised by Emotet, another large botnet. In the past, Trickbot malware was leveraged by its operators mostly as a banking trojan, stealing credentials from online bank accounts and trying to perform fraudulent transfers.”

What makes Trickbot so dangerous, says Microsoft, are its modular capabilities, which constantly evolve, infecting victims through a “malware-as-a-service” model. “Its operators could provide their customers access to infected machines and offer them a delivery mechanism for many forms of malware, including ransomware. Beyond infecting end-user computers, Trickbot has also infected a number of ‘Internet of Things’ devices, such as routers, which has extended Trickbot’s reach into households and organizations.”

Trickbot’s operators can also quickly tailor its spam and spear-phishing campaigns. Recent messaging topics have included Black Lives Matter and COVID-19. Microsoft believes Trickbot has been the most prolific malware operation using COVID-19 themed lures.

Trickbot is also known to deliver the Ryuk crypto-ransomware.

The post Trickbot botnet disrupted by Microsoft and alliance of tech companies first appeared on IT World Canada.

CVE-2020-16898: “Bad Neighbor”

CVSS Score: 9.8

Vector: CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/E:P/RL:O/RC:C

Overview
Today, Microsoft announced a critical vulnerability in the Windows IPv6 stack, which allows an attacker to send maliciously crafted packets to potentially execute arbitrary code on a remote system. The proof-of-concept shared with MAPP (Microsoft Active Protection Program) members is both extremely simple and perfectly reliable. It results in an immediate BSOD (Blue Screen of Death), but more so, indicates the likelihood of exploitation for those who can manage to bypass Windows 10 and Windows Server 2019 mitigations. The effects of an exploit that would grant remote code execution would be widespread and highly impactful, as this type of bug could be made wormable. For ease of reference, we nicknamed the vulnerability “Bad Neighbor” because it is located within the ICMPv6 Neighbor Discovery Protocol, using the Router Advertisement type.

Vulnerability Details
A remote code execution vulnerability exists when the Windows TCP/IP stack improperly handles ICMPv6 Router Advertisement packets that use Option Type 25 (Recursive DNS Server Option) and a length field value that is even. In this Option, the length is counted in increments of 8 bytes, so an RDNSS option with a length of 3 should have a total length of 24 bytes. The option itself consists of five fields: Type, Length, Reserved, Lifetime, and Addresses of IPv6 Recursive DNS Servers. The first four fields always total 8 bytes, but the last field can contain a variable number of IPv6 addresses, which are 16 bytes each. As a result, the length field should always be an odd value of at least 3, per RFC 8106:

When an IPv6 host receives DNS options (i.e., RDNSS and DNSSL
options) through RA messages, it processes the options as follows:

   o  The validity of DNS options is checked with the Length field;
      that is, the value of the Length field in the RDNSS option is
      greater than or equal to the minimum value (3) and satisfies the
      requirement that (Length - 1) % 2 == 0.

When an even length value is provided, the Windows TCP/IP stack incorrectly advances the network buffer by an amount that is 8 bytes too few. This is because the stack internally counts in 16-byte increments, failing to account for the case where a non-RFC compliant length value is used. This mismatch results in the stack interpreting the last 8 bytes of the current option as the start of a second option, ultimately leading to a buffer overflow and potential RCE.
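To make the arithmetic concrete, here is a minimal Python sketch of the RFC 8106 length check described above. It is illustrative only, not Windows parsing code; it simply shows why an even Length value is malformed and leaves a dangling 8 bytes for a non-validating parser to misread:

    # Illustrative sketch of the RFC 8106 RDNSS length arithmetic described above.
    # Not Windows code; it only shows why an even Length value is malformed.

    def rdnss_is_valid(length_field: int) -> bool:
        # RFC 8106: Length >= 3 and (Length - 1) % 2 == 0, i.e. Length must be odd.
        return length_field >= 3 and (length_field - 1) % 2 == 0

    for length in (3, 5, 4):
        wire_bytes = length * 8       # the Length field counts 8-byte units on the wire
        addr_bytes = wire_bytes - 8   # 8 fixed bytes: Type, Length, Reserved, Lifetime
        print(f"Length={length}: {wire_bytes} bytes on the wire, "
              f"{addr_bytes / 16} x 16-byte IPv6 addresses, "
              f"valid={rdnss_is_valid(length)}")

    # Length=4 yields 24 bytes of address data: one full 16-byte address plus a
    # dangling 8 bytes, the slack a non-validating parser can misread as the
    # start of a second option.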

It is likely that a memory leak or information disclosure bug in the Windows kernel would be required in order to build a full exploit chain for this vulnerability. Despite this, we expect to see working exploits in the very near future.

Threat Surface
The largest impact here will be to consumers on Windows 10 machines, though with Windows Updates the threat surface is likely to be quickly minimized. While Shodan.io shouldn’t be counted on as a definitive source, our best queries put the number of Windows Server 2019 machines with IPv6 addresses in the hundreds, not exceeding approximately 1,000. This is likely because most servers are behind firewalls or hosted by Cloud Service Providers (CSPs) and not reachable directly via Shodan scans.

Detection
We believe this vulnerability can be detected with a simple heuristic that parses all incoming ICMPv6 traffic, looking for packets with an ICMPv6 Type field of 134 – indicating Router Advertisement – and an ICMPv6 Option field of 25 – indicating Recursive DNS Server (RDNSS). If this RDNSS option also has a length field value that is even, the heuristic would drop or flag the associated packet, as it is likely part of a “Bad Neighbor” exploit attempt.
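As a rough companion to that description, the sketch below is a minimal Python prototype of the heuristic. It assumes Scapy is installed, the process has sniffing privileges, and Scapy can still dissect the RDNSS option when its length field is malformed; it is not one of the published Suricata/Lua rules, and it only flags Router Advertisements whose RDNSS length violates RFC 8106:

    # Heuristic detector sketch for "Bad Neighbor" (CVE-2020-16898).
    # Assumptions: Scapy installed, root privileges, and Scapy's dissector still
    # exposes the RDNSS option's len field even when it is non-RFC compliant.
    from scapy.all import sniff
    from scapy.layers.inet6 import IPv6, ICMPv6ND_RA, ICMPv6NDOptRDNSS

    def inspect(pkt):
        # Only Router Advertisements (ICMPv6 type 134) are of interest.
        if not pkt.haslayer(ICMPv6ND_RA):
            return
        i = 1
        opt = pkt.getlayer(ICMPv6NDOptRDNSS, i)   # option type 25 (RDNSS)
        while opt is not None:
            # RFC 8106: length must be odd and at least 3 (counted in 8-byte units).
            if opt.len < 3 or opt.len % 2 == 0:
                src = pkt[IPv6].src if pkt.haslayer(IPv6) else "unknown"
                print(f"Possible Bad Neighbor attempt: RDNSS len={opt.len} from {src}")
            i += 1
            opt = pkt.getlayer(ICMPv6NDOptRDNSS, i)

    if __name__ == "__main__":
        sniff(filter="icmp6", prn=inspect, store=False)

In practice a sensor or host firewall would drop or alert on such packets rather than merely print a message; the parity check itself is the entire heuristic.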

Mitigation
Patching is always the first and most effective course of action. If this is not possible, the best mitigation is disabling IPv6, either on the NIC or at the perimeter of the network by dropping ICMPv6 traffic if it is non-essential. Additionally, ICMPv6 Router Advertisements can be blocked or dropped at the network perimeter. Windows Defender and Windows Firewall fail to block the proof-of-concept when enabled. It is unknown yet if this attack can succeed by tunneling the ICMPv6 traffic over IPv4 using technologies like 6to4 or Teredo. Our efforts to repeat the attack in this manner have not been successful to date.

For those McAfee customers who are unable to deploy the Windows patch, the following Network Security Platform (NSP) signatures will provide a virtual patch against attempted exploitation of this vulnerability, as well as a similar vulnerability (CVE-2020-16899). Unlike “Bad Neighbor”, the impact of CVE-2020-16899 is limited to denial-of-service in the form of BSoD.

NSP Attack ID: 0x40103a00 – ICMP: Windows IPv6 Stack Elevation of Privilege Vulnerability (CVE-2020-16898)
NSP Attack ID: 0x40103b00 – ICMP: Windows Function Discovery SSDP Provider Elevation of Privilege Vulnerability (CVE-2020-16899)

Additionally, we are releasing Suricata rules to detect potential exploitation of these vulnerabilities. Due to limitations in open source tools such as Snort and Suricata, we found that implementing the minimal detection logic described earlier required combining Suricata with its built-in Lua script parser. We have hosted the rules and Lua scripts at our public GitHub under CVE-2020-16898 and CVE-2020-16899 respectively. Although we have confirmed that the rules correctly detect use of the proof-of-concepts, they should be thoroughly vetted in your environment prior to deployment to avoid risk of any false positives.

The post CVE-2020-16898: “Bad Neighbor” appeared first on McAfee Blogs.

“Best of Breed” – CASB/DLP and Rights Management Come Together

Securing documents before cloud

Before the cloud, organizations would collaborate on and store documents on desktop and laptop computers, email, and file servers. Private cloud use cases, such as accessing and storing documents on intranet web servers and network-attached storage (NAS), improved the end-user’s experience. The security model followed a layered approach, where keeping this data safe was just as important as not allowing unauthorized individuals into the building or data center. A directory service sign-in protected your personal computer, and permissions on files stored on file servers assured safe usage.

Enter the cloud

Most organizations now consider cloud services to be essential to their business. Services like Microsoft 365 (SharePoint, OneDrive, Teams), Box, and Slack are depended upon by all users. The same fundamental security concepts still apply – however, many are now covered by the cloud service itself. This is known as the “Shared Security Model”: the Cloud Service Provider handles basic security functions (physical security, network security, operations security), but the end customer must correctly grant access to data and is ultimately responsible for properly protecting it.

The big difference between the two is that in the first security model, the organization owned and controlled the entire process. In the cloud model, the customer owns the controls surrounding the data they choose to put in the cloud. This is the risk that collaborating and storing data in the cloud brings: once documents have been stored in M365, what happens if they are mishandled from that point forward? Who is handling these documents? What if my most sensitive information has left the safe confines of the cloud service? How can I protect it once it leaves? Fundamentally: how can I control data that could live anywhere, including places I do not control?

Adding the protection layers that are cloud-native

McAfee and Seclore have extended an integration recently to address these cloud-based use cases. This integration fundamentally answers this question: If I put sensitive data in the cloud that I do not control, can I still protect the data regardless of where it lives?

The solution works like this:

The solution puts guardrails around end-user cloud usage, but also adds significant compliance protections, security operations, and data visibility for the organization.

Data visibility, compliance & security operations

Once an unprotected sensitive file has been uploaded to a cloud service, McAfee MVISION Cloud Data Loss Prevention (DLP) detects the file upload. Customers can assign a DLP policy to find sensitive data such as credit card data (PCI), customer data, personally identifiable information (PII) or any other data they find to be sensitive.

Sample MVISION Cloud DLP Policy

If data is found to be in violation of policy, it must be properly protected. For example, if the DLP engine finds PII, rather than let it sit unprotected in the cloud service, the McAfee policy the customer sets should enact some protection on the file. This action is known as a “Response,” and MVISION Cloud will show the detection, the violating data, and the actions taken in the incident data. In this case, McAfee will call Seclore to protect the file. These actions can be performed in near real time, or protection can be applied to data that already exists in the cloud service via an on-demand scan.
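As a purely conceptual illustration of that detect-then-protect sequence, the sketch below mirrors the flow in plain Python. Every function, field, and pattern here is hypothetical; none of it is the MVISION Cloud or Seclore API:

    # Hypothetical sketch of the detect-then-protect flow described above.
    # None of these names correspond to real MVISION Cloud or Seclore calls.
    import re
    from typing import Optional

    # Crude stand-in for a DLP classifier: a U.S. Social Security Number pattern (PII).
    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    def dlp_scan(file_name: str, content: str) -> Optional[dict]:
        """Return an incident record if the content violates the PII policy."""
        matches = SSN_PATTERN.findall(content)
        if not matches:
            return None
        return {"file": file_name, "policy": "PII", "matches": len(matches),
                "response": "apply-rights-management"}

    def apply_rights_protection(file_name: str) -> None:
        """Placeholder for the rights-management call that binds the file to a usage policy."""
        print(f"[protect] {file_name}: encrypted and bound to an access policy")

    def on_cloud_upload(file_name: str, content: str) -> None:
        # Near-real-time path: evaluate the upload, record the incident, protect the file.
        incident = dlp_scan(file_name, content)
        if incident:
            print(f"[incident] {incident}")
            apply_rights_protection(file_name)

    on_cloud_upload("customers.csv", "Jane Doe, 123-45-6789, jane@example.com")

The on-demand scan described above is the same logic applied to files already sitting in the cloud service rather than to new uploads.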

“Seclore-It” – Protection Beyond Encryption

Now that the file has been protected, downstream access to the file is managed by Seclore’s policy engine. Examples of policy-based access could be end-user location, data type, user group, time of day, or any other combination of policy choices. The key principle here is the file is protected regardless of where it goes and enforced by a Seclore policy that the organization sets. If a user accesses the file, an audit trail is recorded to assure that organizations have the confidence that data is properly protected. The audit logs show allows and denies, completing the data visibility requirements.

One last concern: what if a file is “lost,” or access needs to be restricted to files that are no longer in your direct control (such as when a user leaves the company), or the organization simply wants to update the policies on protected files? In all of these cases, the policy on those files can be dynamically updated. This addresses a major data loss concern that companies have for cloud service providers and for general data use by remote users. Ensuring files are always protected, regardless of scenario, is as simple as updating a policy in Seclore. Once the policy has been updated, even files on a thumb drive stuffed in a drawer are re-protected from accidental or intentional disclosure.

Conclusion

This article addresses several notable concerns for customers doing business in a cloud model. Important and sensitive data can now be effortlessly protected as it migrates to and through cloud services to its ultimate destination. The organization can prove to auditors that the data was protected and continues to be protected. Security operations can track incidents and follow the access history of files. Finally, the joint solution is easy to use and enables organizations to confidently conduct business in the cloud.

Next Steps

McAfee and Seclore partner both at the endpoint and in the cloud as an integrated solution. To find out more and see this solution running in your environment, send an inquiry to cloud@mcafee.com.

 

The post “Best of Breed” – CASB/DLP and Rights Management Come Together appeared first on McAfee Blogs.

Top 10 Microsoft Teams Security Threats

2020 has seen cloud adoption accelerate, with Microsoft Teams as one of the fastest-growing collaboration apps; McAfee customers’ use of Teams increased by 300% between January and April 2020. When we looked into Teams use in more detail in June, we found these statistics, on average, in our customer base:

 

  • Teams created: 367
  • Members added to Teams: 6,526
  • Number of Teams meetings: 106,000
  • Third-party apps added to Teams: 185
  • Guest users added to Teams: 2,906

This means a typical enterprise has a new guest user added to its teams every few minutes. You wouldn’t allow unknown people to walk into an office, straight past security, and wander the building unescorted looking at papers sitting on people’s desks; at the same time, you do want to admit the guests you trust. Teams needs the same controls: admit the guests you trust, but confirm their identity and make sure they don’t see confidential information.

Microsoft invests huge amounts of time and money in the security of its systems, but the security of the data in those systems, and of how users use them, is the responsibility of the enterprise.

The breadth of options, including inviting guest users and integrating with third-party applications, can be the Achilles heel of any collaboration technology. It takes just seconds to add an external third party into an internal discussion without realizing the potential for data loss, so sadly the risk of misconfiguration, oversharing, or misuse can be large.

IT security teams need the ability to manage and control use to reduce risk of data loss or malware entering through Teams.

After working with hundreds of enterprises and over 40 million MVISION Cloud users worldwide and discussing with IT security, governance and risk teams how they address their Microsoft Teams security concerns, we have published a paper that outlines the top ten security threats and how to address them.

Microsoft Teams: Top 10 Security Threats

This collaboration potentially increases threats such as data loss and malware distribution. In this paper, McAfee discusses the top threats resulting from Teams use along with recommended actions.

A few of the Top 10 Microsoft Teams Security Threats are below; read the paper for the full list.

  1. Microsoft Teams Guest Users: Guests can be added and may see internal or sensitive content. By setting domain allow and/or block lists, security can be enforced while still giving employees the flexibility to collaborate with authorized guests via Teams.
  2. Screen sharing that includes sensitive data. Screen sharing is very powerful, but can inadvertently share confidential data, especially if communication applications such as email are showing alerts on the screen.
  3. Access from Unmanaged Devices: Teams can be used on unmanaged devices, potentially resulting in data loss. The ability to set policies for unmanaged devices can safeguard Teams content.
  4. Malware Uploaded via Teams: File uploads from guests or from unmanaged devices may contain malware. IT administrators need the ability to either block all file uploads from unmanaged devices or to scan content when it is uploaded and remove it from the channel, informing IT management of any incidents.
  5. Data Loss Via Teams Chat and File Shares: File shares in Teams can lose confidential data. Data loss prevention technologies with strong sensitive content identification and sharing control capabilities should be implemented on Teams chat and file shares.
  6. Data Loss Via Other Apps: Teams App integration can mean data may go to untrusted destinations. As some of these apps may transfer data via their services, IT administrators need a system to discover third-party apps in use, review their risk profile and provide a workflow to remediate, audit, allow, block or notify users on an app’s status and revoke access as needed.

McAfee has a wealth of experience helping customers secure their cloud computing systems, built around the MVISION Cloud CASB and other technologies. We can advise you about Microsoft Teams security and discuss the possible threats of taking no action. Contact us to let us help you.

Teams is just one of the many applications within the Microsoft 365 suite and it is important to deploy common security controls for all cloud apps. MVISION Cloud provides security for Microsoft 365 and other cloud-based applications such as Salesforce, Box, Workday, AWS, Azure, Google Cloud Platform and customers’ own internally developed applications.

 

The post Top 10 Microsoft Teams Security Threats appeared first on McAfee Blogs.

Ready, Set, Shop: Enjoy Amazon Prime Day Without the Phishing Scams

For many, Amazon Prime Day is an opportunity to score some great deals. For hackers, Amazon’s annual discount shopping campaign is an opportunity to target unsuspecting shoppers with phishing scams. In fact, researchers at McAfee Labs previously uncovered a phishing kit specifically created to steal personal information from Amazon customers in America and Japan just in time for last year’s Amazon Prime Day. 

Let’s dive into the details of how this phishing kit worked and what you can do to protect yourself while hunting for those Prime Day bargains.  

Phishing Kit Allowed Hackers to Target Amazon Users 

You’ve probably received an email that looked a bit phishy: perhaps the logo was just slightly off-center, or something about it didn’t feel quite right. That is how the phishing kit worked: hackers created fake emails that looked like they originated from Amazon (but didn’t). Once opened, the email prompted the unsuspecting recipient to provide their login credentials on a malicious website. Once logged in, hackers had access to the victim’s entire account, enabling them to make purchases or, even worse, steal the victim’s credit card information.

When this threat first emerged, the McAfee Labs researchers reported widespread use of the phishing kit – with over 200 malicious URLs deployed against innocent online shoppers. The phishing kit was then sold through an active Facebook group with over 300 members looking for a way to access unsuspecting shoppers’ Prime accounts. McAfee notified Facebook of this group’s activity when it surfaced, and the social network took an active role in removing groups and accounts that violate their policies.

Protect Your Prime Day Shopping with These Tips 

Users hoping to score some online shopping deals this week should familiarize themselves with common phishing tactics to help protect their personal and financial information. If you’re planning on participating in Prime Day, follow these security steps to help you swerve malicious cyberattacks and shop with peace of mind: 

Beware of bogus deals 

If you see an ad for Prime Day that looks too good to be true, chances are that the ad isn’t legitimate. Play it safe and don’t click on the ad. 

Think before you click 

Be skeptical of ads shared on social media sites, emails, and messages sent to you through platforms like Facebook, Twitter, and WhatsApp. If you receive a suspicious message regarding Prime Day, it’s best to avoid interacting with the message altogether. 

Do your due diligence with discount codes 

If a discount code lands in your inbox, you’re best off verifying it through Amazon.com directly rather than clicking on any links. 

If you do suspect that your Amazon Prime account has been compromised due to a cyberthreat, take the following steps: 

Change your password 

Change the passwords to any accounts you suspect may have been impacted. Make sure your new credentials are strong and unique from your other logins. For tips on how to create a more secure password, read our blog on common password habits and how to safeguard your accounts.  

Keep an eye on your bank account 

One of the most effective ways to determine whether someone is fraudulently using your credit card information is to monitor your bank statements. If you see any charges that you did not make, report them to the authorities immediately.

Consider using identity theft protection 

A solution like McAfee Identity Theft Protection will help you monitor your accounts and alert you of any suspicious activity.

Stay Updated 

To stay updated on all things McAfee and on top of the latest consumer and mobile security threats, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

The post Ready, Set, Shop: Enjoy Amazon Prime Day Without the Phishing Scams appeared first on McAfee Blogs.

Five Eyes countries press for back doors into applications, again

Canada has again joined its partners in the Five Eyes intelligence co-operative and is calling on tech companies to work with governments to find a legal way around their end-to-end encryption.

In a news release over the weekend, senior cabinet officials from Canada, the U.S., the United Kingdom, Australia and New Zealand, as well as the governments of India and Japan, urged the industry to address concerns that encryption in their products helps criminals by precluding any legal access to unlawful communications.

“Particular implementations of encryption technology … pose significant challenges to public safety, including to highly vulnerable members of our societies like sexually exploited children,” officials wrote.

The governments are asking industry to help find “reasonable, technically feasible solutions” that do the following:

  • Embed the safety of the public in system designs, thereby enabling companies to act against illegal content and activity effectively with no reduction to safety, and facilitating the investigation and prosecution of offences and safeguarding the vulnerable.
  • Enable law enforcement access to content in a readable and usable format where a (court) authorization is lawfully issued, is necessary and proportionate, and is subject to strong safeguards and oversight.
  • Engage in consultation with governments and other stakeholders to facilitate legal access in a way that is substantive and genuinely influences design decisions.

The demand by governments and law enforcement agencies for lawful access to encrypted communications has been going on for years, and has been resisted by privacy experts for just as long.

It’s being raised again, says the statement, because of proposals to apply end-to-end encryption across major messaging services. Many services including WhatsApp and Telegram already offer it. Zoom has been testing it since July.

The issue last hit headlines in the summer of 2019, when the University of Toronto’s Citizen Lab condemned then-Public Safety Minister Ralph Goodale for changing Canada’s policy on lawful access. Before then, Canada said it favoured strong encryption in products to protect citizens. That changed, however, after Goodale signed a Five Eyes communique urging tech companies to include “mechanisms in the design of their encrypted products and services whereby governments, acting with appropriate legal authority, can obtain access to data in a readable and usable format.”

Citizen Lab hit back. “In advancing an irresponsible encryption policy that would deny individuals and businesses access to strong encryption, [Ralph Goodale, Minister of Public Safety] and the Government of Canada have failed to publicly acknowledge and present the range of serious harms that would follow should companies voluntarily, or under compulsion, adopt the government’s current policy,” it said.

Briefly, privacy experts and many encryption experts argue that what governments want is a back door into systems so they can read the communications of crooks and nation-states. However, they say that even if a back-door system requires judicial approval, a hole is a hole, and it can be exploited by any skilled attacker. There is no such thing, they argue, as a process that can only be used by governments. As a result, such back doors or processes end personal privacy.

The weekend communique acknowledges that technology companies use encryption to protect their users. But, the release also says, law enforcement must find a way to respond to “illegal content, child sexual exploitation and abuse, violent crime, terrorist propaganda and attack planning.” In fact, the Five Eyes argue, end-to-end encryption hobbles tech companies’ own efforts to fight these threats.

All that is being asked, according to the Five Eyes community, is for law enforcement agencies to access content “in limited circumstances where necessary and proportionate to investigate serious crimes and protect national security.”

“We challenge the assertion that public safety cannot be protected without compromising privacy or cybersecurity,” the statement reads.  “We strongly believe that approaches protecting each of these important values are possible and strive to work with industry to collaborate on mutually agreeable solutions.”

Suggestions include creating master decryption keys that, in theory, only law enforcement agencies can access with a court order; giving police the ability to get a court order to compel suspects to decrypt their conversations; or creating a way that allows third parties to lawfully listen in to encrypted conversations or messages.

The post Five Eyes countries press for back doors into applications, again first appeared on IT World Canada.

Lemon Duck brings cryptocurrency miners back into the spotlight

Attackers are constantly reinventing ways of monetizing their tools. Cisco Talos recently discovered a complex campaign employing a multi-modular botnet with multiple ways to spread. This threat, known as “Lemon Duck,” has a cryptocurrency mining payload that steals computer resources to mine the Monero virtual currency. The actor employs various methods to spread across the network, like sending infected RTF files using email, psexec, WMI and SMB exploits, including the infamous Eternal Blue and SMBGhost threats that affect Windows 10 machines. Some variants also support RDP brute-forcing. In recent attacks we observed, this functionality was omitted. The adversary also uses tools such as Mimikatz, that help the botnet increase the amount of systems participating in its mining pool.

Read More >>

The post Lemon Duck brings cryptocurrency miners back into the spotlight appeared first on Cisco Blogs.

FedRAMP – What’s the Big Deal?

If you are someone who works for a cloud service provider in the business of federal contracting, you probably already have a good understanding of FedRAMP. It is also likely that our regular blog readers know the ins and outs of this program.

For those who are not involved in these areas, however, this acronym may be more unfamiliar. Perhaps you have only heard of it in passing conversation with a few of your expert cybersecurity colleagues, or you are just curious to learn what all of the hype is about. If you fall into this category – read on! This blog is for you.

At first glance, FedRAMP may seem like a type of onramp to an interstate headed for the federal government – and in a way, it is.

FedRAMP stands for the Federal Risk and Authorization Management Program, which provides a standard security assessment, authorization and continuous monitoring for cloud products and services to be used by federal agencies. The program’s overall mission is to protect the data of U.S. citizens in the cloud and promote the adoption of secure cloud services across the government with a standardized approach.

Once a cloud service has successfully made it onto the interstate – or achieved FedRAMP authorization – it’s allowed to be used by an agency and listed in the FedRAMP Marketplace. The FedRAMP Marketplace is a one-stop-shop for agencies to find cloud services that have been tested and approved as safe to use, making it much easier to determine if an offering meets security requirements.

In the fourth year of the program, FedRAMP had 20 authorized cloud service offerings. Now, eight years into the program, FedRAMP has over 200 authorized offerings, reflecting its commitment to help the government shift to the cloud and leverage new technologies.

Who should be FedRAMP authorized?

Any cloud service provider that has a contract with a federal agency or wants to work with an agency in the future must have FedRAMP authorization. Compliance with FedRAMP can also benefit providers who don’t have plans to partner with government, as it signals to the private sector they are committed to cloud security.

Using a cloud service that complies with FedRAMP standards is mandatory for federal agencies. It has also become popular with organizations in the private industry, which are more often looking to FedRAMP standards as a security benchmark for the cloud services they use.

How can a cloud service obtain authorization?

There are two ways for a cloud service to obtain FedRAMP authorization. One is with a Joint Authorization Board (JAB) provisional authorization (P-ATO) and the other is through an individual agency Authority to Operate (ATO).

A P-ATO is an initial approval of the cloud service provider by the JAB, which is made up of the Chief Information Officers (CIOs) from the Department of Defense (DoD), Department of Homeland Security (DHS) and General Services Administration (GSA). This designation means that the JAB has provided a provisional approval for agencies to leverage when granting an ATO to a cloud system.

The head of an agency grants an ATO as part of the agency authorization process. An ATO may be granted after an agency sponsor reviews the cloud service offering and completes a security assessment.

Why seek FedRAMP approval?

Achieving FedRAMP authorization for a cloud service is a very long and rigorous process, but it has received high praise from security officials and industry experts alike for its standardized approach to evaluate whether a cloud service offering meets some of the strongest cybersecurity requirements.

There are several benefits for cloud providers who authorize their service with FedRAMP. The program allows an authorized cloud service to be reused continuously across the federal government – saving time, money and effort for both cloud service providers and agencies. Authorization of a cloud service also gives service providers increased visibility of their product across government with a listing in the FedRAMP Marketplace.

By electing to comply with FedRAMP, cloud providers can demonstrate dedication to the highest data security standards. Though the process for achieving FedRAMP approval is complex, it is worthwhile for providers, as it signals a commitment to security to government and non-government customers.

McAfee’s Commitment to FedRAMP

At McAfee, we are dedicated to ensuring our cloud services are compliant with FedRAMP standards. We are proud that McAfee’s MVISION Cloud is the first Cloud Access Security Broker (CASB) platform to be granted a FedRAMP High Impact Provisional Authority to Operate (P-ATO) from the U.S. Government’s Joint Authorization Board (JAB).

Currently, MVISION Cloud is in use by ten federal agencies, including the Department of Energy (DOE), Department of Health and Human Services (HHS), Department of Homeland Security (DHS), Food and Drug Administration (FDA) and National Aeronautics and Space Administration (NASA).

MVISION Cloud allows federal organizations to have total visibility and control of their infrastructure to protect their data and applications in the cloud. The FedRAMP High JAB P-ATO designation is the highest compliance level available under FedRAMP, meaning that MVISION Cloud is authorized to manage highly sensitive government data.

We look forward to continuing to work closely with the FedRAMP program and other cloud providers dedicated to authorizing cloud service offerings with FedRAMP.

 

The post FedRAMP – What’s the Big Deal? appeared first on McAfee Blogs.

Fake Windows Defender Antivirus Theme Used to Spread QBot

Digital attackers incorporated a fake Windows Defender Antivirus theme into a malicious document in order to distribute QBot malware. According to Bleeping Computer, the QBot gang began using a new template for their email attack campaigns’ malicious documents beginning on August 25, 2020. The template adopted the disguise of a Windows Defender Antivirus alert in […]… Read More

The post Fake Windows Defender Antivirus Theme Used to Spread QBot appeared first on The State of Security.

Technology as a Security Springboard: How These Experts Pivoted to Cybersecurity

Last week I highlighted some of the brilliant stories which are covered in our new eBook, “Diversity in cybersecurity: A Mosaic of Career Possibilities”.

For this blog, we meet some new folks, and uncover how they got their unique starts in the industry.

What’s interesting about these stories in particular is that most people started in a general field of technology. But something happened during that time to persuade them to go into cybersecurity.

Katie Moussouris | CEO of Luta Security | @k8em0 | (LinkedIn) 

There wasn’t a defining moment for me because cybersecurity as an industry wasn’t really called an industry yet. I became a hacker at an early age, but back then, we were just focusing on computer security, which was an offshoot of computer science.

I think a lot of people who have been in cybersecurity for as long as I have—over 20 years professionally—have a very meandering path that led them down this career rabbit hole.

For myself, I was a molecular biologist, and I was working on the human genome project at MIT. I decided molecular biology wasn’t for me, but I wasn’t quite sure what I wanted to do.

So I took a detour, which I thought was temporary, into the systems administrators group at the genome center at MIT. I helped them build those systems out, and then, I took another systems administration job at MIT in the Department of Aeronautics and Astronautics. There, I took care of the network that helped launch some Mars rovers. This was the late 90s we’re talking about here.

From there, defending the systems that I was in charge of led me back into the nascent security fold.

Sophia McCall | Junior Security Consultant | @spookphia | (LinkedIn) 

I was interested in computers from a young age. IT was always my favorite subject; I always wanted to pursue something in technology as a career. I remember when I was about 14 or 15, I completed the IT material so quickly in class that the teachers ended up having to write up separate extra exercises just for me every week!

After school, when I was about 16, I progressed to college to complete a BTEC Level 3 Extended Diploma in Software Development. Over two years, I learned to build and program everything you could think of: websites, games, mobile applications, scripts, and more. On this diploma course, we had a networking module that focused on security.

It was at this point that I definitely heard my “calling.” After nearly two years of building things, I discovered that breaking them was much more fun!

Following this “Eureka” moment, I applied to study a BSc (Hons) in Cyber Security Management at university.

Four years later, including a year’s placement in industry and a huge amount of community involvement, I completed my degree with First Class Honors. I’m now about to commence my first role in the industry as a Junior Security Consultant in penetration testing.

Ken Westin | Head of Competitive Intelligence, Elastic | @kwestin | (LinkedIn) 

I was working as the Webmaster and Linux Administrator for a company whose endpoint security product blocked USB flash drives from connecting to systems. At that time, my only exposure to security was on the defensive side.

I was curious about how the USB malware we were trying to block worked and how it got into forums where some of these tools were being traded. I therefore started experimenting with them and set out to build several Proofs of Concept (POCs) that would steal data from systems, phone data home to a server, etc.

I went down a lot of rabbit holes in my research, and I even built a website called USBHacks.com that provided samples of the USB malware to help educate network admins. (This was also the first time the FBI reached out to me.)

Around this time, one of my co-workers had his car broken into and his laptop bag stolen. We joked about what would have happened if a thief had stolen my bag and plugged one of my weaponized flash drives into a computer.

After the conversation, I started building tools based on my USB malware that were designed to protect devices and data if they were stolen. 

Richard Archdeacon | Advisory Chief Information Security Officer, Duo Security, Cisco | (LinkedIn) 

Like most people, I fell into cybersecurity through exposure to some really big security events. I had a background in IT transformations. Security was becoming increasingly important at the time, but it was still low on the radar unless you worked at a bank or financial organization.  

That all started to change with the big virus attacks. Code Red, Nimda, and the “I Love You” virus all caught us by surprise at the time. In one of those attacks, I saw a whole corporation lose its email system.

This didn’t occur simply through the attack; much of it transpired because of a faulty incident response. Everyone at the company was panicking and answering every warning email with a “CC all” reply. So it ground to a halt.  

It struck me that this meant nobody knew how to prevent or respond to these attacks and that security was going to be vital going forward. All our digital transformations would come to naught if a simple attack could cripple us. So we had to develop security in the same way that we were changing IT. 

I think the final confirmation for me came when we read reports from SOCA and other organizations that showed the link between hackers and organized crime. It struck me then that we were not dealing with script kiddies but bad people who were committed to doing bad things to innocent victims. This was more than just a job; it was a calling. 

Omar Santos | Principal Engineer – Product Incident Response Team, Cisco | @santosomar | (LinkedIn)  

It started when I left college and joined the United States Marines. I was in the U.S. Marine Corps, and my military occupational specialty was in electronics and secure communications. From there, I shifted into networking and specifically network security. That’s when I knew that cybersecurity was for me. 

After I left the Marine Corps, I joined Cisco in 2000, and I was part of the technical assistance center. I was supporting firewalls, IPS devices, VPNs, and a lot of encryption. 

From there, I shifted gears into advanced services, which is now called “CX,” or the customer experience. Along the way, I did secure implementations, a lot of network design, and architectural reviews. 

Eventually, I was doing penetration testing and ethical hacking against many large Cisco customers. I shifted gears again, and now I’m part of the product security incident response team, where we specialize in vulnerability management. I also concentrate on helping industry-wide efforts; I’m the chair of several industry-wide initiatives like FIRST and OASIS.

Mo Amin | Independent Cyber Security Culture Consultant  | @infosecmo | (LinkedIn) 

When I started out, it wasn’t called “cybersecurity” back then. It was IT security.

The defining moment for me was when I got involved in a forensic investigation after my manager at the time asked if I wanted to shadow him and learn a few things. I was working in desktop support, and I found it fascinating. It was the catalyst for me.  

From there, I made a lot of mistakes, learned a lot, and adapted. I’ve been fortunate enough to work with some really good people along the way, and I still find the work interesting. 

Rebecca Herold | CEO, The Privacy Professor | @PrivacyProf | (LinkedIn) 

I got onto the information security, privacy and compliance path at the beginning of my career as a result of creating and maintaining the change control system at a large multinational financial/healthcare corporation.

At the time, I didn’t even realize change control was a critical information security control, until I started seeing the ways in which human interactions and noncompliance with procedures caused major problems, such as downtime (loss of availability) for the entire corporation.

After I went to the IT Audit area, I performed an enterprise-wide information security audit. As a result of that audit, I recommended that an information security department be created.

There, I created all the corporation’s information security and privacy policies along with their supporting procedures, and created the training program, established requirements for the firewalls and web servers, performed risk assessments, established the requirements for one of the very first online banks at a time before there were any regulatory requirements for them, and generally oversaw the program. I’ve loved working in information security and privacy, simultaneously, ever since.

 

Fareedah Shaheed | CEO and Founder, Sekuva | @CyberFareedah | (LinkedIn) 

At first, cybersecurity was just an interesting career path. But once I got into corporate, I realized that there was more to security than coding or networking.  

My corporate job introduced me to the world of security awareness and the human aspect of security that I didn’t know existed. In that instant, my entire world changed, and my career in cybersecurity was solidified. 

Instead of security being reduced to lines of code or sitting at a desk for eight hours, it became about the human brain, teaching, and authentically connecting with people. 

And once I started my own business and brand, I fell deeply in love with creating a movement and tribe around security awareness and education. 

Now, it’s no longer about the “right career” but about the “right calling.”

It became something much more than me and my curiosity. It became an industry where I could create massive transformation and impact.  

Martijn Grooten | Researcher, Writer, and Security Professional | @martijn_grooten | (LinkedIn) 

During my very first security conference back in 2007, I saw a talk on the Julie Amero case: a teacher who faced a long prison sentence because malware on her laptop had displayed adult content to a class of minors. 

It taught me how security can have an impact on people’s lives and also how different people can have very different threat models. 

The latter lesson I think is relevant well beyond IT security. It could help us understand society better as a whole. 

 

Noureen Njoroge | Cybersecurity Consulting Engineer, Cisco | @EngineerNoureen | (LinkedIn) 

Curiosity led me to a cybersecurity career. I was that one student who always had questions to ask.

Upon obtaining my Bachelor’s Degree in Information Technology, I landed a Systems Admin role, which involved lots of routing, switching, and datacenter tasks. Truly humble beginnings, indeed.

Those late-night shifts at the datacenter were the core foundation of my career, as I learned a lot.   

While in this role, I attended a lunch-and-learn session hosted by the Infosec team. They shared information on the latest malware trends and the tactics, techniques, and procedures used by threat actors.

I was so fascinated by the knowledge shared, and I asked so many questions to the point where they offered me the opportunity to shadow the team in order to learn more. It was this opportunity that deepened my interest in security.  

Later on, I was offered an opportunity to join the MIT Cybersecurity program. From the knowledge I had already attained, I knew that cybersecurity would be the future, and I wanted to be part of it. 

Looking back, I am glad to have embraced every opportunity presented, for “It’s better to be prepared for an opportunity and not have one than to have an opportunity and not be prepared.” – Whitney M. Young, Jr. 

Jason Lau | Chief Information Security Officer, Crypto.com | @JasonCISO | (LinkedIn) 

As part of my engineering degree, we had to experiment with integrated circuit chips and program them to do a variety of different things. It just so happens it was around that time when the first ever PlayStation was released.

In my spare time while getting my engineering degree, I researched and “hacked” the boot sequence of the machine with a “ModChip” I programmed, and I was able to play video games from different regions around the world. (Back in those days, games were on CDs and had regional restrictions on them. Some of the best games never came to my region!)

I was one of the first with these ModChips at that time, so my friend and I started to help others on the side. This freelance job was quite thrilling and exciting!

This was my first experience with hacking and reverse engineering. It taught me how to use root cause analysis to really dig deeper in order to understand the underlying technology and reasons for why things worked (and didn’t work). 

This is a fundamental skill which I have found useful in my cybersecurity career. 

Phillimon Zongo | Chief Executive Officer at Cyber Leadership Institute | @PhilZongo | (LinkedIn) 

I would say my eureka moment came around the end of 2015 when I went back to the drawing board and took a deep look at my career path. I felt like my career had stagnated. 

I wanted to specialize in cybersecurity because by that time it was one of the fastest growing fields within the technology risk space. It was clearly the center of attention for the board of directors, regulators, customers, and even investors. Instead of spreading myself thin across every aspect of technology risk, I wanted to go deep in cybersecurity. 

I realized that there was a major problem in cybersecurity: a lot of the material that I was reading was very technical in nature, but it was almost impossible for me to link cybersecurity tools to strategic business goals.

I also realized that the subject of cybersecurity was confined within the corridors of IT, when it should be a responsibility shared by everyone, from front-office staff to the board of directors to cybersecurity professionals themselves. That’s when I saw there was a major gap.

After months of researching and talking to other people, I realized that I needed to develop skills that would help me translate the complex side of cybersecurity into a language that was understandable by senior business leaders. 

 

Want to learn more about how technology propelled these experts into cybersecurity? Download our eBook, “Diversity in cybersecurity: A Mosaic of Career Possibilities,” today.

The post Technology as a Security Springboard: How These Experts Pivoted to Cybersecurity appeared first on Cisco Blogs.

Providing NIST Guest Researchers with the Tools they Need

NIST is made up of a wide array of professionals from many different backgrounds, from science and engineering to communications, information technology, and many more. Combining all of these fields on one campus, along with cross-department cooperation, makes the institute a leader in standards and technology. Beyond the federal government employees here, there is another group of individuals who bring just as much to the table as anyone else: the guest researchers. Guest researchers at NIST can be broken down into two groups: domestic associates and foreign associates. A

Google Responds to Warrants for “About” Searches

One of the things we learned from the Snowden documents is that the NSA conducts “about” searches. That is, searches based on activities and not identifiers. A normal search would be on a name, or IP address, or phone number. An “about” search would be something like “show me anyone who has used this particular name in a communication,” or “show me anyone who was at this particular location within this time frame.” These searches are legal when conducted for the purpose of foreign surveillance, but the worry about using them domestically is that they are unconstitutionally broad. After all, the only way to know who said a particular name is to know what everyone said, and the only way to know who was at a particular location is to know where everyone was. The very nature of these searches requires mass surveillance.
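
As a rough illustration of the difference, here is a toy C++ sketch. The data, field names, and queries are invented for the example; the point is only that an identifier-based lookup starts from a known person, while an “about” query has to scan every record from every user.

#include <iostream>
#include <string>
#include <vector>

// Invented data for illustration only.
struct SearchRecord {
    std::string user;
    std::string query;
};

int main() {
    std::vector<SearchRecord> logs = {
        {"alice", "123 Main St"},
        {"bob",   "weather tomorrow"},
        {"carol", "123 Main St"},
    };

    // Identifier-based search: start from a known user and pull that user's records.
    for (const auto &r : logs)
        if (r.user == "alice")
            std::cout << "alice searched: " << r.query << "\n";

    // "About" search: to learn who searched a given term, every record from
    // every user must be examined -- the query only works against mass collection.
    for (const auto &r : logs)
        if (r.query == "123 Main St")
            std::cout << r.user << " searched for the address\n";

    return 0;
}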

The FBI does not conduct mass surveillance. But many US corporations do, as a normal part of their business model. And the FBI uses that surveillance infrastructure to conduct its own about searches. Here’s an arson case where the FBI asked Google who searched for a particular street address:

Homeland Security special agent Sylvette Reynoso testified that her team began by asking Google to produce a list of public IP addresses used to google the home of the victim in the run-up to the arson. The Chocolate Factory [Google] complied with the warrant, and gave the investigators the list. As Reynoso put it:

On June 15, 2020, the Honorable Ramon E. Reyes, Jr., United States Magistrate Judge for the Eastern District of New York, authorized a search warrant to Google for users who had searched the address of the Residence close in time to the arson.

The records indicated two IPv6 addresses had been used to search for the address three times: one the day before the SUV was set on fire, and the other two about an hour before the attack. The IPv6 addresses were traced to Verizon Wireless, which told the investigators that the addresses were in use by an account belonging to Williams.

Google’s response is that this is rare:

While word of these sort of requests for the identities of people making specific searches will raise the eyebrows of privacy-conscious users, Google told The Register the warrants are a very rare occurrence, and its team fights overly broad or vague requests.

“We vigorously protect the privacy of our users while supporting the important work of law enforcement,” Google’s director of law enforcement and information security Richard Salgado told us. “We require a warrant and push to narrow the scope of these particular demands when overly broad, including by objecting in court when appropriate.

“These data demands represent less than one per cent of total warrants and a small fraction of the overall legal demands for user data that we currently receive.”

Here’s another example of what seems to be “about” data leading to a false arrest.

According to the lawsuit, police investigating the murder knew months before they arrested Molina that the location data obtained from Google often showed him in two places at once, and that he was not the only person who drove the Honda registered under his name.

Avondale police knew almost two months before they arrested Molina that another man, his stepfather, sometimes drove Molina’s white Honda. On October 25, 2018, police obtained records showing that Molina’s Honda had been impounded earlier that year after Molina’s stepfather was caught driving the car without a license.

Data obtained by Avondale police from Google did show that a device logged into Molina’s Google account was in the area at the time of Knight’s murder. Yet on a different date, the location data from Google also showed that Molina was at a retirement community in Scottsdale (where his mother worked) while debit card records showed that Molina had made a purchase at a Walmart across town at the exact same time.

Molina’s attorneys argue that this and other instances like it should have made it clear to Avondale police that Google’s account-location data is not always reliable in determining the actual location of a person.

“About” searches might be rare, but that doesn’t make them a good idea. We have knowingly and willingly built the architecture of a police state, just so companies can show us ads. (And it is increasingly apparent that the advertising-supported Internet is heading for a crash.)

Crash Reproduction Series: IE Developer Console UAF

Crash Reproduction Series: IE Developer Console UAF

During a DFIR investigation using ZecOps Crash Forensics on a developer’s computer, we encountered a consistent crash in Internet Explorer 11. The TL;DR is that, although this bug is not exploitable, it presents an interesting expansion of the attack surface through the Developer Console in browsers.

While examining the stack trace, we noticed a JavaScript engine failure. The type of the exception was a null pointer dereference, which is typically not alarming. We investigated further to understand whether this event can be exploited.

We examined the stack trace below: 

58c0cdba     mshtml!CDiagnosticsElementEventHelper::OnDOMEventListenerRemoved2+0xb
584d6ebc     mshtml!CDomEventRegistrationCallback2<CDiagnosticsElementEventHelper>::OnDOMEventListenerRemoved2+0x1a
584d8a1c     mshtml!DOMEventDebug::InvokeUnregisterCallbacks+0x100
58489f85     mshtml!CListenerAry::ReleaseAndDelete+0x42
582f6d3a     mshtml!CBase::RemoveEventListenerInternal+0x75
5848a9f7     mshtml!COmWindowProxy::RemoveEventListenerInternal+0x1a
582fb8b9     mshtml!CBase::removeEventListener+0x57
587bf1a5     mshtml!COmWindowProxy::removeEventListener+0x29
57584dae     mshtml!CFastDOM::CWindow::Trampoline_removeEventListener+0xb5
57583bb3     jscript9!Js::JavascriptExternalFunction::ExternalFunctionThunk+0x1de
574d4492     jscript9!Js::JavascriptFunction::CallFunction<1>+0x93
[...more jscript9 functions]
581b0838     jscript9!ScriptEngineBase::Execute+0x9d
580b3207     mshtml!CJScript9Holder::ExecuteCallback+0x48
580b2fd3     mshtml!CListenerDispatch::InvokeVar+0x227
57fe5ad1     mshtml!CListenerDispatch::Invoke+0x6d
58194d17     mshtml!CEventMgr::_InvokeListeners+0x1ea
58055473     mshtml!CEventMgr::_DispatchBubblePhase+0x32
584d48aa     mshtml!CEventMgr::Dispatch+0x41e
584d387d     mshtml!CEventMgr::DispatchPointerEvent+0x1b0
5835f332     mshtml!CEventMgr::DispatchClickEvent+0x2c3
5835ce15     mshtml!CElement::Fire_onclick+0x37
583baa8e     mshtml!CElement::DoClick+0xd5
[...]

and noticed that the flow that led to the crash was:

  • An onclick handler fired due to a user input
  • The onclick handler was executed
  • removeEventListener was called

The crash happened at:

mshtml!CDiagnosticsElementEventHelper::OnDOMEventListenerRemoved2+0xb:

58c0cdcd 8b9004010000    mov     edx,dword ptr [eax+104h] ds:002b:00000104=????????

The relevant instructions leading to the crash:

58c0cdc7 8b411c       mov     eax, dword ptr [ecx+1Ch]
58c0cdca 8b401c       mov     eax, dword ptr [eax+1Ch]
58c0cdcd 8b9004010000 mov     edx, dword ptr [eax+104h]

Initially, ecx holds the “this” pointer of the called member function’s object. The first dereference returns a pointer into a zeroed region, the second dereference therefore reads NULL, and the third dereference of that NULL pointer crashes.
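
To make the pointer chain concrete, here is a rough C++ sketch of what those three loads correspond to. The structure names and layouts are invented for illustration and are not the real mshtml classes; only the offsets annotated in the comments come from the disassembly above.

#include <cstdint>
#include <cstdio>

// Invented layouts that only mirror the offsets in the disassembly above.
struct Inner  { uint8_t pad[0x104]; void  *field104; };   // read at [eax+104h]
struct Middle { uint8_t pad[0x1C];  Inner *inner;    };   // read at [eax+1Ch]
struct Helper { uint8_t pad[0x1C];  Middle *middle;  };   // read at [ecx+1Ch]

int main() {
    // Simulate the MemGC-style state: the intermediate block was freed and
    // zeroed, but the helper object still holds a pointer to it.
    Middle zeroedBlock = {};        // every field in this block now reads as 0
    Helper helper = {};
    helper.middle = &zeroedBlock;

    Middle *m = helper.middle;      // 1st dereference: pointer to the zeroed block
    Inner  *i = m->inner;           // 2nd dereference: reads 0, i.e. NULL
    if (i == nullptr) {
        // The real code performs this load unconditionally, so the 3rd
        // dereference reads [NULL + 0x104] and faults, as in the crash above.
        puts("third dereference would fault at address 0x104");
        return 0;
    }
    printf("%p\n", i->field104);
    return 0;
}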

Reproduction

We tried to reproduce a legitimate call to mshtml!CDiagnosticsElementEventHelper::OnDOMEventListenerRemoved2 to see how it looks in a non-crashing scenario. We came to the conclusion that the callback is invoked only when the IE Developer Tools window is open with the Events tab selected.

We found out that when the dev tools Events tab is opened, it subscribes to events for added and removed event listeners. When the dev tools window is closed, the event consumer is freed without unsubscribing, causing a use-after-free bug which results in a null dereference crash.
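
In other words, the bug class is a consumer that registers for callbacks and is later destroyed without unregistering. The C++ sketch below is a minimal model of that pattern with invented names; it is not the actual mshtml code, and invoking the dangling pointer is undefined behavior by design.

#include <cstdio>
#include <vector>

// Invented names; a minimal model of the subscribe-without-unsubscribe pattern.
struct DevToolsEventsTab {
    void OnDOMEventListenerRemoved() { puts("Events tab notified"); }
};

struct ListenerRemovalCallbacks {
    std::vector<DevToolsEventsTab*> subscribers;   // raw, non-owning pointers

    void Subscribe(DevToolsEventsTab *tab) { subscribers.push_back(tab); }

    void Invoke() {
        // Use-after-free if a subscriber was deleted without unsubscribing.
        for (auto *tab : subscribers)
            tab->OnDOMEventListenerRemoved();
    }
};

int main() {
    ListenerRemovalCallbacks callbacks;

    auto *eventsTab = new DevToolsEventsTab();
    callbacks.Subscribe(eventsTab);   // Events tab opened: subscribes for notifications

    delete eventsTab;                 // dev tools window closed: the consumer is freed
                                      // but never removed from the callback list

    callbacks.Invoke();               // removeEventListener fires the callbacks and
                                      // uses the dangling pointer (undefined behavior)
    return 0;
}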

Summary

Tools such as the browser’s Developer Tools dynamically add complexity to the process and may open up additional attack surface.

Exploitation

Even though use-after-free (UAF) bugs can often be exploited for arbitrary code execution, this bug is not exploitable due to the MemGC mitigation. The freed memory block is zeroed, but not deallocated, while other valid objects still point to it. As a result, the dereferenced pointer is always NULL, leading to a non-exploitable crash.

Responsible Disclosure

We reported this issue to Microsoft, which decided not to fix it.

POC

Below is a small HTML page that demonstrates the concept and leads to a crash.
Tested IE11 version: 11.592.18362.0
Update Versions: 11.0.170 (KB4534251)

<!DOCTYPE html>
<html>
<body>
<pre>
1. Open dev tools
2. Go to Events tab
3. Close dev tools
4. Click on Enable
</pre>
<button onclick="setHandler()">Enable</button>
<button onclick="removeHandler()">Disable</button>
<p id="demo"></p>
<script>
function myFunction() {
    document.getElementById("demo").innerHTML = Math.random();
}
function setHandler() {
    document.body.addEventListener("mousemove", myFunction);
}
function removeHandler() {
    document.body.removeEventListener("mousemove", myFunction);
}
</script>
</body>
</html>

Interested in researching browser & OS bugs daily?

ZecOps is expanding. We’re looking for additional researchers to join ZecOps Research Team. If you’re interested, send us a note at careers@zecops.com