2020 presented us with many surprises, but the world of data privacy somewhat bucked the trend. Many industry verticals suffered losses, uncertainty and closures, but the protection of individuals and their information continued to truck on. After many websites simply blocked access unless you accepted their cookies (now deemed unlawful), we received clarity on cookies from the European Data Protection Board (EDPB). With the ending of Privacy Shield, we witnessed the cessation of a legal …
Over the last few months, Zero Trust Architecture (ZTA) conversations have been top-of-mind across the DoD. We have been hearing the chatter during industry events, all while sharing conflicting interpretations and using varying definitions. In a sense, there is uncertainty around how the security model can and should work. From the chatter, one thing is clear – we need more time: time for mission owners to settle on a comprehensive, all-inclusive and widely accepted definition of Zero Trust Architecture.
Today, most entities use a multi-phased security approach. Most commonly, the foundation (or first step) of the approach is implementing secure access to confidential resources. Coupled with the shift to remote and distance work, the question arises: “Are my resources and data safe, and are they safe in the cloud?”
Thankfully, the DoD is in the process of developing a long-term strategy for ZTA. Industry partners, like McAfee, have been briefed along the way. It has been refreshing to see the DoD take the initial steps to clearly define what ZTA is, what security objectives it must meet, and the best approach for implementation in the real world. A recent DoD briefing states “ZTA is a data-centric security model that eliminates the idea of trusted or untrusted networks, devices, personas, or processes and shifts to a multi-attribute based confidence levels that enable authentication and authorization policies under the concept of least privilege access”.
What stands out to me is the data-centric approach to ZTA. Let us explore this concept a bit further. Conditional access to resources (such as network and data) is a well-recognized challenge. In fact, there are several approaches to solving it, whether the end goal is to limit access or simply segment access. The tougher question we need to ask (and ultimately answer) is how do we limit contextual access to cloud assets? What data security models should we consider when our traditional security tools and methods do not provide adequate monitoring? And is securing data, or at least watching user behavior, enough when the data stays within multiple cloud infrastructures or transfers from one cloud environment to another?
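To make the idea of multi-attribute-based confidence levels concrete, here is a minimal, hypothetical sketch – not DoD or McAfee logic, and with weights and thresholds invented purely for illustration – of a policy check that combines a few request attributes into a confidence score and demands higher confidence for more sensitive data:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    device_managed: bool   # endpoint is under agency management
    mfa_passed: bool       # user completed multi-factor authentication
    network_known: bool    # request originates from a recognized network segment
    data_sensitivity: int  # 1 (public) .. 3 (restricted); illustrative scale

# Illustrative weights and thresholds; a real ZTA policy engine would
# derive these from agency policy, not hard-code them.
WEIGHTS = {"device_managed": 0.4, "mfa_passed": 0.4, "network_known": 0.2}
THRESHOLDS = {1: 0.2, 2: 0.6, 3: 0.9}  # minimum confidence per sensitivity tier

def confidence(req: AccessRequest) -> float:
    """Combine the observed attributes into a single confidence level."""
    return sum(w for attr, w in WEIGHTS.items() if getattr(req, attr))

def allow(req: AccessRequest) -> bool:
    """Least privilege: more sensitive data demands higher confidence."""
    return confidence(req) >= THRESHOLDS[req.data_sensitivity]
```

Under this sketch, a managed device with MFA from a known network clears the restricted tier, while the same user on an unknown network does not – the decision adapts to context rather than to a static notion of a trusted network.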
Increased usage of collaboration tools like Microsoft 365, Teams, Slack and WebEx offers easily relatable examples of data moving from one cloud environment to another. The challenge with this type of data exchange is that the data flows stay within the cloud, using an East-West traffic model. Similarly, would you know if sensitive information created directly in Office 365 is uploaded to a different cloud service? Collaboration tools by design encourage sharing data in real time between trusted internal users and, more recently with telework, even external or guest users. Take for example a supply chain partner collaborating with an end user. Trust and conditional access potentially create a risk to both parties, inside and outside of their respective organizational boundaries. A data breach, whether intentional or not, can easily occur because of the pre-established trust and access. Few default protection capabilities prevent this situation from occurring unless they are intentionally designed in. Data loss protection, activity monitoring and rights management all come into question. Clearly, new data governance models, tools and policy enforcement capabilities are required for even this simple collaboration example to meet the full objectives of ZTA.
So, as the communities of interest continue to refine the definitions of Zero Trust Architecture based upon deployment, usage, and experience, I believe we will find ourselves shifting from a Zero Trust model to an Advanced Adaptive Trust model. Our experience with multi-attribute-based confidence levels will evolve and so will our thinking around trust and data-centric security models in the cloud.
The post Data-Centric Security for the Cloud, Zero Trust or Advanced Adaptive Trust? appeared first on McAfee Blogs.
The Sandworm Team hacking group is part of Unit 74455 of the Russian Main Intelligence Directorate (GRU), the US Department of Justice (DoJ) claimed as it unsealed an indictment against six hackers and alleged members on Monday. Sandworm Team attacks “These GRU hackers and their co-conspirators engaged in computer intrusions and attacks intended to support Russian government efforts to undermine, retaliate against, or otherwise destabilize: Ukraine; Georgia; elections in France; efforts to hold Russia accountable …
The post US charges Sandworm hackers who mounted NotPetya, other high-profile attacks appeared first on Help Net Security.
For years, the United Arab Emirates (UAE) has committed itself to adopting information technology (IT) and electronic communication. The UAE’s Telecommunications Regulatory Authority (TRA) noted that this policy has made the state’s government agencies and organizations more efficient and has improved the ability of individuals to collaborate around the world. As such, the […]…
The post UAE’s Information Assurance Regulation – How to Achieve Compliance appeared first on The State of Security.
If you are someone who works for a cloud service provider in the business of federal contracting, you probably already have a good understanding of FedRAMP. It is also likely that our regular blog readers know the ins and outs of this program.
For those who are not involved in these areas, however, this acronym may be more unfamiliar. Perhaps you have only heard of it in passing conversation with a few of your expert cybersecurity colleagues, or you are just curious to learn what all of the hype is about. If you fall into this category – read on! This blog is for you.
At first glance, FedRAMP may seem like a type of onramp to an interstate headed for the federal government – and in a way, it is.
FedRAMP stands for the Federal Risk and Authorization Management Program, which provides a standard security assessment, authorization and continuous monitoring for cloud products and services to be used by federal agencies. The program’s overall mission is to protect the data of U.S. citizens in the cloud and promote the adoption of secure cloud services across the government with a standardized approach.
Once a cloud service has successfully made it onto the interstate – or achieved FedRAMP authorization – it’s allowed to be used by an agency and listed in the FedRAMP Marketplace. The FedRAMP Marketplace is a one-stop-shop for agencies to find cloud services that have been tested and approved as safe to use, making it much easier to determine if an offering meets security requirements.
In the fourth year of the program, FedRAMP had 20 authorized cloud service offerings. Now, eight years into the program, FedRAMP has over 200 authorized offerings, reflecting its commitment to help the government shift to the cloud and leverage new technologies.
Who should be FedRAMP authorized?
Any cloud service provider that has a contract with a federal agency or wants to work with an agency in the future must have FedRAMP authorization. Compliance with FedRAMP can also benefit providers who don’t have plans to partner with government, as it signals to the private sector they are committed to cloud security.
Using a cloud service that complies with FedRAMP standards is mandatory for federal agencies. It has also become popular in private industry, where organizations increasingly look to FedRAMP standards as a security benchmark for the cloud services they use.
How can a cloud service obtain authorization?
There are two ways for a cloud service to obtain FedRAMP authorization. One is with a Joint Authorization Board (JAB) provisional authorization (P-ATO) and the other is through an individual agency Authority to Operate (ATO).
A P-ATO is an initial approval of the cloud service provider by the JAB, which is made up of the Chief Information Officers (CIOs) from the Department of Defense (DoD), Department of Homeland Security (DHS) and General Services Administration (GSA). This designation means that the JAB has provided a provisional approval for agencies to leverage when granting an ATO to a cloud system.
The head of an agency grants an ATO as part of the agency authorization process. An ATO may be granted after an agency sponsor reviews the cloud service offering and completes a security assessment.
Why seek FedRAMP approval?
Achieving FedRAMP authorization for a cloud service is a long and rigorous process, but the program has received high praise from security officials and industry experts alike for its standardized approach to evaluating whether a cloud service offering meets some of the strongest cybersecurity requirements.
There are several benefits for cloud providers who authorize their service with FedRAMP. The program allows an authorized cloud service to be reused continuously across the federal government – saving time, money and effort for both cloud service providers and agencies. Authorization of a cloud service also gives service providers increased visibility of their product across government with a listing in the FedRAMP Marketplace.
By electing to comply with FedRAMP, cloud providers can demonstrate dedication to the highest data security standards. Though the process for achieving FedRAMP approval is complex, it is worthwhile for providers, as it signals a commitment to security to government and non-government customers.
McAfee’s Commitment to FedRAMP
At McAfee, we are dedicated to ensuring our cloud services are compliant with FedRAMP standards. We are proud that McAfee’s MVISION Cloud is the first Cloud Access Security Broker (CASB) platform to be granted a FedRAMP High Impact Provisional Authority to Operate (P-ATO) from the U.S. Government’s Joint Authorization Board (JAB).
Currently, MVISION Cloud is in use by ten federal agencies, including the Department of Energy (DOE), Department of Health and Human Services (HHS), Department of Homeland Security (DHS), Food and Drug Administration (FDA) and National Aeronautics and Space Administration (NASA).
MVISION Cloud allows federal organizations to have total visibility and control of their infrastructure to protect their data and applications in the cloud. The FedRAMP High JAB P-ATO designation is the highest compliance level available under FedRAMP, meaning that MVISION Cloud is authorized to manage highly sensitive government data.
We look forward to continuing to work closely with the FedRAMP program and other cloud providers dedicated to authorizing cloud service offerings with FedRAMP.
In January 2020, McAfee released the results of a survey establishing the extent of the use of .GOV validation and HTTPS encryption among county government websites in 13 states projected to be critical in the 2020 U.S. Presidential Election. The research was a result of my concern that the lack of .GOV and HTTPS among county government websites and election-specific websites could allow foreign or domestic malicious actors to potentially create fake websites and use them to spread disinformation in the final weeks and days leading up to Election Day 2020.
Subsequently, reports emerged in August that the U.S. Federal Bureau of Investigations, between March and June, had identified dozens of suspicious websites made to look like official U.S. state and federal election domains, some of them referencing voting in states like Pennsylvania, Georgia, Tennessee, Florida and others.
Just last week, the FBI and Department of Homeland Security released another warning about fake websites taking advantage of the lack of .GOV on election websites.
These revelations compelled us to conduct a follow-up survey of county election websites in all 50 U.S. states.
Why .GOV and HTTPS Matter
Using a .GOV web domain reinforces the legitimacy of the site. Government entities that purchase .GOV web domains have submitted evidence to the U.S. government that they truly are the legitimate local, county, or state governments they claimed to be. Websites using .COM, .NET, .ORG, and .US domain names can be purchased without such validation, meaning that there is no governing authority preventing malicious parties from using these names to set up and promote any number of fraudulent web domains mimicking legitimate county government domains.
An adversary could use fake election websites for disinformation and voter suppression by targeting specific citizens in swing states with misleading information on candidates or inaccurate information on the voting process such as poll location and times. In this way, a malicious actor could impact election results without ever physically or digitally interacting with voting machines or systems.
The HTTPS encryption measure assures citizens that any voter registration information shared with the site is encrypted, providing greater confidence in the entity with which they are sharing that information. Websites lacking the combination of .GOV and HTTPS cannot provide 100% assurance that voters seeking election information are visiting legitimate county and county election websites. This leaves an opening for malicious actors to steal information or set up disinformation schemes.
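The two checks described above – a .GOV domain and HTTPS – are mechanical enough to sketch in code. This is an illustrative first pass only; real validation would also verify the certificate chain, and the example URLs below are modeled on domains mentioned later in this post:

```python
from urllib.parse import urlparse

def uses_gov_and_https(url: str) -> bool:
    """True only if the URL is served over HTTPS *and* its hostname
    falls under the .gov top-level domain (which requires the registrant
    to prove its government status)."""
    parts = urlparse(url)
    host = (parts.hostname or "").lower()
    return parts.scheme == "https" and host.endswith(".gov")

# uses_gov_and_https("https://www.carrollcountyohioelections.gov")  -> True
# uses_gov_and_https("https://www.votedenton.com")                  -> False (.com domain)
# uses_gov_and_https("http://example.gov")                          -> False (no HTTPS)
```

Note that the hostname check guards against lookalikes such as `county.gov.example.com`, whose hostname does not actually end in `.gov`.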
I recently demonstrated how such a fake website could be created by mimicking a genuine county election website and then inserting misleading information that could influence voter behavior. This was done in an isolated lab environment that was not accessible from the internet, so as not to create any confusion for legitimate voters.
In many cases, election websites have been set up to prioritize a strong user experience over mitigating the risk that they could be spoofed to exploit the communities they serve. Malicious actors can pass off fake election websites and mislead large numbers of voters before government organizations detect them. A campaign close to Election Day could confuse voters and prevent votes from being cast, resulting in missing votes or an overall loss of confidence in the democratic system.
September 2020 Survey Findings
McAfee’s September survey of county election administration websites in all 50 U.S. states (3089 counties) found that 80.2% of election administration websites or webpages lack the .GOV validation that confirms they are the websites they claim to be.
Nearly 45% of election administration websites or webpages lack the necessary HTTPS encryption to prevent third parties from redirecting voters to fake websites or stealing voters’ personal information.
Only 16.4% of U.S. county election websites implement U.S. government .GOV validation and HTTPS encryption.
[Table: for each state – the number of counties, and the counts and percentages of counties using .GOV, HTTPS, and both]
We found that the battleground states were largely in a bad position when it came to .GOV and HTTPS.
Only 29% of election websites used both .GOV and HTTPS in North Carolina, 22% in Georgia, 15.3% in Wisconsin, 10.8% in Michigan, 10.4% in Pennsylvania, and 2.4% in Texas.
While 95.5% of Florida’s county election websites and webpages use HTTPS encryption, only 6% validate their authenticity with .GOV.
During the January 2020 survey, only 11 Iowa counties protected their election administration pages and domains with .GOV validation and HTTPS encryption. By September 2020, that number rose to 25 as 14 counties added .GOV validation. But 72.7% of the state’s county election sites and pages still lack official U.S. government validation of their authenticity.
By contrast, Ohio led the survey pool with 87.5% of election webpages and domains validated by .GOV and protected by HTTPS encryption. Four of five (80%) Hawaii counties protect their main county and election webpages with both .GOV validation and encryption, and 73.3% of Arizona county election websites do the same.
What’s not working
Separate Election Sites. As many as 166 counties set up websites that were completely separate from their main county web domain. Separate election sites may have easy-to-remember, user-friendly domain names to make them more accessible for the broadest possible audience of citizens. Examples include my own county’s www.votedenton.com as well as www.votestanlycounty.com, www.carrollcountyohioelections.gov, www.voteseminole.org, and www.worthelections.com.
The problem with these election-specific domains is that while 89.1% of these sites have HTTPS, 92.2% lack .GOV validation to guarantee that they belong to the county governments they claim. Furthermore, only 7.2% of these domains have both .GOV and HTTPS implemented. This suggests that malicious parties could easily set up numerous websites with similarly named domains to spoof these legitimate sites.
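As a toy illustration of how defenders might hunt for such spoofs (not a description of any McAfee tooling), one could flag candidate domains that sit within a small edit distance of a legitimate election domain – the domain names and threshold here are hypothetical:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_like(candidate: str, legit: str, max_dist: int = 2) -> bool:
    """Flag a near-miss of the legitimate domain (but not the domain itself)."""
    return 0 < levenshtein(candidate, legit) <= max_dist
```

For example, a hypothetical registration of `votedenten.com` would be flagged as a near-miss of `votedenton.com`, while the legitimate domain itself would not.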
Not on OUR website. Some smaller counties with few resources often reason that they can inform and protect voters simply by linking from their county websites to their states’ official election sites. Other smaller counties have suggested that social media platforms such as Facebook are preferable to election websites to reach Internet-savvy voters.
Unfortunately, neither of these approaches prevents malicious actors from spoofing their county government web properties. Such actors could still set up fake websites regardless of whether the genuine websites link to a .GOV validated state election website or whether counties set up amazing Facebook election pages.
For that matter, Facebook is not a government entity focused on validating that organizational or group pages are owned by the entities they claim to be. The platform could just as easily be used by malicious parties to create fake pages spreading disinformation about where and how to vote during elections.
It’s not OUR job. McAfee found that some states’ voters could be susceptible to fake county election websites even though their counties have little if any role at all in administering elections. States such as Connecticut, Delaware, Maine, Massachusetts, New Hampshire, Rhode Island and Vermont administer their elections through their local governments, meaning that any election information is only available at the states’ websites and those websites belonging to major cities and towns. While this arrangement makes county-level website comparisons with other states difficult for the purpose of our survey, it doesn’t make voters in these states any less susceptible to fake versions of their county website.
There should be one recipe for the security and integrity of government websites such as election websites and that recipe should be .GOV and HTTPS.
What IS working: The Carrot & The Stick
Ohio’s leadership position in our survey appears to be the result of a state-led initiative to transition county election-related content to .GOV validated web properties. Ohio’s Secretary of State used “the stick,” issuing an official order demanding that counties implement .GOV and HTTPS on their election web properties. If counties couldn’t move their existing websites to .GOV, he offered “the carrot” of allowing them to leverage the state’s domain.
A majority of counties have subsequently transitioned their main county websites to .GOV domains, their election-specific websites to .GOV domains, or their election-specific webpages to Ohio’s own .GOV-validated https://ohio.gov/ domain.
While Ohio’s main county websites still largely lack .GOV validation, Ohio does provide a mechanism for voters to quickly assess if the main election website is real or potentially fake. Other states should consider such interim strategies until all county and local websites with election functions can be fully transitioned to .GOV.
Ultimately, the end goal should be that we can tell voters: if you don’t see .GOV and HTTPS, don’t believe the website is legitimate or safe. What we tell voters must be that simple, because the general public lacks the technical background to distinguish real sites from fake ones.
For more information on our .GOV-HTTPS county website research, potential disinformation campaigns, other threats to our elections, and voter safety tips, please visit our Elections 2020 page: https://www.mcafee.com/enterprise/en-us/2020-elections.html
The post US County Election Websites (Still) Fail to Fulfill Basic Security Measures appeared first on McAfee Blogs.
Following in the footsteps of many other national governments before them, I'm very happy to welcome the Canadian Centre for Cyber Security to Have I Been Pwned. The Canadian Centre for Cyber Security now has full and free access to query all Canadian federal government domains across both past and future breaches.
Canada's inclusion in the service brings the total to 11 federal governments across North America, Europe and Australia. I hope to include more parts of the world in the coming months.
As Congress prepares to return to Washington in the coming weeks, finalizing the FY2021 National Defense Authorization Act (NDAA) will be a top priority. The massive defense bill features several important cybersecurity provisions, from strengthening CISA and promoting interoperability to creating a National Cyber Director position in the White House and codifying FedRAMP.
These are vital components of the legislation that conferees should work together to include in the final version of the bill, including:
One of the main recommendations of the Cyberspace Solarium Commission’s report this spring was to further strengthen CISA, an agency that has already made great strides in protecting our country from cyberattacks. An amendment to the House version of the NDAA would do just that, by giving CISA additional authority it needs to effectively hunt for threats and vulnerabilities on the federal network.
Bad actors, criminal organizations and even nation-states are continually looking to launch opportunistic attacks. Giving CISA additional tools, resources and funding needed to secure the nation’s digital infrastructure and secure our intelligence and information is a no-brainer and Congress should ensure the agency gets the resources it needs in the final version of the NDAA.
Perhaps now more than ever before, interoperability is key to a robust security program. As telework among the federal workforce continues and expands, an increased variety of communication tools, devices and networks put federal networks at risk. Security tools that work together and are interoperable better provide a full range of protection across these environments.
The House version of the NDAA includes several provisions to promote interoperability within the National Guard, the military and across the Federal government. The Senate NDAA likewise includes language requiring that the DoD craft regulations to facilitate its access to and utilization of system, major subsystem, and major component software-defined interfaces, in order to advance DoD’s efforts to generate diverse and effective kill chains. The regulations and guidance would also apply to purely software systems, including business systems and cybersecurity systems, and would require acquisition plans and solicitations to incorporate mandates for the delivery of those software-defined interfaces.
For too long, agencies have leveraged a grab bag of tools that each served a specific purpose, but didn’t offer broad, effective coverage. Congress has a valuable opportunity to change that and encourage more interoperable solutions that provide the security needed in today’s constantly evolving threat landscape.
Creating a National Cyber Director Position
The House version of the NDAA would establish a Senate-confirmed National Cyber Director within the White House, in charge of overseeing digital operations across the federal government. This role, a recommendation of the Cyberspace Solarium Commission, would give the federal government a single point person for all things cyber.
As former Rep. Mike Rogers argued in an op-ed published in The Hill last month, “the cyber challenge that we face as a country is daunting and complex.” We face new threats every day. Coordinating cyber strategy across the federal government, rather than the agency-by-agency approach we have today, is critical to ensuring we stay on top of threats and effectively protect the nation’s critical infrastructure, intellectual property and data from an attack.
The FedRAMP Authorization Act, included in the House version of the NDAA, would codify the FedRAMP program and give it a formal standing for Congressional review, a critical step towards making the program more efficient and useful for agencies across the government. Providing this program more oversight will further validate the FedRAMP approved products from across the industry as safe and secure for federal use. The FedRAMP authorization bill also includes language that will help focus the Administration’s attention on the need to secure the vulnerable spaces between and among cloud services and applications. Agencies need to focus on securing these vulnerabilities between and among clouds since sophisticated hackers target these seams that too often are left unprotected.
Additionally, the Pentagon has already committed to FedRAMP reciprocity. FedRAMP works – and codifying it to bring the rest of the Federal government into the program would offer an excellent opportunity for wide-scale cloud adoption, something the federal government would benefit greatly from.
We hope that NDAA conferees will consider these important cyber provisions and include them in the final version of the bill and look forward to continuing our work with government partners on important cyber issues like these.
The post NDAA Conference: Opportunity to Improve the Nation’s Cybersecurity Posture appeared first on McAfee Blogs.
Between January and April of this year, the government sector saw a 45% increase in enterprise cloud use, and as the work-from-home norm continues, socially distanced teamwork will require even more cloud-based collaboration services.
Hybrid and multi-cloud architectures can offer government agencies the flexibility, enhanced security and capacity needed to achieve what they need for modernizing now and into the future. Yet many questions remain surrounding the implementation of multi- and hybrid-cloud architectures. Adopting a cloud-smart approach across an agency’s infrastructure is a complex process with corresponding challenges for federal CISOs.
I recently had the opportunity to sit with several public and private sector leaders in cloud technology to discuss these issues at the Securing the Complex Ecosystem of Hybrid Cloud webinar, organized by the Center for Public Policy Innovation (CPPI) and Homeland Security Dialogue Forum (HSDF).
Everyone agreed that although the technological infrastructure supporting hybrid and multi-cloud environments has made significant advancements in recent years, there is still much work ahead to ensure government agencies are operating with advanced security.
There are three key concepts for federal CISOs to consider as they develop multi- and hybrid-cloud implementation strategies:
There is no one-size-fits-all hybrid environment
Organizations have adopted various capabilities, each with unique gaps that must be filled. A clear system for how organizations can successfully fill these gaps will take time to develop. That said, there is no one-size-fits-all hybrid or multi-cloud technology for groups looking to implement a cloud approach across their infrastructure.
Zero-trust will continue to evolve in terms of its definition
Zero-trust has been around for quite some time and will continue to grow in terms of its definition. In concept, zero-trust is an approach that requires an organization to complete a thorough inspection of its existing architecture. It is not one specific technology; it is a capability set that must be applied to all areas of an organization’s infrastructure to achieve a hybrid or multi-cloud environment.
Strategies for data protection must have a cohesive enforcement policy
A consistent enforcement policy is key in maintaining an easily recognizable strategy for data protection and threat management. Conditional and contextual access to data is critical for organizations to fully accomplish cloud-based collaboration across teams.
Successful integration of a multi-cloud environment poses real challenges for all sectors, particularly for enterprises as large and complex as the federal government. Managing security across different cloud environments can be overwhelmingly complicated for IT staff, which is why they need tools that can automate their tasks and provide continued protection of sensitive information wherever it goes inside or outside the cloud.
At McAfee, we’ve been dedicating ourselves to solving these problems. We are excited that McAfee’s MVISION Cloud has been recognized as the first cloud access security broker (CASB) with FedRAMP High authorization. Additionally, we’ve been awarded an Other Transaction Authority by the Defense Innovation Unit to prototype a Secure Cloud Management Platform through McAfee’s MVISION Unified Cloud Edge (UCE) cybersecurity solution.
We look forward to engaging in more strategic discussions with our partners in the private and public sectors to not only discuss but also help solve the security challenges of federal cloud adoption.
The post Multi-Cloud Environment Challenges for Government Agencies appeared first on McAfee Blogs.
Let’s start with the good news. Agencies are adopting cloud services at an increased rate. Adoption has only increased in times of coronavirus quarantine lockdowns with most federal, state and municipal workforce working from home. What’s even better news is that we also see increased adoption of cloud security tools, like CASB, which is commensurate with the expanding cloud footprint of US Public Sector agencies.
So now we have security tools in place to secure our cloud assets in SaaS, PaaS and IaaS. The next step is to determine what security controls need to be implemented. What DLP policies should the agency adopt? What capabilities of a cloud service should be enabled or disabled to maintain a robust security posture? How does an agency actually go about measuring the effectiveness of the security controls that were implemented? How do we find out how we stack up against our peer organizations?
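As a toy illustration of the kind of rule a DLP policy evaluates, consider a matcher that flags text containing patterns resembling sensitive identifiers. The patterns here are deliberately naive and purely illustrative; production DLP engines use validated detectors, checksums and proximity rules rather than bare regexes:

```python
import re

# Illustrative detection patterns only (US SSN format; 16-digit card number
# allowing spaces or dashes between digits).
DLP_RULES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def classify(text: str) -> list[str]:
    """Return the names of the DLP rules that the text triggers."""
    return [name for name, pattern in DLP_RULES.items() if pattern.search(text)]
```

A policy engine would then map each triggered rule to an action – block the upload, quarantine the file, or alert an administrator – which is the policy-selection decision the questions above are really about.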
To answer these questions, McAfee developed MVISION Cloud Security Advisor (CSA). Cloud Security Advisor is a portal that is provided “out-of-the-box” with your organization’s MVISION Cloud CASB tenant. CSA provides a comprehensive set of recommendations for organizations to prioritize efforts in implementing their cloud security controls. The recommendations are broken down into Visibility and Control metrics. There is also a section that provides quarterly reports on various parameters, which we will discuss in a little bit.
When you first access Cloud Security Advisor dashboard you are presented with a “magic quadrant” that shows your organization’s security posture relative to other peer organizations on the scales of Control and Visibility and provides a maturity score for both.
There is even an option to select a vertical market to see how your organization stacks up against organizations in other business sectors.
On the right of the main dashboard are checklist items that provide a short description and current progress in following Cloud Security Advisor’s recommendations. CSA scans the organization’s MVISION Cloud environment once every 24 hours. Any changes to MVISION Cloud will be reflected in the next scan. In the screenshot below, for example, we see an environment that is not enforcing controls on publicly shared links in collaboration SaaS apps.
From here, a security admin can simply click on the checklist item and then on Enable Policy. This automatically takes the user to the DLP Policy Templates page to select the appropriate policy for enforcement.
Another powerful capability of MVISION Cloud Security Advisor is providing quarterly Cloud Security Reports. These are accessible from the main CSA dashboard by going to View Reports and then selecting a quarter for which you would like to see the report.
From there we can start examining our organization’s cloud footprint to identify the total number of Shadow IT services discovered that quarter, as well as some additional Shadow IT statistics.
Next, we can look at IaaS resources in all our AWS, Azure and GCP environments.
We then proceed to look at summary statistics for DLP and access policy violations. Incidents show policy violations of each type detected across all of the organization’s cloud environments secured by MVISION Cloud CASB.
The next screen shows user behavioral anomalies and threats uncovered by the MVISION Cloud UBA machine-learning engine.
The Malware section of the report provides insights into malware uncovered in SaaS and IaaS environments connected to MVISION Cloud.
The Data at Risk report is probably the most pertinent to gauging the effectiveness of the MVISION Cloud CASB solution. This report shows how much of the organization’s data was at risk and how it was secured using MVISION Cloud CASB. As seen from the image, there is a downward trend, indicating progress is being made to secure the organization’s data.
The Sensitive Data report shows how an organization’s sensitive data is distributed across all cloud services in use by the organization. This report also provides insights into cloud adoption trends for your organization.
The “Users” report is a pivot table of the Sensitive Data report that organizes incidents and policy violations by individual users. Ultimately, the report shows how much of a risk an organization’s users pose to its data.
The Mobile Devices report shows incidents for each type of detected mobile device.
The next three pages of the CSA report provide a deeper dive into the data on the front page of the CSA portal we saw at the beginning of this blog. On the Scores page we see the “magic quadrant” with Control and Visibility axes, together with progress relative to previous quarters. The Visibility score and Control score, both on a scale of 100, gauge your organization’s maturity in securing its cloud footprint.
Next, the Visibility metrics page. Visibility metrics measure how well an organization has been doing in gaining visibility into what is out there in their cloud environment and how secure it is.
Finally, the Control metrics page shows how well an organization has performed in placing controls and mitigating security risks for its cloud environment.
And that, in a nutshell, is it. By reviewing the screenshots from the Cloud Security Advisor dashboard you should now have a good idea of the metrics at your disposal to quantify cloud security effectiveness for your organization.
To see MVISION Cloud Security Advisor in action, please check out the video below: