Organizations across the globe have moved quickly through digital transformation to enable a remote workforce model during the global pandemic. That has naturally resulted in a multi-fold increase in a company’s IT attack surface.
That’s why security leaders should consider the added risks carried by the remote workforce and their personal devices, which fall outside the purview of the company’s security measures. This post aims to walk you through the risks you might be exposed to. Are you ready? Then let’s jump in!
Common Risks of an Attack Surface and Remote Workforce
- Accidental data exposure during work from home
There is a higher risk of company-critical data being exposed accidentally, with most team members working remotely and accessing data outside the company’s security measures. That includes code, applications, and customer data, among others.
- Increased third- and fourth-party risks
Keep in mind that your third- and fourth-party vendor risks have grown multi-fold because of the surge in organizations allowing their workforce to work remotely from home, leading to an expanded attack surface.
- Unknown new assets exposed to the internet during work from home
With little time to prepare for work from home while maintaining business continuity, many assets went online that may now be open to attack. The company’s security team needs to learn which assets are publicly visible and exposed to the world.
- Isolated IT assets
Organizations can cope with remote work as long as everybody uses a managed computer that the IT team has hardened. The problem is that doing so would have required preparing for the pandemic ahead of time.
With many workers using personal devices often or all the time, IT cannot access those devices to harden them or standardize settings. Each machine carries vulnerabilities that cannot be managed and liabilities that cannot be assessed.
- Strained security resources
Remote computers are on their own in the wild, lacking the cybersecurity resources that the standard enterprise supplies in-house. Each is an isolated endpoint that must bear the responsibility for protecting company networks, applications, and data.
That is a lot to ask of client-based antivirus and consumer firewall software, particularly when defending against high-volume and novel attacks.
How to Manage the Risks of a Remote Workforce
Working from home is a business essential throughout the present coronavirus pandemic. It is not clear if this trend is limited to the present crisis or if the pandemic will usher in a future with more flexibility for remote work.
Some of the measures to mitigate the risks of work from home are the following:
- Utilize unified endpoint management (UEM) platforms
Keep in mind that UEM platforms can streamline the process of rolling out security updates and patching assets across different operating systems.
These tools also enable the security department to manage native security capabilities, enforce encryption across operating systems, and gain greater visibility across devices.
- Automate threat detection to lessen the burden on security staff
Wide-ranging remote work has created new problems for security experts. Automated security tools for data encryption and threat prevention, detection, and response help take some of the burden off admins, enabling them to concentrate their energy on the new challenges caused by remote work.
- Support app-focused security
Most workers in the present environment have been obliged to use their personal devices to work from home. It helps to invest in app-based solutions such as per-app VPN, app security, app containers, and app virtualization to safeguard company assets being accessed on personal devices. A good example is Zero Trust Application Access from a provider such as Perimeter 81, which allows security experts to de-emphasize device-centric endpoint protection.
- Address the human factor in remote work security
With so many distractions at home, employees’ typical safeguards against cyberthreats are down. They may use personal devices for work, connect to unsecured Wi-Fi networks, share their work devices, or fall prey to phishing emails, becoming the biggest risk.
On top of that, your data systems can be put at great risk through poor document retention, the use of unsecured channels to send critical data, or the use of unencrypted USB flash drives. Fundamentally, the usual information security protections aren’t there, leaving your network susceptible to cyberattacks.
Testing employees’ awareness of and responses to cybersecurity threats is essential before letting them telecommute. That can be done with a phishing simulator that lets you craft emails appearing to come from the IT team, management, or colleagues, designed to convince staff to open a link, download an attachment, or submit credentials.
The data you receive can be utilized to train staff on cybersecurity tips and best practices to prevent cyberattacks.
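The simulation-to-training loop above can be sketched in a few lines. This is an illustrative sketch, not the export format of any particular phishing simulator; the event records, outcome labels, and user names are assumptions.

```python
# Illustrative tally of phishing-simulation results, used to pick out
# users who need follow-up security training. All data is made up.
from collections import Counter

def summarize_campaign(events):
    """Count outcomes and list users who clicked or submitted credentials."""
    outcomes = Counter(e["outcome"] for e in events)
    at_risk = sorted({e["user"] for e in events
                      if e["outcome"] in ("clicked", "submitted")})
    return outcomes, at_risk

events = [
    {"user": "alice", "outcome": "reported"},   # forwarded to IT: good
    {"user": "bob",   "outcome": "clicked"},    # opened the link
    {"user": "carol", "outcome": "submitted"},  # entered credentials
    {"user": "dave",  "outcome": "ignored"},
]

outcomes, at_risk = summarize_campaign(events)
print(dict(outcomes))
print(at_risk)  # ['bob', 'carol'] -> enroll in targeted training
```

The `at_risk` list is exactly the data referred to above: who to prioritize for cybersecurity tips and best-practices training.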
- Identify possible risks and their possible effect
Beyond the technical and human sides of remote work security, it is essential to identify the potential threats that may hit the network, their likelihood, and how they would affect the company.
During the cybersecurity risk assessment, list all potential attack points that hackers could exploit to access your data or systems. The next phase is to rate the possible effect on the network’s infrastructure as low, medium, or high, based on recoverability and significance.
It also helps to assess the control environment, evaluating your threat prevention, detection, and mitigation capabilities. Once you have discovered where the risks lie and assigned each a rating, it is time to resolve the concerns.
That may include opting for a better email filter, replacing the data backup system, or engaging a third-party security team. You can then reassess the risk after you update or implement new security controls.
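The rating step described above can be sketched as a simple likelihood-times-impact matrix. The 1-3 scales, the thresholds, and the example attack points below are illustrative assumptions, not a formal standard.

```python
# Map each attack point's likelihood and impact (both 1-3) to a
# low/medium/high rating via their product. Thresholds are illustrative.

def risk_rating(likelihood, impact):
    score = likelihood * impact  # ranges 1..9
    if score <= 2:
        return "low"
    if score <= 5:
        return "medium"
    return "high"

# Hypothetical attack points from a remote-work risk assessment.
attack_points = {
    "unpatched home router":  (3, 2),
    "unencrypted USB drive":  (2, 3),
    "legacy VPN portal":      (2, 2),
    "shared work laptop":     (1, 2),
}

ratings = {name: risk_rating(l, i) for name, (l, i) in attack_points.items()}
for name, rating in ratings.items():
    print(f"{name}: {rating}")
```

High-rated items (here the router and the USB drive) would be remediated first, then re-scored after new controls are in place.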
Risk assessment for a remote workforce is a complicated procedure that needs substantial planning and expert knowledge to ensure every person, data store, process, and device in the company is covered. Without professional support, it can turn into trial and error.
Everything is different, and yet the same. As we look ahead to the cybersecurity landscape in the next 12 months, it is from a position no one predicted this time last year. Business operations have changed beyond recognition with most employees working from home in a transition that happened almost overnight. Stretched security teams have been challenged to rapidly deploy robust remote working facilities to maintain productivity. Most were writing the ‘pandemic playbook’ as they went along.
Ironically, one of the few certainties of the situation was that cybercriminals would take advantage of disruption to escalate campaigns. In that sense, nothing changed, except that the opportunity was suddenly much greater. As a result, nine in ten security professionals surveyed by our Threat Analysis Unit said they were facing increased attack volumes, which they attributed to the newly distributed working environment.
The effects of COVID-19 will continue to impact the cybersecurity sector for some time, but they are not the only considerations. This year we’ve seen cybercrime and cybercriminal groups continue along a path of technical and industry innovation that will see new strategies and tactics gain traction in 2021. We have also seen cyber defences tested like never before and, for the most part, they have held firm; there is reason for cybersecurity professionals to be optimistic.
With this in mind, the following are six trends we expect to see, and key areas cybersecurity professionals should keep their eyes on in 2021.
1. Remote-Working Focuses Attacker Attention on Mobile Compromise
As business becomes more mobile than ever and remote working persists, mobile devices and operating systems will be increasingly targeted. As employees use personal devices to review and share sensitive corporate information, these become an excellent point of ingress for attackers. If hackers can get into your Android or iPhone, they will then be able to island-hop into the corporate networks you access, whether by deactivating VPNs or breaking down firewalls.
We will also see hackers using malware such as Shlayer to access iOS, ultimately turning Siri into their personal listening device to eavesdrop on sensitive business communications.
Combating these risks requires a combination of new mobile device policies and infrastructure designed to facilitate continued remote working, as well as raising employee awareness of the persistent risks and the importance of digital distancing.
2. Continuing Direct Impacts on Healthcare
In terms of the direct impact of COVID-19, the healthcare sector, at the heart of crisis response, will see the adaptations it made to try and maintain patient services become a vulnerability. With growing reliance on telemedicine for routine medical appointments, lucrative personally identifiable information (PII) is being accessed from remote locations and as a result is more easily intercepted by hackers. At the same time, vaccine-related data pertaining to trials and formulae is some of the most sought-after intellectual property right now, and the drive to get hold of it for financial or political gain is putting healthcare and biotech organisations under intense pressure from external threats and insider risk.
That said, the strain on healthcare cybersecurity is not going unheeded; we will see increased IT and security budgets in the sector to combat the growth in external threats.
3. Emerging Tactical Trends: Cloud-Jacking and Destructive ICS Attacks
As the new year dawns, we will see tried and tested tactics evolving to become more sophisticated and take advantage of changes in network architecture. Cloud-jacking through public clouds will become the island-hopping strategy of choice for cybercriminals as opportunity proliferates due to the overreliance on public clouds by the newly distributed workforce.
It won’t be only the virtual environment under threat. Increasing cyber-physical integration will tempt nation state-sponsored groups into bolder, more destructive attacks against industrial control system (ICS) environments. Critical National Infrastructure, energy and manufacturing companies will be in the crosshairs as OT threats ramp up. Our analysts are seeing new ICS-specific malware changing hands on the dark web and we are likely to see it in action in the coming year.
4. The Ransomware Economy Pivots to Extortion and Collaboration
Another familiar tactic taking on a new twist is ransomware. Ransomware groups have evolved their approach to neutralise the defensive effect of back-ups and disaster recovery by making sure they’ve exfiltrated all the data they need before the victim knows they’re under attack. Once the systems are locked attackers use the data in their possession to extort victims to pay to prevent the breach becoming public. And if that fails, they can sell the data anyway, meaning the victim is doubly damaged.
Ransomware is such big business that the leading groups are collaborating, sharing resources and infrastructure to develop more sophisticated and lucrative campaigns. Not all collaborations will be successful, however, and we’ll see groups disagreeing on the ethics of targeting vulnerable sectors such as healthcare.
5. AI Utilised for Defensive and Offensive Purposes
Technology innovation is as relevant to attackers as it is to defenders and, while artificial intelligence and machine learning have significant benefits in cybersecurity, we can expect to see adversaries continue to advance in the way AI/ML principles are used for post-exploitation activities. They’ll leverage collected information to pivot to other systems, move laterally and spread efficiently – all through automation.
The silver lining is that in 2021 defenders will begin to see significant AI/ML advancements and integrations into the security stack. Security automation will be simplified and integrated into the arsenal of more organisations – not just those with mature SOCs. As awareness of how attackers are using automation increases, we can expect defenders to fix the issue, maximising automation to spot malicious activity faster than ever before.
6. Defender Confidence is Justifiably on the Rise
To finish on a resoundingly positive note, this year we saw cyber defences placed under inconceivable strain and they flexed in response. Yes, there were vulnerabilities due to the rapidity of the switch to fully remote working, but on the whole security tools and processes are working. Defender technology is doing the job it is designed to do, and that is no small feat.
The mission-critical nature of cybersecurity has never been more apparent than in 2020 as teams have risen to the challenge of uniquely difficult circumstances. In recognition of this we will see board-level support and a much healthier relationship between IT and security teams as they collaborate to simultaneously empower and safeguard users. 2020 has been the catalyst for change for which we were more than ready.
This week, we announced the latest release of MVISION Unified Cloud Edge, which included a number of great data protection enhancements. With working patterns and data workflows dramatically changed in 2020, this release couldn’t be more timely.
According to a report by Gartner earlier in 2020, 88% of organizations have encouraged or required employees to work from home. And a report from PwC found that corporations have, by and large, termed the 2020 remote work effort a success. Many executives are reconfiguring office layouts to cut capacity by half or more, indicating that remote work is here to stay as a part of work life even after we come out of the restrictions placed on us by the pandemic.
Security teams, scrambling to keep pace with the work from home changes, are grappling with multiple challenges, a key one being how to protect corporate data from exfiltration and maintain compliance in this new work from home paradigm. Employees are working in less secure environments and using multiple applications and communication tools that may not have been permitted within the corporate environment. What if they upload sensitive corporate data to a less than secure cloud service? What if employees use their personal devices to download company email content or Salesforce contacts?
McAfee’s Unified Cloud Edge provides enterprises with comprehensive data and threat protection by bringing together its flagship secure web gateway, CASB, and endpoint DLP offerings into a single integrated Secure Access Service Edge (SASE) solution. The unified security solution offered by UCE features unified data classification and incident management across the network, sanctioned and unsanctioned (Shadow IT) cloud applications, web traffic, and endpoints, thereby covering multiple key exfiltration vectors.
UCE Protects Against Multiple Data Exfiltration Vectors
1. Exfiltration to High Risk Cloud Services
According to a recent McAfee report, 91% of cloud services do not encrypt data at rest and 87% of cloud services do not delete data upon account termination, allowing the cloud service to own customer data in perpetuity. McAfee UCE detects the usage of risky cloud services using over 75 security attributes and enforces policies, such as blocking all services with a risk score over 7, which helps prevent exfiltration of data into high-risk cloud services.
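The threshold policy mentioned above ("block all services with a risk score over 7") can be sketched as follows. The service names and scores are invented for illustration; the real product derives scores from 75+ security attributes rather than a single number.

```python
# Sketch of a risk-score threshold policy: any cloud service scoring
# above the threshold is blocked. Services and scores are hypothetical.

BLOCK_THRESHOLD = 7  # "risk score over 7" => block

def policy_action(service):
    return "block" if service["risk_score"] > BLOCK_THRESHOLD else "allow"

services = [
    {"name": "corp-storage",    "risk_score": 3},  # encrypts at rest
    {"name": "free-file-share", "risk_score": 9},  # retains data forever
]

for svc in services:
    print(svc["name"], "->", policy_action(svc))
```

Note the strict inequality: a service scoring exactly 7 is still allowed, matching the "over 7" wording.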
2. Exfiltration to permitted cloud services
Some cloud services, especially the high risk ones, can be blocked. But there are others which may not be fully sanctioned by IT, but fulfill a business need or improve productivity and thus may have to be allowed. To protect data while enabling these services, security teams can enforce partial controls, such as allowing users to download data from these services but blocking uploads. This way, employees remain productive while company data remains protected.
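A partial control of the kind described (allow downloads, block uploads) might look like the sketch below. The service name and rule table are hypothetical illustrations, not the product's actual policy syntax.

```python
# Sketch of partial controls for tolerated-but-unsanctioned services:
# downloads stay allowed for productivity, uploads are blocked so
# corporate data cannot leave. The rule table is made up.

PARTIAL_CONTROLS = {
    "tolerated-notes-app": {"download": "allow", "upload": "block"},
}

def evaluate(service, action):
    rules = PARTIAL_CONTROLS.get(service)
    if rules is None:
        return "allow"  # fully sanctioned service: no restriction here
    return rules.get(action, "allow")

print(evaluate("tolerated-notes-app", "download"))  # allow
print(evaluate("tolerated-notes-app", "upload"))    # block
```

This is the middle ground between outright blocking a service and fully sanctioning it.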
3. Exfiltration from sanctioned cloud services
Digital transformation and cloud-first initiatives have led to significant amounts of data moving to cloud data stores such as Office 365 and G Suite. So, companies are comfortable with sensitive corporate data living in these data stores but are worried about it being exfiltrated to unauthorized users. For example, a file in OneDrive can be shared with an unauthorized external user, or a user can download data from a corporate SharePoint account and then upload it to a personal OneDrive account. MVISION Cloud customers commonly apply collaboration controls to block unauthorized third party sharing and use inline controls like Tenant Restrictions to ensure employees always login with their corporate accounts and not with their personal accounts.
4. Exfiltration from endpoint devices
An important consideration for all security teams, especially given most employees are now working from home, is the plethora of unmanaged devices such as storage drives, printers, and peripherals that data can be exfiltrated into. In addition, services that enable remote working, like Zoom, WebEx, and Dropbox, have desktop apps that enable file sharing and syncing actions that cannot be controlled by network policies because of web socket or certificate pinning considerations. The ability to enforce data protection policies on endpoint devices becomes crucial to protect against data leakage to unauthorized devices and maintain compliance in a WFH world.
5. Exfiltration via email
Outbound email is one of the critical vectors for data loss. The ability to extend and enforce DLP policies to email is an important consideration for security teams. Many enterprises choose to apply inline email controls, while some choose to use the off-band method, which surfaces policy violations in a monitoring mode only.
UCE provides a Unified and Comprehensive Data Protection Offering
Using point security solutions for data protection raises multiple challenges. Managing policy workflows in multiple consoles, rewriting policies, and aligning incident information in multiple security products result in operational overhead and coordination challenges that slow down the teams involved and hurt the company’s ability to respond to a security incident. UCE brings web, CASB, and endpoint DLP into a converged offering for data protection. By providing a unified experience, UCE increases consistency and efficiencies for security teams in multiple ways.
1. Reusable classifications
A single set of classifications can be reused across different McAfee platforms, including ePO, MVISION Cloud, and Unified Cloud Edge. For example, if a classification is implemented to identify Brazilian driver’s license information for DLP policies on endpoint devices, the same classification can be applied in DLP policies governing collaboration in Office 365 or outgoing emails in Exchange Online. If the endpoint and cloud were instead secured by two separate products, it would require creating disparate classifications and policies on both platforms and then ensuring the two policies have the same underlying regex rules to keep policy violations consistent, increasing operational complexity and overhead for security teams.
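The reuse idea can be illustrated with a tiny sketch: one classification definition applied unchanged to content from any vector (endpoint file, cloud collaboration, outgoing email). The bare 11-digit pattern is a simplified stand-in; a real Brazilian driver's-license detector would be considerably more precise.

```python
# One shared classification table, applied identically regardless of
# which vector the content came from. The regex is a simplified stand-in.
import re

CLASSIFICATIONS = {
    "br_drivers_license": re.compile(r"\b\d{11}\b"),  # illustrative only
}

def classify(text):
    """Return the names of all classifications that match the text."""
    return [name for name, pattern in CLASSIFICATIONS.items()
            if pattern.search(text)]

# Same definition, three vectors: results stay consistent by construction.
print(classify("CNH: 12345678901"))   # ['br_drivers_license']
print(classify("no sensitive data"))  # []
```

Because every vector consults the same table, there are no disparate regex rules to drift out of sync.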
2. Converged incident infrastructure
Customers using MVISION Cloud have a unified view of cloud, web, and endpoint DLP incidents in a single unified console. This can be extremely helpful in scenarios where a single exfiltration act by an employee is spread across multiple vectors. For example, an employee attempts to share a company document with his personal email address, and then tries to upload it to a shadow service like WeTransfer. When both these attempts don’t work, he uses a USB drive to copy the document from his office laptop. Each of these fires an incident, but when we present a consolidated view of these incidents based on the file, your admins have a unique perspective and possibly a different remediation action as opposed to trying to parse these incidents from separate solutions.
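The consolidation logic can be sketched as grouping incidents by user and file. The records mirror the example above (email share, shadow-IT upload, USB copy), but the field names are assumptions rather than the product's schema.

```python
# Sketch of consolidating DLP incidents from multiple vectors into one
# case per (user, file), so a single exfiltration attempt spread across
# vectors surfaces as one story for the admin. Data is illustrative.
from collections import defaultdict

incidents = [
    {"vector": "email",     "file": "deal.docx", "user": "bob"},  # personal address
    {"vector": "shadow-it", "file": "deal.docx", "user": "bob"},  # WeTransfer upload
    {"vector": "endpoint",  "file": "deal.docx", "user": "bob"},  # USB copy
]

cases = defaultdict(list)
for inc in incidents:
    cases[(inc["user"], inc["file"])].append(inc["vector"])

for (user, file), vectors in cases.items():
    print(f"{user}/{file}: {len(vectors)} related incidents via {vectors}")
```

Three separate alerts collapse into one case, which is what lets an admin see escalating intent rather than three unrelated events.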
3. Consistent experience
McAfee data protection platforms provide customers with a consistent experience in creating a DLP policy, whether it is securing sanctioned cloud services, protecting against malware, or preventing data exfiltration to shadow cloud services. Having a familiar workflow makes it easy for multiple teams to create and manage policies and remediate incidents.
As the report from PwC states, the work from home paradigm is likely not going away anytime soon. As enterprises prepare for the new normal, a solution like Unified Cloud Edge enables the security transformation they need to gain success in a remote world.
The post Finally, True Unified Multi-Vector Data Protection in a Cloud World appeared first on McAfee Blogs.
Every day, new apps are developed to solve problems and create efficiency in individuals’ lives. Employees are continually experimenting with new apps to enhance productivity and simplify complex matters. When in a pinch, using DropBox to share large files or an online PDF editor for quick modifications are commonalities among employees. However, these apps, although useful, may not be sanctioned or observable by an IT department. This rapid adoption, while bringing the benefit of increased productivity and agility, also raises the ‘shadow IT problem’, where IT has little to no visibility into the cloud services that employees are using or the risk associated with these services. Without visibility, it becomes very difficult for IT to manage both cost expenditure and risk in the cloud. Per the McAfee Cloud Adoption and Risk report, the average enterprise today uses 1,950 cloud services, of which less than 10% are enterprise ready. To avert a data breach (with the average cost of a data breach in the US being $7.9 million), enterprises must exercise governance and control over their unsanctioned cloud usage. Does this sound all too familiar? It’s because these are many of the issues we faced with Shadow IT, and are facing again today with a similar security risk: connected apps.
What are Connected Apps? Collaboration platforms such as Office 365 enable teams and end-users to install and connect third-party apps or create their own custom apps to help solve new and existing business problems. For example, Microsoft hosts the Microsoft Store, where end-users can browse through thousands of apps and install them into their company’s Office 365 environment. These apps help augment native Microsoft Office capabilities and help increase end-user productivity. Some examples include WebEx to set up meetings from Outlook or a Survey Monkey add-in to initiate surveys from Microsoft Teams. When these apps are added, they will often ask the end-user to authorize access to their cloud app resources. This could be data stored in the app, like in SharePoint, or calendar information or email content. Authorizing access to third-party apps creates concerns for many organizations.
Reason 1: Risky Data Exfiltrated to 3rd Party Apps
What if the app itself is risky? For example, PDF converter apps ask for access to all data so they can generate PDF versions for sharing. Corporate data is moving out of the corporate cloud app into these risky applications. Or, even if the app is not risky, it may be accessing cloud resources such as mail, drive, calendar, which contain data considered highly sensitive by the company. For example, the Evernote app for Outlook can be used for saving email data. Now, the app itself is not risky, but the company may not have approved it for employees to use. If that is the case, an introduction of apps in this manner represents a data exfiltration of corporate data.
Reason 2: No Coverage with Existing Controls
Connected Apps establishes a cloud-to-cloud connection with your sanctioned cloud services that is not visible to existing network policies and controls. So, if a company has put in place controls on the web gateway or firewall to block unauthorized file sharing services, then it is still possible for employees to add the connected app from the marketplace and bypass these existing controls. Even the API based DLP policies do not apply to data moving into Connected Apps. All of this means that organizations need to exercise more oversight and control on the usage of Connected apps by their employees.
Reason 3: Shared Responsibility
The Shared Responsibility model applies to Connected Apps as well. Cloud services like Google and Microsoft provide a marketplace for customers to add apps, but they expect the companies to take responsibility for their data and users and ensure that the usage of these connected apps is in line with security and compliance policies.
MVISION Cloud provides comprehensive security through visibility, control, and the ability to troubleshoot third-party applications connected to sanctioned cloud services, such as these marketplace apps. With a database of over 30,000 cloud services, MVISION Cloud provides comprehensive and up-to-date information on Connected Apps plugged into corporate cloud services such as Microsoft 365 and G Suite. Customers can use this visibility to apply controls to block, allow, or selectively allow apps for some users. As large customers deploy Connected Apps to hundreds of thousands of users, MVISION Cloud also provides troubleshooting tools to track activities and add notes, allowing quick diagnosis and resolution of support issues. To learn more, see the brief video below for a deeper look into securing connected apps with MVISION Cloud.
The post 3 Reasons Why Connected Apps are Critical to Enterprise Security appeared first on McAfee Blogs.
Government and Private Sector organizations are transforming their businesses by embracing DevOps principles, microservice design patterns, and container technologies across on-premises, cloud, and hybrid environments. Container adoption is becoming mainstream to drive digital transformation and business growth and to accelerate product and feature velocity. Companies have moved quickly to embrace cloud native applications and infrastructure to take advantage of cloud provider systems and to align their design decisions with cloud properties of scalability, resilience, and security first architectures. The declarative nature of these systems enables numerous advantages in application development and deployment, like faster development and deployment cycles, quicker bug fixes and patches, and consistent build and monitoring workflows. These streamlined and well controlled design principles in automation pipelines lead to faster feature delivery and drive competitive differentiation.
As more enterprises adapt to cloud-native architectures and embark on multi-cloud strategies, demands are changing usage patterns, processes, and organizational structures. However, the unique methods by which application containers are created, deployed, networked, and operated present unique challenges when designing, implementing, and operating security systems for these environments. They are ephemeral, often too numerous to count, talk to each other across nodes and clusters more than they communicate with the outside endpoints, and they are typically part of fast-moving continuous integration/continuous deployment (CI/CD) pipelines. Additionally, development toolchains and operations ecosystems continue to present new ways to develop and package code, secrets, and environment variables. Unfortunately, this also compounds supply chain risks and presents an ever-increasing attack surface.
Lack of a comprehensive container security strategy or often not knowing where to start can be a challenge to effectively address risks presented in these unique ecosystems. While teams have recognized the need to evolve their security toolchains and processes to embrace automation, it is imperative for them to integrate specific security and compliance checks early into their respective DevOps processes. There are legitimate concerns that persist about misconfigurations and runtime risks in cloud native applications, and still too few organizations have a robust security plan in place.
These complex problem definitions have led to the development of a special publication from National Institute of Standards and Technology (NIST) – NIST SP 800-190 Application Security Container Guide. It provides guidelines for securing container applications and infrastructure components, including sectional review of the fundamentals of containers, key risks presented by core components of application container technologies, countermeasures, threat scenario examples, and actionable information for planning, implementing, operating, and maintaining container technologies.
MVISION Cloud Native Application Protection Platform (CNAPP) is a comprehensive device-to-cloud security platform for visibility and control across SaaS, PaaS, & IaaS platforms. It provides deep coverage on cloud native security controls that can be implemented throughout the entire application lifecycle. By mapping all the applicable risk elements and countermeasures from Sections 3 and 4 of NIST SP 800-190 to capabilities within the platform, we want to provide an architectural point of reference to help customers and industry partners automate compliance and implement security best practices for containerized application workloads. This mapping and a detailed review of platform capabilities aligned with key countermeasures can be referenced here.
As outlined in one of the supporting charts in the whitepaper, CNAPP has capabilities that effectively address all the risk elements described in the NIST special publication guidance.
While the breadth of coverage is critical, it is worth noting that the most effective way to secure containerized applications requires embedding security controls into each phase of the container lifecycle. If we leverage the Department of Defense’s Enterprise DevSecOps Reference Design guidance as a point of reference, it describes the DevSecOps lifecycle in terms of nine transition stages: plan, develop, build, test, release, deliver, deploy, operate, and monitor.
The foundational principle of DevSecOps implementations is that the software development lifecycle is not a monolithic linear process. The “big bang” style delivery of the Waterfall SDLC process is replaced with small but more frequent deliveries, so that it is easier to change course as necessary. Each small delivery is accomplished through a fully automated process or semi-automated process with minimal human intervention to accelerate continuous integration and delivery. The DevSecOps lifecycle is adaptable and has many feedback loops for continuous improvement.
Specific to containerized applications and workloads, a more abstract view of a container’s lifecycle spans across three high-level phases of Build, Deploy, and Run.
The “Build” phase centers on what ends up inside the container images in terms of the components and layers that make up an application. Usually created by the developers, security efforts are typically focused on reducing business risk later in the container lifecycle by applying best practices and identifying and eliminating known vulnerabilities early. These assessments can be conducted in an “inner” loop iteratively as developers perform incremental builds and add security linting and automated tests or can be driven via an “outer” feedback loop that’s driven by operational security reviews and penetration testing efforts.
In the “Deploy” phase, developers configure containerized applications for deployment into production. Context grows beyond information about images to include details about configuration options available for orchestrated services. Security efforts in this phase often center around complying with operational best practices, applying least-privilege principles, and identifying misconfigurations to reduce the likelihood and impact of potential compromises.
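A deploy-phase posture check of the kind described here might look like the following sketch, which audits a simplified pod spec represented as a plain dict. The field names mirror Kubernetes securityContext settings, but the check itself is a hypothetical illustration:

```python
# Illustrative deploy-phase posture check: flag common least-privilege
# misconfigurations in a simplified pod spec (a plain dict, not a real
# Kubernetes API object).
def audit_pod(spec):
    issues = []
    for c in spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged"):
            issues.append(f"{c['name']}: runs privileged")
        if not sc.get("runAsNonRoot"):
            issues.append(f"{c['name']}: may run as root")
    return issues

pod = {"containers": [
    {"name": "web", "securityContext": {"runAsNonRoot": True}},
    {"name": "sidecar", "securityContext": {"privileged": True}},
]}
issues = audit_pod(pod)
```

Running checks like this before deployment catches misconfigurations while they are still cheap to fix, rather than after a workload is exposed in production.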
“Runtime” is broadly classified as a separate phase wherein containers go into production with live data, live users, and exposure to networks that could be internal or external in nature. The primary purpose of implementing security during the runtime phase is to protect running applications as well as the underlying container infrastructure by finding and stopping malicious actors in real time.
By applying this understanding of container lifecycle stages to the respective countermeasures that can be implemented and audited within MVISION Cloud, CNAPP customers can establish an optimal security posture and achieve the synergies of shift-left and runtime security models. Security assessments are critically important early in planning and design, where important decisions are made about architecture approach, development tooling, and technology platforms, and where mistakes or misunderstandings can be dangerous and expensive. As DevOps teams move their workloads into the cloud, security teams will need to implement best practices that apply operations, monitoring, and runtime security controls across public, private, and hybrid cloud consumption models.
CNAPP first discovers all the cloud-native components mapped to an application, including hosts, IaaS/PaaS services, containers, and the orchestration context that a container operates within. With the use of native tagging and network flow log analysis, customers can visualize cloud infrastructure interactions across compute, network, and storage components. Additionally, the platform scans cloud-native object and file stores to assess the presence of any sensitive data or malware. Depending on the configuration compliance of the underlying resources and data sensitivity, an aggregate risk score is computed per application, which provides detailed context for an application owner to understand risks and prioritize mitigation efforts.
As a cloud security posture management platform, CNAPP provides a set of capabilities that ensure that assets comply with industry regulations, best practices, and security policies. This includes proactive scanning for vulnerabilities in container images and VMs and ensuring secure container runtime configurations to prevent non-compliant builds from being pushed to production. The same principles apply to orchestrator configurations to help secure how containers get deployed using CI/CD tools. These baseline checks can be augmented with other policy types to ensure file integrity monitoring and configuration hardening of hosts (e.g., no insecure ports or unnecessary services), which help apply defense-in-depth by minimizing the overall attack surface.
Finally, the platform enforces policy-based immutability on running container instances (and hosts) through process-, service-, and application-level whitelists. By leveraging the declarative nature of containerized workloads, threats can be detected during the runtime phase, including any exposure created by misconfigurations, application package vulnerabilities, and runtime anomalies such as execution of a reverse shell or other remote access tools. While segmentation of workloads can be achieved in the build and deploy phases using posture checks for constructs like namespaces, network policies, and container runtime configurations that limit system calls, the same should also be enforced in the runtime phase to detect and respond to malicious activity in an automated and scalable way. The platform defines baselines and behavioral models that can be especially effective for investigating attempts at network reconnaissance, remote code execution due to zero-day application library and package vulnerabilities, and malware callbacks. Additionally, by mapping these threats and incidents to MITRE ATT&CK tactics and techniques, it provides cloud security teams with a common taxonomy regardless of the underlying cloud application or individual component. This helps them extend their processes and security incident runbooks to the cloud, including their ability to remediate security misconfigurations and preemptively address all the container risk categories outlined in NIST 800-190.
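Allowlist-based runtime immutability can be illustrated with a minimal sketch. The baseline process set and drift check below are assumptions for illustration, not MVISION Cloud’s actual mechanism:

```python
# Minimal sketch of runtime immutability via a process allowlist:
# processes observed outside the workload's declared baseline are
# surfaced as alerts. Baseline and process names are illustrative.
BASELINE = {"nginx", "nginx-worker", "sh"}

def detect_drift(observed_processes, baseline=BASELINE):
    """Return processes not present in the declared baseline, sorted."""
    return sorted(set(observed_processes) - baseline)

# 'nc' appearing in a web container could indicate a reverse shell.
alerts = detect_drift(["nginx", "sh", "nc"])
```

Because a containerized workload declares what it should be running, anything outside that declaration is a strong anomaly signal, which is what makes this model tractable at scale.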
The post Securing Containers with NIST 800-190 and MVISION CNAPP appeared first on McAfee Blogs.
The move to a distributed workforce came suddenly and swiftly. In February 2020, less than 40% of companies allowed most of their employees to work from home one day a week. By April, 77% of companies had most of their employees working exclusively from home.
Organizations have been in the midst of digital transformation projects for years, but this development represented a massive test. Most organizations were pleasantly surprised to see that their employees could remain productive while working from home, thanks to successful cloud migration projects and the adoption of various mobility and remote access technologies. At the same time, companies have grown worried that they have far less visibility into data on employees’ systems when those employees work remotely. Traditional network DLP can protect data while it traverses the network up to the corporate edge, but it has little visibility into data once it leaves the corporate network, and its effectiveness is further limited when the workforce is distributed.
More than three-quarters of CIOs are concerned with the impact that this increased data sprawl is having on security. Although roughly half of all corporate data was stored in the cloud last year, only 36% of companies could enforce data protection policies there. Many organizations therefore forced home-based users to hairpin all traffic back to the corporate data center via VPN so that it could be protected by the network data loss prevention (DLP) system. This maintained security, but at the cost of poor performance and reduced worker productivity.
Cloud-native security is part of the solution
Organizations that employed cloud-based security technologies like a Cloud Access Security Broker (CASB), DLP, or Secure Web Gateway (SWG) could enable their users to perform their jobs with fast and secure direct-to-cloud access. However, this still leads to headaches: IT organizations have to manage multiple disparate solutions, while users face latency as their traffic bounces between multiple siloed technologies before they can access their data.
The Secure Access Service Edge (SASE) presents a solution to this dilemma by providing a framework for organizations to bring all of these technologies together into a single integrated cloud service. End users enjoy low-latency access to the cloud, while IT management and costs are simplified. So everyone wins, right? Not entirely.
Many SASE proponents posit that the best way to architect a distributed Work From Home environment would be to have all security functionality in the cloud at the “service edge”, while end user devices have only a small agent to redirect traffic to that service edge. However, this model poses a data protection dilemma. While a cloud-delivered service can extend data protection to data centers, cloud applications, and web traffic, there are a number of blind spots:
- Every remote worker’s home is now a remote office with a range of unmanaged, unsecured devices like printers, storage drives, and peripherals that can be compromised or be used to exfiltrate data.
- Attached devices like USB keys can be used to get data off of a corporate device and beyond the reach of any data protection controls.
- Cloud applications like Webex, Dropbox, and Zoom all have desktop companion apps that enable actions like file syncing or screen/file sharing; these websocket apps run locally on the user’s system and are not subject to cloud-based data protection policies.
These blind spots can only be addressed by endpoint-based data loss prevention (DLP) that enforces data protection policy on the user’s device. This is not dissimilar to how SASE frameworks rely on SD-WAN customer premises equipment (CPE) that perform essential network flow functionality at branch office locations. Therefore, it’s imperative to look for SASE solutions that include endpoint DLP coverage.
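As a rough illustration of the endpoint DLP decision described above, the sketch below blocks copies of sensitive-tagged files to removable media. The tag names and destination labels are hypothetical, not any product’s actual policy schema:

```python
# Hedged sketch of an endpoint DLP policy decision: deny copies of
# files carrying sensitive classification tags to removable media.
# Tags and destination names are invented for illustration.
SENSITIVE_TAGS = {"confidential", "pii"}

def allow_copy(file_tags, destination):
    """Return False when a sensitive file is headed to removable media."""
    if destination == "removable_media" and SENSITIVE_TAGS & set(file_tags):
        return False
    return True

ok = allow_copy(["pii"], "removable_media")
```

The key point is that this decision runs on the device itself, so it still applies when the user is off the corporate network and outside the reach of cloud-delivered controls.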
Bringing it all together is the key
It is easy to say that, to address the challenges of cloud transformation and the remote workforce, existing network DLP solutions – with their dedicated management interface, data classifications, and policy workflows – need to be accompanied by similar capabilities in the cloud, and then again on the endpoint. Of course, that’s completely impractical when IT organizations are already struggling to deal with the status quo on finite budgets and with limited skilled personnel. Not only is it impractical, but it undermines the consolidation, simplification, and cost reduction promised both by digital transformation and by the SASE framework.
The answer to this dilemma is a comprehensive data protection solution that encompasses networks, devices, and the cloud, something that is uniquely delivered by McAfee MVISION Unified Cloud Edge (UCE). MVISION UCE is a cloud-native solution that seamlessly converges core security technologies such as Data Loss Prevention (DLP), cloud access security broker (CASB) and next-gen secure web gateway (SWG) to help accelerate SASE adoption. MVISION UCE features multi-vector data protection that features unified data classification and incident management across the network, sanctioned and unsanctioned Shadow IT cloud applications, web traffic, and equally important, endpoint DLP. This provides corporate information-security teams the necessary visibility, control and management capability to secure home-based and mobile workers as they access data anywhere.
To manage data security of a distributed workforce, linking device security to corporate policy becomes extremely important. With a managed DLP agent on the device, IT security can know where sensitive data exists, block untrusted services and removable media, protect against cloud services and desktop apps, and educate employees to potential dangers.
Historically, data protection has focused on a central point like the network or the cloud because implementing it on the device has been difficult. However, with McAfee MVISION Unified Cloud Edge (UCE), DLP becomes an easy-to-deliver feature.
Centrally managed by McAfee MVISION ePO, McAfee DLP can be easily deployed to endpoints. With its unique device-to-cloud DLP features, on-prem DLP policies can be extended to the cloud with a single click, in under a minute. Shared data classification tags ensure consistent multi-environment protection for your most sensitive data across endpoints, network, and cloud.
Incorporating security into the cloud and the edge, and delivering data protection at the endpoint, is the only way to truly deliver on what SASE promises and unlock your remote workforce. Looking to the future, a widely distributed workforce is here to stay. Companies need to take steps to secure devices and data wherever they are.
To find out more, please visit www.mcafee.com/unifiedcloud.
The post Think Beyond the Edge: Why SASE is Incomplete Without Endpoint DLP appeared first on McAfee Blogs.
Malicious actors are increasingly taking advantage of the burgeoning at-home workforce and expanding use of cloud services to deliver malware and gain access to sensitive data. According to an Analysis Report (AR20-268A) from the Cybersecurity and Infrastructure Security Agency (CISA), this new normal work environment has put federal agencies at risk of falling victim to cyber-attacks that exploit their use of Microsoft Office 365 (O365) and misuse their VPN remote access services.
McAfee’s global network of over a billion threat sensors affords its threat researchers the unique advantage of being able to thoroughly analyze dozens of cyber-attacks of this kind. Based on this analysis, McAfee supports CISA’s recommendations to help prevent adversaries from successfully establishing persistence in agencies’ networks, executing malware, and exfiltrating data. However, McAfee also asserts that the nature of this environment demands that additional countermeasures be implemented to quickly detect, block and respond to exploits originating from authorized cloud services.
Read on to learn from McAfee’s analysis of these attacks and understand how federal agencies can use cloud access security broker (CASB) and endpoint threat detection and response (EDR) solutions to detect and mitigate such attacks before they have a chance to inflict serious damage upon their organizations.
The Anatomy of a Cloud Services Attack
McAfee’s analysis supports CISA’s findings that adversaries frequently attempt to gain access to organizations’ networks by obtaining valid access credentials for multiple users’ O365 accounts and domain administrator accounts, often via vulnerabilities in unpatched VPN servers. The threat actor then uses the credentials to log into a user’s O365 account from an anomalous IP address, browses pages on SharePoint sites, and attempts to download content. Next, the threat actor connects multiple times from a different IP address to the agency’s Virtual Private Network (VPN) server until a connection succeeds.
Once inside the network, the attacker could:
- Begin performing discovery and enumerating the network
- Establish persistence in the network
- Execute local command line processes and multi-stage malware on a file server
- Exfiltrate data
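The kind of anomalous-login detection that surfaces the start of this attack chain can be sketched as follows. The login records and the idea of keying on source country are illustrative assumptions, not CISA’s or McAfee’s actual detection logic:

```python
# Hedged sketch: flag logins whose source country differs from a
# user's previously observed history. Records and countries are
# invented for illustration.
from collections import defaultdict

def flag_anomalous_logins(logins):
    """logins: iterable of (user, country) tuples in time order."""
    seen, flagged = defaultdict(set), []
    for user, country in logins:
        if seen[user] and country not in seen[user]:
            flagged.append((user, country))
        seen[user].add(country)
    return flagged

flags = flag_anomalous_logins([
    ("alice", "US"), ("alice", "US"), ("alice", "RU"), ("bob", "US"),
])
```

Real systems weigh many more signals (device, time of day, impossible travel), but even this simple per-user history is enough to surface the anomalous-IP login pattern described above.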
Basic SOC Best Practices
McAfee’s comprehensive analysis of these attacks supports CISA’s proposed best practices to prevent or mitigate such cyber-attacks. These recommendations include:
- Hardening account credentials with multi-factor authentication,
- Implementing the principle of “least privilege” for data access,
- Monitoring network traffic for unusual activity,
- Patching early and often.
While these recommendations provide a solid foundation for a strong cybersecurity program, these controls by themselves may not go far enough to prevent more sophisticated adversaries from exploiting and weaponizing cloud services to gain a foothold within an enterprise.
Why Best Practices Should Include CASB and EDR
Organizations will gain a running start to identifying and thwarting the attacks in question by implementing a full-featured CASB such as McAfee MVISION Cloud, and an advanced EDR solution, such as McAfee MVISION Endpoint Threat Detection and Response.
Deploying MVISION Cloud for Office 365 enables agencies’ SOC analysts to assert greater control over their data and user activity in Office 365—control that can hasten identification of compromised accounts and resolution of threats. MVISION Cloud takes note of all user and administrative activity occurring within cloud services and compares it to a threshold based either on the user’s specific behavior or the norm for the entire organization. If an activity exceeds the threshold, it generates an anomaly notification. For instance, using geo-location analytics to visualize global access patterns, MVISION Cloud can immediately alert agency analysts to anomalies such as instances of Office 365 access originating from IP addresses located in atypical geographic areas.
When specific anomalies appear concurrently—e.g., a Brute Force anomaly and an unusual Data Access event—MVISION Cloud automatically generates a Threat. In the attacks McAfee analyzed, Threats would have been generated early on since the CASB’s user behavior analytics would have identified the cyber actor’s various activities as suspicious. Using MVISION Cloud’s activity monitoring dashboard and built-in audit trail of all user and administrator activities, SOC analysts can detect and analyze anomalous behaviors across multiple dimensions to more rapidly understand what exactly is occurring when and to what systems—and whether an incident concerns a compromised account, insider threat, privileged user threat, and/or malware—to shrink the gap to remediation.
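In the spirit of the concurrent-anomaly escalation described above, a toy correlation rule might look like this. The anomaly types, record layout, and one-hour window are assumptions for illustration, not MVISION Cloud’s actual threat logic:

```python
# Illustrative correlation rule: when two different anomaly types fire
# for the same user within a time window, escalate them to a threat.
# Timestamps are seconds; records are (timestamp, user, anomaly_type).
def correlate(anomalies, window=3600):
    """anomalies: time-sorted list of (timestamp, user, type) tuples."""
    threats = []
    for i, (t1, u1, k1) in enumerate(anomalies):
        for t2, u2, k2 in anomalies[i + 1:]:
            if u2 == u1 and k2 != k1 and t2 - t1 <= window:
                threats.append((u1, k1, k2))
    return threats

threats = correlate([
    (0, "alice", "brute_force"),
    (1200, "alice", "unusual_data_access"),
    (9000, "bob", "brute_force"),
])
```

The value of correlation is precisely what the text describes: a single anomaly may be noise, but two different anomaly types against the same account in a short window is a much stronger signal of a compromised account.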
In addition, with MVISION Cloud, an agency security analyst can clearly see how each cloud security incident maps to MITRE ATT&CK tactics and techniques, which not only accelerates the entire forensics process but also allows security managers to defend against similar attacks with greater precision in the future.
Furthermore, using MVISION Cloud for Office 365, agencies can create and enforce policies that prevent the uploading of sensitive data to Office 365 or downloading of sensitive data to unmanaged devices. With such policies in place, an attacker’s attempt to exfiltrate sensitive data will be mitigated.
In addition to deploying a CASB, implementing an EDR solution like McAfee MVISION EDR to monitor endpoints centrally and continuously—including remote devices—helps organizations defend themselves from such attacks. With MVISION EDR, agency SOC analysts have at their fingertips advanced analytics and visualizations that broaden detection of unusual behavior and anomalies on the endpoint. They are also able to grasp the implications of alerts more quickly since the information is presented in a format that reduces noise and simplifies investigation—so much so that even novice analysts can analyze at a higher level. AI-guided investigations within the solution can also provide further insights into attacks.
With a threat landscape that is constantly evolving and attack surfaces that continue to expand with increased use of the cloud, it is now more important than ever to embrace CASB and EDR solutions. They have become critical tools to actively defend today’s government agencies and other large enterprises.
The post How CASB and EDR Protect Federal Agencies in the Age of Work from Home appeared first on McAfee Blogs.
There are new and expanding opportunities for women’s participation in cybersecurity globally, as women take on leadership roles in greater numbers. In recent years, the international community has recognized the important contributions of women to cybersecurity; however, equal representation of women is nowhere near a reality, especially at senior levels.
The RSA Conference USA 2019 held in San Francisco — which is the world’s largest cybersecurity event with more than 40,000 people and 740 speakers — is a decent measuring stick for representation of women in this field. “At this year’s Conference 46 percent of all keynote speakers were women,” according to Sandra Toms, VP and curator, RSA Conference, in a blog she posted on the last day of this year’s event. “While RSAC keynotes saw near gender parity this year, women made up 32 percent of our overall speakers,” noted Toms.
Forrester also predicts that the number of women CISOs at Fortune 500 companies will rise to 20 percent in 2019, compared with 13 percent in 2017. This is consistent with new research from Boardroom Insiders which states that 20 percent of Fortune 500 global chief information officers (CIOs) are now women — the largest percentage ever.
Research from Cybersecurity Ventures, which first appeared in the media early last year, predicts that women will represent more than 20 percent of the global cybersecurity workforce by the end of 2019. This prediction is based on in-depth discussions with numerous industry experts in cybersecurity and on analysis and synthesis of third-party reports, surveys, and media sources.
Either way, the 20 percent figure is still way too low, and our industry needs to continue pushing for more women in cyber. Heightened awareness on the topic — led by numerous women in cyber forums and initiatives — has helped move the needle in a positive direction.
Thursday, November 5, 2020
10am PT | 12pm CT | 1pm ET
Meet the speakers:
Chief Information Security Officer
Alexandra Heckler is Chief Information Security Officer at Collins Aerospace, where she leads a diverse team of cyber strategy and defense experts to protect against cyber threats and ensure regulatory compliance. Prior to joining Collins, Alexandra led Booz Allen’s Commercial Aerospace practice, building and overseeing multi-disciplinary teams to advise C-level clients on cybersecurity and digital transformation initiatives. Her work centered on helping aerospace manufacturers manage the convergence of cyber risk across their increasingly complex business ecosystem, including IT, OT and connected products. Alexandra also helped build and led the firm’s automotive practice, working with OEMs, suppliers and the Auto-ISAC to drive industry-leading vehicle cyber security capabilities. During her first few years at Booz Allen, she supported technology, innovation and risk analysis initiatives across U.S. government clients. Throughout her tenure, she engaged in Booz Allen’s Women in Cyber—a company-wide initiative to attract, develop and retain female cyber talent—and supported the firm’s partnership with the Executive Women’s Forum. She also served as Finance and Audit Chair on the Executive Committee of the newly-founded Space-ISAC. Alexandra holds a B.S. in Foreign Service with an Honors Certificate in International Business Diplomacy, and a M.A. in Communication, Culture and Technology from Georgetown University.
Sr. Director/CISO of IT Risk Management
Diane Brown is the Sr. Director/CISO of IT Risk Management at Ulta Beauty located in Bolingbrook, IL. In this role, Diane is accountable for the security of the retail stores, cyber-security, infrastructure, security/network engineering, data protection, third-party risk assessments, Directory Services, SOX & PCI compliance, application security, security awareness and Identity Management. Diane has more than three decades of IT experience in the retail environment and has honed her expertise in information technology leadership with a focus on risk management for the past 15 years. She values her strategic alliances with the business focusing on delivery of secure means to deploy new technologies, motivating people and managing an expanding technology portfolio. She holds a Bachelor’s degree in Information Security and CISSP/ISSAP certifications and is a member of the Executive Security Council for NRF and one of the original members of the RH-ISAC.
Director, Industry Solutions Americas Solutions Architecture & Customer Success
Amazon Web Services
Elizabeth has been with AWS for 5-1/2 years and leads Industry Solutions within the Americas Solutions Architecture and Customer Success organization. Elizabeth’s team of Specialist Solutions Architects provide industry specific depth for customers in the following segments: Games, Private Equity, Media & Entertainment, Manufacturing/Supply Chain, Healthcare Life Sciences, Financial Services, and Retail. They focus on accelerating cloud migration and building customer confidence and capability on the AWS platform through expert, prescriptive guidance on Foundations (Security, Identity, and Networking), Cost Optimization, Developer Experience, Cloud Migrations and Modernization.
Prior to her role at AWS, Elizabeth led the pre-sales Oracle Enterprise Architecture team within Oracle’s North America Public Sector Consulting organization. She helped customers maximize their investment in Oracle technologies, align business initiatives with the right IT solutions, and mitigate risk of implementations, focused on Oracle Engineered Systems, Database, and Infrastructure solutions.
Elizabeth got her start in technology with Metropolitan Regional Information Systems (MRIS), the nation’s largest Multiple Listing Service (MLS) and real estate information provider. She spent 15 years at this small company across multiple functions: DBA, data architect, system administrator, technical program lead, and operations leader. Most notably, she led design, deployment and growth of the patented database behind the Cornerstone Universal Data Exchange.
She earned a bachelor’s degree in International Business from Eckerd College in St. Petersburg, Florida.
Director of Cyber Risk & Security Services
American Electric Power
Deana Elizondo is the Director of Cyber Risk & Security Services at American Electric Power. She has been with AEP for 16 years and has spent the last 11 years in Cybersecurity. Deana’s organization includes Security Ambassadors, Security Education & Regional Support, Data Protection & Privacy, Enterprise Content Management, and Strategy, Risk & Policies. Deana’s passion is growing and developing her leaders and team members, as well as educating the entire AEP workforce on the value and benefits of reducing Security risk.
Aderonke (Addie) Adeniji
Director Information Assurance Office of Cybersecurity
House of Representatives
Addie Adeniji is a seasoned cybersecurity professional with expertise in Federal IT security governance, risk and compliance (GRC). Currently, she serves as the Director of Information Assurance, within the Office of Cybersecurity, for the U.S. House of Representatives. In this role, she oversees Information Assurance standard and process development and directs risk management and audit compliance efforts across the House. Ms. Adeniji works with House staff to identify, evaluate and report risks to ensure the House maintains a strengthened security risk posture. Her past experience includes security consulting within the Federal health (i.e., FDA, NIH, and HHS headquarters) and energy domains.
Brooke Noelke (Moderator)
Senior Enterprise Cloud Security Strategist/Architect
Brooke joins McAfee’s Customer Cloud Security Architecture team after leading McAfee IT’s cloud technical architects and business-facing cloud service management efforts, driving McAfee’s cloud transformation and migration of 70% of our applications to the cloud. She’s spent most of her career in technical leadership roles in cloud strategy, architecture and engineering, spanning professional services strategy through IT delivery leadership. She believes cloud services have already rewritten our IT universe, and we’re all just catching up… but that the cloud “easy buttons” we’re handing developers and business functions aren’t as risk-free as commonly assumed. Her mission is to make the secure path the easy path to deploying new products, solutions and intelligence in the cloud, through enablement of organizational change, agile automation and well-designed, reusable cloud security reference architectures.
McAfee MVISION Cloud was the first to market with a CASB solution to address the need to secure corporate data in the cloud. Since then, Gartner has published several reports dedicated to the CASB market, which is a testament to the critical role CASBs play in enabling enterprise cloud adoption. Today, Gartner named McAfee a Leader in the 2020 Gartner Magic Quadrant for Cloud Access Security Brokers (CASB), the fourth time McAfee has been recognized as a Leader in its evaluation of CASB vendors.
Cloud access security brokers have become an essential element of any cloud security strategy, helping organizations govern the use of cloud and protect sensitive data in the cloud. Security and risk management leaders concerned about their organizations’ cloud use should investigate CASBs.
In its fourth Magic Quadrant for Cloud Access Security Brokers, Gartner evaluated eight vendors that met its inclusion criteria. MVISION Cloud, as part of the MVISION family of products at McAfee, is recognized as a Leader in the report for the fourth year in a row. To learn more about how Gartner assessed the market and MVISION Cloud, download your copy of the report here.
This year, Gartner followed a highly rigorous process to compile its Gartner Magic Quadrant for Cloud Access Security Brokers (CASB) report, relying on numerous inputs, including these materials from vendors to understand their product offerings:
- Questionnaire – A 300+ point questionnaire resulting in hundreds of pages of responses
- Financials – Detailed company financial data covering CASB revenue
- Documentation – Access to all product documentation
- Customer Peer Reviews – Gartner encourages customers to submit anonymized reviews via their Peer Insights program. You can read them here.
- Demo – Covering over 50 Gartner-defined use cases to validate product capabilities
In 2020, McAfee made several updates and additions to its solutions, strengthening its position as an industry leader, including:
- MVISION Cloud added capabilities to help organizations protect cloud-native application infrastructure with MVISION Cloud Native Infrastructure Security.
- Unified Cloud Edge (UCE) was introduced to the McAfee MVISION Cloud platform, making McAfee the only vendor to provide a converged security solution, to simplify the adoption of Secure Access Service Edge (SASE) architecture.
- The introduction of the MITRE ATT&CK framework into MVISION Cloud made McAfee the first CASB provider to tag and visualize cloud security events within the ATT&CK framework. McAfee then teamed with the University of California, Berkeley’s Center for Long-Term Cybersecurity (CLTC) to show how MITRE ATT&CK framework adoption improves cloud security.
- Support of encryption enhancements in Microsoft Teams, becoming the only CASB that is Certified for Microsoft Teams.
- McAfee became the first CASB platform provider to be granted a Federal Risk and Authorization Management Program (FedRAMP) High Impact Provisional Authority to Operate (P-ATO) from the U.S. Government’s Joint Authorization Board (JAB).
- McAfee’s CASB Connect Program, which allows cloud service providers or partners to build lightweight API connections to McAfee MVISION Cloud, leading several new service providers to adopt McAfee MVISION Cloud.
McAfee also received recognition as the only vendor to be named the January 2020 Gartner Peer Insights Customers’ Choice for Cloud Access Security Brokers based on customer feedback and ratings for McAfee MVISION Cloud.
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
The Gartner Peer Insights Logo is a trademark and service mark of Gartner, Inc., and/or its affiliates, and is used herein with permission. All rights reserved.
Gartner Peer Insights ‘Voice of the Customer’: Cloud Access Security Brokers, Peer Contributors, 13 March 2020. Gartner Peer Insights reviews constitute the subjective opinions of individual end users based on their own experiences and do not represent the views of Gartner or its affiliates. Gartner Peer Insights Customers’ Choice constitute the subjective opinions of individual end-user reviews, ratings, and data applied against a documented methodology; they neither represent the views of, nor constitute an endorsement by, Gartner or its affiliates.
The post McAfee Named a Leader in the 2020 Gartner Magic Quadrant for CASB appeared first on McAfee Blogs.
Most businesses cannot survive without being connected to the internet or the cloud. Websites and cloud services enable employees to communicate, collaborate, research, organize, archive, create, and be productive.
Yet, the digital connection is also a threat. External attacks on cloud accounts increased by an astounding 630% in 2019. Ransomware and phishing remain major headaches for IT security teams, and as users and resources have migrated outside of the traditional network security perimeter, it’s become increasingly difficult to protect users from clicking on a link or opening a malicious file.
This challenge has increased the tension between two IT mandates—allowing unfettered access to necessary services, while preventing attacks and blocking access to malicious sites. Automation helps significantly with modern security pipelines blocking about 99.5% of malicious and suspicious activity by filtering known bad files and sites, as well as using sophisticated anti-malware scanning and behavioral analytics.
Security is a lot of work
However, the remaining half of 1% still represents a significant number of sites and potential threats that require time for a team of security analysts to triage. Therefore, IT managers are faced with the challenge of devising balanced security policies. Many companies default to blocking unknown traffic, but over-blocking websites and content can hinder user productivity while creating a surge in help-desk tickets as users attempt to reach legitimate sites that have not yet been classified. On the flip side, web policies that allow access too freely greatly increase the likelihood of serious, business-threatening security incidents.
With a focus on digital transformation, accelerated by the change in work habits and locations during the pandemic, companies need flexible, transparent security controls that enable safe user access to critical web and cloud resources without overwhelming security teams with constant help desk calls, policy changes, and manual triaging. Remote Browser Isolation – if implemented properly – can help achieve this.
While security solutions leveraging URL categorization, domain reputation, antivirus, and sandboxes can stop 99.5% of threats, remote browser isolation (RBI) can handle the remaining unknown events, rather than the common strategy of choosing to rigidly block or allow everything. RBI allows web content to be delivered and viewed in a safe environment, while analysis is conducted in the background. Using RBI, any request to an unknown site or URL that remains suspicious after traversing the web protection defense-in-depth pipeline will be rendered remotely, preventing any impact to a user’s system in the event the content is malicious.
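As a rough illustration of the triage logic described above (the host lists and function names here are invented, not McAfee's actual pipeline), the decision could be sketched like this: known-bad destinations are blocked, known-good ones allowed, and anything still unclassified is isolated rather than rigidly blocked or allowed.

```python
# Hypothetical sketch of a web-protection triage decision. Requests to
# known-bad hosts are blocked, known-good hosts are allowed directly,
# and unclassified hosts are rendered via remote browser isolation (RBI).

KNOWN_BAD = {"malware.example.net"}
KNOWN_GOOD = {"intranet.example.com", "docs.example.com"}

def triage(url_host: str) -> str:
    """Return the action for a requested host: block, allow, or isolate."""
    if url_host in KNOWN_BAD:
        return "block"      # reputation / anti-malware pipeline caught it
    if url_host in KNOWN_GOOD:
        return "allow"      # classified and trusted
    return "isolate"        # unknown: render remotely, analyze in background

print(triage("malware.example.net"))   # block
print(triage("new-site.example.org"))  # isolate
```

The point of the sketch is the third branch: the unknown remainder gets a safe middle path instead of a binary allow/block decision.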
Relying on RBI
Remote browser isolation blocks malicious code from running on an employee’s system simply because they clicked a link. The technology also prevents pages from using unprotected cookies to try to gain access to protected web services and sites. Such protections are particularly important in the age of ransomware, when an inadvertent click on a malicious link can lead to significant damage to a company’s digital assets.
Given the benefits of remote browser isolation, some companies have deployed the technology to render every site. While this can very effectively mitigate security risk, isolating all web and cloud traffic demands considerable computing resources and is prohibitively expensive from a license cost point of view.
By integrating remote browser isolation (RBI) technology directly into our MVISION Unified Cloud Edge (UCE) solution, McAfee integrates RBI with the existing triage pipeline. This means that the rest of the threat protection stack – including global threat intelligence, anti-malware, reputation analysis, and emulation sandboxing – can filter out the majority of threats while only one out of every 200 requests needs to be handled using the RBI. This dramatically reduces overhead. McAfee’s UCE makes this approach dead simple: rather than positioning remote browser isolation as a costly and complicated add-on service, it is included with every MVISION UCE license.
Full Protection for High-Risk Individuals
However, there are specific people inside a company—such as the CEO or the finance department—with whom you cannot take chances. For those privileged users, full isolation from potential internet threats is also available. This approach ensures full virtual segmentation of the user’s system from the internet and shields it against any potential danger, enabling them to use the web and cloud freely and productively.
McAfee’s approach greatly reduces the risk of users being compromised by phishing campaigns or inadvertently getting infected by ransomware – such attacks can incur substantial costs and impact an organization’s ability to operate. At the same time, organizations benefit from a workforce that is freely able to access the web and cloud resources they need to be productive, while IT staff are freed from the burden of rigid web policies and constantly addressing help-desk tickets.
Want to know more? Check out our RBI demonstration.
The post Catch the Most Sophisticated Attacks Without Slowing Down Your Users appeared first on McAfee Blogs.
You’ve more than likely heard the phrase “with great power comes great responsibility.” Alternatively called the “Peter Parker Principle,” this phrase became well known in popular culture mostly due to Spider-Man comics and movies, where Peter Parker is the protagonist. The phrase is so well known today that it actually has its own Wikipedia article. The gist of the phrase is that if you’ve been empowered to make a change for the better, you have a moral obligation to do so.
However, what I’ve noticed as I talk to customers about cloud security, especially security for Infrastructure as a Service (IaaS), is a phenomenon I’m dubbing the “John McClane Principle” (the name has been changed to protect the innocent).
The John McClane Principle happens when someone has been given responsibility for fixing something but at the same time has not been empowered to make necessary changes. At the surface this scenario may sound absurd, but I bet many InfoSec teams can sympathize with the problem. The conversation goes something like this:
- CEO to InfoSec: You need to make sure we’re secure in the cloud. I don’t want to be the next [insert latest breach here].
- InfoSec to CEO: Yeah, so I’ve looked at how we’re using the cloud and the vast majority of our problems are from a lack of processes and knowledge. We have a ton of teams that are doing their own thing in the cloud, and I don’t have complete visibility into what they’re doing.
- CEO to InfoSec: Great, go fix it.
- InfoSec to CEO: Well, the problem is I don’t have any say over those teams. They can do whatever they want. To fix the problem they’re going to have to change how they use the cloud. We need to get buy-in from managers, but those managers have told me they’re not interested in changing anything because it’ll slow things down.
- CEO to InfoSec: I’m sure you’ll figure it out. Good luck, and we better not have a breach.
That’s when “with no power comes more responsibility” rings true.
And why is that? The reason is that IaaS has fundamentally changed how we consume IT, and along with that, how we scale security. No longer do we submit purchase requests and go through a lengthy process to spin up infrastructure resources. Now anyone with a credit card can spin up the equivalent of a data center within minutes, anywhere across the globe.
This agility, however, introduced some unintended changes for InfoSec: in order to scale, cloud security cannot be the sole responsibility of one team. Rather, cloud security must be embedded in process and depends on collaboration between development, architects, and operations. These teams now have a more significant role to play in cloud security, and in many cases are the only ones who can implement change to enhance security. InfoSec teams now act as Sherpas instead of gatekeepers, making sure every team is marching at the same, secure pace.
However, as John McClane can tell you, the fact that more teams look after cloud security doesn’t necessarily mean you have a better solution. In fact, having to coordinate across multiple teams with different priorities can make security even more complex and slow you down. Hence the need for a streamlined security solution that facilitates collaboration between developers, architects, and InfoSec while providing guardrails, so nothing slips through the cracks.
With that, I’m excited to announce our new cloud security service built especially for customers moving and developing applications in the cloud. We call it MVISION Cloud Native Application Protection Platform – or just CNAPP because every service deserves an acronym.
What is CNAPP? CNAPP is a new security service we’ve just announced today that combines solutions from Cloud Security Posture Management (CSPM), Cloud Workload Protection Platform (CWPP), Data Loss Prevention (DLP), and Application Protection into a single solution. Now in beta with a target launch date of Q1, 2021, we built CNAPP to provide InfoSec teams broad visibility into their cloud native applications. For us, the goal wasn’t how do we slow things down to make sure everything is secure; rather how do we enable InfoSec teams the visibility and context they need for cloud security while allowing dev teams to move fast.
Let me briefly describe what features CNAPP has and list some features that are customer favorites.
The vast majority of breaches in IaaS today are due to service misconfigurations. Gartner famously said in 2016 that “95% of cloud security failures will be the customer’s fault.” Just last year Gartner updated that quote to say “99% of cloud security failures will be the customers’ fault.” I’m waiting for the day when Gartner says “105% will be the customer’s fault.”
Why is the percentage so high? There are multiple reasons, but we hear a lot from our customers that there is a huge lack of knowledge on how to secure new services. Each cloud provider releases new services and capabilities at a dizzying pace with no blockers for adoption. Unfortunately, the industry hasn’t kept pace in building a workforce that knows and understands how best to configure these new services and capabilities. CNAPP provides customers with the ability to immediately audit all cloud services and benchmark those services against best security practices and industry standards like CIS Foundations, PCI, HIPAA, and NIST.
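As an illustration-only sketch of what such a benchmark audit does (the resource fields and rules below are invented, not actual CIS checks), each cloud resource's configuration is evaluated against a set of rules and every failure becomes a finding:

```python
# Toy CSPM-style audit: invented resource configs checked against
# simple benchmark rules; failures are reported as findings.

RESOURCES = [
    {"name": "logs-bucket", "type": "storage", "public": False, "encrypted": True},
    {"name": "backup-bucket", "type": "storage", "public": True, "encrypted": False},
]

RULES = [
    ("storage must not be public", lambda r: not r["public"]),
    ("storage must be encrypted at rest", lambda r: r["encrypted"]),
]

def audit(resources):
    """Return (resource_name, rule_description) for every failed check."""
    findings = []
    for res in resources:
        for description, check in RULES:
            if not check(res):
                findings.append((res["name"], description))
    return findings

for name, issue in audit(RESOURCES):
    print(f"{name}: {issue}")
```

A real CSPM service evaluates hundreds of rules per provider and maps each one back to the benchmark section it came from, but the detect-and-report loop is the same shape.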
Within that audit (we call it a security incident), CNAPP provides detailed information on how to reconfigure services to improve security, but the service also provides the ability to assign the security incident to dev teams with SLAs so there’s no ambiguity on who owns what and what needs to change. All of these workflows can be automated so multiple teams are empowered in near real-time to find and fix problems.
Additionally, CNAPP has a custom policy feature where customers can create policies for identifying risky misconfigurations unique to their environments as well as integrations with developer tools like Jenkins, Bitbucket, and GitHub that provide feedback on deployments that don’t meet security standards.
IaaS platforms have become catalysts for Open Source Software (OSS) like Linux (OS), Docker (container), and Kubernetes (orchestration). The challenge with using these tools is the inherent risk of Common Vulnerabilities and Exposures (CVE) found in software libraries and misconfigurations in deploying new services. Another famous quote by Gartner is that “70% of attacks against containers will be from known vulnerabilities and misconfigurations that could have been remediated.” But how does the InfoSec team quickly spot those vulnerabilities and misconfigurations, especially in ephemeral environments with multiple developer teams pushing frequent releases into CI/CD pipelines?
Based on our acquisition of NanoSec last year, CNAPP provides full workload protection by identifying all compute instances, containers, and container services running in IaaS while identifying critical CVEs, misconfigurations in both repository and production container services, and introducing some new protection features. These features include application allow listing, OS hardening, and file integrity monitoring with plans to introduce nano-segmentation and on-prem support soon.
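A toy sketch of the CVE-matching half of workload protection (the package names, versions, and the advisory entry are made up for illustration): installed package versions in an image are compared against an advisory list, and anything older than the fixed version is flagged.

```python
# Hypothetical vulnerability check: compare installed package versions
# against a small, invented advisory list.

INSTALLED = {"openssl": "1.1.1f", "busybox": "1.31.0"}

# advisory: package -> (first fixed version, CVE id) -- example data only
ADVISORIES = {"openssl": ("1.1.1g", "CVE-EXAMPLE-0001")}

def version_tuple(v: str):
    """Crude split-on-dots comparison; real scanners use proper
    version semantics (epochs, distro suffixes, etc.)."""
    return tuple(v.split("."))

def vulnerable_packages(installed, advisories):
    hits = []
    for pkg, version in installed.items():
        if pkg in advisories:
            fixed, cve = advisories[pkg]
            if version_tuple(version) < version_tuple(fixed):
                hits.append((pkg, version, cve))
    return hits

print(vulnerable_packages(INSTALLED, ADVISORIES))
```

In an ephemeral container environment this check has to run continuously, both against image repositories (before deploy) and against what is actually running in production.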
We’ve had a great time working jointly with our customers to release CNAPP. I’d like to highlight some of the use cases that have proven to be game changers for our customers.
- In-tenant DLP scans: many of our customers have legitimate use cases for publicly exposed cloud storage services (sometimes referred to as buckets), but at the same time need to ensure those buckets don’t have sensitive data. The challenge with using DLP for these services is many solutions available in the market copy the data into the vendor’s own environment. This increases customer costs with egress charges and also introduces security challenges with data transit. CNAPP allows customers to perform in-tenant DLP scans where the data never leaves the IaaS environment, making the process more secure and less expensive.
- MITRE ATT&CK Framework for Cloud: the language of Security Operation Centers (SOC) is MITRE, but there is a lot of nuance in how cloud security incidents fit into this framework. With CNAPP we built an end-to-end process that maps all CSPM and CWPP security incidents to MITRE. Now InfoSec and developer teams can work more effectively together by automatically categorizing every cloud incident to MITRE, facilitating faster responses and better collaboration.
- Unified Application Security: CNAPP is built on the same platform as our MVISION Cloud service, a Gartner Magic Quadrant Leader for Cloud Access Security Broker (CASB). Customers are now able to get detailed visibility and security control over their SaaS applications along with applications they are building in IaaS with the same solution. Our customers love having one console that provides a holistic picture of application risk across all teams – SaaS for consumers and IaaS for builders.
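The CSPM/CWPP-to-MITRE mapping mentioned above can be sketched as a simple lookup table (the incident types and chosen technique IDs below are a hand-picked illustration, not McAfee's actual mapping):

```python
# Minimal sketch: map cloud incident types to MITRE ATT&CK technique IDs
# so SOC and developer teams share one vocabulary. Illustrative mapping only.

MITRE_MAP = {
    "public_storage_bucket": ("T1530", "Data from Cloud Storage"),
    "new_admin_account": ("T1136", "Create Account"),
    "disabled_cloud_logging": ("T1562", "Impair Defenses"),
}

def categorize(incident_type: str):
    """Return (technique_id, technique_name), or flag for manual triage."""
    return MITRE_MAP.get(incident_type, ("unknown", "needs manual triage"))

print(categorize("public_storage_bucket"))  # ('T1530', 'Data from Cloud Storage')
```

Automating this classification is what lets every incident land in the SOC already speaking the SOC's language.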
There are a lot more features I’d love to highlight, but instead I invite you to check out the solution for yourself. Visit https://mcafee.com/CNAPP for more information on our release or request a demo at https://mcafee.com/demo. We’d love to get your feedback and hear how MVISION CNAPP can help you become more empowered and responsible in the cloud.
This post contains information on products, services and/or processes in development. All information provided here is subject to change without notice at McAfee’s sole discretion. Contact your McAfee representative to obtain the latest forecast, schedule, specifications, and roadmaps.
Almost all businesses nowadays use web applications for their targeted growth, but these apps’ security is often compromised if proper steps are not taken. During web application development, all other features are given time and preference, but very few teams give web application security the attention it deserves. The vulnerabilities in your web application can be easily exploited by cybercriminals, who are always searching for sites with weak security protection.
Here are some of the most important security practices that you should implement to secure your web application from the most common threats:
Install SSL Certificates
One of the most effective measures to secure your web applications from cyberattacks is to encrypt all the information shared on them. SSL certificates use the SSL (Secure Sockets Layer) or TLS (Transport Layer Security) security protocols to keep data out of the reach of cybercriminals through encryption.
If you do not activate SSL certificates on your web applications, hackers can easily read the shared information if they somehow get access to it. SSL certificates use cryptographic keys to make it impossible for the attackers to read the data.
The certificate authorities ensure that data transfer is encrypted throughout the communication process. Before buying an SSL certificate for your web app, make sure you are purchasing it from a trustworthy SSL authority like ClickSSL, which provides some of the most popular SSL certificates at a very reasonable price.
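To illustrate the client side of this, Python's standard ssl module behaves like a browser: with a default context it refuses connections whose certificates can't be validated against a trusted authority. The helper below is a sketch, not part of any vendor's product:

```python
# Sketch: connect over TLS with full certificate verification.
# ssl.create_default_context() enforces both certificate-chain
# validation and hostname checking.
import socket
import ssl

def fetch_cert_subject(host: str, port: int = 443):
    """Connect with verification enabled and return the server
    certificate's subject. Raises ssl.SSLError on a bad certificate."""
    context = ssl.create_default_context()   # CERT_REQUIRED + hostname check
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["subject"]

# The default context already enforces validation:
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```

This is why an invalid or self-signed certificate produces a hard failure for visitors: the handshake is rejected before any application data is exchanged.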
Manage User Permissions
Wisely managing users’ permissions makes your web applications more secure. There are likely numerous employees working in your company, and not every worker needs full access to the system to perform his/her job. So, it is best to implement the principle of least privilege to limit every user’s access.
If you have granted full access permissions to everyone working in your organization, it will take a single cyber-attack by the scammers to access your entire system. So, to avoid any data breaches, you should strictly implement the least privilege principle in your firm. This may be a time-consuming process, but it will save your web app from many potential threats and malicious workers too.
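The principle of least privilege can be illustrated with a toy role-permission model (the roles and permission names below are invented): every user starts with nothing, unknown roles are denied by default, and access is granted per role rather than everyone holding full system access.

```python
# Toy least-privilege model: permissions are granted per role,
# and anything not explicitly granted is denied.

ROLE_PERMISSIONS = {
    "support": {"read:tickets"},
    "developer": {"read:tickets", "deploy:staging"},
    "admin": {"read:tickets", "deploy:staging", "deploy:production"},
}

def can(role: str, permission: str) -> bool:
    # Unknown roles get the empty set: deny by default.
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can("support", "deploy:production"))  # False
print(can("admin", "deploy:production"))    # True
```

With this structure, a compromised support account can read tickets but cannot touch production, which is exactly the blast-radius reduction the principle is after.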
Train your Employees
If you are running an organization, you should never expect most of your employees to have decent knowledge of current cybersecurity threats. Most of your staff members won’t have the necessary information about these scams. This may put you and your company in hot water, as employees with no sound knowledge of cyberattacks can quickly become victims of hackers.
So, to protect your web application, you need to conduct proper cybersecurity training sessions for your employees. You must hire a web application security master to train all your staff about your web app and operating environment’s potential threats.
This cyber security training will help your employees independently identify and save themselves and your business from all security threats.
Hire Professional Hackers
Ethical hackers use the same tricks and techniques applied by cybercriminals to exploit your web application’s vulnerabilities. But they do this for your benefit, to reveal the security risks in your web app. Professional white-hat hackers use techniques such as the following to test your web app’s security:
- Cross-site scripting (XSS)
- Distributed denial-of-service (DDoS) attacks
- Sensitive data exposure
After your web app’s penetration test (pen test), you will be familiar with your website’s security weaknesses, which will help you improve your web application’s security.
Secure Web App during Development
This is one of the essential steps in protecting your web apps from hackers. This technique is all about preventing security issues that arise during the development lifecycle. For this, you need to hire developers who have full knowledge of the prevalent security problems and can keep malicious code out of the actual program of the web application.
And if they find any malicious activity during the development lifecycle, they should identify and eliminate that issue.
Update your Web App Regularly
With multiple network security threats around, it is essential to release regular updates for your web app’s security. Outdated software lacks recent security features and can easily be manipulated by malicious hackers. Depending on your web app’s infrastructure, you need to update your web app’s components. Keeping your web application up to date will protect it from known attacks by hackers.
Keep Monitoring your App Regularly
To stay on the safe side, you should regularly look for security vulnerabilities in your web app. It helps to use different techniques for testing your web app’s security level. You can use dynamic and static application security testing tools to monitor your web app’s performance and security level. Regular testing of your system will help you find vulnerabilities and implement new protection schemes for your web application.
Backup all Data
With an increase in the number of cyberattacks in today’s world, your web app data remains under threat every time. Hackers may get full access to your web app data that will put you in serious trouble. To avoid such a situation, you need to store all your web app data at another location. It may be a good idea to replicate the archives of all your information in multiple places to protect you from heavy losses in case your primary backup location is damaged or compromised.
Employ Security Experts
You need to invest more in security services to protect your web application from cybercriminals. Hiring security experts is a wise step towards improving your web app security. A security specialist or security service company uses specialized tools to monitor the security level of your website. The scanning results show the vulnerabilities present in your site. They then help you implement new security techniques to protect your web applications.
Before hiring anyone for security improvements, do complete research and check the individual’s reputation or the firm to validate their competence and authenticity.
Cybercriminals are finding new ways to take advantage of the weaknesses in your web applications. They are always searching for websites with poor web application security to launch attacks against. To protect your web applications, you need to stay updated about all known security threats. For organizations, dealing with malicious attacks depends on all employees. If any of your workers makes a mistake in handling a potential cyberattack, it can put all your firm’s data in danger.
Cybersecurity protection starts with training your employees and implementing the right security techniques to secure your web applications. Implementing the above-listed best security practices will keep your web applications safe from all types of cyberattacks.
The post Best Security Practices to Protect your Web Application from Future Threats appeared first on CyberDB.
Over the last few months, Zero Trust Architecture (ZTA) conversations have been top-of-mind across the DoD. We have been hearing the chatter during industry events, all while sharing conflicting interpretations and using various definitions. In a sense, there is uncertainty around how the security model can and should work. From the chatter, one thing is clear – we need more time. Time for mission owners to settle on a comprehensive, all-inclusive, acceptable definition of Zero Trust Architecture.
Today, most entities utilize a multi-phased security approach. Most commonly, the foundation (or first step) in the approach is to implement secure access to confidential resources. Coupled with the shift to remote and distance work, the question arises, “are my resources and data safe, and are they safe in the cloud?”
Thankfully, the DoD is in the process of developing a long-term strategy for ZTA. Industry partners, like McAfee, have been briefed along the way. It has been refreshing to see the DoD take the initial steps to clearly define what ZTA is, what security objectives it must meet, and the best approach for implementation in the real-world. A recent DoD briefing states “ZTA is a data-centric security model that eliminates the idea of trusted or untrusted networks, devices, personas, or processes and shifts to a multi-attribute based confidence levels that enable authentication and authorization policies under the concept of least privilege access”.
What stands out to me is the data-centric approach to ZTA. Let us explore this concept a bit further. Conditional access to resources (such as network and data) is a well-recognized challenge. In fact, there are several approaches to solving it, whether the end goal is to limit access or simply segment access. The tougher question we need to ask (and ultimately answer) is how do we limit contextual access to cloud assets? What data security models should we consider when our traditional security tools and methods do not provide adequate monitoring? And is securing data, or at least watching user behavior, enough when the data stays within multiple cloud infrastructures or transfers from one cloud environment to another?
Increased usage of collaboration tools like Microsoft 365 and Teams, Slack, and WebEx are easily relatable examples of data moving from one cloud environment to another. The challenge with this type of data exchange is that the data flows stay within the cloud using an East-West traffic model. Similarly, would you know if sensitive information created directly in Office 365 is uploaded to a different cloud service? Collaboration tools by design encourage sharing data in real time between trusted internal users and, more recently with telework, even external or guest users. Take for example a supply chain partner collaborating with an end user. Trust and conditional access potentially create a risk to both parties, inside and outside of their respective organizational boundaries. A data breach, whether intentional or not, can easily occur because of the pre-established trust and access. Few, if any, default protection capabilities prevent this situation from occurring without intentional design. Data loss protection, activity monitoring, and rights management all come into question. Clearly, new data governance models, tools, and policy enforcement capabilities are required for even this simple collaboration example to meet the full objectives of ZTA.
So, as the communities of interest continue to refine the definitions of Zero Trust Architecture based upon deployment, usage, and experience, I believe we will find ourselves shifting from a Zero Trust model to an Advanced Adaptive Trust model. Our experience with multi-attribute-based confidence levels will evolve and so will our thinking around trust and data-centric security models in the cloud.
The post Data-Centric Security for the Cloud, Zero Trust or Advanced Adaptive Trust? appeared first on McAfee Blogs.
Securing documents before cloud
Before the cloud, organizations would collaborate on and store documents using desktop/laptop computers, email, and file servers. Private cloud use cases such as accessing and storing documents on intranet web servers and network-attached storage (NAS) improved the end user’s experience. The security model followed a layered approach, where keeping this data safe was just as important as not allowing unauthorized individuals into the building or data center. This was followed by a directory service sign-in to protect your personal computer, then permissions on files stored on file servers to assure safe usage.
Enter the cloud
Most organizations now consider cloud services essential to their business. Services like Microsoft 365 (SharePoint, OneDrive, Teams), Box, and Slack are depended upon by all users. The same fundamental security concepts exist – however, many are covered by the cloud services themselves. This is known as the “Shared Security Model”: the cloud service provider handles basic security functions (physical security, network security, operations security), but the end customer must correctly grant access to data and is ultimately responsible for properly protecting it.
The big difference between the two is that in the first security model, the organization owned and controlled the entire process. In the cloud model, the customer owns the controls surrounding the data they choose to put in the cloud. This is the risk that collaborating and storing data in the cloud brings: once documents have been stored in M365, what happens if they are mishandled from that point forward? Who is handling these documents? What if my most sensitive information has left the safe confines of the cloud service – how can I protect it once it leaves? Fundamentally: how can I control data that lives hypothetically anywhere, including areas that I do not control?
Adding the protection layers that are cloud-native
McAfee and Seclore have extended an integration recently to address these cloud-based use cases. This integration fundamentally answers this question: If I put sensitive data in the cloud that I do not control, can I still protect the data regardless of where it lives?
The solution works like this:
The solution puts guardrails around end-user cloud usage, but also adds significant compliance protections, security operations, and data visibility for the organization.
Data visibility, compliance & security operations
Once an unprotected sensitive file has been uploaded to a cloud service, McAfee MVISION Cloud Data Loss Prevention (DLP) detects the file upload. Customers can assign a DLP policy to find sensitive data such as credit card data (PCI), customer data, personally identifiable information (PII) or any other data they find to be sensitive.
Sample MVISION Cloud DLP Policy
If data is found to be in violation of policy, the data must be properly protected. For example, if the DLP engine finds PII, rather than let it sit unprotected in the cloud service, the policy the customer sets should enact some protection on the file. This action is known as a “Response”, and MVISION Cloud will properly show the detection, the violating data, and the actions taken in the incident data. In this case, McAfee will call Seclore to protect the file. These actions can be performed in near real time, or protection can be applied to data that already exists in the cloud service (on-demand scan).
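As a rough, regex-only sketch of this detect-and-respond flow (real DLP engines add validation such as Luhn checks and support many more content types; the patterns and names below are illustrative):

```python
# Toy DLP check: scan text for sensitive patterns and decide a response.
import re

PATTERNS = {
    "PCI: card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PII: email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan(text: str):
    """Return the names of all policies the text violates."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

def respond(text: str) -> str:
    """Sensitive content triggers a protection response; clean content passes."""
    return "protect-file" if scan(text) else "allow"

print(scan("contact: alice@example.com"))  # ['PII: email address']
print(respond("nothing sensitive here"))   # allow
```

In the integration described here, the "protect-file" branch is where the CASB would hand the file off to the rights-management engine instead of leaving it exposed.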
“Seclore-It” – Protection Beyond Encryption
Now that the file has been protected, downstream access to the file is managed by Seclore’s policy engine. Examples of policy-based access could be end-user location, data type, user group, time of day, or any other combination of policy choices. The key principle here is the file is protected regardless of where it goes and enforced by a Seclore policy that the organization sets. If a user accesses the file, an audit trail is recorded to assure that organizations have the confidence that data is properly protected. The audit logs show allows and denies, completing the data visibility requirements.
One last concern: if a file is “lost”, if access must be restricted to files that are no longer in direct control (such as when a user leaves the company), or if the organization simply wants to update policies on protected files, the policy on those files can be dynamically updated. This addresses a major data loss concern that companies have for cloud service providers and general data use by remote users. Ensuring files are always protected, regardless of scenario, is simple to achieve with Seclore by updating a policy. Once the policy has been updated, even files on a thumb drive stuffed in a drawer are re-protected from accidental or intentional disclosure.
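The core idea, that every open of a protected file consults a central, updatable policy, can be sketched in a few lines (a toy model with invented names, not Seclore's actual engine):

```python
# Toy rights-management model: each open consults a central policy and
# is recorded in an audit trail, so updating the policy instantly
# changes access to copies anywhere.

POLICY = {"roadmap.docx": {"allowed_users": {"alice", "bob"}}}
AUDIT_LOG = []

def open_protected(filename: str, user: str) -> bool:
    """Check the central policy and record the access attempt."""
    allowed = user in POLICY.get(filename, {}).get("allowed_users", set())
    AUDIT_LOG.append((filename, user, "allow" if allowed else "deny"))
    return allowed

print(open_protected("roadmap.docx", "bob"))            # True
POLICY["roadmap.docx"]["allowed_users"].discard("bob")  # bob leaves the company
print(open_protected("roadmap.docx", "bob"))            # False: revoked everywhere
```

Because the file itself stays encrypted and the decision lives in the policy store, revoking a departed user works even for copies the organization can no longer reach.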
This article addresses several notable concerns for customers doing business in a cloud model. Important/sensitive data can now be effortlessly protected as it migrates to and through cloud services to its ultimate destination. The organization can prove compliance to auditors that the data was protected and continues to be protected. Security operations can track incidents and follow the access history of files. Finally, the joint solution is easy to use and enables businesses to confidently conduct business in the cloud.
McAfee and Seclore partner both at the endpoint and in the cloud as an integrated solution. To find out more and see this solution running in your environment, send an inquiry to firstname.lastname@example.org
The post “Best of Breed” – CASB/DLP and Rights Management Come Together appeared first on McAfee Blogs.
2020 has seen cloud adoption accelerate, with Microsoft Teams one of the fastest-growing collaboration apps; McAfee customers’ use of Teams increased by 300% between January and April 2020. When we looked into Teams use in more detail in June, we found these statistics, on average, across our customer base:
- Teams created: 367
- Members added to Teams: 6,526
- Number of Teams meetings: 106,000
- 3rd-party apps added to Teams: 185
- Guest users added to Teams: 2,906
This means that a typical enterprise has a new guest user added to its Teams every few minutes. You wouldn’t allow unknown people to walk into an office, straight past security, and wander the building unescorted looking at papers sitting on people’s desks – but at the same time you want to allow in those guests you trust. For Teams, you need the same controls: allow in those guests you trust, but confirm their identity and make sure that they don’t see confidential information.
Microsoft invests huge amounts of time and money in the security of its systems, but the security of the data in those systems, and of how users use them, is the responsibility of the enterprise.
The breadth of options, including inviting guest users and integration with 3rd party applications can be the Achilles heel of any collaboration technology. It takes just seconds to add an external third party into an internal discussion without realizing the potential for data loss, so sadly the risk of misconfiguration, oversharing or misuse can be large.
IT security teams need the ability to manage and control use to reduce risk of data loss or malware entering through Teams.
After working with hundreds of enterprises and over 40 million MVISION Cloud users worldwide and discussing with IT security, governance and risk teams how they address their Microsoft Teams security concerns, we have published a paper that outlines the top ten security threats and how to address them.
A few of the 10 Top Microsoft Teams Security Threats are below, read the paper for the full list.
- Microsoft Teams Guest Users: Guests can be added to see internal/sensitive content. By setting allow and/or block list domains, security can be implemented with the flexibility to allow employees to collaborate with authorized guests via Teams.
- Screen sharing that includes sensitive data. Screen sharing is very powerful, but can inadvertently share confidential data, especially if communication applications such as email are showing alerts on the screen.
- Access from Unmanaged Devices: Teams can be used on unmanaged devices, potentially resulting in data loss. The ability to set policies for unmanaged devices can safeguard Teams content.
- Malware Uploaded via Teams: File uploads from guests or from unmanaged devices may contain malware. IT administrators need the ability to either block all file uploads from unmanaged devices or to scan content when it is uploaded and remove it from the channel, informing IT management of any incidents.
- Data Loss Via Teams Chat and File Shares: File shares in Teams can lose confidential data. Data loss prevention technologies with strong sensitive content identification and sharing control capabilities should be implemented on Teams chat and file shares.
- Data Loss Via Other Apps: Teams App integration can mean data may go to untrusted destinations. As some of these apps may transfer data via their services, IT administrators need a system to discover third-party apps in use, review their risk profile and provide a workflow to remediate, audit, allow, block or notify users on an app’s status and revoke access as needed.
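As a minimal sketch of the guest-domain control described in the first item above (function names, domains, and the policy actions are hypothetical illustrations, not a real MVISION Cloud API):

```python
# Sketch: evaluate a Teams guest against a domain allow/block policy.
# All names here are illustrative assumptions, not a product schema.

ALLOWED_DOMAINS = {"partner-a.com", "partner-b.org"}
BLOCKED_DOMAINS = {"free-mail.example"}

def guest_action(guest_email: str) -> str:
    """Return the action a CASB policy might take for a guest user."""
    domain = guest_email.rsplit("@", 1)[-1].lower()
    if domain in BLOCKED_DOMAINS:
        return "remove"           # explicitly blocked domain
    if domain in ALLOWED_DOMAINS:
        return "allow"            # trusted partner domain
    return "flag-for-review"      # unknown domain: notify security admins

print(guest_action("alice@partner-a.com"))       # allow
print(guest_action("mallory@free-mail.example")) # remove
```

The combination of an explicit block list with a default of “flag for review” gives security teams flexibility without silently admitting guests from unknown domains.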
McAfee has a wealth of experience helping customers secure their cloud computing systems, built around the MVISION Cloud CASB and other technologies. We can advise you on Microsoft Teams security and discuss the risks of taking no action. Contact us to let us help you.
Teams is just one of the many applications within the Microsoft 365 suite and it is important to deploy common security controls for all cloud apps. MVISION Cloud provides security for Microsoft 365 and other cloud-based applications such as Salesforce, Box, Workday, AWS, Azure, Google Cloud Platform and customers’ own internally developed applications.
Guest article by Andrea Babbs, UK General Manager, VIPRE
2020 has forced businesses to revise many of their operations. One significant transition being the shift to a remote working model, for which many were unprepared in terms of equipment, infrastructure and security. As the government now urges people to return to work, we’re already seeing a shift towards a hybrid workforce, with many employees splitting their time between the office and working from home.
As organisations are now reassessing their long-term office strategies, front and centre to that shift needs to be their IT security underpinned by a dependable and flexible cloud infrastructure. Andrea Babbs, UK General Manager, VIPRE, discusses what this new way of working means long-term for an organisation’s IT security infrastructure and how businesses can successfully move from remote working to a secure and agile workforce.
Power of the Cloud
COVID-19 accelerated the shift towards Cloud-based services, with more data than ever before now being stored in the Cloud. For those organisations working on Cloud-based applications and drives, the challenges of the daily commute, relocations for jobs and not being able to ‘access the drive’ are in the past for many. Cloud services are moving with the user – every employee can benefit from the same level of security no matter where they are working or which device they are using. However, it’s important to ensure businesses are taking advantage of all the features included in their Cloud subscriptions, and that they’re configured securely for hybrid working.
Layered Security Defence
With increased pressure placed on users to perform their roles faster and achieve greater results than ever before, employees will do what it takes to power through and access the information they need in the easiest and quickest way possible. This is where the cloud has an essential role to play in making this happen, not just for convenience and agility but also to allow users to stay secure – enabling secure access to applications for all devices from any location and the detection and deletion of viruses – before they reach the network.
Email remains the most-used communication tool, even more so when working remotely, but it also remains the weakest link in IT security, with 91% of cybercrimes beginning with an email. Implementing tools that prompt employees to double-check emails before sending can help reduce the risk of sharing the wrong information with the wrong individual.
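As a minimal sketch of how such a double-check tool might work (the organisation domain and the rule itself are assumptions for illustration, not any vendor’s actual product logic):

```python
# Sketch: warn the sender before a message leaves if any recipient sits
# outside the organisation's domain. ORG_DOMAIN is an illustrative assumption.

ORG_DOMAIN = "example.com"

def external_recipients(recipients):
    """Return the recipients whose domain differs from the organisation's."""
    return [r for r in recipients
            if r.rsplit("@", 1)[-1].lower() != ORG_DOMAIN]

to = ["colleague@example.com", "contact@partner.example"]
risky = external_recipients(to)
if risky:
    print("Double-check before sending to:", risky)
```

A real tool would hook this check into the mail client’s send action and ask the user to confirm, rather than printing a message.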
Additional layers of defence, such as email checking tools, remove the barriers that slow the transition to agile working and help secure the new hybrid workforce, regardless of where employees are working or what their jobs entail.
Educating the User
For organisations wanting to evolve into a hybrid work environment, their IT security policies need to reflect the new reality. By re-educating employees about existing products and how to leverage any additional functionality to support their decision making, users can be updated on these cyber risks and understand their responsibilities.
Security awareness training programmes teach users to be alert and more security conscious as part of the overall IT security strategy. In order to fully mitigate IT security risks and for the business to benefit from an educated workforce, both in the short and long term, employees need to change their outdated mindset.
Changing the Approach
The evolution of IT and security over the past 20 years means that working from home is now easily achievable with cloud-based setups, whereas in the not too distant past, it would have been impossible. But the key to a successful and safe agile workforce is to shift the approach of full reliance on IT, to a mindset where everyone is alert, responsible, empowered and educated with regular training, backed up by tools that reinforce a ‘security first’ approach.
IT departments cannot be expected to stay one step ahead of cybercriminals and adapt to new threats on their own. They need their colleagues to work mindfully and responsibly on the front lines of cyber defence, comfortable in the knowledge that everything they do is underpinned by a robust and secure IT security infrastructure, but that the final decision to click the link, send the sensitive information or download the file, lies with them.
Conclusion
By focusing on getting the basics right and powered by the capabilities of the Cloud, highlighting the importance of layered security and challenging existing mindsets, businesses will be able to shift away from remote workers being the ‘exception,’ to a secure and agile workforce as a whole.
Are you prepared to detect and defend against attacks that target your data in cloud services, or apps you’ve built that are hosted in the cloud?
Nearly all enterprises and public sector customers we work with have enabled cloud use in their organization, with many seeing a 600%+ increase1 in use in the March-April timeframe of 2020, when the shift to remote work rapidly took shape.
The first step to developing a strong cloud security posture is visibility over the often hundreds of services your employees use, what data is within these services, and then how they are being used collaboratively with third parties and other destinations outside of your control.
With that visibility, you can establish full control over end-user activity and data in the cloud, applying your policy at every entry and exit point to the cloud.
That covers your risk stemming from legitimate use by employees, external collaborators, and even API-connected marketplace apps, but what about your adversaries? If someone phished your CEO, stole their OneDrive credentials and exfiltrated data, would you know? What if your CEO used the same password across multiple accounts, and the adversary had access to apps like Smartsheet, Workday, or Salesforce? Are you set up to detect this kind of multi-cloud attack?
Our Research to Uncover the Best Solution
Most enterprise security operations centers (SOCs) use MITRE ATT&CK to map the events they see in their environment to a common language of adversary tactics and techniques. This helps to understand gaps in protection, model how attackers progress from access to exfiltration (or encryption/destruction), and to plan out security policy decisions.
The original ATT&CK framework applied to Windows/Mac/Linux environments, with Android/iOS included as well. For cloud environments, the MITRE ATT&CK framework has a shorter history (released October 2019), but is quickly gaining adoption as the model for cloud threat investigation.
In collaboration with the University of California Berkeley’s Center for Long-Term Cybersecurity (CLTC) and MITRE, we sought to uncover how enterprises investigate threats in the cloud, with a focus on MITRE ATT&CK. In this initiative, researchers from UC Berkeley CLTC conducted a survey of 325 enterprises in a wide range of industries, with 1K employees or above, split between the US, UK, and Australia. The Berkeley team also conducted 10 in-depth interviews with security leaders in various cybersecurity functions.
MITRE has done an excellent job identifying and categorizing adversary tactics and techniques used in the cloud. When asked about the prevalence of these tactics in their environment, 81% of our survey respondents, on average, had experienced each of the tactics in the Cloud Matrix, and 58% had experienced the initial access phase of an attack at least monthly.
Given the frequency in which most enterprises experience these adversary tactics and techniques, we found widespread adoption of the ATT&CK Cloud Matrix, with 97% of our respondents either planning to or already using the Matrix.
In the full report, we explore deeper implications of using MITRE ATT&CK for Cloud, including consensus on the value it brings to enterprise organizations, challenges with implementation, and many more interesting results from our investigation. Head to the full report here to dive in.
One of the most promising benefits of MITRE ATT&CK is the unification of events derived from endpoints, network traffic, and the cloud together into a common language. Right now, only 39% of enterprises correlate events from these three environments in their threat investigation. Further adoption of MITRE ATT&CK over time will unlock the ability to efficiently investigate attacks that span multiple environments, such as a compromised endpoint accessing cloud data and exfiltrating to an adversary destination.
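The cross-environment correlation described above can be sketched as a simple normalization step. The event shapes and the mapping table below are illustrative assumptions, not a product schema; the technique IDs themselves are real MITRE ATT&CK identifiers:

```python
# Sketch: normalize events from endpoint, network, and cloud telemetry into
# ATT&CK technique IDs so a single attack can be followed across environments.

EVENT_TO_TECHNIQUE = {
    ("endpoint", "credential_phish"): "T1566",  # Phishing
    ("cloud", "suspicious_login"): "T1078",     # Valid Accounts
    ("cloud", "mass_download"): "T1530",        # Data from Cloud Storage
    ("network", "upload_to_unknown"): "T1567",  # Exfiltration Over Web Service
}

def attack_timeline(events):
    """events: iterable of (timestamp, environment, event_kind) tuples."""
    timeline = []
    for _ts, env, kind in sorted(events):
        technique = EVENT_TO_TECHNIQUE.get((env, kind))
        if technique:
            timeline.append(technique)
    return timeline

events = [
    (3, "cloud", "mass_download"),
    (1, "endpoint", "credential_phish"),
    (2, "cloud", "suspicious_login"),
    (4, "network", "upload_to_unknown"),
]
print(attack_timeline(events))  # ['T1566', 'T1078', 'T1530', 'T1567']
```

Once events from all three environments share this common vocabulary, a phished endpoint, a suspicious cloud login, and an exfiltration upload read as one attack chain instead of three unrelated alerts.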
This research demonstrates promising potential for MITRE ATT&CK in the enterprise SOC, with downstream benefits for the business. 87% of our respondents stated that adoption of MITRE ATT&CK will improve cloud security in their organization, with another 79% stating that it would also make them more comfortable with cloud adoption overall. A safer transition to cloud-based collaboration and app development can accelerate businesses, a subject we’ve investigated in the past2. MITRE ATT&CK can play a key role in secure cloud adoption, and defense of the enterprise overall.
Dive into the full research report for more on these findings!
81% of enterprise organizations told us they experience the adversary techniques identified in the MITRE ATT&CK for Cloud Matrix – but are they defending against them effectively?
The post MITRE ATT&CK for Cloud: Adoption and Value Study by UC Berkeley CLTC appeared first on McAfee Blogs.
A Virtual Private Network (VPN) is a technology that helps secure your digital activities. It serves as a barrier against third parties, hackers, cyber threats, malware, and sensitive data leakage.
More than ever, we need to invest in high-end protection to ensure our privacy is never compromised. VPNs are in high demand now that most people are staying home and working remotely. With increased online activity, it’s high time to protect your privacy.
Free VPNs are enticing, offering ‘great’ security at no extra cost. But their services are too good to be true, and you should be skeptical and stay away from them.
Are There Alternatives To Top-Rated VPN Providers?
Using a free VPN is risky because it does not offer the robust encryption of paid services. It is better to pay for a cheap VPN service than to compromise your security. Affordable VPN services offer strong data encryption for people on limited budgets, providing standard encryption technology to ensure your privacy is protected and your digital activities are secured.
A few reliable and trusted providers offer affordable VPN plans that are far safer than the free services that threaten your security. These are great alternatives that won’t hurt your wallet and will be of great help, especially if you spend a lot of time online.
5 Facts Why Free VPNs Are A No-No
Free VPN software keeps records of your digital activities and sells them to third parties. The encryption they offer doesn’t ‘really’ mask your activities or protect your identity, and free services log your sensitive data, which is itself a threat to your privacy. Beyond that, here are five reasons free VPNs are a no-no:
- Monitor And Sell All Collected Data
A VPN should act as your protective barrier against digital threats while you’re online, securing your data, online activities, and private information against prying eyes, government surveillance, and more. It should block hackers and prevent your ISP from collecting or selling your data for profit.
Free VPNs flip this model: you become the cash cow that funds the service, in exchange for the data collected from you. That data is then sold to third parties, putting not just your information but your privacy at stake.
- Leaks IP Addresses
A robust VPN solution offers security and encryption for all your digital activities and traffic, serving as your secret portal on the world wide web, shielded from cyber threats, hackers, and prying eyes.
A free VPN, by contrast, is like a tunnel riddled with holes that can leak your data or IP address. Hackers can track your activity, prying eyes can monitor you, and, worse, you can be exposed to a host of privacy threats.
- They Are Not Safe
Free VPN solutions are risky and a genuine threat to your security and privacy. Running a VPN service is expensive, so offering it for free is fishy: it usually means your data is the product being served up for others to consume.
- Aggressive Ads
Free VPNs serve aggressive ads, and one stray click can land you on a hazardous site, exposing you to threats and to hackers who can instantly access your information and files. A high volume of ads can also weigh your system down and degrade the browsing experience, on top of the privacy threats.
- Malware Exposure
Some free VPN solutions contain malware that can damage not just your privacy but your devices. You are more likely to be exposed to these nasty bugs when you download such software. Mobile ransomware and malware can steal sensitive information such as social security details and bank login credentials.
Free VPNs are enticing, offering ‘robust security’ without the need to pay hundreds of dollars a year. However, your security is at stake, together with your sensitive data and information.
Though a free VPN can help you stream region-restricted websites, you need to weigh your options against the potential threats. Free VPNs are not safe; if you want to secure your digital presence, opt for an affordable VPN solution that offers high-end encryption to ensure your privacy and data are protected against potential hacks.
McAfee MVISION Cloud for Microsoft Teams now offers secure guest user collaboration features, allowing security admins not only to monitor sensitive content posted as messages and files within Teams, but also to monitor guest users joining teams and remove any who are unauthorized.
Working from home has become a new reality for many, as more and more companies request that their staff work remotely. Solutions that enable remote work and learning across chat, video, and file collaboration have become central to the way we work. Microsoft has seen an unprecedented spike in Teams usage: more than 75 million daily users as of May 2020, a 70% increase in daily active users from March.1
What’s New in MVISION Cloud for Microsoft Teams
MVISION Cloud for Microsoft Teams now provides policy controls for security admins to monitor and remove unauthorized guest users based on their domains, the teams they are joining, and so on. As organizations use Microsoft Teams to collaborate with trusted partners (exchanging messages, participating in calls, and sharing files), it is critical to ensure that partners join only teams designated for external communication, and that only guest users from trusted partner domains join them.
Organizations can configure policies in McAfee MVISION Cloud to:
- Monitor guest users from untrusted domains and remove the guest users automatically. Security admins do not have to reach out to Microsoft Teams admin and ask them to remove any untrusted guest users manually.
- Define the list of teams designated for external communication and make sure that users from partner organizations are joining only those teams and not any internal teams. If the partner users join any internal-only teams, they will be removed by McAfee MVISION Cloud automatically.
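The two policies above can be sketched as a single check. The names below are hypothetical, and a real deployment would enforce the removal through the collaboration platform’s admin APIs rather than an in-memory function:

```python
# Sketch: a guest is removed if their domain is untrusted OR if the team they
# joined is not designated for external collaboration. Domains and team names
# are illustrative assumptions.

TRUSTED_PARTNER_DOMAINS = {"partner.example"}
EXTERNAL_TEAMS = {"Partner Project X", "Vendor Support"}

def should_remove(guest_email: str, team_name: str) -> bool:
    domain = guest_email.rsplit("@", 1)[-1].lower()
    untrusted_domain = domain not in TRUSTED_PARTNER_DOMAINS
    internal_only_team = team_name not in EXTERNAL_TEAMS
    return untrusted_domain or internal_only_team

assert should_remove("eve@unknown.example", "Partner Project X")   # bad domain
assert should_remove("bob@partner.example", "Finance Internal")    # wrong team
assert not should_remove("bob@partner.example", "Vendor Support")  # allowed
```

Automating this check is what removes the manual step of asking a Teams admin to evict each untrusted guest.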
With these new features, McAfee offers complete data protection and collaboration control capabilities to enable organizations to safely collaborate with partners without having to worry about exposing confidential data to guest users.
Here is the comprehensive list of use cases organizations can enable by using MVISION Cloud for Microsoft Teams.
- Modern data security. IT can extend existing DLP policies to messages and files in all types of Teams channels, enforcing policies based on keywords, fingerprints, data identifiers, regular expressions and match highlighting for content and metadata.
- Collaboration control. Messages or files posted in channels can be restricted to specific users, including blocking the sharing of data to any external location.
- Guest user control. Guest users can be restricted to join only teams meant for external communication and unauthorized guest users from any domains other than trusted partner domains can be automatically removed.
- Comprehensive remediation. Enables auditing of regulated data uploaded to Microsoft Teams and remediates policy violations by coaching users, notifying administrators, quarantining, tombstoning, restoring and deleting user actions. End users can autonomously correct their actions, removing incidents from IT’s queue.
- Threat prevention. Empowers organizations to detect and prevent anomalous behavior indicative of insider threats and compromised accounts. McAfee captures a complete record of all user activity in Teams and leverages machine learning to analyze activity across multiple heuristics to accurately detect threats.
- Forensic investigations. With an auto-generated, detailed audit trail of all user activity, MVISION Cloud provides rich capabilities for forensics and investigations.
- On-the-go security, for on-the-go policies. Helps secure multiple access modes, including browsers and native apps, and applies controls based on contextual factors, including user, device, data and location. Personal devices lacking adequate control over data can be blocked from access.
McAfee MVISION Cloud for Microsoft Teams is now in use with a substantial number of large enterprise customers to enable their security, governance and compliance capabilities. The solution fits all industry verticals due to the flexibility of policies and its ease of use.
Around the world, IT teams are struggling to choose between less critical but still important tasks and innovative projects that help transform the business. Both are necessary and need to be actioned, but should your team do all of it? Have you considered letting someone else guide you through the process while your internal team continues to focus on transforming the business?
DRaaS Data protection dilemma: outsourcing or self-managing?
Outsourcing your data protection functions vs. managing them yourself
Outsourcing data protection has raised many questions about how it really should be done. Some experts favour the Disaster Recovery as a Service (DRaaS) approach, believing that data protection, although necessary, has very little to do with core business functionality. Organisations commonly outsource non-business services, which has driven many to consider employing third parties for other business initiatives. This has led some companies to believe that all IT services should be outsourced, enabling the IT team to focus solely on core business functions and transformational growth.
Other groups challenge the concept and believe that the idea of outsourcing data protection is foolish. An organisation’s ability to quickly and completely recover from a disaster - such as data loss or an organisational breach - can be the determining factor as to whether the organisation will remain in business. Some may think that outsourcing something as critical as data protection, and putting your organisation’s destiny into the hands of a third party, is a risky strategy. The basic philosophy behind this type of thinking can best be described as: “If you want something done right, do it yourself.”
Clearly, both sides have some compelling arguments. On one hand, by moving your data protection solution to the cloud, your organisation becomes increasingly agile and scalable. Storing and managing data in the cloud may also lower storage and maintenance costs. On the other hand, managing data protection in-house gives the organisation complete control. Therefore, a balance of the two approaches is needed in order to be sure that data protection is executed correctly and securely.
The answer might be somewhere in the middle
Is it better to outsource all of your organisation’s data protection functions, or is it better to manage it yourself? The best approach may be a mix of the two, using both DRaaS and Backup as a Service (BaaS). While choosing a cloud provider for a fully managed recovery solution is also a possibility, many companies are considering moving away from ‘do-it-yourself’ disaster recovery solutions and are exploring cloud-based options for several reasons.
Firstly, purchasing the infrastructure for the recovery environment requires a significant capital expenditure (CAPEX) outlay. Therefore, making the transition from CAPEX to a subscription-based operating expenditure (OPEX) model makes for easier cost control, especially for those companies with tight budgets.
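That CAPEX-to-OPEX trade-off can be made concrete with a back-of-envelope comparison. All figures below are illustrative assumptions, not quotes:

```python
# Sketch: upfront CAPEX DR build versus a subscription (OPEX) DRaaS model.
# All cost figures are illustrative assumptions.

capex_build = 120_000         # hardware, licences, setup
capex_annual_upkeep = 15_000  # power, maintenance, admin time per year
opex_monthly_fee = 3_500      # DRaaS subscription per month

def total_cost(years: int, model: str) -> int:
    """Cumulative cost of each model over a number of years."""
    if model == "capex":
        return capex_build + capex_annual_upkeep * years
    return opex_monthly_fee * 12 * years

for years in (1, 3, 5):
    print(years, total_cost(years, "capex"), total_cost(years, "opex"))
```

Under these assumed figures the subscription is far cheaper in year one, which is exactly why the OPEX model appeals to companies with tight budgets; the crossover point depends entirely on the real numbers.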
Secondly, cloud disaster recovery allows IT workloads to be replicated from virtual or physical environments. Outsourcing disaster recovery management ensures that your key workloads are protected, and the disaster recovery process is tuned to your business priorities and compliance needs while also allowing for your IT resources to be freed up.
Finally, cloud disaster recovery is flexible and scalable; it allows an organisation to replicate business-critical information to the cloud environment either as a primary point of execution or as a backup for physical server systems. Furthermore, the time and expense to recover an organisation’s data is minimised, resulting in reduced business disruption.
Local backups, however, have disadvantages: they can be targeted by malicious software that proactively searches for backup applications and database backup files and encrypts them. Additionally, backups can be prone to unacceptable Recovery Point Objectives (RPOs), especially when organisations try to recover quickly.
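The RPO concern can be made concrete: in the worst case, everything written since the last successful backup is lost. A minimal sketch, assuming a six-hour backup schedule (the dates and schedule are illustrative):

```python
# Sketch: the worst-case RPO exposure at the moment of failure is the time
# elapsed since the last successful backup. Schedule is an assumption.
from datetime import datetime

def worst_case_rpo(backup_times, failure_time):
    """Data written after the last successful backup before the failure is lost."""
    last_good = max(t for t in backup_times if t <= failure_time)
    return failure_time - last_good

backups = [datetime(2020, 7, 1, h) for h in (0, 6, 12, 18)]  # every 6 hours
failure = datetime(2020, 7, 1, 17, 30)
print(worst_case_rpo(backups, failure))  # 5:30:00
```

Tightening the RPO means backing up more often, which is where continuous, cloud-based replication outpaces periodic local backups.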
What to look for when evaluating your cloud provider
It is also essential when it comes to your online backups to strike a balance between micromanaging the operations and completely relinquishing any sort of responsibility. After all, it’s important to know what’s going on with your backups. Given the critical nature of the backups and recovery of your data, it is essential to do your homework before simply handing over backup operations to a cloud provider. There are a number of things that you should look for when evaluating a provider.
- Service-level agreements that meet your needs.
- Frequent reporting, and management visibility through an online portal.
- All-inclusive pricing.
- Failover assistance at a moment’s notice.
- Do-it-yourself testing.
- Flexible network layer choices.
- Support for legacy systems.
- Strong security and compliance standards.
Ultimately, using cloud backups and DRaaS is flexible and scalable; it allows an organisation to replicate business-critical information to the cloud environment either as a primary point of execution or as a backup for physical server systems. In most cases, the right disaster recovery provider will likely offer you better recovery time objectives than your company could provide on its own, in-house. Therefore as you review your options, cloud DR could be the perfect solution, flexible enough to deal with an uncertain economic and business landscape.
The McAfee team is very proud to announce that once again McAfee was named a Gartner Peer Insights Customers’ Choice for SIEM for its McAfee Enterprise Security Manager (ESM) Solution, a recognition of high satisfaction from a number of reviews by verified end-user professionals.
We are most appreciative of our customers who support our solutions and share their opinions through forums like Gartner Peer Insights. The voice and passion of our customers is instrumental in shaping our success and motivates us each day to improve and innovate.
To that end, we have made our SIEM product deployable on-prem or in the cloud via ESM Cloud. By leveraging the power of cloud computing, the new McAfee ESM Cloud helps customers accelerate time to value for security operations centers by removing operational barriers and providing automated deployment, 24/7 system health monitoring, and regular software updates and patches, allowing teams to focus their efforts on security tasks.
Here are some quotes from customers that contributed to Gartner Peer Insights’ recognition of ESM:
“Provides the features you need, in a simple easy to use, easy to understand display”
“Integration and deployment was very easy, we integrated the McAfee Enterprise Security Manager (ESM), McAfee Event Receiver (ERC), and McAfee Enterprise Log Manager (ELM) in our lab in just a little under 4 hours… In under 4 hours we were collecting from a variety of MS Windows systems and a variety of Linux systems (RHEL, Ubuntu, and CENTOS). Other SIEM systems that we were evaluating took days to get running and then we still spent time on having to tune them.”
Cybersecurity Architect, Government. Read full review here
“A complete realistic security solution equipped with all major tools to secure structures.”
“This security manager is the best choice out there… McAfee Manager is best in the ways that we can view and analyze all the major activities being performed in the company’s system and securities and how we can improve the overall security related concerns. It has all these pre-equipped features which facilitates the overall requirements for enterprises.”
Senior Consultant, Services Industry. Read full review here.
To all our customers who submitted reviews, thank you! These reviews mold our products and our customer journey, and we look forward to building on the experience that earned us this distinction!
- Learn more about our award winning SIEM solution by visiting the ESM solutions page.
- Read the SIEM reviews written by IT professionals that earned us this distinction by visiting Gartner Peer Insights’ SIEM page.
Gartner Peer Insights ‘Voice of the Customer’: Security Information and Event Management, 3 July 2020. Gartner Peer Insights Customers’ Choice constitute the subjective opinions of individual end-user reviews, ratings, and data applied against a documented methodology; they neither represent the views of, nor constitute an endorsement by, Gartner or its affiliates.
The post McAfee ESM Named a 2020 Gartner Peer Insights Customers’ Choice for SIEM appeared first on McAfee Blogs.
“Features are a nice to have, but at the end of the day, all we care about when it comes to our web and cloud security is architecture.” – said no customer ever.
The fact is that nobody likes to talk about architecture when shopping for the latest and greatest cyber security technology, and most organizations have been content to continue fitting new security tools and capabilities into their existing traditional architectures. However, digital transformation projects including cloud migration and ubiquitous mobile access have revealed architectural cracks, and many companies have seen the dam burst with the explosion in remote access demand in recent months. As a result, organizations are coming around to the realization that digital transformation demands a corresponding network and security architectural transformation.
The Secure Access Service Edge (SASE) framework provides organizations with a model to achieve this transformation by bringing network and security technology together into a single, cloud-delivered service that ensures fast, secure, reliable, and cost-effective access to web and cloud resources. In this blog we will focus on remote offices and how the combination of SD-WAN and Next-Generation Secure Web Gateway capabilities offered by MVISION UCE can enable SASE and deliver on the promise of digital transformation.
The Cloud and the Architectural Dilemma
In the past, organizations were largely concentrated in a limited number of locations. Applications and data were hosted on servers at a central data center on the local area network, typically at or near the headquarters. Users typically worked in the office, accessing corporate resources on that same network. Surrounding this network was a perimeter of security controls that could inspect all traffic going in or out of the organization, keeping trusted resources safe while keeping the bad guys out. Remote users and branch offices were logically connected to this central network via technologies like VPN, MPLS, and leased lines, so the secure network perimeter could be maintained.
While this approach sufficed for years, digital transformation has created major challenges. Applications and data storage have migrated to the cloud, so they no longer reside on the corporate network. Logic would dictate that the optimal approach would be for remote users and offices to have direct access to cloud resources without having to route back through the corporate network. But this would result in the organization’s IT security perimeter being completely circumvented, meaning lost security visibility and control, leading to unacceptable security and compliance risks.
So network and security architects everywhere are facing the same dilemma: What is the best way to enable digital transformation without any major compromises? Organizations have generally followed one of the four following architectural approaches based on their willingness to embrace new technologies and bring them together:
We’re going to discuss these four options here, and evaluate them based on four factors: security, speed, latency, and cost. The results will show that there’s only one way to achieve fast, secure, and cost-effective access to web and cloud resources.
Approach 1: STATUS QUO
Due to the risk of losing security visibility and control, many organizations have refused to allow “direct-to-cloud” re-architecting. So even when high-speed internet links could connect users directly to cloud and web resources, this approach necessitates that all traffic still be pushed through slower MPLS links back to the corporate network, and then go back out through a single aggregated internet pipe to access web and cloud resources. While this theoretically maintains security visibility and control, it comes at great cost.
For starters, the user experience is greatly hampered by poor performance. Bandwidth suffers both over the slow MPLS link back to the corporate office and through the congested company internet connection. In addition, the extra network hops and increased network contention lead to high latency – a problem drastically amplified in recent months as the amount of remote traffic backhauling through the corporate network has exploded well beyond original design expectations. These factors don’t even take into account the potential impact of service disruptions brought about by introducing a single point of failure into the network architecture.
In addition to poor performance, there is a tangibly higher financial cost associated with this approach. Multiple MPLS lines connecting branch offices to the corporate data center are considerably more expensive than public internet connectivity. Additionally, in order to accommodate the routing of ALL user traffic, organizations need to dramatically increase investment in their central network and security perimeter infrastructure capacity, as well as the bandwidth of the shared internet pipe.
So we’re left needing to find a long-term answer to the challenges of speed, latency, and cost. These considerations are what have led many network architects to proceed to deploy SD-WAN.
Approach 2a: GOING DIRECT-TO-CLOUD WITH SD-WAN
The first step in delivering a cloud-ready architecture is removing the bottleneck incurred by forcing all traffic through slow MPLS lines to the central network and then back out to the cloud. SD-WAN technology can help in this regard. By deploying SD-WAN equipment at the edge of the branch network, optimized traffic policies can be created that route traffic directly to web and cloud resources over fast, affordable internet connections, while using that same internet connection to send only data center-bound traffic back to the corporate network over a dynamic set of VPN tunnels. WAN optimization, QoS, and various other network and security functions better suited to the network edge – such as firewall filtering – deliver the fastest and most reliable user experience while minimizing the traffic burden on the central network.
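The steering decision at the heart of this SD-WAN policy – backhaul only data center-bound flows, send everything else direct to the internet – can be sketched in a few lines. This is a minimal illustration only; the subnet ranges and path names are hypothetical, not drawn from any vendor's actual policy API.

```python
import ipaddress

# Assumed (hypothetical) private ranges still hosted in the corporate data center.
DATA_CENTER_SUBNETS = ["10.0.0.0/8", "172.16.0.0/12"]

def next_hop(dst_ip: str) -> str:
    """Return the egress path for a flow based on its destination address."""
    addr = ipaddress.ip_address(dst_ip)
    for subnet in DATA_CENTER_SUBNETS:
        if addr in ipaddress.ip_network(subnet):
            return "vpn-tunnel-to-dc"   # backhaul only DC-bound traffic
    return "direct-internet"            # web/cloud traffic goes straight out

print(next_hop("10.1.2.3"))       # internal data center application
print(next_hop("142.250.80.36"))  # public web/cloud service
```

Real SD-WAN policies classify on far more than destination address (application signatures, link quality, QoS class), but the routing split shown here is the core idea that eliminates the MPLS backhaul for web- and cloud-bound traffic.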
By employing SD-WAN, network architects can achieve substantial cost savings by eliminating expensive MPLS links back to the corporate data center. Additionally, users aren’t constrained by the much slower bandwidth of those MPLS lines.
However, there are major drawbacks to this model. While SD-WAN solutions feature a number of strong flow control capabilities that can be distributed to each remote site – including firewalling, DNS protection, and data obfuscation – they don’t have the same robust data and threat protection capabilities that organizations have built into their network perimeter security. Therefore, architects still need to backhaul all traffic over the internet back to the data center, even if that traffic is ultimately destined to go right back out to the internet! So while the speed and cost-effectiveness of this connection is greatly improved in comparison to the old model, the need to continue backhauling traffic presents the same latency and congestion challenges.
Approach 2b: MCAFEE MVISION UNIFIED CLOUD EDGE
So if traffic paths need to run back to the corporate data center for organizations to maintain security visibility and control, but the majority of resources users are accessing are in the cloud, wouldn’t it make sense to situate the security controls in the cloud, along a more direct and secure traffic path? Enter McAfee MVISION Unified Cloud Edge.
MVISION UCE’s Next-Gen Secure Web Gateway provides a cloud-native, lightning-fast, 99.999% reliable, hyper-scale secure edge. By converging SWG, CASB, DLP, and Remote Browser Isolation technologies, MVISION UCE ensures that remote users and offices enjoy the most sophisticated levels of threat, data, and cloud application protection, as well as unique proactive risk management capabilities that even exceed what is possible in a traditional on-premises security framework.
Just as important as the advanced security capabilities is the fact that MVISION UCE is built on a fast, reliable, scalable foundation. Thanks to a global Point of Presence (POP) network and unique peering relationships, MVISION UCE can extend a hyper-scale secure edge wherever users need it. Despite a 240% surge in traffic during the spring of 2020, McAfee was able to maintain 99.999% availability and met all of the latency requirements stipulated in our SLAs. Organizations could count on our infrastructure in the toughest of times, and can continue to do so going forward.
By subscribing to an affordable public internet connection at the branch site and connecting to MVISION UCE, customers can achieve many of the desired benefits. MVISION UCE’s comprehensive data, threat, and cloud application protection capabilities more than satisfy security requirements. And for the majority of user traffic that is destined for the web or cloud, the direct internet connection ensures fast, low-latency access.
However, without deploying SD-WAN in conjunction with UCE, organizations still need to have those slow, expensive MPLS links to maintain connectivity to their legacy data center applications and resources. Therefore, customers won’t be able to realize cost savings, and those connections to data center resources will suffer the same speed and latency challenges. And that is where we finally arrive at the ideal cloud security architecture, bringing MVISION UCE together with SD-WAN.
Approach 3: MVISION UCE + SD-WAN = SASE
By bringing together MVISION UCE with SD-WAN in a seamlessly integrated solution, organizations can deliver SASE and build a network security architecture fit for the cloud era. McAfee makes it possible for customers to easily converge MVISION UCE with virtually any SD-WAN solution via robust native support for SD-WAN connectivity, leveraging industry-standard Dynamic IPSec and GRE protocols. Through this integration, customers benefit from the complete range of essential SASE capabilities, with SD-WAN providing the integrated networking functionality and MVISION UCE delivering the security capabilities. McAfee has supported our channel partners in successfully delivering joint SD-WAN-cloud SWG projects with many of the major SD-WAN vendors in the market, and we have forged tight alliances with the industry leaders through our Security Innovation Alliance (SIA).
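The GRE protocol mentioned above is one of the standard ways an SD-WAN edge device tunnels branch traffic to a cloud security service's point of presence. As a rough illustration of what that encapsulation looks like on the wire, the sketch below builds a base GRE header per RFC 2784 (no checksum, key, or sequence fields) and prepends it to a dummy payload; real devices, of course, do this in the forwarding plane, and nothing here reflects a specific vendor's implementation.

```python
import struct

GRE_PROTO_IPV4 = 0x0800  # EtherType identifying the encapsulated payload as IPv4

def gre_encapsulate(inner_ip_packet: bytes) -> bytes:
    """Prepend a base GRE header (RFC 2784, version 0, no optional fields)."""
    flags_and_version = 0x0000  # checksum/key/sequence bits all clear
    header = struct.pack("!HH", flags_and_version, GRE_PROTO_IPV4)
    return header + inner_ip_packet

# Dummy 20-byte IPv4 header stands in for a real branch-user packet.
frame = gre_encapsulate(b"\x45" + b"\x00" * 19)
print(len(frame))  # 4-byte GRE header + 20-byte payload = 24
```

In practice the GRE packet is itself carried inside an IPSec tunnel (or GRE is skipped in favor of plain IPSec) between the branch edge and the nearest cloud POP, which is what lets the cloud-delivered security stack inspect the traffic as if it sat on the old network perimeter.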
So how does a combined UCE-SD-WAN solution satisfy the four architectural requirements? Security is clearly addressed by UCE’s threat, data, and cloud application protection capabilities, as well as the distributed firewall capabilities delivered by SD-WAN. By using a single fast internet connection, SD-WAN is able to intelligently and efficiently route traffic directly to cloud resources or back to the corporate data center. With MVISION UCE providing security directly in the cloud, SD-WAN can forward web- and cloud-bound traffic directly, without any excessive latency. Cost savings come from removing the expensive MPLS lines, and since the majority of traffic no longer needs to backhaul through the corporate data center, additional savings can be achieved by reducing central network bandwidth and infrastructure capacity.
Build a Cloud-Ready Network Security Architecture Today
Digital Transformation represents the next great technological revolution, and organizations’ ability to move to the cloud and empower their distributed workforces with fast, secure, simple, and reliable access will likely determine how successful they are in the new era. SASE represents the best way to achieve a direct-to-cloud architecture that doesn’t compromise on security visibility & control, performance, complexity, or cost. By seamlessly integrating our MVISION UCE solution with SD-WAN, it’s never been easier for organizations to deliver SASE to remote offices. As a result, users will benefit from greater productivity, IT personnel will enjoy greater operational efficiency, and companies will enjoy exceptional cost savings as a result of consolidated infrastructure and optimized network traffic.
To learn more about how MVISION UCE and SD-WAN can work together, attend a webinar hosted by McAfee and one of our key SD-WAN technology partners, Silver Peak Systems. Click here to register.
The post Transform your Architecture for the Cloud with MVISION UCE and SD-WAN appeared first on McAfee Blogs.