Oracle said Thursday that it has agreed to acquire cloud security firm Zenedge for an undisclosed sum.
Cisco's comprehensive cloud-based security endpoint portfolio provides advanced malware protection, internet security, and enterprise mobility ...
With the Right Team Working Together You Can Address the Security, Privacy, and Compliance Challenges of Multicloud
Two more misconfigured databases exposing the personal details of thousands of people were disclosed late last week.
Leading Cloud Service Providers and Majority of AV Engines Failed to Detect New Ransomware Variant
In our personal lives, we consume a huge number of services on demand, from music and television to travel and food. As consumers, we expect to be able to access services monthly, paying only for what we need and leaving the complicated details, such as owning physical assets, upgrading software and making improvements, to the experts.
It makes sense that business leaders are beginning to adopt the same approach when it comes to security. Given that the cloud-based security services market grew by 21 percent in 2017 and is expected to reach almost $9 billion by 2020, it is clear that chief information security officers (CISOs) now want security delivered as a service.
In 2018, SECaaS Is Where It’s At
The security threat landscape is evolving rapidly, and when organizations are faced with a shape-shifting opponent, they don’t want to wait until their business case stacks up to update their security tools. To stay ahead of the threats, they need the most cutting-edge solutions available.
Security-as-a-service (SECaaS) makes the latest updates available instantly. Such offerings are also flexible, scaling to fit the consumer’s needs with the option to add or take away components as those needs change. This allows CISOs to be more reactive to the shifting security landscape and avoid waste in their limited budgets.
SECaaS also has a shorter time to value and lower upfront cost than traditional security offerings, eliminating the need for investment in capital assets and constant physical maintenance of aging infrastructure. In addition, it’s possible to stage a transition from traditional offerings to SECaaS so that security moves over gradually from capital assets on-premises to the cloud.
Adopting Security-as-a-Service to Address the Skills Gap
The security skills gap is a pressing issue for many organizations, and in-house security professionals must be able to spend their time on the most business-critical tasks. By determining which activities, such as software configuration, maintenance and disaster recovery, can be managed by SECaaS or managed security services (MSS) providers, organizations can better prioritize their limited time and resources.
In the past, the prevailing idea was that you could switch security on and just leave it to work, so a large security team was not a high priority. However, this attitude has changed with the expansion of the threat landscape and the recognition of cybersecurity as an ever-evolving battle against increasingly sophisticated cybercriminals. Companies now need to decide whether to hire more security professionals — a struggle in a market with high demand and scarce skills — or rely more on technology and service providers.
The post Navigate the Shifting Threat Landscape With Security-as-a-Service appeared first on Security Intelligence.
Cloud risks have filled the news cycle of late, but the real cloud security landscape is nothing like the headlines would have you believe.
For example, server-side ransomware attacks, despite their high public profile, account for only about 2 percent of recorded incidents, according to Alert Logic’s “2017 Cloud Security Report.” In contrast, nearly three-quarters of security events involve attacks on web applications.
The study asserted that the public cloud is actually fairly secure. In fact, according to the report’s data, organizations using on-premises solutions experienced 51 percent more security incidents than firms that operate in the public cloud. Private and hybrid clouds, however, still have their fair share of security gaps.
Mapping the Cloud Security Landscape
The study examined more than 2.2 million security incidents recorded by more than 3,800 organizations over an 18-month period, CIO Insight reported. It found that web application attacks accounted for 75 percent of all incidents and that 85 percent of firms experience such attacks. Injection attacks, such as SQL injections, were the most common type of incident.
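The dominance of injection incidents is easy to illustrate. The following self-contained sketch (a hypothetical users table invented for the example, not taken from the report) shows how string concatenation lets a classic SQL injection payload through, while a parameterized query treats the same payload as harmless data:

```python
import sqlite3

# Hypothetical table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Vulnerable: attacker-controlled input is concatenated into the query.
payload = "' OR '1'='1"
vulnerable_sql = "SELECT name FROM users WHERE role = '%s'" % payload
leaked = conn.execute(vulnerable_sql).fetchall()  # matches every row

# Safer: a parameterized query binds the payload as a literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE role = ?", (payload,)
).fetchall()

print(len(leaked), len(safe))
```

The concatenated query becomes `WHERE role = '' OR '1'='1'`, which is true for every row; the parameterized version simply finds no role with that literal name.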
Unsurprisingly, the report found that threat actors are particularly drawn to e-commerce platforms and content management systems (CMS). This supports the notion that cybercriminals are increasingly eager to get their hands on intellectual property.
The survey also looked at comparative rates of attack against different types of application hosting environments. The public cloud fared best, with customers reporting an average of 405 security incidents over the 18-month window. Companies with on-premises storage, on the other hand, averaged 612 incidents over the same period.
Cybercriminals Aim for the Clouds
To be sure, these comparisons must be viewed in context. Private clouds and hybrid cloud environments are generally used by companies that handle a lot of highly sensitive data — the kind that draws attackers. Still, it is notable that public cloud users experience markedly fewer security incidents than on-premises firms.
It’s also worth pointing out that plain old brute-force attacks accounted for 12 percent of incidents, with 52 percent of them aimed at Windows platforms. This is noteworthy because the enterprise world is still largely a Windows environment.
The study recommended a few basic best practices to help organizations protect themselves in an evolving cloud security environment, such as whitelisting, consistent patching and careful handling of access privileges. Despite the widespread uncertainty and many misconceptions about the cloud, one thing is for sure: As long as security gaps exist, cybercriminals will continue to target sensitive data, no matter where it resides.
Kaspersky Lab has expanded its small and medium-sized business (SMB) offering with a new cloud-based product designed to provide an extra layer of security for the Exchange Online email service in Microsoft Office 365.
Since its founding in 2001, the Open Web Application Security Project (OWASP) has become a leading resource for online security best practices. In particular, its list of the top 10 “Most Critical Web Application Security Risks” is a de facto application security standard.
The recently released 2017 edition of the OWASP Top 10 marks its first update since 2013 and reflects the changes in the fundamental architecture of applications seen in recent years. These include:
- Source code being run on untrusted browsers.
- The creation of modular front-end user experiences using single page and mobile apps.
- Microservices being written in node.js and Spring Boot.
- The shielding of old code behind API and RESTful web services.
At Imperva, we deal with the attack types detailed in the Top 10 on a daily basis. Here, we want to offer our thoughts on the list, including what we agreed with (the good), what we thought was lacking (the bad) and what we disagreed with (the ugly).
But first, let’s see what’s changed since the 2013 list was released.
What’s New About the 2017 Report?
OWASP’s 2017 report includes an updated threat/risk rating system that scores each risk on exploitability, prevalence, detectability, and technical and business impact.
The attacks outlined below represent the newest web application threats, as seen in the 2017 OWASP Top 10.
A4 — XML External Entity (XXE)
A new category primarily supported by SAST data sets, XML External Entity (XXE) attacks take advantage of older or poorly configured XML processors to process hostile content embedded in an XML document.
By exploiting processors’ vulnerable code, dependencies or integrations, perpetrators can initiate remote requests from the server, scan internal systems and launch denial-of-service (DoS) attacks. This can leave vulnerable any application that accepts XML or inserts untrusted data into XML documents.
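One common mitigation, shown here as a minimal sketch (not from the OWASP text), is to refuse any untrusted XML that declares a DTD, since that is where XXE payloads live; production code should prefer a hardened parser such as defusedxml:

```python
import xml.etree.ElementTree as ET

def parse_untrusted_xml(text: str):
    """Reject documents that declare a DTD (where XXE payloads live),
    then parse with the standard library, which does not fetch
    external entities on its own."""
    if "<!DOCTYPE" in text or "<!ENTITY" in text:
        raise ValueError("DTDs are not allowed in untrusted XML")
    return ET.fromstring(text)

attack = """<?xml version="1.0"?>
<!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<foo>&xxe;</foo>"""

try:
    parse_untrusted_xml(attack)
except ValueError as err:
    print("blocked:", err)

print(parse_untrusted_xml("<foo>ok</foo>").text)
```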
A8 — Insecure Deserialization
Deserialization, i.e., the extraction of data from a series of bytes, is a constantly occurring process between applications communicating with each other. Improperly secured deserialization can lead to remote code execution, replay attacks, injection attacks, privilege escalation attacks and more.
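The remote code execution risk is concrete in any language with a native serialization format. This toy Python sketch (the `Exploit` class is invented for the demonstration) shows a pickle payload running code the moment it is deserialized, and a data-only format avoiding the problem:

```python
import json
import pickle

# Dangerous: unpickling attacker-controlled bytes can run arbitrary code.
class Exploit:
    def __reduce__(self):
        # On deserialization this calls print(); a real payload could
        # call os.system() or similar instead.
        return (print, ("code executed during unpickling!",))

malicious = pickle.dumps(Exploit())
pickle.loads(malicious)  # the side effect fires right here

# Safer: a data-only format such as JSON cannot smuggle code.
payload = json.loads('{"user": "alice", "role": "admin"}')
print(payload["user"])
```

This is why untrusted input should never reach a native deserializer; schema-validated data formats are the usual substitute.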
A10 — Insufficient Logging and Monitoring
Insufficient logging and monitoring, when combined with ineffective incident response, allows attackers to strengthen and prolong attack strategies, maintain their persistence, “pivot to more systems, and tamper, extract, or destroy data”.
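To make the contrast concrete, here is a minimal hypothetical sketch of the kind of logging-plus-alerting loop whose absence this category describes: failed logins are recorded, and repeated failures raise an alert instead of going unnoticed (names and thresholds are invented for the example):

```python
import logging
from collections import Counter

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("security")
failed_logins = Counter()

def record_failed_login(user: str, threshold: int = 3) -> bool:
    """Log the event and flag the account once failures hit a threshold."""
    failed_logins[user] += 1
    log.warning("failed login for %s (count=%d)", user, failed_logins[user])
    if failed_logins[user] >= threshold:
        log.error("possible brute force against %s; alerting responders", user)
        return True
    return False

alerts = [record_failed_login("alice") for _ in range(3)]
print(alerts)
```

Without even this minimal telemetry, the "pivot and persist" behavior described above proceeds invisibly.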
To make room for these new additions, OWASP adjusted the following threats:
- Insecure direct object references and missing function level access control were merged into a new category called broken access control.
- CSRF was downgraded to number 13 on OWASP’s list of security threats.
- Unvalidated redirects and forwards was downgraded to number 25.
Our Take on the 2017 Top 10
The 2017 OWASP Top 10 report contains a number of changes that we feel better reflect the current application threat landscape. That said, there are several points that could have been better explained as well as several missed opportunities to address additional application threats.
The 2017 Top 10 looks sharper than the 2013 version, in that it focuses more on trending topics and technologies.
A number of attacks listed in the 2013 report have since become less of an issue and were removed from the list with good reason. For example, less than five percent of data sets support CSRF today, while less than one percent support unvalidated redirects and forwards.
Meanwhile, new categories, such as XML external entity (XXE), insecure deserialization, as well as insufficient logging and monitoring allow for a better security posture against new kinds of attacks and threats, such as REST requests, API interactions and XML data transmissions.
There were several categories in the new Top 10 that weren’t described very well. In particular, injection is still too broad a topic and doesn’t add enough background to the types of injections to which vulnerable applications might be exposed.
Regarding “Unvalidated Redirects and Forwards,” which is also an input validation issue and a likely cross-site scripting (XSS) vector, we agree with its removal from the Top 10. That said, it’s still a prominent threat that should have been explicitly mentioned or folded into another control, not just downgraded.
While we appreciate the addition of new controls into the Top 10, “insufficient logging and monitoring” doesn’t seem like it fits the bill.
In our opinion, the Top 10 should be focused on tangible controls that can prevent or minimize the risk of being exposed to a bug. A logging and monitoring solution is an important tool for web application security. That said, it’s a reactive control and doesn’t exactly fit with the other controls on the list, which are preventative.
Download a copy of our e-book “Protect Your Applications Against All OWASP Top 10 Risks” to find out how you can protect your assets against these threats.
Do you agree with our analysis of the 2017 OWASP Top 10? Please let us know in the comments below.
The Federal Risk and Authorization Management Program (FedRAMP) is a framework that provides a standardized approach to authorizing, monitoring and conducting security assessments on cloud services. It is an integral part of the U.S. federal government’s Cloud First policy, which is designed to help government agencies leverage cloud solutions securely and more efficiently. This program focuses on reducing redundant work, streamlining processes, closing security gaps and minimizing costs associated with authorization.
Any accredited federal agency, authorized cloud service provider (CSP) or third-party assessment organization (3PAO) can be associated with FedRAMP. However, implementing it can be challenging. It takes time to execute properly and is not comparable to common reporting frameworks such as Statement on Standards for Attestation Engagements (SSAE 16) and Service Organization Control (SOC 2). In fact, FedRAMP is one of the most complex and in-depth compliance programs an organization can undertake.
10 Steps to Evaluate CSPs for FedRAMP Compliance
Below are 10 steps organizations must take to evaluate their CSPs for FedRAMP compliance.
1. Cloud Risk Assessment
Organizations must categorize the data they plan to store and share in the cloud by type and sensitivity. It’s important to remember that data located in the cloud is inherently more difficult to control and protect. Consider whether or to what extent the manipulation or exposure of this data could affect its confidentiality, integrity or availability. You may also want to perform a security assessment to determine whether a public, private or hybrid cloud solution carries more or less risk than simply hosting the data on-premises.
2. Security Policies
The next step is to create a security policy to define the controls and risks associated with the cloud service. This policy should cover which data, services and applications are secure enough to migrate to the cloud. Work with legal counsel before engaging a CSP to ensure that all internal controls meet the organization’s needs.
3. Encryption
Many CSPs offer encryption, which is one of the most effective protections against cyberthreats. However, it’s crucial to consider the security of the encryption keys provided by the CSP.
4. Data Backup
To achieve FedRAMP compliance, an organization must have adequate controls that back up cloud data. A business continuity and disaster recovery plan is even more critical and should be tested periodically to avoid outages.
5. Authentication
FedRAMP compliance also requires organizations to have robust authentication protocols in place. Most CSPs require an authentication method that facilitates mutual validation of identities between the organization and the provider.
These protocols rely on shared secret information to complete the authentication task, which protects cloud-bound data from man-in-the-middle (MitM), distributed denial-of-service (DDoS) and relay attacks. Other methods, such as smart cards, strong passwords and multifactor authentication, defend data against brute-force attacks. Finally, elliptic curve cryptography and steganography help prevent both internal and external impersonation schemes.
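The mutual validation of identities mentioned above can be sketched as a simple HMAC challenge-response. This is a toy illustration with an invented secret, not a production protocol; real deployments use standardized mechanisms such as mutual TLS:

```python
import hashlib
import hmac
import os

SHARED_SECRET = b"example-only-secret"  # provisioned out of band, never sent

def respond(challenge: bytes, secret: bytes = SHARED_SECRET) -> str:
    """Prove knowledge of the shared secret without transmitting it."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

# The server challenges the client...
challenge = os.urandom(16)
client_answer = respond(challenge)
server_ok = hmac.compare_digest(client_answer, respond(challenge))

# ...and the client issues its own challenge in the other direction,
# giving the mutual validation described above.
reverse_challenge = os.urandom(16)
client_ok = hmac.compare_digest(respond(reverse_challenge),
                                respond(reverse_challenge))

print(server_ok and client_ok)
```

The `hmac.compare_digest` call matters: a constant-time comparison avoids leaking how many leading bytes of a guess were correct.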
6. Determine CSP Capabilities
Cloud providers offer a variety of services, such as software-as-a-service (SaaS), platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS) offerings. SaaS is a service in which software is licensed to an organization as a subscription-based model. PaaS, on the other hand, is a public or private offering that sits behind a firewall and enables organizations to develop, execute and manage applications. Finally, IaaS solutions provide controlled automation and scalable resources via an application programming interface (API) dashboard. This type of service is often regarded as a virtual data center.
These common cloud services should be evaluated according to the organization’s cloud security policy and risk assessment.
7. CSP Security Policies and Procedures
FedRAMP also requires organizations to ensure that the CSP has policies and procedures to govern security processes and responsibilities. This involves obtaining an independent audit report from an accredited assessor. It is also important to review these procedures to guarantee compliance with other frameworks, such as the International Organization for Standardization (ISO) 27000 series.
8. Legal Implications
CSPs must adhere to global data security and privacy laws, meaning they must disclose any and all breaches to the appropriate government agencies. Because FedRAMP’s legal guidelines are in flux, always consult with your legal department to ensure compliance with federal and state laws, which are often defined in the cloud provider agreement. In most states, the owner of the data is responsible for maintaining compliance with these regulations.
9. Data Ownership
Data ownership is a vital criterion when it comes to reviewing a cloud service contract. The parameters can be confusing for organizations that have many stakeholders, so establish a comprehensive data governance program and reflect it in the CSP’s contract.
Implement continuous local backups to make sure any cloud outages do not cause permanent data loss. Security leaders should insist that the CSP uses end-to-end encryption on data in motion and at rest. Also remember that different jurisdictions can affect the security of data that is stored and/or transmitted in a foreign country.
10. Data Deletion
Cloud security compliance should be reviewed in the context of the organization’s policies and procedures for data deletion. You must also consider the difficulty of tracing the deletion of encrypted data. Some cloud providers use one-time encryption keys that are subsequently deleted along with the encrypted data, rendering it permanently useless.
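That one-time-key approach, sometimes called crypto-shredding, can be illustrated with a toy sketch. Here a one-time pad stands in for a real authenticated cipher such as AES-GCM, and an in-memory dict stands in for a key management service; both substitutions are for illustration only:

```python
import os

key_store = {}  # stand-in for a key management service (KMS)

def encrypt(record_id: str, plaintext: bytes) -> bytes:
    # One-time pad for illustration; real systems use AES-GCM or similar.
    key = os.urandom(len(plaintext))
    key_store[record_id] = key
    return bytes(p ^ k for p, k in zip(plaintext, key))

def decrypt(record_id: str, ciphertext: bytes) -> bytes:
    key = key_store[record_id]
    return bytes(c ^ k for c, k in zip(ciphertext, key))

def shred(record_id: str) -> None:
    # Deleting the key renders the ciphertext permanently useless,
    # even if copies of it linger on the provider's disks.
    del key_store[record_id]

ct = encrypt("user-42", b"sensitive cloud data")
assert decrypt("user-42", ct) == b"sensitive cloud data"
shred("user-42")
print("user-42" in key_store)
```

This is why key deletion, not data deletion, is often the only provable erasure mechanism in a multi-tenant cloud.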
The Long Road to Cloud Security
FedRAMP can help organizations reduce costs, save time and maximize cloud-based resources. However, unlocking these benefits requires a significant investment of time and money. Companies must be extremely thorough when evaluating cloud providers, and true compliance requires many more steps than the ones listed above. But these insights can give organizations seeking to do business with government agencies in the cloud a head-start on the long road to cloud security.
The post 10 Steps to Evaluate Cloud Service Providers for FedRAMP Compliance appeared first on Security Intelligence.
In the final post of our series on cloud migration, we’ve put together a list of strategic and immediate considerations as you plan to migrate your business to the cloud. From a high-altitude viewpoint, cloud security is based on a model of “shared responsibility” in which the concern for security maps to the degree of control any given actor has over the architecture stack. Thus, the most important security consideration is knowing exactly who is responsible for what in any given cloud project:
- Software as a Service: Typically, the cloud provider is responsible for the bulk of security concerns.
- Platform as a Service: The PaaS cloud provider is generally responsible for the security of the physical infrastructure. The consumer is responsible for everything they implement on the platform, including how they configure any offered security features.
- Infrastructure as a Service: The cloud provider has primary responsibility for the physical security of the servers and the data vulnerability of the network itself. The cloud user is responsible for the security of everything they build on the infrastructure.
A Simple Cloud Security Process Model
The development of a comprehensive cloud security process must consider a wide range of implementation details, such as design models and reference architectures. The following high-level process model for managing cloud security contains only the most essential steps:
- Identify enterprise governance, risk, and compliance requirements, and legacy mitigation controls.
- Evaluate and select your cloud provider, service model, and deployment model.
- Define the architecture of your deployment.
- Assess the security controls and identify control gaps.
- Design and implement controls to fill the gaps.
- Develop and implement a migration strategy.
- Modify your implementation as necessary.
Each migration process should be evaluated based on its own set of configurations and technologies, even when these projects are based on a single provider. The security controls for an application deployed on pure IaaS in one provider may look very different than a similar project that instead uses more PaaS from that same provider.
The key is to identify security requirements, define the architecture, and determine the control gaps based on the existing security features of the cloud platform. It’s essential that you know your cloud provider’s security measures and underlying architecture before you start translating your security requirements into cloud-based controls.
Checklist: Applications and Data Security for SPI
The three commonly recognized service models are referred to as the SPI (software, platform and infrastructure) tiers. Here are the main application and data security considerations for businesses using cloud services.
- Cloud users must understand the differences between cloud computing and traditional infrastructure or virtualization, and how abstraction and orchestration impact security.
- Cloud users should evaluate their cloud provider’s internal security controls and customer security features, so the cloud user can make an informed decision.
- Cloud users should, for any given cloud project, build a responsibilities matrix to document who is implementing which controls and how. This should also align with any necessary compliance standards.
- Cloud users should become familiar with the NIST model for cloud computing and the CSA reference architecture.
- Cloud users should use available tools and questionnaires to evaluate and compare cloud providers.
- Cloud users should use available tools to assess and document cloud project security and compliance requirements and controls, as well as who is responsible for each.
- Cloud users should use a cloud security process model to select providers, design architectures, identify control gaps, and implement security and compliance controls.
- Cloud users must establish security measures, such as a web application firewall (WAF), that allow only authorized web traffic to enter their cloud-based data center.
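The responsibilities matrix mentioned above need not be elaborate to be useful. In this hypothetical sketch (control names and mapped standards are invented examples, not a template), even a simple table structure makes ownership queryable:

```python
# Hypothetical responsibilities matrix for a single IaaS project.
matrix = {
    "physical security":       {"owner": "provider", "standard": "ISO 27001"},
    "hypervisor patching":     {"owner": "provider", "standard": "ISO 27001"},
    "guest OS hardening":      {"owner": "customer", "standard": "CIS Benchmarks"},
    "application firewalling": {"owner": "customer", "standard": "OWASP ASVS"},
    "encryption at rest":      {"owner": "customer", "standard": "PCI DSS"},
}

customer_controls = sorted(c for c, v in matrix.items()
                           if v["owner"] == "customer")
print(customer_controls)
```

Listing the customer-owned controls this way is a quick check that nothing falls into the gap between provider and consumer.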
Download our Cloud Migration Guide to learn more about approaches and security considerations for migrating your applications and data to the cloud.
More in the series:
Seagate recently patched several vulnerabilities discovered by researchers in the company’s Personal Cloud and GoFlex products, but some weaknesses impacting the latter remain unfixed.
GoFlex Home vulnerabilities
When we covered SecOps in May 2015 and again in January 2017, we discussed the importance of security within the DevOps-focused enterprise, covering topics such as data gathering, threat modeling, encryption, education, vulnerability management, automation, incident management and cognitive computing.
From a cybersecurity perspective, 2017 brought both wins and challenges to the community. Challenges include:
- High-profile vulnerabilities putting your vulnerability management processes to the test;
- Lack of education of basic IT security best practices, enabling malware to spread fast; and
- Lack of awareness of insecure default configuration settings in cloud services, which left adopters exposed from the start.
Looking at the positives, we saw the emergence of cognitive technologies, along with machine learning, playing a key part in cybersecurity. For example, Watson for Cyber Security helped in bridging the skills gap and providing quicker root cause analysis. User behavior analytics with machine learning started closing the insider threat gap in understanding the risks associated with privileged users. There is also closer integration of security information and event management (SIEM) systems with incident response capabilities.
2018 will continue to produce challenges, and we will see GDPR being enforced in Europe, which requires action now. The key steps are:
- Identifying what data is being collected;
- Deciding how to protect the data against internal and external attacks;
- Providing customers with a means to be forgotten; and
- Establishing incident management.
The Crucial Roles of SecOps and Cognitive Security
Information security continues to shift left, whether that be with known secure starting templates or more frequent code scanning via up-to-date cloud services and continuous security testing, and SecOps will play a crucial role in helping to ensure improved security without compromising agility. Cognitive-enabled tools will again be key to faster identification and resolution.
The availability of new hosting technologies such as Kubernetes by the large cloud infrastructure-as-a-service (IaaS) providers will bring interesting new challenges. Adopters must look beyond the hype when selecting vendors and consider key security considerations, including:
- Network protection. Are sufficient firewalling capabilities provided by the service provider?
- Hosting infrastructure security. Is the responsibility shared, and how does it impact our service availability?
Staying Ahead of Threats Through Collaboration
We are only as secure as our weakest link, and if we consume or delegate services to external vendors, then their security posture feeds into ours. Ultimately, we are responsible to our customers, so we must ask our providers for their security posture and what standards they have certified against. Transparency will be a key differentiator as we move forward.
As cloud vendors in 2018, we must stay ahead of our would-be attackers. With the potential for increasing financial and reputational penalties, it’s becoming even more critical. Threat sharing and collaboration will allow us to improve our security as a community while minimizing cost. Leaders in the IT and security spaces recognize the value of this collaboration at an enterprise level, and developers continue to drive content through threat portals such as the X-Force Exchange. We should ask ourselves, are we selecting our security vendors with their community presence in mind?
Yes, GDPR is a big-ticket item for 2018, but hopefully it has enabled budgets to be allocated to key security activities.
Notice: Clients are responsible for ensuring their own compliance with various laws and regulations, including the European Union General Data Protection Regulation. Clients are solely responsible for obtaining advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulations that may affect the clients’ business and any actions the clients may need to take to comply with such laws and regulations.
The products, services, and other capabilities described herein are not suitable for all client situations and may have restricted availability. IBM does not provide legal, accounting or auditing advice or represent or warrant that its services or products will ensure that clients are in compliance with any law or regulation.
A researcher has conducted an analysis of Jenkins servers and found that many of them leak sensitive information, including ones belonging to high-profile companies.
London-based researcher Mikail Tunç used the Shodan search engine to find Jenkins servers accessible from the Internet and discovered roughly 25,000 instances.
If you’re like many businesses, you’re moving applications into public and private cloud infrastructures. You’ve seen how the cloud’s agility, resiliency, and scalability drives business growth. Fortunately, rolling out new apps in the cloud is easy when you have containers, microservices, and DevOps supporting your efforts. But what’s not always as easy to figure out is application security—especially if you’re in the midst of migration and need to keep apps secure both on-premises and in the cloud.
Make no mistake: your apps will be attacked. According to the 2017 Verizon Data Breach Investigations Report, web app attacks are by far the number one cause of data breaches—with denial of service attacks the most common of these security incidents.
The good news? You can secure your apps as easily as you can roll them out when you have a flexible, scalable security solution in place.
In this article, we’ll discuss what you need to take into consideration to securely migrate apps to the cloud, and how Imperva FlexProtect can keep your applications secure wherever they live.
Security Model in the Public Cloud
Leading cloud vendors introduced a shared responsibility model for security in the cloud. Amazon states that AWS has “responsibility for security of the cloud,” while customers have “responsibility for security in the cloud.” Microsoft Azure, Google Cloud and other vendors also adopted this model. What does it mean for you? Cloud vendors provide the tools and services to secure the infrastructure (such as networking and compute machines), while you are responsible for things like network traffic protection and application security.
For example, cloud vendors help to restrict access to the compute instances (AWS EC2/Azure VM/Google CE) on which the web server is deployed (by using security groups/firewalls and other methods); they also deny web traffic from accessing restricted ports by setting only the needed HTTP or HTTPS listeners in the public endpoints (usually the load balancer).
But public cloud vendors do not provide the necessary tools to fully protect against application attacks such as the OWASP Top 10 risks or automated attacks. It’s your responsibility to establish security measures that allow only authorized web traffic to enter your cloud-based data center—just as with a physical data center. Securing web traffic in physical data centers is typically done by a web application firewall (WAF) and fortunately, a WAF can be deployed in the public cloud as well.
Choose Flexible Application Security for the Cloud
When choosing solutions to mitigate different web application threats, it’s important to make sure they offer the flexibility to pick the tools you need. The first mitigation layer is usually generic: it denies access from badly-reputed sources (“malicious IPs”) and blocks requests that match predefined signatures. This layer is useful against broad attacks, such as a botnet scanning for known vulnerabilities. The more targeted the attack, though, the more fine-grained the tools required to mitigate it, and the higher the level of control your security team needs. When an attacker launches an attack tailored to a specific web service, you need customizable tools to block it.
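A toy sketch of that first mitigation layer might look like the following; the reputation entry and signatures are invented for the example, and real WAFs use far richer rule sets:

```python
import re

MALICIOUS_IPS = {"203.0.113.7"}  # example entry from a reputation feed
SIGNATURES = [
    re.compile(r"(?i)union\s+select"),  # SQL injection probe
    re.compile(r"(?i)<script"),         # XSS probe
]

def first_layer_filter(source_ip: str, request: str) -> bool:
    """Return True if the request should be blocked."""
    if source_ip in MALICIOUS_IPS:
        return True
    return any(sig.search(request) for sig in SIGNATURES)

print(first_layer_filter("203.0.113.7", "/index.html"))        # blocked by IP
print(first_layer_filter("198.51.100.2", "/?q=union select"))  # blocked by rule
print(first_layer_filter("198.51.100.2", "/index.html"))       # allowed
```

The limits of this layer are exactly the point of the paragraph above: anything attack-specific requires rules your own security team can author and tune.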
An ideal solution would offer both generic and customizable tools with the flexibility to be deployed within the private network and public cloud while giving your security administrator full control, including deployment topology and security configuration. An application security solution that is deployed in the public cloud should support several key attributes:
Burst capacity: Automatically spawn new security instances which then register with the existing cluster of gateways.
Multi-cloud security: A security solution should support all the major public cloud infrastructures (AWS, Azure or Google Cloud Platform) and your own data center so you can secure applications wherever they live—now and in the future.
DevOps ready: Security solutions should employ machine learning to automatically understand application behavior.
Automation: Dynamic cloud environments require automation to launch, scale, tune policies and handle maintenance operations.
High availability: Business continuity demands that your security solution be highly available.
Centralized management for hybrid deployment: A security solution should have a centralized management solution that can control hybrid deployments in both the physical data center and in the cloud.
Pricing for Applications
Applications are moving to a more automated architecture and they’re being developed and rolled out faster than ever. If any of the following apply to you, then you need a flexible licensing solution for security:
- Moving to a microservices architecture
- Planning to use serverless computing such as AWS Lambda
- Deploying containers instead of traditional virtual machines
- Have a dedicated application DevOps team in your organization
- Concerned about your API security
- Moving your applications from on-premises to public cloud infrastructure like AWS, Azure or Google Cloud Platform
- Need to keep certain applications on-premises and need security for both cloud and on-premises
Imperva FlexProtect offers a single subscription with the flexibility to mix and match application security tools so you can secure applications wherever they live. FlexProtect security tools protect in-the-cloud and on-premises application portfolios, and keep your applications safe while you navigate the uncertainties of moving to a virtual, cloud-centric architecture.
Imperva application security solutions are available in a flexible, hybrid model that combines cloud-based services with virtual appliances to deliver application security and DDoS defense for the cloud. With FlexProtect, you can choose from the following Imperva security solutions:
- Imperva Incapsula combines security with performance optimization and load balancing to provide complete cloud-based protection for your business.
- Imperva SecureSphere Web Application Firewall (WAF) protects business critical applications from sophisticated cyberattacks.
- Imperva ThreatRadar is an advance-warning system that stops emerging threats before they impact your business.
Your organization needs a simple and flexible solution to facilitate a smooth transition from on-premises to the cloud. Imperva offers a solution that scales with your business while allowing you to choose tools based on your application security requirements. With FlexProtect, Imperva removes the dilemma associated with cloud migration planning and future proofs application security investments.
Contact us today to find out how the FlexProtect licensing model can help you keep your apps safe wherever they live, now and in the future.
Intel Patches for Meltdown and Spectre Cause More Frequent Reboots
Ninety-five percent of businesses have adopted some form of the cloud. But, according to recent research, securing cloud-based data remains a major concern.
A new study by Gemalto found that 77 percent of companies recognize the importance of security controls such as encryption. Although this number would seem to suggest a steady march toward more defensible cloud data, just 47 percent of companies queried in the report actually use encryption to secure their sensitive data. This creates a disconnect whereby good knowledge is not backed up by solid global policies, putting cloud data at risk.
The Evolving Cloud Security Challenge
Although 88 percent of survey respondents said they are confident that new global regulations will impact cloud governance and 91 percent believe that the need to encrypt data will become more important over the next two years, security practices don’t match the preaching.
On average, according to the study, just 40 percent of all data stored in the cloud is secured with encryption and key management solutions. Meanwhile, just 25 percent of IT professionals surveyed were “very confident” they knew the exact type and number of cloud services used by their business.
The hard truth here is that these aren’t great numbers — but they’re not exactly surprising, either. Consider the trajectory of the cloud. At first it was a disrupter, but now cloud services have become essential for day-to-day operations, application development and big data analysis.
Giving up the cloud is unthinkable, but the prospect of both securing distributed data and actively keeping track of every cloud-based application is overwhelming for many IT departments. As a result, global cloud policies rarely make it past the drawing board even as more cloud services are added to the corporate roster.
A Growing Cloud Infrastructure
There’s no shortage of cloud infrastructure investment. Google recently announced that it spent $30 billion over the last three years building up cloud infrastructure and now has plans for undersea cables connecting Chile and Los Angeles; the U.S., Ireland and Denmark; and Australia and Southern Asia.
In other words, companies already using the cloud will find it even more convenient to spin up new servers, deploy new applications and store more data. However, organizations with existing security issues will face even greater challenges — especially because 75 percent of survey respondents said it’s more complex to manage privacy and data protection regulations in the cloud than on-premises.
Navigating the Wild West of Cloud Policy
So how do companies grow with the cloud and ensure they’re acting responsibly when it comes to cloud security? It all starts with policy.
Right now, global clouds remain a kind of Wild West, where data unseen is data ignored, and applications roam freely across personal and corporate networks. Clamping down on security issues means drafting a global, cloud-specific policy that addresses emerging problems.
For example, many organizations are now writing policies that embrace the utility of shadow IT while placing it under the purview of IT departments. In effect, this allows employees to retain some control over their cloud environment while granting IT the final word.
Encryption policies, meanwhile, are best designed for new data. Enterprises should mandate that all data moving to cloud storage be properly encrypted, then provide the personnel and technological support to make this a viable outcome. After all, the enemies of great policy are poor budgeting and sky-high expectations. Post-storage encryption is a long-term project that is doomed to sink new policies if attached as a core component.
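A policy is only viable if it can be enforced mechanically. As a sketch of what that enforcement might look like, the function below audits an upload manifest against an "encrypt all new data before cloud storage" mandate; the `encrypted` and `key_id` metadata fields are hypothetical and would map to whatever schema your storage provider actually exposes:

```python
def check_upload_policy(objects):
    """Flag objects that violate the encrypt-before-upload mandate.

    Each object is a dict of upload metadata; 'encrypted' and 'key_id'
    are assumed field names, not any provider's real schema.
    """
    violations = []
    for obj in objects:
        if not obj.get("encrypted"):
            violations.append((obj["name"], "not encrypted"))
        elif not obj.get("key_id"):
            violations.append((obj["name"], "no managed key recorded"))
    return violations
```

Running such a check in the upload pipeline turns the policy into a gate for new data, without taking on the long-term project of retroactively encrypting what is already stored.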
The bottom line is that companies understand the need for cloud security but lack the global processes to follow through. Better outcomes demand specific policies backed by budgets that accommodate both trained security professionals and cutting-edge cloud solutions.
The post Lacking Cloud Security Policies Leave 60 Percent of Data at Risk appeared first on Security Intelligence.
Tel Aviv, Israel-based startup PureSec emerged from stealth mode on Wednesday with a security platform designed for serverless architectures and a guide that describes the top 10 risks for serverless applications.
Regardless of your current IT environment or your vision for migrating to the cloud, numerous strategies exist that can accommodate your cloud-migration approach. Fortunately, this range of options allows you to proceed with caution while making progress toward your ultimate objective.
Always keep in mind that transitioning to the cloud need not be an “all or nothing” proposition. It is certainly possible, and indeed in many cases desirable, to leave some applications running in a local, traditional data center while others are moved to the cloud. It’s this “hybrid model” that makes it possible for companies to move applications to the cloud at their own pace.
We looked at service and deployment models for migrating to the cloud in our last post. This article provides summary descriptions of the options available to you when considering your cloud migration initiative.
Top Five Cloud Migration Strategies
1. Re-hosting
Sometimes referred to as “lift and shift,” re-hosting simply entails redeploying applications to a cloud-based hardware environment and making the appropriate changes to the application’s host configuration. This type of migration can provide a quick and easy cloud migration solution. There are trade-offs to this strategy, as the IaaS-based benefits of elasticity and scalability are not available with a re-hosting deployment.
However, the solution is made even more appealing by the availability of automated tools such as Amazon Web Services VM Import/Export. Still, some customers prefer to “learn by doing,” and opt to deploy the re-hosting process manually. In either case, once you have applications actually running in the cloud they tend to be easier to optimize and re-architect.
Re-hosting is particularly effective in a large-scale enterprise migration. Some organizations have realized a cost savings of as much as 30 percent, without having to implement any cloud-specific optimizations.
2. Re-platforming
This strategy entails running applications on the cloud provider’s infrastructure. You might make a few cloud-related optimizations to achieve some tangible benefit with relative ease, but you aren’t spending developer cycles to change an application’s core architecture.
Advantages of re-platforming include its “backward compatibility” that allows developers to reuse familiar resources, including legacy programming languages, development frameworks, and existing caches of an organization’s vital code.
An unfortunate downside to this strategy is the nascent state of the PaaS market, which doesn’t always provide some of the familiar capabilities offered to developers by existing platforms.
3. Repurchasing
This solution most often means discarding a legacy application or application platform, and deploying commercially available software delivered as a service. The solution reduces the need for a development team when requirements for a business function change quickly. The repurchasing option often manifests as a move to a SaaS platform such as Salesforce.com or Drupal. Disadvantages can include inconsistent naming conventions, interoperability issues, and vendor lock-in.
4. Refactoring / Re-architecting
This solution involves re-imagining how an application is architected and developed, typically using the cloud-native features of PaaS. This is usually driven by a strong business need to add features, scale, or performance that would otherwise be difficult to achieve in the application’s existing environment.
Unfortunately, this means the loss of legacy code and familiar development frameworks. On the flip side, it also means access to world-class developer’s tools available via the provider’s platform. Examples of productivity tools provided by PaaS providers include application templates and data models that can be customized, and developer communities that supply pre-built components.
The primary disadvantage to a PaaS arrangement is that the customer becomes extremely dependent on the provider. The fallout from a disengagement with the vendor over policies or pricing can be quite disruptive. A switch in vendors can mean abandoning most, if not all, of a customer’s re-architected applications.
5. Retiring
Typically, the initial step in the cloud migration process is the discovery of your entire IT portfolio. Often, this discovery process entails application metering to determine the actual usage of deployed applications. It’s not unusual to find that anywhere between 10 and 20 percent of an enterprise IT estate is no longer being used. Retiring these unused applications can have a positive impact on the company’s bottom line; not just with the cost savings realized by no longer maintaining the applications, but by allowing IT resources to be devoted elsewhere, and by minimizing security concerns for the obsolete applications.
Download our Cloud Migration Guide to learn more about approaches and security considerations for migrating your applications and data to the cloud.
Organizations are flocking to cloud services and mobile devices to cut costs and boost productivity. Despite the benefits, these technologies exacerbate the challenge of verifying identities and managing access to applications and data by consumers, employees and business partners from multiple devices and locations.
Let’s take a look at some of the most common identity and access management (IAM) challenges and how organizations can resolve them without compromising employee productivity.
Common Identity and Access Management Challenges
Organizations struggle to vet identities and approve access requests because the data resides in various locations and business units. Requesters often encounter roadblocks when seeking access, leading them to escalate requests to upper management and override the proper vetting process. Furthermore, those tasked with approving requests lack sufficient insight into which employees require access to confidential data.
The lack of a centralized, authoritative identity repository for users makes reconciliation another significant challenge. Additional problems arise when privileges on systems drift out of sync with the access levels that were originally granted and provisioned.
When it comes to certification and accreditation, examiners may have insufficient knowledge of access needs. Not to mention, processes tend to be manual, cumbersome and inconsistent between business units. This task becomes even more difficult when examiners must conduct multiple, redundant and granular validations.
Provisioning and deprovisioning identities can pose a critical challenge when manual provisioning processes are ineffective. Organizations that fail to remove improper IAM privileges or resort to cloning access profiles will face similar struggles.
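One concrete deprovisioning control is a periodic reconciliation of active accounts against the HR roster, flagging orphaned accounts for removal. The sketch below assumes a simple roster format (a list of records with `id` and `status` fields) purely for illustration:

```python
def find_orphaned_accounts(active_accounts, hr_roster):
    """Return accounts with no matching active employee.

    `hr_roster` entries are assumed to carry 'id' and 'status' fields;
    any account not backed by an active employee should be deprovisioned.
    """
    current_employees = {e["id"] for e in hr_roster if e["status"] == "active"}
    return sorted(a for a in active_accounts if a not in current_employees)
```

Scheduling a check like this closes the gap left by manual processes that fail to remove privileges when employees leave or change roles.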
Failure to segregate duties and monitor administrators, power users and temporary access privileges can further impede enforcement. Other issues include lack of support for centralized access management solutions, such as directories and single sign-on, outdated or nonexistent access management policies, and failure to establish rule-based access.
Finally, compliance concerns arise when performance metrics do not exist and/or do not align with security requirements, such as removing identities and access privileges automatically upon an employee’s termination. Laborious and time-consuming audits only make this problem worse.
The CISO’s Role in Resolving IAM Issues
Chief information security officers (CISOs) must meet these challenges. Their teams must vet identities, approve appropriate access entitlements, and grant or revoke user identities, access and entitlements in a timely manner. Security leaders must also provision proper access to applications, data and resources for users who need it and examine identities and the corresponding access privileges periodically to realign with users’ job functions.
Enforcing compliance in accordance with the organization’s IAM policy is another key responsibility of the CISO. A strong IAM strategy also requires security leaders to define performance metrics and implement periodic or real-time automated auditing tools.
Considerations for Mobile and Cloud
Today, many organizations have gone mobile with bring-your-own-device (BYOD) policies, enabling employees to access corporate data remotely. IAM serves as a foundational security component in environments that connect to mobile platforms.
Cloud services have also added daunting complexity to the IAM equation, forcing organizations to operate their capabilities on-premises and integrate with similar capabilities delivered by a cloud service provider (CSP). While these cloud platforms increase reliance on logical access controls, they also reduce network access controls.
Federation, role-based access and cloud-based IAM solutions exist to address these requirements. For example, the need to access apps hosted on the cloud goes hand in hand with the need to manage identities to protect personally identifiable information (PII).
Identity-as-a-service (IDaaS) is another effective solution to accelerate IAM deployments in the cloud. IDaaS supports federated authentication, authorization and provisioning, and it is a viable alternative to on-premises IAM solutions. When it comes to return on security investment, IDaaS eliminates the expense of implementing an on-premises solution.
It’s important to understand the need for IAM capabilities that effectively govern access to internally hosted apps. In a hybrid cloud IAM model, the IDaaS solution will need agent APIs or appliances that operate within the IT infrastructure to completely outsource the function. Securing these agents and interfaces represents a new source of risk for most organizations, and this risk must be managed.
Integrating Identity Management With Data Loss Prevention
It’s common for security professionals to provide identity information from an IAM tool to a data loss prevention (DLP) solution that continuously monitors sensitive data and correlates events to minimize the risk of losing sensitive data. The events are also correlated with analytical artificial intelligence and machine learning tools that analyze historical access behaviors to detect potential fraud.
Both IAM and DLP solutions must be leveraged to address insider threats and emerging threat vectors. Behavioral analytics and incident forensics tools provide additional monitoring capabilities. By integrating both of these solutions, organizations can handle the fast pace of emerging IT trends and threats with mobile and cloud computing.
Securing Social Media Identities
Organizations often leverage social media to interact with their customers, increase brand awareness and create a common identity repository. But if these social identities are breached, companies can face legal, regulatory, operational and reputational risks that may lead to the loss of customers.
Social media services must deploy strong IAM solutions to protect corporate accounts. These solutions include multifactor authentication (MFA) and notifications to alert users of multiple failed login attempts or attempts to authenticate from anomalous geographic regions. Awareness programs to educate employees about social media security must be an essential ingredient. CISOs should also inquire with legal to ensure that service-level agreements (SLAs) with social media providers account for proper IAM practices.
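The alerting described above can be sketched in a few lines. This toy monitor counts consecutive failed logins per user and flags successful logins from outside a user's usual regions; the threshold value and the "home region" model are illustrative assumptions, not any product's actual logic:

```python
from collections import defaultdict

FAILED_LOGIN_THRESHOLD = 3  # illustrative; tune to your environment

def scan_auth_events(events, home_regions):
    """Alert on repeated failures and logins from unusual regions.

    `events` is a list of (user, outcome, region) tuples and
    `home_regions` maps each user to their expected regions.
    """
    failures = defaultdict(int)
    alerts = []
    for user, outcome, region in events:
        if outcome == "failure":
            failures[user] += 1
            if failures[user] == FAILED_LOGIN_THRESHOLD:
                alerts.append((user, "repeated-failures"))
        elif outcome == "success":
            failures[user] = 0  # reset the streak on success
            if region not in home_regions.get(user, set()):
                alerts.append((user, f"anomalous-region:{region}"))
    return alerts
```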
The Best of Both Worlds
In our increasingly mobile and connected world, IAM is more crucial than ever. To remain competitive, businesses around the world must embrace technologies and policies that enable employees to be as productive as possible.
However, it only takes one major data breach to negate all the benefits of that productivity. With a strong IAM program that proactively monitors user behavior for potentially malicious activity and periodically realigns access privileges with shifting job roles, organizations can have the best of both worlds: an empowered, productive workforce and a robust data security strategy.
The post Meeting Identity and Access Management Challenges in the Era of Mobile and Cloud appeared first on Security Intelligence.
Not long ago, for security, compliance or other reasons, it was unthinkable for many regulated organizations to move sensitive data into the cloud. It’s striking how things have changed.
Maybe it was inevitable that services like email were cloud migration candidates. People trust Microsoft, and it’s quite impressive what they have done to make Office 365 simple to adopt and easy to maintain. It also has been reliable and secure enough to gain massive market share as organizations became comfortable with the risks. But aren’t databases different? Aren’t they exceptions?
Yes, they are different, but it seems that they are not exceptions. Resistance to cloud database migration is falling, and even some highly regulated companies are planning to deploy or are already migrating databases to cloud-based architectures to meet growing business demands.
Architecture Options and Guidance
What is the right architecture? There is a lot of information available to advise you about that. As an example, if you are a Gartner customer, you can find their summary of architectural options here and a suggested framework for building a plan in this report. Similarly, Forrester has published extensive reports on cloud technology adoption, such as this one on relatively new Database as a Service architectural options.
However, one thing that moving to the cloud won’t change is that businesses and government organizations are still required to audit, monitor, and secure sensitive information, such as personally identifiable information (PII), financial records, or medical records, to comply with ever-increasing regulatory mandates. In fact, both existing and new regulations, such as the European Union’s General Data Protection Regulation (GDPR), often come with very expensive penalties for noncompliance.
Governance and Security Challenges
Moving databases to the cloud – or launching new ones – presents new enterprise governance and security challenges. Given those challenges, it’s striking how rapidly the transformation is happening. Perhaps it’s because the economic pros of off-loading system and/or database maintenance to a cloud platform are so compelling they far outweigh the cons.
Still, despite a clear mandate to proceed, organizations should move with some caution. Application groups should not charge ahead without any compliance and security group participation. So, what is it then that compliance specialists, security staff and other IT professionals will contribute? What are their primary planning concerns?
How Enterprises are Responding
Sometimes, when planning transformational change, it’s helpful to learn what your peers are thinking or have done. Imperva recently commissioned Forrester Consulting to survey 150 IT professionals from different size enterprise organizations who have completed or are in the process of adopting new big data or cloud database technology and questioned them about their concerns, expectations and results. Here we share a few insights from those survey results. You can download the full report here.
Top 5 Security and Compliance Requirements Ranked
One question asked the survey participants to identify their top five governance requirements to achieve the benefits they want from their new architectures. Not surprisingly, the responses focused first on analyzing and managing threats, but also included the need for managing consistent policies across cloud databases, big data lakes, and existing on-premises databases. Here’s the response breakdown from the survey for that question (Figure 1):
Figure 1: Discovering and analyzing vulnerabilities and risk lead as the top three requirements
The consistent policies requirement makes perfect sense when you consider that most cloud migration will be a gradual process, since few organizations will be able to move everything at once. Realistically most organizations will probably end up managing both cloud and data center-hosted systems for a significant length of time.
Benefits of Database Activity Monitoring
The report went on to ask other questions, and for those with completed projects, to rate the outcome for governance measures they have taken. Specifically, respondents were questioned about the use of database activity monitoring (DAM) tools and how they’ve helped bring visibility and control to their processes. The chart below summarizes the expected versus realized benefits they found in a DAM solution (Figure 2):
Figure 2: Improved data compliance is the top realized benefit of a DAM solution.
It’s interesting to note that the one instance where the realized benefit didn’t surpass the expected benefit was with time spent on security. While DAM tools can offer greater security capabilities, they ultimately are not a replacement for dedicated security and compliance efforts from IT teams. In-house security expertise is a must-have regardless of where data resides—in the cloud, on-premises, or both.
Download the complete Forrester survey report here: “Modern Database Architectures Demand Modern Data Security Measures.”
Data security is on everyone’s mind these days, and for good reason. The number of successful data breaches is growing thanks to the increased attack surfaces created by more complex IT environments, widespread adoption of cloud services and the increasingly sophisticated nature of cybercriminals.
One part of this story that has remained consistent over the years, however, is that most security breaches are preventable. Although every organization’s security challenges and goals are different, there are certain mistakes that many companies make as they begin to tackle data security. What’s worse, these mistakes are often accepted as the norm, hiding in plain sight under the guise of common practice.
Should you be concerned about the potential for a data breach? Let’s see if you can fill in the blanks:
- Compliance does not equal ______.
- Recognize the need for _____ data security.
- Establish who _____ the data.
- Fix known ______.
- Prioritize and ______ data activity monitoring.
Five Common Data Security Failures
Below are five common data security failures that, if left unchecked, could lead to unforced errors and contribute to the next major data breach.
1. Failure to Move Beyond Compliance
It is often said that compliance does not equal security, and most security professionals would agree with that statement. However, organizations often focus their limited security resources on achieving compliance and, once they receive their certifications, become complacent. As a result, many of the largest data breaches in recent years have happened in organizations that may have been fully compliant on paper.
2. Failure to Recognize the Need for Centralized Data Security
Compliance can help raise awareness of the need for data security, but without broader mandates that cover data privacy and security, companies forget to move past compliance and actually focus on consistent, enterprisewide data security. A typical organization today has a heterogeneous IT environment that is constantly changing and growing. New types of data sources pop up weekly, if not daily, and sensitive data is dispersed across all of these sources.
3. Failure to Assign Responsibility for the Data Itself
Even if stakeholders are aware of the need for data security, in many companies no one specifically owns responsibility for the sensitive data that’s being collected, shared and leveraged to perform business operations. This becomes obvious once you try to find out who is actually responsible.
4. Failure to Fix Known Vulnerabilities
According to Gartner, 99 percent of all exploits use known vulnerabilities, while malware and ransomware attacks typically leverage vulnerabilities that are at least six months old. Recent high-profile breaches have resulted from known flaws that went unpatched even after fixes were released. Cybercriminals actively seek unpatched vulnerabilities because they are easy points of entry.
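Closing this gap starts with comparing the installed software inventory against an advisory feed. The sketch below uses a made-up advisory table and version tuples for simplicity; real scanners consume feeds such as the NVD or vendor security bulletins:

```python
# Hypothetical advisory feed: package -> earliest fixed version.
ADVISORIES = {
    "examplelib": (2, 4, 1),
    "webframework": (1, 9, 0),
}

def unpatched(inventory):
    """Return packages installed at versions below the advisory fix.

    `inventory` maps package name to an installed version tuple;
    tuple comparison gives the correct version ordering.
    """
    findings = []
    for name, version in inventory.items():
        fixed = ADVISORIES.get(name)
        if fixed and version < fixed:
            findings.append(name)
    return sorted(findings)
```

Because these are *known* flaws with *available* fixes, a report like this translates directly into a patch work queue rather than an open-ended investigation.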
5. Failure to Prioritize and Leverage Data Activity Monitoring
In addition to moving past compliance, spreading security awareness, establishing data ownership and addressing vulnerabilities, monitoring data access and use is an essential part of any data security strategy. Organizations need to know who, how and when people are accessing data, whether they should be, whether that access is normal and whether it represents elevated risk.
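At its simplest, the "is this access normal" question reduces to comparing each access event against a per-user baseline. The baseline format below is an illustrative assumption; in practice it would be built from historical activity logs by a DAM tool:

```python
def flag_unusual_access(access_log, baseline):
    """Flag reads of data sources a user has never touched before.

    `baseline` maps user -> set of data sources they normally access
    (assumed to be precomputed from historical logs).
    """
    flagged = []
    for user, source in access_log:
        if source not in baseline.get(user, set()):
            flagged.append((user, source))
    return flagged
```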
Taking Steps to Close Data Security Gaps
There is nothing easy about securing sensitive data to combat today’s threat landscape, but companies can take steps to ensure that they are devoting the right resources to their data protection strategy. Few organizations, however, can afford all the security measures they would like to have. When resources and budgets are limited, it is of paramount importance to prioritize and leverage the resources they do have.
To learn more about common data security missteps, read the white paper, “Five Epic Fails in Data Security: Common Data Security Pitfalls and How to Avoid Them” and watch the on-demand webinar, “Epic Fails in Data Security and How To Avoid Them.”
The post Five Epic Fails in Data Security: Do You Know How to Avoid Them? appeared first on Security Intelligence.
The advantages offered by a cloud-based environment make it an easy decision for most companies to make. Still, there are numerous critical choices to be made that can transform the complexities of the migration process into a relatively smooth transition—especially regarding application and data security.
This article describes the options available to you when you are planning a migration of application and data resources to the cloud.
Migration Strategy Fundamentals
As with any nascent methodology, business objectives will most likely drive your migration strategy. In addition, there are fundamental components to a migration strategy that are essential to all cloud migration initiatives:
- Define an end-to-end strategy that takes into consideration your business objectives as well as the impact of cloud migration on IT operations.
- Take the opportunity to discover and evaluate your enterprise application portfolio to see where inefficiencies exist, and where they can be remediated and optimized with available cloud services.
- Redesign your business applications to integrate effectively with the specific service models offered in the cloud.
- Understand the model of shared responsibility as it relates to security policy and risk mitigation, and develop policies and controls accordingly.
After you have a thorough and accurate picture of your application portfolio, you can start looking at where to start your application migration. There will be “low-hanging fruit” that will be the easiest to migrate, along with other applications that present complexities requiring additional time and attention.
Develop a plan that can be used as a framework for each application that you migrate to the cloud and the order in which they are to be migrated. Since the plan requires input from other departments, keep in mind that it will likely need to be amended and modified as you proceed through your application portfolio review.
Three Leading Service Models
There are three commonly recognized service models, sometimes referred to as the SPI (Software, Platform and Infrastructure) tiers (SaaS, PaaS, and IaaS), that describe the foundational categories of cloud services:
- Software as a Service (SaaS) can be compared to a one-stop shop that provides everything you need to run an application. Typically, SaaS providers build applications on top of platform as a service (PaaS) and infrastructure as a service (IaaS) offerings to take advantage of all the inherent economic benefits of the IaaS and PaaS service models.
SaaS examples: Google Apps, Salesforce, Workday, Concur, Citrix GoToMeeting, Cisco WebEx.
- Platform as a Service (PaaS) provides a platform that offers management of servers, networks, and other system components. Cloud users only see the platform, not the underlying infrastructure.
PaaS examples: Salesforce Heroku, AWS Elastic Beanstalk, Microsoft Azure, Engine Yard, and Apprenda.
- Infrastructure as a Service (IaaS) provides a shared pool of computer, network, and storage resources. Cloud providers use the technique of “abstraction,” typically through virtualization, to create a pool of resources. The abstracted resources are then “orchestrated” by a set of connectivity and delivery tools.
IaaS examples: DigitalOcean, Linode, Rackspace, Amazon Web Services (AWS), Cisco Metapod, Microsoft Azure, Google Compute Engine (GCE), Joyent.
Most, if not all, communications between cloud components are typically handled by application programming interfaces (APIs). A set of APIs is often made available to the cloud user so they can manage resources and configuration.
Cloud deployment models apply across the entire range of service models. They describe how cloud technologies are implemented and consumed:
- Public Cloud – Owned and operated by a cloud service provider and made available to the public or a major industrial sector on a pay-as-you-go basis.
- Private Cloud – Operated solely for a single organization and managed by the organization or by a third party. A private cloud may be located on- or off-premises.
- Community Cloud – Shared by several organizations with common concerns. These concerns can include security requirements or governance considerations. The deployment can be managed by the community or by a third party. A community cloud can be located on- or off-premises.
- Hybrid Cloud – Typically comprises two or more clouds (private, community, or public) and possibly an on-premises infrastructure as well. The different cloud deployments maintain their discrete identities but can interoperate via standard or proprietary technologies. An example of interoperability is “cloud bursting,” which enables an application to run in a private cloud and burst into a public cloud during increased demands for computing capacity.
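The cloud-bursting interoperability described above can be sketched as a simple placement decision: run in the private cloud until demand exceeds its capacity, then overflow to the public cloud. This is a toy illustration; the capacity figure and function name are our own, not from any particular orchestration product.

```python
# Toy sketch of a "cloud bursting" decision: keep workloads in the private
# cloud until demand exceeds its capacity, then overflow ("burst") the
# remainder into the public cloud. Names and capacities are illustrative.

PRIVATE_CAPACITY = 100  # units of load the private cloud can absorb

def place_workload(total_load):
    private = min(total_load, PRIVATE_CAPACITY)
    public = max(0, total_load - PRIVATE_CAPACITY)
    return {"private": private, "public": public}

print(place_workload(80))   # {'private': 80, 'public': 0}
print(place_workload(130))  # {'private': 100, 'public': 30}
```

Real schedulers weigh latency, data gravity, and cost, but the deny-overflow-to-public default shown here is the core of the pattern.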
There are several options available to you when you consider migrating to the cloud. Your strategy should cover big-picture considerations while also giving due attention to security issues. In our next blog post, we'll look at how to migrate your security policies from on-premises to the cloud, depending on the deployment you select.
To learn more about approaches and security considerations for migrating your applications and data to the cloud, download our Cloud Migration Guide.
The McAfee Advanced Threat Research (ATR) Team has closely followed the attack techniques that have been named Meltdown and Spectre throughout the lead-up to their announcement on January 3. In this post, McAfee ATR offers a simple and concise overview of these issues, to separate fact from fiction, and to provide insight into McAfee’s capabilities and approach to detection and prevention.
There has been considerable speculation in the press and on social media about the impact of these two new techniques, including which processors and operating systems are affected. The speculation has been based upon published changes to the Linux kernel. McAfee ATR did not want to add to any confusion until we could provide our customers and the general public solid technical analysis.
A fully comprehensive writeup comes from Google Project Zero in this informative technical blog, which allowed ATR to validate our conclusions. For more on McAfee product compatibility, see this business Knowledge Center article and this Consumer Support article.
Meltdown and Spectre are new techniques that build upon previous work, such as “KASLR” and other papers that discuss practical side-channel attacks. The current disclosures build upon such side-channel attacks through the innovative use of speculative execution.
Speculative execution has been a feature of processors for at least a decade. Branch speculation is built on the Tomasulo algorithm. In essence, when a branch in execution depends upon a runtime condition, modern processors make a “guess” to potentially save time. This speculatively executed branch proceeds by employing a guess of the value of the condition upon which the branch must depend. That guess is typically based upon the last step of the same branch’s previous execution. The conditional value is cached for reuse in case that particular branch is taken again. There is no loss of computing time if the condition arrives at a new value because the processor must in any event wait for the value’s computation. Invalid speculative executions are thrown away. The fact that invalid speculations are tossed is a key attribute exploited by Meltdown and Spectre.
Despite the clearing of invalid speculative execution results without affecting memory or CPU registers, data from the execution may be retained in the processor caches. The retaining of invalid execution data is one of the properties of modern CPUs upon which Meltdown and Spectre depend. More information about the techniques is available on the site https://meltdownattack.com.
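The bounds-check-bypass pattern behind Spectre's first variant can be sketched as follows. This is written in Python purely for readability: Python itself is not exploitable this way, and this is an illustration of the pattern that exists in compiled, low-level code, not an exploit. The array names follow the convention used in the Spectre paper.

```python
# Illustrative sketch of the Spectre "variant 1" bounds-check-bypass
# pattern. The side channel exists at the hardware level in compiled code;
# this Python version only shows the shape of the vulnerable code.

SECRET = b"secret"              # data the victim should never reveal
array1 = [1, 2, 3, 4]           # an attacker-influenced index selects from here
array2 = [0] * (256 * 4096)     # probe array whose cache state forms the side channel

def victim_function(x):
    # The CPU may *speculatively* execute the branch body even when x is out
    # of bounds, if the branch predictor guesses "taken" based on prior calls.
    if x < len(array1):
        value = array1[x]
        # In the hardware attack, this dependent load leaves a cache
        # footprint at an address derived from `value`. The mis-speculated
        # result is discarded, but the cache state survives and can later be
        # recovered by timing accesses to array2.
        return array2[value * 4096]
    return None

print(victim_function(2))   # in-bounds: architecturally correct result, 0
print(victim_function(10))  # out of bounds: architecturally returns None
```

In the real attack, the attacker first trains the branch predictor with in-bounds values, then supplies an out-of-bounds `x` so the speculative load reads secret memory and encodes it into the cache.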
Because these techniques can be applied (with variation) to most modern operating systems (Windows, Linux, Android, iOS, MacOS, FreeBSD, etc.), you may ask, “How dangerous are these?” “What steps should an organization take?” and “How about individuals?” The following risk analysis is based upon what McAfee currently understands about Meltdown and Spectre.
There is already considerable activity in the security research community on these techniques. Sample code for two of the three variants was posted by Graz University of Technology (in an appendix of the Spectre paper). Erik Bosman has also tweeted that he has built an exploit, though this code is not yet public. An earlier example of side-channel exploitation based upon memory caches was posted to GitHub in 2016 by Daniel Gruss, one of the Meltdown-Spectre researchers. Despite these details, as of this writing no known exploits have yet been seen in the wild. McAfee ATR will continue to monitor researchers' and attackers' interest in these techniques and provide updates accordingly. Given the attack surface of nearly every modern computing system and the relative ease of exploitation, it is highly likely that at least one of the aforementioned variants will be weaponized very quickly.
McAfee researchers quickly compiled the public exploit code for Spectre and confirmed its efficacy across a number of operating systems, including Windows, Linux, and MacOS.
Any technique that allows an attacker to cross virtual machine boundaries is of particular interest, because such a technique might allow an adversary to use a cloud virtual machine instance to attack other tenants of the cloud. Spectre is designed to foster attacks across application boundaries and hence applies directly to this problem. Thus, major cloud vendors have rushed to issue patches and software updates in advance of the public disclosure of these issues.
Additionally, both Meltdown and Spectre are exceptionally hard to detect as they do not leave forensic traces or halt program execution. This makes post-infection investigations and attack attribution much more complex.
Even though we have not seen any malware currently exploiting these techniques, McAfee is currently evaluating opportunities to provide detection within the scope of our products; we expect most solutions to lie within processor and operating system updates. Based on published proofs of concept, we have provided some limited detection under the names OSX/Spectre, Linux/Spectre, and Trojan-Spectre.
Microsoft has released an out-of-cycle patch because of this disclosure: https://support.microsoft.com/en-us/help/4056892/windows-10-update-kb4056892. Due to the nature of any patch or update, we suggest first applying manual updates on noncritical systems, to ensure compatibility with software that involves the potential use of low-level operating system features. McAfee teams are working to ensure compatibility with released patches where applicable.
While the world wonders about the potential impact of today’s critical disclosures, we also see a positive message. This was another major security flaw discovered and communicated by the information security community, as opposed to the discovery or leak of “in the wild” attacks. Will this disclosure have negative aspects? Most likely yes, but the overall effect is more global attention to software and hardware security, and a head start for the good guys on developing more robust systems and architectures for secure computing.
The post Decyphering the Noise Around ‘Meltdown’ and ‘Spectre’ appeared first on McAfee Blogs.
It’s that time of the year when we look back at the tech trends of 2017 to provide us with a hint of things to come. Accordingly, let’s engage in our favorite end-of-year pastime: predictions about the coming year.
Equipped with Imperva’s own research, interactions with our customers, and a wealth of crowdsourcing data analyzed from installations around the world, we’ve looked ahead to the future of cybersecurity and compiled a few significant trends IT security pros can expect to see in 2018.
Here are our top five predictions for 2018 and what you can do to prepare for them:
1. Massive Cloud Data Breach
Companies have moved to cloud data services faster than anticipated even in traditional industries like banking and healthcare where security is a key concern. As shown in Figure 1, take-up of cloud computing will continue to increase, attaining a compound annual growth rate (CAGR) of 19%, from $99B in 2017 to $117B in 2018.
In 2018, in parallel with the take-up of cloud computing, we’ll see massive cloud data breaches—primarily because companies are not yet fully aware of the complexities involved with securing cloud data.
Figure 1: Rapid Growth of Cloud Computing (Source: IDC)
Data Breaches: A Troubling Past, A Worrying Future
It is estimated that in 2017 alone, over 99 billion records were exposed because of data breaches. Of the various circumstances behind the breaches, hacking of IT systems is by far the most prevalent cause, followed by poor security, inside jobs, and lost or stolen hardware and media.
Major breaches at healthcare and financial services companies indicate a growing trend of vulnerabilities and exploits in these two vital business sectors.
Healthcare was one of the hardest hit sectors in 2017, and that trend is expected to worsen in the coming year. Some 31 million records were stolen, accounting for 2% of the total and up a whopping 423% from just 6 million.
The financial services industry is the most popular target for cyber attackers (see Figure 2), and this dubious distinction is likely to continue in the upcoming year. Finance companies suffered 125 data breaches, 14% of the total, up 29% from the previous six months.
Data breaches in various other industries totaled 53, up 13% and accounting for 6% of the total. The number of records involved in these attacks was a staggering 1.34 billion (71% of the total) and significantly up from 14 million.
It is estimated that the average cost of a data breach will be over $150 million by 2020, with the global annual cost forecast to be $2.1 trillion.
Figure 2: Data Records Stolen or Lost by Sector (Source: IDC)
Critical Cloud-based Security Misconfigurations
Missteps in cloud-based security configurations often lead to data breaches. This is likely to increase as more organizations move some or most of their operations to the cloud.
As organizations and business units migrate to public cloud services, centralized IT departments will find it increasingly difficult to control their company’s IT infrastructure. These enterprises lack the visibility necessary to manage their cloud environments and don’t have the monitoring tools to detect and report on security governance and compliance. Many are not even aware of the specific workloads they’ve migrated to the cloud. And without a doubt, you can’t secure what you can’t see.
For example, unsecured Amazon Web Services S3 storage buckets have been an ongoing concern for cloud users. Buckets can be configured to allow public access, and have in the past leaked highly sensitive information. In one major security breach, a whopping 111 GB of data was exposed, affecting tens of thousands of consumers.
Most significantly, Amazon is aware of the security issue, but is not likely to mitigate it since it is caused by cloud-user misconfigurations.
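Because the responsibility for these misconfigurations falls on the cloud user, auditing bucket ACLs for public grants is a reasonable first check. The sketch below works on a dict shaped like the response of boto3's `get_bucket_acl`; the helper name is ours, but the group URIs are the real AWS predefined-group identifiers.

```python
# Minimal sketch: flag public grants in an S3 bucket ACL. The `acl` dict
# mirrors the structure returned by boto3's get_bucket_acl(); the helper
# name is illustrative. The two URIs are AWS's predefined "everyone" and
# "any authenticated AWS user" groups.

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
AUTH_USERS = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"

def public_grants(acl):
    """Return the permissions granted to everyone (or any AWS account)."""
    risky = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in (ALL_USERS, AUTH_USERS):
            risky.append(grant["Permission"])
    return risky

# Example ACL with a world-readable grant:
acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "owner"}, "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group", "URI": ALL_USERS}, "Permission": "READ"},
    ]
}
print(public_grants(acl))  # ['READ']
```

An empty result means no predefined-group grants; a non-empty one is a prompt to review whether the exposure is intentional.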
2. Cryptocurrency Mining
We expect to see a growth of cryptocurrency mining attacks where attackers are utilizing endpoint resources (CPU/GPU) to mine cryptocurrency either by cross-site scripting (XSS) or by malware. It’s increasingly likely that remotely vulnerable/hackable IoT devices will also be used as a mining force to further maximize an attacker’s profits.
Illegal mining operations set up by insiders, which can be difficult to detect, are also on the rise—often carried out by employees with high-level network privileges and the technical skills needed to turn their company’s computing infrastructure into a currency mint.
These attacks will quickly grow in popularity given their lucrative nature. As long as there is a potential windfall involved, such inside jobs are likely to remain high on the list of cybersecurity challenges faced by companies.
Although attacks that attempt to embed crypto-mining malware are currently unsophisticated, we expect to see an increase in the sophistication of attacks as word gets out that this is a lucrative enterprise. We also expect these attacks to target higher-traffic websites, since the potential to profit increases greatly with higher numbers of concurrent site visitors.
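Given how unsophisticated current embedding attacks are, even a naive content check can catch the common case of a page pulling in a known in-browser mining script. The marker list below is illustrative and far from exhaustive; real detection would combine blocklists with CPU-usage telemetry.

```python
# Naive heuristic sketch: flag pages that reference known in-browser
# cryptocurrency-mining scripts. The marker strings are illustrative
# examples of miner services seen in this era, not a complete blocklist.

MINER_MARKERS = ("coinhive.min.js", "coin-hive.com", "cryptoloot", "coinimp")

def looks_like_miner(html: str) -> bool:
    page = html.lower()
    return any(marker in page for marker in MINER_MARKERS)

print(looks_like_miner('<script src="https://coinhive.com/lib/coinhive.min.js"></script>'))  # True
print(looks_like_miner("<p>hello world</p>"))  # False
```

String matching is trivially evaded by obfuscation, which is exactly why the post expects attack sophistication, and defenses, to escalate.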
3. Malicious Use of AI/Deception of AI Systems
The malicious use of artificial intelligence (AI) will continue to grow quickly. The industry has started to see early traces of attackers leveraging AI to learn normal behavior and mimic that behavior to bypass current user and entity behavior analytics (UEBA) solutions. It’s still very early stage and will continue to mature beyond 2018. However, it will force current UEBA vendors to come up with a 2.0 approach to identifying anomalous behavior.
AI and internet of things (IoT) use cases drive cloud adoption. Artificial intelligence in the cloud promises to be the next great disrupter as computing is evolving from a mobile-first to an artificial intelligence-first model. The proliferation of cloud-based IoT in the marketplace continues to drive cloud demand, as cloud allows for secure storage of massive amounts of structured and unstructured data central to IoT core functions.
Without proper awareness and security measures, AI can be easily fooled by adversarial behavior. In 2018 we will see more:
- Attacks on AI systems (for example, self-driving cars)
- Cyber attackers who adapt their attacks to bypass AI-based cybersecurity systems
4. Cyber Extortion Targets Business Disruption
Cyber extortion will be more disruption focused. Encryption, corruption, and exfiltration will still be the leaders in cyber extortion, but disruption will intensify this year, manifesting in disabled networks, internal network denials of service, and crashing email services.
In the last few years, attackers have adopted a “traditional” ransomware business model—encrypt, corrupt or exfiltrate the data and extort the owner in order to recover the data or prevent it from leaking. Fortunately, techniques such as deception or machine learning have helped to prevent these types of attacks and made it more difficult for attackers to successfully complete a ransomware attack.
From a cost perspective, most of the damage associated with ransomware attacks is not the data loss itself, since many firms have backups, but the downtime. Attackers will therefore increasingly turn to a disrupt-and-extort method. DDoS is the classic and most familiar technique, but attackers will probably adopt new ones: shutting down an internal network (web app to database, point-of-sale systems, communication between endpoints, etc.), modifying computer configurations to cause software errors, crashes, or system restarts, and disrupting corporate email or any other infrastructure that is mandatory for the day-to-day functions of an organization's employees and customers. Basically, any event that leaves the company unable to conduct business.
While absolute protection is impossible, you can help lower your chance of business interruption due to a cyber-attack. Start by creating a formal, documented risk management plan that addresses the scope, roles, responsibilities, compliance criteria and methodology for performing cyber risk assessments. This plan should include a characterization of all systems used at the organization based on their functions, the data they store and process, and their importance to the organization.
5. Breach by Insiders
Businesses are relying more on data which means more people within the business have access to it. The result is a corresponding increase in data breaches by insiders either through intentional (stealing) or unintentional (negligent) behavior of employees and partners.
While the most sensational headlines typically involve infiltrating an ironclad security system or an enormous and well-funded team of insurgents, the truth of how hackers are able to penetrate your system is more boring: it’s your employees.
A new IT security report paints a bleak picture of the actual gravity of the situation. Researchers found that IT workers in the government sector overwhelmingly think that employees are actually the biggest threat to cybersecurity. In fact, 100% of respondents said so.
Fortunately, security-focused companies have begun identifying these traditionally difficult-to-detect breaches using data monitoring, analytics, and expertise. The difference is that in 2018, more companies will invest in technology to identify this behavior, where previously they were blind.
In fact, 75% of IT employees in government reported that rather than their organization having dedicated cybersecurity personnel on staff (which is becoming more and more necessary with each passing year), an overworked IT team was left to deal with security and employee compliance. As a result, 57% reported that they didn’t even have enough time to implement stronger security measures while 54% cited too small of a budget.
Here’s another fact for you: insider threats are the cause of the biggest security breaches out there, and they are very costly to remediate. According to a 2017 Insider Threat Report, 53% of companies estimate remediation costs of $100,000 or more, with 12% estimating a cost of more than $1 million. The same report suggests that 74% of companies feel that they are vulnerable to insider threats, with 7% reporting extreme vulnerability.
These are the steps every company should take to minimize insider threats:
- Run background checks
- Watch employee behavior
- Apply the principle of least privilege
- Control user access
- Monitor user actions
- Educate employees
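The principle of least privilege in the list above can be sketched as a deny-by-default permission check: each role grants only what it strictly needs, and anything unrecognized gets nothing. Role and permission names here are illustrative.

```python
# Sketch of a least-privilege check: each role grants only the permissions
# it strictly needs, and access is denied by default. Role and permission
# names are illustrative, not from any particular product.

ROLE_PERMISSIONS = {
    "analyst":  {"read:reports"},
    "engineer": {"read:reports", "write:configs"},
    "admin":    {"read:reports", "write:configs", "manage:users"},
}

def is_allowed(role: str, permission: str) -> bool:
    # Deny by default: unknown roles receive no permissions at all.
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "write:configs"))   # False
print(is_allowed("engineer", "write:configs"))  # True
```

The deny-by-default posture matters as much as the role table itself: a missing entry should fail closed, never open.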
Insider threats are one of the top cybersecurity threats and a force to be reckoned with. Every company will face insider-related breaches sooner or later regardless of whether it is caused by a malicious action or an honest mistake. And it’s much better to put the necessary security measures in place now than to spend millions of dollars later.
Join Imperva on January 23rd for a live webinar where we’ll discuss these trends in more detail and review the security measures necessary to mitigate the risks. Register to attend today.
This has been quite a year for McAfee, as we not only roll out our vision, but also start to fulfill that vision.
We’ve established our world view: endpoint and cloud as the critical control points for cybersecurity and the Security Operations Center (SOC) as the central analytics hub and situation room. While we’ve talked a lot about endpoint and cloud over the past year, we’ve only recently started exposing our thinking and our innovation in the SOC, and I would like to delve a bit deeper.
SOCs provide dedicated resources for incident detection, investigation, and response. For much of the past decade, the SOC has revolved around a single tool, the Security Information and Event Management system (or SIEM). The SIEM was used to collect and retain log data, to correlate events and generate alerts, to monitor, to report, to investigate, and to respond. In many ways, the SIEM has been the SOC.
However, in the past couple of years, we’ve seen extensive innovation in the security operations center. This innovation is being fueled by an industry-wide acceptance of the increased importance of security operations, powerful technical innovations (analytics, machine learning), and the ever-evolving security landscape. The old ways of doing things are no longer sufficient to handle increasingly sophisticated attacks. We need to do something different.
McAfee believes this next generation SOC will be modular, open, and content-driven.
And automated. Integration of data, analytics, and machine learning are the foundations of the advanced SOC.
The reason for this is simple: increased volume. In the last two years, companies polled in a McAfee survey said the amount of data they collect to support cybersecurity activities has increased substantially (28%) or somewhat (49%). There are important clues in all that data, but the new and different attacks get lost in the noise. Individual alerts are not especially meaningful – patterns, context, and correlations are required to determine potential importance, and these constructs require analytics – at high speed and sophistication, with a model for perpetually remaining up-to-date as threat actors and patterns change. We need the machines to do more of the work, freeing the humans to understand business-specific patterns, design efficient processes, and manage the policies that protect each organization’s risk posture.
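The idea that individual alerts only become meaningful through patterns and correlation can be sketched simply: group alerts by source within a sliding time window, and escalate bursts rather than single events. Field names and thresholds below are illustrative, not from any McAfee product.

```python
# Sketch: correlate individual alerts into per-source bursts within a time
# window, so analysts triage patterns rather than single events. The field
# names, window, and threshold are illustrative.

from collections import defaultdict

WINDOW = 300     # seconds: how close together alerts must be
THRESHOLD = 3    # alerts from one source within the window worth escalating

def correlate(alerts):
    """alerts: list of (timestamp_seconds, source_ip) tuples."""
    by_source = defaultdict(list)
    for ts, src in sorted(alerts):
        by_source[src].append(ts)
    escalations = []
    for src, times in by_source.items():
        start = 0
        for end in range(len(times)):
            # Slide the window forward until it spans at most WINDOW seconds.
            while times[end] - times[start] > WINDOW:
                start += 1
            if end - start + 1 >= THRESHOLD:
                escalations.append(src)
                break
    return escalations

alerts = [(0, "10.0.0.5"), (60, "10.0.0.5"), (120, "10.0.0.5"), (9999, "10.0.0.9")]
print(correlate(alerts))  # ['10.0.0.5']
```

Production analytics add context (asset value, user identity, threat intelligence) on top of this kind of windowed grouping, which is where the machine learning the post describes comes in.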
SIEM remains a crucial part of the SOC. The use cases for SIEM are extensive and fundamental to SOC success: data ingestion, parsing, threat monitoring, threat analysis, and incident response. The McAfee SIEM is especially effective at high performance correlations and real-time monitoring that are now mainstream for security operations. We are pleased to announce that McAfee has been recognized for the seventh consecutive time as a leader in the Gartner Magic Quadrant for Security Information and Event Management.* And we’re not stopping there — we’re continuing to evolve our SIEM with a high volume, open data pipeline that enables companies to collect more data without breaking the bank.
An advanced SOC builds on a SIEM to further optimize analytics, integrating data, and process elements of infrastructure to facilitate identification, interpretation, and automation. A modular and open architecture helps SOC teams add in the advanced analytics and inspection elements that take SOCs efficiently from initial alert triage through to scoping and active response.
Over the past year, we’ve worked extensively with more than eight UEBA vendors to drive integration with our SIEM. At our recent customer conference in Las Vegas, MPOWER, we announced our partnership with Interset to deliver McAfee Behavioral Analytics. Look for more information about that in the new year. I also want to reinforce our commitment to being open and working with the broader ecosystem in this space, even as we bring an offer to market. No one has a monopoly on good ideas and good math – we’ve got to work together. Together is Power.
We also launched McAfee Investigator at MPOWER, a net new offering that takes alerts from a SIEM and uses data from endpoints and other sources to discover key insights for SOC analysts at machine speed. Leveraging machine learning and artificial intelligence, McAfee Investigator helps analysts get to high quality and accurate answers, fast.
The initial response is great: we’ve seen early adopter customers experience a 5-16x increase in analyst investigation efficiency. Investigations that took hours are taking minutes. Investigations that took days are taking hours. Customers are excited and so are we!
In short – we have a lot cooking in the SOC and we are just getting started.
Look for continued fulfillment of McAfee’s vision in 2018. The sky’s the limit.
*Gartner Magic Quadrant for Security Information and Event Management, Kelly M. Kavanagh, Toby Bussa, 4 December 2017. From 2015-16, McAfee was listed as Intel Security, and in 2011, McAfee was listed as NitroSecurity, which it acquired that year.
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
This post was written by Eric Boerger.
Twenty-one percent of organizations don’t know if their organization has been breached in the cloud.
That uncertainty, lack of control, and limited visibility is a startling indication of the state of cloud use today: The speed of adoption has invited risk that was not foreseen. Understanding that risk is key to gaining control over security in the cloud.
Many more industry insights are revealed in Cloud Security: Defense in Detail if Not in Depth: A SANS Survey completed in November and sponsored by McAfee. The survey especially delves into infrastructure-as-a-service from providers like Amazon Web Services (AWS) and Microsoft Azure, which is driving digital business transformation toward the most agile models to date.
Among the findings, some captured in the chart below, is the benchmark that 40% of organizations are storing customer personally identifiable information (PII) in the cloud, and that 15% of those had experienced a misconfiguration due to quickly spun-up components.
The ultimate goal of cloud adoption is, of course, quite laudable: to realize agility and cost benefits across the organization. The problem is that many IT departments and developers have rushed in, adjusting their delivery models from dedicated hardware in data centers to cloud instances, containers, and now even serverless infrastructure.
Where was security in that fast adoption? Unfortunately, often left behind. Existing endpoint or data center security tools often can’t simply be transferred to the cloud. They need to be rebuilt to run “cloud-native,” designed specifically for the unique properties of public cloud service provider environments. Added to that adjustment is often the dual responsibility of maintaining both the public cloud and a virtual private cloud environment in your data center: two environments to manage.
This requires a cloud strategy across these environments: seek policy unification, not tool unification. Cloud security requires change. But there is no point in burdening the agility of the cloud with disconnected management. Your organization should have one view to your infrastructure with one set of policies that everyone understands.
McAfee teamed up with the SANS Institute on an analysis of this survey’s findings. In this presentation, we dive deeper into these points, providing key perspectives on the cloud industry at this crucial time. Tune in here:
Download and read the full report here: Cloud Security: Defense in Detail if Not in Depth: A SANS Survey. For more information on our approach to cloud security, go to https://mcafee.com/cloudsecurity.
The post Cloud Risk in a Rush to Adopt – New Research from the SANS Institute appeared first on McAfee Blogs.
As enterprises continue their journey to the cloud, many are using a hybrid model that engages both the private and public cloud. McAfee has embraced this “hybrid cloud” strategy to enable companies to migrate to the public cloud, and we are investing in the tools and relationships to enable the transition. Working with Amazon Web Services (AWS) is an important part of bringing enterprise-level security to public cloud deployments, and I’m happy to announce two new partner relationships with AWS. Also, McAfee will be joining AWS at the AWS re:Invent Expo in Las Vegas in late November, where we will demonstrate products that customers can use in their hybrid cloud strategy.
McAfee is Now an APN Advanced Technology Partner
For enterprise engagements, McAfee has become an Amazon Partner Network (APN) Advanced Technology Partner. To earn this designation, we demonstrated that our products, customer relationships, expertise, and overall business investments on AWS have grown and are meaningful to AWS.
McAfee builds tools that automate the rollout of security controls and security operations consistently across organizations. Our solutions — such as Virtual Network Security Platform, Cloud Workload Security, and Web Gateway — can play significant roles in helping companies adopt AWS securely:
McAfee Virtual Network Security Platform (vNSP): Designed specifically for fully virtualized public and private clouds, vNSP delivers an elastic security control that provides comprehensive network inline intrusion prevention, application protection, zero-day threat detection and visibility into lateral attack movement. The scalable and highly distributed architecture has been certified as “Well Architected” by Amazon. Integration with orchestration and automation frameworks makes this an ideal solution for adoption in DevSecOps environments.
McAfee Cloud Workload Security (CWS): As data center perimeters get redefined, the ability to navigate current data center workload assets and plot the journey to the cloud requires a map that will safely show the way. Cloud Workload Security provides visibility and protection for your workloads in the cloud with agility and confidence through an integrated suite of security technologies, ensuring control of new perimeters.
McAfee Web Gateway (MWG): With its best-in-class malware protection efficacy and policy flexibility, we now have the ability to deploy MWG directly in AWS. This is in addition to the appliance model and SaaS deployment model. MWG boasts the most flexible options in the industry for Web security. With an AWS deployment, customers can not only offload workload from on-premises appliances through hybrid policy enforcement, they can also provide advanced in-line malware detection for SaaS-based apps. This is the same value proposition that McAfee has historically offered for endpoint protection, but we are now able to offer it for SaaS-based applications as well.
To learn more about our solutions that keep you better protected on AWS, visit mcafee.com/ProtectAWS
McAfee Accepted into the AWS Public Sector Partner Program
In addition to the commercial sector, McAfee knows that Government, Education and Nonprofit customers need quality security in the cloud. AWS has accepted McAfee into its AWS Public Sector Partner Program. This designation reflects McAfee’s strong commitment to support public sector customers in their transition to the cloud. As our presence in the AWS Public Sector Partner Program grows, so too will the value of our solutions specifically targeted for the public sector.
McAfee is a Sponsor at AWS re:Invent
Join us the week of November 27th at the AWS re:Invent event in Las Vegas. Visit McAfee at Booth 1238 at the Venetian. McAfee experts will share strategies and best practices to help customers secure and manage data on AWS. Plus, you can see live how McAfee vNSP expands network protection across virtualized environments.
Make sure to stop by the booth to say hello in person, or via Twitter.
To find out more about our programs, certifications, qualifications, and technologies supporting AWS, click here.
The post McAfee and Amazon Web Services: A Secure Relationship appeared first on McAfee Blogs.
October is always one of the busiest months of my year with the beginning of Q4 in full swing and the MPOWER Cybersecurity Summit & Americas Partner Summit events in Las Vegas. This is a prime opportunity to engage with the great partners that carry our brand and our products into the field and expertly support our mutual customers.
This year’s MPOWER Cybersecurity Summit was more than just a conference. It was our first official gathering since becoming an independent company again. In keeping with our motto, “Together is Power,” 2017’s MPOWER demonstrated the formidable togetherness and power of the extended McAfee partner community.
Our commitment to the partner community was on full display at MPOWER, where we showcased powerful Security Innovation Alliance integrations and new innovations on tap to help partners and customers transform their security operations centers (SOCs). Together, we will shift the balance of power in the battle against evolving and emerging threats at every stage of the threat defense lifecycle.
McAfee understands partners are dealing not only with a rapidly changing threat landscape but also with a virtual fire hose of new and updated security solutions. At MPOWER, partners witnessed firsthand how our game-changing “Protect, Detect, Correct, and Adapt” approach aims to reduce complexity and make partners more efficient and effective.
In the coming months, McAfee partners will have access to several important innovations that promise to continue evolving traditional security architecture, including:
- McAfee Enterprise Security 11.0: We’ve added speed, power, and advanced capabilities to our premier endpoint protection suite to deliver our most comprehensive client security product. With McAfee Enterprise Security 11.0, McAfee partners can offer customers a highly scalable platform with advanced analytics, machine learning and deep learning, powerful event handling, and efficient integration with other security products in their arsenal.
- McAfee Behavioral Analytics: In the SOC, understanding and baselining user behavior often makes the difference between efficient protection and useless noise. Our latest User and Entity Behavior Analytics offering gives McAfee partners powerful analytics to catalog suspicious events and build dynamic threat models based on risky user activity.
- McAfee Investigator: We’re bringing machine learning and artificial intelligence (AI) to bear on threat remediation and incident response, making the process more efficient, more accurate, and up to 10 times faster. By automating much of the manual threat investigations process with technology that learns and improves over time, partners can deliver world-class protection with less overhead.
- McAfee Cloud Workload Security: Increasingly, customers are asking partners to protect cloud-based data and workloads. To that end, McAfee is delivering cloud-native technology to discover, defend, manage, and recover customer information no matter where it resides.
McAfee is committed to helping partners become and remain the trusted security advisors their end users demand. We do that by continuing to develop and deliver tools that provide world-class protection and make our partners second-to-none in cybersecurity. That’s the true power of our partnership.
All of this will take time to roll out, and changes will be made along the way. But by working together, we’ll build and bring to market a better approach to security to counter the dynamic threats we all face. I invite you all to send us feedback on how McAfee is doing and what you need to succeed. We’ll work to empower you.
Together is power.
The post At MPOWER, New Tools Give Partners a Defensive Edge appeared first on McAfee Blogs.
The months leading up to our MPOWER Cybersecurity Summit & Americas Partner Summit are the perfect time to roll up our sleeves and take a hard look at where we’ve been and, more important, where we’re headed. If my conversations with the McAfee partner community taught me one thing this year, it’s that the future of our channel is bright.
After spending a full day immersed in the annual Partner Summit, I remain unwaveringly confident that wringing complexity out of our partner programs and reducing friction in our channel relationships are more important than ever. In a world full of threats and uncertainty, it’s vital that we give partners the resources they need to be more effective trusted security advisors and, subsequently, achieve greater profitability.
How will we do that?
For starters, by simplifying the way you work with us. Our multifaceted deal registration programs will be consolidated, our service-level agreements (SLAs) will be revised, and many more improvements are in the works.
Together, we must employ our skills and knowledge to address customers’ security challenges. We must apply strategy, give our customers actionable guidance, and innovate to overcome existing and emerging threats.
And we must do it in an efficient, organized, and scalable way to ensure both the effectiveness and profitability of the endeavor.
If you attended the Partner Summit, you heard me talk about the event’s 2017 trifold theme: innovation, collaboration, momentum. These aren’t just buzzwords; they’re pillars of our mutual mission. It’s vital for us — McAfee and our partners — to leverage innovation through collaboration to build momentum. We’re taking a page from the “Three E’s of Management” playbook:
- Enablement: Getting partners involved in sales and technical training that builds the foundation for success
- Engagement: Going after net-new and greenfield opportunities that drive us toward our mutual goals
- Economics: Filling the deal pipeline and leveraging incentives that help partners increase profitability
In practice, our commitment to partner success will include things such as full-day business planning sessions with honest and transparent discussion of our mutual goals and go-to-market strategies in pursuit of a three-year plan. Executive sponsors from both sides will revisit goals and track progress in regular quarterly business reviews, finding ways to help partners bolster and upgrade their portfolio of McAfee products.
And when things aren’t going according to plan, we’ll remain flexible.
We know partners have a lot on their plates. As you raise your game to defend customers, we’re doubling down on our commitment to making the McAfee partner program better and simpler for you. We’re working diligently, based on partner feedback, to reduce the complexity of ordering product, acquiring training and support, accessing marketing and reference materials, and communicating with our various tactical and intelligence teams.
We want to make leading with McAfee as easy as using a smartphone app. And as we make things simpler, our mutual futures will get even brighter.
Together is power.
The post View From the Summit: The Future Looks Bright for Partners appeared first on McAfee Blogs.
Conference organizers waited with anticipation.
In a sleek, high-tech conference hall glowing with McAfee’s deep signature red, a colossal scoreboard flashed the results of a real-time vote for who would take the stage and speak next. The results were a virtual dead heat. The thousands in the audience had just chosen to hear from McAfee’s own chief information security officer Grant Bourzikas – one of their tribe and “customer zero” of cybersecurity products – in a narrow victory over a high-profile presentation of issues much in the news.
Welcome to MPOWER, the first “face-to-face, on-demand conference,” where attendees voted for speakers, topics, and even product names in real time. From Chief Executive Officer Chris Young’s opening keynote to the breakout sessions, cybersecurity leaders from around the world were firmly in charge, and they knew what they wanted to work on: cloud security, endpoint protection, and the constantly evolving security operations centers they call home.
Spread out across the sprawling Aria Hotel and Resort on the Las Vegas Strip from Oct. 17-19, the conference pulled together an industry constantly in the headlines, where job openings can’t be filled quickly enough and sinister cyber threats loom constantly. Perhaps the most famous thought leader in the cybersecurity world, blogger and journalist Brian Krebs told the crowd in a keynote address Wednesday: “There’s never been a better time to be gainfully employed in the cybersecurity industry. It’s an incredible time.”
Young kicked off this year’s MPOWER Conference by gazing into his crystal ball (at the audience’s request, of course), where he made some bold predictions for the industry. The endpoint and cloud will be the control points of our future cybersecurity architectures. The security operations center will be one where tools support people, not where people support tools. And customers will demand an open ecosystem approach to gain vendor choice without back-office chaos. Within each area, MPOWER provided evidence of McAfee’s unwavering commitment to seeing this future realized.
McAfee’s latest endpoint protection platform, ENS 10.5, released at the company’s conference the year prior, is seeing tremendous success. ENS 10.5 provides machine learning, EDR and traditional signature-based protection, all within a single-agent architecture and on a common platform.
In cloud, McAfee is making virtualized IPS and web gateway capabilities available via Amazon Web Services. Conference goers witnessed a new solution to hybrid cloud challenges, McAfee Cloud Workload Security (CWS), which facilitates enterprises’ safe cloud use by discovering and defending elastic workloads within minutes.
In security operations, McAfee is shifting the narrative from an industrywide talent shortage that litters headlines to the talent efficiency opportunity in front of us. McAfee Investigator applies advanced analytics to increase the SOC’s productivity by completing the normal investigative flow of a typical analyst up to 6-10 times faster, giving SOC teams a force multiplier on productivity.
Finally, McAfee remains fully committed to fostering an open ecosystem to share threat intelligence more seamlessly and bring pre-integrated solutions to market faster for customers. The McAfee Security Innovation Alliance continues to flourish with the addition of 19 more partners, including IBM. And, attendees learned that OpenDXL and Cisco Platform Exchange Grid (pxGrid) are now integrated – allowing two of the industry’s largest messaging fabrics to share threat information and enable automation between networks and endpoints.
I want to thank our customers, partners, and employees for making MPOWER such a success. Bringing to life the industry’s first on-demand, face-to-face conference was no small feat and required attendees and presenters to come together to realize the vision. Sounds familiar, doesn’t it? MPOWER was a microcosm of the tremendous power that is unleashed in our industry when people and technologies work together. Together is power.
More MPOWERs to come
That power continues as we take MPOWER international in the next few weeks:
- Tokyo – Nov. 9 at the Prince Park Tower. 50 sessions, 25 companies, more than 2,000 attendees expected. This conference is FREE. More information here.
- Sydney – Nov. 14 at the International Convention Centre. Keynote speakers include McAfee CEO Chris Young and Troy Hunt, international cybersecurity expert. This conference is FREE. More information here.
- Amsterdam – Nov. 28-29 at the Mövenpick Hotel Amsterdam City Centre. This two-day meeting includes top industry keynotes and deep-dive technical breakout sessions. More information here.
The post MPOWER 2017 Highlights: A Cybersecurity Conference On-Demand appeared first on McAfee Blogs.