Category Archives: Cloud Security

FireEye Unveils New Solutions, Capabilities

FireEye this week made several announcements, including the launch of new solutions and capabilities, new pricing and packaging models, and a strategic partnership with Oracle.

One of the new solutions is SmartVision Edition, an offering designed to help organizations detect malicious traffic moving within their network.


Redefining Cybersecurity Coverage to Reflect Next Generation Digital Transformation

The way analysts looked at the cybersecurity market five years ago is not the way they should be looking at it in 2018 and beyond. That’s the message from IDC’s

The post Redefining Cybersecurity Coverage to Reflect Next Generation Digital Transformation appeared first on The Cyber Security Place.

When Nobody Controls Your Object Stores — Except You

In recent months and years, we have seen the benefit of low-cost object stores offered by cloud service providers (CSPs). In some circles, these are called “Swift” or “S3” stores. However, as often as the topic of object or cloud storage emerges, so does the topic of securing the data within those object stores.

CSPs often promise that their environments are secure and, in some cases, that the data is protected through encryption — but how do you know for sure? Furthermore, CSPs offer extremely high redundancy, but what happens if you cannot access the CSP at all, or if you want to move your data out of that CSP’s environment entirely?

Also, who controls the key? Some CSPs propose strategies such as bring-your-own-key (BYOK). However, these approaches are laughable because you have to give the encryption key to the CSP. In that case, it is not your key — it’s their key. BYOK should be called GUYK (give-up-your-key), GUYKAD (give-up-your-key-and-data) or JTMD (just-take-my-data).

Imagine if you could store your data in object stores of any cloud, encrypted under keys that only you control, and transport it easily across multiple clouds and CSPs, enabling you to move between or out of CSPs at your leisure.

What Are Object Stores?

Object stores are systems that store data as objects instead of treating them as files and placing them in a file hierarchy or file system. Some object stores are layered with additional features that allow them to be provided as a desktop service, such as Box or Dropbox. However, the critical value of object stores is that they are inexpensive and highly scalable.

Graphic of Object Store with Centralized Policy Management

Whether you need a gigabyte or a zettabyte of storage, object stores can provide that storage to you easily and inexpensively. That is the simple part.
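As a minimal sketch of the difference, an object store can be modeled as a flat key-to-object map. This toy stand-in for an S3- or Swift-style API is illustrative only; real object stores add durability, replication and access control:

```python
# A file system addresses data by path inside a hierarchy; an object store
# addresses it by a flat key, with metadata attached to each object.
object_store = {}

def put_object(key: str, data: bytes, **metadata) -> None:
    """Store an object under a flat key along with arbitrary metadata."""
    object_store[key] = {"data": data, "meta": metadata}

def get_object(key: str) -> bytes:
    """Retrieve an object's data by its key."""
    return object_store[key]["data"]

# Keys may *look* like paths, but there are no real directories to traverse.
put_object("backups/2018/april/db.dump", b"dump-bytes",
           content_type="application/octet-stream")
assert get_object("backups/2018/april/db.dump") == b"dump-bytes"
```

The flat namespace is what makes object stores cheap to scale: there is no hierarchy to keep consistent, only keys and values.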

Protecting Data in the Cloud

Regardless of the kind of storage you consider, protecting the data therein is necessary in today’s climate. Remember that even when storage is inexpensive, your data is still immensely valuable, you are still responsible for it, and you do not want those assets to become liabilities.

How do we protect this data in the truest sense of the word? The answer is simply to encrypt it. However, if the CSP encrypts the data, you must give it the key. You can consider the thought experiments of BYOK, negotiation, wrapping and other key management practices, but at the end of the day, the CSP still has your key. Is there a way the data can be encrypted and stored in their cloud without the CSP accessing it or preventing you from easily switching to another provider?

Encryption of Cloud Object Store Data You Fully Control

There is only one way to maintain full control over your data when it is stored in a cloud object store: by encrypting the data with keys you own and manage before it actually reaches the cloud object store. But does this mean you have to programmatically encrypt the data before you actually upload it?

Luckily, the answer to that question is no, you do not have to programmatically encrypt the data yourself. The new IBM Multi-Cloud Data Encryption object store agent will transparently and automatically do this for you. In fact, this new capability acts as a proxy between your applications and your cloud object store. It transparently intercepts the data you are uploading to the cloud object store and automatically encrypts it with keys you control. Similarly, it intercepts the data you are retrieving back from the cloud object store and decrypts it using the appropriate keys.
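IBM has not published the agent’s internals here, but the encrypt-before-upload proxy pattern it describes can be sketched in a few lines. The SHA-256 keystream below is a toy placeholder for a real authenticated cipher such as AES-GCM, and the `object_store` dictionary stands in for the CSP’s bucket; both are assumptions for illustration:

```python
import hashlib
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key || nonce || counter (toy CTR mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Prepend a random nonce and XOR the plaintext with the keystream."""
    nonce = os.urandom(16)
    ks = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    """Split off the nonce and reverse the XOR."""
    nonce, ct = blob[:16], blob[16:]
    ks = _keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

# The "proxy": everything written to the store is encrypted first, and
# everything read back is decrypted -- the CSP only ever sees ciphertext.
object_store = {}          # stand-in for the CSP's object store

def proxy_put(key: bytes, name: str, data: bytes) -> None:
    object_store[name] = encrypt(key, data)

def proxy_get(key: bytes, name: str) -> bytes:
    return decrypt(key, object_store[name])

my_key = os.urandom(32)    # a key only *you* hold -- never sent to the CSP
proxy_put(my_key, "report.pdf", b"quarterly numbers")
assert object_store["report.pdf"] != b"quarterly numbers"  # CSP sees ciphertext
assert proxy_get(my_key, "report.pdf") == b"quarterly numbers"
```

The applications on either side of the proxy are unchanged; only the proxy holds the key, which is the property that keeps the CSP out of the trust boundary.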

Splitting the Bounty and the Key

We can now extend the concept of encrypting object stores. There is a well-established practice in cloud data protection called data-splitting, which is combined with key-splitting, also known as secret-sharing.

The fundamental premise of this practice is based on two specific steps. The first step is to take the data and split it into multiple “chunks.” This is not exactly like taking a $100 bill and ripping it into three pieces, but it is similar (we will get to that in a bit).

The second step is to encrypt each chunk of data under its own key. Once you have an encrypted chunk, you can place it in an object store. If you have three chunks, you can store them in three different object stores. However, that is not the whole story.

This approach ensures that no object store (or CSP) has access to the unencrypted data or any single key to decrypt it. Even if an object store CSP had access to the encryption key for the data in its object store, it would still have insufficient information to recover the plaintext — it would need the other chunks and their keys.

But this approach gets even more interesting: In this scenario, a subset of the chunks is required to reassemble the original plaintext. This is sometimes called “M-of-N” because it only requires access to M chunks of all N chunks of the data (M being a subset of N) to recover the data. That means that you can have your N chunks stored in the object stores of N different cloud service providers, but you only need access to a subset (M) of those object stores to recover your data. CSPs have neither access to sufficient information (keys or cipher text) nor a necessary component to recover your object store data, which means that nobody controls your object stores — except you.
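The article does not specify the exact scheme, but M-of-N key-splitting is classically done with Shamir’s secret sharing: the secret becomes the constant term of a random degree-(M-1) polynomial over a prime field, and each share is a point on that polynomial. Any M points determine the polynomial; fewer reveal nothing. A minimal sketch:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a 16-byte secret

def split_secret(secret: int, n: int, m: int):
    """Split `secret` into n shares; any m of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(m - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x=0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Modular inverse of den via Fermat's little theorem.
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

secret = 123456789
shares = split_secret(secret, n=5, m=3)      # 5 CSPs, any 3 suffice
assert recover_secret(shares[:3]) == secret  # any 3 shares recover it
assert recover_secret(shares[2:]) == secret  # a different 3 also work
```

In the object-store scenario, each share would live with a different CSP; losing access to any N-M of them costs you nothing, and no single CSP ever holds enough to reconstruct the key.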

Diagram of Encrypted Object Stores with Data Splitting and Key Splitting

Greater Flexibility to Change CSPs

Let’s assume that one day you decide that one of the CSPs no longer meets your criteria. Perhaps it is too expensive, it has been breached, it is in the wrong country, its policies have changed, it supports the wrong team or it just isn’t the awesome piece of chocolate cake you dreamed of.

Now you have greater flexibility to change. Just add a new object store to your N (N+1) and then close your account with the CSP you no longer want to use, and you’re done. The CSP did not have access to your data or keys before, and it can now take back all of that storage that contained those random bits of your encrypted data and sell it to somebody else. This is cryptographic shredding at its finest.

You should anticipate questions concerning the increased cost of storage with this approach, but it is nothing new. Remember that storage is inexpensive, but your data is extremely valuable. As an industry, we have been adopting storage strategies such as Redundant Array of Independent Disks (RAID) for years. The benefits of that kind of redundancy overwhelmingly outweigh the costs of the additional disk drives. Although data-splitting is not exactly the same as RAID, the concepts are very similar, as are the benefits and the return on investment (ROI).

Data- and key-splitting are not new, but their combination in an M-of-N approach to protecting object stores is quickly gaining traction. This is critical to the security, risk reduction and flexibility necessary to accelerate our pursuit of the cloud. We no longer need to trust the CSP or adopt a GUYK, GUYKAD or JTMD strategy.

With M-of-N cryptography, data-splitting and crypto-shredding strategies, you can stay in control of your keys and data and ensure that nobody else controls your object stores except you. This is just the beginning of how we secure the cloud.

The post When Nobody Controls Your Object Stores — Except You appeared first on Security Intelligence.

Free Qualys services give orgs visibility of their digital certs and cloud assets

Qualys announced CertView and CloudView, two groundbreaking new free services. Harnessing the power and scalability of the Qualys Cloud Platform, Qualys CertView and CloudView enable organizations of all sizes to gain visibility by helping them create a continuous inventory and assessment of their digital certificates, cloud workloads and infrastructure, integrated into a single-pane view of security and compliance.

Qualys CertView

CertView helps customers inventory and assess certificates and underlying SSL/TLS configurations and … More

The post Free Qualys services give orgs visibility of their digital certs and cloud assets appeared first on Help Net Security.

McAfee Expands Cloud Security Program

At RSA Conference 2018 in San Francisco, McAfee announced two additions to its cloud security program and published a new analysis of the corporate adoption of cloud services. The new services are centered on securing containers in the cloud and adding consistent security to third-party cloud services. The analysis, Navigating a Cloudy Sky, surveyed 1,400 IT decision makers around the world and interviewed several C-level executives.


Nearly 4 in 10 IT Professionals Struggle to Detect and Respond to Cloud Security Incidents

Nearly 4 in 10 IT and cybersecurity professionals who responded to a recent survey cited cloud security as a major challenge.

According to the “Oracle and KPMG Cloud Threat Report, 2018,” 38 percent of security practitioners said they struggle to detect and respond to security incidents in the cloud. It was the biggest challenge cited in the survey, beating out lack of visibility across endpoints and the attack surface (27 percent), lack of collaboration between security and IT operations teams (26 percent), and lack of unified policies across different environments (26 percent).

Cloud Security Remains an Ongoing Concern

For the report, Oracle and KPMG commissioned Enterprise Strategy Group (ESG) to survey 450 IT and cybersecurity professionals working at public- and private-sector organizations based in North America, Western Europe and Asia. Their responses highlighted the widespread concern about security gaps at every step of the cloud migration process.

The report suggested that confusion was partly responsible for those gaps. Just 43 percent of survey respondents were able to correctly identify the most widely used infrastructure-as-a-service (IaaS) shared responsibility model. In other words, fewer than half of security professionals knew which aspects of cloud security they were responsible for.

Respondents also indicated that employees might be exacerbating those security holes. More than four-fifths (82 percent) of security leaders said they are worried that employees don’t follow corporate cloud security policies. The report cited a variety of factors contributing to this prevalence of shadow IT, including personal preferences, external collaboration and speed requirements.

Bolstering Defenses in the Cloud

Tony Buffomante, U.S. leader of KPMG’s Cyber Security Services, said organizations need to do more to protect themselves against security gaps when migrating to the cloud.

“As many organizations migrate to cloud services,” Buffomante said, “it is critical that their business and security objectives align, and that they establish rigorous controls of their own, versus solely relying on the cybersecurity measures provided by the cloud vendor.”

The survey revealed that more companies could be turning to technology to better protect themselves. Forty-seven percent of respondents said their organization uses machine learning for security purposes, while 35 percent said they planned to invest in solutions equipped with security automation. Investing in both of these technologies, along with adopting security best practices, could help close cloud security gaps.

The post Nearly 4 in 10 IT Professionals Struggle to Detect and Respond to Cloud Security Incidents appeared first on Security Intelligence.

Cloud Clustering Vulnerable to Attacks

The authors thank John Fokker and Marcelo CaroVargas for their contributions and insights.

In our upcoming talk at the Cloud Security Alliance Summit at the RSA Conference, we will focus our attention on the insecurity of cloud deployments. We are interested in whether attackers can use compromised cloud infrastructure as viable backup resources, as well as for cryptocurrency mining and other illegitimate uses. The use of containers has increased rapidly, especially for managing the deployment of applications. Our latest market survey found that 83% of organizations worldwide are actively testing or using containers in production. Applications need authentication, load balancing, networking between containers, auto-scaling, and more. One solution for the automated installation and orchestration of containers, known as a cluster manager, is Kubernetes.

Some key components in the Kubernetes architecture appear below:

High-level Kubernetes architecture.

  • Kubernetes master server: The managing machine oversees one or more nodes
  • Node: A client that runs tasks as delegated by the user and Kubernetes master server
  • Pod: An application (or part of an application) that runs on a node. The smallest unit that can be scheduled to be deployed. Not intended to live long.

For our article, we need to highlight the etcd storage on the master server. This database stores the configuration data of the cluster and represents the overall state of the cluster at a given time. Kubernetes saves these secrets as Base64-encoded strings, and before Version 2.1 there was no authentication in etcd.
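Base64 is a reversible encoding, not encryption, so anyone who can read etcd can recover every secret with a single decode call. A hypothetical example (the password value below is invented):

```python
import base64

# A Secret value as Kubernetes stores it in etcd: only Base64-encoded.
stored_value = base64.b64encode(b"s3cr3t-db-password").decode()
print(stored_value)  # czNjcjN0LWRiLXBhc3N3b3Jk

# Anyone who can read etcd recovers the plaintext -- no key required.
recovered = base64.b64decode(stored_value)
assert recovered == b"s3cr3t-db-password"
```

This is why an unauthenticated, Internet-facing etcd instance amounts to a plaintext credential dump, regardless of how the values look at first glance.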

With that knowledge, security researcher Giovanni Collazo from Puerto Rico started to query the Shodan database for etcd databases connected to the Internet. He discovered many, and by executing a single query he could get some of these databases to reveal large numbers of credentials. Beyond leaking credentials from databases and other accounts, what other scenarios are possible?

Leaking Credentials

There are several ways that we can acquire credentials for cloud services without hacking into panels or services. By “creatively” searching public sites and repositories, we can find plenty of them. For example, when we searched on GitHub, we found more than 380,000 results for certain credentials. Let’s assume that half of them are useful: We would have 190,000 potentially valid credentials. As Collazo did for etcd, one can also use the Shodan search engine to query for other databases. By creating the right query for Django databases, for example, we were able to identify more cloud credentials. Amazon’s security team proactively scans GitHub for AWS credentials and informs customers when it finds them.

Regarding Kubernetes: Leaked credentials, complete configurations of the DNS, load balancers, and service accounts offer several possible scenarios. These include exfiltrating data, rerouting traffic, or even creating malicious containers in different nodes (if the service accounts have enough privileges to execute changes in the master server).

Creating malicious containers.

One of the biggest risks concerning leaked credentials is the abuse of your cloud resources for cryptomining. The adversaries can order multiple servers under your account to start cryptomining, enriching their bank accounts while you pay for the computing power “you” ordered.

Open Buckets

We have heard a lot about incidents in which companies have not secured their Amazon S3 buckets. A number of tools can scan for “open” buckets and download the content. Attackers would be most interested in write access to a bucket. For our Cloud Security Alliance keynote address at RSA, we created a list of Fortune 1000 companies and looked for readable buckets. We discovered quite a few. That is no surprise, but if you combine the readable-bucket information with the ease of harvesting credentials, the story changes. With open and writable buckets, the adversaries have plenty of opportunities: storing and injecting malware, exfiltrating and manipulating data, etc.

McAfee cloud researchers offer an audit tool that, among other things, verifies the rights of buckets. As we write this post, more than 1,200 writable buckets belonging to a multitude of companies are accessible to the public. One of the largest ad networks in the world had a publicly writable bucket. If adversaries could access that network, they could easily inject malicious code into advertisements. (As part of our responsible disclosure process, we reported the issue, which was fixed within hours.) You can read an extensive post on McAfee cloud research and how the analysts exposed possible man-in-the-middle attacks leveraging writable buckets.
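The McAfee audit tool itself is not public here, but the core check, deciding whether a bucket’s ACL grants write access to the world, can be sketched against the shape of an S3 `GetBucketAcl` response. In practice you would fetch the ACL from the API; the sample ACL dictionaries below are invented for illustration:

```python
# The "AllUsers" group URI that S3 uses to mean "anyone on the Internet".
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def is_publicly_writable(acl: dict) -> bool:
    """Check an S3 GetBucketAcl-style response for a world-writable grant."""
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") == ALL_USERS:
            if grant.get("Permission") in ("WRITE", "WRITE_ACP", "FULL_CONTROL"):
                return True
    return False

# A hypothetical ACL for a misconfigured bucket vs. a locked-down one.
bad_acl = {"Grants": [
    {"Grantee": {"Type": "Group", "URI": ALL_USERS}, "Permission": "WRITE"},
]}
safe_acl = {"Grants": [
    {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
     "Permission": "FULL_CONTROL"},
]}
assert is_publicly_writable(bad_acl)
assert not is_publicly_writable(safe_acl)
```

Note that `WRITE_ACP` is included because the ability to rewrite the ACL itself is equivalent to write access: an attacker can simply grant themselves whatever permission they need.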

Clustering the Techniques

To combat ransomware, many organizations use the cloud to back up and protect their data. In our talk we will approach the cloud as an attack vector for spreading ransomware. Combined with the leaked credentials we discovered from various sources, the open and writable buckets provide the groundwork for storing and spreading ransomware. With attackers holding a multitude of credentials and storage places such as buckets, databases, and containers, defenders would have difficulty keeping up. We all need to pay attention to where we store our credentials and how well we monitor and secure our cloud environments.

The post Cloud Clustering Vulnerable to Attacks appeared first on McAfee Blogs.

Cloud is Ubiquitous and Untrusted

At the end of 2017, McAfee surveyed 1,400 IT professionals for our annual Cloud Adoption and Security research study. As we release the resulting research and report at the 2018 RSA Conference, the message we learned this year was clear: there is no longer a need to ask whether companies are in the cloud; it is an established fact, with near-ubiquitous (97%) acknowledgement. And yet, as we dug into the comments and information that industry professionals and executives shared about their use and protection of the cloud, another intriguing theme became clear: companies are investing in cloud well ahead of their trust in it!

For this year’s report, Navigating a Cloudy Sky, we sought respondents from a market panel of IT and Technical Operations decision makers.  These were selected to represent a diverse set of geography, verticals, and organization sizes.  Fieldwork was conducted from October to December 2017, and the results offered a detailed understanding of the current state and future for cloud adoption and security.

Cloud First

More than in any prior year, the survey indicated that organizations worldwide are currently using cloud services: 97%, up from 93% just one year ago. In the past year, a majority of organizations in nearly every major geography have gone so far as to assert a “cloud first” strategy for new initiatives using infrastructure or technology assets.

Indeed, this cloud-first strategy has driven organizations to take on many different providers in their cloud ecosystem. As organizations tackle new data initiatives, intelligence building, and new capabilities to store and run applications, the number of sanctioned cloud providers that businesses report is exploding.

In the survey, enterprises reported a statistically significant increase in provider count, with each provider a source of potential risk and a management burden for the organization. Managing so many providers requires a governance strategy that joins security capabilities and procurement together to protect the data entrusted to each new cloud deployment. Security operations teams will need unified visibility that composes a single picture across the many environments containing enterprise data.

Data and Trust

This year’s report highlights an intriguing trend: companies are investing their data in cloud providers well in advance of their trust in those providers. An incredible 83% of respondents reported storing sensitive data in the public cloud, with many reporting nearly every major sensitive data type stored in at least one provider.

Despite such a high level of data storage in cloud applications, software and infrastructure, the same business executives are clearly concerned about their continuing ability to trust cloud providers to protect that data. While trust in the cloud continues to grow, and respondents indicated continuing buy-in to using providers and trusting them with critical data and workloads, only 23% of those surveyed said they “completely trust” their data will be secured in the public cloud.

Part of that trust stems from a perception that using public cloud providers is likely to drive the use of more proven technologies, and that the risk is perceived as no greater than in the private cloud.

As cloud deployment trends continue, IT decision makers have strong opinions on key security capabilities that would increase and speed cloud adoption.

  • 33% would increase cloud adoption with visibility across all cloud services in use
  • 32% would increase cloud adoption with strict access control and identity management
  • 28% would increase cloud adoption with control over cloud application functionality

You can download the full report here, and keep following @mcafee_business for more insights on this research.

The post Cloud is Ubiquitous and Untrusted appeared first on McAfee Blogs.

1-in-4 orgs using public cloud has had data stolen

McAfee has polled 1,400 IT professionals across a broad set of countries (and continents), industries, and organization sizes and has concluded that lack of adequate visibility and control is the greatest challenge to cloud adoption in an organization. However, the business value of the cloud is so compelling that some organizations are plowing ahead.

Cloud services nearly ubiquitous

According to the survey, the results of which have been unveiled at RSA Conference 2018, 97 percent … More

The post 1-in-4 orgs using public cloud has had data stolen appeared first on Help Net Security.

Cloud Protection Moves Into a New Phase

It’s RSA Conference season and a great time to talk about containers and security.

No, not traditional shipping containers.

Containers have become developers’ preferred deployment model for modern cloud applications, helping organizations accelerate innovation and differentiate themselves in the marketplace. This is part of the natural progression of the datacenter, moving from the physical, on-premise servers of old, to virtual servers, and then to the public cloud.

According to a report released today by McAfee, “Navigating a Cloudy Sky,” containers have grown rapidly in popularity over the past few years, with 80 percent of those surveyed using or experimenting with them. However, only 66 percent of organizations have a strategy to apply security to containers, so there is still work to be done.

Realistically, most companies will have a mixed, or “hybrid cloud” solution for some time. A big challenge for customers is to maintain security and visibility as they migrate to the public cloud and adopt new technologies like containers.

As containers gain in popularity, enterprises will need visibility into their container workloads and an understanding of how security policies are applied to ensure those workloads are secure in the cloud. In the shared security responsibility model laid out by cloud providers, enterprises can leverage the available native controls and the interconnectivity with production workloads and data stores, but they will need to actively manage the security of those workloads. Gaining visibility, mitigating risk and protecting container workloads helps build a strong foundation for secure container initiatives.

McAfee is helping to fill the security need in this new environment by offering hybrid cloud security solutions to customers. For example, the release of McAfee Cloud Workload Security (CWS) v5.1 – announced today and available Q2 2018 – gives customers a tool that identifies and secures Docker containers, workloads and servers in both private and public cloud environments.

McAfee CWS 5.1 quarantines infected workloads and containers with a single click, thus reducing misconfiguration risk and increasing initial remediation efficiency by nearly 90 percent.

Previously, point solutions were needed to help secure containers. But with multiple technologies to control multiple environments, security management faced unnecessary complexities. McAfee CWS can span multi-cloud environments: private data centers using virtual VMware servers, workloads in AWS, and workloads in Azure, all from a single interface.

McAfee CWS identifies Docker containers within five minutes from their deployment and quickly secures them using micro and nano-segmentation, with a new interface and workflow. Other new features include discovery of Docker containers using Kubernetes, a popular open source platform used to manage containerized workloads and services, and enhanced threat monitoring and detection with AWS GuardDuty alerts – available directly within the CWS dashboard.

McAfee is the first company to provide a comprehensive cloud security solution that protects both data and workloads across the entire Software as a Service and Infrastructure as a Service spectrum. So, when you’re talking containers, be sure to include McAfee in the conversation.

And don’t forget to stop by the McAfee booth, North Hall, #3801, if you’re attending RSA.

The post Cloud Protection Moves Into a New Phase appeared first on McAfee Blogs.

GDPR Planning and the Cloud

Data protection is on a lot of people’s minds this week. The Facebook testimony in Congress has focused attention on data privacy. Against this backdrop, IT security professionals are focused on two on-going developments: the roll-out next month of new European regulations on data (the General Data Protection Regulation, or GDPR) as well as the continued migrations of data to the public cloud.

GDPR is mostly about giving people back control over their data. Among other rights and duties, it concerns the safe handling of data, the “right to be forgotten” and other data subject rights, and breach reporting. But apparently it will not slow migration to the cloud.

According to a McAfee report being released today, Navigating a Cloudy Sky, nearly half of companies responding plan to increase or keep stable their investment in the public, private or hybrid cloud, and the GDPR does not appear to be a showstopper for them. Fewer than 10 percent of companies anticipate decreasing their cloud investment because of the GDPR.

Getting Help for GDPR Compliance

What is the practical impact of all this? Say your CISO is in the early stages of setting up a GDPR compliance program. In any enterprise it’s important to understand the areas of risk. The first step in managing risk is taking a deep look at where the risk areas exist.

McAfee will feature a GDPR Demo[1] at the RSA conference in San Francisco this week that will help IT pros understand where to start. The demo walks conference attendees through five different GDPR compliance scenarios, at different levels of a fictional company and for different GDPR Articles, so that they can start to get a feel for GDPR procedure and see the tools which will help identify risk areas and demonstrate the capabilities for each.

Remember, with GDPR, end users are now empowered to request the data of which they are the subject, and can request that it be wiped away. With the latest data loss prevention software, compliance teams will be able to service these requests by exporting reports for given users and wiping the data held on those users. But a lot of companies still need to learn the specific procedures for complying with GDPR rules.

GDPR could be looked at as another regulation to be complied with – but savvy companies can also look at it as a competitive advantage. Customers are increasingly asking for privacy and control. Will your business be there waiting for them?

The cloud, GDPR and customer calls for privacy are three developments that are not going away – the best stance is preparation.

[1] McAfee will be in the North Hall, booth #N3801 (the “Data Protection and GDPR” booth) and also in the South Hall at the McAfee Skyhigh booth, # S1301.

The post GDPR Planning and the Cloud appeared first on McAfee Blogs.

‘Spectrum’ Service Extends Cloudflare Protection Beyond Web Servers

Cloudflare on Thursday announced the availability of a new service that extends the company’s protection capabilities to gaming, remote access, email, IoT and other types of systems.



Open Banking: Tremendous Opportunity for Consumers, New Security Challenges for Financial Institutions

The concept of open banking, as structured by the U.K.’s Open Banking and PSD2 regulations, is designed to enable third-party payment service providers (TPPs) to access account information and perform payments under the authorization of the account owner.

This represents both a challenge and a tremendous opportunity for financial institutions and TPPs. On one hand, it makes the overall market more appealing to consumers and expands the services available to them to include a multitude of new players in the financial market. On the other hand, open banking significantly widens the threat surface and puts consumers and financial institutions at greater risk of attack.

New Standards Overlook Device Security

For this reason, the initiative comes with a new set of security standards. However, these mandates deal mostly with authentication, transaction monitoring and API security, and largely ignore the security of the devices from which transactions originate. This is problematic because compromising mobile devices is a popular activity among cybercriminals. By compromising large numbers of devices, threat actors can raise their profile and increase their ability to either attack the devices directly or use them to launch distributed denial-of-service (DDoS) campaigns.

Since cybercriminals commonly target the source of a transaction, it is crucial for security teams in the financial industry to consider the consumer’s security first and use whatever threat intelligence they can gather to calculate the risk associated with a given transaction. This means that the risk level of a transaction should be calculated based not only on whether the user’s account is protected by strong authentication, but also whether malware is present on the device.
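As a rough illustration of that idea, a risk engine might weigh device health at least as heavily as authentication strength. The function and its weights below are invented for illustration only, not an industry standard:

```python
def transaction_risk(strong_auth: bool, malware_detected: bool,
                     amount: float, usual_amount: float) -> int:
    """Toy risk score from 0 to 100 combining account and device signals.
    The weights are illustrative assumptions."""
    risk = 0
    if not strong_auth:
        risk += 35   # weak or single-factor authentication
    if malware_detected:
        risk += 45   # compromised device: the source of the transaction itself
    if amount > 3 * usual_amount:
        risk += 20   # unusually large transfer for this account
    return risk

# A strongly authenticated user on an infected device still scores higher
# than a weakly authenticated user on a clean one.
assert transaction_risk(True, True, 500, 400) > transaction_risk(False, False, 500, 400)
```

The point of the sketch is the shape of the decision, not the numbers: scoring only the account (the first branch) misses the case the paragraph describes, where the device originating the transaction is itself compromised.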

Open Banking and the Security Immune System

It’s important to note that opening the financial marketplace to third-party providers will drastically increase the attack surface. While it’s still critical to monitor individual transactions, financial institutions must focus on implementing security controls to reduce the risk of an attack. They can then integrate these tools and processes into a holistic security immune system designed to prevent, detect and respond to incidents.

Open banking also increases the criticality of cloud-based security controls. It is no longer a matter of whether an institution will adopt cloud solutions, but a question of who provides what services to whom. Cloud adoption is intrinsic to open banking, and having visibility into the cloud from a cybersecurity perspective is crucial.

Security teams must integrate these controls with processes that focus on detection to enable them to respond more effectively. By applying the security immune system approach to open banking, financial institutions can offer consumers greater flexibility and convenience — all while keeping their devices secure and their money safe from cybercriminals looking to exploit new security gaps.

The post Open Banking: Tremendous Opportunity for Consumers, New Security Challenges for Financial Institutions appeared first on Security Intelligence.

What is cyber security? How to build a cyber security strategy

Organizations face many threats to their information systems and data. Understanding all the basic elements of cyber security is the first step to meeting those threats. Cyber security is the practice

The post What is cyber security? How to build a cyber security strategy appeared first on The Cyber Security Place.

It’s Time to Bring Cloud Environments Out of the Shadows

The speed and scale of cloud computing have provided companies around the globe with more flexibility, lower overhead costs and quicker time to value for a wide variety of applications. While the business value of cloud adoption is indisputable, this rapid transition can leave security teams in the dark and sensitive information exposed.

Crawl, Walk, Then Run to the Cloud

Eager organizations often rush to address pressing business needs by moving data to cloud environments, but in many cases these moves are made without the knowledge of central IT and security teams. While the business motivations are positive, unmanaged adoption of new cloud services can leave sensitive data uncontrolled and exposed. Below are some of the most common challenges associated with cloud adoption.

Shadow IT

If you’ve ever worked for a company that used a clunky, slow enterprise collaboration tool, you know how amazing solutions such as Box, Dropbox and Google Drive can be. Your employees likely feel the exact same way.

If your company uses tools that generate friction and slow down productivity, chances are high that your users have adopted shadow IT applications to avoid the frustration. When users start adopting cloud-based tools instead of company-sanctioned ones, they often access these solutions with personal login credentials. Once this happens, you lose control of your proprietary data, which can result in unnecessary security and compliance risks.

IaaS Adoption Without Expertise

When lines of business experiment with cloud services for one-off projects, they often lack the security expertise needed to ensure that projects are both operational and secure. While many security experts are familiar with the need to share security responsibilities in infrastructure-as-a-service (IaaS) environments, business teams tend to assume that everything is taken care of by the provider. As new projects spin up and leave basic security requirements unaddressed, these IaaS environments can unintentionally expose data or be hijacked by attackers for nefarious purposes, such as bitcoin mining.

Make the Unknown Known

Most security executives know that they’ve got data in the cloud, but they don’t know how much data, what types of data or what cloud it is stored in. To effectively manage risk, the first thing you need to do is make the unknown known. Then, determine effective policies to secure data and workloads in these environments and proactively monitor them for ongoing risks and threats. Let’s break these steps down further.

Bring IT Out of the Shadows

Before you can take back control of your data, you need to find out where it lives. Network traffic can provide meaningful insights into which users are using which cloud services. By looking at outbound network traffic, you can figure out what software-as-a-service (SaaS) applications and IaaS environments have been adopted and take a baseline inventory of cloud usage within your organization.
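As a rough illustration of this baseline inventory step, the sketch below matches outbound proxy-log entries against a list of known SaaS domains. Both the log format (`user,domain`) and the short domain list are hypothetical; real discovery tools ship catalogs covering thousands of cloud services.

```python
# Hypothetical log format: one "user,domain" entry per outbound request.
# The domain catalog is a tiny invented sample for illustration.
KNOWN_SAAS = {
    "dropbox.com": "Dropbox",
    "drive.google.com": "Google Drive",
    "box.com": "Box",
}

def inventory(log_lines):
    """Map each recognized SaaS service to the set of users accessing it."""
    seen = {}
    for line in log_lines:
        user, domain = line.strip().split(",")
        for known, name in KNOWN_SAAS.items():
            # Match the bare domain or any subdomain of it.
            if domain == known or domain.endswith("." + known):
                seen.setdefault(name, set()).add(user)
    return seen
```

The resulting service-to-users map is the baseline inventory: it shows which cloud tools are in use and who is using them, which feeds the authorization decisions described next.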

Armed with this insight, you can then make risk-based decisions about which services should be authorized as is, which should be authorized but company-managed and which should be blocked. While you’ll likely recognize most cloud services that are discovered, you may uncover some services that you’ve never heard of. Threat intelligence feeds can help you understand potential risks associated with unknown applications.

Take Back Control

Once you’ve determined which services your users are leveraging and which you want to allow, it’s time to start proactively monitoring these cloud environments for risks and threats.

A good security analytics solution should be able to monitor SaaS applications and IaaS environments to provide you with insights into misconfigurations, risks and threats. For example, you’ll want your security team to make sure that Amazon Web Services (AWS) Simple Storage Service (S3) buckets are properly configured and that identity and access management (IAM) users have the appropriate privileges.
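As an illustration of the kind of checks such a solution automates, the sketch below flags common storage misconfigurations. The simplified dictionary shape is invented for the example; a real tool would read bucket ACLs, policies and encryption settings from the cloud provider's APIs.

```python
# Illustrative configuration audit over simplified bucket descriptions.
# The dict shape is invented for this sketch, not a real AWS API response.
def flag_risky_buckets(buckets):
    """Return (bucket, issue) pairs for common storage misconfigurations."""
    findings = []
    for b in buckets:
        if b.get("public_read") or b.get("public_write"):
            findings.append((b["name"], "publicly accessible"))
        if not b.get("encrypted", False):
            findings.append((b["name"], "no default encryption"))
    return findings
```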

You’ll also want to monitor the behavior of your cloud admins and developers. If their credentials are compromised, either through spear phishing or in the process of lateral movement, behavioral analytics can help your team spot breaches early so they can contain and block the attacker’s progression.

Choosing the Right Tools to Manage Cloud Environments

Cloud environments demand the same level of security oversight as on-premises ones — if not more. The fewer point solutions involved in the security monitoring, detection, investigation and response processes, the more effective your team can be.

A strong security analytics solution can help you extend your existing security operations program into cloud environments without requiring separate tools. As you start taking steps to gain visibility into your cloud environments, look for solutions that can span your entire IT environment — be it traditional on-premises, private cloud, SaaS or IaaS — and enable you to manage security across multiple systems from behind a single pane of glass. Cloud is the new IT frontier, and your security analytics vendor should be able to support you throughout each stage of the journey.

Read the interactive white paper: One for all — New parity for your enterprise security

The post It’s Time to Bring Cloud Environments Out of the Shadows appeared first on Security Intelligence.

Organizations want to leverage the cloud but are held back by security misconceptions

iboss has published the findings of its 2018 Enterprise Cloud Trends report. The survey of IT decision makers and office workers in U.S. enterprises found that 64% of IT decision makers believe the pace of software as a service (SaaS) application adoption is outpacing their cybersecurity capabilities. Combined with growing pressures from shadow IT and mobile employees, 91% of IT decision makers agree they need to update security policies to operate in a cloud-first environment. … More

The post Organizations want to leverage the cloud but are held back by security misconceptions appeared first on Help Net Security.

Avoid These Security Mistakes During Cloud Migration

The headlong rush to the cloud continues to accelerate, promising increased efficiency, flexibility and security, but CSOs are not off the hook when it comes to fortifying the privacy and

The post Avoid These Security Mistakes During Cloud Migration appeared first on The Cyber Security Place.

How to Get the Most Out of the RSA Conference 2018

On April 16–20, the security industry will once again convene in downtown San Francisco for the annual RSA conference. For those of you who haven’t been, RSA is the premier security conference and exhibition of the year, with more than 40,000 expected attendees and 500 vendors set to show off the latest technologies in the field.

The expo floor at the Moscone Center will be split into two halls, North and South. The North Hall has the largest and smallest booths, and you can download the exhibit floor plans to see where your favorite vendors will be setting up shop. There is also an educational component, although it can be a mixed bag of hype and helpful information since vendors have to pay to participate.

Introductory seminars and tutorials, which have been well-attended in the past, will take place on Monday, April 16. The conference will wrap up on Friday, April 20, with a stirring closing keynote by RSA Conference Program Chair Hugh Thompson and other industry experts about the exciting yet uncertain future of artificial intelligence (AI). Below is a peek at what attendees can expect during the main part of RSA, which runs from Tuesday to Thursday, including masterful keynotes, gripping panel sessions and interactive experiences that touch on a wide range of today’s most relevant cybersecurity topics.

What to Expect at This Year’s RSA Conference

Emerging security startups will strut their stuff at the Marriott Early Stage Expo, which is located across the street from the Moscone Center and will be open late Tuesday and then all day Wednesday and Thursday. The selection criteria for this exhibit are rigorous, so you might want to spend some time perusing the RSA website to determine which of the dozens of vendors are most worth crossing the street to check out.

Next door to the Early Stage Expo is the RSAC Sandbox, which features several engaging, hands-on sessions, starting with a beer tasting on Tuesday night. The Sandbox and Marriott lobby are great places to meet and network with fellow attendees who want to take a breather from the main conference without getting too far from the action.

It would also be well worth your time to check out some of the panels and keynotes featuring prominent women in cybersecurity and the technology space in general. Below are a few of the most interesting.

More Noteworthy Sessions, Panels and Keynotes

Below are some other noteworthy sessions that you might want to put on your schedule during the main part of the conference, broken down by day.

Tuesday, April 17:

  • “The Cryptographers’ Panel” (9:20 a.m.) — This session will feature cryptographer Whitfield Diffie and RSA co-founders Ron Rivest and Adi Shamir.
  • “Hype or Myth: Smart Home Security” (11:30 a.m.) — ESET’s Tony Anscombe will talk about the various risks associated with the Internet of Things (IoT) and what you can do about them.

Wednesday, April 18:

Thursday, April 19:

Something for Everyone at RSA 2018

If you aren’t going to the show or don’t want to jump back and forth between several concurrent sessions, you can play back session recordings online via the RSA Conference onDemand program, which will be rolled out for the first time this year. If you do plan to make the trip to San Francisco, be sure to reserve your spot at the sessions you hope to attend through the web portal as soon as possible — many of the more popular talks are already at capacity.

No matter where your interests lie, there is something for every cybersecurity enthusiast at this year’s RSA Conference. Don’t miss one of the best chances you’ll have all year to rub elbows with experts, meet like-minded security professionals and immerse yourself in the latest trends, technologies and talking points of cybersecurity.

The post How to Get the Most Out of the RSA Conference 2018 appeared first on Security Intelligence.

ShiftLeft: Fully automated runtime security solution for cloud applications

When talking about data loss prevention, the first thing that comes to mind is solutions aimed at stopping users from moving sensitive documents/data out of a network. But there is a different type of data loss that app developers should be conscious of and worry about: cloud applications inadvertently sending critical data to unencrypted/public databases/services. Fuelled by the adoption of microservices and short software development cycles, this is the fastest growing problem in application security today. … More

The post ShiftLeft: Fully automated runtime security solution for cloud applications appeared first on Help Net Security.

AWS Launches New Tools for Firewalls, Certificates, Credentials

Amazon Web Services (AWS) announced on Wednesday the launch of several tools and services designed to help customers manage their firewalls, use private certificates, and safely store credentials.

Private Certificate Authority

One of the new services is called Private Certificate Authority (CA) and it’s part of the AWS Certificate Manager (ACM). The Private CA allows AWS customers to use private certificates without the need for specialized infrastructure.

Developers can now provision private certificates with just a few API calls. At the same time, administrators are provided central management and auditing capabilities, including certificate revocation lists (CRLs) and certificate creation reports. Private CA is based on a pay-as-you-go pricing model.

AWS Secrets Manager

The new AWS Secrets Manager is designed to make it easier for users to store, distribute and rotate their secrets, including credentials, passwords and API keys. The storage and retrieval of secrets can be done via the API or the AWS Command Line Interface (CLI), while built-in or custom AWS Lambda functions provide the capabilities for rotating credentials.
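Conceptually, the store/retrieve/rotate workflow looks like the toy sketch below. This is an in-memory stand-in written for illustration only, not the AWS Secrets Manager API; in practice the same operations are performed through AWS API calls, the CLI or a rotation Lambda.

```python
import secrets


class SecretStore:
    """Toy in-memory stand-in for a managed secrets service.

    Not the AWS API -- just the store/retrieve/rotate workflow in miniature.
    """

    def __init__(self):
        self._secrets = {}

    def put(self, name: str, value: str) -> None:
        self._secrets[name] = value

    def get(self, name: str) -> str:
        return self._secrets[name]

    def rotate(self, name: str) -> str:
        """Replace a secret with a fresh random value, as a rotation
        function would, and return the new value."""
        self._secrets[name] = secrets.token_hex(16)
        return self._secrets[name]
```

The point of rotation is that consumers always fetch the secret by name at use time, so a changed value propagates without redeploying applications.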

“Previously, customers needed to provision and maintain additional infrastructure solely for secrets management which could incur costs and introduce unneeded complexity into systems,” explained Randall Hunt, Senior Technical Evangelist at AWS.

AWS Secrets Manager is available in the US East and West, Canada, South America, and most of the EU and Asia Pacific regions. As for pricing, the cost is $0.40 per month per secret, and $0.05 per 10,000 API calls.

AWS Firewall Manager

The new AWS Firewall Manager is designed to simplify administration of AWS WAF web application firewalls across multiple accounts and resources. Administrators can create policies and set up firewall rules, which are automatically applied to all applications, regardless of the region where they are hosted.

“Developers can develop and innovators can innovate, while the security team gains the ability to respond quickly, uniformly, and globally to potential threats and actual attacks,” said Jeff Barr, Chief Evangelist for AWS.

AWS Shield Advanced customers get the new Firewall Manager at no extra cost, while other users will be charged a monthly fee for each policy in each region.

Amazon EFS data encrypted in transit

Amazon also announced that it has added support for encrypting data in transit for the Amazon Elastic File System (EFS), a file system designed for cloud applications that require shared access to file-based storage. Support for encrypting data at rest has already been available.

The company has made it easier for users to implement encryption in transit with the launch of a new EFS mount helper tool.

Related: Amazon Launches Security and Compliance Analysis Tool for AWS

Related: AWS Launches New Cybersecurity Services

Eduard Kovacs (@EduardKovacs) is a contributing editor at SecurityWeek. He worked as a high school IT teacher for two years before starting a career in journalism as Softpedia’s security news reporter. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.

Why Multi-cloud Security Requires Rethinking Network Defense

The Need to Rethink Security For Our Cloud Applications Has Become Urgent. Companies are utilizing the public cloud as their primary route to market for creating and delivering innovative applications.

The post Why Multi-cloud Security Requires Rethinking Network Defense appeared first on The Cyber Security Place.

Why Multi-cloud Security Requires Rethinking Network Defense

The Need to Rethink Security For Our Cloud Applications Has Become Urgent

Companies are utilizing the public cloud as their primary route to market for creating and delivering innovative applications. Striving to gain a competitive advantage, organizations of all sizes and in all vertical sectors now routinely tap into infrastructure as a service, or IaaS, and platform as a service, or PaaS, to become faster and more agile at improving services through applications.

Along the way, companies are working with multiple cloud providers to create innovative new apps with much more speed and agility. This approach is opening up unprecedented paths to engage with remote workers, suppliers, partners and customers. Organizations that are good at this are first to market with useful new tools, supply chain breakthroughs and customer engagement innovations. 

There’s no question that IaaS, PaaS and their corollary, DevOps, together have enabled businesses to leapfrog traditional IT processes. We are undergoing a digital transformation of profound scope – and things are just getting started. Companies are beginning to leverage the benefits of being able to innovate with unprecedented agility and scalability; however, to take this revolution to the next level, we must take a fresh approach to how we’re securing our business networks.

Limits to legacy defense

Simply put, clunky security approaches, pieced together from multiple vendors, result in a fragmented security environment where IT teams must manually correlate data to implement actionable security protections. This level of human intervention increases the likelihood for human error, leaving organizations exposed to threats and data breaches. What’s more, security tools that are not built for the cloud significantly limit the agility of development teams. 

Cloud collaboration, fueled by an array of dynamic and continually advancing platforms, is complex; and this complexity has introduced myriad new layers of attack vectors. We’ve seen how one small oversight, such as forgetting to change the default credentials when booting up a new cloud-based workload, can leave an organization’s data exposed or allow attackers to leverage resources to mine cryptocurrency. 

Clearly the need to rethink security for our cloud apps has become urgent. What’s really needed is an approach that minimizes data loss and downtime, while also contributing to faster application development, thus allowing the business to experience robust growth. It should be possible to keep companies free to mix and match cloud services, and to innovate seamlessly on the fly, while also reducing the attack surface that is readily accessible to malicious parties.

Frictionless security

The good news is that the cybersecurity community recognizes this new exposure, and industry leaders are innovating, as well, applying their expertise to prevent successful cyberattacks. It is, indeed, possible to keep companies free to mix and match multiple cloud providers, and to innovate seamlessly on the fly, while also reducing opportunities for attack. Ideally, cloud security should speed application development and business growth, while preventing data loss and business downtime.

This requires three key capabilities: advanced application and data breach prevention, consistent protection across locations and clouds, and frictionless deployment and management. Security delivered through private cloud, public cloud and SaaS security capabilities can work together to eliminate the wide range of cloud risks that can cause breaches. 

When you think about it, a different approach to cloud security is inevitable. There’s every reason to drive toward wider use of enterprise-class cloud security capabilities integrated into the cloud app development lifecycle. It’s vital to make cloud security frictionless – for both the development teams and the security teams. This is a linchpin to fulfilling the potential of cloud-centric commerce. We must move toward frictionless security systems, designed to be just as fast and agile as the cloud-based business operations they protect.

Scott Simkin is a Senior Manager in the Cybersecurity group at Palo Alto Networks. He has broad experience across threat research, cloud-based security solutions, and advanced anti-malware products. He is a seasoned speaker on an extensive range of topics, including Advanced Persistent Threats (APTs), presenting at the RSA conference, among others. Prior to joining Palo Alto Networks, Scott spent 5 years at Cisco where he led the creation of the 2013 Annual Security Report amongst other activities in network security and enterprise mobility. Scott is a graduate of the Leavey School of Business at Santa Clara University.

VMware Acquires Threat Detection and Response Firm E8 Security

VMware announced this week that it has acquired threat detection and response company E8 Security, whose technology will be used to improve the Workspace ONE digital workspace platform. This is the third acquisition made by VMware in less than two months.

California-based E8 Security emerged from stealth mode in March 2015 and it has raised a total of nearly $22 million – more than $23 million if you count seed funding.

E8 Security has developed a platform that helps organizations detect malicious activity by monitoring user and device behavior. The product also improves incident response by providing the data needed to analyze threats.

VMware plans on using E8 Security’s technology to improve its Workspace ONE product, specifically a recently announced intelligence feature that provides actionable information and recommendations, and automation for remediation tasks.

“By adding E8 Security’s user and entity behavior analytics capabilities to insights from VMware Workspace ONE Intelligence, our customers will be able to streamline management, remediation, and automation to improve the employee experience and the security of their digital workspace,” explained Sumit Dhawan, senior vice president and general manager of VMware’s End-User Computing (EUC) business.

VMware announced in February the acquisition of CloudCoreo, a Seattle-based cloud security startup launched less than two years ago. The company has created a product that allows organizations to identify public cloud risks and continuously monitor cloud infrastructure to ensure that applications and data are safe.

The virtualization giant plans on using the CloudCoreo technology and team to help customers secure their applications in the cloud.

Also in February, VMware announced its intent to buy CloudVelox, a company that specializes in providing workload mobility between the data center and public clouds. CloudVelox’s solutions also include data, system and application security capabilities.

Financial terms have not been disclosed for these recent acquisitions.

Related: Oracle to Acquire Cloud Security Firm Zenedge

Related: Palo Alto Networks to Acquire CIA-Backed Cloud Security Firm for $300 Million

Related: HyTrust Acquires DataGravity, Raises $36 Million


Drop Everything and Enable Two-Factor Authentication Immediately

If you haven’t done so already after seeing the title of this article, please stop reading immediately and enable two-factor authentication (2FA) on every system and service you use that

The post Drop Everything and Enable Two-Factor Authentication Immediately appeared first on The Cyber Security Place.

Experiences and attitudes towards cloud-specific security capabilities

Dimensional Research conducted a survey of IT professionals responsible for cloud environments. The survey, which comprises data collected from over 600 respondents around the world, provides an overview of experiences and attitudes regarding cloud security. An overwhelming 83 percent of respondents have … More

The post Experiences and attitudes towards cloud-specific security capabilities appeared first on Help Net Security.

Why You Should Drop Everything and Enable Two-Factor Authentication Immediately

If you haven’t done so already after seeing the title of this article, please stop reading immediately and enable two-factor authentication (2FA) on every system and service you use that allows it. The reality is that no matter how strong your password is — even that 48-character one with uppercase and lowercase letters, numbers and symbols — it’s not strong enough if your desktop or browser is compromised and your credentials are stolen.

While this might have sounded like hyperbole just a few years ago, every system in today’s environment is a target. 2FA is now part of the bare-minimum security we should have in place but too often don’t.

APTs Are Real and 2FA Is Our Best Defense

Imagine that you’ve received an email stating that you and your vendors are currently under attack by cybercriminals looking to steal your login credentials. The communication from one of your threat intelligence feeds warns that there is credible information about both general and targeted attacks against vendors — more specifically, attempts to log in to accounts using stolen credentials. All you have to do is look at the talk of remote access Trojans (RATs) and threats reported by the Financial Services Information Sharing and Analysis Center (FS-ISAC) and other organizations to realize that this is a real threat and not something you have to imagine.

Several years ago, when the term advanced persistent threat (APT) first entered the security lexicon, most security professionals — myself included — thought of it as a marketing term used to describe any attack deemed too complex to be handled by the first lines of defense most organizations had in place. Today, it’s clear that cybercriminal groups really are looking to compromise systems at all levels, and passwords are one of the easiest and best targets. That’s why 2FA is no longer a nice-to-have feature — it’s a necessary protection that no organization can afford to overlook.

Demand for Two-Factor Authentication on the Rise

When it comes to the various types of 2FA, most of us are familiar with tools such as RSA’s SecurID and the host of certificate-based methods that have been available for many years. But this space is seeing a resurgence, from free tools such as Google Authenticator and Microsoft’s similarly named Authenticator app to more independent solutions such as Duo and Authy. These tools all leverage users’ phones and a mathematical algorithm similar to a SecurID token to provide a code to enter during login.
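The "mathematical algorithm" behind these apps is the time-based one-time password (TOTP) scheme standardized in RFC 6238: an HMAC over a time-step counter, dynamically truncated to a short numeric code. A minimal standard-library sketch (production apps add base32 secret handling, clock-drift windows and rate limiting):

```python
import hashlib
import hmac
import struct


def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    counter = timestamp // step                 # number of elapsed time steps
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test key (the ASCII bytes `12345678901234567890`) and timestamp 59, this reproduces the published eight-digit test vector 94287082. Because both sides derive the code from a shared secret plus the current time, a stolen password alone is not enough to log in.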

Many enterprises use some form of 2FA to protect their internal environments, but a gap often arises where the internal environment meets external service providers. While you can enable 2FA for Facebook, Gmail, Slack and many other social media services, it’s not yet a universal constant. Fortunately, it is becoming more common as the demand for this security measure grows.

2FA Use Lags as Account Takeover Ramps Up

The sad part is that even where 2FA is offered, many users still don’t take advantage of it. At a recent USENIX conference in California, Google engineer Grzegorz Milka announced that less than 10 percent of active Gmail users are using 2FA. While this doesn’t translate directly to the number of enterprise users who employ 2FA for external sites, it doesn’t take much imagination to extend this trend.

To make matters worse, my own research for Akamai revealed that 43 percent of logins submitted through most sites are account takeover attempts. It is likely that many organizations don’t take advantage of 2FA in the cloud unless their corporate policy requires it and the security team follows up with audits.

Security Professionals Must Lead by Example

Make no mistake: Bad guys are out to get you and your login credentials. This becomes dangerous when the login they’re trying to get is not your heavily protected corporate password, but that of your cloud-based provider or some other service your organization relies on to conduct business. Gaining access to the corporate Twitter account is an old-school tactic, but its impact pales in comparison to the havoc an attacker could wreak by compromising an administrator account for one of your cloud-based services.

As an industry, we have to demand that each and every vendor we use offers 2FA. But as individuals, we also have to enable these controls wherever possible, even if it’s not required under corporate policy. It’s up to us to lead by example, and 2FA is one of the most impactful controls we can put in place to protect our accounts and prevent fraud. So quit reading already and explore what you need to do to enable this invaluable security measure on as many applications as possible!

The post Why You Should Drop Everything and Enable Two-Factor Authentication Immediately appeared first on Security Intelligence.

Security is the biggest driver and obstacle in hybrid cloud migrations

Only 16% of enterprises use a single cloud, with two-thirds having a strategy in place for a hybrid approach, according to a new report. Companies in the early stages of cloud

The post Security is the biggest driver and obstacle in hybrid cloud migrations appeared first on The Cyber Security Place.

Qualys integrates with Google Cloud Platform’s Security Command Centre

Qualys and Google Cloud Platform can now play nicely together with the launch of the security firm’s Cloud Security Command Center (Cloud SCC) integration. The security and data risk platform will

The post Qualys integrates with Google Cloud Platform’s Security Command Centre appeared first on The Cyber Security Place.

NSP Finds Common Ground in a Time of Change

As enterprise customers move from the private to the public cloud, they are looking for safety and uninterrupted coverage, but also multi-platform availability and interoperability with other products

The public cloud offers convenience, cost savings, and the opportunity to shift capital infrastructure spending to an operational expense model. But it also introduces a new level of risk, where a vulnerability in publicly accessible software can enable an attacker to breach the cloud and exfiltrate sensitive information, or accidentally expose customer data to other tenants using the same service.

The real world complicates matters further: the journey from the private cloud to the public cloud often happens in stages, and customers may use a combination of both (i.e., a hybrid cloud). Further, there are big changes happening in the Security Operations Center (SOC) in the multi-cloud environment, with automation increasing and many controls becoming virtual. Customers ask, “How do I respond? How do I protect myself?”

A big part of the answer is Intrusion Detection and Prevention Systems (IDPS) software. According to Gartner, by year-end 2020, 70% of new stand-alone IDPS placements will be cloud-based (public or private) or deployed for internal use cases, rather than the traditional placement behind a firewall.1 Download the full Gartner MQ here for more perspective.

Another part of the equation is usability. Customers need a cybersecurity product that works for their needs: their specific cloud vendor, their platform, and integration with their other cybersecurity solutions. Also, virtualized security solutions must be flexible and scalable, and, even more importantly, they must function seamlessly with software-defined networking platforms.

We believe that McAfee’s latest IDPS release – the McAfee® Network Security Platform (NSP) – has the answers to many of these questions. NSP discovers and blocks sophisticated threats in cloud architectures with accuracy and simplicity. It’s a complete network threat and intrusion prevention solution that protects systems and data wherever they reside across datacenter, cloud, and hybrid enterprise environments, utilizing multiple signature-less detection technologies.

It’s also important to remember that different customers use IDPS products in different ways. The latest NSP release allows customers to use the software in the way they want. For example:

Cloud Infrastructure Security: NSP (and Virtual Network Security Platform, or vNSP, designed specifically for the cloud) support both Azure and Amazon Web Services (AWS) — today’s leading public cloud services — delivering complete threat visibility of data going through an internet gateway and into east-west traffic. A customer can restore threat visibility and security compliance into public cloud architectures with a platform that delivers true east-west traffic inspection.

Decrypting SSL traffic with dynamic keys: Traditional decryption technologies are ineffective against traffic encrypted with dynamic keys, such as Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) key exchange, creating blind spots in network traffic. NSP now provides a unique solution2 for decrypting dynamic SSL keys like ECDHE (an industry first). This patent-pending solution scales with workloads, delivering high performance.

Ease of Use: With NSP, users have greater control over the host. The console and enhanced graphical user interface put users in control of real-time data with a “single pane of glass,” delivering centralized, web-based management. NSP is the first and only IDPS solution to combine advanced threat prevention and application awareness into a single security decision engine, plugging infrastructure gaps. It’s also a distributed platform that doesn’t hog performance.

Platform: vNSP supports AWS and Azure in public cloud workloads on both Windows and Linux.

Integration: NSP works with other McAfee products, as well as the Data Exchange Layer (DXL), which shares data with non-McAfee products.

Open Source Support: NSP supports Snort, the open source intrusion detection system whose community publishes detection signatures.

Marketplace: Customers can now access vNSP on the AWS and Azure marketplaces, available as Bring Your Own License (BYOL).

Another question we hear from customers is about machine learning, which is an important part of the future of cybersecurity in a world of increasing threat complexity. McAfee’s NSP uses machine learning, employing self-learning systems built on historical data, including data from other McAfee products such as Advanced Threat Defense and Endpoint.

Things are changing. The private and public cloud are dynamic. NSP finds common ground.

We believe it’s understandable why Gartner has placed McAfee in the Leaders quadrant in IDPS for the 11th year in a row. Grab a copy of the full report here.

1 2018 Gartner Magic Quadrant for Intrusion Detection and Prevention Systems

2 Available in NSP only (not vNSP).

Gartner Magic Quadrant for Intrusion Detection and Prevention Systems, Craig Lawson, Claudio Neiva, 10 January 2018. From 2014-17, McAfee was included as Intel Security (McAfee). Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

The post NSP Finds Common Ground in a Time of Change appeared first on McAfee Blogs.

AWS Cloud: Proactive Security and Forensic Readiness – part 3

Part 3: Data protection in AWS

This is the third in a five-part blog series that provides a checklist for proactive security and forensic readiness in the AWS cloud environment. This post relates to protecting data within AWS.

Data protection has become all the rage for organisations that are processing personal data of individuals in the EU, because the EU General Data Protection Regulation (GDPR) deadline is fast approaching.

AWS is no exception. The company is providing customers with services and resources to help them comply with GDPR requirements that may apply to their operations. These include granular data access controls, monitoring and logging tools, encryption, key management, audit capability, and adherence to IT security standards (for more information, see the AWS General Data Protection Regulation (GDPR) Center and the Navigating GDPR Compliance on AWS whitepaper). In addition, AWS has published several privacy-related whitepapers, including country-specific ones. The whitepaper Using AWS in the Context of Common Privacy & Data Protection Considerations focuses on typical questions asked by AWS customers when considering privacy and data protection requirements relevant to their use of AWS services to store or process content containing personal data.

This blog, however, is not just about protecting personal data. The following list provides guidance on protecting any information stored in AWS that is valuable to your organisation. The checklist mainly focuses on protection of data (at rest and in transit), protection of encryption keys, removal of sensitive data from AMIs, and understanding data access requests in AWS.

The checklist provides best practice for the following:

  1. How are you protecting data at rest?
  2. How are you protecting data at rest on Amazon S3?
  3. How are you protecting data at rest on Amazon EBS?
  4. How are you protecting data at rest on Amazon RDS?
  5. How are you protecting data at rest on Amazon Glacier?
  6. How are you protecting data at rest on Amazon DynamoDB?
  7. How are you protecting data at rest on Amazon EMR?
  8. How are you protecting data in transit?
  9. How are you managing and protecting your encryption keys?
  10. How are you ensuring custom Amazon Machine Images (AMIs) are secure and free of sensitive data before publishing for internal (private) or external (public) use?
  11. Do you understand who has the right to access your data stored in AWS?

IMPORTANT NOTE: Identity and access management is an integral part of protecting data; however, you’ll notice that the following checklist does not focus on AWS IAM. I have created a separate checklist on IAM best practices here.

Best-practice checklist

1.    How are you protecting data at rest?
  • Define polices for data classification, access control, retention and deletion
  • Tag information assets stored in AWS based on adopted classification scheme
  • Determine where your data will be located by selecting a suitable AWS region
  • Use geo restriction (or geoblocking), to prevent users in specific geographic locations from accessing content that you are distributing through a CloudFront web distribution
  • Control the format, structure and security of your data by masking, anonymising or encrypting it in accordance with its classification
  • Encrypt data at rest using server-side or client-side encryption
  • Manage other access controls, such as identity, access management, permissions and security credentials
  • Restrict access to data using IAM policies, resource policies and capability policies
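To illustrate the geo-restriction item above, here is a minimal Python sketch of the Restrictions block that the CloudFront API expects. The helper name and the country codes are illustrative, and the actual update_distribution call (which needs credentials plus the distribution's current config and ETag) is deliberately omitted.

```python
def geo_restriction_config(blocked_countries):
    """Build the CloudFront 'Restrictions' block that blacklists the
    given ISO 3166-1 alpha-2 country codes."""
    return {
        "GeoRestriction": {
            "RestrictionType": "blacklist",
            "Quantity": len(blocked_countries),
            "Items": list(blocked_countries),
        }
    }

# This block slots into an existing DistributionConfig before calling
# cloudfront.update_distribution(...). Country codes here are examples.
restrictions = geo_restriction_config(["KP", "IR"])
```

A "whitelist" RestrictionType inverts the logic, allowing only the listed countries.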
2.    How are you protecting data at rest on Amazon S3?
  • Use bucket-level or object-level permissions alongside IAM policies
  • Don’t create any publicly accessible S3 buckets. Instead, create pre-signed URLs to grant time-limited permission to download the objects
  • Protect sensitive data by encrypting data at rest in S3. Amazon S3 supports server-side encryption and client-side encryption, where you create and manage your own encryption keys
  • Encrypt inbound and outbound S3 data traffic
  • Amazon S3 provides data replication and versioning rather than automatic backups. Implement S3 Versioning and S3 Lifecycle Policies
  • Automate the lifecycle of your S3 objects with rule-based actions
  • Enable MFA Delete on S3 bucket
  • Be familiar with the durability and availability options for the different S3 storage classes – S3 Standard, S3 Standard-IA and S3 Reduced Redundancy Storage (RRS).
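A sketch of the server-side encryption and pre-signed URL items above, using boto3 (the AWS SDK for Python). The bucket and key names are placeholders, and parameter building is separated from the AWS call so that the encryption settings can be inspected without credentials.

```python
def sse_put_kwargs(bucket, key, kms_key_id=None):
    """Build put_object kwargs that force server-side encryption:
    SSE-KMS when a CMK id/alias is given, otherwise SSE-S3 (AES256)."""
    kwargs = {"Bucket": bucket, "Key": key}
    if kms_key_id:
        kwargs["ServerSideEncryption"] = "aws:kms"
        kwargs["SSEKMSKeyId"] = kms_key_id
    else:
        kwargs["ServerSideEncryption"] = "AES256"
    return kwargs

def upload_encrypted(body, **kwargs):
    """Perform the upload and return a time-limited download link.
    Requires AWS credentials, so it is kept separate from the builder."""
    import boto3  # local import: only needed when actually calling AWS
    s3 = boto3.client("s3")
    s3.put_object(Body=body, **kwargs)
    # A pre-signed URL grants temporary access instead of a public bucket:
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": kwargs["Bucket"], "Key": kwargs["Key"]},
        ExpiresIn=3600,  # one hour
    )

# Placeholder names; settings can be reviewed offline before uploading.
kwargs = sse_put_kwargs("my-example-bucket", "reports/2018-05.csv",
                        kms_key_id="alias/app-data")
```

Pre-signed URLs expire after the chosen interval, which is the checklist's alternative to publicly accessible buckets.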
3.    How are you protecting data at rest on Amazon EBS?
  • AWS creates two copies of your EBS volume for redundancy. However, since both copies are in the same Availability Zone, replicate data at the application level, and/or create backups using EBS snapshots
  • On Windows Server 2008 and later, use BitLocker encryption to protect sensitive data stored on system or data partitions (this needs to be configured with a password as Amazon EC2 does not support Trusted Platform Module (TPM) to store keys)
  • On Windows Server, implement Encrypted File System (EFS) to further protect sensitive data stored on system or data partitions
  • On Linux instances running kernel versions 2.6 and later, you can use dm-crypt with Linux Unified Key Setup (LUKS) for key management
  • Use third-party encryption tools
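For EBS specifically, encryption at rest can also be requested when the volume is created. A hedged boto3 sketch follows; the builder function is hypothetical, and the create_volume call sits in its own function because it requires AWS credentials.

```python
def encrypted_volume_params(az, size_gib, kms_key_id=None):
    """Parameters for ec2.create_volume with encryption at rest enabled.
    When kms_key_id is omitted, EC2 falls back to the account's
    default EBS encryption key."""
    params = {
        "AvailabilityZone": az,
        "Size": size_gib,
        "Encrypted": True,
    }
    if kms_key_id:
        params["KmsKeyId"] = kms_key_id
    return params

def create_encrypted_volume(**params):
    """Actually create the volume; needs AWS credentials."""
    import boto3  # kept out of the pure parameter builder above
    return boto3.client("ec2").create_volume(**params)
```

Snapshots of an encrypted volume remain encrypted, which complements the application-level replication advice above.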
4.    How are you protecting data at rest on Amazon RDS?


(Note: Amazon RDS leverages the same secure infrastructure as Amazon EC2. You can use the Amazon RDS service without additional protection, but it is recommended to encrypt data at the application layer)

  • Use built-in encryption function that encrypts all sensitive database fields, using an application key, before storing them in the database
  • Use platform level encryption
  • Use MySQL cryptographic functions – encryption, hashing, and compression
  • Use Microsoft Transact-SQL cryptographic functions – encryption, signing, and hashing
  • Use Oracle Transparent Data Encryption on Amazon RDS for Oracle Enterprise Edition under the Bring Your Own License (BYOL) model
5.    How are you protecting data at rest on Amazon Glacier?

(Note: Data stored on Amazon Glacier is protected using server-side encryption. AWS generates separate unique encryption keys for each Amazon Glacier archive and encrypts them using AES-256)


  • Encrypt data prior to uploading it to Amazon Glacier for added protection
6.    How are you protecting data at rest on Amazon DynamoDB?

(Note: DynamoDB is a shared service from AWS and can be used without added protection, but you can implement a data encryption layer over the standard DynamoDB service)


  • Use raw binary fields or Base64-encoded string fields, when storing encrypted fields in DynamoDB
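The Base64 point above can be sketched with the Python standard library alone. The "ciphertext" below is a placeholder (real code would encrypt first, for example with a KMS data key); the helpers show how encrypted bytes round-trip through a DynamoDB string ('S') attribute.

```python
import base64

def to_dynamodb_string(ciphertext: bytes) -> str:
    """Base64-encode encrypted bytes so they fit a DynamoDB 'S' attribute."""
    return base64.b64encode(ciphertext).decode("ascii")

def from_dynamodb_string(value: str) -> bytes:
    """Recover the raw encrypted bytes from the stored attribute."""
    return base64.b64decode(value)

# Round trip with placeholder ciphertext (real code would encrypt first):
blob = b"\x00\x17encrypted-bytes\xff"
item_attr = {"S": to_dynamodb_string(blob)}
assert from_dynamodb_string(item_attr["S"]) == blob
```

Alternatively, a binary ('B') attribute stores the raw bytes directly and skips the Base64 step.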
7.    How are you protecting data at rest on Amazon EMR?
  • Store data permanently on Amazon S3 only, and do not copy to HDFS at all. Apply server-side or client-side encryption to data in Amazon S3
  • Protect the integrity of individual fields or entire file (for example, by using HMAC-SHA1) at the application level while you store data in Amazon S3 or DynamoDB
  • Or, employ a combination of Amazon S3 server-side encryption and client-side encryption, as well as application-level encryption
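The field-integrity suggestion above (HMAC-SHA1 at the application level) can also be sketched with the standard library. The key below is a placeholder and would come from your key store in practice.

```python
import hashlib
import hmac

SECRET = b"integrity-key-from-your-key-store"  # placeholder: never hard-code

def sign_field(value: bytes) -> str:
    """HMAC-SHA1 tag computed at the application level before storing
    a field in Amazon S3 or DynamoDB."""
    return hmac.new(SECRET, value, hashlib.sha1).hexdigest()

def verify_field(value: bytes, tag: str) -> bool:
    """Constant-time comparison to detect tampering on read-back."""
    return hmac.compare_digest(sign_field(value), tag)

tag = sign_field(b"record-payload")
assert verify_field(b"record-payload", tag)
assert not verify_field(b"tampered-payload", tag)
```

hmac.compare_digest avoids timing side channels that a plain == comparison would leak.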
8.    How are you protecting data in transit?
  • Encrypt data in transit using IPSec ESP and/or SSL/TLS
  • Encrypt all non-console administrative access using strong cryptographic mechanisms using SSH, user and site-to-site IPSec VPNs, or SSL/TLS to further secure remote system management
  • Authenticate data integrity using IPSec ESP/AH, and/or SSL/TLS
  • Authenticate remote end using IPSec with IKE with pre-shared keys or X.509 certificates
  • Authenticate the remote end using SSL/TLS with server certificate authentication based on the server common name (CN) or Subject Alternative Name (SAN)
  • Offload HTTPS processing on Elastic Load Balancing to minimise impact on web servers
  • Protect the backend connection to instances using an application protocol such as HTTPS
  • On Windows servers use X.509 certificates for authentication
  • On Linux servers, use SSH version 2 and use non-privileged user accounts for authentication
  • Use HTTP over SSL/TLS (HTTPS) for connecting to RDS, DynamoDB over the internet
  • Use SSH for access to Amazon EMR master node
  • Use SSH for clients or applications to access Amazon EMR clusters across the internet using scripts
  • Use SSL/TLS for Thrift, REST, or Avro
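On the client side, the certificate-verification items above come down to configuring TLS strictly. Here is a small sketch using Python's standard ssl module; the minimum-version choice is an assumption and should follow your own policy.

```python
import ssl

def strict_tls_context() -> ssl.SSLContext:
    """Client-side TLS context that verifies the server certificate chain
    and its hostname (CN/SAN), per the checklist above."""
    ctx = ssl.create_default_context()  # loads system CA bundle
    ctx.check_hostname = True           # enforce CN/SAN match
    ctx.verify_mode = ssl.CERT_REQUIRED # refuse unverified servers
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # assumption: no legacy TLS
    return ctx

ctx = strict_tls_context()
```

The context can then be passed to http.client, urllib, or any library that accepts an SSLContext when connecting to RDS, DynamoDB, or your own endpoints.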
9.    How are you managing and protecting your encryption keys?
  • Define key rotation policy
  • Do not hard code keys in scripts and applications
  • Securely manage keys at server side (SSE-S3, SSE-KMS) or at client side (SSE-C)
  • Use tamper-proof storage, such as Hardware Security Modules (AWS CloudHSM)
  • Use a key management solution from the AWS Marketplace or from an APN Partner. (e.g., SafeNet, TrendMicro, etc.)
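Two of the items above sketched in Python: keeping keys out of scripts by loading them from the environment (or a secrets manager), and enabling annual rotation on a KMS customer master key via boto3. The variable and function names are illustrative, and the KMS call needs credentials so it lives in its own function.

```python
import os

def load_data_key_b64(var="APP_DATA_KEY"):
    """Fetch the key from the environment rather than hard-coding it in
    scripts or application source, per the checklist above."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; refusing to run without a key")
    return key

def enable_annual_rotation(kms_key_id):
    """Turn on AWS-managed yearly rotation for a KMS CMK (needs credentials)."""
    import boto3  # local import: only needed when actually calling AWS
    boto3.client("kms").enable_key_rotation(KeyId=kms_key_id)
```

Failing fast when the key is absent beats silently falling back to a default, which is how hard-coded keys creep back in.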
10. How are you ensuring custom Amazon Machine Images (AMIs) are secure and free of sensitive data before publishing for internal (private) or external (public) use?
  • Securely delete all sensitive data including AWS credentials, third-party credentials and certificates or keys from disk and configuration files
  • Delete log files containing sensitive information
  • Delete all shell history on Linux
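A minimal Python sketch of the AMI-scrubbing steps above. The target paths are common Linux locations and should be extended for your own applications; deletion is irreversible, so run this only on the build instance you are about to image.

```python
from pathlib import Path

def scrub(paths):
    """Delete the given files prior to creating the AMI; returns the list
    of files actually removed so the result can be logged."""
    removed = []
    for p in paths:
        f = Path(p).expanduser()
        if f.is_file():
            f.unlink()
            removed.append(str(f))
    return removed

# Typical targets on a Linux build instance (extend for your own apps):
DEFAULT_TARGETS = [
    "~/.aws/credentials",   # AWS access keys
    "~/.ssh/authorized_keys",
    "~/.bash_history",      # shell history
]
```

Calling `scrub(DEFAULT_TARGETS)` as the last step of an image-build pipeline keeps credentials and history out of both private and public AMIs.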
11. Do you understand who has the right to access your data stored in AWS?
  • Understand the applicable laws to your business and operations, consider whether laws in other jurisdictions may apply
  • Understand that relevant government bodies may have the right to issue requests for content; each relevant law contains criteria that must be satisfied for the law enforcement body to make a valid request.
  • Understand that AWS notifies customers where practicable before disclosing their data so they can seek protection from disclosure, unless AWS is legally prohibited from doing so or there is clear indication of illegal conduct regarding the use of AWS services. For additional information, visit Amazon Information Requests Portal.


For more details, refer to the following AWS resources:


Next up in the blog series, is Part 4 – Detective Controls in AWS – best practice checklist. Stay tuned.


Let us know in the comments below if we have missed anything in our checklist.

DISCLAIMER: Please be mindful that this is not an exhaustive list. Given the pace of innovation and development within AWS, there may be features being rolled out as these blogs were being written 😉 . Also, please note that this checklist is for guidance purposes only. For more information, or to request an in-depth security review of your cloud environment, please contact us.


Author: Neha Thethi

Editor: Gordon Smith

The post AWS Cloud: Proactive Security and Forensic Readiness – part 3 appeared first on BH Consulting.

A Brief History of Cloud Computing and Security

According to recent research1, 50% of organizations use more than one public cloud infrastructure vendor, choosing between Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform and a series of others. 85% of those using more than one cloud infrastructure provider are managing up to four1, seeking the best fit for their applications and hedging against downtime and lock-in to specific providers. Naturally, with any trend in cloud adoption, the practice of Shadow IT plays into this fragmentation.

As we look at an evolution like this, it is helpful to first understand the historical precedents that led us to this point, to learn from the past and remind ourselves of the progress made by others that we now enjoy. Let’s take a brief trip through the history of cloud computing to see how we arrived at our multi-cloud reality.

The Space Age and Dreamers

Around the same time that John F. Kennedy inspired the United States with his decisive proclamation that “We choose to go to the moon!”, leaders in computer science were dreaming of a terrestrial future with similarly aspirational bounds. While working at the U.S. Pentagon’s Advanced Research Projects Agency (ARPA, now known as DARPA), then-Director of the Information Processing Techniques Office J. C. R. Licklider wrote a memo to his colleagues describing a network of computers that spoke the same language and allowed data to be transmitted and worked on by programs “somewhere else”2. From his 1963 memo:

‘Consider the situation in which several different centers are netted together, each center being highly individualistic and having its own special language and its own special way of doing things. Is it not desirable, or even necessary for all the centers to agree upon some language or, at least, upon some conventions for asking such questions as “What language do you speak?”’3

‘The programs I have located throughout the system…I would like to bring them in as relocatable binary programs…either at “bring-in time” or at “run-time.”’ 3

“With a sophisticated network-control system, I would not decide whether to send the data and have them worked on by programs somewhere else, or bring in programs and have them work on my data. I have no great objection to making that decision, for a while at any rate, but, in principle, it seems better for the computer, or the network, somehow, to do that.”3

Here he is describing the precursors to the internet and our now-ubiquitous TCP/IP communication language that allows a myriad of connected devices to speak with each other and the cloud. His prediction of bringing in programs at “run-time” is all too familiar today in our browser-based access to cloud applications, as is his foresight that the physical location of those programs would not matter, leaving it up to a computer, or the network, to decide how to allocate resources properly.

Shared resources also sparked concern for Licklider:

“I do not want to use material from a file that is in the process of being changed by someone else. There may be, in our mutual activities, something approximately analogous to military security classification. If so, how will we handle it?” 3

While we have solved the challenge of collaborative editing in cloud applications today, his sights pointed to an issue which would eventually become of paramount importance to the information security community – how to handle sensitive data held in a location you do not physically own.

J. C. R. Licklider’s predictions quickly transformed into reality, and through further efforts at ARPA resulted in the first iteration of the internet: ARPANET. His inspiration to the development of the internet and cloud computing is undeniable, and the title of his memo quoted above, “Memorandum For Members and Affiliates of the Intergalactic Computer Network”, aspires to greatness beyond what many think is possible.

Virtual (Computing) Reality

In parallel to the effort made by ARPA and many university collaborators to connect computing devices together, IBM was developing a way to make their large, “mainframe” computers more cost efficient for their customers. In 1972 they released the first iteration of virtualized computing, the VM/370 operating system.4 From the 1972 program announcement:

VM/370 is a multi-access time-shared system with two major elements:

  • The Control Program (CP) which provides an environment where multiple concurrent virtual machines can run different operating systems, such as OS, OS/VS, DOS and DOS/VS, in time-shared mode.
  • The Conversational Monitor System (CMS) which provides a general-purpose, time-sharing capability.4

Running multiple operating systems through the control program, akin to today’s concept of a hypervisor, on one mainframe computer dramatically expanded the value customers could gain from these systems, and set the stage for virtualizing data center servers in years to come. Time-sharing through the CMS gave users an ability to log in and interact with the individual VMs, a concept still used today in virtualization software and anytime you log in to access a cloud service.

Through the 80’s and 90’s, the rise of personal computers took much attention away from the development of mainframe and early datacenter computing environments. Then in 1998, VMware filed a patent for a “Virtualization system including a virtual machine monitor for a computer with a segmented architecture”5 which was “particularly well-adapted for virtualizing computers in which the hardware processor has an Intel x86 architecture.”5 – starting sales of their technology a year later. While others entered the virtualization space at the same time, VMware quickly took the lead by focusing on the difficult task of virtualizing the widely used x86 architecture, expanding the value of many existing datacenter infrastructure investments.

Cloud computing would likely not exist without the resource efficiency of virtualization. Commercial offerings like Amazon Web Services (AWS), Microsoft Azure, and others achieve their economies of scale through virtualized infrastructure, making high-end computing affordable (and sometimes free) for just about anyone.

With no ties to hardware, the abstraction from physical location Licklider predicted begins to meet reality. Applications can exist anywhere, be accessed from anywhere, and be moved as needed, allowing cloud operators to update underlying hardware without downtime for the services they run. Abstraction from physical location also enables virtualized software and infrastructure to exist far from you – and your country. The predicament of cross-border data regulation is a developing issue, with the E.U.’s General Data Protection Regulation (GDPR) taking arguably the broadest reach to date.

Everything Over the Internet

If you were an enterprise organization running a datacenter in the late ’90s, starting to virtualize your infrastructure made economic sense. With 20/20 hindsight, we can see this also created an excellent business model for commercial vendors: build out virtualized infrastructure and offer software to others, who would be willing to pay less upfront for access than to host and maintain it themselves. Salesforce jumped on this opportunity early, taking on the likes of Oracle and SAP in the CRM market in 1999.

In 2003, Amazon engineer Benjamin Black proposed a new infrastructure for the company which was “…completely standardized, completely automated, and relied extensively on web services for things like storage,”6 also mentioning “the possibility of selling virtual servers as a service.”6 Amazon CEO Jeff Bezos took notice, and looking back, commented that:

“…we were spending too much time on fine-grained coordination between our network engineering groups and our applications programming groups. Basically, what we decided to do is build a [set of APIs] between those two layers so that you could just do coarse-grained coordination between those two groups.”7

“On the surface, superficially, [cloud computing] appears to be very different [from our retailing business].” “But the fact is we’ve been running a web-scale application for a long time, and we needed to build this set of infrastructure web services just to be able to manage our own internal house.”8

That infrastructure build-out eventually turned into a commercial service in 2006, with the launch of Elastic Compute Cloud (EC2) from Amazon Web Services (AWS). From their 2006 announcement:

“Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud…designed to make web-scale computing easier for developers. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use.”9

The early success of Salesforce, Amazon, and several others proved the economic model of delivering services over the internet, firmly cementing cloud computing as a viable method to interact with software and computing infrastructure. Rapid growth in cloud services resulted in vast amounts of data landing in the hands of organizations who did not own it – and who could not practically be held liable for how it was accessed. Cloud Access Security Brokers (CASBs) were first proposed in 2012, offering visibility into where cloud data is located, protection for it within services, and access controls. While CASB is a logical countermeasure to cloud data loss and compliance risk, many IT organizations are still in the early stages of testing.

Enter the Multi-Cloud

With the release of Microsoft Azure in 2010 and Google Cloud Platform in 2011, attractive alternatives to AWS entered the market and spurred experimentation. It was inevitable for competition to arise, but it created a scenario where choosing just one provider wasn’t necessary, or even beneficial. Linux provider Red Hat puts it well:

“You might find the perfect cloud solution for 1 aspect of your enterprise—a proprietary cloud fine-tuned for hosting a proprietary app, an affordable cloud perfect for archiving public records, a cloud that scales broadly for hosting systems with highly variable use rates—but no single cloud can do everything. (Or, rather, no single cloud can do everything well.)”10

Fault tolerance can also come into play, with select but major cloud outages proving that adding redundancy across multiple cloud providers can be a sound enterprise strategy. The most pertinent question arising from this trend, however, is how to manage it all. Manual configuration of multiple cloud environments is naturally a time-consuming effort. To speed deployment, the concept of infrastructure-as-code (alternatively “programmable infrastructure”) was developed, evolving the nature of cloud computing once again. Author Chris Riley describes the concept:

“…instead of manually configuring infrastructure you can write scripts to do it.  But not just scripts, you can actually fully incorporate the configuration in your application’s code.”11

Commercial vendors like Puppet Labs, Chef, and Ansible have built technology on this premise, allowing for automated deployment across multiple cloud providers. For security, the challenge of fragmentation is similar, but so are the solutions. Data and applications need to be protected from external and internal threats, even misconfiguration. AWS, Azure, and Google all have well-documented divisions in the shared security responsibility between themselves and the customer.

That brings us to today, where deployment automation tools are leading the way in bringing consistent management to IT and DevOps teams. Security technology is developing in-step, adapting to infrastructure-as-code by becoming a part of automated deployment process as code itself. We invite you to learn more about how you can automate security in a multi-cloud environment by exploring the scenarios on this page.

If you’re thinking about the next stage in cloud computing’s evolution, hit us up on Twitter @Mcafee_Business and let us know what’s on your mind.


McAfee does not control or audit third-party benchmark data or the websites referenced in this document. You should visit the referenced website and confirm whether referenced data is accurate.

The post A Brief History of Cloud Computing and Security appeared first on McAfee Blogs.

AWS Cloud: Proactive Security and Forensic Readiness – part 2

Part 2: Infrastructure-level protection in AWS 

This is the second in a five-part blog series that provides a checklist for proactive security and forensic readiness in the AWS cloud environment. This post relates to protecting your virtual infrastructure within AWS.

Protecting any computing infrastructure requires a layered or defence-in-depth approach. The layers are typically divided into physical, network (perimeter and internal), system (or host), application, and data. In an Infrastructure as a Service (IaaS) environment, AWS is responsible for security ‘of’ the cloud including the physical perimeter, hardware, compute, storage and networking, while customers are responsible for security ‘in’ the cloud, or on layers above the hypervisor. This includes the operating system, perimeter and internal network, application and data.

AWS Defense in Depth

Figure 1: AWS Defense in Depth

Infrastructure protection requires defining trust boundaries (e.g., network boundaries and packet filtering), system security configuration and maintenance (e.g., hardening and patching), operating system authentication and authorisations (e.g., users, keys, and access levels), and other appropriate policy enforcement points (e.g., web application firewalls and/or API gateways).

The key AWS service that supports service-level protection is AWS Identity and Access Management (IAM) while Virtual Private Cloud (VPC) is the fundamental service that contributes to securing infrastructure hosted on AWS. VPC is the virtual equivalent of a traditional network operating in a data centre, albeit with the scalability benefits of the AWS infrastructure. In addition, there are several other services or features provided by AWS that can be leveraged for infrastructure protection.

The following list mainly focuses on network and host-level boundary protection, protecting integrity of the operating system on EC2 instances and Amazon Machine Images (AMIs) and security of containers on AWS.

The checklist provides best practice for the following:

  1. How are you enforcing network and host-level boundary protection?
  2. How are you protecting against distributed denial of service (DDoS) attacks at network and application level?
  3. How are you managing the threat of malware?
  4. How are you identifying vulnerabilities or misconfigurations in the operating system of your Amazon EC2 instances?
  5. How are you protecting the integrity of the operating system on your Amazon EC2 instances?
  6. How are you ensuring security of containers on AWS?
  7. How are you ensuring only trusted Amazon Machine Images (AMIs) are launched?
  8. How are you creating secure custom (private or public) AMIs?

IMPORTANT NOTE: Identity and access management is an integral part of securing an infrastructure, however, you’ll notice that the following checklist does not focus on the AWS IAM service. I have covered this in a separate checklist on IAM best practices here.

Best-practice checklist

1. How are you enforcing network and host-level boundary protection?
  • Establish appropriate network design for your workload to ensure only desired network paths and routing are allowed
  • For large-scale deployments, design network security in layers – external, DMZ, and internal
  • When designing NACL rules, remember that a NACL is a stateless firewall, so be sure to define both outbound and inbound rules
  • Create secure VPCs using network segmentation and security zoning
  • Carefully plan routing and server placement in public and private subnets.
  • Place instances (EC2 and RDS) within VPC subnets and restrict access using security groups and NACLs
  • Use non-overlapping IP addresses with other VPCs or data centre in use
  • Control network traffic by using security groups (stateful firewall, outside OS layer), NACLs (stateless firewall, at subnet level), bastion host, host based firewalls, etc.
  • Use Virtual Gateway (VGW) where Amazon VPC-based resources require remote network connectivity
  • Use IPSec or AWS Direct Connect for trusted connections to other sites
  • Use VPC Flow Logs for information about the IP traffic going to and from network interfaces in your VPC
  • Protect data in transit to ensure the confidentiality and integrity of data, as well as the identities of the communicating parties.
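Since the checklist notes that NACLs are stateless, a permitted TCP flow needs both an inbound entry and a matching outbound entry for return traffic. A hedged boto3-oriented sketch follows: the builder returns the two parameter sets for ec2.create_network_acl_entry (the caller still adds NetworkAclId), and the ephemeral port range is an assumption that depends on your client operating systems.

```python
def nacl_rule_pair(rule_number, cidr, port, allow=True):
    """Build the inbound/outbound kwargs pair a stateless NACL needs for
    one TCP service. Pass each dict, plus NetworkAclId, to
    ec2.create_network_acl_entry."""
    base = {
        "RuleNumber": rule_number,
        "Protocol": "6",  # TCP
        "RuleAction": "allow" if allow else "deny",
        "CidrBlock": cidr,
        "PortRange": {"From": port, "To": port},
    }
    inbound = dict(base, Egress=False)
    # Return traffic leaves on an ephemeral port, not the service port
    # (assumed range; tune for your clients):
    outbound = dict(base, Egress=True,
                    PortRange={"From": 1024, "To": 65535})
    return inbound, outbound

# Example: allow HTTPS from anywhere into a public subnet.
inbound, outbound = nacl_rule_pair(100, "0.0.0.0/0", 443)
```

Forgetting the outbound half is the classic stateless-firewall mistake: the SYN arrives, but the SYN-ACK is silently dropped.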
2. How are you protecting against distributed denial of service (DDoS) attacks at network and application level?
  • Use firewalls including Security groups, network access control lists, and host based firewalls
  • Use rate limiting to protect scarce resources from overconsumption
  • Use Elastic Load Balancing and Auto Scaling to configure web servers to scale out when under attack (based on load), and shrink back when the attack stops
  • Use AWS Shield, a managed Distributed Denial of Service (DDoS) protection service, that safeguards web applications running on AWS
  • Use Amazon CloudFront to absorb DoS/DDoS flooding attacks
  • Use AWS WAF with AWS CloudFront to help protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources
  • Use Amazon CloudWatch to detect DDoS attacks against your application
  • Use VPC Flow Logs to gain visibility into traffic targeting your application.
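As a sketch of the VPC Flow Logs item above, here are the parameters ec2.create_flow_logs expects in order to capture all traffic for a VPC. The IDs and names are placeholders, and the API call itself (which needs credentials and an IAM role able to write to CloudWatch Logs) is omitted.

```python
def flow_logs_params(vpc_id, log_group, role_arn):
    """Parameters for ec2.create_flow_logs capturing ALL traffic on a VPC,
    giving visibility into flood traffic during a DDoS investigation."""
    return {
        "ResourceIds": [vpc_id],          # placeholder VPC id
        "ResourceType": "VPC",
        "TrafficType": "ALL",             # or ACCEPT / REJECT only
        "LogGroupName": log_group,        # CloudWatch Logs destination
        "DeliverLogsPermissionArn": role_arn,
    }

params = flow_logs_params(
    "vpc-0123456789abcdef0",
    "vpc-flow-logs",
    "arn:aws:iam::111122223333:role/flow-logs-role",  # placeholder ARN
)
```

Setting TrafficType to "REJECT" alone is a cheaper option when you only want denied-connection evidence.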
3. How are you managing the threat of malware?
  • Give users the minimum privileges they need to carry out their tasks
  • Patch external-facing and internal systems to the latest security level.
  • Use a reputable and up-to-date antivirus and antispam solution on your system.
  • Install a host-based IDS with file integrity checking and rootkit detection
  • Use IDS/IPS systems with statistical/behavioural or signature-based detection to identify and contain network attacks and Trojans.
  • Launch instances from trusted AMIs only
  • Only install and run trusted software from a trusted software provider (note: MD5 or SHA-1 hashes should not be trusted if the software is downloaded from an untrusted source on the internet)
  • Avoid SMTP open relay, which can be used to spread spam, and which might also represent a breach of the AWS Acceptable Use Policy.
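The note on checksums can be put into practice by verifying a vendor-published SHA-256 digest before installing downloaded software. A minimal sketch (the temporary file below simply stands in for a downloaded package):

```python
import hashlib
import os
import tempfile

def sha256_matches(path, expected_hex):
    """Compare a file's SHA-256 digest against a vendor-published value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large installers don't load fully into memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.lower()

# Demonstration on a throwaway file standing in for a downloaded package.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"release-1.0 payload")
    path = f.name
expected = hashlib.sha256(b"release-1.0 payload").hexdigest()
print(sha256_matches(path, expected))  # → True
os.remove(path)
```

The published digest must come from the vendor over a trusted channel (e.g. their HTTPS site or a signed release note), not from the same mirror as the download itself.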
4. How are you identifying vulnerabilities or misconfigurations in the operating system of your Amazon EC2 instances?
  • Define an approach for securing your system; consider the level of access needed and take a least-privilege approach
  • Open only the ports needed for communication, harden OS and disable permissive configurations
  • Remove or disable unnecessary user accounts.
  • Remove or disable all unnecessary functionality.
  • Change vendor-supplied defaults prior to deploying new applications.
  • Automate deployments and remove operator access to reduce attack surface area using tools such as EC2 Systems Manager Run Command
  • Ensure operating system and application configurations, such as firewall settings and anti-malware definitions, are correct and up-to-date; Use EC2 Systems Manager State Manager to define and maintain consistent operating system configurations
  • Ensure an inventory of instances and installed software is maintained; Use EC2 Systems Manager Inventory to collect and query configuration about your instances and installed software
  • Perform routine vulnerability assessments when updates or deployments are pushed; Use Amazon Inspector to identify vulnerabilities or deviations from best practices in your guest operating systems and applications
  • Leverage automated patching tools such as EC2 Systems Manager Patch Manager to help you deploy operating system and software patches automatically across large groups of instances
  • Use AWS CloudTrail, AWS Config, and AWS Config Rules as they provide audit and change tracking features for auditing AWS resource changes.
  • Use template definition and management tools, including AWS CloudFormation to create standard, preconfigured environments.
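The "open only the ports needed" point can be audited offline against exported security-group rules. The sketch below assumes rule dicts shaped like the `IpPermissions` entries returned by boto3's `describe_security_groups`; the sensitive-port list and sample rules are illustrative only:

```python
# Ports we never want exposed to the whole internet (illustrative list).
SENSITIVE_PORTS = {22, 3389, 3306, 5432}

def world_open_sensitive(ip_permissions):
    """Return sensitive ports reachable from 0.0.0.0/0 in a rule set."""
    findings = []
    for perm in ip_permissions:
        low = perm.get("FromPort", 0)
        high = perm.get("ToPort", 65535)
        open_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
        )
        if open_world:
            findings.extend(sorted(p for p in SENSITIVE_PORTS if low <= p <= high))
    return findings

rules = [
    {"FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]
print(world_open_sensitive(rules))  # → [22]
```

A check like this fits naturally into a scheduled compliance job alongside AWS Config rules.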
5. How are you protecting the integrity of the operating system on your Amazon EC2 instances?
  • Use file integrity controls for Amazon EC2 instances
  • Use host-based intrusion detection controls for Amazon EC2 instances
  • Use a custom Amazon Machine Image (AMI) or configuration management tools (such as Puppet or Chef) that provide secure settings by default.
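A file integrity control of the kind mentioned above can be approximated with a hash baseline. A deliberately minimal sketch (production tools such as AIDE or Tripwire do far more, including tamper-resistant storage of the baseline itself):

```python
import hashlib
import os
import tempfile

def baseline(root):
    """Map each file under `root` (by relative path) to its SHA-256 digest."""
    digests = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digests[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return digests

def changed_files(old, new):
    """Paths added, removed, or modified between two baselines."""
    modified = {p for p in set(old) & set(new) if old[p] != new[p]}
    return sorted((set(old) ^ set(new)) | modified)

# Demonstration: baseline a directory, tamper with a file, detect the change.
root = tempfile.mkdtemp()
cfg = os.path.join(root, "sshd_config")
with open(cfg, "w") as f:
    f.write("PermitRootLogin no\n")
before = baseline(root)
with open(cfg, "a") as f:
    f.write("PermitRootLogin yes\n")   # simulated tampering
print(changed_files(before, baseline(root)))  # → ['sshd_config']
```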
6. How are you ensuring security of containers on AWS?
  • Run containers on top of virtual machines
  • Run small images, remove unnecessary binaries
  • Use many small instances to reduce attack surface
  • Segregate containers based on criteria such as role or customer and risk
  • Set containers to run as non-root user
  • Set filesystems to be read-only
  • Limit container networking; Use AWS ECS to manage containers and define communication between containers
  • Leverage Linux kernel security features using tools like SELinux, Seccomp, AppArmor
  • Perform vulnerability scans of container images
  • Allow only approved images during build
  • Use tools such as Docker Bench to automate security checks
  • Avoid embedding secrets in images or environment variables; use S3-based secrets storage instead.
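Several of the container checks above (non-root user, read-only root filesystem, no privileged mode) can be enforced as a pre-deployment policy gate. The sketch below assumes a dict shaped like an ECS container definition (`user`, `privileged`, `readonlyRootFilesystem`) and is illustrative rather than a complete policy engine:

```python
def container_findings(container_def):
    """Flag risky settings in an ECS-style container definition dict."""
    findings = []
    user = container_def.get("user")
    if not user or user in ("root", "0"):
        findings.append("runs as root")
    if container_def.get("privileged"):
        findings.append("privileged mode enabled")
    if not container_def.get("readonlyRootFilesystem"):
        findings.append("writable root filesystem")
    return findings

print(container_findings({"user": "root", "privileged": True}))
# → ['runs as root', 'privileged mode enabled', 'writable root filesystem']
```

Wiring a check like this into the CI pipeline means a task definition that violates the rules never reaches the cluster.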
7. How are you ensuring only trusted Amazon Machine Images (AMIs) are launched?
  • Treat shared AMIs as any foreign code that you might consider deploying in your own data centre and perform the appropriate due diligence
  • Look for the description of a shared AMI, and its AMI ID, in the Amazon EC2 forum
  • Check the aliased owner in the account field to identify public AMIs from Amazon.
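Owner checks can also be automated: filter candidate AMIs against an allowlist before anything is launched. The sketch assumes records shaped like the `Images` entries returned by `describe_images` (`ImageId`, `OwnerId`, and `ImageOwnerAlias` for Amazon-published AMIs); the account IDs are made up:

```python
# Allowlist: the "amazon" alias plus a hypothetical in-house account ID.
TRUSTED_OWNERS = {"amazon", "123456789012"}

def untrusted_amis(images):
    """Return image IDs whose owner alias (or raw ID) is not allowlisted."""
    return [
        img["ImageId"]
        for img in images
        if img.get("ImageOwnerAlias", img.get("OwnerId")) not in TRUSTED_OWNERS
    ]

images = [
    {"ImageId": "ami-1111", "OwnerId": "137112412989", "ImageOwnerAlias": "amazon"},
    {"ImageId": "ami-2222", "OwnerId": "999999999999"},
]
print(untrusted_amis(images))  # → ['ami-2222']
```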
8. How are you creating secure custom (private or public) AMIs?
  • Disable the root account's API access key and secret key
  • Configure public-key authentication for remote login
  • Restrict access to instances from limited IP ranges using Security Groups
  • Use bastion hosts to enforce control and visibility
  • Protect the .pem file on user machines
  • Delete keys from the authorized_keys file on your instances when someone leaves your organization or no longer requires access
  • Rotate credentials (DB, Access Keys)
  • Regularly run least privilege checks using IAM user Access Advisor and IAM user Last Used Access Keys
  • Ensure that software installed does not use default internal accounts and passwords.
  • Change vendor-supplied defaults before creating new AMIs
  • Disable services and protocols that authenticate users in clear text over the network, or otherwise insecurely.
  • Disable non-essential network services on startup. Only administrative services (SSH/RDP) and the services required for essential applications should be started.
  • Ensure all software is up to date with relevant security patches
  • For instantiated AMIs, update security controls by running custom bootstrapping Bash or Microsoft Windows PowerShell scripts, or use bootstrapping applications such as Puppet, Chef, Capistrano, Cloud-Init and Cfn-Init
  • Follow a formalised patch management procedure for AMIs
  • Ensure that the published AMI does not violate the Amazon Web Services Acceptable Use Policy. Examples of violations include open SMTP relays or proxy servers. For more information, see the Amazon Web Services Acceptable Use Policy.
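The authorized_keys cleanup mentioned above can be scripted as part of offboarding. A toy sketch that matches entries by their trailing comment (conventionally `user@host` in OpenSSH `authorized_keys` lines: `<type> <base64-key> <comment>`); the key material shown is a fabricated placeholder:

```python
def revoke_key(authorized_keys_text, comment):
    """Drop any authorized_keys entry whose trailing comment matches."""
    kept = []
    for line in authorized_keys_text.splitlines():
        parts = line.split()
        # Entries are "<type> <base64-key> <comment>"; match on the comment.
        if len(parts) >= 3 and parts[2] == comment:
            continue
        kept.append(line)
    return "\n".join(kept)

keys = "ssh-ed25519 AAAAC3Nza... alice@laptop\nssh-ed25519 AAAAC3Nzb... bob@laptop"
print(revoke_key(keys, "bob@laptop"))
# → ssh-ed25519 AAAAC3Nza... alice@laptop
```

In a real fleet you would push such a change through a configuration management tool (Puppet, Chef, or SSM Run Command) rather than editing files by hand.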

Security at the infrastructure level, or any level for that matter, certainly requires more than just a checklist. For comprehensive insight into infrastructure security within AWS, we suggest reading the following AWS whitepapers – AWS Security Pillar and AWS Security Best Practices.

Next up in the blog series is Part 3 – Data Protection in AWS – best practice checklist. Stay tuned.


Let us know in the comments below if we have missed anything in our checklist!

DISCLAIMER: Please be mindful that this is not an exhaustive list. Given the pace of innovation and development within AWS, new features may have been rolled out as these blogs were being written 😉. Also, please note that this checklist is for guidance purposes only. For more information, or to request an in-depth security review of your cloud environment, please contact us.


Author: Neha Thethi

Editor: Gordon Smith


The post AWS Cloud: Proactive Security and Forensic Readiness – part 2 appeared first on BH Consulting.

Decyphering the Noise Around ‘Meltdown’ and ‘Spectre’

The McAfee Advanced Threat Research (ATR) Team has closely followed the attack techniques that have been named Meltdown and Spectre throughout the lead-up to their announcement on January 3. In this post, McAfee ATR offers a simple and concise overview of these issues, to separate fact from fiction, and to provide insight into McAfee’s capabilities and approach to detection and prevention.

There has been considerable speculation in the press and on social media about the impact of these two new techniques, including which processors and operating systems are affected. The speculation has been based upon published changes to the Linux kernel. McAfee ATR did not want to add to any confusion until we could provide our customers and the general public with a solid technical analysis.

A fully comprehensive writeup comes from Google Project Zero in this informative technical blog, which allowed ATR to validate our conclusions. For more on McAfee product compatibility, see this business Knowledge Center article and this Consumer Support article.

The Techniques

Meltdown and Spectre are new techniques that build upon previous work, such as the “KASLR” paper and other research into practical side-channel attacks. The current disclosures build upon such side-channel attacks through the innovative use of speculative execution.

Speculative execution has been a feature of processors for at least a decade. Branch speculation builds on out-of-order execution machinery such as the Tomasulo algorithm. In essence, when a branch in execution depends upon a runtime condition, modern processors make a “guess” to potentially save time. This speculatively executed branch proceeds by employing a guess of the value of the condition upon which the branch must depend. That guess is typically based upon the last step of the same branch’s previous execution. The conditional value is cached for reuse in case that particular branch is taken again. There is no loss of computing time if the condition arrives at a new value, because the processor must in any event wait for the value’s computation. Invalid speculative executions are thrown away. The fact that invalid speculations are tossed is a key attribute exploited by Meltdown and Spectre.

Despite the clearing of invalid speculative execution results without affecting memory or CPU registers, data from the execution may be retained in the processor caches. The retaining of invalid execution data is one of the properties of modern CPUs upon which Meltdown and Spectre depend. More information about the techniques is available on the researchers’ disclosure site.
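Since no real speculation or cache timing is observable from a high-level language, the flush-and-probe idea can only be illustrated with a deliberately artificial toy model; nothing below exploits anything, it merely mirrors the logic just described:

```python
# Toy model of the cache side channel behind Meltdown/Spectre: a
# "speculative" access leaves a footprint in a simulated cache even
# though its architectural result is discarded.
SECRET = 42           # stands in for a byte the attacker must not read
cache = set()         # which probe-array slots are "cached"

def speculative_victim():
    # Architecturally the access is rolled back, but the cache line
    # touched during speculation stays resident.
    cache.add(SECRET)
    return None       # the invalid speculation's result is thrown away

def attacker_probe():
    # On real hardware the attacker times loads on each slot; here,
    # cache membership stands in for "the load was fast".
    return [slot for slot in range(256) if slot in cache]

speculative_victim()
print(attacker_probe())  # → [42]
```

The real attacks recover the secret one byte at a time this way, using high-resolution timers instead of a set-membership test.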

Because these techniques can be applied (with variation) to most modern operating systems (Windows, Linux, Android, iOS, MacOS, FreeBSD, etc.), you may ask, “How dangerous are these?” “What steps should an organization take?” and “How about individuals?” The following risk analysis is based upon what McAfee currently understands about Meltdown and Spectre.

There is already considerable activity in the security research community around these techniques. Sample code for two of the three variants was posted by Graz University of Technology researchers (in an appendix of the Spectre paper). Erik Bosman has also tweeted that he has built an exploit, though this code is not yet public. An earlier example of side-channel exploitation based upon memory caches was posted to GitHub in 2016 by Daniel Gruss, one of the Meltdown and Spectre researchers. Despite these details, as of this writing no known exploits have yet been seen in the wild. McAfee ATR will continue to monitor researchers’ and attackers’ interest in these techniques and provide updates accordingly. Given the attack surface of nearly every modern computing system and the relative ease of exploitation, it is highly likely that at least one of the aforementioned variants will be weaponized very quickly.

McAfee researchers quickly compiled the public exploit code for Spectre and confirmed its efficacy across a number of operating systems, including Windows, Linux, and MacOS.


To assess the potential impact of any vulnerability or attack technique, we must first consider its value to attackers. These exploits are uniquely attractive to malicious groups or persons because the attack surface is nearly unprecedented, the attack vector is relatively new, and the impacts (privilege escalation and leaks of highly sensitive memory) are detrimental. The only naturally mitigating factor is that these exploits require local code execution. A number of third parties have already identified JavaScript as an applicable delivery point, meaning both attacks could theoretically be run from inside a browser, effectively opening an avenue of remote delivery. As always, JavaScript is a double-edged sword, offering a more user-friendly browsing experience, but also offering attackers an increased attack surface in the context of the browser’s executing scripted code.

Any technique that allows an attacker to cross virtual machine boundaries is of particular interest, because such a technique might allow an adversary to use a cloud virtual machine instance to attack other tenants of the cloud. Spectre is designed to foster attacks across application boundaries and hence applies directly to this problem. Thus, major cloud vendors have rushed to issue patches and software updates in advance of the public disclosure of these issues.

Additionally, both Meltdown and Spectre are exceptionally hard to detect as they do not leave forensic traces or halt program execution. This makes post-infection investigations and attack attribution much more complex.


Because we believe that Meltdown and Spectre may offer real-world adversaries significant value, we must consider how they can be used. There is no remote vector to these techniques; an attacker must first deliver code to the victim. To protect against malicious JavaScript, we always urge caution when browsing the Internet. Allow scripting languages to execute only from trusted sites. McAfee Windows Security Suite or McAfee Endpoint Security (ENS) can provide warnings if you visit a known dangerous site. These McAfee products can also provide an alternate script-execution engine that prevents known malicious scripts from executing.  As operating systems are changed to mitigate Meltdown and Spectre, organizations and individuals should apply those updates as soon as possible.

Even though we have not seen any malware currently exploiting these techniques, McAfee is currently evaluating opportunities to provide detection within the scope of our products; we expect most solutions to lie within processor and operating system updates. Based on published proofs of concept, we have provided some limited detection under the names OSX/Spectre, Linux/Spectre, and Trojan-Spectre.

Microsoft has released an out-of-cycle patch because of this disclosure. Due to the nature of any patch or update, we suggest first applying manual updates on noncritical systems, to ensure compatibility with software that involves the potential use of low-level operating system features. McAfee teams are working to ensure compatibility with released patches where applicable.

While the world wonders about the potential impact of today’s critical disclosures, we also see a positive message. This was another major security flaw discovered and communicated by the information security community, as opposed to the discovery or leak of “in the wild” attacks. Will this disclosure have negative aspects? Most likely yes, but the overall effect is more global attention to software and hardware security, and a head start for the good guys on developing more robust systems and architectures for secure computing.

The post Decyphering the Noise Around ‘Meltdown’ and ‘Spectre’ appeared first on McAfee Blogs.

A Leader-Class SOC: The Sky’s the Limit

This has been quite a year for McAfee, as we not only roll out our vision, but also start to fulfill that vision.

We’ve established our world view: endpoint and cloud as the critical control points for cybersecurity and the Security Operations Center (SOC) as the central analytics hub and situation room. While we’ve talked a lot about endpoint and cloud over the past year, we’ve only recently started exposing our thinking and our innovation in the SOC, and I would like to delve a bit deeper.

SOCs provide dedicated resources for incident detection, investigation, and response. For much of the past decade, the SOC has revolved around a single tool, the Security Information and Event Management (SIEM) system. The SIEM was used to collect and retain log data, to correlate events and generate alerts, to monitor, to report, to investigate, and to respond. In many ways, the SIEM has been the SOC.

However, in the past couple of years, we’ve seen extensive innovation in the security operations center. This innovation is being fueled by an industry-wide acceptance of the increased importance of security operations, powerful technical innovations (analytics, machine learning), and the ever-evolving security landscape. The old ways of doing things are no longer sufficient to handle increasingly sophisticated attacks. We need to do something different.

McAfee believes this next generation SOC will be modular, open, and content-driven.

And automated. Integration of data, analytics, and machine learning are the foundations of the advanced SOC.

The reason for this is simple: increased volume. In the last two years, companies polled in a McAfee survey said the amount of data they collect to support cybersecurity activities has increased substantially (28%) or somewhat (49%). There are important clues in all that data, but the new and different attacks get lost in the noise. Individual alerts are not especially meaningful – patterns, context, and correlations are required to determine potential importance, and these constructs require analytics – at high speed and sophistication, with a model for perpetually remaining up-to-date as threat actors and patterns change. We need the machines to do more of the work, freeing the humans to understand business-specific patterns, design efficient processes, and manage the policies that protect each organization’s risk posture.

SIEM remains a crucial part of the SOC. The use cases for SIEM are extensive and fundamental to SOC success: data ingestion, parsing, threat monitoring, threat analysis, and incident response. The McAfee SIEM is especially effective at high performance correlations and real-time monitoring that are now mainstream for security operations. We are pleased to announce that McAfee has been recognized for the seventh consecutive time as a leader in the Gartner Magic Quadrant for Security Information and Event Management.* And we’re not stopping there — we’re continuing to evolve our SIEM with a high volume, open data pipeline that enables companies to collect more data without breaking the bank.

An advanced SOC builds on a SIEM to further optimize analytics, integrating data, and process elements of infrastructure to facilitate identification, interpretation, and automation. A modular and open architecture helps SOC teams add in the advanced analytics and inspection elements that take SOCs efficiently from initial alert triage through to scoping and active response.

Over the past year, we’ve worked extensively with more than eight UEBA vendors to drive integration with our SIEM. At our recent customer conference in Las Vegas, MPOWER, we announced our partnership with Interset to deliver McAfee Behavioral Analytics. Look for more information about that in the new year. I also want to reinforce our commitment to being open and working with the broader ecosystem in this space, even as we bring an offer to market. No one has a monopoly on good ideas and good math – we’ve got to work together. Together is Power.

We also launched McAfee Investigator at MPOWER, a net new offering that takes alerts from a SIEM and uses data from endpoints and other sources to discover key insights for SOC analysts at machine speed. Leveraging machine learning and artificial intelligence, McAfee Investigator helps analysts get to high quality and accurate answers, fast.

The initial response is great: we’ve seen early adopter customers experience a 5-16x increase in analyst investigation efficiency. Investigations that took hours are taking minutes. Investigations that took days are taking hours. Customers are excited and so are we!

In short – we have a lot cooking in the SOC and we are just getting started.

Look for continued fulfillment of McAfee’s vision in 2018. The sky’s the limit.




*Gartner Magic Quadrant for Security Information and Event Management, Kelly M. Kavanagh, Toby Bussa, 4 December 2017. From 2015-16, McAfee was listed as Intel Security; in 2011, it was listed as NitroSecurity, which McAfee acquired that year.
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

The post A Leader-Class SOC: The Sky’s the Limit appeared first on McAfee Blogs.

Cloud Risk in a Rush to Adopt – New Research from the SANS Institute

This post was written by Eric Boerger.

Twenty-one percent of organizations don’t know whether they have been breached in the cloud.

That uncertainty, lack of control, and limited visibility is a startling indication of the state of cloud use today: The speed of adoption has invited risk that was not foreseen. Understanding that risk is key to gaining control over security in the cloud.

Many more industry insights are revealed in Cloud Security: Defense in Detail if Not in Depth: A SANS Survey, completed in November and sponsored by McAfee. The survey delves especially into infrastructure-as-a-service from providers like Amazon Web Services (AWS) and Microsoft Azure, which is driving digital business transformation toward the most agile models to date.

The findings, some captured in the chart below, include the benchmark that 40% of organizations are storing customer personally identifiable information (PII) in the cloud – and that 15% of those had experienced a misconfiguration due to quickly spun-up components.

The inevitable goal of cloud adoption is, of course, quite laudable: to realize agility and cost benefits across the organization. The problem is that many IT departments and developers have rushed in, adjusting their delivery models from dedicated hardware in data centers to cloud instances, containers, and now even serverless infrastructure.

Where was security in that fast adoption? Unfortunately, often left behind. Existing endpoint or data center security tools often can’t simply be transferred to the cloud. They need to be rebuilt to run “cloud-native,” designed specifically for the unique properties of public cloud service provider environments. Added to that adjustment is often the dual responsibility of maintaining the public cloud and a virtual private cloud environment in your data center – two environments to manage.

This requires a cloud strategy across these environments: seek policy unification, not tool unification. Cloud security requires change. But there is no point in burdening the agility of the cloud with disconnected management. Your organization should have one view to your infrastructure with one set of policies that everyone understands.

McAfee teamed up with the SANS Institute on an analysis of this survey’s findings. In the accompanying presentation, we dive deeper into these points, providing key perspectives on the cloud industry at this crucial time.

Download and read the full report: Cloud Security: Defense in Detail if Not in Depth: A SANS Survey.

The post Cloud Risk in a Rush to Adopt – New Research from the SANS Institute appeared first on McAfee Blogs.