Category Archives: cloud

Survey: 84% of Security Pros Said Their Organizations Struggled to Maintain Security Configurations in the Cloud

Headlines continue to suggest that organizations’ cloud environments make for tantalizing targets for digital attackers. Illustrating this point, the 2019 SANS State of Cloud Security survey found “a significant increase in unauthorized access by outsiders into cloud environments or to cloud assets” between 2017 (12 percent) and 2018 (19 percent). These findings beg the question: […]… Read More

The post Survey: 84% of Security Pros Said Their Organizations Struggled to Maintain Security Configurations in the Cloud appeared first on The State of Security.

Forensics in the Cloud: What You Need to Know

Cloud computing has transformed the IT industry, as services can now be deployed in a fraction of the time that it used to take. Scalable computing solutions have spawned large cloud computing companies such as Amazon Web Services (AWS), Google Cloud and Microsoft Azure. With a click of a button, personnel can create or reset […]… Read More

The post Forensics in the Cloud: What You Need to Know appeared first on The State of Security.

Test Your Knowledge on How Businesses Use and Secure the Cloud

Security used to be an inhibitor to cloud adoption, but now the tables have turned, and for the first time we are seeing security professionals embrace the cloud as a more secure environment for their business. Not only are they finding it more secure, but the benefits of cloud adoption are being accelerated in-step with better security.

Do you know what’s shaping our new world of secure cloud adoption? Do you know what the best practices are for you to accelerate your own business with the cloud? Test your knowledge in this quiz.


Not prepared? Lucky for you this is an “open-book” test. Find some cheat sheets and study guides below.

Report: Cloud Adoption and Risk Report: Business Growth Edition

Blog: Top Findings from the Cloud Adoption and Risk Report: Business Growth Edition

Blog: Why Security Teams Have Come to Embrace the Cloud

MVISION Cloud Data Sheet

MVISION Cloud

The post Test Your Knowledge on How Businesses Use and Secure the Cloud appeared first on McAfee Blogs.

Happy Birthday TaoSecurity.com


Nineteen years ago this week I registered the domain taosecurity.com:

Creation Date: 2000-07-04T02:20:16Z

This was 2 1/2 years before I started blogging, so I don't have much information from that era. I did create the first taosecurity.com Web site shortly thereafter.

I first started hosting it on space provided by my then-ISP, Road Runner of San Antonio, TX. According to archive.org, it looked like this in February 2002.


That is some fine-looking vintage hand-crafted HTML. Because I lived in Texas I apparently reached for the desert theme with the light tan background. Unfortunately I didn't have the "under construction" gif working for me.

As I got deeper into the security scene, I decided to simplify and adopt a dark look. By this time I had left Texas and was in the DC area, working for Foundstone. According to archive.org, the site looked like this in April 2003.


Notice I've replaced the oh-so-cool picture of me doing American Kenpo in the upper-left-hand corner with the classic Bruce Lee photo from the cover of The Tao of Jeet Kune Do. This version marks the first appearance of my classic TaoSecurity logo.

A little more than two years later, I decided to pursue TaoSecurity as an independent consultant. To launch my services, I painstakingly created more hand-written HTML and graphics to deliver this beauty. According to archive.org, the site looked like this in May 2005.


I mean, can you even believe how gorgeous that site is? Look at the subdued gray TaoSecurity logo, the red-highlighted menu boxes, etc. I should have kept that site forever.

We know that's not what happened, because that wonder of a Web site only lasted about a year. Still to this day not really understanding how to use CSS, I used a free online template by Andreas Viklund to create a new site. According to archive.org, the site appeared in this form in July 2006.


After four versions in four years, my primary Web site stayed that way... for thirteen years. Oh, I modified the content, SSH'ing into the server hosted by my friend Phil Hagen, manually editing the HTML using vi (and careful not to touch the CSS).

Then, I attended AWS re:Inforce the last week of June 2019. I realized that although I had tinkered with Amazon Web Services as early as 2010, and had been keeping an eye on it as early as 2008, I had never hosted any meaningful workloads there. A migration of my primary Web site to AWS seemed like a good way to learn a bit more about AWS and an excuse to replace my teenage Web layout with something that rendered a bit better on a mobile device.

After working with Mobirise, AWS S3, AWS CloudFront, AWS Certificate Manager, AWS Route 53, my previous domain name servers, and my domain registrar, I'm happy to say I have a new TaoSecurity.com Web site. The front page looks like this:


The background is an image of Milnet from the late 1990s. I apologize for the giant logo in the upper left. It should be replaced by a resized version later today when the AWS Cloudfront cache expires.
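For the curious, the logo swap boils down to two API calls. Here is a hedged sketch in Python with boto3; the bucket name, distribution ID, and object key are placeholders, not the site's real values.

```python
# Upload the resized logo over the old object and invalidate the CloudFront
# cache so the new image appears before the default TTL expires.
import time

import boto3

BUCKET = "example-taosecurity-site"   # placeholder bucket name
DISTRIBUTION_ID = "E2EXAMPLE12345"    # placeholder CloudFront distribution ID

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

# Overwrite the existing object key that the site references.
s3.upload_file("logo-small.png", BUCKET, "images/logo.png")

# Ask CloudFront to drop its cached copy instead of waiting for expiry.
cloudfront.create_invalidation(
    DistributionId=DISTRIBUTION_ID,
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/images/logo.png"]},
        "CallerReference": str(time.time()),  # any unique string
    },
)
```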

Scrolling down provides information on my books, which I figured is what most people who visit the site care about.


For reference, I moved the content (which I haven't updated) about news, press, and research to individual TaoSecurity Blog posts.

It's possible you will not see the new site if your DNS servers have the old IP addresses cached. That should all expire no later than tomorrow afternoon, I imagine.
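If you want to see whether your resolver has picked up the new records yet, a minimal check using only the Python standard library looks like this (no assumptions beyond having Python 3 available):

```python
# Ask the locally configured resolver for the site's current addresses.
# If the old IPs come back, your DNS cache simply hasn't expired yet.
import socket

for host in ("taosecurity.com", "www.taosecurity.com"):
    hostname, aliases, addresses = socket.gethostbyname_ex(host)
    print(hostname, addresses)
```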

Let's see if the new site lasts another thirteen years?

Mitigating Risks in Cloud Migration

Companies are moving to incorporate the cloud into their computing infrastructure at a phenomenal rate. This is, without question, a very positive move. It permits companies to scale processing resources up and down in response to changing demands, giving companies the operational equivalent of unlimited resources while paying only for the resources that are actually […]… Read More

The post Mitigating Risks in Cloud Migration appeared first on The State of Security.

As organizations continue to adopt multicloud strategies, security remains an issue

97 percent of organizations are adopting multicloud strategies for mission-critical applications and nearly two-thirds are using multiple vendors for mission-critical workloads, a Virtustream survey reveals. The study, conducted by Forrester Consulting, is based on a global survey of more than 700 cloud technology decision makers at businesses with more than 500 employees. The study examines the current state of enterprise IT strategies for cloud-based workloads and details the increasing interest and needs of IT decision … More

The post As organizations continue to adopt multicloud strategies, security remains an issue appeared first on Help Net Security.

Security and compliance obstacles among the top challenges for cloud native adoption

Cloud native adoption has become an important trend among organizations as they move to embrace and employ a combination of cloud, containers, orchestration, and microservices to keep up with customers’ expectations and needs. To discover more about the motivations and challenges of companies adopting cloud native infrastructure, the O’Reilly “How Companies Adopt and Apply Cloud Native Infrastructure” report surveyed 590 practitioners, managers and CxOs from across the globe, and found that while nearly 70 percent … More

The post Security and compliance obstacles among the top challenges for cloud native adoption appeared first on Help Net Security.

The Next Enterprise Challenge: How Best to Secure Containers and Monolithic Apps Together, Company-wide

Submitted by: Adam Boyle, Head of Product Management, Hybrid Cloud Security, Trend Micro

When it comes to software container security, it's important for enterprises to look at the big picture, taking into account how they see containers affecting their larger security requirements and future DevOps needs. Good practices can help security teams build a strategy that allows them to mitigate pipeline and runtime data breaches and threats without impacting the agility and speed of application DevOps teams.

Security and IT professionals need to address security gaps across agile, fast-paced DevOps teams, but they are challenged by decentralized organizational structures and processes. And since workloads and environments are constantly changing, there's no silver bullet when it comes to cybersecurity; there's only the information we have right now. To help address the current security landscape, and where containers fit in, we need to ask ourselves a few key questions.

How have environments for workloads changed, and what are development teams focused on today? (That is, the move from VMs to cloud to serverless, and DevOps and microservices teams measured on delivery and uptime.)

Many years ago, our customer conversations were primarily about migrating traditional, legacy workloads from the data center to the cloud. While performing this "forklift," customers had to figure out which IT tools, including security, would operate naturally in the cloud. Many traditional tools purchased before the migration didn't work well when extended to the cloud, because they weren't designed with the cloud in mind.

In the last few years, those same customers who migrated workloads to the cloud started new projects and applications using cloud native services, building these new capabilities on Docker and on serverless technologies such as AWS Lambda, Azure Functions, and Google Cloud Functions. These technologies have enabled teams to adopt DevOps practices in which they continuously deliver "parts" of applications independently of one another, ultimately delivering outcomes to market much faster than they could with a monolithic application. These new projects have given birth to CI/CD pipelines leveraging Git for source code management (using hosted versions from either GitHub or Bitbucket), Jenkins or Bamboo for DevOps automation, and Kubernetes for automated deployment, scaling, and management of containers.

Both of these thrusts are now happening in parallel, driving two distinct classes of applications: legacy, monolithic applications and cloud native microservices. The questions for an enterprise are simple: how do I protect all of this, and how do I do it at scale?

Also worth mentioning is the maturity of IT and how these teams have evolved to leverage "infrastructure as code," that is, writing code to automate IT operations. This includes security as code: writing code to automate security. Cloud operations teams have embraced automation and have partnered with application teams to help scale the automation of DevOps-driven applications while meeting IT requirements. Technologies like Chef, Puppet, Ansible, Terraform, and SaltStack are popular in our customer base for automating IT operations.
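To make "security as code" concrete, here is a minimal, illustrative sketch, assuming AWS and boto3; it flags security groups that expose SSH to the entire internet. It is not any particular customer's tooling, just an example of a security check expressed as code.

```python
# Audit EC2 security groups for inbound rules that open port 22 to 0.0.0.0/0.
import boto3


def find_open_ssh(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    open_groups = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            from_port = perm.get("FromPort")
            to_port = perm.get("ToPort")
            if from_port is None or to_port is None:
                continue  # skip "all traffic" rules in this simple sketch
            if from_port <= 22 <= to_port:
                for ip_range in perm.get("IpRanges", []):
                    if ip_range.get("CidrIp") == "0.0.0.0/0":
                        open_groups.append(sg["GroupId"])
    return open_groups


if __name__ == "__main__":
    for group_id in find_open_ssh():
        print(f"Security group {group_id} allows SSH from anywhere")
```

A check like this can run on a schedule or as a pipeline step, which is exactly the kind of automation cloud operations and application teams are building together.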

While vulnerabilities and threats will always persist, what is the bigger impact on the organization when it comes to DevOps teams and security?

What we hear from companies is that the enterprise is not designed to do security at scale for a large set of DevOps teams who continuously build->ship->run and need continuous, uninterrupted protection.

A typical enterprise has centralized IT and security Ops teams serving many groups of internal customers, typically the business units responsible for generating the enterprise's revenue.

So, how do tens or hundreds of DevOps teams who continuously build->ship->run interact with centralized IT and security Ops teams at scale? How do IT and security Ops teams embrace these practices and technologies and ensure that both the CI/CD pipelines and the runtime environments are secure?

These relationships between IT teams (including security teams) and the business units have largely existed at an executive level (VP and up), but delivering "secure" outcomes continuously requires a more effective, more automated interplay between these teams.

We see many DevOps teams across business units incorporating security with varying degrees of rigor, or buying their own security solutions that only work for their set of projects, purchased out of their business unit budgets and implemented with limited security experience and no tie-back to corporate security requirements or IT awareness. This leads to a fragmented, duplicated, inconsistent security posture across the enterprise and to higher-cost security tooling that is more complicated to manage and support. The pressure to deliver faster within a business unit sometimes comes at the cost of a coordinated, enterprise-wide security plan; we've all been there, and there's often a balance that needs to be found.

The working-level relationship between business unit application teams and centralized IT and security Ops teams is not always collaborative or healthy. Sometimes there is friction. Often the root cause is that the application teams understand DevOps practices and tools, and technologies such as Docker, Kubernetes, and serverless, far better than their IT counterparts. We've seen painful, unproductive discussions in which application teams try to educate their IT/security teams on the basics, let alone get them on board with doing things differently. The friction increases if the IT and security Ops teams don't change their approach to container and serverless security.

So, to us, the biggest impact right now is this: if a DevOps team wants to deliver continuously while following an enterprise-wide approach, it needs a continuous relationship with the IT and security operations teams, who must become well educated in DevOps practices and tools and in microservices technologies (Docker, Kubernetes, etc.), so the teams can work together to automate security across pipelines and runtime environments. The IT and security teams need to level up their skill sets in DevOps and the associated technologies, and help teams move faster, not slower, while meeting security requirements.

To be true DevOps, the "Dev" part would be the application team and the "Ops" part would ideally be IT/security, working together. So we think there could be some pretty big shifts in how enterprises organize their development teams and IT/security Ops teams, because the traditional organizational models favor delivery of monolithic, legacy applications rather than continuous delivery.

The biggest opportunity for IT/security Ops teams is to engage the application teams with a set of self-service tools and practices positioned to help those teams move faster while meeting the enterprise's IT and security requirements.

How can DevOps teams take advantage of the best security measures to better protect emerging technologies like container environments and their supporting tools?

Well, this could easily be a book! However, let's try to summarize at a high level and break this down into "build," "ship," and "run." By no means is this a complete list, but it's enough to get started. For more information, contact us.

Security teams have a fantastic opportunity to introduce the following services across the enterprise, for all teams with pipelines and runtimes, in a consistent way.

Build

  • Identification of all source code repositories and CI/CD pipelines across the enterprise, and their owners.
  • Static code analysis.
  • Image scanning for malware.
  • Image scanning for vulnerabilities.
  • Image scanning for configuration assessments (ensure images are hardened).
  • Indicator of Compromise (IoC) queries across all registries.
  • Secrets detection (see the sketch after this list).
  • Automated security testing in staged environments, with generic and custom test suites.
  • Image Assertion – declaring an image to be suitable for the next stage of the lifecycle based on the results of scans, tests, etc.
  • Provide reporting to both application teams and security teams on security scorecards.
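As one illustration, the secrets detection step referenced above could start as simply as the following Python sketch; the regex patterns and repository layout are assumptions, and a real pipeline would use a dedicated scanner and fail the build on findings.

```python
# Walk a checked-out repository and flag strings that look like credentials.
import os
import re

PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hard-coded secret": re.compile(
        r"(?i)(password|secret|token)\s*=\s*['\"][^'\"]{8,}['\"]"
    ),
}


def scan_repo(root="."):
    findings = []
    for dirpath, _, filenames in os.walk(root):
        if ".git" in dirpath.split(os.sep):
            continue  # skip version-control internals
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                text = open(path, errors="ignore").read()
            except OSError:
                continue
            for label, pattern in PATTERNS.items():
                if pattern.search(text):
                    findings.append((path, label))
    return findings


if __name__ == "__main__":
    for path, label in scan_repo():
        print(f"{path}: possible {label}")
```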

Ship

  • Admission control – the allowance or blocking of images to runtime environments based on security policies, image assertion, and/or signed images (a minimal sketch follows this list).
  • Vulnerability shielding of containers – Trend Micro will be releasing this capability later this year.
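The admission control item above can be as small as a validating webhook that rejects pods pulling images from outside an approved registry. The sketch below uses Flask and is purely illustrative; the registry name, TLS file paths, and the Kubernetes ValidatingWebhookConfiguration that would point at this service are all assumptions, not a description of any product.

```python
# Minimal admission webhook: allow a pod only if every container image
# comes from the approved registry.
from flask import Flask, jsonify, request

app = Flask(__name__)
APPROVED_REGISTRY = "registry.example.com/"  # assumed trusted registry


@app.route("/validate", methods=["POST"])
def validate():
    review = request.get_json()
    req = review["request"]
    containers = req["object"]["spec"].get("containers", [])
    untrusted = [
        c["image"] for c in containers
        if not c["image"].startswith(APPROVED_REGISTRY)
    ]
    return jsonify({
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": req["uid"],
            "allowed": not untrusted,
            "status": {
                "message": f"untrusted images: {untrusted}" if untrusted else "ok"
            },
        },
    })


if __name__ == "__main__":
    # Kubernetes requires webhooks to be served over TLS; cert paths are placeholders.
    app.run(port=8443, ssl_context=("tls.crt", "tls.key"))
```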

Run

  • Runtime protection of Docker and Kubernetes, including anomaly detection of abnormal changes or configurations.
  • Hardening of Kubernetes and Docker.
  • Using Kubernetes network policy capabilities for micro-segmentation, and not a third-party solution (see the sketch after this list). Then, ensure Kubernetes is itself protected.
  • Container host-based protection—covering malware, vulnerabilities, application control, integrity monitoring, and log inspection—for full stack defense of the applications and the host itself.
  • Kubernetes pod-based protection (privileged container – one per pod). This can be shipped into Kubernetes environments just like any other container, and no host-based agent is required.
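For the micro-segmentation item above, the policy itself can be managed as code. A hedged sketch using the official Kubernetes Python client is below; the namespace and pod labels are invented for illustration, and the same object could just as easily be applied as YAML.

```python
# Create a native NetworkPolicy so that only pods labeled app=api may reach
# pods labeled app=db in the "prod" namespace.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="db-allow-api-only", namespace="prod"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "db"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(match_labels={"app": "api"})
                    )
                ]
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="prod", body=policy)
```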

For serverless containers and serverless functions: application protection in every image or function (an AppSec library focusing on RASP, OWASP risks, malware, and vulnerabilities inside the application execution path). Trend Micro will be releasing an offering later this year to address this.

Trend Micro provides a strong, robust, full-lifecycle approach to container security. This approach helps application teams meet compliance and IT security requirements for continuous delivery in CI/CD pipelines and runtime environments. With multiple security capabilities, complete automation resources, and world-class threat intelligence research teams, Trend Micro is a leader in addressing the cybersecurity needs of today's application- and container-driven organizations.

Learn more at www.trendmicro.com/containers.

The post The Next Enterprise Challenge: How Best to Secure Containers and Monolithic Apps Together, Company-wide appeared first on .

Worldwide IT spending to grow just 1.1% in 2019

Worldwide IT spending is projected to total $3.79 trillion in 2019, an increase of 1.1 percent from 2018, according to the latest forecast by Gartner. “Currency headwinds fueled by the strengthening U.S. dollar have caused us to revise our 2019 IT spending forecast down from the previous quarter,” said John-David Lovelock, research vice president at Gartner. “Through the remainder of 2019, the U.S. dollar is expected to trend stronger, while enduring tremendous volatility due to … More

The post Worldwide IT spending to grow just 1.1% in 2019 appeared first on Help Net Security.

Thoughts on Cloud Security

Recently I've been reading about cloud security and security with respect to DevOps. I'll say more about the excellent book I'm reading, but I had a moment of déjà vu during one section.

The book described how cloud security is a big change from enterprise security because it relies less on IP-address-centric controls and more on users and groups. The book talked about creating security groups, and adding users to those groups in order to control their access and capabilities.
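Here is what that looks like against a cloud API, as a hedged sketch using boto3 and AWS IAM; the group, user, and policy names are invented for illustration.

```python
# Group-based access control: create a group, attach a policy to it, and
# add a user so the user inherits the group's permissions.
import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="analysts")
iam.attach_group_policy(
    GroupName="analysts",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)

iam.create_user(UserName="alice")
iam.add_user_to_group(GroupName="analysts", UserName="alice")
```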

As I read that passage, it reminded me of a time long ago, in the late 1990s, when I was studying for the MCSE, then called the Microsoft Certified Systems Engineer. I read Tom Sheldon's Windows NT Security Handbook, published in 1996. It described the exact same security process of creating security groups and adding users. This was core to the new NT 4 role-based access control (RBAC) implementation.

Now, fast forward a few years, or all the way to today, and consider the security challenges facing the majority of legacy enterprises: securing Windows assets and the data they store and access. How could this wonderful security model, based on decades of experience (from the 1960s and 1970s no less), have failed to work in operational environments?

There are many reasons one could cite, but I think the following are at least worthy of mention.

The systems enforcing the security model are exposed to intruders.

Furthermore:

Intruders are generally able to gain code execution on systems participating in the security model.

Finally:

Intruders have access to the network traffic which partially contains elements of the security model.

From these weaknesses, a large portion of the security countermeasures of the last two decades have been derived as compensating controls and visibility requirements.

The question then becomes:

Does this change with the cloud?

In brief, I believe the answer is largely "yes," thankfully. Generally, the systems upon which the security model is being enforced are not able to access the enforcement mechanism, thanks to the wonders of virtualization.

Should an intruder find a way to escape from their restricted cloud platform and gain hypervisor or management network access, then they find themselves in a situation similar to the average Windows domain network.

This realization puts a heavy burden on the cloud infrastructure operators. The major players are likely able to acquire and apply the expertise and resources to make their infrastructure far more resilient and survivable than their enterprise counterparts.

The weakness will likely be their personnel.

Once the compute and network components are sufficiently robust from externally sourced compromise, then internal threats become the next most cost-effective and return-producing vectors for dedicated intruders.

Is there anything users can do as they hand their compute and data assets to cloud operators?

I suggest four moves.

First, small- to mid-sized cloud infrastructure users will likely have to piggyback or free-ride on the initiatives and influence of the largest cloud customers, who have the clout and hopefully the expertise to hold the cloud operators responsible for the security of everyone's data.

Second, lawmakers may need to improve whistleblower protections for cloud employees who feel threatened by revealing material weaknesses they encounter while doing their jobs.

Third, government regulators will have to ensure no cloud provider assumes a monopoly, and no two providers assume a duopoly. We may end up with three major players and a smattering of smaller ones, as is the case with many mature industries.

Fourth, users should use every means at their disposal to select cloud operators not only on their compute features, but on their security and visibility features. The more logging and visibility exposed by the cloud provider, the better. I am excited by new features like the Azure network tap and hope to see equivalent features in other cloud infrastructure.

Remember that security has two main functions: planning/resistance, to try to stop bad things from happening, and detection/response, to handle the failures that inevitably happen. "Prevention eventually fails" is one of my long-time mantras. We don't want prevention to fail silently in the cloud. We need ways to know that failure is happening so that we can plan and implement new resistance mechanisms, and then validate their effectiveness via detection and response.

Update: I forgot to mention that the material above assumed that the cloud users and operators made no unintentional configuration mistakes. If users or operators introduce exposures or vulnerabilities, then those will be the weaknesses that intruders exploit. We've already seen a lot of this happening and it appears to be the most common problem. Procedures and tools which constantly assess cloud configurations for exposures and vulnerabilities due to misconfiguration or poor practices are a fifth move which all involved should make.
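As one hedged example of that fifth move, assuming AWS and boto3, a recurring job could flag S3 buckets whose ACLs grant access to everyone; real assessment tools cover far more services and settings than this sketch.

```python
# Flag S3 buckets whose ACLs include a grant to the AllUsers group.
import boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"


def publicly_readable_buckets():
    s3 = boto3.client("s3")
    public = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        if any(g.get("Grantee", {}).get("URI") == ALL_USERS for g in acl["Grants"]):
            public.append(bucket["Name"])
    return public


if __name__ == "__main__":
    for name in publicly_readable_buckets():
        print(f"Bucket {name} grants access to everyone")
```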

A corollary is that complexity can drive problems. When the cloud infrastructure offers too many knobs to turn, then it's likely the users and operators will believe they are taking one action when in reality they are implementing another.