Category Archives: cloud

Appliance upgrades and excessive network latency delaying Office 365 deployments

Gateway appliance upgrades and excessive network latency continue to delay Office 365 deployments, according to Zscaler. The survey showed that 41 percent of enterprises cited network congestion as a major factor impacting the user experience. To address network issues, almost half of the enterprises surveyed are exploring the use of direct internet connections, which can reduce congestion and eliminate the latency caused by backhauling traffic. “Modern cloud applications require modern cloud architectures. Many … More

The post Appliance upgrades and excessive network latency delaying Office 365 deployments appeared first on Help Net Security.

Adding to the Toolkit – Some Useful Tools for Cloud Security

With more business applications moving to the cloud, the ability to assess network behavior has changed from a primarily systems administration function to a daily security operations concern. And whilst sec-ops teams are already familiar with firewall and network device log tools, these can be of limited use in a “cloud first” business where much […]… Read More

The post Adding to the Toolkit – Some Useful Tools for Cloud Security appeared first on The State of Security.

12 Common Tools for Your DevOps Team

DevOps is revolutionizing the way enterprises deliver apps to the market by blending software development and information technology operations. This convergence creates an assembly line for the cloud, as Tim Erlin wrote for The State of Security, by increasing the rate at which companies can develop apps and deliver them to users. 12 Common Tools […]… Read More

The post 12 Common Tools for Your DevOps Team appeared first on The State of Security.

Cloud Services: Your Rocket Ship Control Board

The move to the cloud — in many ways — is a return to the early days of computing. When I took my first computer class in 1978, we used an IBM 360 system time share. We rented out time on a remote system — sent our jobs over a modem to a computer at […]… Read More

The post Cloud Services: Your Rocket Ship Control Board appeared first on The State of Security.

How to Secure Your Information on AWS: 10 Best Practices

The 2017 Deep Root Analytics incident that exposed the sensitive data of 198 million Americans, or almost all registered voters at the time, should remind us of the risks associated with storing information in the cloud. Perhaps the most alarming part is that this leak of 1.1 terabytes of personal data was avoidable. It was […]… Read More

The post How to Secure Your Information on AWS: 10 Best Practices appeared first on The State of Security.

The security challenges of managing complex cloud environments

Holistic cloud visibility and control over increasingly complex environments are essential for successful deployments in various cloud scenarios, a Cloud Security Alliance and AlgoSec study reveals. The survey of 700 IT and security professionals aims to analyze and better understand the state of adoption and security in current hybrid cloud and multi-cloud security environments, including public cloud, private cloud, or use of more than one public cloud platform. Key findings of the study include: Cloud … More

The post The security challenges of managing complex cloud environments appeared first on Help Net Security.

Organizations face operational deficiencies as they deal with hybrid IT complexities

While enterprises are taking advantage of cloud computing, all enterprises have ongoing data center dependencies, a Pulse Secure report reveals. One-fifth of respondents anticipate lowering their data center investment, while more than 40% indicated a material increase in private and public cloud investment. According to the “2019 State of Enterprise Secure Access” report, “the shift in how organizations deliver Hybrid IT services to enable digital transformation must also take into consideration empowering a mobile … More

The post Organizations face operational deficiencies as they deal with hybrid IT complexities appeared first on Help Net Security.

SD-WAN adoption growing as enterprises embrace app-centric architecture transition

The connected era and cloud-based environment have created a need to redesign network operations, according to ResearchAndMarkets. In addition, businesses find it operationally draining to utilize resources on ensuring a connected ecosystem rather than focusing on critical business issues. Software-defined Wide Area Network (SD-WAN) helps enterprises build an agile and automated environment, which is streamlined to support new-age cloud environments and traditional Multiprotocol Label Switching (MPLS) systems in a cost-efficient manner. To understand enterprise perceptions … More

The post SD-WAN adoption growing as enterprises embrace app-centric architecture transition appeared first on Help Net Security.

Letting Go While Holding On: Managing Cyber Risk in Cloud Environments

As recently as 2017, security and compliance professionals at many of Tripwire’s large enterprise and government customers were talking about migration to the cloud as a possibility to be considered and cautiously explored in the coming years. Within a year, the tone had changed. What used to be “we’re thinking about it” became “the CIO […]… Read More

The post Letting Go While Holding On: Managing Cyber Risk in Cloud Environments appeared first on The State of Security.

The Latest Techniques Hackers are Using to Compromise Office 365

It was only a few years back that cloud technology was in its infancy and used only by tech-savvy, forward-thinking organisations. Today, it is commonplace. More businesses than ever are making use of cloud services in one form or another. And recent statistics suggest that cloud adoption has reached 88 percent. It seems that businesses now […]… Read More

The post The Latest Techniques Hackers are Using to Compromise Office 365 appeared first on The State of Security.

With Great Freedom Comes Great Cloud Responsibility

Modern digital and cloud technology underpins the shift that enables businesses to implement new processes, scale quickly and serve customers in a whole new way. Historically, organisations would invest in their own IT infrastructure to support their business objectives, and the IT department’s role would be focused on keeping the “lights on.” To minimize the […]… Read More

The post With Great Freedom Comes Great Cloud Responsibility appeared first on The State of Security.

Mitigating Risks in Cloud Migration

Companies are moving to incorporate the cloud into their computing infrastructure at a phenomenal rate. This is, without question, a very positive move. It permits companies to scale processing resources up and down in response to changing demands, giving companies the operational equivalent of unlimited resources while paying only for the resources that are actually […]… Read More

The post Mitigating Risks in Cloud Migration appeared first on The State of Security.

The Next Enterprise Challenge: How Best to Secure Containers and Monolithic Apps Together, Company-wide

Submitted by: Adam Boyle, Head of Product Management, Hybrid Cloud Security, Trend Micro

When it comes to software container security, it’s important for enterprises to look at the big picture, taking into account how they see containers affecting their larger security requirements and future DevOps needs. Good practices can help security teams build a strategy that allows them to mitigate pipeline and runtime data breaches and threats without impacting the agility and speed of application DevOps teams.

Security and IT professionals need to address security gaps across agile, fast-paced DevOps teams but are challenged by decentralized organizational structures and processes. And since workloads and environments are constantly changing, there’s no silver bullet in cybersecurity; there’s only the information we have right now. To help address the current security landscape, and where containers fit in, we need to ask ourselves a few key questions.

How have environments for workloads changed and what are development teams focused on today? (i.e. VMs to cloud to serverless > DevOps, microservices, measured on delivery and uptime).

Many years ago, our customer conversations were primarily about migrating traditional, legacy workloads from the data center to the cloud. While performing this “forklift,” customers had to figure out which IT tools, including security, would operate naturally in the cloud. Many of the tools they had purchased before the migration didn’t work well when extended to the cloud, because they weren’t designed with the cloud in mind.

In the last few years, those same customers who migrated workloads to the cloud started new projects and applications using cloud-native services, building these new capabilities on Docker and on serverless technologies such as AWS Lambda, Azure Functions, and Google Cloud Functions. These technologies have enabled teams to adopt DevOps practices in which they continuously deliver “parts” of applications independently of one another, ultimately delivering outcomes much faster than they would with a monolithic application. The new projects have given birth to CI/CD pipelines leveraging Git for source code management (often using hosted versions from GitHub or Bitbucket), Jenkins or Bamboo for DevOps automation, and Kubernetes for automated deployment, scaling, and management of containers.

Both of these thrusts are now happening in parallel, driving two distinct classes of applications: legacy, monolithic applications and cloud-native microservices. The questions for an enterprise are simple: how do I protect all of this? And how can I do it at scale?

Also worth mentioning is the maturity of IT and how these teams have evolved to leverage “infrastructure as code,” that is, writing code to automate IT operations. This includes security as code: writing code to automate security. Cloud operations teams have embraced automation and have partnered with application teams to help scale the automation of DevOps-driven applications while meeting IT requirements. Technologies like Chef, Puppet, Ansible, Terraform, and SaltStack are popular in our customer base for automating IT operations.
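To make the "security as code" idea concrete, here is a minimal, hypothetical sketch of a policy check that could run in a CI pipeline alongside infrastructure-as-code tools. The rule set and resource model are invented for illustration; a real check would read state exported by Terraform or the cloud provider's APIs.

```python
# Hypothetical "security as code" check: flag firewall rules that expose
# sensitive ports to the whole internet. Rule shape is an assumption.
RISKY_PORTS = {22, 3389}  # SSH and RDP

def violations(firewall_rules):
    """Return the rules that open a risky port to 0.0.0.0/0."""
    return [
        rule
        for rule in firewall_rules
        if rule["source"] == "0.0.0.0/0" and rule["port"] in RISKY_PORTS
    ]

rules = [
    {"name": "web", "port": 443, "source": "0.0.0.0/0"},
    {"name": "ssh", "port": 22, "source": "0.0.0.0/0"},
]
print([r["name"] for r in violations(rules)])  # -> ['ssh']
```

Because the policy is ordinary code, it can be versioned, reviewed, and run on every change, which is the point of the practice described above.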

While vulnerabilities and threats will always persist, what is the bigger impact on the organization when it comes to DevOps teams and security?

What we hear from companies is that the enterprise is not designed to do security at scale for a large set of DevOps teams who are continuously doing build->ship->run and who need continuous, uninterrupted protection.

A typical enterprise has centralized IT and security operations teams serving many groups of internal customers, typically the business units responsible for generating the enterprise’s revenue.

So, how do tens or hundreds of DevOps teams who continuously build->ship->run interact with centralized IT and security operations teams, at scale? How do IT and security operations teams embrace these practices and technologies, and ensure that both the CI/CD pipelines and the runtime environments are secure?

These relationships between IT teams (including security teams) and the business units have largely existed at an executive level (VP and up), but to deliver “secure” outcomes continuously, a more effective, more automated interplay between these teams is needed.

We see many DevOps teams across business units incorporating security with varying degrees of rigor, or buying their own security solutions that only work for their own projects, purchased out of business unit budgets and implemented with limited security experience and no tie-back to corporate security requirements or IT awareness. This leads to a fragmented, duplicated, inconsistent security posture across the enterprise, and to higher-cost security tooling that becomes more complicated to manage and support. The pressure to deliver faster within a business unit sometimes comes at the cost of a coordinated, enterprise-wide security plan; we’ve all been there, and there’s often a balance to be found.

The working-level relationship between business unit application teams and centralized IT and security operations teams is not always collaborative and healthy. Sometimes it has friction. Often the root cause is that application teams have a significantly deeper understanding of DevOps practices and tools, and of technologies such as Docker, Kubernetes, and the various serverless platforms, than their IT counterparts. We’ve seen painful, unproductive discussions in which application teams try to educate their IT and security teams on the basics, let alone get them on board with doing things differently. The friction increases if the IT and security operations teams don’t embrace the changes required for container and serverless security. So, to us, the biggest impact right now is this: if a DevOps team wants to deliver continuously while following an enterprise-wide approach, it needs a continuous relationship with the IT and security operations teams, who must become well educated in DevOps practices and tools and in microservices technologies (Docker, Kubernetes, etc.), and who work with the application teams to automate security across pipelines and runtime environments. The IT and security teams need to level up their skill sets in DevOps and the associated technologies, and help teams move faster, not slower, while meeting security requirements.

To be true DevOps, the “Dev” part would be the application team and the “Ops” part would ideally be IT/security, working together. So we think there could be some pretty big shifts in how enterprises organize their development and IT/security operations teams, as the traditional organizational models favor delivery of monolithic, legacy applications that are not continuously delivered.

The biggest opportunity for IT/security operations teams is to engage the application teams with a set of self-service tools and practices positioned to help those teams move faster while meeting the enterprise’s IT and security requirements.

How can DevOps teams take advantage of the best security measures to better protect emerging technologies like container environments and their supporting tools?

Well, this could easily be a book! However, let’s summarize at a high level and break this down into “build,” “ship,” and “run.” By no means is this a complete list, but it’s enough to get started. For more information, contact us.

Security teams have a fantastic opportunity to introduce the following services across the enterprise, for all teams with pipelines and runtimes, in a consistent way.

Build

  • Identification of all source code repositories and CI/CD pipelines across the enterprise, and their owners.
  • Static code analysis.
  • Image scanning for malware.
  • Image scanning for vulnerabilities.
  • Image scanning for configuration assessments (ensure images are hardened).
  • Indicator of Compromise (IoC) queries across all registries.
  • Secrets detection.
  • Automated security testing in staged environments, with generic and custom test suites.
  • Image Assertion – declaring an image to be suitable for the next stage of the lifecycle based on the results of scans, tests, etc.
  • Provide reporting to both application teams and security teams on security scorecards.
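One of the build-stage steps above, secrets detection, can be sketched in a few lines: scan text that would end up in an image (Dockerfiles, env files) for credential-like strings before the image ships. The patterns below are illustrative assumptions, not a complete ruleset; production scanners use far larger pattern sets plus entropy checks.

```python
import re

# Illustrative credential patterns; real scanners ship hundreds of these.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?i)password\s*=\s*\S+"),
}

def find_secrets(text):
    """Return the sorted names of all patterns that match the text."""
    return sorted(name for name, pat in PATTERNS.items() if pat.search(text))

dockerfile = "ENV DB_PASSWORD=hunter2\nRUN echo AKIAABCDEFGHIJKLMNOP"
print(find_secrets(dockerfile))  # -> ['aws_access_key', 'password_assignment']
```

Wired into the pipeline, a non-empty result would fail the build before the secret ever reaches a registry.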

Ship

  • Admission control – the allowance or blocking of images to runtime environments based on security policies, image assertion, and/or signed images.
  • Vulnerability shielding of containers – Trend Micro will be releasing this capability later this year.
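The admission-control step above can be sketched as a simple policy gate: an image is allowed into the runtime environment only if it is signed, carries a pipeline assertion, and has no critical vulnerabilities. The image metadata shape and thresholds here are assumptions for illustration; real admission controllers (e.g. in Kubernetes) enforce this via webhooks against registry metadata.

```python
# Hypothetical admission-control policy: decide allow/block per image.
def admit(image):
    """Return (allowed, reason) for an image dict from the build pipeline."""
    if not image.get("signed"):
        return False, "image is not signed"
    if image.get("critical_vulns", 0) > 0:
        return False, "critical vulnerabilities present"
    if not image.get("asserted"):
        return False, "image not asserted by the build pipeline"
    return True, "admitted"

print(admit({"signed": True, "critical_vulns": 0, "asserted": True}))
print(admit({"signed": True, "critical_vulns": 3, "asserted": True}))
```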

Run

  • Runtime protection of Docker and Kubernetes, including anomaly detection of abnormal changes or configurations.
  • Hardening of Kubernetes and Docker.
  • Using Kubernetes network policy capabilities for micro-segmentation, and not a third-party solution. Then, ensure Kubernetes is itself protected.
  • Container host-based protection—covering malware, vulnerabilities, application control, integrity monitoring, and log inspection—for full stack defense of the applications and the host itself.
  • Kubernetes pod-based protection (privileged container – one per pod). This can be shipped into Kubernetes environments just like any other container, and no host-based agent is required.
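The anomaly-detection idea in the first run-stage bullet can be sketched as configuration-drift detection: record a baseline of a container's expected settings and flag any observed deviation. The baseline keys below are hypothetical; a real implementation would pull observed state from the Docker or Kubernetes APIs.

```python
# Hypothetical drift check: baseline of expected container settings.
baseline = {
    "user": "nobody",
    "privileged": False,
    "capabilities": ["NET_BIND_SERVICE"],
}

def drift(observed, expected):
    """Return {key: (expected_value, observed_value)} for every deviation."""
    return {
        key: (expected.get(key), value)
        for key, value in observed.items()
        if expected.get(key) != value
    }

observed = {
    "user": "root",  # abnormal change: container now runs as root
    "privileged": False,
    "capabilities": ["NET_BIND_SERVICE"],
}
print(drift(observed, baseline))  # -> {'user': ('nobody', 'root')}
```

A non-empty drift result is the trigger for alerting or for rolling the workload back to its declared state.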

For serverless containers and serverless functions: application protection in every image or function (an AppSec library focusing on RASP, OWASP risks, malware, and vulnerabilities inside the application execution path). Trend Micro will be releasing an offering later this year to address this.

Trend Micro provides a stronger, more robust, full-lifecycle approach to container security. This approach helps application teams meet compliance and IT security requirements for continuous delivery in CI/CD pipelines and runtime environments. With multiple security capabilities, complete automation resources, and world-class threat intelligence research teams, Trend Micro is a leader in meeting the cybersecurity needs of today’s application- and container-driven organizations.

Learn more at www.trendmicro.com/containers.

The post The Next Enterprise Challenge: How Best to Secure Containers and Monolithic Apps Together, Company-wide appeared first on .

Thoughts on Cloud Security

Recently I've been reading about cloud security and security with respect to DevOps. I'll say more about the excellent book I'm reading, but I had a moment of déjà vu during one section.

The book described how cloud security is a big change from enterprise security because it relies less on IP-address-centric controls and more on users and groups. It talked about creating security groups and adding users to those groups in order to control their access and capabilities.
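The group-based model the book describes can be sketched in a few lines: permissions attach to groups, and a user's capabilities come entirely from membership. The group and permission names here are invented for illustration.

```python
# Minimal sketch of group-based access control: users gain capabilities
# only through group membership. Names are hypothetical.
groups = {
    "admins": {"alice"},
    "auditors": {"bob", "carol"},
}
permissions = {
    "admins": {"read", "write", "delete"},
    "auditors": {"read"},
}

def can(user, action):
    """True if any group the user belongs to grants the action."""
    return any(
        action in permissions[g]
        for g, members in groups.items()
        if user in members
    )

print(can("bob", "read"), can("bob", "delete"))  # -> True False
```

Granting or revoking access then means editing group membership, not touching per-user rules, which is exactly the administrative win the model promises.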

As I read that passage, it reminded me of a time long ago, in the late 1990s, when I was studying for the MCSE, then called the Microsoft Certified Systems Engineer. I read Tom Sheldon’s Windows NT Security Handbook, published in 1996. It described the exact same security process of creating security groups and adding users. This was core to the new NT 4 role-based access control (RBAC) implementation.

Now, fast forward a few years, or all the way to today, and consider the security challenges facing the majority of legacy enterprises: securing Windows assets and the data they store and access. How could this wonderful security model, based on decades of experience (from the 1960s and 1970s no less), have failed to work in operational environments?

There are many reasons one could cite, but I think the following are at least worthy of mention.

The systems enforcing the security model are exposed to intruders.

Furthermore:

Intruders are generally able to gain code execution on systems participating in the security model.

Finally:

Intruders have access to the network traffic which partially contains elements of the security model.

From these weaknesses, a large portion of the security countermeasures of the last two decades have been derived as compensating controls and visibility requirements.

The question then becomes:

Does this change with the cloud?

In brief, I believe the answer is largely "yes," thankfully. Generally, the systems upon which the security model is being enforced are not able to access the enforcement mechanism, thanks to the wonders of virtualization.

Should an intruder find a way to escape from their restricted cloud platform and gain hypervisor or management network access, then they find themselves in a situation similar to the average Windows domain network.

This realization puts a heavy burden on the cloud infrastructure operators. The major players are likely able to acquire and apply the expertise and resources to make their infrastructure far more resilient and survivable than their enterprise counterparts.

The weakness will likely be their personnel.

Once the compute and network components are sufficiently robust from externally sourced compromise, then internal threats become the next most cost-effective and return-producing vectors for dedicated intruders.

Is there anything users can do as they hand their compute and data assets to cloud operators?

I suggest four moves.

First, small- to mid-sized cloud infrastructure users will likely have to piggyback or free-ride on the initiatives and influence of the largest cloud customers, who have the clout and hopefully the expertise to hold the cloud operators responsible for the security of everyone's data.

Second, lawmakers may also need to provide improved whistleblower protection for cloud employees who feel threatened by revealing material weaknesses they encounter while doing their jobs.

Third, government regulators will have to ensure no cloud provider assumes a monopoly, and no two providers assume a duopoly. We may end up with three major players and a smattering of smaller ones, as is the case with many mature industries.

Fourth, users should use every means at their disposal to select cloud operators not only on their compute features, but on their security and visibility features. The more logging and visibility exposed by the cloud provider, the better. I am excited by new features like the Azure network tap and hope to see equivalent features in other cloud infrastructure.

Remember that security has two main functions: planning/resistance, to try to stop bad things from happening, and detection/response, to handle the failures that inevitably happen. "Prevention eventually fails" is one of my long-time mantras. We don't want prevention to fail silently in the cloud. We need ways to know that failure is happening so that we can plan and implement new resistance mechanisms, and then validate their effectiveness via detection and response.

Update: I forgot to mention that the material above assumed that the cloud users and operators made no unintentional configuration mistakes. If users or operators introduce exposures or vulnerabilities, then those will be the weaknesses that intruders exploit. We've already seen a lot of this happening and it appears to be the most common problem. Procedures and tools which constantly assess cloud configurations for exposures and vulnerabilities due to misconfiguration or poor practices are a fifth move which all involved should make.
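That "fifth move" of continuously assessing cloud configuration can be sketched as a small audit over a resource inventory. The resource model below is hypothetical; a real assessment would read live state from the provider's APIs (and tools in this space do exactly that at scale).

```python
# Hypothetical misconfiguration audit over an inventory of cloud resources.
def assess(resources):
    """Return human-readable findings for common exposure patterns."""
    findings = []
    for r in resources:
        if r["type"] == "bucket" and r.get("public"):
            findings.append(f"{r['name']}: bucket is publicly readable")
        if r["type"] == "firewall" and "0.0.0.0/0" in r.get("allowed_sources", []):
            findings.append(f"{r['name']}: open to the entire internet")
    return findings

state = [
    {"type": "bucket", "name": "logs", "public": True},
    {"type": "firewall", "name": "db-fw", "allowed_sources": ["10.0.0.0/8"]},
]
print(assess(state))  # -> ['logs: bucket is publicly readable']
```

Run on a schedule or on every configuration change, a check like this turns silent misconfiguration into a detectable event, which fits the detection/response theme above.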

A corollary is that complexity can drive problems. When the cloud infrastructure offers too many knobs to turn, then it's likely the users and operators will believe they are taking one action when in reality they are implementing another.