Category Archives: cloud

Expanding attack surfaces leave security teams stretched thin

Thirty percent of businesses globally have seen an increase in attacks on their IT systems as a result of the pandemic, HackerOne reveals. This is according to C-level IT and security executives at global businesses, 64% of whom believe their organization is more likely to experience a data breach due to COVID-19. Remote working and expanding attack surfaces: “The COVID-19 crisis has shifted life online,” says Marten Mickos, CEO of HackerOne. “As companies rush to meet … More

The post Expanding attack surfaces leave security teams stretched thin appeared first on Help Net Security.

Half of IT teams can’t fully utilize cloud security solutions due to understaffing

There are unrealized gaps between the rate of implementation or operation and the effective use of cloud access security brokers (CASBs) within the enterprise, according to a global Cloud Security Alliance survey of more than 200 IT and security professionals from a variety of organization sizes and locations. Utilize cloud security solutions: “CASB solutions have been underutilized on all the pillars, but in particular on the compliance, data security, and threat protection capabilities within the … More

The post Half of IT teams can’t fully utilize cloud security solutions due to understaffing appeared first on Help Net Security.

The two sides of data protection – Are you ready for anything?

More than ever before, organizations need a multi-layer approach to secure their data. This includes strategies to both defend and recover the data if the bad guys get in. “Unfortunately, chances are that something is going to slip through at some point,” said Will Urban, Senior Technologist with iland, at a recent ITWC webinar. There…

How Cloud Mitigation Techniques Can Help Prevent Ransomware and Phishing Attacks

The COVID-19 pandemic did not create the flaws in the American healthcare system; it brought long-standing weaknesses to light. In the wake of the pandemic, a host of new cyberattacks hit the healthcare sector. Malicious hackers aimed to take advantage of the crisis with a combination […]… Read More

The post How Cloud Mitigation Techniques Can Help Prevent Ransomware and Phishing Attacks appeared first on The State of Security.

Have You Considered Your Organization’s Technical Debt?

TL;DR Deal with your dirty laundry.

Have you ever skipped doing your laundry and watched as that pile of dirty clothes kept growing, just waiting for you to get around to it? You’re busy, you’re tired and you keep saying you’ll get to it tomorrow. Then suddenly, you realize that it’s been three weeks and now you’re running around frantically, late for work because you have no clean socks!

That is technical debt.

Those little things that you put off, which can grow from a minor inconvenience into a full-blown emergency when they’re ignored long enough.

Piling Up

How many times have you had an alarm go off, or a customer issue arise from something you already knew about and meant to fix, but “haven’t had the time”? How many times have you been working on something and thought, “wow, this would be so much easier if I just had the time to …”?

That is technical debt.

But back to you. In your rush to leave for work you manage to find two old mismatched socks. One of them has a hole in it. You don’t have time for this! You throw them on and run out the door, on your way to solve real problems. Throughout the day, that hole grows and your foot starts to hurt.

This is really not your day. In your panicked state this morning you actually managed to add more pain to your already stressed system, plus you still have to do your laundry when you get home! If only you’d taken the time a few days ago…

Coming Back to Bite You

In the tech world, where one seemingly small hole – one tiny vulnerability – can bring down your whole system, managing technical debt is critical. Fixing issues before they become emergencies is necessary in order to succeed.

If you’re always running at full speed to solve the latest issue in production, you’ll never get ahead of your competition; you’ll only fall further behind.

It’s very easy to get into a pattern of leaving the little things for another day. Build optimizations, that random unit test that’s missing, that playbook you meant to write up after the last incident – these are technical debt too. By spending just a little time each day to tidy up a few things, you can make your system more stable and provide a better experience for both your customers and your fellow developers.

Cleaning Up

Picture your code as that mountain of dirty laundry. Each day that passes, you add just a little more to it. The more debt you add on, the more daunting your task seems. It becomes a thing of legend. You joke about how you haven’t dealt with it, but really you’re growing increasingly anxious and wary about actually tackling it, and what you’ll find when you do.

Maybe if you put it off just a little bit longer a hero will swoop in and clean up for you! (A woman can dream, right?) The more debt you add, the longer it will take to conquer, the harder the work becomes, and the higher the risk of introducing a new issue.

This added stress and complexity doesn’t sound too appealing, so why do we do it? It’s usually caused by things like having too much work in progress, conflicting priorities and (surprise!) neglected work.

Managing technical debt requires only one important thing – a cultural change.

As much as possible, we need to stop creating new technical debt; otherwise we will never get it under control. To do that, we need to shift our mindset. We need to step back and take the time to see and make visible all of the technical debt we’re drowning in. Then we can start to chip away at it.

Culture Shift

My team took a page out of “The Unicorn Project” (Kim, 2019) and started by running “debt days” when we caught our breath between projects. Each person chose a pain point, something they were interested in fixing, and we started there. We dedicated two days to removing debt and came out the other side having completed tickets that had been on the backlog for over a year.

We also added new metrics and dashboards for better incident response, and improved developer tools.

Now, with each new code change, we’re on the lookout. Does this change introduce any debt? Do we have the ability to fix that now? We encourage each other to fix issues as we find them whether it’s with the way our builds work, our communication processes or a bug in the code.

We need to give ourselves the time to breathe, in both our personal lives and our work days. Taking a pause between tasks not only allows us to mentally prepare for the next one, but it gives us time to learn and reflect. It’s in these pauses that we can see if we’ve created technical debt in any form and potentially go about fixing it right away.

What’s Next?

The improvement of daily work ultimately enables developers to focus on what’s really important: delivering value. It enables them to move faster and find more joy in their work.

So how do you stay on top of your never-ending laundry? Your family chooses to make a cultural change and dedicates time to it. You declare Saturday as laundry day!

Make the time to deal with technical debt – your developers, security teams, and your customers will thank you for it.

 


Fixing cloud migration: What goes wrong and why?

 

The cloud space has been evolving for almost a decade. As a company we’re a major cloud user ourselves. That means we’ve built up a huge amount of in-house expertise over the years around cloud migration — including common challenges and perspectives on how organizations can best approach projects to improve success rates.

As part of our #LetsTalkCloud series, we’ve focused on sharing some of this expertise through conversations with our own experts and folks from the industry. To kick off the series, we discussed some of the security challenges solution architects and security engineers face with customers when discussing cloud migrations. Spoiler…these challenges may not be what you expect.

 

Drag and drop

 

A lack of strategy and planning from the start is symptomatic of a broader challenge in many organizations: There’s no big-picture thinking around cloud, only short-term tactical efforts. Sometimes we get the impression that a senior exec has just seen a ‘cool’ demo at a cloud vendor’s conference and now wants to migrate a host of apps onto that platform. There’s no consideration of how difficult such a migration would be, or even whether it’s necessary or desirable.

 

These issues are compounded by organizational silos. The larger the customer, the larger and more established their individual teams are likely to be, which can make communication a major challenge. Even if you have a dedicated cloud team to work on a project, they may not be talking to other key stakeholders in DevOps or security, for example.

 

The result is that, in many cases, tools, applications, policies, and more are forklifted over from on-premises environments to the cloud. This ends up becoming incredibly expensive, as these organizations are not really changing anything. All they are doing is adding an extra middleman, without taking advantage of the benefits of cloud-native tools like microservices, containers, and serverless.
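For contrast, a cloud-native building block can be remarkably small. The following is a minimal, hypothetical sketch of a serverless function in Python (an AWS Lambda-style handler); the event shape and names are illustrative, not drawn from any customer example.

# Minimal, illustrative serverless handler (Python, AWS Lambda style).
# The event shape and field names are assumptions for this sketch.
import json

def handler(event, context):
    # Read an optional "name" query parameter from an API Gateway-style event.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    # Return a small JSON response; there is no server, container, or VM to manage.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }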

 

There’s often no visibility or control. Organizations don’t understand that they need to lock down all their containers and sanitize APIs, for example. Plus, there’s no authority given to cloud teams around governance, cost management, and policy assignment, so things just run out of control. Often, shared responsibility isn’t well understood, especially in the new world of DevOps pipelines, so security isn’t applied to the right areas.
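The kind of basic check that often goes missing is simple to state. Purely as an illustration (assuming AWS and boto3; all names are placeholders, not a prescribed tool), this sketch flags security groups that allow inbound traffic from anywhere:

# Rough visibility check (boto3): list security groups that allow inbound
# traffic from 0.0.0.0/0. Illustrative only; no pagination or error handling.
import boto3

ec2 = boto3.client("ec2")

def groups_open_to_world():
    findings = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            for ip_range in perm.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append((sg["GroupId"], perm.get("FromPort")))
    return findings

if __name__ == "__main__":
    for group_id, port in groups_open_to_world():
        print(f"{group_id} allows 0.0.0.0/0 on port {port}")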

 

Getting it right

 

These aren’t easy problems to solve. From a security perspective, it seems we still have a job to do in educating the market about shared responsibility in the cloud, especially when it comes to newer technologies, like serverless and containers. Every time there’s a new way of deploying an app, it seems like people make the same mistakes all over again — presuming the vendors are in charge of security.

 

Automation is a key ingredient of successful migrations. Organizations should be automating everywhere, including policies and governance, to bring more consistency to projects and keep costs under control. In doing so, they must realize that this may require a redesign of apps, and a change in the tools they use to deploy and manage those apps.
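What automating policies and governance can mean in practice is easiest to see with a tiny example. The sketch below is purely illustrative and assumes AWS with boto3: it applies a mandatory cost-tracking tag to any EC2 instance that is missing one. The tag key and default value are invented for the example.

# Hypothetical policy-as-code sketch (boto3): ensure every EC2 instance
# carries an "owner" tag so cost and governance reporting stay consistent.
import boto3

ec2 = boto3.client("ec2")

def enforce_owner_tag(default_owner="unassigned"):
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                if "owner" not in tags:
                    ec2.create_tags(
                        Resources=[instance["InstanceId"]],
                        Tags=[{"Key": "owner", "Value": default_owner}],
                    )

if __name__ == "__main__":
    enforce_owner_tag()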

 

Ultimately, you can migrate apps to the cloud in a couple of clicks. But the governance, policy, and management that must go along with this are often forgotten. That’s why you need clear strategic objectives and careful planning to secure more successful outcomes. It may not be very sexy, but it’s the best way forward.

 

To learn more about cloud migration, check out our blog series. And catch up on all of the latest trends in DevOps to learn more about securing your cloud environment.


Happy Birthday TaoSecurity.com


Nineteen years ago this week I registered the domain taosecurity.com:

Creation Date: 2000-07-04T02:20:16Z

This was 2 1/2 years before I started blogging, so I don't have much information from that era. I did create the first taosecurity.com Web site shortly thereafter.

I first started hosting it on space provided by my then-ISP, Road Runner of San Antonio, TX. According to archive.org, it looked like this in February 2002.


That is some fine-looking vintage hand-crafted HTML. Because I lived in Texas I apparently reached for the desert theme with the light tan background. Unfortunately I didn't have the "under construction" gif working for me.

As I got deeper into the security scene, I decided to simplify and adopt a dark look. By this time I had left Texas and was in the DC area, working for Foundstone. According to archive.org, the site looked like this in April 2003.


Notice I've replaced the oh-so-cool picture of me doing American Kenpo in the upper-left-hand corner with the classic Bruce Lee photo from the cover of The Tao of Jeet Kune Do. This version marks the first appearance of my classic TaoSecurity logo.

A little more than two years later, I decided to pursue TaoSecurity as an independent consultant. To launch my services, I painstakingly created more hand-written HTML and graphics to deliver this beauty. According to archive.org, the site looked like this in May 2005.


I mean, can you even believe how gorgeous that site is? Look at the subdued gray TaoSecurity logo, the red-highlighted menu boxes, etc. I should have kept that site forever.

We know that's not what happened, because that wonder of a Web site only lasted about a year. Still to this day not really understanding how to use CSS, I used a free online template by Andreas Viklund to create a new site. According to archive.org, the site appeared in this form in July 2006.


After four versions in four years, my primary Web site stayed that way... for thirteen years. Oh, I modified the content, SSH'ing into the server hosted by my friend Phil Hagen, manually editing the HTML using vi (and careful not to touch the CSS).

Then, I attended AWS re:Inforce the last week of June 2019. I decided that although I had tinkered with Amazon Web Services as early as 2010, and was keeping an eye on it as early as 2008, I had never hosted any meaningful workloads there. A migration of my primary Web site to AWS seemed like a good way to learn a bit more about AWS and an excuse to replace my teenage Web layout with something that rendered a bit better on a mobile device.

After working with Mobirise, AWS S3, AWS CloudFront, AWS Certificate Manager, AWS Route 53, my previous domain name servers, and my domain registrar, I'm happy to say I have a new TaoSecurity.com Web site. The front page looks like this:


The background is an image of Milnet from the late 1990s. I apologize for the giant logo in the upper left. It should be replaced by a resized version later today when the AWS CloudFront cache expires.
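If you don't want to wait for the cache to expire, CloudFront can be told to drop the object on demand. Here is a rough boto3 sketch of that idea; the bucket name, distribution ID, and object path are placeholders rather than my actual configuration.

# Hypothetical sketch: upload a replacement object to S3, then invalidate it
# in CloudFront so the new version is served immediately. All names are
# placeholders, not real values.
import time
import boto3

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

s3.upload_file("logo_small.png", "example-bucket", "images/logo.png",
               ExtraArgs={"ContentType": "image/png"})

cloudfront.create_invalidation(
    DistributionId="EXAMPLE123ABC",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/images/logo.png"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)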

Scrolling down provides information on my books, which I figured is what most people who visit the site care about.


For reference, I moved the content (which I haven't updated) about news, press, and research to individual TaoSecurity Blog posts.

It's possible you will not see the site, if your DNS servers have the old IP addresses cached. That should all expire no later than tomorrow afternoon, I imagine.

Let's see if the new site lasts another thirteen years.

Thoughts on Cloud Security

Recently I've been reading about cloud security and security with respect to DevOps. I'll say more about the excellent book I'm reading, but I had a moment of déjà vu during one section.

The book described how cloud security is a big change from enterprise security because it relies less on IP-address-centric controls and more on users and groups. It talked about creating security groups and adding users to those groups in order to control their access and capabilities.
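In AWS IAM terms, that pattern looks roughly like the following boto3 sketch; the group name, user name, and managed policy are my own illustrative choices, not anything from the book.

# Illustrative boto3 sketch of the group-and-user pattern: create a group,
# attach a policy to it, then place a user in the group. Names are examples.
import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="auditors")
iam.attach_group_policy(
    GroupName="auditors",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)

iam.create_user(UserName="alice")
iam.add_user_to_group(GroupName="auditors", UserName="alice")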

As I read that passage, it reminded me of a time long ago, in the late 1990s, when I was studying for the MCSE, then called the Microsoft Certified Systems Engineer. I read Windows NT Security Handbook, published in 1996 by Tom Sheldon. It described the exact same security process of creating security groups and adding users. This was core to the new NT 4 role-based access control (RBAC) implementation.

Now, fast forward a few years, or all the way to today, and consider the security challenges facing the majority of legacy enterprises: securing Windows assets and the data they store and access. How could this wonderful security model, based on decades of experience (from the 1960s and 1970s no less), have failed to work in operational environments?

There are many reasons one could cite, but I think the following are at least worthy of mention.

The systems enforcing the security model are exposed to intruders.

Furthermore:

Intruders are generally able to gain code execution on systems participating in the security model.

Finally:

Intruders have access to the network traffic which partially contains elements of the security model.

From these weaknesses, a large portion of the security countermeasures of the last two decades have been derived as compensating controls and visibility requirements.

The question then becomes:

Does this change with the cloud?

In brief, I believe the answer is largely "yes," thankfully. Generally, the systems upon which the security model is being enforced are not able to access the enforcement mechanism, thanks to the wonders of virtualization.

Should an intruder find a way to escape from their restricted cloud platform and gain hypervisor or management network access, then they find themselves in a situation similar to the average Windows domain network.

This realization puts a heavy burden on the cloud infrastructure operators. The major players are likely able to acquire and apply the expertise and resources to make their infrastructure far more resilient and survivable than their enterprise counterparts.

The weakness will likely be their personnel.

Once the compute and network components are sufficiently robust from externally sourced compromise, then internal threats become the next most cost-effective and return-producing vectors for dedicated intruders.

Is there anything users can do as they hand their compute and data assets to cloud operators?

I suggest four moves.

First, small- to mid-sized cloud infrastructure users will likely have to piggyback or free-ride on the initiatives and influence of the largest cloud customers, who have the clout and hopefully the expertise to hold the cloud operators responsible for the security of everyone's data.

Second, lawmakers may also need improved whistleblower protection for cloud employees who feel threatened by revealing material weaknesses they encounter while doing their jobs.

Third, government regulators will have to ensure no cloud provider assumes a monopoly, and no two providers assume a duopoly. We may end up with the three major players and a smattering of smaller ones, as is the case with many mature industries.

Fourth, users should use every means at their disposal to select cloud operators not only on their compute features, but on their security and visibility features. The more logging and visibility exposed by the cloud provider, the better. I am excited by new features like the Azure network tap and hope to see equivalent features in other cloud infrastructure.
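As one concrete example of buying visibility from a provider, here is a hedged boto3 sketch that turns on account-wide API logging with AWS CloudTrail. The trail and bucket names are placeholders, and the bucket is assumed to already exist with a policy that allows CloudTrail to write to it.

# Hypothetical sketch: enable multi-region API logging with CloudTrail.
# Assumes "example-trail-bucket" already exists and grants CloudTrail
# permission to write; names are placeholders.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-visibility-trail",
    S3BucketName="example-trail-bucket",
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="org-visibility-trail")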

Remember that security has two main functions: planning/resistance, to try to stop bad things from happening, and detection/response, to handle the failures that inevitably happen. "Prevention eventually fails" is one of my long-time mantras. We don't want prevention to fail silently in the cloud. We need ways to know that failure is happening so that we can plan and implement new resistance mechanisms, and then validate their effectiveness via detection and response.

Update: I forgot to mention that the material above assumed that the cloud users and operators made no unintentional configuration mistakes. If users or operators introduce exposures or vulnerabilities, then those will be the weaknesses that intruders exploit. We've already seen a lot of this happening and it appears to be the most common problem. Procedures and tools which constantly assess cloud configurations for exposures and vulnerabilities due to misconfiguration or poor practices are a fifth move which all involved should make.
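A configuration-assessment pass doesn't have to be elaborate to be useful. As a purely illustrative sketch (assuming AWS and boto3), the following reports S3 buckets whose public access block settings are missing or incomplete:

# Rough configuration-assessment sketch (boto3): flag S3 buckets that do not
# have all four public access block settings enabled. Illustrative only.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        if not all(cfg.values()):
            print(f"{name}: public access block only partially enabled")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured")
        else:
            raise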

A corollary is that complexity can drive problems. When the cloud infrastructure offers too many knobs to turn, it's likely that users and operators will believe they are taking one action when in reality they are implementing another.