
The Effects of DDoS Attacks on Essential Services

Public services continue to fall victim to distributed denial of service (DDoS) attacks, with many industry experts, including Corero, predicting that this is going to get worse before it gets better. Our collective pessimism is fuelled by dire warnings from government agencies that nation-state-sponsored cyber-criminals continue to focus their efforts on penetrating critical national infrastructure systems, such as energy grids, nuclear facilities, transportation networks and even drinking water supplies. While motivations may not always be clear, the potential consequences include impacts on security, economic stability, and even public health.

DDoS attacks can disrupt the availability of essential services we use as part of our everyday lives. Previous reports have highlighted the dangers of infrastructure attacks, such as last October’s DDoS attacks against Swedish railway systems, which disrupted travel. The WannaCry ransomware attacks in May last year likewise demonstrated the potential volume and strength of cyber-attacks on essential services, leaving people unable to access them.

Only last month, a DDoS attack on Danish rail operator, DSB, paralyzed ticketing systems resulting in travel chaos.

The consequences of a successful DDoS attack against an enterprise can be dire – from financial costs to a negative impact on a brand’s reputation. When it comes to the systems that underpin our essential services, however, the impact of a successful attack can be devastating: network downtime can hurt productivity, cause physical damage and even endanger public safety.

Critical infrastructure systems at risk

In recent years, DDoS attacks have become more complex, with many combinations of different attack approaches, known as vectors, being used.

Indeed, the ability to take systems offline has never been easier as DDoS attack tools, whilst illegal in many countries, are readily accessible and inexpensive. So-called DDoS stresser or booter services are frequently enabled by large networks, known as botnets, of hijacked Internet of Things (IoT) devices.

Another serious concern is the number of Internet-connected systems and devices that either form part of or are connected to industrial control systems. As organizations become increasingly reliant on the convenience of Internet accessibility, the potential attack surface for damaging cyber-attacks, including DDoS, increases. As a result, organizations need to ensure they have adequate firewalls, access mechanisms and real-time protections in place to eliminate the Internet-borne threats to their control networks.

Critical infrastructure operators in energy, healthcare and transportation cannot leave DDoS attack resilience to chance. Corero’s recent Freedom of Information survey revealed that most UK critical infrastructure organisations (51%) are potentially vulnerable to these attacks. These organizations have failed to invest in technology that can detect and immediately mitigate short-duration DDoS attacks (i.e. those lasting less than 10 minutes) on their networks. Corero’s DDoS Trends Reports have long shown that these short-duration, modestly scaled attacks dominate the threat landscape. Operators of essential services should not be complacent, as even these short attacks can significantly impede service delivery.
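To illustrate why short-duration attacks slip past periodic monitoring, detection has to evaluate traffic rates over a sliding window measured in seconds, not minutes. The sketch below is purely illustrative – the thresholds, window length and sampling format are assumptions, not Corero’s implementation:

```python
from collections import deque

def make_burst_detector(threshold_pps: float, window_seconds: float = 5.0):
    """Return a callable that flags short traffic bursts.

    Feed it (timestamp, packet_count) samples; it reports True when the
    packet rate over the sliding window exceeds the threshold.
    Threshold and window length are illustrative assumptions.
    """
    samples = deque()  # (timestamp, packets) pairs inside the window

    def observe(timestamp: float, packets: int) -> bool:
        samples.append((timestamp, packets))
        # Drop samples that have aged out of the sliding window.
        while samples and timestamp - samples[0][0] > window_seconds:
            samples.popleft()
        # Normalize over the full window so a single quiet sample
        # does not produce a misleading rate.
        rate = sum(p for _, p in samples) / window_seconds
        return rate > threshold_pps

    return observe
```

A ten-minute attack that a five-minute polling cycle might miss entirely would trip this kind of detector within a few seconds of the spike starting.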

NIS Regulations and best practices

On 10th May this year the EU NIS Directive became law in all 28 EU member states. The regulations require that operators of essential services “must take appropriate and proportionate technical and organisational measures to manage risks posed to the security of the network and information systems on which their essential service relies”. In the UK, the best practice guidance is stipulated by the National Cyber Security Centre (NCSC). The NIS Regulations arrive with a big “stick”: fines of up to £17 million for those who fail to comply. Hopefully, operators will see this as a “carrot” to upgrade their cyber-protection to defend against DDoS and other cyber-threats.

Contact us if you’d like to find out how Corero can help you prevent DDoS attacks impacting your ability to deliver service.

Scaling Network Security: The Scaled Network Security Architecture

Posted under: Research and Analysis

After considering the challenges of existing network security architectures (RIP Moat) we laid out a number of requirements for the new network security. These include the need for scale, intelligence, and flexibility. That’s all well and good, but how do you get there? We’ll wrap up this series by discussing a couple of key architectural constructs which will influence how you build your future network security architecture.

But before we go into specifics, let’s wrap a few caveats around the architecture. Not everything works for every organization. There may be cultural impediments to some of the ideas we recommend. We point this out because any new way of doing things can face resistance from folks who will be impacted. You will need to decide which ideas are suitable for your current problems, and which battles are not worth fighting.

There may also be technical challenges, especially with very large networks. Not so much conceptually – faster networks and increased flexibility are already common, regardless of the size of your network. The challenge is more in terms of phasing migration. But nothing we will recommend requires a flash cutover, nor are any of these ideas incompatible with existing network security constructs. We have always advocated customer-controlled migration, which entails deciding when you will embrace new capabilities – not some arbitrary requirement from a vendor or any other influencer.

Access Control Everywhere

Our first construct is access control everywhere. This is pretty fundamental, because network security is about controlling access to key resources. Duh. We have been pointing out that segmentation is your friend for years. But in traditional networks it became very hard to do true access control scalably, because data flows weren’t predictable, workloads and data move around, and users need to connect from wherever they are.

The advent of software defined everything (including networks) has given us an opportunity to more effectively manage who gets access to what, and when. The key is setting the policy. Yes, you start with critical data and who can and should access it from where to set your baseline. But the larger the network and the more dispersed employees and resources (including mobility and the cloud) are, the tougher it is. So you do the best you can with the initial set of policies, and then hit it from the other side. Your new network security should be able to monitor traffic flows and suggest a workable access control policy. Obviously you’ll need to scrutinize and tune the policy while comparing it against the initial cut you took, but this will accelerate your effort.
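A minimal sketch of that flow-driven approach: aggregate observed flows into candidate allow rules for a human to review and tune. The field names and the occurrence threshold here are illustrative assumptions, not any vendor’s schema:

```python
from collections import Counter

def suggest_policy(flows, min_count=3):
    """Suggest allow rules from observed (src_zone, dst_zone, port) flows.

    A flow pattern seen at least `min_count` times becomes a candidate
    allow rule; anything else would fall through to a default deny.
    Field names and the threshold are illustrative assumptions.
    """
    seen = Counter((f["src_zone"], f["dst_zone"], f["port"]) for f in flows)
    return [
        {"action": "allow", "src": s, "dst": d, "port": p}
        for (s, d, p), n in sorted(seen.items())
        if n >= min_count
    ]
```

The output is a starting point to compare against your hand-built baseline, not a policy to deploy blindly – rare-but-legitimate flows will be missing, and malicious-but-frequent flows could sneak in.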

Returning to the need for flexibility, you should be able to adapt policies as needed. Sometimes even on the fly, within parameters defined by policy. That doesn’t mean you need to embrace machines making policy changes without human oversight or intervention, at least at first. In a customer-controlled migration you determine the pace of automation, enabling you to get comfortable with policies and ensure maximum uptime and security.

Applying Security Controls

With segmentation reducing attack surface by preventing unauthorized access to critical resources, you still need to ensure authorized connections and sessions are not doing anything malicious. But devices get compromised, so we can’t forget the prevention and detection tactics we’ve been using on our networks for decades. Those are still very much needed, but as described under requirements, we need to be more intelligent about when security controls are used. You have probably spent a couple million ($CURRENCY) on network security controls, so you might as well make the best use of that investment.

Once again we return to the importance of policy-based network security. Depending on the source, destination, application, time of day, geography, and about a zillion other attributes (okay, we may be exaggerating a bit), we want to leverage a set of controls to protect data. Not every control applies to every session, so the network security platform needs to selectively apply controls.
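A minimal sketch of such selective application: the first policy whose attributes all match the session determines which controls it passes through. The attribute and control names are illustrative assumptions:

```python
# Each policy maps session attributes to the chain of controls to apply.
# Attribute and control names are illustrative, not a product's schema.
POLICIES = [
    ({"direction": "ingress", "app": "email"}, ["email_gateway", "malware_scan"]),
    ({"direction": "ingress", "app": "web"},   ["waf", "ips"]),
    ({"direction": "egress"},                  ["web_filter"]),
]

def controls_for(session: dict) -> list:
    """Return the control chain for the first policy that fully matches."""
    for match, controls in POLICIES:
        if all(session.get(k) == v for k, v in match.items()):
            return controls
    return ["default_inspection"]  # fallback when no policy matches
```

The point is that not every session pays the cost of every control: web traffic never touches the email gateway, and unmatched traffic still gets a safe default.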


Before you start worrying about which controls to apply to which traffic, you need to make sure you can actually inspect the sessions. With more and more network traffic encrypted nowadays, before you can apply security controls you will likely need to decrypt. We wrote about this at length in Security and Privacy on the Encrypted Network, but things have changed a bit over the past few years.

The standard approach to network decryption involves intercepting the connection to the destination (called person-in-the-middle) and then decrypting the session using a master key. The decryption device then routes the decrypted stream to the appropriate security control per policy, and then sets up a separate encrypted connection to the destination server. And yes, our political correctness may be getting the best of us, but we’re pretty sure that network security equipment is not gender-binary, so we like ‘person’ in the middle.

Any network security platform will need to provide decryption capabilities as needed. But that’s getting more complicated, as described in the TLS 1.3 Controversy. Clearly a person in the middle weakens the overall security of a connection, because any organization (some good – like your internal security team; and some bad – like adversaries) could theoretically get in the middle to sniff the session. The TLS 1.3 specification addresses that weakness by mandating Perfect Forward Secrecy, which uses ephemeral keys for each session, so there is no single master key capable of decrypting everything.

Obviously not being able to get in the middle of network sessions eliminates your ability to inspect traffic and enforce security policies on the network. To be clear, it will take a long time for TLS 1.3 to become pervasive; in the meantime your connections can negotiate down to TLS 1.2, which still allows person-in-the-middle. But we need to start thinking about different, likely endpoint-centric, approaches to inspecting traffic before it hits the encrypted network.
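As a small illustration using Python’s standard ssl module: a middlebox holding a static server key can only passively decrypt pre-1.3 sessions (and strictly speaking only those TLS 1.2 sessions using static RSA key exchange – the check below is a deliberate simplification), and the “negotiate down” option amounts to capping a context at TLS 1.2:

```python
import ssl

def passively_decryptable(version: ssl.TLSVersion) -> bool:
    """Whether a middlebox with a static server key could, in principle,
    decrypt a recorded session. TLS 1.3 mandates ephemeral key exchange,
    so the answer there is always no; for TLS 1.2 it actually depends on
    the cipher suite (this sketch assumes static RSA key exchange).
    """
    return version < ssl.TLSVersion.TLSv1_3

# The "negotiate down" option: cap a client context at TLS 1.2 so an
# inspection device can stay in the middle (at the cost of weakening
# the connection).
ctx = ssl.create_default_context()
ctx.maximum_version = ssl.TLSVersion.TLSv1_2
```

This is also why endpoint-centric inspection becomes attractive: on the endpoint you see the plaintext before it is encrypted, regardless of the TLS version negotiated on the wire.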

Contextual Protection

Assuming we can inspect traffic on the network, we want to implement a policy-centric security approach. That means identifying the traffic and determining which security control(s) are appropriate based on the specifics of the connection. Context helps ensure you are using the appropriate security controls, which both improves security posture and helps to optimize control capacity (as we’ll discuss below).

The best way to understand this is with a few simple examples:

  • Ingress: In the case of an inbound connection you want to protect against malware coming into the network, as well as application attacks. So you can set a policy that routes email traffic through an email security gateway and then a network-based malware scanner. Or maybe you take email from an email security service and then run it through your malware scanner or IPS to ensure any links in the message aren’t malicious. To protect application traffic, the connection first goes through a WAF, but you can also run it through an IPS to detect more traditional attacks. Similarly, you’d like to be able to leverage different controls if the session originates in a hostile country, which demands more scrutiny.
  • Egress: Looking at it from the other end, if you are dealing with outbound traffic you first want to decrypt an encrypted session and then send it through a web filter, which will determine whether it is being misused, connecting to a malicious site, or showing patterns which may indicate command and control traffic. But depending on what kind of data is in the payload, you might also want that connection to run through a DLP device to ensure data is not misused. You’ll want to provide context for DLP inspection because it is very resource intensive.

These examples are deliberately oversimplified, but contextual protection enables you to use the controls you need to protect a specific connection.

Optimizing Capacity

As mentioned in our Requirements post, you don’t always have the luxury of upgrading network security controls at the same time as network bandwidth. Additionally, heavy-duty Deep Packet Inspection, as described in the examples of contextual protection above, may not be needed for all traffic – especially given the significant resources it requires. So when determining which controls are used on which connections, it’s important to make sure capacity is factored into the mix.

You don’t want to compromise security due to capacity. But a network security platform which can give you a sense of when specific security controls are at capacity, as well as potentially buffer connections so packets aren’t dropped, can provide a graceful way to manage network capacity.

For another example, if you recently upgraded your data center network to 100 Gbps but don’t have the security budget to increase the speed of your internal segmentation firewalls, you can have a network security platform buffer traffic while the firewalls enforce policy. This is not a great answer because it impacts application traffic, but the alternative might be to either violate segmentation rules (which probably won’t sit well with the auditors) or drop packets.
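A toy sketch of that buffering behavior – wrap a control with a bounded queue so excess packets wait rather than drop, and count drops only when the buffer itself overflows. The capacity numbers and interface are illustrative assumptions:

```python
from collections import deque

class BufferedControl:
    """Wrap a security control with a bounded buffer so packets queue
    (rather than drop) when the control is at capacity.
    Capacity figures are illustrative, not real sizing guidance.
    """

    def __init__(self, rate_per_tick: int, max_buffer: int = 1000):
        self.rate = rate_per_tick        # packets the control can inspect per tick
        self.buffer = deque(maxlen=max_buffer)
        self.dropped = 0                 # drops happen only on buffer overflow

    def enqueue(self, packet) -> None:
        if len(self.buffer) == self.buffer.maxlen:
            self.dropped += 1            # the failure mode you want to monitor
        else:
            self.buffer.append(packet)

    def tick(self) -> list:
        """Drain up to `rate` buffered packets into the control this tick."""
        return [self.buffer.popleft()
                for _ in range(min(self.rate, len(self.buffer)))]
```

The drop counter is the signal that buffering has stopped being “graceful” and it is time to scale the control itself.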

Another example is intelligently routing connections to authorized SaaS applications, but through your security web gateway service rather than your internal DLP engine, because you already have a CASB monitoring activity in those SaaS applications. That can help your DLP device scale more effectively. Again, simple examples illustrate how intelligently selecting security controls per connection is useful.

Getting There

The key here is to not get wrapped up trying to boil the ocean. You can start small, perhaps implementing an SDN in front of your egress security controls to apply the policies we discussed. Or possibly introducing a packet broker in front of a key application to make sure appropriate security controls are not overwhelmed in case of a traffic flood. You could start thinking about micro-segmentation in your virtualized data center, and map those capabilities to all new applications being deployed in IaaS (Infrastructure as a Service). Or you might be interested in a newfangled Zero Trust access control environment or a Secure Network as a Service offering for employee access, and roll out intelligent networks internally to provide access to some resources (which remote employees need), while segmenting everything else.

The possibilities for how to migrate to this kind of network security platform are endless, and there is no right or wrong answer. There is only the reality that your security controls cannot scale at the same rate as your networks, which means you need to apply intelligence to how security controls are deployed within your environment.

- Mike Rothman