
The DRaaS Data Protection Dilemma

Written by Sarah Doherty, Product Marketing Manager at iland

Around the world, IT teams are struggling to choose between less critical but still important tasks and innovative projects that help transform the business. Both are necessary and need to be actioned, but should your team do all of it? Have you thought about allowing someone else to guide you through the process while your internal team continues to focus on transforming the business?

The DRaaS data protection dilemma: outsourcing or self-managing?
Disaster recovery can take considerable time to implement properly, so it may be the right moment to consider a third-party provider who can help with some of the more routine and technical aspects of your disaster recovery planning. This help can free up some of your staff’s valuable time while also safeguarding your vital data.

Outsourcing your data protection functions vs. managing them yourself
The question of who should handle data protection has raised many debates within information technology. Some experts favour the Disaster Recovery as a Service (DRaaS) approach. They believe that data protection, although necessary, has very little to do with core business functionality. Organisations commonly outsource non-business services, which has driven many to consider employing third parties for other business initiatives. This has led some companies to believe that all IT services should be outsourced, enabling the IT team to focus solely on core business functions and transformational growth.

Other groups challenge this concept and believe that outsourcing data protection is foolish. An organisation’s ability to quickly and completely recover from a disaster - such as data loss or an organisational breach - can be the determining factor in whether the organisation remains in business. Some may think that outsourcing something as critical as data protection, and putting your organisation’s destiny into the hands of a third party, is a risky strategy. The basic philosophy behind this type of thinking can best be described as: “If you want something done right, do it yourself.”

Clearly, both sides have some compelling arguments. On one hand, by moving your data protection solution to the cloud, your organisation becomes increasingly agile and scalable. Storing and managing data in the cloud may also lower storage and maintenance costs. On the other hand, managing data protection in-house gives the organisation complete control. Therefore, a balance of the two approaches is needed in order to be sure that data protection is executed correctly and securely.

The answer might be somewhere in the middle
Is it better to outsource all of your organisation’s data protection functions, or is it better to manage them yourself? The best approach may be a mix of the two, using both DRaaS and Backup as a Service (BaaS). While choosing a cloud provider for a fully managed recovery solution is also a possibility, many companies are considering moving away from ‘do-it-yourself’ disaster recovery solutions and are exploring cloud-based options for several reasons.

Firstly, purchasing the infrastructure for the recovery environment requires a significant capital expenditure (CAPEX) outlay. Therefore, making the transition from CAPEX to a subscription-based operating expenditure (OPEX) model makes for easier cost control, especially for those companies with tight budgets.
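
To make that trade-off concrete, here is a minimal sketch in Python comparing the two cost models over a planning horizon. All figures are illustrative assumptions for the sake of the example, not real iland or vendor pricing.

# Illustrative CAPEX vs. OPEX comparison for a recovery environment.
# Every figure below is an assumption chosen only to show the arithmetic.
capex_hardware = 250_000            # up-front purchase of recovery infrastructure
capex_annual_maintenance = 30_000   # support contracts, power, patching per year
opex_monthly_subscription = 9_000   # hypothetical DRaaS subscription per month
years = 3

capex_total = capex_hardware + capex_annual_maintenance * years
opex_total = opex_monthly_subscription * 12 * years

print(f"{years}-year CAPEX (do-it-yourself): ${capex_total:,}")
print(f"{years}-year OPEX (DRaaS subscription): ${opex_total:,}")

The point of the sketch is not the totals themselves but the shape of the spending: the CAPEX model front-loads a large outlay, while the OPEX model spreads predictable monthly charges, which is what makes cost control easier on tight budgets.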

Secondly, cloud disaster recovery allows IT workloads to be replicated from virtual or physical environments. Outsourcing disaster recovery management ensures that your key workloads are protected and that the disaster recovery process is tuned to your business priorities and compliance needs, while also freeing up your IT resources.

Finally, cloud disaster recovery is flexible and scalable; it allows an organisation to replicate business-critical information to the cloud environment either as a primary point of execution or as a backup for physical server systems. Furthermore, the time and expense to recover an organisation’s data is minimised, resulting in reduced business disruption.

By contrast, a key disadvantage of local backups is that they can be targeted by malicious software, which proactively searches for backup applications and database backup files and fully encrypts the data. Additionally, local backups, especially when organisations try to recover quickly, are prone to unacceptable Recovery Point Objectives (RPOs).
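
As a rough illustration of why infrequent backups drive poor RPOs, the short Python sketch below estimates the worst-case data loss window from the backup interval. The schedules used are assumptions for illustration only.

# Worst-case data loss (effective RPO) is roughly the gap between recovery points:
# anything written since the last good copy is lost.
def worst_case_rpo_minutes(backup_interval_hours):
    return backup_interval_hours * 60

print(worst_case_rpo_minutes(24))    # assumed nightly local backup: up to 1440 minutes of data at risk
print(worst_case_rpo_minutes(0.25))  # assumed 15-minute cloud replication: up to 15 minutes at risk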

What to look for when evaluating your cloud provider

When it comes to your online backups, it is essential to strike a balance between micromanaging the operations and completely relinquishing responsibility. After all, it’s important to know what’s going on with your backups. Given the critical nature of backing up and recovering your data, it is essential to do your homework before simply handing over backup operations to a cloud provider. There are a number of things that you should look for when evaluating a provider.
  • Service-level agreements that meet your needs.
  • Frequent reporting, and management visibility through an online portal.
  • All-inclusive pricing.
  • Failover assistance at a moment’s notice.
  • Do-it-yourself testing.
  • Flexible network layer choices.
  • Support for legacy systems.
  • Strong security and compliance standards.
These capabilities can go a long way towards allowing an organisation to check on its data recovery and backups on an as-needed basis, while also instilling confidence that the provider is protecting the data according to your needs. The right provider should also give you the flexibility to spend as much or as little time on data protection as your requirements dictate.

Ultimately, cloud backups and DRaaS are flexible and scalable, allowing an organisation to replicate business-critical information to the cloud either as a primary point of execution or as a backup for physical server systems. In most cases, the right disaster recovery provider will offer better recovery time objectives than your company could achieve on its own, in-house. Therefore, as you review your options, cloud DR could be the perfect solution: flexible enough to deal with an uncertain economic and business landscape.

Top Five Most Infamous DDoS Attacks

Guest article by Adrian Taylor, Regional VP of Sales for A10 Networks 

Distributed Denial of Service (DDoS) attacks are now everyday occurrences. Whether you’re a small non-profit or a huge multinational conglomerate, your online services—email, websites, anything that faces the internet—can be slowed or completely stopped by a DDoS attack. Moreover, DDoS attacks are sometimes used to distract your cybersecurity operations while other criminal activity, such as data theft or network infiltration, is underway. 
DDoS attacks are getting bigger and more frequent
The first known Distributed Denial of Service attack occurred in 1996, when Panix, now one of the oldest internet service providers, was knocked offline for several days by a SYN flood, a technique that has since become a classic DDoS attack. Over the following years DDoS attacks became common, and Cisco predicts that the total number of DDoS attacks will double from the 7.9 million seen in 2018 to over 15 million by 2023.

But it’s not just the number of DDoS attacks that is increasing; the bad guys are creating ever-bigger botnets – the term for the armies of hacked devices used to generate DDoS traffic. As the botnets get bigger, the scale of DDoS attacks is also increasing. A Distributed Denial of Service attack of one gigabit per second is enough to knock most organisations off the internet, but we’re now seeing peak attack sizes in excess of one terabit per second, generated by hundreds of thousands, or even millions, of suborned devices. Given that IT services downtime costs companies anywhere from $300,000 to over $1,000,000 per hour, you can see that the financial hit from even a short DDoS attack could seriously damage your bottom line.
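
To put those numbers in perspective, the quick Python sketch below uses the figures quoted above plus an assumed outage length to show how little traffic each bot must contribute to a terabit-scale flood and what a few hours of downtime could cost.

# Per-bot traffic needed to reach a 1 Tbps flood, and the cost of downtime.
# The 3-hour outage is an assumption; the cost range comes from the figures above.
attack_size_bps = 1e12            # 1 terabit per second
bots = 1_000_000                  # one million suborned devices
per_bot_mbps = attack_size_bps / bots / 1e6
print(f"Each bot only needs to send ~{per_bot_mbps:.1f} Mbps")

downtime_hours = 3
low_cost, high_cost = 300_000, 1_000_000
print(f"Estimated loss for a {downtime_hours}-hour outage: "
      f"${downtime_hours * low_cost:,} to ${downtime_hours * high_cost:,}")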

So we’re going to take a look at some of the most notable DDoS attacks to date. Some of our choices are famous for their sheer scale, while others earn their place because of their impact and consequences.

1. The AWS DDoS Attack in 2020
Amazon Web Services, the 800-pound gorilla of everything cloud computing, was hit by a gigantic DDoS attack in February 2020. This was the largest DDoS attack reported to date, and it targeted an unidentified AWS customer using a technique called Connectionless Lightweight Directory Access Protocol (CLDAP) Reflection. This technique relies on vulnerable third-party CLDAP servers and amplifies the amount of data sent to the victim’s IP address by 56 to 70 times. The attack lasted for three days and peaked at an astounding 2.3 terabits per second. While the disruption caused by the AWS DDoS attack was far less severe than it could have been, the sheer scale of the attack, and the implications for AWS hosting customers potentially losing revenue and suffering brand damage, are significant.
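
As a back-of-the-envelope illustration of how reflection amplification scales, the sketch below applies the 56-70x range quoted above to an assumed amount of spoofed request traffic; the attacker bandwidth figure is purely hypothetical and not a detail of the actual AWS attack.

# Reflection amplification: the attacker spoofs the victim's IP in small requests
# to open CLDAP servers, which send much larger responses to the victim.
spoofed_request_gbps = 40          # assumed outbound capacity of the attacker's botnet
low_amp, high_amp = 56, 70         # CLDAP amplification range cited above
low_tbps = spoofed_request_gbps * low_amp / 1000
high_tbps = spoofed_request_gbps * high_amp / 1000
print(f"~{low_tbps:.1f} to {high_tbps:.1f} Tbps of reflected traffic hits the victim")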

2. The Mirai Krebs and OVH DDoS Attacks in 2016
On September 20, 2016, the blog of cybersecurity expert Brian Krebs was assaulted by a DDoS attack in excess of 620 Gbps, which, at the time, was the largest attack ever seen. Krebs had recorded 269 DDoS attacks since July 2012, but this attack was almost three times bigger than anything his site or, for that matter, the internet had seen before.

The source of the attack was the Mirai botnet, which, at its peak later that year, consisted of more than 600,000 compromised Internet of Things (IoT) devices such as IP cameras, home routers, and video players. Mirai had been discovered in August that same year but the attack on Krebs’ blog was its first big outing.

Another Mirai attack, on September 19, targeted one of the largest European hosting providers, OVH, which hosts roughly 18 million applications for over one million clients. This attack was aimed at a single undisclosed OVH customer, was driven by an estimated 145,000 bots generating a traffic load of up to 1.1 terabits per second, and lasted about seven days. The Mirai botnet was a significant step up in how powerful a DDoS attack could be. The size and sophistication of the Mirai network were unprecedented, as was the scale of the attacks and their focus.

3. The Mirai Dyn DDoS Attack in 2016
Before we discuss the third notable Mirai DDoS attack of 2016, there’s one related event that should be mentioned: On September 30, someone claiming to be the author of the Mirai software released the source code on various hacker forums and the Mirai DDoS platform has been replicated and mutated scores of times since.

On October 21, 2016, Dyn, a major Domain Name Service (DNS) provider, was assaulted by a one terabit per second traffic flood that set a new record for a DDoS attack. There is some evidence that the attack may have actually reached 1.5 terabits per second. The traffic tsunami knocked Dyn’s services offline, rendering a number of high-profile websites, including GitHub, HBO, Twitter, Reddit, PayPal, Netflix, and Airbnb, inaccessible. Kyle York, Dyn’s chief strategy officer, reported: “We observed 10s of millions of discrete IP addresses associated with the Mirai botnet that were part of the attack.”

Mirai supports complex, multi-vector attacks that make mitigation difficult. Even though Mirai was responsible for the biggest assaults up to that time, the most notable thing about the 2016 Mirai attacks was the release of the Mirai source code enabling anyone with modest information technology skills to create a botnet and mount a Distributed Denial of Service attack without much effort.

4. The Six Banks DDoS Attack in 2012
On March 12, 2012, six U.S. banks were targeted by a wave of DDoS attacks—Bank of America, JPMorgan Chase, U.S. Bank, Citigroup, Wells Fargo, and PNC Bank. The attacks were carried out by hundreds of hijacked servers from a botnet called Brobot with each attack generating over 60 gigabits of DDoS attack traffic per second.

At the time, these attacks were unique in their persistence: Rather than trying to execute one attack and then backing down, the perpetrators barraged their targets with a multitude of attack methods in order to find one that worked. So, even if a bank was equipped to deal with a few types of DDoS attacks, they were helpless against other types of attack.

The most remarkable aspect of the bank attacks in 2012 was that the attacks were, allegedly, carried out by the Izz ad-Din al-Qassam Brigades, the military wing of the Palestinian Hamas organisation. Moreover, the attacks had a huge impact on the affected banks in terms of revenue, mitigation expenses, customer service issues, and the banks’ branding and image.

5. The GitHub Attack in 2018
On Feb. 28, 2018, GitHub—a platform for software developers—was hit with a DDoS attack that clocked in at 1.35 terabits per second and lasted for roughly 20 minutes. According to GitHub, the traffic was traced back to “over a thousand different autonomous systems (ASNs) across tens of thousands of unique endpoints.”

Even though GitHub was well prepared for a DDoS attack, their defences were overwhelmed—they simply had no way of knowing that an attack of this scale would be launched.

The GitHub DDoS attack was notable for its scale and the fact that the attack was staged by exploiting a standard command of Memcached, a database caching system for speeding up websites and networks. The Memcached DDoS attack technique is particularly effective as it provides an amplification factor – the ratio of the attacker’s request size to the amount of DDoS attack traffic generated – of up to a staggering 51,200 times.
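
For a sense of what a 51,200x amplification factor means in practice, the sketch below applies it to a single small spoofed request; the request size is an assumption for illustration.

# Worst-case Memcached amplification: one tiny spoofed UDP request can trigger
# a huge response aimed at the victim's address.
request_bytes = 200                 # assumed size of one spoofed request
amplification_factor = 51_200       # worst-case factor cited above
response_bytes = request_bytes * amplification_factor
print(f"A {request_bytes}-byte request can trigger ~{response_bytes / 1e6:.1f} MB of attack traffic")
# At that rate, a few thousand requests per second is already terabit-scale traffic.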

And that concludes our top five line up – it is a sobering insight into just how powerful, persistent and disruptive DDoS attacks have become.

NDAA Conference: Opportunity to Improve the Nation’s Cybersecurity Posture

As Congress prepares to return to Washington in the coming weeks, finalizing the FY2021 National Defense Authorization Act (NDAA) will be a top priority. The massive defense bill features several important cybersecurity provisions, from strengthening CISA and promoting interoperability to creating a National Cyber Director position in the White House and codifying FedRAMP.

These are vital components of the legislation that conferees should work together to include in the final version of the bill, including:

Strengthening CISA

One of the main recommendations of the Cyberspace Solarium Commission’s report this spring was to further strengthen CISA, an agency that has already made great strides in protecting our country from cyberattacks. An amendment to the House version of the NDAA would do just that, by giving CISA additional authority it needs to effectively hunt for threats and vulnerabilities on the federal network.

Bad actors, criminal organizations and even nation-states are continually looking to launch opportunistic attacks. Giving CISA additional tools, resources and funding needed to secure the nation’s digital infrastructure and secure our intelligence and information is a no-brainer and Congress should ensure the agency gets the resources it needs in the final version of the NDAA.

Promoting Interoperability

Perhaps now more than ever before, interoperability is key to a robust security program. As telework among the federal workforce continues and expands, an increased variety of communication tools, devices and networks put federal networks at risk. Security tools that work together and are interoperable better provide a full range of protection across these environments.

The House version of the NDAA includes several provisions to promote interoperability within the National Guard, the military, and across the Federal government. The Senate NDAA likewise includes language that requires the DoD to craft regulations facilitating its access to and utilization of system, major subsystem, and major component software-defined interfaces, in order to advance DoD’s efforts to generate diverse and effective kill chains. The regulations and guidance would also apply to purely software systems, including business systems and cybersecurity systems. These regulations would also require acquisition plans and solicitations to incorporate mandates for the delivery of system, major subsystem, and major component software-defined interfaces.

For too long, agencies have leveraged a grab bag of tools that each served a specific purpose, but didn’t offer broad, effective coverage. Congress has a valuable opportunity to change that and encourage more interoperable solutions that provide the security needed in today’s constantly evolving threat landscape.

Creating a National Cyber Director Position

The House version of the NDAA would establish a Senate-confirmed National Cyber Director within the White House, in charge of overseeing digital operations across the federal government. This role, a recommendation of the Cyberspace Solarium Commission, would give the federal government a single point person for all things cyber.

As former Rep. Mike Rogers argued in an op-ed published in The Hill last month, “the cyber challenge that we face as a country is daunting and complex.” We face new threats every day. Coordinating cyber strategy across the federal government, rather than the agency-by-agency approach we have today, is critical to ensuring we stay on top of threats and effectively protect the nation’s critical infrastructure, intellectual property and data from attack.

Codifying FedRAMP

The FedRAMP Authorization Act, included in the House version of the NDAA, would codify the FedRAMP program and give it formal standing for Congressional review, a critical step towards making the program more efficient and useful for agencies across the government. Providing this program more oversight will further validate FedRAMP-approved products from across the industry as safe and secure for federal use. The FedRAMP authorization bill also includes language that will help focus the Administration’s attention on the need to secure the vulnerable spaces between and among cloud services and applications. Agencies need to focus on securing these seams, since sophisticated hackers target the gaps that too often are left unprotected.

Additionally, the Pentagon has already committed to FedRAMP reciprocity. FedRAMP works – and codifying it to bring the rest of the Federal government into the program would offer an excellent opportunity for wide-scale cloud adoption, something the federal government would benefit greatly from.

We hope that NDAA conferees will consider these important cyber provisions and include them in the final version of the bill, and we look forward to continuing our work with government partners on important cyber issues like these.
