Category Archives: Application Security

How cloud technology is transforming the healthcare industry

All over the world, many governments face countless issues in their quest for a digitised health service. The healthcare system, as a whole, faces unprecedented challenges, thanks to a reduction

The post How cloud technology is transforming the healthcare industry appeared first on The Cyber Security Place.

Can a Mature Bug Bounty Program Help Solve the Security Talent Shortage?

The IT skills gap has become a cybersecurity risk in its own right. As the security talent shortage increases, many organizations are considering alternatives to traditional hiring, including bug bounty programs, which offer formalized rewards for third-party disclosure of vulnerabilities.

These programs aren’t new: Web browser vendor Netscape launched one of the first bug bounty programs in 1995 when it offered cash rewards to users who discovered security flaws in Netscape Navigator 2.0. While the concept is nearly 25 years old — and familiar to many security leaders — adoption remains relatively low.

What common challenges do organizations encounter when rolling out bug bounty initiatives? How can they overcome these obstacles to maximize their return and soften the impact of the security talent shortage?

Bug Bounties Are No Silver Bullet for Security

Even the most outspoken proponents of bug bounty programs recognize that application vulnerability programs do not constitute a silver bullet solution to close the cybersecurity talent gap. As Katie Moussouris, a subject matter expert and MIT Sloan School of Management visiting scholar, stated in her presentation at the 2018 RSA Conference, bug bounty programs have created perverse incentives for extortion.

However, there’s almost certainly a role for application vulnerability disclosure programs, and the right approach to these initiatives could help solve talent woes. The success of such a program depends on its maturity level, including capacity planning and triage labor for disclosed vulnerabilities.

A 2018 global survey from bug bounty platform HackerOne revealed vital insights into the motivations and demographics of 1,698 self-identified white-hat hackers, who make up the majority of bug bounty hunters around the world. Surprisingly, the survey found that most white hats are more interested in satisfying their curiosity and developing their hacking skills than earning money for their efforts. Roughly 15 percent of respondents cited a desire to learn new techniques, while 14 percent said they participate in vulnerability disclosure programs to challenge themselves.

The money can still be a significant draw for top security talent in global markets. According to the survey, top ethical hackers based in India out-earn median software engineers by 16 times. Top researchers worldwide, meanwhile, earn 2.7 times more than typical software engineers. While 37 percent of ethical hackers identify as hobbyists, many find the pursuit lucrative — with 12 percent earning at least $20,000 per year.

Perhaps the most important takeaway from this research is ethical hackers’ desire to share their findings. Nearly one in four reported failing to disclose a vulnerability because they were unable to find a formal channel for reporting, while 13 percent said they participate in ethical hacking merely because they like a particular brand.

Deriving Value From Bug Bounty Programs

Bug bounties are big business, but many organizations have failed to derive much value from these initiatives. According to Moussouris’ RSA Conference presentation, the security vulnerability program at one major tech company receives 200,000 reports each year. The majority of reported vulnerabilities sent through this channel are related to cross-site scripting (XSS). For organizations with mismanaged vulnerability programs and poor triage processes, bug bounty programs could present a unique drain on resources.

“Capacity planning [and] maturity is the right way forward,” Moussouris noted. In her presentation, she encouraged organizations to create success road maps for vulnerability disclosure — and asserted that it’s time to consider the difference between “paying for bugs versus actually becoming more secure.”

Moussouris also had other suggestions for organizations:

  • Understand the majority of bug bounty flaws and prioritize fixing these vulnerabilities internally.
  • Avoid “low-hanging fruit” security flaws that cause the majority of data breaches, such as insecure S3 buckets.
  • Understand that bug bounty programs are not a path to comprehensive security.
  • Avoid compensating bug bounty hunters better than employees to protect morale.

Standardizing Vulnerability Disclosure

With over two decades of bug bounty history to draw from, enterprises hoping to adopt or refine their application vulnerability disclosure programs have plenty of best practices, models and guidelines to reference. As these programs mature, adopting standardized methodologies for vulnerability identification, disclosure, triage and handling becomes critical.

For vulnerability disclosure, ISO 29147 offers a unified framework for identifying internal and external flaws. This framework can help organizations organize and scale reporting; assign risk and impact; and triage vulnerabilities based on risk. ISO 30111, meanwhile, provides standardized guidelines for vulnerability handling, including responding to and resolving identified flaws.

Bug bounties are not a replacement for third-party penetration testing. While thousands of security researchers around the globe self-identify as white-hat hackers, individuals who participate in vulnerability disclosure programs are motivated by myriad factors — ranging from financial gain to pure curiosity.

Moussouris also encouraged organizations to apply behavioral economics principles when establishing bug bounty rates to attract the right contributions: avoiding overcompensating bounty hunters protects employee morale, while practicing good internal security hygiene attracts talented researchers and contributions.

Making a Dent in the Security Talent Shortage

There’s a place for bug bounty programs within a comprehensive security framework. Third-party researchers can discover security vulnerabilities that were missed by internal and external testing processes. Organizations with carefully structured programs can maximize the latent talent in the white-hat hacker force by carefully managing incentives.

Perhaps more importantly, offering large bounties for tough problems — while managing simple vulnerabilities internally — can enable organizations to leverage vulnerability disclosure programs as a viable tool for recruiting top talent.

Bug bounty programs are unlikely to solve the security talent shortage completely, and they’re certainly no replacement for comprehensive security testing and internal vulnerability identification and handling processes. However, organizations can benefit significantly from formalized channels for vulnerability disclosure by understanding these programs’ relative strengths and weaknesses.

The post Can a Mature Bug Bounty Program Help Solve the Security Talent Shortage? appeared first on Security Intelligence.

Mobile App Security Risky Across Sectors

While mobile app security is an issue across all sectors, 50% of apps that come from media and entertainment businesses are putting users at risk. New research from BitSight found that a

The post Mobile App Security Risky Across Sectors appeared first on The Cyber Security Place.

Clustering App Attacks with Machine Learning (Part 2): Calculating Distance

In our previous post in this series we discussed our motivation to cluster attacks on apps, the data we used and how we enriched it by extracting more meaningful features out of the raw data. We talked about the many features that can be extracted from IP and URL. In this blog post we’ll discuss one of the more difficult and important tasks in any clustering algorithm – how to calculate distance.

Measuring Distance Between Attacks

The next thing we need to do is determine a way to calculate the distance between two attacks. This is a core stage of the algorithm, as it determines when two attacks are similar, which in general is what the algorithm is trying to achieve. Calculating a distance between two points in the plane is easy – there is a precise formula to do it – but how can we calculate the distance between two URLs, or two IPs?

We will need to find a method to calculate the distance for every meaningful feature we have in our data. Here we will suggest a couple of methods to do so for features where the calculation is not trivial. There is no universal truth about which distance method is best; some work better in certain situations than others.

It is important to note that although we may use a different method to calculate the distance for each feature, all the distances need to be on the same scale, otherwise the results may be biased. For example, if distances in feature A range between 0 and 1 while distances in feature B range between 0 and 100, feature B will dominate the calculation: a distance of 1 is the maximum possible value for feature A, yet on feature B's scale it would, inaccurately, appear extremely small.

Here are some methods for calculating distance:

Levenshtein Distance

The need to calculate the distance between two strings is very common in our data, with one of the most notable features being the URL and all the extra features we extracted from it. The Levenshtein distance, which is part of the larger family of edit distances, is a well-known measure of the distance between two strings.

The Levenshtein distance between two strings is the minimum number of single-character edits required to change one string into the other. By edits we mean insertion, deletion or substitution of a single character. See the following example (Figure 1), where the Levenshtein distance between the URL /pictures/cat.jpg and the URL /pictures/dog.jpg is three:


Figure 1: Levenshtein distance is the minimum number of single-character edits required to change one string into the other.

Note that the Levenshtein distance is bounded above by the length of the longer string. That's why long URLs tend to have higher distances between them than short URLs. To reduce this bias, we can scale the distance to be between 0 and 1 by dividing it by the length of the longer of the two strings.
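
To make this concrete, here is a minimal Python sketch of a normalized Levenshtein distance. The function names and the simple dynamic-programming implementation are our own illustration, not the production code:

  def levenshtein(a: str, b: str) -> int:
      """Minimum number of single-character insertions, deletions or
      substitutions needed to turn string a into string b."""
      # Classic dynamic-programming formulation, computed one row at a time.
      prev = list(range(len(b) + 1))
      for i, ca in enumerate(a, start=1):
          curr = [i]
          for j, cb in enumerate(b, start=1):
              cost = 0 if ca == cb else 1
              curr.append(min(prev[j] + 1,          # deletion
                              curr[j - 1] + 1,      # insertion
                              prev[j - 1] + cost))  # substitution
          prev = curr
      return prev[-1]

  def url_distance(url1: str, url2: str) -> float:
      """Levenshtein distance scaled to [0, 1] by the longer string."""
      if not url1 and not url2:
          return 0.0
      return levenshtein(url1, url2) / max(len(url1), len(url2))

  # The example from Figure 1: three edits, scaled by the URL length.
  print(url_distance("/pictures/cat.jpg", "/pictures/dog.jpg"))  # 3/17, about 0.176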

Discrete Distance

If x and y are two samples of any data (they can be thought of as two URLs or two countries), then the discrete distance is defined as d(x, y) = 0 if x = y, and d(x, y) = 1 otherwise. In simple terms, if the two objects are exactly the same, the distance between them is zero; otherwise it is one. This distance measure might seem simple, even too simple, but in certain situations it works extremely well.

For example, say we want to calculate the distance between two resource extensions of the URL. Many resource extensions are words with three characters (css, php, jpg, etc.), and comparing similarity between two very short words is not always a good option. Also, in this case, different resource extensions usually suggest that the attacks targeted different kinds of pages, which might indicate that these are actually two different attacks and shouldn't be in the same cluster.
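
As a sketch (the function name is ours), the discrete distance for a feature such as the resource extension could be as simple as:

  def discrete_distance(x, y) -> float:
      """0 if the two values are identical, 1 otherwise."""
      return 0.0 if x == y else 1.0

  print(discrete_distance("php", "php"))  # 0.0
  print(discrete_distance("php", "jpg"))  # 1.0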

Distance Between IPs

Here we only deal with IPv4, which is an IP in the format “number4.number3.number2.number1”, where each number is between 0 and 255. IPv6 has different distance measures that relate to its structure. We suggest a few methods to calculate the distance between two different IPs. The important thing to determine is under what conditions two IPs are similar to each other.

First, consider the condition that two IPs are similar if their numbers are close, where numbers to the left are more significant than numbers to the right. So, for example, if we have the following IPs:

IP-1 = 203.132.63.117
IP-2 = 203.132.63.54
IP-3 = 203.134.89.117

IP-1 is closer to IP-2 than to IP-3.

IP as Four-Dimensional Data

An IPv4 address consists of four numbers between 0 and 255, so we can think of it as a point in a four-dimensional space. It is easy to calculate the distance between two points in a four-dimensional space, but in our case we must also account for the fact that the numbers to the left are more important than the numbers to the right. Given two IPs:

IP-1 = X4.X3.X2.X1
IP-2 = Y4.Y3.Y2.Y1

We can calculate the distance using a weighted Euclidean distance in the following form:

d(IP-1, IP-2) = 1000·(X4 − Y4)² + 100·(X3 − Y3)² + 10·(X2 − Y2)² + 1·(X1 − Y1)²

The weights here are 1, 10, 100, 1000, and can be changed to fit the desired result. Different weights can give different scales of data, therefore it is recommended to normalize the distance to have values between 0 and 1. We can also take a square root to make the distance more like a regular Euclidean distance, but it is not necessary.
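
Here is a minimal Python sketch of this four-dimensional distance. The exact weight assignment (1,000 for the leftmost octet down to 1 for the rightmost) and the normalization by the largest possible value are our reading of the description above, not necessarily the production values:

  def ip_distance_4d(ip1: str, ip2: str) -> float:
      """Weighted Euclidean-style distance between two IPv4 addresses,
      treating each address as a point in a four-dimensional space."""
      octets1 = [int(part) for part in ip1.split(".")]
      octets2 = [int(part) for part in ip2.split(".")]
      weights = [1000, 100, 10, 1]  # leftmost octet is the most significant
      dist = sum(w * (a - b) ** 2
                 for w, a, b in zip(weights, octets1, octets2))
      # Normalize by the largest possible value so results fall in [0, 1].
      max_dist = sum(w * 255 ** 2 for w in weights)
      return dist / max_dist

  print(ip_distance_4d("203.132.63.117", "203.132.63.54"))   # small: same class C
  print(ip_distance_4d("203.132.63.117", "203.134.89.117"))  # larger distance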

IP as 32-Dimensional Data

It might seem very convenient to view an IP as a point in a four-dimensional space, but it is actually more coherent to view it as a point in a 32-dimensional space. Each IPv4 address is represented in 32 bits, because it consists of four numbers between 0 and 255, each of them represented in 8 bits.

For example, the following IP:

IP = 203.132.63.117

Would become:

IP-32bits = 11001011100001000011111101110101

We can utilize this representation to calculate distance between IPs. Let IP1 and IP2 be two IPs in a 32-bit representation, then their distance is:

d(IP1, IP2) = (1/32) · (32 − LeadingZeros(IP1 XOR IP2))

To explain the mathematical formula in words: we do a bitwise XOR between the two IPs and count the leading zeroes (from the left). The distance is 32 minus the number of leading zeroes, multiplied by 1/32 to normalize it to a value between 0 and 1. In effect, this measure counts the number of leading bits (from the left) that the two IPs share.

Although this method may look less intuitive than the previous one, we found it performed a lot better. Also, in this method we don’t have to decide on arbitrary weights and the scaling is very obvious.
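
A minimal Python sketch of that calculation follows (names are ours); it packs each address into its 32-bit integer form, XORs the two values and counts the leading bits they share:

  def ip_to_int(ip: str) -> int:
      """Pack an IPv4 address into its 32-bit integer representation."""
      a, b, c, d = (int(part) for part in ip.split("."))
      return (a << 24) | (b << 16) | (c << 8) | d

  def ip_distance_32bit(ip1: str, ip2: str) -> float:
      """(32 - number of leading bits the two addresses share) / 32."""
      xor = ip_to_int(ip1) ^ ip_to_int(ip2)
      if xor == 0:
          return 0.0
      shared_leading_bits = 32 - xor.bit_length()
      return (32 - shared_leading_bits) / 32

  print(ip_distance_32bit("203.132.63.117", "203.132.63.54"))   # 0.21875 (25 shared bits)
  print(ip_distance_32bit("203.132.63.117", "203.134.89.117"))  # 0.5625 (14 shared bits)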

IP as Geolocation

It is also possible to view the IP as an exact geographic location using coordinates and calculate the distance between two IPs as the physical distance between the coordinates (see Figure 2).


Figure 2: Distance between IPs as geolocation. IP from Paris and IP from Frankfurt.

We found this method less successful than the previous methods described. Instead we used the general geographic location which includes the country and subdivision as separate features from the IP, and calculated the distance between them separately using weighted discrete distance. Here we gave more weight to subdivision than country because two attacks from the same subdivision are much more likely to be similar than two attacks from the same country.
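
A sketch of that weighted discrete distance is shown below; the specific weights are illustrative assumptions, not the values we use in production:

  def geo_distance(country1: str, subdivision1: str,
                   country2: str, subdivision2: str,
                   w_country: float = 1.0, w_subdivision: float = 2.0) -> float:
      """Weighted discrete distance over geolocation features,
      giving the subdivision more weight than the country."""
      d_country = 0.0 if country1 == country2 else 1.0
      d_subdivision = 0.0 if subdivision1 == subdivision2 else 1.0
      # Normalize by the total weight so the result stays in [0, 1].
      total = w_country * d_country + w_subdivision * d_subdivision
      return total / (w_country + w_subdivision)

  print(geo_distance("FR", "Ile-de-France", "FR", "Ile-de-France"))  # 0.0
  print(geo_distance("FR", "Ile-de-France", "FR", "Normandie"))      # about 0.67
  print(geo_distance("FR", "Ile-de-France", "DE", "Hessen"))         # 1.0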

Reducing the Correlation

As you may have noticed, there is a very high correlation between certain features we extracted. For example, two attacks that came from the same IP would also have the same class A, B and C, the same country and the same coordinates. Similarly, when two attacks have the same URL, they also share all the features we enriched from the URL; this is even more obvious when the attacks target a general URL like the home page of an application.

Having correlation between features may produce bad results if we don't handle it correctly. Two attacks that, for example, target the home page of an application may appear identical across many URL-related features, even though those features are all correlated with one another and therefore carry largely redundant information. Hence we need a way to manage the high correlation between the features we enriched.

In order to handle this correlation issue we decided to split all the features into context “dimensions”. Each dimension represents a single general attribute of the attack—for example the attack origin, attack tool, the target or the type of attack. Each feature is assigned to a single dimension, so the IP and all its enriched features were assigned to the origin dimension, the URL and its enriched features to the target dimension, etc. Next, we calculated the distance for every feature in the data and gave an aggregated distance for each dimension, where we normalized all the distances of the dimensions to be in the same scale. The final distance is a weighted score between the distances of each dimension. See Figure 3 for some of the dimensions we used.


Figure 3: Features split into dimensions – each feature is assigned to a single context dimension. There is no correlation between the different dimensions, only inside each dimension. The final distance is a weighted average of the distances from each dimension.

This method splits the features into dimensions which are entirely uncorrelated with each other. Also, we can give different weights to different attributes of the attack as we see fit based on our knowledge of the domain. For example, we saw in our data that the origin and attack type dimensions are more significant than the target dimension, thus we gave the target less weight in the total calculation.
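
The sketch below shows how such a final score could be assembled from per-dimension distances. The dimension names, the weights and the simple averaging inside each dimension are our own illustration of the approach, not the exact production logic:

  # Per-feature distances for one pair of attacks, already scaled to [0, 1],
  # grouped into context dimensions (hypothetical values).
  feature_distances = {
      "origin": {"ip_32bit": 0.22, "geo": 0.0, "anonymity_framework": 0.0},
      "target": {"url": 0.18, "resource_extension": 0.0, "url_pattern": 0.0},
      "attack_type": {"attack_category": 0.0, "violated_rule": 1.0},
  }

  # Illustrative weights: origin and attack type matter more than target.
  dimension_weights = {"origin": 3.0, "attack_type": 3.0, "target": 1.0}

  def attack_distance(features: dict, weights: dict) -> float:
      """Aggregate per-feature distances into per-dimension distances,
      then combine the dimensions with a weighted average."""
      dimension_distance = {
          dim: sum(values.values()) / len(values)  # simple mean inside a dimension
          for dim, values in features.items()
      }
      total_weight = sum(weights[dim] for dim in dimension_distance)
      return sum(weights[dim] * dist
                 for dim, dist in dimension_distance.items()) / total_weight

  print(round(attack_distance(feature_distances, dimension_weights), 3))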

Next Up: Algorithm Results

To conclude, in this post we discussed one of the core stages of the algorithm – the distance calculation. We explained a couple of methods used to calculate distances between complex features like the URL and IP. Each feature has many possible distance methods, each with its own pros and cons, and choosing the best one depends on specific needs and on experiments with test data. Finally, we showed a method to reduce the correlation between features using context dimensions. In the next and final post we'll discuss the clustering algorithm itself. We'll share how to do clustering in real-time scenarios where only a small amount of data can be stored in memory, and show application attacks that the algorithm found in actual customers' data.

Application Security Attacks: Will New NYDFS Regulation Protect NYC Financial Institutions?

You know banks and related financial institutions are primary targets for cyberattacks and other security threats. In fact, notorious 20th-century bank robber Willie Sutton famously said he robbed banks “because that’s where the money is.”

Times really haven’t changed much since then. Even as IT security is tightened, attackers are finding more innovative ways to target financial institutions — which is why it’s imperative to upgrade IT security systems and application security programs regularly.

The banking, financial services and insurance (BFSI) sector is impacted by various regulations that protect such organizations and their customers from potential cyberthreats. The New York Department of Financial Services (NYDFS) introduced a regulation called 23 NYCRR Part 500 for banks, insurers and other financial institutions that operate in New York City. The regulation requires each company to “assess its specific risk-based profile and to tailor a program that addresses the risks identified by self-assessment.”

NYDFS Regulation Aims to Bolster Financial Cybersecurity

The regulation initially came into effect on March 1, 2017 — and it’s the first in the U.S. to mandate such protection by banks, insurers and other financial institutions within the NYDFS’s regulatory jurisdiction. Its overarching goal is to protect institutions’ customer information from potential cyberattacks. Entities impacted by the regulation are required to be in compliance by March 1, 2019.

The regulation specifically addresses several compliance areas, including maintenance of a cybersecurity policy; retention of a chief information security officer (CISO) and other qualified personnel; and the establishment of a written incident response (IR) plan.

In the area of application security, the directive states:

  1. “Each Covered Entity’s cybersecurity program shall include written procedures, guidelines and standards designed to ensure the use of secure development practices for in-house developed applications utilized by the Covered Entity”; and
  2. “All such procedures, guidelines and standards shall be periodically reviewed, assessed and updated as necessary by the CISO (or a qualified designee) of the Covered Entity.”

Don’t Sleep on Application Security

One aspect that’s often neglected during IT security implementation is the importance of securing your organization’s applications. During the development stage, security too often slips through the cracks. However, application security is imperative to protect your organization from security threats.

Your applications house vital, mission-critical data and any security breach could cause significant damage and disruption to your organization and its reputation. Still, security is often lost in the mad dash to accelerate application delivery.

For banks and other financial institutions, application security is even more critical and could become an area of vulnerability if left unaddressed.

With the need to keep track of all of these mind-boggling requirements, you might be wondering where to begin. For starters, security leaders should invest in an IR platform to effectively orchestrate and automate their response and cyber resiliency processes. CISOs must also prepare themselves — and their teams — to deal with myriad IT security issues, such as inadvertent insider threats.

To specifically address your organization’s potential application security challenges, register now for complimentary trials of IBM Security AppScan and IBM Application Security on Cloud. Find out how you can conveniently manage application security risk. IBM’s complimentary risk management e-guide also provides practical guidance to address application security risk more effectively. You can apply lessons learned in the e-guide to all of your current IT security initiatives.

Read the complete e-guide: Five Steps to Achieve Risk-based Application Security Management

The post Application Security Attacks: Will New NYDFS Regulation Protect NYC Financial Institutions? appeared first on Security Intelligence.

Why Isn’t Secure DevOps Being Practiced?

New research reveals that consistent practice of secure development and operations (DevOps) remains a challenge for organizations across industries. Only half of DevOps teams integrate application security testing elements in continuous integration and continuous delivery (CI/CD) workflows — despite widespread awareness of the advantages — according to a May 2018 report, Examining DevSecOps Realities and Opportunities, from Synopsys and 451 Research.

The report surveyed 350 leaders at large enterprises and revealed insight into the state of secure DevOps and perceived barriers. While chief information officers (CIOs) and leaders understand early testing is key to cost control and risk reduction, few teams are practicing secure DevOps in a way that meaningfully reduces risks.

Why Secure Digital Transformation Matters

Fifty percent of respondents across industries are currently using application security testing elements during the DevOps process. While adoption varies by industry, the report found only a 12 percent margin between the highest and lowest adopters by industry. High-tech industries lead with 56 percent adoption, while retail was ranked last at 44 percent integration of app security testing in CI/CD workflows. Most commonly, organizations rely on software analysis scanning solutions, dynamic analysis methodologies and third-party penetration testing when secure DevOps is practiced in the enterprise.

Despite lagging adoption, survey respondents revealed a strong awareness of the benefits of secure DevOps. According to the report, the potential benefits of including application testing in CI/CD workflows include:

  • Improved software quality
  • Meeting compliance and regulatory requirements
  • Reduced risk
  • Faster release processes

Secure DevOps Is Failing to Translate

While awareness is strong among CIOs and other decision-makers, the reasons organizations are failing to translate it into consistent practice are varied. According to the report, respondents cited barriers that can be mapped to technology, process and talent.

When asked what the most significant challenges are, responses included:

  • Lack of “automated, integrated” security testing tools
  • Inconsistent approaches
  • Security testing “slows things down”
  • False positive results from testing solutions
  • Developer resistance

Three out of the top five responses have roots that are at least partially based in education, culture or awareness. Inconsistency, resistance and a belief that secure DevOps bogs down workflows may indicate at least some need for education, new ways of working or other shifts in thinking.

Is Tech the Root of the Problem?

Due to the close relationship between people, processes and technology in a DevOps environment, it’s likely technological barriers are contributing to negative human perceptions and developer resistance. The report put it simply: “Not all security tools are equal, and the less software testing tools can be integrated and automated into enterprise workflows, the less effective they will be in securing CI/CD pipelines.”

As CIOs consider how to optimize the risk, compliance and agility potential of secure DevOps, overcoming challenges may require smarter technology that fits seamlessly into existing CI/CD workflows. When security and third-party security testing contributes to an organization’s goals of software quality and rapid releases, it may be easier to overcome lingering cultural barriers to secure DevOps.

Balancing Risks and Rewards

Meeting compliance requirements for security by design and default within DevOps workflows may not be the ultimate consideration for CIOs. The most mature enterprises demonstrate significant awareness of the role of IT security in the digital transformation process, according to the Ponemon Institute study Bridging the Digital Transformation Divide, sponsored by IBM.

According to the Ponemon study, the best-of-breed organizations meet criteria like achieving “full alignment between IT security and lines of business” and developing a defined secure digital transformation strategy.

While achieving enterprise-wide change is never simple, CIOs must balance risk and reward on the road to greater organizational agility. The report found that failing to address transformation risks can directly result in data breaches. Seventy-four percent of IT security practitioners say it’s “likely” their organization experienced a cybersecurity incident in the past 12 months due to a lack of security in digital transformation processes.

How Mature Organizations Approach Secure Transformation

High-performing organizations demonstrated greater confidence about their security processes, which is directly influenced by the attitudes and actions of senior management, according to the Ponemon study. When asked about leadership’s role in digital transformation, IT security practitioners from the most mature organizations agreed or strongly agreed with the following statements:

  • Investment in emerging security technologies is key, including automation, artificial intelligence (AI) and machine learning
  • Digital transformation creates security risks, which must be managed
  • Adequate funding for IT security is crucial to digital transformation processes
  • Securing digital assets is connected to “trust with customers and consumers”

Not Just a DevOps Problem

There’s a significant risk for enterprises which fail to adopt secure practices in digital transformation, including a failure to bridge the gap between awareness and practice of secure DevOps. These risks can include challenges associated with costly application rework, slower releases, noncompliance, security breaches and loss of consumer trust.

While many CIOs perceive significant barriers to adopting secure CI/CD workflows in DevOps, these challenges may be solved by smarter tools and third-party partnerships. Application testing solutions that increase efficiency and decrease false positives are likely to enable enterprises to unlock the benefits of secure CI/CD workflows while reducing human resistance.

However, the Ponemon study found that the solution to the secure DevOps crisis isn’t just technology. The gap between awareness and adoption may demonstrate insecure digital transformation and a need for leadership to support steps toward enterprise-wide maturity. By understanding that transformation creates risks, leaders can invest wisely in the right emerging technologies to secure digital assets and customer trust.

Read the complete Ponemon Report: Bridging the Digital Transformation Divide

The post Why Isn’t Secure DevOps Being Practiced? appeared first on Security Intelligence.

Jump-Start Your Management of Known Vulnerabilities

Organizations must manage known vulnerabilities in web applications. When it comes to application security, the Open Web Application Security Project (OWASP) Foundation Top 10 is the primary source to start reviewing and testing applications.

The OWASP Foundation list brings some important questions to mind: Which vulnerability in the OWASP Foundation Top 10 has been the root of most security breaches? Which vulnerability among the OWASP Foundation Top 10 has the highest likelihood of being exploited?

While “known vulnerable components” comes in at number nine on the list, it’s the weakness that is most often exploited, according to security firm Snyk. The OWASP Foundation stressed on its website, however, that the issue was still widespread and prevalent: “Depending on the assets you are protecting, perhaps this risk should be at the top of the list.”

So, how can these known vulnerabilities be managed?

Vulnerable Components Can Lead to Breaches

Components in this context are libraries that provide the framework and other functionality in an application. Many cyberattacks and breaches are caused by vulnerable components, a trend that will likely continue, according to Infosecurity Magazine.

Recent examples include the following:

  • Panama Papers: The Panama Papers breach was one of the largest-ever breaches in terms of volume of information leaked. The root cause was an older version of Drupal, a popular open source content management system, as noted by Forbes.
  • Equifax: The Equifax breach was one of the most severe data breaches because of the amount of highly sensitive data it leaked, as noted by Forbes. The root cause was an older version of Apache Struts.

Often, this vulnerability is not given the attention it requires. Many organizations may not even have a proper inventory of dependent libraries. Static code analysis or vulnerability scans usually don’t report components with known vulnerabilities. In many cases, the component versions would have reached their “end of life,” but were still in use.

It’s also worth considering the complexity of managing component licenses. There are many open source licenses with varying terms and conditions. Some licenses are permissive, while others attach conditions of varying strength (strong or weak copyleft). The Open Source Initiative (OSI) lists more than 80 approved licenses.

Most Components Are Older Versions With Known Vulnerabilities

Synopsys reported that more than 60 percent of the libraries in use are older versions with known vulnerabilities. If we take a deep look at our applications’ component profiles, this may not be an exaggeration. Most of the web applications running today use open source components in some way or another.

The popular open source frameworks for web applications include:

  • Struts
  • Spring MVC
  • Spring Boot
  • MyFaces
  • Hibernate in Java
  • Angular
  • Node.js in JavaScript
  • CSLA framework in .NET
  • Many PHP, Python and Ruby frameworks

There are also many object-relational mapping components, reporting tools, message broker components and a plethora of other utility components to consider. These components offer organizations significant advantages in terms of cost, future readiness and support for digital transformation. They are also backed by wide developer communities that actively develop and maintain them.

But are you using an older version of these components? Do they have reported vulnerabilities? Common Vulnerabilities and Exposures (CVE) entries for components are listed in MITRE’s CVE list and the National Vulnerability Database (NVD).

More Than 80 Types of Various Open Source Licenses

Managing open source licenses is an important activity for an organization’s open source strategy and legal and standard compliance programs. Managing licenses for components can be complex. Due care must be given to note the license version, as some may have significantly different terms and conditions from one version to another. Developers may add open source libraries to applications without giving much thought about licenses.

The perception is that open source is “free.” However, the fact is it’s “free” with conditions attached to its usage.

If we review the license clauses carefully, the requirements are more stringent when it comes to distributed software. Reviewing the license requirements of a component will also include reviewing the licenses of transitive dependencies or pedigrees — the components on which it is built. Open source compliance programs usually cover software installed on machines but may not cover the libraries used by web applications.

Automate to Identify Components With Known Vulnerability and License Risks

NVD uses Common Platform Enumeration (CPE) as the structured naming scheme for information technology (IT) systems, software and packages. The tools that automate the process get the CPE dictionary and CVE feed from NVD. The feeds are available in JSON or XML formats. The tools parse the feeds and scan through them with the CPE to provide reports.
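
As a rough illustration of that workflow, the Python sketch below scans a downloaded NVD JSON feed for entries whose CPE identifiers mention a given product. The feed file name and the “CVE_Items”/“CVE_data_meta” keys reflect the legacy NVD JSON 1.1 feed layout as we understand it; treat them as assumptions and adapt the code to the feed format you actually use:

  import json

  def collect_cpe_uris(node) -> list:
      """Recursively collect every 'cpe23Uri' value found in a feed entry."""
      uris = []
      if isinstance(node, dict):
          for key, value in node.items():
              if key == "cpe23Uri":
                  uris.append(value)
              else:
                  uris.extend(collect_cpe_uris(value))
      elif isinstance(node, list):
          for item in node:
              uris.extend(collect_cpe_uris(item))
      return uris

  def find_cves_for_product(feed_path: str, product_fragment: str) -> list:
      """Return CVE IDs from an NVD JSON feed whose CPEs mention the product,
      e.g. product_fragment='apache:struts'."""
      with open(feed_path, encoding="utf-8") as handle:
          feed = json.load(handle)
      matches = []
      for item in feed.get("CVE_Items", []):
          cve_id = item["cve"]["CVE_data_meta"]["ID"]
          if any(product_fragment in uri for uri in collect_cpe_uris(item)):
              matches.append(cve_id)
      return matches

  # Hypothetical usage with a locally downloaded feed file:
  # print(find_cves_for_product("nvdcve-1.1-2018.json", "apache:struts"))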

OWASP provides Dependency-Check, which identifies reported vulnerabilities in project dependencies. It’s easy to use from the command line or integrated into the build process, and it has plug-ins for popular build management tools, including Maven, Ant, Gradle and Jenkins. The build tool Maven also has a “site” plug-in; running the “mvn site” command produces an application-specific report that also shows the license information for dependencies.

There are many other commercial tools with more sophisticated functionality beyond vulnerability identification and license listing. There are also sources other than the NVD and MITRE’s CVE list that provide details on known vulnerabilities, such as RubySec, the Node Security Platform and many bug tracking systems.

IBM Application Security on Cloud has an Open Source Analyzer to identify component vulnerabilities. It’s recommended to integrate the tools in the build process, so the component profile is taken at the earliest stage of the development phase. This allows users to monitor the component profile during maintenance and enhancements.

Addressing Component Issues: Upgrade, Replace or Migrate

The most important step in managing open source licenses is to have a policy on acceptable licenses. The policy has to be created in consultation with your legal department. The policy should be reviewed periodically and kept up-to-date. Building an inventory of components is also important.

Once components have been checked for vulnerabilities and for compliance with the license policy, addressing the findings is context specific. You can either upgrade to the latest version or replace the components with alternatives. This requires a risk-based approach and planning. Framework upgrades — or moving to a different framework or technology — could require significant development effort. The approach has to be decided based on risk and cost, considering all alternative deployment models and technologies.

Upgrading components or migrating can be rewarding. In addition to addressing security issues, it can provide an opportunity to improve the performance of the applications and address compatibility issues because of older component versions.

Component management is a continuous process, as vulnerabilities are frequently reported — even in the latest versions. Obviously, it’s not practical to upgrade or migrate each time an issue is reported; often patches (minor version upgrades) will be available to address the issues. Component management should be given adequate consideration and must be an integral part of an organization’s application security and compliance programs.

The post Jump-Start Your Management of Known Vulnerabilities appeared first on Security Intelligence.

Clustering App Attacks with Machine Learning Part 1: A Walk Outside the Lab

A lot of research has been done on clustering attacks of different types using machine learning algorithms with high rates of success. Much of it from the comfort of a research lab, with specific datasets and no performance limitations.

At Imperva, our research is done for the benefit of real customers, solving real problems. Data sets can vary, and performance constraints are important, if not critical. We were recently tasked by our engineering team with clustering application attacks in near real-time scenarios where performance is a key factor. The requirements list was long. So were the challenges. Bottom line, we found reality punches lab statistics in the face. (Not that any actual punches were thrown. The first rule of Imperva research is you talk about Imperva research!)

With that said, in this three-part blog series we’ll share interesting insights and discuss some of the challenges we met—and overcame—as part of our research, such as:

  • Applying a clustering algorithm to a stream of data
  • Extracting meaningful features from limited data
  • Translating different features and determining distance calculations

In this first blog post we start with the motivation for clustering attacks. We’ll discuss the data used for this task, and how we enriched it by adding meaningful features to the raw data, specifically the IP and the URL.

Why Cluster Attacks

Our goal of clustering attacks on web applications was two-fold: 1) finding interesting patterns inside the attacks, and 2) making it feasible to navigate the massive amounts of attacks. Clustering can help us create a “story” out of the attacks (naming them based on behavior), making them easier for a human observer to understand and analyze. For example, when seeing a cluster called “SQL injection attack from China using a Havij scanner”, the story behind it is much clearer than analyzing the hundreds of attacks the cluster contains and trying to find the common ground between them.

Read: Five Ways Imperva Attack Analytics Helps You Cut Through the Event Noise

The Raw Data

The raw data that entered our algorithm was an HTTP request (see Figure 1) with additional metadata fields added by the WAF that stopped or alerted on the attack. These extra fields include the time the request was received, the IP of the attacker, the attack that was found in the request and sometimes additional information about the attacked application.


Figure 1: HTTP request – each request contains a request line with method, URL and protocol, the headers of the request and the parameters

We can’t just ingest this data as is into a machine learning algorithm and expect it to cluster correctly. We first need to convert data into a structured object which contains all the fields that are interesting to us. After that we need to enrich the data to get as much meaningful information out of it as we can in order to improve our clustering results.

Data Enrichment

The goal of the enrichment process is to extract more meaningful features from the raw data. This way we can feed our algorithm more features, which hopefully will give better results. In this phase we extract features which may be correlated. In general, it’s best practice to reduce correlation between different features before ingesting the data into a machine learning algorithm, but correlation isn’t a priority at this stage; we’ll deal with it later on. Here our goal is only to extract as many meaningful features as we can.

Almost every part of the raw data can be structured as a feature and enriched into other features. For example, the headers of the HTTP request may imply which tool was used to attack, and the type of attack that was found may imply which system the attacker was trying to target. Here we’ll dive into two important features, the IP and the URL, and how we can extract additional features from them, although every other feature in the data also offers many enrichment possibilities.

All About the IP

In each request we receive the source IP, that is, the IP from which the attack originated. This is a very important feature as it indicates the origin of the attack. The attacker may use a proxy or an anonymity framework to hide himself, or he may reveal his true origin. In any case all of these features are important to us and can enrich the data significantly.

Class A, B and C

Say we have the following IP: 157.42.65.201. Its class C is 157.42.65.* — or all the IPs that start with 157.42.65 — and each class C contains 256 different IP addresses. In the same manner, its class B is 157.42.*.*, which contains 256² = 65,536 IPs, and its class A is 157.*.*.*, which contains 256³ = 16,777,216 IPs. Two attacks originating from the same class of IPs may indicate a connection between them, and the smaller the class, the stronger the connection. In our data we saw many attacks with the same attributes originating from different IPs in the same class.
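
A small sketch of this enrichment step (the function name is ours):

  def ip_class_prefixes(ip: str) -> dict:
      """Derive the class C, B and A prefixes used to relate nearby IPs."""
      octets = ip.split(".")
      return {
          "class_c": ".".join(octets[:3]) + ".*",    # 256 addresses
          "class_b": ".".join(octets[:2]) + ".*.*",  # 256^2 = 65,536 addresses
          "class_a": octets[0] + ".*.*.*",           # 256^3 = 16,777,216 addresses
      }

  print(ip_class_prefixes("157.42.65.201"))
  # {'class_c': '157.42.65.*', 'class_b': '157.42.*.*', 'class_a': '157.*.*.*'}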

Geolocation

The IP corresponds to a geographic location, whose extraction requires the use of a GIS or geolocation source. This feature enables us to find similarity between IPs, even from different classes. The geolocation may include country, subdivision or region (such as state or county), city, and geographic coordinates (see Figure 2).


Figure 2: IP as geolocation taken from an online database

In our experience using a combination of country and subdivision gives the best results, while using city or the coordinates is too high-resolution and loses the bigger picture in the process.
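
For illustration, a lookup along these lines could be done with MaxMind's geoip2 package and a GeoLite2 City database file; the package, the database path and the chosen fields are assumptions for this sketch rather than a description of our pipeline:

  import geoip2.database  # assumed dependency: the MaxMind geoip2 package

  def enrich_ip_with_geo(ip: str, db_path: str = "GeoLite2-City.mmdb") -> dict:
      """Look up the country and subdivision used as geolocation features."""
      with geoip2.database.Reader(db_path) as reader:
          response = reader.city(ip)
          return {
              "country": response.country.iso_code,
              "subdivision": response.subdivisions.most_specific.name,
          }

  # Hypothetical usage (requires a downloaded GeoLite2-City.mmdb file):
  # print(enrich_ip_with_geo("157.42.65.201"))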

Anonymity Framework

Many attackers launch their attacks from anonymous origins, or use proxies. This practice enables them to cover their tracks, and launch attacks without being identified by their target. An attacker from the US can launch an attack using Tor and identify himself as if he is from Romania, and a few seconds later launch another attack identifying himself as being from Argentina. The geolocation of an IP that launches attacks using some sort of anonymity framework is usually not important because it doesn’t give any information about the real origin of the attacker. What does matter in our case is whether the attacker uses an anonymity framework at all, and if so, which kind. In our experience attackers who use an anonymity framework and change their geolocation between attacks tend to keep to the same framework. Hence this is also an important feature that can be extracted from the IP. This feature can be extracted using the “X-Forwarded-For” header for proxies or using outside sources, like the Tor network or anonymous proxy databases.
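
A toy sketch of that enrichment is shown below; the exit-node set and the simple header check stand in for the outside sources mentioned above and are purely illustrative:

  # Placeholder set of known anonymizing exit-node IPs (documentation addresses).
  KNOWN_TOR_EXITS = {"198.51.100.7", "203.0.113.9"}

  def anonymity_framework(source_ip: str, headers: dict) -> str:
      """Label the request's likely anonymity framework, if any."""
      if source_ip in KNOWN_TOR_EXITS:
          return "tor"
      if "X-Forwarded-For" in headers:
          return "proxy"
      return "none"

  print(anonymity_framework("203.132.63.117", {"X-Forwarded-For": "10.0.0.5"}))  # proxy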

The URL is Greater Than the Sum of its Parts

The attacked URL indicates the target of the attack, that is, which page of the web application the attacker targeted. An attack on a login page has different features than an attack on a search page. Also, the URL may contain hints on the resources that were attacked. See Figure 3 for the parts a URL may contain. These are some of the features we can extract from the URL:


Figure 3: Different parts of the URL – protocol, domain name, directory/folder, web page and file extension

Resource Extension

The resource extension of the URL is the final part of the URL after the last “.” (dot); it indicates which resource the URL contains. For example, a URL may end with “.jpg” or “.png”, which indicates that the URL contains a picture, or it may end with “.php” or “.aspx”, which indicates the server-side technology of the page. Two attacks targeting different URLs but with the same resource extension may indicate a scan of the site for vulnerabilities, especially if the resource extension is not a very common one for that application.
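
A sketch of extracting that feature (heuristics and names are ours):

  from urllib.parse import urlparse

  def resource_extension(url: str) -> str:
      """Return the resource extension of a URL's path, or '' if there is none."""
      path = urlparse(url).path
      last_segment = path.rsplit("/", 1)[-1]
      if "." not in last_segment:
          return ""
      return last_segment.rsplit(".", 1)[-1].lower()

  print(resource_extension("/pictures/cat.jpg"))                  # jpg
  print(resource_extension("https://example.com/login.php?u=1"))  # php
  print(resource_extension("/news/economics/"))                   # (empty)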

URL Patterns

Many web applications contain different URLs but with the same directories or patterns. For example, a news site can put every article about economics in the URL prefix “/news/economics/[article-name]”. Finding the patterns of these URLs, and especially finding attacks on different URLs with the same patterns, may indicate a phenomenon that our algorithm is trying to discover. This way we may discover a scraping attempt, even when each attack comes from a different IP (maybe even a different country) and with large time gaps between attacks.

Clean URL

Injecting malicious code inside the URL is very common, and we saw it a lot in our data. It is possible to clean the injected code from the URL using heuristic methods, like looking for special characters that should not appear in a URL and are used to delimit scripts. See Figure 4 for an example.


Figure 4: Different JavaScript code injected into the same URL. This code can be “cleaned” by deleting everything after the colon.

By cleaning this code we may reveal a pattern in the attacked URL. For example, attackers may try to inject malicious code into the same URL and make minor changes to the code on every attack. This may look like a set of completely different URLs; the cleaned URL reveals the underlying pattern.
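
A simplified sketch of such a cleaning heuristic is shown below; the particular delimiter characters, and the choice to cut at the first one, are illustrative assumptions rather than the exact rules we apply:

  import re

  # Characters that should not appear in a normal URL path and often
  # delimit injected scripts (illustrative set).
  SUSPICIOUS_DELIMITERS = re.compile(r"[:;'\"<>()]")

  def clean_url(url: str) -> str:
      """Strip an injected payload by cutting the URL at the first
      suspicious delimiter character."""
      match = SUSPICIOUS_DELIMITERS.search(url)
      return url[:match.start()] if match else url

  # Two seemingly different attacked URLs collapse to the same clean URL:
  print(clean_url("/search/item.php:alert(document.cookie)"))  # /search/item.php
  print(clean_url("/search/item.php:eval(payload)"))           # /search/item.php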

Next Up: Calculating Distance

In this post we discussed the data we used to cluster attacks and how to extract meaningful features from it. In the next post we’ll discuss one of the core stages of our algorithm – how to measure the distance between features. Calculating distance is not always an easy task, especially with complex features like URL or IP. Our final goal will be to cluster like attacks together, so we’ll need to find a way to determine when two attacks are similar based on the extracted features.

Related: Imperva Attack Analytics Makes Sense of Thousands of Security Alerts [Video]

Three-Quarters of US Federal Agencies Face Cybersecurity Risk Challenges

Limited network visibility and a lack of standardized IT capabilities have led to an increase in cybersecurity risk across three-quarters of U.S. federal agencies, according to a new government report.

The U.S. Office of Management and Budget (OMB), in collaboration with the Department of Homeland Security (DHS), recently published the “Federal Cybersecurity Risk Determination Report and Action Plan” in response to a presidential executive order issued last year. The researchers used 76 metrics to assess the way federal agencies protect data. Of the 96 agencies analyzed in the report, the OMB classified 71 as “at risk” or “high risk.”

Cybersecurity Risk Assessment Reveals Persistent Challenges

Although 59 percent of agencies said they have processes in place to communicate cybersecurity risk issues, the report’s authors found that 38 percent of federal IT security incidents did not have an identified attack vector. In other words, agencies that encountered a data breach were not able to determine how their defenses were penetrated. As a result, the OMB vowed to implement the Director of National Intelligence’s “Cyber Threat Framework” to improve its situational awareness.

Meanwhile, only 55 percent of agencies said they limit network access based on user roles, which opens up myriad cybersecurity risks, and just 57 percent review and track admin privileges.

Standardization can also help reduce risk in government applications. According to the report, a scant 49 percent of agencies have the ability to test and whitelist software running on their systems. The authors also suggested consolidating the disparate email systems used across agencies, since this is where phishing attacks are often aimed.

An Untenable Security Situation

The OMB cited a need to beef up network visibility and defenses. Its cybersecurity risk assessment revealed, for instance, that only 30 percent of agencies have processes in place to respond to an enterprisewide incident, and just 17 percent analyze the data about an incident after the fact.

“The current situation is untenable,” the report asserted. As a result, the authors noted that the DHS is working on a three-phase program to introduce tools and insights to solve security issues, which will begin later this year.

The post Three-Quarters of US Federal Agencies Face Cybersecurity Risk Challenges appeared first on Security Intelligence.

IBM Adds New Features to MaaS360 with Watson UEM Product

IBM announced on Monday that it has added two new important features to its “MaaS360 with Watson” unified endpoint management (UEM) solution.

UEM solutions allow enterprise IT teams to manage smartphones, tablets, laptops and IoT devices in their organization from a single management console.


The importance of understanding your cloud application attack surface

Many of today’s available security tools have evolved over the years with a focus on a specific problem, one that is static and often very slow. You’ve decided to move to the

The post The importance of understanding your cloud application attack surface appeared first on The Cyber Security Place.

Application Development GDPR Compliance Guidance

Last week, IBM developerWorks released a three-part guidance series I have written to help application developers develop GDPR-compliant applications.

Developing GDPR Compliant Applications Guidance

The GDPR
The General Data Protection Regulation (GDPR) was created by the European Commission and Council to strengthen and unify Europe's data protection law, replacing the 1995 European Data Protection Directive. Although the GDPR is a European Union (EU) regulation, it applies to any organization outside of Europe that handles the personal data of EU citizens. This includes the development of applications that are intended to process the personal information of EU citizens. Therefore, organizations that provide web applications, mobile apps or traditional desktop applications that can indirectly process EU citizens' personal data or allow EU citizens to sign in are subject to the GDPR's privacy obligations. Organizations face the prospect of powerful sanctions should their applications fail to comply with the GDPR.

Part 1: A Developer's Guide to the GDPR
Part 1 summarizes the GDPR and explains how the privacy regulation impacts and applies to developing and supporting applications that are intended to be used by European Union citizens.

Part 2: Application Privacy by Design
Part 2 provides guidance for developing applications that are compliant with the European Union’s General Data Protection Regulation. 

Part 3: Minimizing Application Privacy Risk

Part 3 provides practical application development techniques that can alleviate an application's privacy risk.

The Modernization Misstep: A CEO Takes on Digital Transformation

The following story illustrates what can occur when efforts at digital transformation go wrong. Kelly Zheng may not be real, but the challenges she’s confronted with are far from fictitious. Many organizations and industries struggle with concerns about retaining customers in a disruptive and competitive landscape. Facing a “transform or else” paradigm isn’t easy, but it’s increasingly common. Read on to discover the challenges and choices Kelly faces. Did she choose the correct path?

Insurance company CEO Kelly Zheng knew she wasn’t alone in thinking her industry was one of the most disrupted by technology and innovation. However, she always brought her positive (and practical) attitude to the office.

Like many of her fellow CEOs, she juggled a plethora of changing priorities. Her number one concern lately? The goal of practically every industry: Customer retention. Fortunately, Kelly worked alongside a talented team of C-level executives.

Kelly stared hard at the net promoter score (NPS) chart the chief marketing officer (CMO) had presented, searching for answers in the negative trend line. The dismal data wasn’t the only bad news she’d received that day. After hearing the chief financial officer (CFO) report on the company’s declining revenue and a suspected spike in fraudulent claims, Kelly was worried about the firm’s digital transformation strategy — or lack thereof.

She masked her concern during the CMO’s presentation but revealed her true feelings when the CFO knocked on the door to her office later that day. Kelly knew she needed to act fast and get her leadership team together to find a solution.

“Every company is a technology company in today’s world,” Kelly stressed. “We need to get with the times and offer an omnichannel customer experience. A mobile app is a perfect opportunity to embrace disruption and bring our company to the next level.”

Later, Kelly sounded confident while she outlined her plan to the leadership team: The organization would invest immediately in developing a mobile app. Internally, however, she couldn’t help but wonder if the team could handle a significant digital overhaul against a ticking clock.

Designing a Secure, Frictionless Customer Experience

Kelly knew a mobile app would help the organization stay in touch with its customers, which would ultimately improve customer satisfaction and loyalty. By the time the leadership meeting was over, she had outlined a tentative plan of action to get the mobile app off the ground.

Although the organization’s chief information officer (CIO), Ned Lui, was part of the leadership meeting, Kelly wasn’t able to connect with him until a few days later due to his hectic schedule. She wanted to discuss the app’s possible impact on the company’s current IT infrastructure and operations, but the conversation quickly turned to security risks.

“You should meet with Adela, the chief information security officer,” Ned said. “She will make sure we address app security properly.”

While Kelly was concerned about the mobile app’s security, she needed to get the business requirements for the application and the third-party development team agreement solidified first. She had already asked her design team to take an active role in designing an industry-leading user interface (UI).

Between the world-class user experience (UX), experts at the development agency and her in-house talent, Kelly felt certain her organization was taking the right approach to developing a mobile, omnichannel customer experience — a people-first approach.

Balancing Security and Ease of Use

Kelly recognized that security would be an important concern during the development process, so she kept it top of mind. She highlighted the importance of security in her weekly meetings with the third-party agency she hired to develop the app. She understood there were significant functionality and cost-saving reasons to build the new app with security from the start. However, she wasn’t entirely confident the app agency had the right mindset when it came to balancing security with UX.

Kelly armed herself with app security research and addressed security at every meeting with the development agency. While ease of use was critically important, she grilled the agency project manager to make sure the development team wasn’t sacrificing security for convenience.

Kelly was satisfied with the agency’s practice of secure DevOps, and she kept the rest of the leadership team updated on the progress.

Tackling Fraud Head-On

With development efforts in full swing, Kelly shifted her focus to addressing the costly problem of rising fraudulent claims. She was hopeful that the app would create a flood of new customer accounts, but she was also aware that it could make it easier than ever for customers to file fraudulent claims.

Kelly tasked Ned and Adela with developing a plan to authenticate new user accounts. However, when the task force reconvened, Kelly felt overwhelmed by Adela’s recommendation to explore new solutions.

“Legacy approaches to user authentication and identity verification are clunky and, quite frankly, high-risk,” Adela argued. “We can’t rely on passwords. Instead, we need a dynamic approach to verifying users, devices, environments, behavior and activity.”

Everyone knew Adela was almost certainly right. However, they weren’t sure how to integrate multifactor authentication (MFA) when development was in full swing.

After much discussion, Kelly convinced her colleagues they’d have to stick with a framework-based approach to fraud prevention. Context-based authentication tools would have to wait for the next release.

Unleashing a Mobile-Enabled Workforce

As the go-live date approached, Kelly focused on a final puzzle piece: the insurance organization’s newly mobile-powered remote workforce of insurance agents. Mobile app access for agents was necessary to deliver on the promise of real-time updates.

There was, however, an issue of risk. Could the health of the agents' personal mobile devices compromise the IT infrastructure or (worse) customer data? What if a device was lost or stolen? The organization provided laptops to its agents, but there simply wasn't enough budget available to equip them with company-owned mobile devices as well, and Kelly worried about what came next.

Kelly and Ned opted for the best option they felt they had: an updated bring your own device (BYOD) policy. With the help of the human resources team, they decided to invest in a new, written policy that clearly outlined the agent’s responsibility to protect customer and company data on mobile devices. The new BYOD policy was clear about secure behaviors — such as avoiding sketchy Wi-Fi connections and the importance of putting a lock on each mobile device — but didn’t outline what would happen to people who failed to comply.

Achieving Digital Transformation Without Sacrificing Security

Kelly is far from alone when it comes to balancing the pressures of digital transformation and security. Faced with a fast-ticking clock, she didn’t feel that she had the option to focus on security and still release a great product on time. However, there’s an alternate ending to this story that doesn’t involve a vulnerability-riddled app, fraud or mobile data breaches.

To avoid these modernization missteps, Kelly could have invested in security services to help her task force develop security-focused business requirements and create a comprehensive DevOps framework. For example, disaster recovery-as-a-service (DRaaS) and backup-as-a-service (BaaS) could have helped her team meet resiliency challenges.

Kelly’s developers also could have automated ongoing risk testing in production with a vulnerability scanning tool to avoid the high cost of discovering security risks after the app went live. In addition, an identity and access management (IAM) solution could have helped the development agency protect authentication between mobile and web apps.

Had the app passed a penetration test, Kelly could have approached the go-live date with confidence instead of apprehension. She also could’ve nipped the threat of new account fraud in the bud by investing in a fraud protection solution that examines users, device health and sessions.

Finally, Kelly could have reconciled risk with mobile agents by leveraging a cognitive-enabled unified endpoint management (UEM) solution. That way, everyone would have won: The agents would've been able to keep their phones and game apps, and Kelly's organization wouldn't have had to purchase mobile devices for its employees.

Digital transformation may be inevitable in many sectors, but you're not doomed to face disruption or mounting security risks when delivering new mobile experiences or racing to complete a digital overhaul before your competitors go live with their apps. With expert assistance and augmented intelligence, organizations can achieve security by design instead of taking an after-the-fact approach to data protection.

Read more: Mitigate Your Business Risk Strategically With Cognitive Application Security Testing​

The post The Modernization Misstep: A CEO Takes on Digital Transformation appeared first on Security Intelligence.

Why CIOs need to drive digital transformation

According to Gartner’s 2018 CEO survey, CIOs need to push executives towards digital change and then support them throughout the digital transformation journey. Indeed, the survey revealed that while 62

The post Why CIOs need to drive digital transformation appeared first on The Cyber Security Place.

Five Ways Imperva Attack Analytics Helps You Cut Through the Event Noise

The maddening volume of events security teams have to deal with each day is growing at an exponential pace, making it increasingly difficult to effectively analyze and process credible threats. As more organizations move to cloud-based solutions, applications now reside at multiple locations – on premises, in the cloud or in a hybrid environment – compounding the problem of investigating security events coming from different locations.

Think about it for a second: your cybersecurity system bombards your team with thousands of credible or merely perceived threats. How do you go about actioning each and every one to make sure it is managed effectively? That's a lot easier said than done.

Let’s look at five key challenges security teams face right now:

  1. Security systems send thousands of alerts every day
  2. It's impossible to effectively analyze each and every event
  3. The more alerts there are, the harder it is to single out real threats
  4. Team growth doesn't always match the increasing volume of threats to deal with
  5. There's no single, unified view of the threats occurring on premises and in the cloud

There’s clearly a disparity between the number of threat alerts and how many of them security personnel are humanly able to deal with, which is why we look to artificial intelligence (AI) and machine learning for the answers.

Imperva Attack Analytics takes thousands upon thousands of security alerts and condenses them into just a handful of real, actionable narratives that enable IT teams to effectively respond to each threat to their organization. Narratives are ranked according to threat severity, giving security teams access to detailed analysis of targeted attacks, reducing the amount of clutter and cutting straight through to what's important. Sound good? Get in touch and we'll run you through it.

Attack Analytics collates security events filtered through the SecureSphere and Incapsula WAF solutions and delivers an integrated, accurate, actionable report of security incidents. This approach equips enterprises with the means to secure applications on premises, in the cloud or in a hybrid configuration; without necessarily having to expand their security teams to meet demand.

Attack Analytics cuts through the noise and delivers:

  1. Improved operational efficiency
  2. Reduced overall risk
  3. Unified visibility
  4. Global insights
  5. Cloud readiness

Imperva Attack Analytics collects data from physical, virtual and cloud-based deployments providing actionable insights into application security across the enterprise estate. It is supported on any existing Incapsula deployment and SecureSphere 12.4 or higher versions.

See Attack Analytics in action.

Survey: 27 Percent of IT professionals receive more than 1 million security alerts daily

Imagine trying to tackle over one million security alerts in a day. That number is so huge that it may sound like hyperbole, but this is exactly what many security teams face. Dealing with such a high volume of potential threats on a regular basis can quickly lead to alert fatigue. Sure, we expect an organization’s security operations center (SOC) to have certain protocols in place that will defend and protect against data breaches, but even the smartest systems still need skilled team members at the helm. And when vital team members are frustrated and exhausted, potential threats have an even greater chance of slipping through the system. During the RSA Conference 2018, Imperva surveyed 179 IT professionals to find out how different teams are dealing with this. Here’s what we found.

Working Through the Noise

A staggering 27 percent of IT professionals reported receiving more than one million threats daily, while 55 percent noted more than 10,000. While it is virtually impossible to respond to such an astronomical number, separating the actual threats from the false-positive alerts also presents a crucial problem. The majority of IT professionals (53 percent) noted that their organization's SOC has struggled to pinpoint which security incidents are critical and which are just noise.

And what happens when the SOC has too many alerts for its analysts to process? In some cases, absolutely nothing. An alarming 30 percent of respondents admitted to having flat-out ignored certain categories of alerts, while 4 percent turn off the alert notifications altogether. On a slightly more positive note, 10 percent said that they hire additional SOC engineers to assist with these alerts, and 57 percent tune their policies to reduce alert volume.

Alert Fatigue Can Lead to Neglect

When security teams ignore alerts, it is not for lack of motivation; the sheer volume of daily incidents and the frustrating number of false positives make it tempting to disregard future alerts. In our survey, 56 percent of IT professionals admitted to having ignored an alert based on past false-positive experiences. However, alerts that get brushed off can translate to insurmountable losses. Organizations lose money, SOC teams lose valuable time, and consumers are put at risk.

But even when the alerts do not become actual threats, they still cause problems for those dealing with them. The pace at which these alerts flood in daily inevitably creates a stressful and exhausting work environment for SOC team members. A telling 54 percent of respondents noted experiencing a high amount of stress and frustration, while just 6 percent said that they had no additional stress because of these incidents.

Combating Alert Fatigue

Security teams play an indispensable role in their organizations, and it is of no benefit to have so many members experiencing burnout. Companies need to not only be aware of alert fatigue and how it impacts their workers (and their bottom line), but they should also look to technology that uses artificial intelligence and machine learning for help with streamlining processes and reducing the noise created by security alerts.

Banking Orgs Come Up Short Against Internal Threats

Testers from Positive Technologies succeed in obtaining access to FIs’ financial applications 58% of the time. Banking organizations have built up formidable barriers to prevent external attacks but are falling

The post Banking Orgs Come Up Short Against Internal Threats appeared first on The Cyber Security Place.

The percentage of open source code in proprietary apps is rising

The number of open source components in the codebase of proprietary applications keeps rising and with it the risk of those apps being compromised by attackers leveraging vulnerabilities in them, a recent report has shown. Compiled after examining the findings from the anonymized data of over 1,100 commercial codebases audited in 2017 by the Black Duck On-Demand audit services group, the report revealed that: 96 percent of the scanned applications contain open source components, with … More

The post The percentage of open source code in proprietary apps is rising appeared first on Help Net Security.

One year later: security debt makes me WannaCry

WannaCry rocked the world one year ago, but there are still lessons for us to unpack about the debt we still have to pay to be secure.It is hard to

The post One year later: security debt makes me WannaCry appeared first on The Cyber Security Place.

Ready to Try Threat Modeling? Avoid These 4 Common Missteps

More organizations are using a threat-modeling approach to identify risks and vulnerabilities and build security into network or application design early on to better mitigate potential threats.

“Threat modeling gives you the way of seeing the forest, and a frame for communicating about the work that you (and your team) are doing and why you’re doing it,” said Adam Shostack, president of Shostack and Associates, in an article for MIS Training Institute. “More concretely, [it] involves developing a shared understanding of a product or service architecture and the problems that could happen.”

Threat Modeling Missteps

The benefits seem clear, but it’s still a relatively new strategy. So, you can expect a few stumbles along the learning curve. Here are four common threat-modeling missteps — and how to avoid them.

1. Thinking One Size Fits All

“There are so many different ways to threat-model,” said Shostack. “I routinely encounter people who read the same advice and find it doesn’t quite work for them.” Approaching threat modeling as a single, massive complex process is overwhelming and sets you off on the wrong foot, he stressed.

“I think the biggest thing I see is people who treat it as a monolith,” said Shostack. “We need to communicate the steps as if they are building blocks. If one doesn’t work for you, don’t throw out threat modeling. There is no one-size-fits-all approach.”

One well-known approach is STRIDE:

  • Spoofing
  • Tampering
  • Repudiation
  • Information disclosure
  • Denial of Service
  • Elevation of privilege

Of course, this may be more appropriate for some teams than others. Regardless of approach, Shostack advises teams to look at the process as a set of building blocks that go together and break the process up into easily digestible chunks.

2. Starting With the Wrong Focus

When getting started, should you focus on assets? No. What about shifting your focus to thinking like an attacker? No again. Why?

“It’s a common recommendation, but the trouble is it’s hard to know what an attacker is going to do. It’s hard to know what their motivations are,” said Shostack. “For example, when the SEA [Syrian Electronic Army] took over the Skype Twitter handle (in 2014), no one expected they were going to break into the law enforcement portal at the same time. Focusing in on the attacker might have distracted people from what they would do — rather than theorizing about their motivations.”

Shostack advocates for starting the process with software at most organizations.

“People building software or systems at a financial institution, a supply chain or a healthcare company should start from the software they’re building because it’s what they know best,” he noted in a post for The New School of Information Security blog. “Another way to say this is that they are surrounded by layers of business analysts, architects, project managers and other folks who translate between the business requirements (including assets) and software and system requirements.”

3. Neglecting the Business Side

Threat modeling is pointless if it solely focuses on the network and applications, believes Itay Kozuch, director of threat research at IntSights.

“Many teams conduct common assessments from their network,” said Kozuch. “But it must come from the business side too. When an organization is trying to evaluate risk and do threat modeling, they need to understand the complete assets of the organization. That means not just IT — but on the business side as well.”

This means going beyond just the technology in the threat-modeling process. Failing to involve all of the business’s key stakeholders, Kozuch stressed, leads teams to incorrectly calculate the probability of the threats that need to be considered. He believes there are a lot of angles and perspectives for every threat.

“Management must be part of it,” said Kozuch. “It is a business issue. Risk is there because of business.”

4. Miscalculating the Shelf Life of Results

“Threats are always changing,” said Kozuch. “Often — even soon after you’ve completed the process — the results are no longer valid. You can’t base the next few years off of what you’ve uncovered because it doesn’t represent future threats.”

Archie Agarwal, founder and CEO of ThreatModeler Software, agrees. A threat model, he said in a post for CSO, cannot be static. He cautioned that you can’t take a critical application, do a threat model on it once and assume you are done.

“Your threat model should be a living document,” Agarwal said. “You cannot just build a threat model and forget about it. Your applications are alive.”

Wherever you are in your exploration or implementation of threat modeling, there are many resources out there to help you get started. Check out this series on threat modeling basics for an overview of approaches and essential elements for a successful program.

The post Ready to Try Threat Modeling? Avoid These 4 Common Missteps appeared first on Security Intelligence.

More Than Half of Risk Assessments Spot Attempts to Bypass Security

The majority of risk assessments examined in a recent insider threat report spotted users who tried to bypass their employer’s security measures using private or anonymous browsing.

Researchers analyzed user threat assessments performed on customers and prospective clients across the globe and found that 60 percent identified such behavior. These analyses provided insight into the types of user actions that put enterprise data at the greatest risk.

Risk Assessments Identify Insider Threats

The report identified malicious users as a “traditional” type of insider threat. After analyzing multiple types of activity, the researchers singled out attempts to bypass company security as the most reliable way to confirm that a user action is malicious.

Other indicators of bad intent included employees’ use of “high-risk applications,” such as PowerShell and uTorrent, and the use of the web for inappropriate purposes, such as gaming and gambling. These factors came in at 72 percent and 67 percent of risk assessments, respectively, according to Dtex Systems’ “2018 Insider Threat Intelligence Report.”

Even so, the security firm noted that negligent insiders tend to be far more common than malicious ones. The authors explained that this type of negligence-based incident can take the form of users downloading risky applications or pirated media due to lack of security awareness. The report also found that companies themselves can create insider threats by leaving data publicly exposed in the cloud (78 percent of risk assessments) or transferring data to unencrypted USB devices (90 percent of risk assessments).

Spotting Risky Behavior

Dtex CEO Christy Wyatt offered some advice to help organizations protect themselves against insider threats.

“Organizations have to secure data, neutralize risky behaviors, and protect trusted employees against attacks and their own errors,” she said. “To accomplish all of this, they have to see how their people are behaving and have a mechanism that provides alerts when things go wrong.”

Consistent with Wyatt’s advice, the authors of the report advised organizations to create a defense-in-depth strategy that emphasizes visibility into suspicious actions, such as when employees take their devices off the corporate network.

The post More Than Half of Risk Assessments Spot Attempts to Bypass Security appeared first on Security Intelligence.

Bumper to Bumper: Detecting and Mitigating DoS and DDoS Attacks on the Cloud, Part 2

This is the second installment in a two-part series about distributed denial-of-service (DDoS) attacks and mitigation on the cloud. Be sure to read part one for an overview of denial-of-service (DoS) and DDoS attack variants and potential consequences for cloud service providers (CSPs) and their clients.

In the first installment of this series, we demonstrated how cybercriminals can circumvent DoS defenses by distributing their attacks. The three major types of DDoS variants are:

  • Volume-based attacks
  • Protocol attacks
  • Application-layer attacks

We can demonstrate how these attacks work in a simulated environment using Graphical Network Simulator-3 (GNS3), a network simulation tool.

To understand this, first let’s break down the network diagram below:

Figure 1: A corporate network configured with OSPF and BGP

The diagram shows a network of routers configured with Open Shortest Path First (OSPF) for the company's internal network, Border Gateway Protocol (BGP) on the edge router that connects to the internet service provider (ISP), and the end users, clients and other network devices.

Now let’s examine how threat actors can exploit these systems to launch various types of DoS and DDoS attacks.

Volume-Based DDoS Attacks

Cybercriminals typically leverage tools, such as Low Orbit Ion Cannon (LOIC) and Wireshark to facilitate volume-based attacks through techniques like Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) flooding. Let’s take a closer look at how these attacks work.

TCP Flooding

In a TCP flooding attack, threat actors generate a large quantity of traffic to block access to the end resource. The magnitude of this type of attack is commonly measured in bits or packets per second. The diagrams below show a TCP flood attack in which the File Transfer Protocol (FTP) service is flooded with huge volumes of TCP traffic, which eventually brings down the service.

Figure 2: A user connecting to an FTP server hosted on a corporate network

Figure 3: An attacker using bots to send malicious traffic to the target port using the LOIC tool

Figure 4: A client unable to access the FTP service after an attacker has flooded it with corrupt FTP packets

UDP Flooding

In a UDP flooding attack, the attacker overwhelms the target network with packets sent to random UDP ports from a forged IP address. It is easy to use a forged IP address in this type of attack since UDP does not require a three-way handshake to establish a connection. These requests force the host to look for the application that is running on those random ports (which may or may not exist) and flood the network with Internet Control Message Protocol (ICMP) destination unreachable packets, thereby blocking legitimate requests.

There are other variations of UDP flooding, such as reflection and amplification attacks. In a reflection attack, a threat actor uses publicly available services, such as the Domain Name System (DNS), to attack the target networks. An amplification attack, on the other hand, targets a protocol in an attempt to amplify the response. For example, an attacker might submit a single query of *.ibm.com to the DNS, which will then gather a massive volume of information related to subdomains of IBM.com.

Figure 5 shows a similar attack using the Network Time Protocol (NTP). This protocol enables network-connected devices to communicate and synchronize time information, which is communicated over UDP. An attacker can forge the source IP address and then use a publicly available NTP application to send queries to the target. Common tools used in this type of attack include Nmap, Metasploit and Wireshark.

Figure 5: An attacker using Nmap to discover hosted NTP servers

Figure 6: An attacker using Metasploit to determine that the target NTP server is vulnerable to a Mode 6 UNSETTRAP distributed reflected denial-of-service (DRDoS) attack with 2x packet amplification

In this case, the victim’s response packet would be twice the size of the packet the NTP request sent. By repeatedly sending the request, an attacker could flood the target network with a huge number of responses.

Protocol Attacks

In the scenario shown below, an attacker sends multiple SYN requests from several spoofed Internet Protocol (IP) addresses to a corporate network’s Secure Shell (SSH) jump server to disrupt the service. Tools such as Hping3 and Wireshark are commonly used in this type of attack.

Figure 7: A client (Ubuntu Machine) connecting to a company’s jump server (IP: 9.1.1.2) for remote administration

Figure 8: An attacker performing a protocol DDoS attack on a jump server (target IP: 9.1.1.2), preventing the client from accessing the jump server

Figure 9 shows a real-world exploit of a TCP SYN flood attack performed on a web application as part of a penetration testing (PT) engagement.

Figure 9: A web application becomes unresponsive after a TCP SYN flood attack

Application-Layer Attacks

In addition to volume-based and protocol attacks, cybercriminals can also launch DDoS campaigns by targeting the application layer. Below are some variations of this attack type.

Slowloris

Slowloris is a well-known attack in which the connection is never idle but, as the name suggests, very slow. The client gradually sends partial data and connection requests to the server, keeping connections open indefinitely so that the server cannot accept any new ones. Threat actors typically use Slowhttptest and Wireshark to facilitate this attack.

Figure 10: A client accessing a web server hosted on a company’s cloud network

Figure 11: A legitimate user unable to access a webpage due to a Slowloris attack

Shown below is a real-world exploit of Slowloris performed on a web application as part of a penetration testing exercise.

Figure 12: A web application becomes unresponsive after a Slowloris attack

HTTP Flood

In an HTTP flood DDoS attack, the attacker sends HTTP GET/POST requests that appear legitimate to overwhelm a web server or application. Instead of relying on forged IP addresses, this attack leverages botnets, which require less bandwidth. An HTTP flood attack is most effective when it forces the server or application to allocate the maximum resources possible in response to every single request.

Shown here is a real-world HTTP flood attack performed using a Session Initiation Protocol (SIP) INVITE message flood on port 5060, rendering the phone unresponsive.

Figure 13: An attacker performing a SIP INVITE flood attack on an IP phone

Figure 14: The IP phone becomes unresponsive after the attack

DDoS Mitigation on the Cloud

To mitigate DDoS attacks on the cloud, security teams must establish a secure perimeter around the cloud infrastructure and allow or drop packets based on specified rules. Below are some key steps organizations can take to harden their security environments to withstand DDoS attempts.

Next-Generation Firewalls

A next-generation firewall is capable of performing intrusion prevention and inline deep packet inspection. It can also detect and block sophisticated attacks, including DDoS, by enforcing security policies at the application, network and session layers. Next-generation firewalls give security teams granular control to define custom security rules pertaining to network traffic. They also provide myriad security features, such as secure sockets layer (SSL) inspection, web filtering and zero-day attack protection.

Content Delivery Network

A content delivery network (CDN) is a geographically distributed network of proxy servers and their data centers that accelerates the delivery of web content and rich media to users. Although CDNs are not built for DDoS mitigation, they are capable of deflecting network-layer threats and absorbing application-layer attacks at the network edge. A CDN leverages this massive scaling capacity to offer unsurpassed protection against volume-based and protocol DDoS attacks.

DDoS Traffic Scrubbing

A DDoS traffic scrubbing service is a dedicated mitigation platform operated by a third-party vendor. This vendor analyzes incoming traffic to detect and eliminate threats with the least possible downtime for the target network. When a DDoS attack is detected, all incoming traffic to the target network is rerouted to one or more of the globally distributed scrubbing data centers. Malicious traffic is then scrubbed and the remaining clean traffic is redirected to the target network.

Anomaly Detection

An anomaly, such as an unusually high volume of traffic from different IP addresses for the same application, should trigger an alarm. But anomaly detection is not quite that simple since attackers often craft packets to mimic real user transactions. Therefore, detection tools must be based on mathematical algorithms and statistics. This works well for both application-based and protocol attacks.
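
To make the statistical idea concrete, here is a minimal sketch of rate-based anomaly detection. It simply compares the current interval's request count against the mean and standard deviation of a recent window; the window length, warm-up size and z-score threshold are arbitrary assumptions, and real detection engines combine many such signals.

```python
from collections import deque
from statistics import mean, stdev

class RateAnomalyDetector:
    """Flag request rates that deviate sharply from the recent baseline."""

    def __init__(self, window=60, z_threshold=3.0):
        self.history = deque(maxlen=window)  # recent per-interval request counts
        self.z_threshold = z_threshold       # deviation (in std devs) that raises an alarm

    def observe(self, count):
        """Record one interval's request count; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 10:          # wait for a minimal baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and (count - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(count)
        return anomalous

detector = RateAnomalyDetector()
for count in [100, 110, 95, 105, 98, 102, 99, 101, 97, 103, 100, 5000]:
    if detector.observe(count):
        print(f"possible attack: {count} requests in the last interval")
```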

Source Rate Limiting

As the name suggests, source rate limiting blocks any excess traffic based on the source IP from where the attack originates. This is mainly used to limit volume-based traffic by configuring the thresholds and customizing responses when an attack happens. Source rate limiting provides insights into particular websites or applications on a granular level. The drawback is that this method only works for nonspoofed attacks.
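
As an illustration of the mechanism, the sketch below implements a per-source token bucket: each source IP earns a packet budget over time and anything beyond that budget is dropped. The rate and burst values are placeholder assumptions; in practice this enforcement happens on routers, firewalls or scrubbing appliances rather than in application code.

```python
import time
from collections import defaultdict

class SourceRateLimiter:
    """Per-source-IP token bucket: drop packets from sources that exceed their budget."""

    def __init__(self, rate=1000.0, burst=2000.0):
        self.rate = rate                          # tokens (packets) added per second
        self.burst = burst                        # maximum bucket size per source
        self.tokens = defaultdict(lambda: burst)  # current tokens per source IP
        self.last_seen = {}                       # last refill timestamp per source IP

    def allow(self, src_ip, now=None):
        now = time.monotonic() if now is None else now
        elapsed = now - self.last_seen.get(src_ip, now)
        self.last_seen[src_ip] = now
        # Refill this source's bucket, capped at the burst size.
        self.tokens[src_ip] = min(self.burst, self.tokens[src_ip] + elapsed * self.rate)
        if self.tokens[src_ip] >= 1.0:
            self.tokens[src_ip] -= 1.0
            return True   # forward the packet
        return False      # drop it: this source exceeded its budget

limiter = SourceRateLimiter(rate=10, burst=20)
print(limiter.allow("203.0.113.7"))  # True until the burst allowance is exhausted
```

As noted above, this only helps against nonspoofed attacks: if source addresses are forged, per-source counters lose their meaning.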

Protocol Rate Limiting

This technique rate-limits suspicious protocols from any source. For example, Internet Control Message Protocol (ICMP) traffic can be capped at a fixed rate — say, 5 megabits per second (Mbps) — so that excess bad traffic is dropped while legitimate traffic continues to flow. While it works well for volume-based attacks, the limitation of protocol rate limiting is that sometimes legitimate traffic will also be dropped, requiring security teams to manually analyze logs.

Cloud Security Is More Crucial Than Ever

With more and more applications now migrating to the cloud, it is more crucial than ever to secure cloud infrastructure and the applications hosted therein. The DDoS attacks described above can put CSPs and their clients at great risk of data compromise. By employing various defense mechanisms, such as advanced firewalls, traffic scrubbing and anomaly detection, organizations can take major steps toward securing their cloud environments from DDoS attacks.

The post Bumper to Bumper: Detecting and Mitigating DoS and DDoS Attacks on the Cloud, Part 2 appeared first on Security Intelligence.

New DDoS Attack Method Demands a Fresh Approach to Amplification Assault Mitigation

Amplification attack vectors are some of the most commonly used tools in the DDoS attacker’s arsenal. In the last quarter of 2017, we saw NTP amplification employed in roughly 33 percent of all DDoS assaults against our customers, while DNS and SSDP amplification vectors played a part in 17 percent and 13.7 percent of attacks, respectively.

For bad actors, amplification vectors offer a shortcut to launching bandwidth-heavy assaults without the need for equally large botnet resources. From a mitigation point of view, however, they represent a diminished threat as, by now, most mitigation services have scaled to a point where attack bandwidth is no longer a chief concern—or any concern at all.

More importantly, the source port headers of amplification payloads follow a predictable pattern, making them easy to filter at a network border. For example, blocking all packets with source port 53 is considered a tried-and-true method for mitigating DNS amplification attacks.
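
As a rough sketch of what that filtering rule looks like in practice, the snippet below uses the scapy packet library (our choice here purely for illustration) to flag UDP packets arriving with source port 53. Border routers and scrubbing appliances apply the same logic in hardware at line rate.

```python
# Illustrative only: requires scapy and packet-capture privileges.
from scapy.all import IP, UDP, sniff

def flag_dns_amplification(pkt):
    # Classic DNS amplification payloads arrive as UDP datagrams with source port 53.
    if pkt.haslayer(IP) and pkt.haslayer(UDP) and pkt[UDP].sport == 53:
        print(f"possible DNS amplification payload: {pkt[IP].src} -> {pkt[IP].dst}")

# Capture UDP traffic with source port 53 and flag each matching packet.
sniff(filter="udp and src port 53", prn=flag_dns_amplification, store=False)
```

The rest of this post shows why a filter like this is no longer sufficient on its own.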

Still, as the song goes, “the times they are a changin’.” Recently, while mitigating an SSDP amplification attack, we saw evidence of payloads with irregular source port data—something few in our industry consider possible and even fewer are likely to be prepared for.

In the following post, we’ll share our findings of the assault and provide a proof of concept (PoC) for a method that could have been used to launch the attack. In addition, we’ll provide evidence of another attack with similar characteristics spotted in the wild.

The implications of these findings are extensive, as they require mitigation providers to rethink the way they currently deal with amplification DDoS threats.

UPnP Protocol – A Long History of Security Issues

The attack method we’re about to describe is made possible by a well-known, but still not resolved, UPnP (Universal Plug and Play) protocol exploit.

For the uninitiated, UPnP is a networking protocol operating over UDP port 1900 for device discovery and an arbitrarily chosen TCP port for device control. The protocol is commonly used by IoT devices (e.g., computers, routers or printers) to discover each other’s presence and communicate over a LAN.

UPnP has raised security concerns over the years, because of the following:

  1. Bad default settings that leave devices open to remote/WAN access.
  2. Lack of an authentication mechanism, which adds to the aforementioned issue.
  3. The existence of UPnP-specific remote code execution vulnerabilities.

Examples of UPnP related vulnerabilities date back all the way to 2001, with the discovery of a buffer overflow exploit that was able to cause crashes and allow for remote code execution (RCE) in Windows XP computers and earlier Windows ME versions.

During the SANE 6 conference five years later, Armijn Hemel presented a paper titled “Universal Plug and Play: Dead Simple or Simply Deadly?” In it, he described how UPnP devices that are insecurely open to WAN access could be reconfigured remotely via XML SOAP API calls.

The concept has since been revisited by a number of security researchers, including reports from Rapid7 and, most recently, Akamai. It was also a topic in several DEFCON presentations, including one by Ricky Lawshae.

Diving deeper into this body of research could prove interesting. For the purpose of this discussion, however, it’s enough to note that it shows how SOAP API calls can be used for remote execution of AddPortMapping commands, which govern port forwarding rules.

PoC: Evasive Amplification, Brought to You by UPnP Port Forwarding

Our interest in UPnP remote access exploits began with an SSDP amplification assault we mitigated on April 11 of this year. During the assault, which occurred in a number of successive waves, we noticed that a certain percentage of SSDP payloads, sometimes as much as ~12 percent, were arriving from an unexpected source port, and not UDP/1900.

Fig 1: SSDP payloads with source port UDP/1900.

Fig 2: SSDP payloads with randomized source ports.

Surprised by what we were seeing, and concerned with its future implications, we attempted to find an explanation. After ruling out several options, we were finally able to reproduce the assault by creating a PoC for a UPnP-integrated attack method that could be used to obfuscate source port information for any type of amplification payload.

Below are the details of our PoC, performed for DNS amplification attacks.

Step 1: Locating an open UPnP router

This can be done in any number of ways, from running a wide-scale scan with SSDP requests to simply using the Shodan search engine to look for the “rootDesc.xml” file commonly found on such devices.

In the screenshot below, you can see that running this query yielded over 1.3 million results. While not all of these devices are necessarily vulnerable, finding an exploitable one is still very easy, especially if a bad actor uses a script to automate the process.

Fig 3: Locating exploitable UPnP gateway devices via Shodan search

Step 2: Accessing the device XML file

With the device located, the next step is to access the file via HTTP. Still using Shodan as an example, this can be done by replacing the ‘Location’ IP with the actual device IP, like so:

Fig 4: Accessing the rootDesc.xml file by changing the Location IP.

Step 3: Modifying port forwarding rules

Cataloged in rootDesc.xml are all of the available UPnP services and devices. For each, a <SCPDURL> is provided, showing all of the actions that the device will accept remotely.

Fig 5: One of the services listed in rootDesc.xml file.

First on that list of actions is AddPortMapping—a command that can be used to configure port forwarding rules.

Fig 6: The AddPortMapping action used to configure port forwarding rules.

Using the scheme within the file, a SOAP request can be crafted to create a forwarding rule that reroutes all UDP packets sent to port 1337 to an external DNS server (3.3.3.3) via port UDP/53. This is how it looks:

Fig 7: An API request to create a port-forwarding rule.

Some of you might be surprised that something like this could even work, as port forwarding is only supposed to be used for mapping traffic from external IPs to internal IPs and vice versa, not to proxy requests from one external IP to another. In reality, however, few routers actually bother to verify that a provided “internal IP” is actually internal, and they abide by all forwarding rules as a result.

Step 4: Launching a port-obfuscated DNS amplification

With the port forwarding rules in place, a DNS request is issued to the device, prompting the following sequence of events:

  1. A DNS request is received by the UPnP device on port UDP/1337.
  2. The request is then proxied to a DNS resolver over destination port UDP/53, due to port forwarding rules.
  3. The DNS resolver responds to the device over source port UDP/53.
  4. The device forwards the DNS response back to the original requestor, but not before changing the source port back to UDP/1337.

Fig 8: DNS amplification with source port obfuscation.

Running this script on our own device got us the smoking gun seen below—a DNS response for Imperva.com that was returned from an irregular source port (UDP/1337):

Fig 9: Our smoking gun—a DNS response with source port 1337.

This was enough to serve as a proof of concept for our hypothesis. In an actual attack scenario, however, the initial DNS request would have been issued from a spoofed victim’s IP, meaning that the response would have been bounced back to the victim.

If so inclined, we could use the device to launch a DNS amplification DDoS assault whose payloads originate from irregular source ports, enabling them to bypass commonplace scrubbing directives that blacklist amplification payloads based on their source port data.

Implications and Further Evidence

The above PoC proves that UPnP devices can be used to obfuscate the source port data of amplification payloads. Notably, the evasion method is not limited to DNS amplification, as our own subsequent test showed it to be effective for SSDP, DNS, and NTP attacks. Furthermore, there is no reason to assume that other amplification vectors (e.g., Memcached) will not work just as well.

This adds up to a major paradigm shift in the way amplification attacks are mitigated today.

With source IP and port information no longer serving as reliable filtering factors, the most likely answer is to perform deep packet inspection (DPI) to identify amplification payloads—a more resource-intensive process, which is challenging to perform at an inline rate without access to dedicated mitigation equipment.

It should be noted that we also considered alternative hypotheses for the attack that prompted our investigation. For instance, the occurrence in question could have been explained by an internal network setup or a purposeful forwarding configuration that unintentionally resulted in port obfuscation.

Just as we were developing the PoC, however, our original thinking was reaffirmed by another assault with similar characteristics. Occurring on April 26, that assault was carried out via an NTP amplification vector, with some of the payloads originating from a source port other than UDP/123. The low volume at which these arrived makes us believe that they could be probing attempts.

Fig 10: Example of payloads from NTP amplification attack with irregular source port (UDP/1)

With several indications of source port obfuscation, we were even more motivated to press on with publishing our research. It’s our hope that these findings will help the mitigation industry prepare itself for the above-described evasion tactics before they become more common.

We also hope that our findings will add to the existing body of research focusing on UPnP-related security threats, and help promote better security awareness among IoT manufacturers and distributors.

After all, this and many other UPnP exploits can be very easily avoided just by blocking the devices from being remotely accessible—an option that, in most cases, only exists as an oversight, since it serves no useful function and offers no benefit to device users.

Imperva Python SDK – We’re All Consenting SecOps Here

Managing your WAF can be a complicated task. Custom policies, signatures, application profiles, gateway plugins… there’s a good reason ours is considered the best in the world.

Back when security teams were in charge of just a handful of WAF stacks and a few dozen applications, things were relatively manageable. Today, however, with the shift to cloud and microservices, organizations have to deal with securing thousands of web endpoints that change on a daily basis.

I recently met with an Imperva AWS customer with a strict rehydration policy – every 60 days they tear down their entire environment and bring it up from scratch. Everything not source controlled and automated has to go, including their security products and configurations. This poses a unique challenge to security professionals, but we’ve got a solution.

We recently launched the Imperva GitHub, where our global community (we get around) can access tools, code repositories and other neat resources that’ll aid collaboration and streamline development.

To that effect, we developed imperva-sdk, an open source project hosted on our GitHub. ‘Impervians’ around the world can now contribute to the SDK and more projects that are on their way. This new collaboration between Imperva professionals and experienced Imperva customers will bring greater knowledge-sharing and faster deliveries.

Securing thousands of web endpoints doesn’t sound so scary anymore.

For a long time now, Imperva SecureSphere has provided automated deployment support and extended management REST API coverage. Still, administrators had to work hard writing their own wrappers and integrations for the granular APIs.

In this blog post I’ll be introducing imperva-sdk – a Python SDK for the Imperva SecureSphere Open API. We’ll see how the SDK can be used to automate your SecureSphere management operations, migrate different environments, source control your configuration, and generally switch to a more SecOps mindset.

imperva-sdk is easy to use: changes to the Python objects are propagated immediately to SecureSphere.

The SDK objects are hierarchical and aware of the different connections between resource types.

Standard Python documentation for the SDK is available, including module references and examples to get you started:

Figure 3: imperva-sdk documentation

imperva-sdk objects can be converted to dictionaries and saved as JSON. This allows you to use Python capabilities for advanced automation:

Figure 4: Create a new custom policy from JSON

One of the strongest features imperva-sdk has to offer is the ability to export the entire configuration of your SecureSphere management server to JSON (Note: only APIs that are implemented in the SDK are exported and imported). This gives you the ability to copy configurations between management servers, source control your WAF configuration, and easily incorporate your WAF settings in your CI/CD process.

In the next example we migrate the configuration from a staging management server to production and, in the process, replace any reference to “staging” with “v1”:

Figure 5: Copy configuration between management servers
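
The migration shown in Figure 5 boils down to a few lines of code. The sketch below is only a rough outline: the hostnames and credentials are placeholders, and the method and parameter names (MxConnection, export_to_json, import_from_json) are assumptions based on the imperva-sdk documentation, so check the SDK reference for the exact signatures.

```python
# Rough sketch of a staging-to-production migration with imperva-sdk.
# Hostnames/credentials are placeholders; method and parameter names are
# assumptions based on the imperva-sdk documentation.
import imperva_sdk

staging = imperva_sdk.MxConnection(host="staging-mx.example.com",
                                   username="admin", password="***")
production = imperva_sdk.MxConnection(host="prod-mx.example.com",
                                      username="admin", password="***")

# Export everything the SDK covers from staging, rename the environment,
# then import the result into the production management server.
config_json = staging.export_to_json()
production.import_from_json(config_json.replace("staging", "v1"))
```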

The ability to control the entire configuration from JSON frees users from the need to know Python. We have imperva-sdk wrappers for Jenkins and AWS Lambda, allowing end-users to simply provide management credentials and a JSON configuration file without writing a line of code:

Figure 6: imperva-sdk Jenkins job

The launch of the Imperva GitHub and imperva-sdk allows us even more flexibility and responsiveness when it comes to mitigating threats and extends those benefits to our larger community.

Connect the Dots: IoT Security Risks in an Increasingly Connected World

Nowadays, there is a lot of noise about the Internet of Things (IoT), as the technology has finally emerged into mainstream public view. IoT technology includes everything from wearable devices equipped with sensors that collect biometric data and smart home systems that enable users to control their lights and thermostats to connected toothbrushes designed to help improve brushing habits. These devices typically come with built-in electronics, software, sensors and actuators. They are also assigned unique IP addresses, which enable them to communicate and exchange data with other machines.

IoT devices make our lives easier. Smart home technology, for example, can help users improve energy efficiency by enabling them to turn on (and off) lights and appliances with the tap of a touchscreen. Some connected devices, such as smart medical equipment and alarm systems, can even help save lives.

However, there are also serious security risks associated with this technology. As the IoT ecosystem expands, so does the attack surface for cybercriminals to exploit. In other words, the more we rely on connected technology in our day-to-day lives, the more vulnerable we are to the cyberthreats that are increasingly tailored to exploit vulnerabilities and design flaws in IoT devices.

This presents a daunting challenge for cybersecurity professionals. They must not only protect their own devices, but they must also defend against threats targeting external machines that might connect to their networks.

Avoiding IoT Security Pitfalls

Potential consequences of an IoT data breach include the loss of sensitive personal or enterprise information, which can lead to significant financial and reputational damage, as well as massive distributed denial-of-service (DDoS) attacks designed to take down major websites. These incidents often stem from misconfigurations, default or easy-to-guess passwords and inherent vulnerabilities in the devices themselves.

Although many experts are calling for regulatory bodies to implement industrywide standards to hold IoT device manufacturers and developers accountable for these pervasive flaws, progress has been slow on that front. In the meantime, IT professionals and device owners must take security into their own hands by following basic IoT best practices.

The most important rule of thumb for IoT device manufacturers is to test security during each phase of the development process. It is much easier (and less costly) to nip security issues in the bud during the prerelease stages than to waste resources fixing bugs after devices have reached the market. Once developed, devices should undergo rigorous application security testing, security architecture review and network vulnerability assessment.

When devices ship to end users, they should not come with default passwords. Instead, they should require users to establish strong, unique credentials during the installation process. Since IoT devices collect so much personal data, including biometric information, credit card details and locational data, it’s important to embed encryption capabilities according to the least privilege principle.

Protecting Data Privacy

For organizations deploying IoT technology, it’s crucial to establish an incident response team to remediate vulnerabilities and disclose data breaches to the public. All devices should be capable of receiving remote updates to minimize the potential for threat actors to exploit outlying weaknesses to steal data. In addition, security leaders must invest in reliable data protection and storage solutions to protect users’ privacy and sensitive enterprise assets.

This is especially critical given the increasing need to align with data privacy laws, many of which impose steep fines for noncompliance. Because some regulations afford users the right to demand the erasure of their personal information, this capability must be built into all IoT devices that collect user data. Organizations must also establish policies to define how data is collected, consumed and retained in the IT environment.

To ensure the ongoing integrity of IoT deployments, security teams should conduct regular gap analyses to monitor the data generated by connected devices. This analysis should include both flow- and packet-based anomaly detection.

Awareness Is the Key to IoT Security

As with any technology, an organization’s IoT deployment is only as secure as the human beings who operate it. Awareness training and ongoing education throughout all levels of the enterprise, therefore, are critical. This applies to both device manufacturers and the companies that invest in their technology.

The IoT has the potential to boost efficiency and productivity in both domestic and enterprise settings. However, the exposure of IoT data — or the illegal takeover of devices themselves — can cause immeasurable damage to a business’ bottom line and reputation. The keys to unlocking the benefits and avoiding the pitfalls of this technology include embedding security into apps and devices throughout the development life cycle, investing in robust data protection solutions and prioritizing security education throughout the organization.

Listen to the podcast series: Five Indisputable Facts about IoT Security

The post Connect the Dots: IoT Security Risks in an Increasingly Connected World appeared first on Security Intelligence.

Cut Through the Fog: Improve Cloud Visibility to Identify Shadow IT

Last summer, I journeyed to a friend’s lake house in the beautiful Berkshires of Massachusetts for a weekend of boating and fishing. Great Barrington is not too far from Boston, and I expected the road trip along the Massachusetts Turnpike to be clear and easy.

It was smooth sailing out of the gate, and I was making great time. (In fact, I was hoping to get there early enough to enjoy a Friday afternoon on the lake.) But when I was about 45 minutes away from the lake house, I encountered a dense fog that forced me to slow down. My visibility was limited to several hundred feet, and I could no longer see the extended road ahead of me.

The Enterprise Will Extend to the Cloud

Just as I was expecting a speedy arrival, today’s enterprises expect to migrate to the cloud quickly. They hope to take advantage of the dynamic efficiency of cloud computing platforms and software as a service (SaaS) applications.

However, the cloud brings with it a fog that obscures visibility into technology environments and SaaS applications. This fog leads to shadow IT, which impacts cloud security and makes it difficult to travel at speed while keeping your eyes on the road. Without adequate visibility into cloud environments, security teams cannot protect against cloud-based data breaches, malicious insiders, advanced persistent threats (APTs) and other cyberthreats.

When it comes to driving through fog in the real world, standard rules of the road include slowing down, turning on your headlights — and resisting the urge to flip on your high beams. Most importantly? Any good driving instructor will tell you to stay focused on the road, as driving through fog is no time for multitasking.

Unfortunately, these are not viable solutions for organizations competing in today’s markets. So, how can companies cut through the fog to improve overall cost efficiency, reduce IT investment, dynamically scale and deploy business services — and take advantage of cloud automation?

No Time to Slow Down

Slowing down is not an option for competitive organizations aiming to deliver innovative products and services to customers at speed. Both customers and employees demand continuous access and visibility into data. Latency problems and inhibited vision into platform resources can prevent companies from operating at full capacity. As a result, many organizations view security as an impediment to business growth and expansion. These organizations harbor valid concerns about the risks associated with deploying workloads in the cloud and procuring SaaS applications.

Shed Light on Shadow IT

It’s tempting to deploy five different solutions from five different vendors to cover all your cloud security bases, but this introduces unnecessary complexities because the disparate tools will be difficult to integrate and manage. That’s why it’s important for security leaders to weigh the pros and cons of each solution and select the one that best enables them to identify shadow IT, increase visibility and shed light on cloud application usage.

Enterprises cannot afford to place all of their security eggs in one basket either. Organizations that invest all their resources into narrowly focused solutions leave themselves vulnerable to the dynamic threat vectors that exist across business infrastructures. A single, isolated tool with a limited scope has very little to offer to a large organization with a growing cloud footprint. As complexity and diversity increase — and the enterprise continues to extend into the cloud — there is growing demand for a single security platform to provide complete enterprise protection.

Cut Through the Cloud Security Fog

The key to implementing an effective cloud security strategy is to integrate cloud tools with a cutting-edge security information and event management (SIEM) platform. Just as SaaS applications enable organizations to leverage cloud functionality and move at speed, the cloud allows threat actors to move just as quickly.

An effective cloud security strategy relies upon visibility tools integrated with an SIEM solution to quickly discover cloud threats, jump-start investigations with actionable intelligence and respond to incidents with automation.

With visibility comes clarity and the freedom to focus on the road. A single, scalable cloud security solution integrated with an SIEM platform enables enterprises to concentrate on driving business results instead of wasting time stuck in the fog of shadow IT.

Read the interactive white paper: New parity for your enterprise security

The post Cut Through the Fog: Improve Cloud Visibility to Identify Shadow IT appeared first on Security Intelligence.

Want to See What A Live DDoS Attack Looks Like?

We’re fortunate enough to have had Andy Shoemaker, founder of NimbusDDoS, and our own Ofer Gayer chat about DDoS attacks and shed some light on the gaps in many people’s understanding of the threats out there.

In a new BrightTALK webinar alongside Imperva senior product manager Ofer Gayer, Andy discusses the trade-offs of manual versus automatic mitigation strategies and, to that effect, shows us a live DDoS attack.

Said Andy: “When we engage with customers that are new to DDoS attacks, we often see sort of a tunnel vision mentality where they think of DDoS preparedness and DDoS attack mitigation as one and the same. The reality is that organizational DDoS preparedness is actually much broader. We break it down into a few high-level areas. What I’m going to show is two hypothetical scenarios that are based on things we’ve seen across various customer tests. First, let’s take a look at a hypothetical incident response that uses a manual mitigation approach. I want to preface this by saying that naturally, procedures can vary from company to company, but this is the common design that we see, and as we step through the process I want you all to take note of the time estimates as we go through that.”

“An important thing to remember when we’re talking about downtime and impact of DDoS is the impact of DDoS more often than not goes beyond the duration of the attack. The duration of the attack, at the very minimum, before mitigation is the impact duration, but in most cases it’ll go far beyond. Attacks that last tens of seconds, they will create downtime, so if a user goes into a website and hits refresh, and it works, then it’s fine, but if it doesn’t happen for 10 or 20 or 30 seconds, now there’s UX impact and we’re actually faced with downtime,” Ofer, who’s been responsible for the Imperva DDoS solutions suite for the past several years, adds.

 

If you’d like to see the entire talk and get some valuable insights from Andy and Ofer, head on over to BrightTALK and check out the full webinar.

The Catch 22 of Base64: Attacker Dilemma from a Defender Point of View

Web application threats come in different shapes and sizes. These threats mostly stem from web application vulnerabilities, which are published daily by vendors themselves or by third-party researchers and quickly exploited by vigilant attackers.

To cover their tracks and increase their attack success rate, hackers often obfuscate attacks using different techniques. Obfuscation of web application attacks can be extremely complicated, involving custom-made encoding schemes built by the attacker to suit a specific need. Alternatively, as described in recent spam campaign research we conducted, it can be as simple as importing common encoding schemes and re-encoding the attack payloads multiple times.

In this blog post, we’ll dive deep into one of the simplest obfuscation techniques commonly used by web application attackers – Base64 – and uncover some of the traits that make it so unique and interesting from the defender’s perspective.

What is Base64?

Base64 is an encoding mechanism used to represent and stream binary data over mediums limited to printable characters only. The name Base64 comes from the fact that each output character represents six bits, so there are exactly 64 characters that can appear in the output: lowercase and uppercase letters, digits, and the “+” and “/” signs.

Originally, Base64 encoding was used to safely transfer email messages, including binary attachments, over text-only protocols. Today, Base64 encoding is widely used to transfer any type of binary data across the web as a means of ensuring that the data arrives at the recipient intact.

In short, Base64 takes three 8-bit ASCII characters as input, 24 bits in total. It then splits these 24 bits into four parts of six bits each and translates each 6-bit value into a character using the Base64 encoding table. If the input is not a multiple of three characters, the encoder pads the Base64 output with the “=” sign.
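As a quick illustration of this three-bytes-in, four-characters-out behavior, here is a minimal sketch using Python’s standard base64 module (our example, not code from the research):

import base64

# Three 8-bit characters (24 bits) become exactly four Base64 characters.
print(base64.b64encode(b"Man"))   # b'TWFu' - no padding needed

# One or two leftover characters produce an incomplete 24-bit group,
# so the output is padded with "=" signs.
print(base64.b64encode(b"Ma"))    # b'TWE='
print(base64.b64encode(b"M"))     # b'TQ=='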

Since Base64 is commonly used to encode and transfer data over the web, security controls often decode the traffic as a preprocessing step just before analyzing it. Unfortunately, this encoding technique is often abused and used to carry obfuscated malicious payloads disguised as legitimate Base64-encoded content.

Attacks Encoded in Base64 – The Tells

While Base64 encoding is very useful for transferring binary data over the web, there is no practical need to encode the same text multiple times. Even so, it is common practice among attackers to obfuscate their attacks using multiple encodings of the same text, to the extent of encoding an attack a few dozen times to evade detection.

Thanks to some interesting characteristics of Base64, however, encoding the attack payload multiple times in Base64 actually makes things worse for the attacker and easier for the defender.

Here’s why:

1. Inflated Output Size

Every three 8-bit characters encoded in Base64 are transformed into four 6-bit characters, which is why each round of Base64 encoding inflates the output. More precisely, the output grows exponentially in the number of encodings, multiplying by roughly 4/3 (about 1.33) with each round (see Figure 1).

Figure 1: Encoding the letter “a” multiple times using Base64. Output size is measured in characters.
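This growth is easy to reproduce with a short sketch (standard library only, not the researchers’ code):

import base64

data = b"a"
for i in range(1, 11):
    data = base64.b64encode(data)   # re-encode the previous round's output
    print(i, len(data))             # the length grows by roughly 4/3 per round

# Each round turns every 3 bytes into 4 characters (plus padding), so after
# n encodings the payload is on the order of (4/3)**n times its original size.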

2. Fixed Prefix

A unique attribute of Base64 encoding is that any text, once encoded enough times, ends up with the same prefix, which always begins with “Vm0wd”. This fixed prefix appears whenever multiple Base64 encodings are applied, and it grows longer as more encodings are done (Figures 2 and 3).

For more details on the fixed prefix, why it always appears regardless of the input, and the rate at which it grows, see the detailed Technical Appendix below.

Figure 2: The minimum size of the fixed prefix compared to the number of encodings done

Figure 3: Encoding of the letter “a” multiple times in Base64. The fixed prefix is marked in red.
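The effect is easy to reproduce; the following sketch (ours, standard library only) re-encodes an arbitrary string and prints the start of each round’s output:

import base64

data = b"any payload at all"
for i in range(1, 13):
    data = base64.b64encode(data)
    print(i, data[:12])   # within a dozen rounds the output begins with b'Vm0wd'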

Attacker Lose-Lose Situation

Attackers trying to obfuscate their attacks using multiple Base64 encodings face a problem.

Either they encode their attack payload a small number of times, making it feasible for the defender to decode and identify it, or they encode the input many times, generating a very large payload that is impractical to decode fully but that carries a longer, fixed Base64 prefix: a stronger fingerprint for the defender to detect.

The net result:

Multiple Base64 encoding = Longer fixed prefix = Stronger attack detection fingerprint

Possible Mitigation

There are three primary strategies to consider for mitigation of attacks encoded in Base64:

Multiple decoding

Attacks encoded multiple times in Base64 may be mitigated by decoding the input several times until the real payload is revealed. This method might seem to work, but it opens the door to another vulnerability: denial of service (DoS).

Decoding a very long text multiple times may take a lot of time. While attackers need to create the long encoded attack only once, the defender must decode it on every incoming request in order to identify and mitigate the attack in full.

Thus, decoding the input several times opens the door for attackers to launch DoS attacks by sending several long encoded texts. Additionally, even if the defender decodes the input many times, say ten, the attacker can just encode the attacks once more and evade detection.

So, decoding the input multiple times is neither sufficient nor efficient when attacks are encoded many times. In the case of Base64, however, the special characteristics of the encoding scheme offer other ways to mitigate multiple encodings.
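For illustration, here is a hedged sketch of what bounded decoding might look like; the limits and function names are assumptions of ours, not any vendor’s implementation, and the approach remains easy for an attacker to outrun by adding one more encoding layer:

import base64
import binascii

MAX_ROUNDS = 5          # assumed cap on decode iterations
MAX_LENGTH = 64 * 1024  # assumed cap on payload size, in bytes

def bounded_decode(payload: bytes) -> bytes:
    """Decode Base64 at most MAX_ROUNDS times, refusing oversized input."""
    for _ in range(MAX_ROUNDS):
        if len(payload) > MAX_LENGTH:
            raise ValueError("payload too large - possible decoding DoS")
        try:
            decoded = base64.b64decode(payload, validate=True)
        except (binascii.Error, ValueError):
            return payload          # no longer valid Base64, stop here
        if not decoded or decoded == payload:
            return payload
        payload = decoded
    return payload                  # give up after MAX_ROUNDS and inspect as-is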

Suspicious Content Detection

As described above, more rounds of Base64 encoding mean a longer fixed prefix, which means a stronger attack detection fingerprint. Accordingly, defenders can easily detect and mitigate attacks heavily obfuscated by multiple Base64 encodings.
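As a rough illustration of the idea (a sketch of ours, not Imperva’s actual detection logic), a defender could simply flag values that begin with the fixed prefix:

import base64

FIXED_PREFIX = b"Vm0wd"   # prefix shared by heavily re-encoded Base64 payloads

def looks_multi_encoded(value: bytes) -> bool:
    """Flag parameter or header values carrying the multi-encoding fingerprint."""
    return value.lstrip().startswith(FIXED_PREFIX)

payload = b"<script>alert(1)</script>"
for _ in range(12):
    payload = base64.b64encode(payload)
print(looks_multi_encoded(payload))   # True: the fingerprint gives the attack away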

A web application firewall (WAF) can offer protection based on this detection. Imperva’s cloud and on-prem WAF customers are protected out of the box from these attacks by utilizing the fixed-prefix fingerprint phenomenon, and based on the assumption that legitimate users have no practical need to encode the same text multiple times.

Abnormal Requests Detection

As discussed earlier, more rounds of Base64 encoding mean a larger payload. Consequently, defenders can determine the expected size of a legitimate incoming payload, parameter or header value, and block inflated payloads that exceed those predefined limits.
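A minimal sketch of such a size check follows; the field names and limits are assumptions for illustration, and in practice the thresholds would be learned from legitimate traffic:

# Assumed per-field limits; real values would come from profiling legitimate traffic.
MAX_FIELD_LENGTHS = {"username": 64, "search": 256, "session_id": 128}

def is_abnormal(field_name: str, value: str) -> bool:
    """Flag fields whose values are far larger than anything legitimate users send."""
    limit = MAX_FIELD_LENGTHS.get(field_name, 1024)   # fallback for unknown fields
    return len(value) > limit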

Imperva’s cloud and on-prem WAF customers are protected out of the box here as well, by combining web application profiling, which learns the application’s incoming traffic over time and identifies abnormalities when they occur, with HTTP hardening policies that block illegal protocol behavior such as abnormally long requests.

Conclusion

Base64 is a popular encoding used to transfer data over the web. It is also often used by attackers to obfuscate their attacks by encoding the same payload several times. Due to some of the characteristics of Base64 encoding, it is possible to detect and mitigate attacks that are obfuscated with several Base64 encodings. To read more about these characteristics see the technical appendix. You can also read more about mitigation techniques using a web application firewall.


Technical Appendix

How Base64 Works

The basic idea behind the Base64 encoding technique is to take three characters, each represented in 8-bits, and turn them into four characters, each represented in 6-bits.

In more detail: assume we get three ASCII characters. Each character is mapped to an 8-bit number between 0 and 255 based on the ASCII table (see Figure 4). We take the 8-bit representations of the three characters and join them together to get 24 bits. Next, we split these 24 bits into four parts of six bits each and translate each part using the Base64 table (Figure 5). Each 6-bit value has 64 possible characters (hence the name Base64); the available characters are digits, lowercase and uppercase letters, and the symbols ‘+’ and ‘/’.

Overall, Base64 encoding splits the input text into groups of three characters and encodes each group as described above. At the end of the process, we may be missing one or two characters to complete the last trio. To solve this, the encoder pads the group with zero-valued bytes, and the output characters that correspond purely to that padding are written as ‘=’. That is why we sometimes see Base64-encoded text ending with one or two ‘=’ characters.
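To make the bit manipulation concrete, here is a small self-contained Python sketch of the process described above (for illustration only; production code would simply use the standard base64 module):

B64_ALPHABET = (
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "abcdefghijklmnopqrstuvwxyz"
    "0123456789+/"
)

def b64_encode(data: bytes) -> str:
    out = []
    # Process the input in groups of three 8-bit bytes (24 bits).
    for i in range(0, len(data), 3):
        chunk = data[i:i + 3]
        pad = 3 - len(chunk)                                  # 0, 1 or 2 missing bytes
        bits = int.from_bytes(chunk + b"\x00" * pad, "big")   # pad with zero bytes
        # Split the 24 bits into four 6-bit values and map each to a character.
        chars = [B64_ALPHABET[(bits >> shift) & 0x3F] for shift in (18, 12, 6, 0)]
        # Output characters that correspond purely to padding become "=".
        if pad:
            chars[4 - pad:] = "=" * pad
        out.extend(chars)
    return "".join(out)

print(b64_encode(b"Man"))   # TWFu
print(b64_encode(b"Vm0"))   # Vm0w
print(b64_encode(b"a"))     # YQ==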

Figure 4: ASCII table

Figure 5: Base64 Encoding table

Here is an example of how Base64 works on a simple three-character word (Figure 6):

Figure 6: Example of Base64 encoding

The fixed prefix

No matter what string is encoded, after encoding it with Base64 multiple times we always end up with the same fixed prefix, which starts with “Vm0wd”. The reason for this phenomenon lies in the way the encoding works and, surprisingly, in how the letter ‘V’ behaves under it.

First, let’s try to encode the letter ‘V’ using Base64. In ASCII, the letter ‘V’ is 86, which in 8-bit representation is 01010110. Since we are interested only in the prefix, we ignore the padding and take just the first six bits, 010101. In Base64, index 21 is, surprisingly, also ‘V’. This means that every time we encode anything that starts with the letter ‘V’, we end up with an encoded string that also starts with ‘V’ (!). This is a never-ending loop.

 

Letter   ASCII (8 bits)   Base64 (6 bits)
V        01010110         010101

 

After checking the rest of the characters, ‘V’ is the only one that has this special attribute. So, ‘V’ is the only character that we can put at the beginning of the string we want to encode and end up with the same character at the beginning of the encoded string.
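This uniqueness is easy to verify with a few lines of Python (a quick check of ours, not part of the original research):

import base64

# Find every byte whose Base64 encoding starts with that same character.
fixed_points = [
    chr(b) for b in range(256)
    if base64.b64encode(bytes([b]))[:1] == bytes([b])
]
print(fixed_points)   # ['V'] - 'V' is the only character with this property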

The next question is: if we encode some random string using Base64, will we always end up with an encoded string that starts with ‘V’ after a few encodings? The answer is yes.

Below is a graph showing, bottom up, the Base64 re-encoding outcome for each ASCII-readable character and digit. Each color represents the encoding distance to ‘V’: blue, four encoding iterations; green, three encoding iterations; yellow, two encoding iterations; orange, one encoding iteration. For instance, it takes four encoding iterations to get to ‘V’ from ‘k’ (k->a->Y->W->V) and two iterations from ‘P’ (P->U->V). Overall, the minimal number of iterations needed to reach ‘V’ is, of course, 0 (‘V’->’V’), while the maximum is 5 (for instance, starting with ASCII character 128: ->w->d->Z->W->V).
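The same distances can be computed directly; here is a short sketch (ours, not the researchers’ code) that counts how many encodings are needed before the output begins with ‘V’:

import base64

def distance_to_v(ch: str) -> int:
    """Number of Base64 encodings until the output starts with 'V'."""
    data = ch.encode()
    count = 0
    while not data.startswith(b"V"):
        data = base64.b64encode(data)
        count += 1
    return count

print(distance_to_v("k"))   # 4  (k -> a -> Y -> W -> V)
print(distance_to_v("P"))   # 2  (P -> U -> V)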

After the ‘V’ in the prefix is set, more encodings will result in longer fixed prefixes. We tested all the available characters and saw that it takes at most two more encodings to get the next prefix character “m”, and at most two more encodings to get the next character “0”.

Before moving on to longer prefixes, let’s try to understand why this phenomenon happens. We take the string ‘Vm0’ and encode it using Base64, which yields ‘Vm0w’.

What happened here is that the first six bits of the 8-bit representation of ‘V’ are exactly its 6-bit Base64 value. Taking the remaining two bits of ‘V’ and adding the first four bits of the 8-bit representation of ‘m’ gives exactly the 6-bit Base64 value of ‘m’. The same logic holds for ‘0’. Note that we are left with a remaining six bits, which is the 6-bit Base64 value of ‘w’. In other words, what makes the ‘Vm0’ prefix special is that its 8-bit representation lines up with its 6-bit Base64 representation.

Inflation of the prefix

It is noteworthy that after encoding the first three letters of the fixed prefix, there is a leftover of six bits, and these six bits determine the next letter of the prefix. In fact, for every three letters added to the fixed prefix, encoding leaves an extra six bits that determine one more character of the prefix. This means that with each extra encoding the fixed prefix grows by a third of its current length. For example, if there are nine characters in the fixed prefix, then after another encoding there will be twelve.
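This growth can be observed with a short script (illustrative, standard library only) that measures the prefix shared by consecutive encoding rounds:

import base64

def common_prefix_len(a: bytes, b: bytes) -> int:
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

data = b"a"
prev = None
for i in range(1, 21):
    data = base64.b64encode(data)
    if prev is not None and i >= 12:
        # The shared prefix grows by roughly a third of its length per round.
        print(i, common_prefix_len(prev, data))
    prev = data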

Drupalgeddon3: Third Critical Flaw Discovered

For the third time in the last 30 days, Drupal site owners have been forced to patch their installations. As the Drupal team noted a few days ago, new versions of the Drupal CMS were released to patch one more critical RCE vulnerability affecting Drupal 7 and 8 core.

The vulnerability, code-named Drupalgeddon3, exploits improper input validation in the Form API. The flaw resides in the “destination” parameter, which holds another encoded URL as a value. These values were not sanitized, allowing a remote authenticated attacker to execute arbitrary code on the server.

Unlike with the previously disclosed Drupalgeddon2, this time a proof-of-concept (PoC) was published less than 24 hours after the Drupal release. However, since this vulnerability requires the attacker to be authenticated on the attacked host, the volume of attacks is significantly lower. According to a new advisory released by the Drupal security team, this vulnerability is being exploited in the wild.

Attack Data

So far, all the attacks we have registered involved reconnaissance attempts (e.g., commands like whoami, uname, etc.). We’ll update this post as more information becomes available; watch this space.

Imperva Customers Protected

Imperva SecureSphere and Incapsula WAF customers were protected from this attack due to our zero-day and RCE detection rules. We also published a new dedicated security rule to provide maximum protection against possible mutations of this attack.

Keeping Your WAF Relevant: Emergency Feed Pushes New Mitigations in Just Hours

We previously reported that the overall number of new web application vulnerabilities in 2017 more than doubled, from 6,615 in 2016 to a whopping 14,082. This spike was due, in part, to high-profile vulnerabilities like Heartbleed, Shellshock, POODLE, Apache Struts 2 and, more recently, Meltdown and Spectre.

There is, however, good news in the form of a new tool tasked with pushing mitigations for high-profile vulnerabilities like these to the SecureSphere Web Application Firewall (WAF) within a matter of hours.

Ongoing Vulnerability Protection

Tasking your security team with analyzing each and every vulnerability, deciding its relevance and applying the necessary mitigations is near impossible, which is why virtual patching of your WAF is so important. Not updating your WAF regularly is like wearing your old 80s jeans and thinking you’re still cool…you’re not. Imperva regularly releases mitigations for new vulnerabilities.

In today’s tech landscape, where constantly up-leveled cyberattacks are one of the most prominent threats to corporate assets, timing is everything.

Once a vulnerability is published, it’s only a matter of time until attackers exploit it. It only takes a few hours for high-quality exploit code snippets to be published, and by then every script kiddie has had the opportunity to run them against whomever they choose. In the case of a 2017 Apache Struts vulnerability, for example, an official exploit was made public one day after the vulnerability was announced. Clearly, updating mitigations only once every few weeks is not enough.

The Answer: An Emergency Feed

Imperva has incorporated an emergency feed into our ThreatRadar subscription service as an extension of our WAF, which allows Imperva security researchers to push mitigations for high-profile vulnerabilities to the WAF in a matter of hours. Our goal is to push mitigations via the emergency feed no more than 24 hours from the time of the vulnerability’s publication, so whether a new vulnerability hits the landscape in the middle of the night or your entire security team is on vacation, your WAF estate is protected.

So, how do we do it?

To qualify for mitigation through the emergency feed, a vulnerability must be remotely exploitable, work without authentication and have the potential to be highly impactful. In these cases, Imperva researchers analyze the vulnerability, understand its scope, and create the appropriate mitigation. The mitigation is then run across a wide set of Incapsula and SecureSphere customers, on real-world data, to observe its false positive rate and search for variations of the vulnerability. Only when our researchers are convinced that the new mitigation is stable and reliable do they push it into the emergency feed.

Simply put, in just a few hours, all of Imperva’s customers on Incapsula and SecureSphere WAFs are fully protected. The best part? There’s no action required by your in-house security team. As soon as they’re back in the office they have access to a report summarizing the nature of the vulnerability and the mitigation applied.

Included with ThreatRadar Subscription

If you’re a SecureSphere customer with a ThreatRadar subscription, the emergency feed is included and takes only a few clicks to enable. Incapsula customers receive this service out of the box – no registration required.

For SecureSphere customers with ThreatRadar subscription:

  1. Check the Emergency Feed box on the customer portal to register.

  2. In the Imperva SecureSphere WAF dashboard, enable the Emergency Feed services under the ThreatRadar tab.

That’s it. The emergency feed is enabled and will begin receiving new mitigations immediately. With each content update, our researchers will remove the most recent mitigations from the emergency feed and permanently add them to your SecureSphere WAF, so your system is updated. You will be notified of updates via email.

Sonification of DDoS Attacks: Netflow Melodies and a Tomato Panic Button

A focus on innovation and creativity is ever-present in our work. One of the more prominent examples of that is our annual hackathon, which gives us a chance to fuel up on pizza and flex our coding muscles in a 24-hour programming marathon. Up until this year, these hackathons were limited to a business track competition, the purpose of which was to develop blueprints for new features and products. This last time, however, we introduced a “madness” track: a free-for-all in which anything goes and the only limit is how far we can stretch our imaginations.

As security researchers who enjoy the view outside of the box, we were inspired by a TED Talk called “Can We Create New Senses for Humans?” In it, presenter Dr. David Eagleman discusses the ways in which technology can change sensory perceptions and evolve the way we experience our reality. Applying this concept to our craft, we asked ourselves: what if we could listen in on network traffic instead of just looking at it on graphs? This was the seed of an exciting idea that got us looking into how data is converted into sound, a process called sonification.

Sonification 101

Sonification has been around since the turn of the 20th century and the creation of the Geiger counter. It’s been applied in many different ways, including medical devices that use sound to monitor health, auditory displays in airplanes and sonar to help submarines navigate under water.

Auditory perception, we learned, has a lot of advantages over sight, especially in terms of processing spatial, temporal and volumetric information. The ability to register the most delicate differences in frequency and amplitude opens up a Pandora’s box worth of possibilities in data perception.

In his TED Talk, Eagleman describes how “our visual systems are good at detecting blobs and edges. But they are really bad at… screens with lots and lots of data. We have to crawl that with our attentional systems.” As we dove deeper into our research, we found that his theories are backed up by additional academic evidence that highlights how sonification “is an effective method of presenting information for monitoring as a secondary task.” Furthermore, experiments show how “participants performed significantly better in [a] secondary monitoring task using the continuous sonification.”

Note: All of the scripts used for this project are available as open-source code in this GitHub repository, to be used under the MIT license.

This was the value proposition we were looking for. Now we just had to figure out how to make the internet sing.

The Sounds of Data

The best and most obvious way to execute our project was to create sounds out of NetFlow logs. To do so, we developed a Python 3 script that collected NetFlow data, which was then processed into OSC (Open Sound Control) messages. To convert the messages into sound, we used Sonic Pi, a Ruby-based algorithmic synthesizer built to engage computing students through music. Purpose-built for live audio synthesis, Sonic Pi comes with a very reliable timing model, making it the perfect tool for our purposes.
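For the curious, here is a heavily simplified sketch of that pipeline (our illustration, not the hackathon code). It assumes the python-osc package, a Sonic Pi instance listening for OSC messages on its default port (4560), and that per-second traffic counters have already been extracted from a NetFlow collector:

import random  # stands in for a real NetFlow collector in this sketch
import time

from pythonosc.udp_client import SimpleUDPClient

# Sonic Pi (version 3 and later) listens for incoming OSC messages on UDP 4560.
client = SimpleUDPClient("127.0.0.1", 4560)

TRAFFIC_TYPES = ["udp_pps", "udp_bw", "icmp_pps", "icmp_bw", "tcp_pps", "tcp_bw"]

while True:
    for traffic_type in TRAFFIC_TYPES:
        # In the real project these values came from parsed NetFlow records.
        value = random.randint(0, 1000)
        # Each traffic type gets its own OSC address; a Sonic Pi live loop can
        # pick the cue up (e.g. sync "/osc*/traffic/udp_pps") and map the value
        # to an instrument, pitch and volume.
        client.send_message(f"/traffic/{traffic_type}", value)
    time.sleep(1)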

In the DIY spirit of the hackathon, we opted to run Sonic Pi on a Raspberry Pi. The result turned out like this:

Turning web traffic into sound

Next, we mapped different traffic types to individual instruments to make the outcome more melodic.

Traffic type from NetFlow   Instrument
udp_pps                     Violas sus
udp_bw                      1st Violins
icmp_pps                    Horn
icmp_bw                     Harp
tcp_pps                     Timpani
tcp_bw                      Xylophone

Traffic types were assigned to different instruments.

We also used shifts in volume to show increases and decreases in traffic levels. This way, for example, an increase in pitch and volume would alert us to significant traffic build-ups, as in the case of a DDoS attack. Finally, just because we could, we decided to transmit the whole thing over an internet radio. As we did, we found out that the sound of traffic was surprisingly pleasant to the ear.

Judge for yourself with the video below.

Tomato Panic Button

Naturally, it wouldn’t be a proper madness track project without us going a bit overboard. Which is why, when it came to creating the response mechanism to a DDoS assault, we immediately focused on the untapped mitigation potential of garden-fresh veggies. The idea was to develop a system that required its operator to squeeze a tomato at the sound of a DDoS attack to activate the mitigation service. And once mitigation was complete, the operator would squeeze a cucumber to signal the all clear. (Care was taken with the tomato, it being the more delicate of the two. No vegetables were harmed in the making of this system.)

For this to work, we connected a tomato and a cucumber to a Wemos D1, a small electronic board with an ESP8266 WiFi microprocessor, and a Makey Makey invention kit. The result was this (ridiculous, but) healthy looking setup:

Vegetable DDoS mitigation: Wemos, Makey Makey, and the Tomato Panic Button

I think we can confidently say this was the first time a tomato has been used in DDoS mitigation. No less important, we’re fairly certain that this was the first time Wemos or similar technologies (e.g., Arduino) have been used to interact with Sonic Pi, which was sort of the whole point.

Not Just Fun and Games

As we were working on this project, we couldn’t help but think that ‘sonifying’ attack alerts using high and low frequencies could play an actual role in the future of security monitoring. While vegetable-based DDoS mitigation probably won’t catch on, the idea of receiving lower-priority information through sound has a lot of validity and will likely attract further research. Sound already plays an important part in other alerting mechanisms, so it’s curious that it isn’t as commonplace in security monitoring. As the SIEM industry is looking for new ways to tackle the growing issue of information overload, expanding the sensory array could be an idea worth exploring.

Science of CyberSecurity: Reasons Behind Most Security Breaches

As part of a profile interview for Science of Cybersecurity I was asked five questions on cyber security last week, here's question 2 of 5.

Q. What – in your estimation – are the reasons behind the many computer security breaches/failures that we see today?
Simply put, insecure IT systems and people are behind every breach. Insecure IT systems are arguably caused by people as well: whether it is poor system management, a lack of security design, insecure coding techniques or inadequate support, it all boils down to someone not doing security right. For many years seasoned security experts have advocated that people are the weakest link in security; even hackers say ‘amateurs hack systems, professionals hack people’. Yet many organisations still focus most of their resources and funds on securing IT systems rather than on providing staff with sustained security awareness. Maybe this is a result of an IT security sales industry over-hyping the effectiveness of technical security solutions. I think most organisations can do more to address this balance, starting with better understanding the awareness level of, and the risk posed by, their employees. For instance, the security awareness of staff can be measured by running a fake phishing campaign to see how many staff click on a link within a suspicious email, while analysing the root causes of past cyber security incidents is another valuable barometer of the risk staff pose. Both can be used as inputs into the cyber risk assessment process.

A developer’s guide to complying with PCI DSS 3.2 Requirement 6 Article

My updated article on "A developer's guide to complying with PCI DSS 3.2 Requirement 6" was released on the IBM Developer Works website today.

This article provides guidance on PCI DSS requirement 6, which breaks down into 28 further individual requirements and sits squarely with software developers who are involved in the development of applications that process, store, and transmit cardholder data.